Meta Llama 4 Scout 17B-16E

Llama 4 Scout 109B — efficient MoE with 17B active. Multilingual and multimodal.

Tags: chat · coding · multilingual · vision · distilled
Parameters: 109B (17B active)
Context length: 512K
Benchmarks: 21
Quantizations: 17
HF downloads: 200K
Architecture: MoE
Released: 2025-04-05
Layers: 48
KV Heads: 8
Head Dim: 128
Family: llama
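
Those attention numbers (48 layers, 8 KV heads, head dim 128) also pin down the KV-cache cost, which sits on top of the weight VRAM in the tables below. A minimal sketch in Python, assuming plain fp16 KV storage and no cache quantization; real engines add their own overhead:

# KV-cache size from the spec block above: 48 layers, 8 KV heads, head dim 128.
# Assumes fp16 cache entries and no KV quantization (engine defaults vary).
layers, kv_heads, head_dim = 48, 8, 128
bytes_per_elem = 2                      # fp16
ctx = 512 * 1024                        # full 512K context window

# K and V each hold kv_heads * head_dim values per layer per token.
bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
print(f"{bytes_per_token / 1024:.0f} KiB per token")          # -> 384 KiB
print(f"{bytes_per_token * ctx / 1024**3:.0f} GiB at 512K")   # -> 192 GiB

So a full 512K-token context would need roughly 192 GiB of cache on top of the weights; in practice long-context runs rely on shorter windows or KV quantization.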

Quantization Options

Quant     Bits   VRAM       Quality
IQ2_XXS   2.38   32.9 GB    low
IQ2_M     2.93   40.4 GB    low
Q2_K      3.16   43.5 GB    low
IQ3_XXS   3.25   44.8 GB    low
IQ3_XS    3.5    48.2 GB    low
Q3_K_S    3.64   50.1 GB    low
IQ3_M     3.76   51.7 GB    low
Q3_K_M    4      55.0 GB    low
Q3_K_L    4.3    59.1 GB    moderate
IQ4_XS    4.46   61.3 GB    moderate
Q4_K_S    4.67   64.1 GB    moderate
Q4_K_M    4.89   67.1 GB    good
Q5_K_S    5.57   76.4 GB    good
Q5_K_M    5.7    78.2 GB    good
Q6_K      6.56   89.9 GB    excellent
Q8_0      8.5    116.3 GB   lossless
FP16      16     218.5 GB   lossless

Select your GPU above to see speed estimates and compatibility for each quantization.
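
The VRAM column follows a simple rule of thumb: total parameters times bits per weight, plus a small loader overhead. A quick sanity check in Python; note that a MoE must keep all 109B parameters resident even though only 17B are active per token:

# Weights-only VRAM estimate: params * bits-per-weight / 8.
# All 109B MoE parameters must fit in memory, not just the 17B active ones.
TOTAL_PARAMS = 109e9

def weights_gb(bpw: float) -> float:
    return TOTAL_PARAMS * bpw / 8 / 1e9

for quant, bpw in [("Q4_K_M", 4.89), ("Q8_0", 8.5), ("FP16", 16.0)]:
    print(f"{quant:7s} {weights_gb(bpw):6.1f} GB")
# Q4_K_M    66.6 GB   (table: 67.1 GB)
# Q8_0     115.8 GB   (table: 116.3 GB)
# FP16     218.0 GB   (table: 218.5 GB)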

Benchmarks (21)

Arena Elo         1491
MATH-500          84.4
MMLU-PRO          74.3
MMMU              73.4
GPQA Diamond      58.7
GPQA              57.2
IFEval            54.8
BBH               51.4
IFBench           39.5
LiveCodeBench     29.9
AA Long Context   25.8
MATH              21.8
MUSR              20.8
SciCode           17.0
τ²-Bench          15.5
AIME              14.0
AA Math           14.0
AA Intelligence   13.5
AA Coding         6.7
HLE               4.3
Terminal-Bench    1.5

Run this model

Easiest way to get started · Beginners
curl -fsSL https://ollama.com/install.sh | sh
$ ollama run llama4-scout:17b-16e-q4_K_M

Downloads and runs automatically. Add --verbose for speed stats.
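
Once the pull finishes, Ollama also serves a local REST API (port 11434 by default), so the same model can be scripted. A minimal non-streaming sketch in Python using only the standard library; the model tag mirrors the command above and may differ in your local registry:

# Query the local Ollama server started by `ollama run` / `ollama serve`.
import json
import urllib.request

payload = {
    "model": "llama4-scout:17b-16e-q4_K_M",  # tag from the command above
    "prompt": "Summarize Mixture-of-Experts routing in two sentences.",
    "stream": False,                          # one JSON response, not chunks
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])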

▸ SETUP GUIDE

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

pip install fitmyllm, then run fitmyllm.
Auto-detect GPU · Live tok/s in chat · Speed benchmarks · 9 inference engines
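
For a sense of what "detects your GPU" involves: on NVIDIA hardware it is typically a thin wrapper over nvidia-smi. A hypothetical sketch (not fitmyllm's actual implementation) that checks detected VRAM against the 67.1 GB Q4_K_M requirement:

# Hypothetical GPU detection via nvidia-smi (reports memory in MiB).
# Not fitmyllm's real code path; illustrative only.
import subprocess

Q4_K_M_GB = 67.1  # VRAM requirement from the table above

def detect_nvidia_gpus() -> list[tuple[str, float]]:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    gpus = []
    for line in out.strip().splitlines():
        name, mib = line.rsplit(",", 1)
        gpus.append((name.strip(), float(mib) / 1024))   # MiB -> GiB
    return gpus

for name, gib in detect_nvidia_gpus():
    verdict = "fits" if gib >= Q4_K_M_GB else "too small"
    print(f"{name}: {gib:.0f} GiB -> Q4_K_M {verdict}")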

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.

GPU                                    VRAM     Bandwidth   Vendor   Price
NVIDIA RTX PRO 5000 72 GB Blackwell    72 GB    1340 GB/s   NVIDIA   $6999
NVIDIA H100 SXM5 80GB                  80 GB    3350 GB/s   NVIDIA   $25000
NVIDIA H100 PCIe 80GB                  80 GB    2000 GB/s   NVIDIA   $25000
NVIDIA A100 SXM 80GB                   80 GB    2039 GB/s   NVIDIA   $10000
NVIDIA A100 PCIe 80GB                  80 GB    1935 GB/s   NVIDIA   $10000
NVIDIA A100 SXM4 80 GB                 80 GB    2040 GB/s   NVIDIA   $15000
NVIDIA A100 PCIe 80 GB                 80 GB    1940 GB/s   NVIDIA   $10000
NVIDIA A100X                           80 GB    2040 GB/s   NVIDIA   n/a
NVIDIA H100 PCIe 80 GB                 80 GB    2040 GB/s   NVIDIA   $25000
NVIDIA H100 SXM5 80 GB                 80 GB    3360 GB/s   NVIDIA   $25000
NVIDIA H100 CNX                        80 GB    2040 GB/s   NVIDIA   $25000
NVIDIA A800 PCIe 80 GB                 80 GB    1940 GB/s   NVIDIA   n/a
NVIDIA A800 SXM4 80 GB                 80 GB    2040 GB/s   NVIDIA   n/a
NVIDIA H800 PCIe 80 GB                 80 GB    2040 GB/s   NVIDIA   n/a
NVIDIA H800 SXM5                       80 GB    3360 GB/s   NVIDIA   n/a
NVIDIA RTX 6000D                       84 GB    1570 GB/s   NVIDIA   $7500
NVIDIA B200                            90 GB    4100 GB/s   NVIDIA   $30000
NVIDIA H100 NVL 94 GB                  94 GB    3940 GB/s   NVIDIA   $30000
NVIDIA H100 SXM5 94 GB                 94 GB    3360 GB/s   NVIDIA   $25000
RTX Pro 6000                           96 GB    1792 GB/s   NVIDIA   $8565
NVIDIA H100 PCIe 96 GB                 96 GB    3360 GB/s   NVIDIA   $25000
NVIDIA H100 SXM5 96 GB                 96 GB    3360 GB/s   NVIDIA   $25000
Intel Data Center GPU Max 1350         96 GB    2460 GB/s   INTEL    n/a
NVIDIA RTX PRO 6000 Blackwell Server   96 GB    1790 GB/s   NVIDIA   $9999
NVIDIA RTX PRO 6000 Blackwell          96 GB    1790 GB/s   NVIDIA   $9999
AMD Instinct MI300A                    120 GB   5300 GB/s   AMD      $12000
Apple M4 Max (128GB)                   128 GB   546 GB/s    APPLE    $3999
AMD Instinct MI250X                    128 GB   3277 GB/s   AMD      $10000
Apple M1 Ultra (128GB)                 128 GB   800 GB/s    APPLE    $4999
Apple M2 Ultra (128GB)                 128 GB   800 GB/s    APPLE    $3999

Find the best GPU for Llama 4 Scout 17B-16E

Build Hardware for Llama 4 Scout 17B-16E


▸ SPEC SHEET

Llama 4 Scout 17B-16E: 109B MoE.

▸ SPECIFICATIONS
PARAMETERS: 109B (17B active)
ARCHITECTURE: Mixture of Experts
CONTEXT LENGTH: 512K tokens
CAPABILITIES: chat, coding, multilingual, vision
RELEASE DATE: 2025-04-05
PROVIDER: Meta
FAMILY: llama
▸ VRAM REQUIREMENTS
QUANT     BPW    VRAM       QUALITY
IQ2_XXS   2.38   32.9 GB    65%
IQ2_M     2.93   40.4 GB    75%
Q2_K      3.16   43.5 GB    78%
IQ3_XXS   3.25   44.8 GB    82%
IQ3_XS    3.5    48.2 GB    84%
Q3_K_S    3.64   50.1 GB    85%
IQ3_M     3.76   51.7 GB    86%
Q3_K_M    4      55.0 GB    88%
Q3_K_L    4.3    59.1 GB    90%
IQ4_XS    4.46   61.3 GB    92%
Q4_K_S    4.67   64.1 GB    93%
Q4_K_M    4.89   67.1 GB    94%
Q5_K_S    5.57   76.4 GB    96%
Q5_K_M    5.7    78.2 GB    96%
Q6_K      6.56   89.9 GB    97%
Q8_0      8.5    116.3 GB   100%
FP16      16     218.5 GB   100%
§ 01 BENCHMARK SCORES

MMLU-PRO          74.3
MATH              21.8
IFEval            54.8
BBH               51.4
MMMU              73.4
GPQA              57.2
MUSR              20.8
Arena Elo         1491.0
GPQA Diamond      58.7
LiveCodeBench     29.9
AIME              14.0
MATH-500          84.4
HLE               4.3
AA Intelligence   13.5
AA Coding         6.7
AA Math           14.0
IFBench           39.5
Terminal-Bench    1.5
τ²-Bench          15.5
SciCode           17.0
AA Long Context   25.8
§ 02 RUN COMMAND

Run Llama 4 Scout 17B-16E locally with Ollama — needs 67.1 GB VRAM at Q4_K_M:

$ ollama run llama4-scout:17b-16e