
Meta Llama-4-Maverick-17B-128E

Llama 4 Maverick — Meta's 400B MoE. 128 experts, frontier-class performance.

Capabilities: chat, vision, reasoning · Distilled
Parameters: 400B (17B active)
Context length: 1024K
Benchmarks: 20
Quantizations: 17
HF downloads: 50K
Architecture: MoE
Released: 2025-04-05
Layers: 48
KV Heads: 8
Head Dim: 128
Family: llama
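The attention geometry listed above (48 layers, 8 KV heads, head dim 128) also fixes the KV-cache footprint, which sits on top of the weight VRAM in the tables below. A back-of-envelope sketch, assuming plain FP16 K/V tensors cached for every layer (Llama 4's actual attention variants may cache less):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, n_tokens, bytes_per_elt=2):
    """Upper-bound KV-cache size: K and V tensors, per layer, per token."""
    return 2 * n_layers * n_kv_heads * head_dim * n_tokens * bytes_per_elt

# Per-token cost: 2 * 48 * 8 * 128 * 2 bytes = 192 KiB
per_token = kv_cache_bytes(48, 8, 128, 1)
print(per_token // 1024, "KiB/token")  # 192 KiB/token

# Full 1024K-token context at FP16:
full = kv_cache_bytes(48, 8, 128, 1024 * 1024)
print(full / 2**30, "GiB")  # 192.0 GiB
```

So even with the weights quantized, a fully used 1M context adds on the order of 200 GB of cache at FP16; serving stacks mitigate this with shorter contexts or quantized KV.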

Quantization Options

Quant     Bits   VRAM       Quality
IQ2_XXS   2.38   119.5 GB   low
IQ2_M     2.93   147.0 GB   low
Q2_K      3.16   158.5 GB   low
IQ3_XXS   3.25   163.0 GB   low
IQ3_XS    3.5    175.5 GB   low
Q3_K_S    3.64   182.5 GB   low
IQ3_M     3.76   188.5 GB   low
Q3_K_M    4      200.5 GB   low
Q3_K_L    4.3    215.5 GB   moderate
IQ4_XS    4.46   223.5 GB   moderate
Q4_K_S    4.67   234.0 GB   moderate
Q4_K_M    4.89   245.0 GB   good
Q5_K_S    5.57   279.0 GB   good
Q5_K_M    5.7    285.5 GB   good
Q6_K      6.56   328.5 GB   excellent
Q8_0      8.5    425.5 GB   lossless
FP16      16     800.5 GB   lossless
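The VRAM column tracks a simple rule of thumb: total parameters times bits per weight, divided by 8, plus a small fixed overhead (the table's figures run about half a GB higher). A sketch using the 400B count and the bit widths from the table above:

```python
def est_weight_vram_gb(params_billions, bits_per_weight):
    # Weights only: params * bpw bits -> bytes -> GB. Excludes KV cache
    # and runtime overhead, so real usage is somewhat higher.
    return params_billions * bits_per_weight / 8

print(est_weight_vram_gb(400, 4.89))  # 244.5 (table: 245.0 GB for Q4_K_M)
print(est_weight_vram_gb(400, 16))    # 800.0 (table: 800.5 GB for FP16)
```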



Benchmarks (20)

Arena Elo         1292
MATH-500          88.9
IFEval            86.0
HumanEval         85.0
MATH              78.0
MMBench           78.0
MMLU-PRO          69.0
GPQA Diamond      67.1
MMMU              61.0
AA Long Context   46.0
IFBench           43.0
LiveCodeBench     39.7
SciCode           33.1
AIME              19.3
AA Math           19.3
AA Intelligence   18.4
τ²-Bench          17.8
AA Coding         15.6
Terminal-Bench    6.8
HLE               4.8

Run this model

Ollama (easiest way to get started · beginners):

curl -fsSL https://ollama.com/install.sh | sh
ollama run llama:400b-q4_K_M

Tag may need adjustment — check ollama.com/library/llama for available tags.

▸ SETUP GUIDE

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

pip install fitmyllm, then run fitmyllm.
Auto-detect GPU · Live tok/s in chat · Speed benchmarks · 9 inference engines

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.

AMD Radeon Instinct MI325X: 288 GB VRAM, 10300 GB/s, $20000
AMD Radeon Instinct MI350X: 288 GB VRAM, 8190 GB/s, $25000
AMD Radeon Instinct MI355X: 288 GB VRAM, 8190 GB/s, $30000
Apple M4 Ultra (384GB): 384 GB VRAM, 1092 GB/s, $9999
Apple M5 Ultra (384GB): 384 GB VRAM, 1228 GB/s
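Because only ~17B of the 400B parameters are active per token, decode speed is bounded by how fast the hardware can stream the active weights from memory. A rough upper-bound estimate from the bandwidth figures in the list above (real throughput will be lower once routing overhead, KV-cache reads, and imperfect bandwidth utilization are counted):

```python
def peak_decode_tps(bandwidth_gb_s, active_params_billions, bits_per_weight):
    # Bytes streamed per generated token ~= active params * bpw / 8.
    gb_per_token = active_params_billions * bits_per_weight / 8
    return bandwidth_gb_s / gb_per_token

# Q4_K_M (4.89 bpw), 17B active parameters:
print(round(peak_decode_tps(1092, 17, 4.89)))   # ~105 tok/s (M4 Ultra)
print(round(peak_decode_tps(10300, 17, 4.89)))  # ~991 tok/s (MI325X)
```

This is why the MoE design matters for single-user inference: the bandwidth cost per token scales with the 17B active parameters, not the full 400B.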


▸ SPEC SHEET

Llama-4-Maverick-17B-128E: 400B MoE.

▸ SPECIFICATIONS
PARAMETERS: 400B (17B active)
ARCHITECTURE: Mixture of Experts
CONTEXT LENGTH: 1024K tokens
CAPABILITIES: chat, vision, reasoning
RELEASE DATE: 2025-04-05
PROVIDER: Meta
FAMILY: llama
▸ VRAM REQUIREMENTS
QUANT     BPW    VRAM       QUALITY
IQ2_XXS   2.38   119.5 GB   65%
IQ2_M     2.93   147.0 GB   75%
Q2_K      3.16   158.5 GB   78%
IQ3_XXS   3.25   163.0 GB   82%
IQ3_XS    3.5    175.5 GB   84%
Q3_K_S    3.64   182.5 GB   85%
IQ3_M     3.76   188.5 GB   86%
Q3_K_M    4      200.5 GB   88%
Q3_K_L    4.3    215.5 GB   90%
IQ4_XS    4.46   223.5 GB   92%
Q4_K_S    4.67   234.0 GB   93%
Q4_K_M    4.89   245.0 GB   94%
Q5_K_S    5.57   279.0 GB   96%
Q5_K_M    5.7    285.5 GB   96%
Q6_K      6.56   328.5 GB   97%
Q8_0      8.5    425.5 GB   100%
FP16      16     800.5 GB   100%
§ 01 BENCHMARK SCORES
HumanEval         85.0
MMLU-PRO          69.0
MATH              78.0
IFEval            86.0
MMMU              61.0
MMBench           78.0
Arena Elo         1292.0
GPQA Diamond      67.1
LiveCodeBench     39.7
AIME              19.3
MATH-500          88.9
HLE               4.8
AA Intelligence   18.4
AA Coding         15.6
AA Math           19.3
IFBench           43.0
Terminal-Bench    6.8
τ²-Bench          17.8
SciCode           33.1
AA Long Context   46.0