Llama 4 Scout 17B-16E: a 109B-parameter Mixture-of-Experts model (17B active per token).
- PARAMETERS: 109B total (17B active)
- ARCHITECTURE: Mixture of Experts (16 experts)
- CONTEXT LENGTH: 10M tokens
- CAPABILITIES: chat, coding, multilingual, vision
- RELEASE DATE: 2025-04-05
- PROVIDER: Meta
- FAMILY: llama
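As a rough illustration of how MoE parameter counts work, the sketch below shows one way a 109B-total / 17B-active split could arise. The shared and per-expert sizes are hypothetical round numbers chosen to make the arithmetic land on the headline figures, not Meta's published breakdown: all expert weights stay resident, but only the router-selected expert contributes to each token's compute.

```python
# Hypothetical MoE parameter accounting -- illustrative numbers only,
# NOT Meta's published architecture breakdown.
n_experts = 16        # experts per MoE layer
shared_b = 11.0       # assumed shared (non-expert) params, in billions
expert_b = 6.125      # assumed params per expert, in billions
experts_active = 1    # experts routed per token (assumption)

# Total params: shared weights plus every expert (all stay in memory).
total_b = shared_b + n_experts * expert_b
# Active params: shared weights plus only the routed expert(s).
active_b = shared_b + experts_active * expert_b

print(f"total = {total_b:.0f}B, active = {active_b:.1f}B")  # total = 109B, active = 17.1B
```

This is why the model loads like a 109B model (memory) but runs closer to a 17B model (compute per token).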
| QUANT | BPW (bits per weight) | VRAM | QUALITY (relative) |
|---|---|---|---|
| IQ2_XXS | 2.38 | 32.9 GB | 65% |
| IQ2_M | 2.93 | 40.4 GB | 75% |
| Q2_K | 3.16 | 43.5 GB | 78% |
| IQ3_XXS | 3.25 | 44.8 GB | 82% |
| IQ3_XS | 3.5 | 48.2 GB | 84% |
| Q3_K_S | 3.64 | 50.1 GB | 85% |
| IQ3_M | 3.76 | 51.7 GB | 86% |
| Q3_K_M | 4 | 55.0 GB | 88% |
| Q3_K_L | 4.3 | 59.1 GB | 90% |
| IQ4_XS | 4.46 | 61.3 GB | 92% |
| Q4_K_S | 4.67 | 64.1 GB | 93% |
| Q4_K_M | 4.89 | 67.1 GB | 94% |
| Q5_K_S | 5.57 | 76.4 GB | 96% |
| Q5_K_M | 5.7 | 78.2 GB | 96% |
| Q6_K | 6.56 | 89.9 GB | 97% |
| Q8_0 | 8.5 | 116.3 GB | 100% |
| FP16 | 16 | 218.5 GB | 100% |
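The VRAM column in the table above roughly follows total parameters times bits per weight; a minimal sketch of that estimate (weights only, so it ignores KV cache and runtime overhead, which is why the table's figures run slightly higher):

```python
# Rough VRAM estimate for a quantized 109B-parameter model.
# Approximation: weight storage only; KV cache and runtime
# overhead push real usage somewhat higher.
PARAMS = 109e9  # total parameters (MoE: all experts stay resident)

def vram_gb(bpw: float) -> float:
    """Approximate weight memory in GB for a given bits-per-weight."""
    return PARAMS * bpw / 8 / 1e9

for quant, bpw in [("Q4_K_M", 4.89), ("Q8_0", 8.5), ("FP16", 16.0)]:
    print(f"{quant}: ~{vram_gb(bpw):.1f} GB")
```

For example, Q4_K_M gives 109e9 × 4.89 / 8 ≈ 66.6 GB of weights, consistent with the table's 67.1 GB once overhead is included.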
Run Llama 4 Scout 17B-16E locally with Ollama (about 67.1 GB of VRAM at Q4_K_M):
ollama run llama4:scout