OLMo 3 32B — 32B Parameter Dense LLM
Model Specifications
- Parameters: 32B
- Architecture: Dense Transformer
- Context Length: 63K tokens
- Capabilities: chat, reasoning, coding, math
- Release Date: 2025-12-01
- Family: olmo
VRAM Requirements
| Quantization | BPW | VRAM | Quality |
|---|---|---|---|
| IQ3_XXS | 3.25 | 13.5 GB | 82% |
| IQ3_XS | 3.5 | 14.5 GB | 84% |
| Q3_K_S | 3.64 | 15.0 GB | 85% |
| IQ3_M | 3.76 | 15.5 GB | 86% |
| Q3_K_M | 4 | 16.5 GB | 88% |
| Q3_K_L | 4.3 | 17.7 GB | 90% |
| IQ4_XS | 4.46 | 18.3 GB | 92% |
| Q4_K_S | 4.67 | 19.2 GB | 93% |
| Q4_K_M | 4.89 | 20.0 GB | 94% |
| Q5_K_S | 5.57 | 22.8 GB | 96% |
| Q5_K_M | 5.7 | 23.3 GB | 96% |
| Q6_K | 6.56 | 26.7 GB | 97% |
| Q8_0 | 8.5 | 34.5 GB | 100% |
| FP16 | 16 | 64.5 GB | 100% |
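The VRAM figures above closely track a simple rule of thumb: weight memory is parameter count times bits-per-weight, plus a small flat allowance for runtime buffers. The sketch below is an approximation inferred from the table, not a published formula; the 0.5 GB overhead constant is an assumption:

```python
def estimate_vram_gb(params_b: float, bpw: float, overhead_gb: float = 0.5) -> float:
    """Estimate VRAM for a dense model: weights (billions of params * bits
    per weight, converted to GB) plus an assumed flat runtime overhead."""
    return params_b * bpw / 8 + overhead_gb

# Reproduce a few rows of the table above for the 32B model:
for quant, bpw in [("Q4_K_M", 4.89), ("Q6_K", 6.56), ("Q8_0", 8.5), ("FP16", 16)]:
    print(f"{quant}: {estimate_vram_gb(32, bpw):.1f} GB")
```

This reproduces the table to within about 0.1 GB per row; actual usage also grows with context length, since the KV cache scales with the number of tokens held in memory.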
Benchmark Scores
| Benchmark | Score |
|---|---|
| HumanEval | 91.5 |
| MATH | 96.2 |
| IFEval | 93.8 |
How to Run OLMo 3 32B
Run OLMo 3 32B locally with Ollama (needs 20.0 GB VRAM at Q4_K_M):
```shell
ollama run olmo-3:32b
```

Compatible GPUs (30)
GPUs that can run OLMo 3 32B at Q4_K_M quantization:
| GPU | VRAM | Memory Bandwidth |
|---|---|---|
| NVIDIA RTX 4090 | 24 GB | 1008 GB/s |
| NVIDIA RTX 3090 Ti | 24 GB | 1008 GB/s |
| NVIDIA RTX 3090 | 24 GB | 936 GB/s |
| AMD RX 7900 XTX | 24 GB | 960 GB/s |
| Apple M4 Pro (24GB) | 24 GB | 273 GB/s |
| NVIDIA L4 24GB | 24 GB | 300 GB/s |
| NVIDIA A10 24GB | 24 GB | 600 GB/s |
| Apple M2 (24GB) | 24 GB | 100 GB/s |
| Apple M3 (24GB) | 24 GB | 100 GB/s |
| Apple M4 (24GB) | 24 GB | 120 GB/s |
| NVIDIA Tesla M40 24 GB | 24 GB | 288 GB/s |
| NVIDIA Tesla P10 | 24 GB | 694 GB/s |
| NVIDIA Tesla P40 | 24 GB | 347 GB/s |
| NVIDIA Quadro RTX 6000 | 24 GB | 672 GB/s |
| NVIDIA Quadro RTX 6000 Passive | 24 GB | 624 GB/s |
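Compatibility here reduces to a single check: does the GPU's VRAM cover the 20.0 GB Q4_K_M footprint from the table above? A minimal sketch of that filter, using a small hypothetical inventory (the list entries are illustrative, not the full 30-GPU set):

```python
# Hypothetical inventory: (name, VRAM in GB)
gpus = [
    ("NVIDIA RTX 4090", 24),
    ("NVIDIA RTX 3090", 24),
    ("NVIDIA RTX 4070 Ti", 12),   # too small for Q4_K_M
    ("Apple M4 Pro (24GB)", 24),
]

REQUIRED_GB = 20.0  # Q4_K_M footprint from the VRAM table

# Keep only GPUs whose VRAM meets the Q4_K_M requirement.
compatible = [name for name, vram in gpus if vram >= REQUIRED_GB]
print(compatible)  # the 12 GB card is filtered out
```

Note that passing this check only means the weights fit; long contexts add KV-cache memory on top, so a 24 GB card running Q4_K_M leaves roughly 4 GB of headroom for context.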