
Allen AI OLMo 3 7B

OLMo 3 7B — AI2's fully open model with open training data and code.

Capabilities: chat, coding, reasoning, math, thinking, tool use
Parameters: 7B
Context length: 32K
Benchmarks: 20
Quantizations: 6
HF downloads: 60K
Architecture: Dense
Released: 2025-03-01
Layers: 32
KV heads: 32
Head dim: 128
Family: olmo

Quantization Options

Quant     Bits/weight   VRAM      Quality
Q4_K_M    4.89          4.8 GB    good (94%)
Q5_K_S    5.57          5.4 GB    good (96%)
Q5_K_M    5.7           5.5 GB    good (96%)
Q6_K      6.56          6.2 GB    excellent (97%)
Q8_0      8.5           7.9 GB    lossless (100%)
FP16      16            14.5 GB   lossless (100%)
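The VRAM column tracks model weights (parameter count times bits per weight) plus a little runtime headroom. A rough back-of-the-envelope sketch; the flat 0.6 GB overhead for KV cache and activations is an assumption for illustration, not a figure from the table:

```python
def estimate_vram_gb(params_b: float, bpw: float, overhead_gb: float = 0.6) -> float:
    """Rough VRAM estimate in GB: weights = params * bits-per-weight / 8,
    plus a flat overhead (assumed) for KV cache and activations."""
    weights_gb = params_b * 1e9 * bpw / 8 / 1e9
    return round(weights_gb + overhead_gb, 1)

# 7B model at Q4_K_M (4.89 bpw): weights alone are ~4.3 GB
print(estimate_vram_gb(7, 4.89))   # 4.9, vs 4.8 GB in the table
print(estimate_vram_gb(7, 16))     # 14.6, vs 14.5 GB for FP16
```

Longer contexts grow the KV cache well beyond this flat overhead, so treat these numbers as minimums.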




Benchmarks (20)

Benchmark         Score
Arena Elo         1030
MATH              87.3
IFEval            85.6
HumanEval         77.2
BBH               71.2
AIME              70.7
AA Math           70.7
LiveCodeBench     61.7
MBPP              60.2
GPQA Diamond      51.6
GPQA              48.6
MATH-500          41.3
IFBench           32.8
MMLU-PRO          18.6
τ²-Bench          12.6
SciCode           10.3
AA Intelligence   9.4
AA Coding         7.6
HLE               5.7
MUSR              4.7

Run this model

Easiest way to get started · Beginners

$ curl -fsSL https://ollama.com/install.sh | sh
$ ollama run olmo3:7b-q4_K_M

Downloads and runs automatically. Add --verbose for speed stats.
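Once the model is pulled, Ollama also serves a local REST API (default port 11434), so you can call the model from scripts instead of the interactive prompt. A minimal sketch using only the standard library; the model tag matches the command above:

```python
import json
import urllib.request

def build_generate_request(prompt: str, model: str = "olmo3:7b-q4_K_M"):
    """Build a POST request for Ollama's local /api/generate endpoint.
    stream=False asks for a single JSON object instead of a stream."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("Why is the sky blue?")

# Uncomment once the Ollama server is running with the model pulled:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```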

▸ SETUP GUIDE

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

pip install fitmyllm, then run fitmyllm.
Auto-detect GPU · Live tok/s in chat · Speed benchmarks · 9 inference engines

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.

GPU                      VRAM    Bandwidth   Price
NVIDIA Tesla K20c        5 GB    208 GB/s
NVIDIA Tesla K20m        5 GB    208 GB/s
NVIDIA Tesla K20s        5 GB    208 GB/s
NVIDIA RTX 3050 6GB      6 GB    168 GB/s    $169
Intel Arc A380           6 GB    186 GB/s    $129
NVIDIA RTX 2060 6GB      6 GB    336 GB/s    $150
NVIDIA GTX 1660 Ti       6 GB    288 GB/s    $140
NVIDIA Tesla C2070       6 GB    143 GB/s
NVIDIA Tesla C2075       6 GB    150 GB/s
NVIDIA Tesla C2090       6 GB    177 GB/s
NVIDIA Tesla M2070       6 GB    150 GB/s
NVIDIA Tesla M2070-Q     6 GB    150 GB/s
NVIDIA Tesla M2075       6 GB    150 GB/s
NVIDIA Tesla M2090       6 GB    177 GB/s
NVIDIA Tesla X2070       6 GB    177 GB/s
NVIDIA Tesla X2090       6 GB    177 GB/s
NVIDIA Tesla K20X        6 GB    250 GB/s
NVIDIA Tesla K20Xm       6 GB    250 GB/s
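The list above shows cards against the Q4_K_M requirement. Given a card's usable VRAM, picking the largest quantization from the table that fits can be sketched as follows (the VRAM figures come from the quantization table; real cards lose some VRAM to the OS and display, which is why tight fits like these 6 GB cards are listed at Q4_K_M):

```python
# (quant, VRAM needed in GB) from the quantization table, smallest first
QUANTS = [("Q4_K_M", 4.8), ("Q5_K_S", 5.4), ("Q5_K_M", 5.5),
          ("Q6_K", 6.2), ("Q8_0", 7.9), ("FP16", 14.5)]

def best_quant(vram_gb: float):
    """Largest quantization whose VRAM requirement fits the card,
    or None if even Q4_K_M does not fit."""
    fitting = [q for q, need in QUANTS if need <= vram_gb]
    return fitting[-1] if fitting else None

print(best_quant(6))   # Q5_K_M (assuming all 6 GB is free)
print(best_quant(5))   # Q4_K_M
```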
