▸ SPEC SHEET
GPT-OSS 120B is OpenAI's open-weight 117B-parameter Mixture-of-Experts model.
▸ SPECIFICATIONS
- PARAMETERS
- 117B (5.1B active)
- ARCHITECTURE
- Mixture of Experts
- CONTEXT LENGTH
- 128K tokens
- CAPABILITIES
- chat, coding, reasoning, tool_use
- RELEASE DATE
- 2025-08-05
- PROVIDER
- OpenAI
- FAMILY
- gpt-oss
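Because only the 5.1B active parameters are read per decoded token, the MoE design decodes far faster than a dense 117B model would. A back-of-envelope sketch of the decode-speed ceiling, assuming memory-bandwidth-bound decoding, the Q4_K_M bit-width from the VRAM table, and the H100 SXM5 bandwidth from the GPU table (KV-cache traffic ignored):

```python
# Back-of-envelope decode-speed bound for a memory-bandwidth-bound MoE model.
# Assumptions (not measured): each token reads only the active expert weights;
# KV-cache and activation traffic are ignored.

ACTIVE_PARAMS_B = 5.1   # billions of active parameters per token (spec sheet)
BPW = 4.89              # bits per weight at Q4_K_M (VRAM table)
BANDWIDTH_GBPS = 3350   # NVIDIA H100 SXM5 memory bandwidth (GPU table)

bytes_per_token_gb = ACTIVE_PARAMS_B * BPW / 8   # GB of weights read per token
max_tokens_per_s = BANDWIDTH_GBPS / bytes_per_token_gb

print(f"{bytes_per_token_gb:.1f} GB read per token")        # ~3.1 GB
print(f"~{max_tokens_per_s:.0f} tokens/s theoretical ceiling")
```

Real throughput lands well below this bound once attention, KV-cache reads, and kernel overheads are counted; the point is only that the active-parameter count, not the 117B total, sets the scale.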
▸ VRAM REQUIREMENTS
| QUANT | BPW | VRAM | QUALITY |
|---|---|---|---|
| IQ2_XXS | 2.38 | 35.3 GB | 65% |
| IQ2_M | 2.93 | 43.3 GB | 75% |
| Q2_K | 3.16 | 46.7 GB | 78% |
| IQ3_XXS | 3.25 | 48.0 GB | 82% |
| IQ3_XS | 3.5 | 51.7 GB | 84% |
| Q3_K_S | 3.64 | 53.7 GB | 85% |
| IQ3_M | 3.76 | 55.5 GB | 86% |
| Q3_K_M | 4 | 59.0 GB | 88% |
| Q3_K_L | 4.3 | 63.4 GB | 90% |
| IQ4_XS | 4.46 | 65.7 GB | 92% |
| Q4_K_S | 4.67 | 68.8 GB | 93% |
| Q4_K_M | 4.89 | 72.0 GB | 94% |
| Q5_K_S | 5.57 | 81.9 GB | 96% |
| Q5_K_M | 5.7 | 83.9 GB | 96% |
| Q6_K | 6.56 | 96.4 GB | 97% |
| Q8_0 | 8.5 | 124.8 GB | 100% |
| FP16 | 16 | 234.5 GB | 100% |
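The VRAM figures above track a simple weights-size formula. A minimal sketch that reproduces them, assuming 117B weights plus a flat ~0.5 GB overhead (an assumption inferred from the table, not a documented constant), and ignoring KV cache, which grows with context length:

```python
# Estimate VRAM for 117B parameters at a given quantization bit-width.
# Assumption: the table's figures are weight bytes plus ~0.5 GB of overhead;
# KV cache and activations are NOT included and grow with context length.

PARAMS_B = 117.0   # total parameters, billions
OVERHEAD_GB = 0.5  # assumed flat overhead

def vram_gb(bits_per_weight: float) -> float:
    """Weight memory in decimal GB for the given bits-per-weight."""
    return PARAMS_B * bits_per_weight / 8 + OVERHEAD_GB

for name, bpw in [("Q4_K_M", 4.89), ("Q8_0", 8.5), ("FP16", 16.0)]:
    print(f"{name}: {vram_gb(bpw):.1f} GB")  # matches the table: 72.0 / 124.8 / 234.5
```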
§ 01 BENCHMARK SCORES
| BENCHMARK | SCORE |
|---|---|
| HumanEval | 88.3 |
| MMLU-Pro | 90.0 |
| LiveCodeBench | 70.7 |
| SWE-bench | 62.4 |
| AIME | 97.9 |
| MATH-500 | 66.7 |
| GPQA Diamond | 80.1 |
| HLE | 14.9 |
| AA Intelligence | 24.5 |
| AA Coding | 15.5 |
| AA Math | 66.7 |
| AA IFBench | 58.3 |
| AA Terminal-Bench | 5.3 |
| AA tau2 | 45.0 |
| AA SciCode | 36.0 |
| AA LCR | 43.7 |
§ 02 RUN COMMAND
Run GPT-OSS 120B locally with Ollama — needs 72.0 GB VRAM at Q4_K_M:
$ ollama run gpt-oss:120b
§ 03 COMPATIBLE GPUs
30 compatible GPUs @ Q4_K_M, including:
| GPU | VRAM | BANDWIDTH |
|---|---|---|
| NVIDIA H100 SXM5 80GB | 80 GB | 3350 GB/s |
| NVIDIA H100 PCIe 80GB | 80 GB | 2000 GB/s |
| NVIDIA A100 SXM 80GB | 80 GB | 2039 GB/s |
| NVIDIA A100 PCIe 80GB | 80 GB | 1935 GB/s |
| NVIDIA A100 SXM4 80 GB | 80 GB | 2040 GB/s |
| NVIDIA A100 PCIe 80 GB | 80 GB | 1940 GB/s |
| NVIDIA A100X | 80 GB | 2040 GB/s |
| NVIDIA H100 PCIe 80 GB | 80 GB | 2040 GB/s |
| NVIDIA H100 SXM5 80 GB | 80 GB | 3360 GB/s |
| NVIDIA H100 CNX | 80 GB | 2040 GB/s |
| NVIDIA A800 PCIe 80 GB | 80 GB | 1940 GB/s |
| NVIDIA A800 SXM4 80 GB | 80 GB | 2040 GB/s |
| NVIDIA H800 PCIe 80 GB | 80 GB | 2040 GB/s |
| NVIDIA H800 SXM5 | 80 GB | 3360 GB/s |
| NVIDIA RTX 6000D | 84 GB | 1570 GB/s |
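The compatibility check above reduces to comparing a GPU's VRAM against the quantization's footprint from the VRAM table. A minimal sketch using a few entries from both tables (the GPU subset is illustrative, not exhaustive, and single-card fit ignores KV cache and multi-GPU splits):

```python
# Filter GPUs that can hold a given quantization of GPT-OSS 120B on one card.
# VRAM figures come from the tables above; KV cache is not accounted for.

GPUS = [  # (name, VRAM in GB): subset of the compatible-GPU list
    ("NVIDIA H100 SXM5 80GB", 80),
    ("NVIDIA A100 PCIe 80GB", 80),
    ("NVIDIA RTX 6000D", 84),
]
QUANT_VRAM_GB = {"Q4_K_M": 72.0, "Q5_K_S": 81.9, "Q8_0": 124.8}

def compatible(quant: str) -> list[str]:
    """Names of GPUs whose VRAM meets the quantization's footprint."""
    need = QUANT_VRAM_GB[quant]
    return [name for name, vram in GPUS if vram >= need]

print(compatible("Q4_K_M"))  # all three cards fit the 72.0 GB footprint
print(compatible("Q5_K_S"))  # only the 84 GB RTX 6000D fits 81.9 GB
print(compatible("Q8_0"))    # none of these single cards fit 124.8 GB
```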