
OpenAI GPT-OSS 20B

OpenAI's first open-weight model since GPT-2. A Mixture-of-Experts design with 3.6B active parameters that runs in roughly 12 GB of memory at low quantizations and matches o3-mini on reasoning benchmarks.

Capabilities: chat, coding, reasoning, tool_use
Parameters: 21B (3.6B active)
Context length: 128K
Benchmarks: 16
Quantizations: 10
Architecture: MoE
Released: 2025-08-05
Layers: 28
KV Heads: 8
Head Dim: 128
Family: gpt-oss
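The attention geometry above (28 layers, 8 KV heads, head dim 128) pins down the KV-cache cost of long contexts. A back-of-the-envelope sketch, assuming a plain FP16 cache on every layer; cache quantization or sliding-window attention in the actual model would shrink these numbers:

```python
# KV-cache size from the spec above: 28 layers, 8 KV heads, head dim 128.
# Assumes every layer caches full-context K and V in FP16 (2 bytes/value).

def kv_cache_bytes(tokens, layers=28, kv_heads=8, head_dim=128, bytes_per_val=2):
    # 2x for the separate K and V tensors
    return 2 * layers * kv_heads * head_dim * bytes_per_val * tokens

print(kv_cache_bytes(1))                    # 114,688 bytes ~ 112 KiB per token
print(kv_cache_bytes(128 * 1024) / 2**30)   # 14.0 GiB at the full 128K context
```

So a full 128K-token context adds on the order of 14 GiB on top of the weights unless the engine compresses the cache.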

Quantization Options

Quant    Bits   VRAM     Quality
Q3_K_M   4.0    11.0 GB  low
Q3_K_L   4.3    11.8 GB  moderate
IQ4_XS   4.46   12.2 GB  moderate
Q4_K_S   4.67   12.7 GB  moderate
Q4_K_M   4.89   13.3 GB  good
Q5_K_S   5.57   15.1 GB  good
Q5_K_M   5.7    15.5 GB  good
Q6_K     6.56   17.7 GB  excellent
Q8_0     8.5    22.8 GB  lossless
FP16     16     42.5 GB  lossless
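The VRAM column tracks a simple rule: total parameters times bits per weight, plus a small runtime overhead. A sketch with the overhead fitted to this table (a hypothetical helper, not an official formula); note that for an MoE all 21B parameters must sit in VRAM even though only 3.6B are active per token:

```python
# Approximate the table's VRAM column: weight bytes = params * bpw / 8,
# plus a fixed runtime overhead (CUDA context, activations, small buffers).
# The ~0.5 GB overhead is fitted to this table, not an official figure.

PARAMS = 21e9  # total parameters; all experts are resident in VRAM

def est_vram_gb(bpw, overhead_gb=0.5):
    return PARAMS * bpw / 8 / 1e9 + overhead_gb

print(round(est_vram_gb(4.89), 1))  # Q4_K_M -> 13.3
print(round(est_vram_gb(16), 1))    # FP16   -> 42.5
```

The same estimator reproduces every row above to within 0.1 GB, which is why a 16 GB card comfortably fits Q4/Q5 but not Q6_K.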


Benchmarks (16)

AIME: 98.7
MATH-500: 89.3
MMLU-PRO: 85.3
HumanEval: 81.7
GPQA Diamond: 71.5
IFEval: 69.5
LiveCodeBench: 65.2
AA Math: 62.3
IFBench: 57.8
τ²-Bench: 50.3
SciCode: 34.0
AA Long Context: 31.0
AA Intelligence: 20.8
AA Coding: 14.4
HLE: 10.9
Terminal-Bench: 4.5

Run this model

Ollama (easiest way to get started; beginner-friendly):

curl -fsSL https://ollama.com/install.sh | sh
ollama run gpt-oss:20b-q4_K_M

Downloads and runs automatically. Add --verbose for speed stats.
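Beyond the interactive REPL, the Ollama daemon also serves a local REST API (default port 11434). A minimal stdlib-only sketch; it assumes the daemon is running and the tag pulled above is available:

```python
# Query a locally running Ollama server over its REST API (default port
# 11434), stdlib only. Assumes `ollama run gpt-oss:20b-q4_K_M` has already
# pulled the model and the Ollama daemon is running.
import json
import urllib.request

def generate(prompt, model="gpt-oss:20b-q4_K_M", host="http://localhost:11434"):
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# print(generate("Why is the sky blue?"))
```

With "stream": False the server returns one JSON object whose "response" field holds the full completion; drop that flag to receive newline-delimited streaming chunks instead.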

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting with zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

Install with pip install fitmyllm, then run fitmyllm.

Features: auto-detect GPU, live tok/s in chat, speed benchmarks, 9 inference engines.

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.

GPU                            VRAM    Bandwidth    Price
NVIDIA RTX 5080                16 GB   960 GB/s     $999
NVIDIA RTX 5070 Ti             16 GB   896 GB/s     $749
NVIDIA RTX 4080 SUPER          16 GB   736 GB/s     $999
NVIDIA RTX 4080                16 GB   717 GB/s     $1199
AMD RX 7900 GRE                16 GB   576 GB/s     $549
AMD RX 7800 XT                 16 GB   624 GB/s     $499
AMD RX 7600 XT                 16 GB   288 GB/s     $329
AMD RX 6950 XT                 16 GB   576 GB/s     $449
AMD RX 6900 XT                 16 GB   512 GB/s     $469
AMD RX 6800 XT                 16 GB   512 GB/s     $599
AMD RX 6800                    16 GB   512 GB/s     $599
Intel Arc A770 16GB            16 GB   560 GB/s     $349
Apple M1 Pro (16GB)            16 GB   200 GB/s     $999
Apple M2 Pro (16GB)            16 GB   200 GB/s     $1299
Apple M4 (16GB)                16 GB   120 GB/s     $499
NVIDIA Tesla T4 16GB           16 GB   320 GB/s     $800
NVIDIA V100 PCIe 16GB          16 GB   900 GB/s     $2000
AMD RX 9070 XT                 16 GB   640 GB/s     $599
AMD RX 9070                    16 GB   672 GB/s     $549
Apple M1 (16GB)                16 GB   68.25 GB/s   $699
Apple M2 (16GB)                16 GB   100 GB/s     $799
Apple M3 (16GB)                16 GB   100 GB/s     $799
NVIDIA Tesla P100 DGXS         16 GB   732 GB/s     n/a
NVIDIA Tesla P100 PCIe 16 GB   16 GB   732 GB/s     n/a
NVIDIA Tesla P100 SXM2         16 GB   732 GB/s     n/a
NVIDIA Tesla V100 PCIe 16 GB   16 GB   897 GB/s     n/a
NVIDIA Tesla V100 SXM2 16 GB   16 GB   1130 GB/s    n/a
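The bandwidth column above roughly bounds decode speed: each generated token must stream every active weight from memory at least once. A rough ceiling estimator (a hypothetical helper; assumes Q4_K_M's 4.89 bits/weight over the 3.6B active parameters and ignores KV-cache reads and kernel overhead, so real throughput will sit well below this):

```python
# Upper-bound decode speed from memory bandwidth. Each token must read
# every *active* weight once; for this MoE that is ~3.6B of the 21B params.
# Treat the result as a ceiling, not a prediction.

ACTIVE_PARAMS = 3.6e9

def max_tok_per_s(bandwidth_gb_s, bpw=4.89):
    bytes_per_token = ACTIVE_PARAMS * bpw / 8  # active weight bytes per token
    return bandwidth_gb_s * 1e9 / bytes_per_token

print(round(max_tok_per_s(960)))  # RTX 5080 ceiling at Q4_K_M
print(round(max_tok_per_s(120)))  # Apple M4 ceiling at Q4_K_M
```

This is why the MoE design matters for local use: a dense 21B model would read all 21B parameters per token, cutting these ceilings by roughly 6x.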

Spec Sheet

GPT-OSS 20B: 21B MoE.

Specifications

Parameters: 21B (3.6B active)
Architecture: Mixture of Experts
Context length: 128K tokens
Capabilities: chat, coding, reasoning, tool_use
Release date: 2025-08-05
Provider: OpenAI
Family: gpt-oss
VRAM Requirements

Quant    BPW    VRAM     Quality
Q3_K_M   4.0    11.0 GB  88%
Q3_K_L   4.3    11.8 GB  90%
IQ4_XS   4.46   12.2 GB  92%
Q4_K_S   4.67   12.7 GB  93%
Q4_K_M   4.89   13.3 GB  94%
Q5_K_S   5.57   15.1 GB  96%
Q5_K_M   5.7    15.5 GB  96%
Q6_K     6.56   17.7 GB  97%
Q8_0     8.5    22.8 GB  100%
FP16     16     42.5 GB  100%
Run Command

Run GPT-OSS 20B locally with Ollama (needs 13.3 GB VRAM at Q4_K_M):

ollama run gpt-oss:20b