
DeepSeek V2 236B

DeepSeek V2 — innovative MLA attention + MoE. Great efficiency for its quality.

chat · coding · reasoning

Parameters: 236B (21B active)
Context length: 125K
Benchmarks: 1
Quantizations: 17
Architecture: MoE
Released: 2024-05-06
Layers: 60
KV Heads: 128
Head Dim: 40
Family: deepseek

Quantization Options

Quant     Bits   VRAM       Quality
IQ2_XXS   2.38    70.7 GB   low
IQ2_M     2.93    86.9 GB   low
Q2_K      3.16    93.7 GB   low
IQ3_XXS   3.25    96.4 GB   low
IQ3_XS    3.5    103.7 GB   low
Q3_K_S    3.64   107.9 GB   low
IQ3_M     3.76   111.4 GB   low
Q3_K_M    4      118.5 GB   low
Q3_K_L    4.3    127.3 GB   moderate
IQ4_XS    4.46   132.1 GB   moderate
Q4_K_S    4.67   138.3 GB   moderate
Q4_K_M    4.89   144.7 GB   good
Q5_K_S    5.57   164.8 GB   good
Q5_K_M    5.7    168.6 GB   good
Q6_K      6.56   194.0 GB   excellent
Q8_0      8.5    251.2 GB   lossless
FP16      16     472.5 GB   lossless
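The VRAM figures above follow almost directly from bits per weight: total parameters × BPW / 8 bytes, plus a small runtime overhead. A minimal sketch (the 236B parameter count and BPW values come from the table; treating the small gap to the listed figures as overhead is an assumption):

```python
def est_weights_vram_gb(n_params: float, bpw: float) -> float:
    """Rough weights-only VRAM in decimal GB: params * bits-per-weight / 8 bytes."""
    return n_params * bpw / 8 / 1e9

# Q4_K_M at 4.89 BPW over 236B parameters:
q4 = est_weights_vram_gb(236e9, 4.89)   # ~144.2 GB, close to the table's 144.7 GB
```

The remaining fraction of a GB in each table row is runtime overhead (KV cache, buffers), which grows with context length.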




Benchmarks (1)

BigCodeBench: 40.4

Run this model

Easiest way to get started · Beginners

$ curl -fsSL https://ollama.com/install.sh | sh
$ ollama run deepseek-v2:236b-q4_K_M

Downloads and runs automatically. Add --verbose for speed stats.

▸ SETUP GUIDE

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

pip install fitmyllm, then run fitmyllm
Auto-detect GPU · Live tok/s in chat · Speed benchmarks · 9 inference engines

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.

GPU                          VRAM     Bandwidth    Vendor   Price
AMD Instinct MI300X          192 GB   5300 GB/s    AMD      $15000
Apple M2 Ultra (192GB)       192 GB   800 GB/s     Apple    $5499
Apple M3 Ultra (192GB)       192 GB   800 GB/s     Apple    $6999
Apple M4 Ultra (192GB)       192 GB   1092 GB/s    Apple    $7499
AMD Radeon Instinct MI300A   192 GB   10300 GB/s   AMD      $12000
AMD Radeon Instinct MI300X   192 GB   10300 GB/s   AMD      $15000
AMD Radeon Instinct MI308X   192 GB   10300 GB/s   AMD      $12000
Apple M5 Ultra (192GB)       192 GB   1228 GB/s    Apple    n/a
AMD Radeon Instinct MI325X   288 GB   10300 GB/s   AMD      $20000
AMD Radeon Instinct MI350X   288 GB   8190 GB/s    AMD      $25000
AMD Radeon Instinct MI355X   288 GB   8190 GB/s    AMD      $30000
Apple M4 Ultra (384GB)       384 GB   1092 GB/s    Apple    $9999
Apple M5 Ultra (384GB)       384 GB   1228 GB/s    Apple    n/a
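Shortlisting hardware like the list above reduces to a VRAM comparison: keep every device whose memory meets the quant's footprint, then sort smallest-first. A sketch over a hypothetical mini-catalog (the entries mirror the list above plus a consumer card for contrast; the helper is illustrative, not this site's code):

```python
# (name, vram_gb, bandwidth_gb_s)
CATALOG = [
    ("AMD Instinct MI300X", 192, 5300),
    ("Apple M2 Ultra (192GB)", 192, 800),
    ("NVIDIA RTX 4090", 24, 1008),
    ("AMD Radeon Instinct MI325X", 288, 10300),
]

def gpus_that_fit(vram_needed_gb, catalog=CATALOG):
    """Devices with enough VRAM, sorted by minimum VRAM like the page."""
    fits = [g for g in catalog if g[1] >= vram_needed_gb]
    return sorted(fits, key=lambda g: g[1])

# Q4_K_M needs ~144.7 GB, so the 24 GB card drops out:
names = [g[0] for g in gpus_that_fit(144.7)]
```

Bandwidth then determines token throughput among the survivors, which is why the page lists it alongside VRAM.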

Find the best GPU for DeepSeek V2 236B

Build Hardware for DeepSeek V2 236B


▸ SPEC SHEET

DeepSeek V2 236B: 236B MoE.

▸ SPECIFICATIONS
PARAMETERS: 236B (21B active)
ARCHITECTURE: Mixture of Experts
CONTEXT LENGTH: 125K tokens
CAPABILITIES: chat, coding, reasoning
RELEASE DATE: 2024-05-06
PROVIDER: DeepSeek
FAMILY: deepseek
▸ VRAM REQUIREMENTS
QUANT     BPW    VRAM       QUALITY
IQ2_XXS   2.38    70.7 GB   65%
IQ2_M     2.93    86.9 GB   75%
Q2_K      3.16    93.7 GB   78%
IQ3_XXS   3.25    96.4 GB   82%
IQ3_XS    3.5    103.7 GB   84%
Q3_K_S    3.64   107.9 GB   85%
IQ3_M     3.76   111.4 GB   86%
Q3_K_M    4      118.5 GB   88%
Q3_K_L    4.3    127.3 GB   90%
IQ4_XS    4.46   132.1 GB   92%
Q4_K_S    4.67   138.3 GB   93%
Q4_K_M    4.89   144.7 GB   94%
Q5_K_S    5.57   164.8 GB   96%
Q5_K_M    5.7    168.6 GB   96%
Q6_K      6.56   194.0 GB   97%
Q8_0      8.5    251.2 GB   100%
FP16      16     472.5 GB   100%
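The table above also answers the practical question of which quant a given VRAM budget allows: take the highest-quality row that still fits. A sketch using a few rows from the table (the selection helper is illustrative):

```python
# (quant, vram_gb, quality_pct) -- rows taken from the table above
QUANTS = [
    ("IQ2_XXS", 70.7, 65), ("Q2_K", 93.7, 78), ("IQ3_XXS", 96.4, 82),
    ("Q4_K_M", 144.7, 94), ("Q5_K_S", 164.8, 96), ("Q8_0", 251.2, 100),
]

def best_quant(budget_gb):
    """Highest-quality quant that fits the VRAM budget, or None if none fits."""
    fits = [q for q in QUANTS if q[1] <= budget_gb]
    return max(fits, key=lambda q: q[2]) if fits else None

choice = best_quant(160)   # Q4_K_M fits in 160 GB; Q5_K_S (164.8 GB) does not
```

Leave a little headroom below the raw budget for KV cache, which these weights-only figures do not include.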
§ 01 BENCHMARK SCORES
BigCodeBench: 40.4
§ 02 RUN COMMAND

Run DeepSeek V2 236B locally with Ollama — needs 144.7 GB VRAM at Q4_K_M:

$ ollama run deepseek-v2:236b
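Once the model is pulled, Ollama also serves a local HTTP API (port 11434 by default) that scripts can call instead of the interactive CLI. A minimal sketch, assuming a stock Ollama install and the model tag used above; the `build_payload`/`ask` helper names are ours, but the `/api/generate` endpoint and its `model`/`prompt`/`stream` fields are Ollama's documented API:

```python
import json
import urllib.request

def build_payload(prompt, model="deepseek-v2:236b"):
    """Non-streaming generate request body for Ollama's /api/generate."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt, host="http://localhost:11434"):
    """POST one prompt to a local Ollama server and return the completion text."""
    data = json.dumps(build_payload(prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate", data=data,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# ask("Write a binary search in Python.")  # requires the Ollama server running
```

With `"stream": False` the server returns one JSON object whose `response` field holds the full completion; omit it to receive newline-delimited partial chunks instead.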