
DeepSeek V3.2-Speciale

Reasoning-optimized variant of V3.2. Extended thinking for complex math and code tasks.

chat · coding · reasoning · tool_use
Parameters: 671B (37B active)
Context length: 128K
Benchmarks: 15
Quantizations: 17
Architecture: MoE
Released: 2025-12-01
Layers: 61
KV heads: 8
Head dim: 128
Family: deepseek
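The layer count, KV-head count, head dimension, and context length above are enough for a back-of-the-envelope KV-cache estimate. The sketch below assumes plain GQA attention with fp16 K/V tensors; DeepSeek's Multi-head Latent Attention stores a compressed latent instead, so treat this as a rough upper bound, not the model's actual cache size.

```python
# Naive upper-bound KV-cache estimate from the spec sheet above.
# Assumes standard GQA attention with fp16 K/V; MLA compresses the
# cache well below this figure.
LAYERS = 61
KV_HEADS = 8
HEAD_DIM = 128
CONTEXT = 128 * 1024          # 128K tokens
BYTES_PER_ELEM = 2            # fp16

def kv_cache_gb(tokens: int = CONTEXT) -> float:
    # Two tensors (K and V) per layer, one vector per token per KV head.
    elems = 2 * LAYERS * KV_HEADS * HEAD_DIM * tokens
    return elems * BYTES_PER_ELEM / 1e9

print(f"~{kv_cache_gb():.1f} GB per full-context sequence (upper bound)")
```

At the full 128K context this naive bound lands around 33 GB per sequence, which is why the VRAM figures below should be read as weights-only numbers.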

Quantization Options

Quant    | BPW  | VRAM      | Quality
IQ2_XXS  | 2.38 | 200.1 GB  | low (65%)
IQ2_M    | 2.93 | 246.2 GB  | low (75%)
Q2_K     | 3.16 | 265.5 GB  | low (78%)
IQ3_XXS  | 3.25 | 273.1 GB  | low (82%)
IQ3_XS   | 3.5  | 294.1 GB  | low (84%)
Q3_K_S   | 3.64 | 305.8 GB  | low (85%)
IQ3_M    | 3.76 | 315.9 GB  | low (86%)
Q3_K_M   | 4    | 336.0 GB  | low (88%)
Q3_K_L   | 4.3  | 361.2 GB  | moderate (90%)
IQ4_XS   | 4.46 | 374.6 GB  | moderate (92%)
Q4_K_S   | 4.67 | 392.2 GB  | moderate (93%)
Q4_K_M   | 4.89 | 410.6 GB  | good (94%)
Q5_K_S   | 5.57 | 467.7 GB  | good (96%)
Q5_K_M   | 5.7  | 478.6 GB  | good (96%)
Q6_K     | 6.56 | 550.7 GB  | excellent (97%)
Q8_0     | 8.5  | 713.4 GB  | lossless (100%)
FP16     | 16   | 1342.5 GB | lossless (100%)
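The VRAM column tracks a simple rule of thumb: total parameters times bits per weight, divided by 8. A minimal sketch of that estimate, which lands within a few GB of the table (real GGUF files keep some tensors at higher precision, and runtime use adds KV cache and activation overhead on top):

```python
# Back-of-the-envelope weight-memory estimate: params * BPW / 8 bits.
# Close to the table above; actual files differ by a few GB because
# some tensors (e.g. embeddings) stay at higher precision.
TOTAL_PARAMS = 671e9

def weight_gb(bpw: float) -> float:
    return TOTAL_PARAMS * bpw / 8 / 1e9

for quant, bpw in [("Q4_K_M", 4.89), ("Q8_0", 8.5), ("FP16", 16.0)]:
    print(f"{quant}: ~{weight_gb(bpw):.0f} GB")
```

Note that for an MoE model the full 671B of weights must be resident even though only 37B are active per token, which is why the table is sized from total, not active, parameters.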


Benchmarks (15)

AIME: 97.0
MATH-500: 96.7
AA Math: 96.7
LiveCodeBench: 89.6
MMLU-PRO: 85.0
GPQA Diamond: 82.4
SWE-bench: 70.0
IFBench: 63.9
AA Long Context: 59.3
BigCodeBench: 50.0
SciCode: 44.0
AA Coding: 37.9
Terminal-Bench: 34.8
AA Intelligence: 29.4
HLE: 26.1

Run this model

Easiest way to get started (recommended for beginners):

curl -fsSL https://ollama.com/install.sh | sh
ollama run deepseek:671b-q4_K_M

Tag may need adjustment — check ollama.com/library/deepseek for available tags.

▸ SETUP GUIDE

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

pip install fitmyllm, then run fitmyllm.

Auto-detect GPU · Live tok/s in chat · Speed benchmarks · 9 inference engines
