DeepSeek-V3 684.5B — 684.5B Parameter Mixture of Experts LLM
Model Specifications
- Parameters: 684.5B total (37B active)
- Architecture: Mixture of Experts
- Context Length: 160K tokens
- Capabilities: chat
- Release Date: 2025-01-20
- Provider: DeepSeek
- Family: deepseek
VRAM Requirements
| Quantization | Bits per weight (BPW) | VRAM | Quality |
|---|---|---|---|
| IQ2_XXS | 2.38 | 204.1 GB | 65% |
| IQ2_M | 2.93 | 251.2 GB | 75% |
| Q2_K | 3.16 | 270.9 GB | 78% |
| IQ3_XXS | 3.25 | 278.6 GB | 82% |
| IQ3_XS | 3.5 | 300.0 GB | 84% |
| Q3_K_S | 3.64 | 311.9 GB | 85% |
| IQ3_M | 3.76 | 322.2 GB | 86% |
| Q3_K_M | 4 | 342.7 GB | 88% |
| Q3_K_L | 4.3 | 368.4 GB | 90% |
| IQ4_XS | 4.46 | 382.1 GB | 92% |
| Q4_K_S | 4.67 | 400.1 GB | 93% |
| Q4_K_M | 4.89 | 418.9 GB | 94% |
| Q5_K_S | 5.57 | 477.1 GB | 96% |
| Q5_K_M | 5.7 | 488.2 GB | 96% |
| Q6_K | 6.56 | 561.8 GB | 97% |
| Q8_0 | 8.5 | 727.8 GB | 100% |
| FP16 | 16 | 1369.5 GB | 100% |
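The VRAM figures above follow a simple rule: total parameters × bits per weight, divided by 8 to convert bits to bytes, plus a small fixed overhead. A minimal sketch (the 0.5 GB overhead constant is an assumption inferred from the table rows, not an official figure, and weight memory only — context/KV-cache memory comes on top):

```python
def estimate_vram_gb(params_b: float, bpw: float, overhead_gb: float = 0.5) -> float:
    """Estimate the VRAM needed to hold a model's weights.

    params_b    -- total parameter count in billions (684.5 for this model)
    bpw         -- bits per weight of the chosen quantization
    overhead_gb -- fixed buffer overhead (assumed ~0.5 GB to match the table)
    """
    weights_gb = params_b * bpw / 8  # billions of params * bits per weight / 8 = GB
    return round(weights_gb + overhead_gb, 1)

# Reproducing a few rows of the VRAM table for DeepSeek-V3 684.5B:
print(estimate_vram_gb(684.5, 4.89))  # Q4_K_M  -> 418.9
print(estimate_vram_gb(684.5, 2.38))  # IQ2_XXS -> 204.1
print(estimate_vram_gb(684.5, 16))    # FP16    -> 1369.5
```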
Benchmark Scores
- BigCodeBench: 50.0
- Arena Elo: 1373.0
How to Run DeepSeek-V3 684.5B
Run DeepSeek-V3 684.5B locally with Ollama (needs 418.9 GB VRAM at Q4_K_M):
ollama run deepseek-v3
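Given a fixed VRAM budget, you can pick the highest-quality quantization that fits by filtering the table above. A minimal sketch, with the table values copied from this page (the helper name and tie-breaking rule are our own, not part of any tool):

```python
# (quantization, VRAM in GB, quality %) rows from the VRAM table above
QUANTS = [
    ("IQ2_XXS", 204.1, 65), ("IQ2_M", 251.2, 75), ("Q2_K", 270.9, 78),
    ("IQ3_XXS", 278.6, 82), ("IQ3_XS", 300.0, 84), ("Q3_K_S", 311.9, 85),
    ("IQ3_M", 322.2, 86), ("Q3_K_M", 342.7, 88), ("Q3_K_L", 368.4, 90),
    ("IQ4_XS", 382.1, 92), ("Q4_K_S", 400.1, 93), ("Q4_K_M", 418.9, 94),
    ("Q5_K_S", 477.1, 96), ("Q5_K_M", 488.2, 96), ("Q6_K", 561.8, 97),
    ("Q8_0", 727.8, 100), ("FP16", 1369.5, 100),
]

def best_quant(vram_budget_gb: float):
    """Return the highest-quality quantization that fits the budget.

    Ties on quality are broken toward the smaller footprint;
    returns None if even IQ2_XXS does not fit.
    """
    fitting = [q for q in QUANTS if q[1] <= vram_budget_gb]
    return max(fitting, key=lambda q: (q[2], -q[1])) if fitting else None

# e.g. eight 80 GB GPUs = 640 GB of pooled VRAM
print(best_quant(640))  # -> ('Q6_K', 561.8, 97)
```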