▸ SPEC SHEET
Mixtral-8x7B — a 46.7B-parameter sparse Mixture-of-Experts model (13B active per token).
▸ SPECIFICATIONS
- PARAMETERS
- 46.7B (13B active)
- ARCHITECTURE
- Mixture of Experts
- CONTEXT LENGTH
- 32K tokens
- CAPABILITIES
- chat, coding
- RELEASE DATE
- 2024-01-11
- PROVIDER
- Mistral AI
- FAMILY
- mistral
▸ VRAM REQUIREMENTS
| QUANT | BPW | VRAM | QUALITY |
|---|---|---|---|
| IQ3_XXS | 3.25 | 19.5 GB | 82% |
| IQ3_XS | 3.5 | 20.9 GB | 84% |
| Q3_K_S | 3.64 | 21.7 GB | 85% |
| IQ3_M | 3.76 | 22.4 GB | 86% |
| Q3_K_M | 4 | 23.8 GB | 88% |
| Q3_K_L | 4.3 | 25.6 GB | 90% |
| IQ4_XS | 4.46 | 26.5 GB | 92% |
| Q4_K_S | 4.67 | 27.7 GB | 93% |
| Q4_K_M | 4.89 | 29.0 GB | 94% |
| Q5_K_S | 5.57 | 33.0 GB | 96% |
| Q5_K_M | 5.7 | 33.8 GB | 96% |
| Q6_K | 6.56 | 38.8 GB | 97% |
| Q8_0 | 8.5 | 50.1 GB | 100% |
| FP16 | 16 | 93.9 GB | 100% |
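The VRAM figures above follow a simple rule of thumb: total parameters times bits-per-weight, plus a small fixed overhead. A minimal sketch of that estimate — the ~0.5 GB overhead constant is an assumption fitted to this table, and real usage also grows with context length (KV cache), which is not modeled here:

```python
# Rough VRAM estimate for a quantized model: weight bytes + fixed overhead.
# Weight bytes = parameters * bits-per-weight / 8.
# The 0.5 GB overhead is an assumption fitted to the table above; KV-cache
# memory (grows with context length) is not included.

def estimate_vram_gb(params_billion: float, bpw: float, overhead_gb: float = 0.5) -> float:
    return params_billion * bpw / 8 + overhead_gb

# Mixtral-8x7B (46.7B params) at Q4_K_M (4.89 bpw) and Q8_0 (8.5 bpw):
print(round(estimate_vram_gb(46.7, 4.89), 1))  # ≈ 29.0 GB, matching the table
print(round(estimate_vram_gb(46.7, 8.5), 1))   # ≈ 50.1 GB
```

The same formula reproduces the other rows to within rounding.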
§ 01 BENCHMARK SCORES

| BENCHMARK | SCORE |
|---|---|
| MMLU-PRO | 29.6 |
| MATH | 12.2 |
| IFEval | 59.0 |
| BBH | 37.1 |
| GPQA | 9.5 |
| MUSR | 16.7 |
| GPQA Diamond | 29.2 |
| LiveCodeBench | 6.6 |
| MATH-500 | 29.9 |
| HLE | 4.5 |
| AA Intelligence | 7.7 |
| AA SciCode | 2.8 |
| AIME | 0.0 |
§ 02 RUN COMMAND
Run Mixtral-8x7B locally with Ollama — needs 29.0 GB VRAM at Q4_K_M:
$ ollama run mixtral:8x7b
§ 03 COMPATIBLE GPUs
30 compatible GPUs @ Q4_K_M

| GPU | VRAM | BANDWIDTH |
|---|---|---|
| NVIDIA RTX 5090 | 32 GB | 1792 GB/s |
| Apple M1 Max (32GB) | 32 GB | 400 GB/s |
| Apple M2 Max (32GB) | 32 GB | 400 GB/s |
| NVIDIA V100 SXM2 32GB | 32 GB | 900 GB/s |
| Apple M2 Pro (32GB) | 32 GB | 200 GB/s |
| Apple M4 (32GB) | 32 GB | 120 GB/s |
| NVIDIA Tesla V100 DGXS 32 GB | 32 GB | 897 GB/s |
| NVIDIA Tesla V100 PCIe 32 GB | 32 GB | 897 GB/s |
| NVIDIA Tesla V100 SXM2 32 GB | 32 GB | 898 GB/s |
| NVIDIA Tesla V100 SXM3 32 GB | 32 GB | 981 GB/s |
| AMD Radeon Instinct MI60 | 32 GB | 1020 GB/s |
| NVIDIA Tesla V100S PCIe 32 GB | 32 GB | 1130 GB/s |
| AMD Radeon Instinct MI100 | 32 GB | 1230 GB/s |
| NVIDIA RTX 5000 Ada Generation | 32 GB | 576 GB/s |
| NVIDIA GeForce RTX 5090 | 32 GB | 1790 GB/s |
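The bandwidth column matters because single-stream decoding on a memory-bound MoE model is limited by how fast the active weights can be streamed from VRAM each token (~13B active parameters here). A rough upper-bound sketch — real throughput is lower, since the formula ignores KV-cache reads, routing overhead, and compute limits:

```python
# Crude upper bound on decode tokens/s for a memory-bound MoE model:
# each generated token reads the active parameters once, so
#   tokens/s <= bandwidth (GB/s) / active-weight bytes (GB).
# Ignoring KV-cache traffic and compute limits is an assumption.

def max_tokens_per_s(bandwidth_gbps: float, active_params_b: float, bpw: float) -> float:
    active_gb = active_params_b * bpw / 8  # GB read from VRAM per token
    return bandwidth_gbps / active_gb

# Mixtral's ~13B active params at Q4_K_M (4.89 bpw):
print(round(max_tokens_per_s(1792, 13, 4.89)))  # RTX 5090-class bandwidth
print(round(max_tokens_per_s(400, 13, 4.89)))   # Apple M1/M2 Max-class bandwidth
```

This is why a 1792 GB/s card and a 120 GB/s Apple M4 both "fit" the model at 32 GB but decode at very different speeds.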