# MiniMax-M2.5: 228.7B-Parameter Mixture-of-Experts LLM
## Model Specifications

- Parameters: 228.7B total (21B active)
- Architecture: Mixture of Experts
- Context Length: 192K tokens
- Capabilities: chat
- Release Date: 2026-03-10
- Provider: MiniMax
- Family: minimax
## VRAM Requirements

| Quantization | Bits per Weight (BPW) | VRAM | Quality |
|---|---|---|---|
| IQ2_XXS | 2.38 | 68.5 GB | 65% |
| IQ2_M | 2.93 | 84.3 GB | 75% |
| Q2_K | 3.16 | 90.8 GB | 78% |
| IQ3_XXS | 3.25 | 93.4 GB | 82% |
| IQ3_XS | 3.5 | 100.5 GB | 84% |
| Q3_K_S | 3.64 | 104.5 GB | 85% |
| IQ3_M | 3.76 | 108.0 GB | 86% |
| Q3_K_M | 4 | 114.8 GB | 88% |
| Q3_K_L | 4.3 | 123.4 GB | 90% |
| IQ4_XS | 4.46 | 128.0 GB | 92% |
| Q4_K_S | 4.67 | 134.0 GB | 93% |
| Q4_K_M | 4.89 | 140.3 GB | 94% |
| Q5_K_S | 5.57 | 159.7 GB | 96% |
| Q5_K_M | 5.7 | 163.4 GB | 96% |
| Q6_K | 6.56 | 188.0 GB | 97% |
| Q8_0 | 8.5 | 243.5 GB | 100% |
| FP16 | 16 | 457.9 GB | 100% |
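The VRAM figures above follow directly from the bit width: weight memory is roughly total parameters × bits per weight ÷ 8 bytes. A minimal sketch of that estimate (the consistent ~0.5 GB gap versus the table is presumably fixed loader overhead, an assumption here):

```python
# Rough weight-memory estimate: parameters * bits-per-weight / 8 bytes.
# PARAMS_B is the total parameter count from the spec above. The ~0.5 GB
# gap versus the table is assumed to be runtime/loader overhead.
PARAMS_B = 228.7  # billions of parameters

def estimate_vram_gb(bpw: float) -> float:
    """Estimated weight memory in GB for a given bits per weight."""
    return PARAMS_B * bpw / 8  # billions of bytes == GB

for name, bpw in [("Q4_K_M", 4.89), ("Q8_0", 8.5), ("FP16", 16.0)]:
    print(f"{name}: {estimate_vram_gb(bpw):.1f} GB")
    # Q4_K_M: 139.8 GB, Q8_0: 243.0 GB, FP16: 457.4 GB
```

The same formula lets you work backwards: given a VRAM budget, the largest affordable BPW is `budget * 8 / PARAMS_B`.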
## Benchmark Scores

| Benchmark | Score |
|---|---|
| MATH | 86.3 |
| GPQA | 85.2 |
| Arena Elo | 1495.0 |
| GPQA Diamond | 84.8 |
| HLE | 19.1 |
| AA Intelligence | 41.9 |
| AA Coding | 37.4 |
## How to Run MiniMax-M2.5 228.7B
Run MiniMax-M2.5 228.7B locally with Ollama (requires 140.3 GB of VRAM at Q4_K_M):

```shell
ollama run minimax:228b
```

## Compatible GPUs (16)

GPUs that can run MiniMax-M2.5 228.7B at Q4_K_M quantization:
- NVIDIA H200 NVL (141 GB, 4890 GB/s)
- NVIDIA H200 SXM 141 GB (141 GB, 4890 GB/s)
- NVIDIA B300 (144 GB, 4100 GB/s)
- AMD Instinct MI300X (192 GB, 5300 GB/s)
- Apple M2 Ultra (192 GB, 800 GB/s)
- Apple M3 Ultra (192 GB, 800 GB/s)
- Apple M4 Ultra (192 GB, 1092 GB/s)
- AMD Radeon Instinct MI300A (192 GB, 10300 GB/s)
- AMD Radeon Instinct MI300X (192 GB, 10300 GB/s)
- AMD Radeon Instinct MI308X (192 GB, 10300 GB/s)
- Apple M5 Ultra (192 GB, 1228 GB/s)
- AMD Radeon Instinct MI325X (288 GB, 10300 GB/s)
- AMD Radeon Instinct MI350X (288 GB, 8190 GB/s)
- AMD Radeon Instinct MI355X (288 GB, 8190 GB/s)
- Apple M4 Ultra (384 GB, 1092 GB/s)
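Beyond the interactive `ollama run` command, Ollama also serves a local HTTP API on `localhost:11434` by default. A minimal sketch of a non-streaming `/api/generate` call; `build_request` is a hypothetical helper name, and the model tag `minimax:228b` is the one shown above and must already be pulled locally:

```python
# Minimal sketch of calling a local Ollama server's /api/generate endpoint.
# Assumes Ollama is running on its default port (11434) and that the
# "minimax:228b" tag from the command above is available locally.
import json
import urllib.request

def build_request(prompt: str, model: str = "minimax:228b") -> urllib.request.Request:
    """Build a non-streaming generate request for the Ollama HTTP API."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )

# With a running server, you would then send it and read the reply:
# with urllib.request.urlopen(build_request("Hello")) as resp:
#     print(json.loads(resp.read())["response"])
```

Setting `"stream": False` returns one JSON object with the full completion; with streaming enabled, the server instead emits one JSON object per generated chunk.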