# Kimi-Linear-48B-A3B — 48B-Parameter Mixture-of-Experts LLM
## Model Specifications

- Parameters: 48B total (3B active per token)
- Architecture: Mixture of Experts (MoE)
- Context Length: 1024K (~1M) tokens
- Capabilities: chat, coding, reasoning, multilingual
- Release Date: 2025-10-30
- Provider: Moonshot AI
- Family: kimi
## VRAM Requirements
| Quantization | Bits per Weight | VRAM | Quality retained |
|---|---|---|---|
| IQ3_XXS | 3.25 | 20.0 GB | 82% |
| IQ3_XS | 3.5 | 21.5 GB | 84% |
| Q3_K_S | 3.64 | 22.3 GB | 85% |
| IQ3_M | 3.76 | 23.0 GB | 86% |
| Q3_K_M | 4 | 24.5 GB | 88% |
| Q3_K_L | 4.3 | 26.3 GB | 90% |
| IQ4_XS | 4.46 | 27.2 GB | 92% |
| Q4_K_S | 4.67 | 28.5 GB | 93% |
| Q4_K_M | 4.89 | 29.8 GB | 94% |
| Q5_K_S | 5.57 | 33.9 GB | 96% |
| Q5_K_M | 5.7 | 34.7 GB | 96% |
| Q6_K | 6.56 | 39.8 GB | 97% |
| Q8_0 | 8.5 | 51.5 GB | 100% |
| FP16 | 16 | 96.5 GB | 100% |
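The VRAM column above scales nearly linearly with bits per weight: bytes ≈ total_params × bpw / 8, plus a small margin for buffers. A sketch that approximately reproduces the table's rows (the 2% overhead factor is an assumption for illustration, not a published number):

```python
def estimate_vram_gb(total_params: float, bits_per_weight: float,
                     overhead: float = 1.02) -> float:
    """Approximate VRAM to load the weights: params * bpw / 8, plus margin."""
    return total_params * bits_per_weight / 8 / 1e9 * overhead

# Q4_K_M row: 48B params at 4.89 bits/weight
print(round(estimate_vram_gb(48e9, 4.89), 1), "GB")  # close to the table's 29.8 GB
```

Note this covers weights only; long contexts add KV-cache memory on top, so headroom beyond the table's figure is advisable at large context lengths.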
## Benchmark Scores

- MMLU-Pro: 51.0
## How to Run Kimi-Linear-48B-A3B

Run Kimi-Linear-48B-A3B locally with Ollama (needs 29.8 GB VRAM at Q4_K_M):

```
ollama run kimi-linear:48b-a3b
```

## Compatible GPUs (30)
GPUs that can run Kimi-Linear-48B-A3B at Q4_K_M quantization:

- NVIDIA RTX 5090 (32GB, 1792 GB/s)
- Apple M1 Max (32GB, 400 GB/s)
- Apple M2 Max (32GB, 400 GB/s)
- NVIDIA V100 SXM2 32GB (32GB, 900 GB/s)
- Apple M2 Pro (32GB, 200 GB/s)
- Apple M4 (32GB, 120 GB/s)
- NVIDIA Tesla V100 DGXS 32 GB (32GB, 897 GB/s)
- NVIDIA Tesla V100 PCIe 32 GB (32GB, 897 GB/s)
- NVIDIA Tesla V100 SXM2 32 GB (32GB, 898 GB/s)
- NVIDIA Tesla V100 SXM3 32 GB (32GB, 981 GB/s)
- AMD Radeon Instinct MI60 (32GB, 1020 GB/s)
- NVIDIA Tesla V100S PCIe 32 GB (32GB, 1130 GB/s)
- AMD Radeon Instinct MI100 (32GB, 1230 GB/s)
- NVIDIA RTX 5000 Ada Generation (32GB, 576 GB/s)
- NVIDIA GeForce RTX 5090 (32GB, 1790 GB/s)
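The compatibility check behind this list is a simple comparison of the quantized model size against a GPU's VRAM. A minimal sketch using the table's Q4_K_M figure (the RTX 4090 entry is added for contrast and is not in the list above):

```python
# VRAM needed at Q4_K_M, from the quantization table above.
Q4_K_M_VRAM_GB = 29.8

# GPU -> VRAM in GB; specs from the compatibility list (RTX 4090 added
# as a counterexample: 24 GB is not enough at this quantization).
gpus = {
    "NVIDIA RTX 5090": 32,
    "Apple M1 Max": 32,
    "NVIDIA RTX 4090": 24,
}

fits = {name: vram >= Q4_K_M_VRAM_GB for name, vram in gpus.items()}
for name, ok in fits.items():
    print(f"{name}: {'fits' if ok else 'too small'} at Q4_K_M")
```

A 24 GB card would instead need a smaller quantization from the table, such as IQ3_XS (21.5 GB) or Q3_K_S (22.3 GB), at a cost in quality.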