Gemma 4 26B A4B — 26B-Parameter Mixture-of-Experts LLM
Model Specifications
- Parameters: 26B (4B active)
- Architecture: Mixture of Experts
- Context Length: 256K tokens
- Capabilities: chat, coding, reasoning, multilingual, vision
- Release Date: 2026-04-02
- Provider:
- Family: gemma
VRAM Requirements
| Quantization | BPW | VRAM | Quality |
|---|---|---|---|
| Q3_K_M | 4 | 13.5 GB | 88% |
| Q3_K_L | 4.3 | 14.5 GB | 90% |
| IQ4_XS | 4.46 | 15.0 GB | 92% |
| Q4_K_S | 4.67 | 15.7 GB | 93% |
| Q4_K_M | 4.89 | 16.4 GB | 94% |
| Q5_K_S | 5.57 | 18.6 GB | 96% |
| Q5_K_M | 5.7 | 19.0 GB | 96% |
| Q6_K | 6.56 | 21.8 GB | 97% |
| Q8_0 | 8.5 | 28.1 GB | 100% |
| FP16 | 16 | 52.5 GB | 100% |
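The VRAM column follows a simple rule of thumb: weight size (parameters × bits per weight ÷ 8) plus a small fixed overhead for runtime buffers. A minimal sketch — the flat 0.5 GB overhead constant is an assumption inferred from the table, and real usage also grows with context length:

```python
# Rough VRAM estimate for a quantized model: quantized weight size plus a
# fixed overhead. The 0.5 GB overhead constant is an assumption that
# reproduces the table above, not an official figure.
def estimate_vram_gb(params_b: float, bpw: float, overhead_gb: float = 0.5) -> float:
    weights_gb = params_b * bpw / 8  # billions of params × bits/weight → GB
    return round(weights_gb + overhead_gb, 1)

for quant, bpw in [("Q4_K_M", 4.89), ("Q5_K_M", 5.7), ("Q6_K", 6.56)]:
    print(f"{quant}: {estimate_vram_gb(26, bpw)} GB")
# Q4_K_M: 16.4 GB
# Q5_K_M: 19.0 GB
# Q6_K: 21.8 GB
```

Note that only the total parameter count matters for VRAM: all 26B weights must be resident even though just 4B are active per token.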
Benchmark Scores
| Benchmark | Score |
|---|---|
| MMLU-Pro | 82.6 |
| BBH | 64.8 |
| Arena Elo | 1441.0 |
| GPQA Diamond | 82.3 |
| HLE | 8.7 |
| LiveCodeBench | 77.1 |
| AIME | 88.3 |
How to Run Gemma 4 26B A4B
Run Gemma 4 26B A4B locally with Ollama (needs 16.4 GB VRAM at Q4_K_M):
```shell
ollama run gemma4:26b
```

Compatible GPUs (30)
GPUs that can run Gemma 4 26B A4B at Q4_K_M quantization:
- Apple M3 Pro (18GB, 150 GB/s)
- AMD RX 7900 XT (20GB, 800 GB/s)
- NVIDIA RTX 4000 Ada 20GB (20GB, 432 GB/s)
- NVIDIA A10M (20GB, 500 GB/s)
- NVIDIA GeForce RTX 3080 Ti 20GB (20GB, 760 GB/s)
- NVIDIA RTX 4000 Ada Generation (20GB, 360 GB/s)
- NVIDIA RTX 4000 SFF Ada Generation (20GB, 280 GB/s)
- NVIDIA RTX A4500 (20GB, 640 GB/s)
- NVIDIA RTX 4090 (24GB, 1008 GB/s)
- NVIDIA RTX 3090 Ti (24GB, 1008 GB/s)
- NVIDIA RTX 3090 (24GB, 936 GB/s)
- AMD RX 7900 XTX (24GB, 960 GB/s)
- Apple M4 Pro (24GB, 273 GB/s)
- NVIDIA L4 24GB (24GB, 300 GB/s)
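The compatibility cut above is essentially a VRAM threshold check. A minimal sketch using a few entries from the list, with requirements taken from the quantization table (it ignores OS/runtime support and memory bandwidth, which affect speed rather than fit):

```python
# Filter GPUs by whether their VRAM covers a quant's requirement.
# VRAM figures (GB) are from the lists and tables above.
gpus = {
    "NVIDIA RTX 4090": 24,
    "NVIDIA RTX 3090": 24,
    "AMD RX 7900 XT": 20,
    "Apple M3 Pro": 18,
}
REQUIRED_GB = {"Q4_K_M": 16.4, "Q6_K": 21.8, "Q8_0": 28.1}

def compatible(quant: str) -> list[str]:
    """Return the GPUs whose VRAM meets the quant's requirement."""
    need = REQUIRED_GB[quant]
    return [name for name, vram in gpus.items() if vram >= need]

print(compatible("Q6_K"))  # only the 24 GB cards
print(compatible("Q8_0"))  # none of these single GPUs fit Q8_0
```

At Q4_K_M every card in the list clears the 16.4 GB bar, while Q6_K (21.8 GB) already rules out the 18–20 GB tier.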