# Qwen 3.5 122B A10B — 122B Parameter Mixture of Experts LLM
## Model Specifications

- Parameters: 122B (10B active)
- Architecture: Mixture of Experts
- Context Length: 256K tokens
- Capabilities: chat, coding, reasoning, multilingual, vision, math
- Release Date: 2026-02-24
- Provider: Alibaba
- Family: qwen
## VRAM Requirements
| Quantization | BPW | VRAM | Quality |
|---|---|---|---|
| IQ2_XXS | 2.38 | 36.8 GB | 65% |
| IQ2_M | 2.93 | 45.2 GB | 75% |
| Q2_K | 3.16 | 48.7 GB | 78% |
| IQ3_XXS | 3.25 | 50.1 GB | 82% |
| IQ3_XS | 3.5 | 53.9 GB | 84% |
| Q3_K_S | 3.64 | 56.0 GB | 85% |
| IQ3_M | 3.76 | 57.8 GB | 86% |
| Q3_K_M | 4 | 61.5 GB | 88% |
| Q3_K_L | 4.3 | 66.1 GB | 90% |
| IQ4_XS | 4.46 | 68.5 GB | 92% |
| Q4_K_S | 4.67 | 71.7 GB | 93% |
| Q4_K_M | 4.89 | 75.1 GB | 94% |
| Q5_K_S | 5.57 | 85.4 GB | 96% |
| Q5_K_M | 5.7 | 87.4 GB | 96% |
| Q6_K | 6.56 | 100.5 GB | 97% |
| Q8_0 | 8.5 | 130.1 GB | 100% |
| FP16 | 16 | 244.5 GB | 100% |
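The VRAM figures above follow roughly from parameter count times bits per weight: with a Mixture of Experts model, all 122B weights must sit in memory even though only about 10B are active per token. A minimal sketch of that arithmetic (weights-only; real usage adds KV cache and runtime overhead, which this deliberately ignores):

```python
# Weights-only VRAM estimate: parameters (billions) x bits-per-weight / 8
# gives gigabytes. This is an approximation, not an official sizing formula;
# KV cache and runtime overhead come on top.

def estimate_vram_gb(params_b: float, bpw: float) -> float:
    """Rough GB of VRAM needed to hold the quantized weights."""
    return params_b * bpw / 8

# Q4_K_M at 4.89 bits per weight on 122B parameters:
print(round(estimate_vram_gb(122, 4.89), 1))  # ~74.6 GB, close to the 75.1 GB in the table

# FP16 at 16 bits per weight:
print(round(estimate_vram_gb(122, 16), 1))    # 244.0 GB, close to the 244.5 GB in the table
```

The small gap between these estimates and the table's numbers is the per-format metadata and overhead the flat formula leaves out.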
## Benchmark Scores

| Benchmark | Score |
|---|---|
| MMLU-PRO | 86.7 |
| IFEval | 93.4 |
| MMMU | 83.9 |
| MMBench | 92.8 |
| GPQA Diamond | 86.6 |
| HLE | 25.3 |
| LiveCodeBench | 78.9 |
| SWE-bench | 72.0 |
## How to Run Qwen 3.5 122B A10B

Run Qwen 3.5 122B A10B locally with Ollama (needs 75.1 GB of VRAM at Q4_K_M):

```shell
ollama run qwen3.5:122b-a10b
```

## Compatible GPUs (30)
GPUs that can run Qwen 3.5 122B A10B at Q4_K_M quantization:
- NVIDIA H100 SXM5 80GB (80GB, 3350 GB/s)
- NVIDIA H100 PCIe 80GB (80GB, 2000 GB/s)
- NVIDIA A100 SXM 80GB (80GB, 2039 GB/s)
- NVIDIA A100 PCIe 80GB (80GB, 1935 GB/s)
- NVIDIA A100 SXM4 80 GB (80GB, 2040 GB/s)
- NVIDIA A100 PCIe 80 GB (80GB, 1940 GB/s)
- NVIDIA A100X (80GB, 2040 GB/s)
- NVIDIA H100 PCIe 80 GB (80GB, 2040 GB/s)
- NVIDIA H100 SXM5 80 GB (80GB, 3360 GB/s)
- NVIDIA H100 CNX (80GB, 2040 GB/s)
- NVIDIA A800 PCIe 80 GB (80GB, 1940 GB/s)
- NVIDIA A800 SXM4 80 GB (80GB, 2040 GB/s)
- NVIDIA H800 PCIe 80 GB (80GB, 2040 GB/s)
- NVIDIA H800 SXM5 (80GB, 3360 GB/s)
- NVIDIA RTX 6000D (84GB, 1570 GB/s)