▸ SPEC SHEET
Qwen 3.5 35B A3B: a 35B-parameter Mixture-of-Experts model with 3B active parameters per token.
▸ SPECIFICATIONS
- PARAMETERS
- 35B (3B active)
- ARCHITECTURE
- Mixture of Experts
- CONTEXT LENGTH
- 256K tokens
- CAPABILITIES
- chat, coding, reasoning, multilingual, vision, math
- RELEASE DATE
- 2026-02-01
- PROVIDER
- Alibaba
- FAMILY
- qwen
▸ VRAM REQUIREMENTS
| QUANT | BPW | VRAM | QUALITY |
|---|---|---|---|
| IQ3_XXS | 3.25 | 14.7 GB | 82% |
| IQ3_XS | 3.5 | 15.8 GB | 84% |
| Q3_K_S | 3.64 | 16.4 GB | 85% |
| IQ3_M | 3.76 | 16.9 GB | 86% |
| Q3_K_M | 4 | 18.0 GB | 88% |
| Q3_K_L | 4.3 | 19.3 GB | 90% |
| IQ4_XS | 4.46 | 20.0 GB | 92% |
| Q4_K_S | 4.67 | 20.9 GB | 93% |
| Q4_K_M | 4.89 | 21.9 GB | 94% |
| Q5_K_S | 5.57 | 24.9 GB | 96% |
| Q5_K_M | 5.7 | 25.4 GB | 96% |
| Q6_K | 6.56 | 29.2 GB | 97% |
| Q8_0 | 8.5 | 37.7 GB | 100% |
| FP16 | 16 | 70.5 GB | 100% |
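The VRAM figures above are consistent with a simple linear estimate: weight size in GB ≈ total parameters × bits per weight / 8, plus roughly 0.5 GB of fixed overhead. This sketch (an assumption inferred from the table, not the site's actual formula) reproduces the rows to within rounding:

```python
def estimate_vram_gb(params_billion: float, bpw: float, overhead_gb: float = 0.5) -> float:
    """Rough VRAM needed to hold the weights at a given quantization.

    params_billion: total parameter count in billions (35 for this model).
    bpw: bits per weight of the quantization (e.g. 4.89 for Q4_K_M).
    overhead_gb: assumed fixed overhead; a guess fitted to the table above.
    """
    return params_billion * bpw / 8 + overhead_gb

# Q4_K_M row: 35 * 4.89 / 8 + 0.5 = 21.9 GB
print(round(estimate_vram_gb(35, 4.89), 1))
```

Note the estimate covers weights only; a long context (up to 256K tokens here) adds KV-cache memory on top.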
§ 01 BENCHMARK SCORES
| BENCHMARK | SCORE |
|---|---|
| MMLU-PRO | 85.3 |
| MATH | 59.7 |
| IFEval | 91.9 |
| BBH | 58.3 |
| MMMU | 75.1 |
| GPQA | 15.2 |
| MUSR | 19.1 |
| BigCodeBench | 32.3 |
| MMBench | 91.5 |
| Arena Elo | 1485.0 |
| GPQA Diamond | 81.9 |
| HLE | 12.8 |
| AA Intelligence | 30.7 |
| AA Coding | 16.8 |
| aa_ifbench | 72.5 |
| aa_terminal_bench | 26.5 |
| aa_tau2 | 89.2 |
| aa_scicode | 37.7 |
| aa_lcr | 62.7 |
§ 02 RUN COMMAND
Run Qwen 3.5 35B A3B locally with Ollama (requires 21.9 GB of VRAM at Q4_K_M):

$ ollama run qwen3.5:35b-a3b

§ 03 COMPATIBLE GPUs
30 GPUs can run this model @ Q4_K_M. A selection of 24 GB cards:
| GPU | VRAM | BANDWIDTH |
|---|---|---|
| NVIDIA RTX 4090 | 24 GB | 1008 GB/s |
| NVIDIA RTX 3090 Ti | 24 GB | 1008 GB/s |
| NVIDIA RTX 3090 | 24 GB | 936 GB/s |
| AMD RX 7900 XTX | 24 GB | 960 GB/s |
| Apple M4 Pro (24GB) | 24 GB | 273 GB/s |
| NVIDIA L4 24GB | 24 GB | 300 GB/s |
| NVIDIA A10 24GB | 24 GB | 600 GB/s |
| Apple M2 (24GB) | 24 GB | 100 GB/s |
| Apple M3 (24GB) | 24 GB | 100 GB/s |
| Apple M4 (24GB) | 24 GB | 120 GB/s |
| NVIDIA Tesla M40 24 GB | 24 GB | 288 GB/s |
| NVIDIA Tesla P10 | 24 GB | 694 GB/s |
| NVIDIA Tesla P40 | 24 GB | 347 GB/s |
| NVIDIA Quadro RTX 6000 | 24 GB | 672 GB/s |
| NVIDIA Quadro RTX 6000 Passive | 24 GB | 624 GB/s |