▸ SPEC SHEET
GLM 4.7 Flash — 31B MoE.
▸ SPECIFICATIONS
- PARAMETERS: 31B (3B active)
- ARCHITECTURE: Mixture of Experts
- CONTEXT LENGTH: 193K tokens
- CAPABILITIES: chat, coding, reasoning, tool_use
- RELEASE DATE: 2026-01-29
- PROVIDER: Zhipu AI
- FAMILY: glm
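
The MoE layout matters for speed: all 31B parameters must sit in VRAM, but only the ~3B active parameters are read per generated token. A rough, bandwidth-bound ceiling on decode speed is therefore memory bandwidth divided by the bytes of active weights per token. The sketch below is a back-of-envelope estimate under that assumption; it ignores KV-cache reads, activations, and kernel efficiency, so real throughput is lower.

```python
# Back-of-envelope decode-speed ceiling for a bandwidth-bound MoE model.
# Only the active parameters are read per token, so the ceiling is
# memory bandwidth / bytes of active weights per token. Real speeds are
# lower (KV cache, activations, kernel overhead are all ignored here).

def decode_ceiling_tok_s(active_params: float, bpw: float, bandwidth_gb_s: float) -> float:
    bytes_per_token = active_params * bpw / 8  # active weights read per token
    return bandwidth_gb_s * 1e9 / bytes_per_token

# GLM 4.7 Flash at Q4_K_M (4.89 bpw) on an RTX 4090 (1008 GB/s):
print(round(decode_ceiling_tok_s(3e9, 4.89, 1008)))  # ~550 tok/s ceiling
```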
▸ VRAM REQUIREMENTS
| QUANT | BPW | VRAM | QUALITY |
|---|---|---|---|
| IQ3_XXS | 3.25 | 13.1 GB | 82% |
| IQ3_XS | 3.5 | 14.1 GB | 84% |
| Q3_K_S | 3.64 | 14.6 GB | 85% |
| IQ3_M | 3.76 | 15.1 GB | 86% |
| Q3_K_M | 4 | 16.0 GB | 88% |
| Q3_K_L | 4.3 | 17.2 GB | 90% |
| IQ4_XS | 4.46 | 17.8 GB | 92% |
| Q4_K_S | 4.67 | 18.6 GB | 93% |
| Q4_K_M | 4.89 | 19.4 GB | 94% |
| Q5_K_S | 5.57 | 22.1 GB | 96% |
| Q5_K_M | 5.7 | 22.6 GB | 96% |
| Q6_K | 6.56 | 25.9 GB | 97% |
| Q8_0 | 8.5 | 33.4 GB | 100% |
| FP16 | 16 | 62.5 GB | 100% |
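
The VRAM column tracks a simple formula: total parameters times bits-per-weight, divided by 8, plus a small fixed overhead. The ~0.5 GB overhead below is an assumption fitted to this table, not a published figure, and real usage also grows with context length (KV cache).

```python
# Rough reconstruction of the VRAM column above: weights at the given
# bits-per-weight plus ~0.5 GB fixed overhead (fitted to the table, an
# assumption). KV cache at long contexts adds more on top.

PARAMS = 31e9  # total parameters; all experts must be resident in VRAM

def est_vram_gb(bpw: float, overhead_gb: float = 0.5) -> float:
    return PARAMS * bpw / 8 / 1e9 + overhead_gb

for name, bpw in [("Q4_K_M", 4.89), ("Q8_0", 8.5), ("FP16", 16)]:
    print(f"{name}: {est_vram_gb(bpw):.1f} GB")
# Q4_K_M: 19.4 GB / Q8_0: 33.4 GB / FP16: 62.5 GB — matching the table.
```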
§ 01 BENCHMARK SCORES
| BENCHMARK | SCORE |
|---|---|
| GPQA | 75.2 |
| GPQA Diamond | 58.1 |
| LiveCodeBench | 64.0 |
| AIME | 91.6 |
| HLE | 14.4 |
| AA Intelligence | 30.1 |
| AA Coding | 25.9 |
| AA IFBench | 46.3 |
| AA Terminal-Bench | 3.8 |
| AA τ²-Bench | 91.8 |
| AA SciCode | 25.5 |
| AA LCR | 14.7 |
§ 02 RUN COMMAND
Run GLM 4.7 Flash locally with Ollama — needs 19.4 GB VRAM at Q4_K_M:
$ ollama run glm-4.7-flash
§ 03 COMPATIBLE GPUs
30 GPUs can run GLM 4.7 Flash @ Q4_K_M. A selection:
| GPU | VRAM | BANDWIDTH |
|---|---|---|
| AMD Radeon RX 7900 XT | 20 GB | 800 GB/s |
| NVIDIA RTX 4000 Ada Generation | 20 GB | 360 GB/s |
| NVIDIA A10M | 20 GB | 500 GB/s |
| NVIDIA GeForce RTX 3080 Ti 20 GB | 20 GB | 760 GB/s |
| NVIDIA RTX 4000 SFF Ada Generation | 20 GB | 280 GB/s |
| NVIDIA RTX A4500 | 20 GB | 640 GB/s |
| NVIDIA RTX 4090 | 24 GB | 1008 GB/s |
| NVIDIA RTX 3090 Ti | 24 GB | 1008 GB/s |
| NVIDIA RTX 3090 | 24 GB | 936 GB/s |
| AMD RX 7900 XTX | 24 GB | 960 GB/s |
| Apple M4 Pro (24GB) | 24 GB | 273 GB/s |
| NVIDIA L4 24GB | 24 GB | 300 GB/s |
| NVIDIA A10 24GB | 24 GB | 600 GB/s |
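
The VRAM table and the GPU list can be combined: given a card's VRAM, pick the highest-quality quant that still fits. The helper below is a hypothetical illustration using a subset of the sheet's own quality estimates; the ~1 GB headroom for KV cache and runtime buffers is an assumption, not an Ollama requirement.

```python
# Hypothetical helper: pick the highest-quality quant (per this sheet's
# quality estimates) that fits a GPU's VRAM, leaving ~1 GB headroom for
# KV cache and runtime buffers (headroom figure is an assumption).

QUANTS = [  # (name, vram_gb, quality_pct) — a subset of the VRAM table
    ("IQ3_XXS", 13.1, 82), ("Q3_K_M", 16.0, 88), ("Q4_K_M", 19.4, 94),
    ("Q5_K_M", 22.6, 96), ("Q6_K", 25.9, 97), ("Q8_0", 33.4, 100),
]

def best_quant(gpu_vram_gb: float, headroom_gb: float = 1.0):
    fits = [q for q in QUANTS if q[1] + headroom_gb <= gpu_vram_gb]
    return max(fits, key=lambda q: q[2]) if fits else None

print(best_quant(24))  # 24 GB card (e.g. RTX 4090) → Q5_K_M
print(best_quant(20))  # 20 GB card (e.g. RX 7900 XT) → Q3_K_M
```

Note that a 20 GB card falls just short of Q4_K_M once headroom is reserved; with a short context and minimal buffers it may still fit, which is why the sheet lists 20 GB cards as compatible at Q4_K_M.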