
GLM-5 744B

Zhipu AI's flagship Mixture-of-Experts model: 744B total parameters, 40B active per token, 198K context length. Trained on Huawei Ascend hardware.

Capabilities: chat, coding, reasoning, multilingual
Parameters: 744B (40B active)
Context length: 198K
Benchmarks: 3
Quantizations: 17
Architecture: MoE
Released: 2026-02-01
Layers: 78
KV heads: 64
Head dim: 64
Family: glm
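The attention specs above are enough for a back-of-envelope KV-cache estimate. The sketch below assumes a plain fp16 cache (2 bytes per value) and a standard grouped-query attention layout; real serving engines may use cache quantization or compression, so treat the numbers as an upper bound:

```python
# Back-of-envelope KV-cache size from the spec list above.
# Assumes an fp16 cache (2 bytes/value); engines that quantize or
# compress the cache will need less.

LAYERS, KV_HEADS, HEAD_DIM = 78, 64, 64
BYTES_PER_VALUE = 2          # fp16
CONTEXT = 198_000            # advertised context length

# K and V each store LAYERS * KV_HEADS * HEAD_DIM values per token.
per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES_PER_VALUE

print(f"KV cache per token:     {per_token / 1024:.0f} KiB")
print(f"KV cache at full context: {per_token * CONTEXT / 1e9:.0f} GB")
```

At roughly 1.2 MiB per token, a full 198K-token context adds a KV cache in the hundreds of gigabytes on top of the weights, which is why long-context serving of a model this size is multi-GPU territory.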

Quantization Options

Quant     Bits   VRAM        Quality
IQ2_XXS   2.38    221.8 GB   low (65%)
IQ2_M     2.93    273.0 GB   low (75%)
Q2_K      3.16    294.4 GB   low (78%)
IQ3_XXS   3.25    302.7 GB   low (82%)
IQ3_XS    3.5     326.0 GB   low (84%)
Q3_K_S    3.64    339.0 GB   low (85%)
IQ3_M     3.76    350.2 GB   low (86%)
Q3_K_M    4.0     372.5 GB   low (88%)
Q3_K_L    4.3     400.4 GB   moderate (90%)
IQ4_XS    4.46    415.3 GB   moderate (92%)
Q4_K_S    4.67    434.8 GB   moderate (93%)
Q4_K_M    4.89    455.3 GB   good (94%)
Q5_K_S    5.57    518.5 GB   good (96%)
Q5_K_M    5.7     530.6 GB   good (96%)
Q6_K      6.56    610.6 GB   excellent (97%)
Q8_0      8.5     791.0 GB   lossless (100%)
FP16      16     1488.5 GB   lossless (100%)
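The VRAM column tracks a simple weights-only estimate: total parameters times bits-per-weight, divided by 8. Note that with MoE all 744B parameters must be resident even though only 40B are active per token. A rough sketch (the table's slightly higher figures also include file metadata and tensors kept at higher precision, and none of this counts the KV cache):

```python
# Rough weights-only VRAM estimate: total params x bits-per-weight / 8.
# MoE caveat: the full 744B parameters must fit in memory even though
# only 40B are active per forward pass.

PARAMS = 744e9  # total parameters

def weights_gb(bpw: float) -> float:
    """Approximate weight footprint in GB for a given bits-per-weight."""
    return PARAMS * bpw / 8 / 1e9

for quant, bpw in [("Q4_K_M", 4.89), ("Q8_0", 8.5), ("FP16", 16.0)]:
    print(f"{quant:7s} ~{weights_gb(bpw):7.1f} GB")
```

For Q4_K_M this gives about 454.8 GB, within a gigabyte of the table's 455.3 GB.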




Benchmarks (3)

AIME: 92.7
GPQA Diamond: 86.0
SWE-bench: 77.8

Run this model

Easiest way to get started (beginners):

$ curl -fsSL https://ollama.com/install.sh | sh
$ ollama run glm-5:q4_k_m

Downloads and runs automatically. Add --verbose for speed stats.
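Once the model is pulled, the local Ollama server also exposes a REST API (default port 11434) that you can script against. A minimal stdlib-only sketch, assuming the server is running and using the model tag from the command above:

```python
# Query a locally running Ollama server over its REST API.
# Assumes `ollama run glm-5:q4_k_m` has pulled the model and the
# server is listening on the default port 11434.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "glm-5:q4_k_m") -> dict:
    # stream=False returns a single JSON object instead of chunked lines.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "glm-5:q4_k_m") -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (with the server running):
#   print(generate("Explain mixture-of-experts in one sentence."))
```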

▸ SETUP GUIDE

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

Install with pip install fitmyllm, then run fitmyllm.
Features: auto-detect GPU · live tok/s in chat · speed benchmarks · 9 inference engines.
