
Alibaba QwQ-32B

QwQ-32B — Alibaba's reasoning model. Competitive with OpenAI o1 on math.

chat · reasoning · math · thinking
Parameters: 32.5B
Context length: 128K
Benchmarks: 18
Quantizations: 14
Architecture: Dense
Released: 2024-12-19
Layers: 64
KV heads: 8
Head dim: 128
Family: qwen
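
With 64 layers, 8 KV heads, and a head dimension of 128, the KV cache grows linearly with context and competes with the weights for VRAM. A back-of-the-envelope sketch in Python, assuming an fp16 KV cache with no quantization or offloading (an assumption, not something this page specifies):

# KV-cache size estimate for QwQ-32B, using the layer/head figures listed above.
# Assumes fp16 cache entries (2 bytes each); KV quantization would shrink this.
layers, kv_heads, head_dim = 64, 8, 128
bytes_per_elem = 2  # fp16
per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem  # K and V
print(per_token // 1024, "KiB per token")  # 256 KiB
for ctx in (8_192, 32_768, 131_072):
    print(f"{ctx:>7} tokens -> {per_token * ctx / 2**30:.1f} GiB")
# 8192 -> 2.0 GiB, 32768 -> 8.0 GiB, 131072 -> 32.0 GiB

At the full 128K context the cache alone would take roughly 32 GiB under these assumptions, which is why long-context runs on 24 GB cards typically use shorter contexts, KV-cache quantization, or offloading.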

Quantization Options

Quant     Bits (BPW)   VRAM      Quality
IQ3_XXS   3.25         13.7 GB   low
IQ3_XS    3.5          14.7 GB   low
Q3_K_S    3.64         15.3 GB   low
IQ3_M     3.76         15.8 GB   low
Q3_K_M    4            16.7 GB   low
Q3_K_L    4.3          18.0 GB   moderate
IQ4_XS    4.46         18.6 GB   moderate
Q4_K_S    4.67         19.5 GB   moderate
Q4_K_M    4.89         20.4 GB   good
Q5_K_S    5.57         23.1 GB   good
Q5_K_M    5.7          23.6 GB   good
Q6_K      6.56         27.1 GB   excellent
Q8_0      8.5          35.0 GB   lossless
FP16      16           65.5 GB   lossless
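
The VRAM figures above track a simple rule of thumb: weight bytes (parameters × bits-per-weight / 8) plus a small fixed overhead. A minimal sketch, assuming roughly 0.5 GB of overhead to approximate the table (the site's exact formula may differ):

# Rough VRAM estimate from bits-per-weight; compare against the table above.
PARAMS = 32.5e9  # parameter count from the spec block

def est_vram_gb(bpw, overhead_gb=0.5):
    return PARAMS * bpw / 8 / 1e9 + overhead_gb

for name, bpw in [("Q4_K_M", 4.89), ("Q6_K", 6.56), ("Q8_0", 8.5), ("FP16", 16)]:
    print(f"{name}: ~{est_vram_gb(bpw):.1f} GB")
# Q4_K_M: ~20.4 GB, Q6_K: ~27.2 GB, Q8_0: ~35.0 GB, FP16: ~65.5 GB

These figures cover weights only; the KV cache (see the sketch above) and activation buffers come on top.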




Benchmarks (18)

Arena Elo          1332
MATH-500           95.7
HumanEval          87.2
MBPP               77.0
LiveCodeBench      63.1
GPQA Diamond       59.3
IFEval             57.8
MATH               53.2
BBH                53.0
MMLU-PRO           51.1
BigCodeBench       44.6
AIME               29.0
AA Math            29.0
AA Intelligence    19.7
MUSR               19.5
GPQA               14.9
HLE                8.2
SciCode            3.8

Run this model

Easiest way to get started · Beginners
curl -fsSL https://ollama.com/install.sh | sh
$ ollama run qwq:32b-q4_K_M

Downloads and runs automatically. Add --verbose for speed stats.
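
Once the model has been pulled, it can also be called programmatically through Ollama's local HTTP API, which listens on port 11434 by default. A minimal sketch using Python's requests package, assuming the q4_K_M tag pulled above; the prompt is only an illustration:

import requests

# Ask the locally running Ollama server for a single non-streamed completion.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwq:32b-q4_K_M",   # the tag pulled with `ollama run` above
        "prompt": "How many primes are there below 100?",
        "stream": False,
    },
    timeout=600,  # reasoning models can think for a while
)
resp.raise_for_status()
print(resp.json()["response"])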

▸ SETUP GUIDE

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

Install with pip install fitmyllm, then run fitmyllm.
Auto-detect GPU · Live tok/s in chat · Speed benchmarks · 9 inference engines

GPUs that can run this model

At Q4_K_M quantization, sorted by minimum VRAM. A rough context-fit sketch follows the list.

GPU                        VRAM     Bandwidth    Vendor   Price
NVIDIA RTX 4090            24 GB    1008 GB/s    NVIDIA   $1599
NVIDIA RTX 3090 Ti         24 GB    1008 GB/s    NVIDIA   $999
NVIDIA RTX 3090            24 GB    936 GB/s     NVIDIA   $850
AMD RX 7900 XTX            24 GB    960 GB/s     AMD      $999
Apple M4 Pro (24GB)        24 GB    273 GB/s     APPLE    $1399
NVIDIA L4 24GB             24 GB    300 GB/s     NVIDIA   $2500
NVIDIA A10 24GB            24 GB    600 GB/s     NVIDIA   $3500
Apple M2 (24GB)            24 GB    100 GB/s     APPLE    $999
Apple M3 (24GB)            24 GB    100 GB/s     APPLE    $999
Apple M4 (24GB)            24 GB    120 GB/s     APPLE    $699
NVIDIA Tesla M40 24 GB     24 GB    288 GB/s     NVIDIA
NVIDIA Tesla P10           24 GB    694 GB/s     NVIDIA
NVIDIA Tesla P40           24 GB    347 GB/s     NVIDIA
NVIDIA Quadro RTX 6000     24 GB    672 GB/s     NVIDIA   $4000
NVIDIA GeForce RTX 3090    24 GB    936 GB/s     NVIDIA   $1499
NVIDIA A10 PCIe            24 GB    600 GB/s     NVIDIA
NVIDIA A10G                24 GB    600 GB/s     NVIDIA
NVIDIA RTX A5000           24 GB    768 GB/s     NVIDIA   $2500
NVIDIA GeForce RTX 4090    24 GB    1010 GB/s    NVIDIA   $1599
NVIDIA L40 CNX             24 GB    864 GB/s     NVIDIA   $5000
NVIDIA L40G                24 GB    864 GB/s     NVIDIA   $5000
NVIDIA A30 PCIe            24 GB    933 GB/s     NVIDIA
NVIDIA A30X                24 GB    1220 GB/s    NVIDIA
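
Every card above has 24 GB, which covers the ~20.4 GB Q4_K_M weights but not the full 128K context on top of them. A rough fit check, reusing the weight and KV-cache estimates from earlier (fp16 cache, decimal gigabytes; treat the result as an optimistic upper bound):

# How much context fits beside the Q4_K_M weights on a 24 GB card?
gpu_vram_gb = 24
weights_gb = 20.4            # Q4_K_M figure from the quantization table
kv_per_token = 256 * 1024    # bytes per token, fp16 K+V across 64 layers
free_bytes = (gpu_vram_gb - weights_gb) * 1e9
print(int(free_bytes // kv_per_token), "tokens, roughly")
# ~13,700 tokens -- well short of 128K without KV quantization or offload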


▸ SPEC SHEET

QwQ-32B · 32.5B · Dense

▸ SPECIFICATIONS
PARAMETERS        32.5B
ARCHITECTURE      Dense Transformer
CONTEXT LENGTH    128K tokens
CAPABILITIES      chat, reasoning, math
RELEASE DATE      2024-12-19
PROVIDER          Alibaba
FAMILY            qwen
▸ VRAM REQUIREMENTS
QUANT     BPW    VRAM      QUALITY
IQ3_XXS   3.25   13.7 GB   82%
IQ3_XS    3.5    14.7 GB   84%
Q3_K_S    3.64   15.3 GB   85%
IQ3_M     3.76   15.8 GB   86%
Q3_K_M    4      16.7 GB   88%
Q3_K_L    4.3    18.0 GB   90%
IQ4_XS    4.46   18.6 GB   92%
Q4_K_S    4.67   19.5 GB   93%
Q4_K_M    4.89   20.4 GB   94%
Q5_K_S    5.57   23.1 GB   96%
Q5_K_M    5.7    23.6 GB   96%
Q6_K      6.56   27.1 GB   97%
Q8_0      8.5    35.0 GB   100%
FP16      16     65.5 GB   100%
§ 01 BENCHMARK SCORES
HumanEval          87.2
MMLU-PRO           51.1
MATH               53.2
IFEval             57.8
BBH                53.0
GPQA               14.9
MUSR               19.5
MBPP               77.0
BigCodeBench       44.6
Arena Elo          1332.0
GPQA Diamond       59.3
LiveCodeBench      63.1
AIME               29.0
MATH-500           95.7
HLE                8.2
AA Intelligence    19.7
AA Math            29.0
SciCode            3.8
§ 02 RUN COMMAND

Run QwQ-32B locally with Ollama — needs 20.4 GB VRAM at Q4_K_M:

$ ollama run qwq:32b
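
Ollama also exposes an OpenAI-compatible endpoint, so QwQ-32B can be driven from existing OpenAI-client code. A minimal sketch with the openai Python package, assuming Ollama's default local port (the api_key value is ignored by Ollama but the field is required by the client):

from openai import OpenAI

# Point the OpenAI client at Ollama's local OpenAI-compatible API.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = client.chat.completions.create(
    model="qwq:32b",  # the tag started above
    messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
)
print(resp.choices[0].message.content)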