Qwen3-Coder 30B-A3B

Alibaba · Mixture of Experts

Qwen's most popular local coding model. MoE, 30.5B parameters (3.3B active). 256K context.

Capabilities: coding, tool_use, reasoning
Parameters: 30.5B (3.3B active)
Context length: 256K
Benchmarks: 20
Quantizations: 14
Architecture: MoE
Released: 2025-07-31
Layers: 48
KV Heads: 4
Head Dim: 128
Family: qwen
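The per-token KV-cache cost follows directly from the geometry above (48 layers, 4 KV heads, head dim 128). A rough sketch, assuming an FP16 KV cache; `kv_cache_bytes` is an illustrative helper, not part of any tool on this page:

```python
def kv_cache_bytes(seq_len, layers=48, kv_heads=4, head_dim=128, dtype_bytes=2):
    # K and V caches: 2 tensors per layer, each of shape [kv_heads, seq_len, head_dim]
    return 2 * layers * kv_heads * head_dim * seq_len * dtype_bytes

per_token = kv_cache_bytes(1)           # 98,304 bytes, ~96 KiB per token
full_ctx = kv_cache_bytes(256 * 1024)   # ~25.8 GB at the full 256K window
```

At the full 256K context the FP16 cache alone is ~25.8 GB on top of the weights, which is why long-context runs usually shorten the window or quantize the KV cache.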

Quantization Options

Quant     BPW    VRAM      Quality
IQ3_XXS   3.25   12.9 GB   low
IQ3_XS    3.5    13.8 GB   low
Q3_K_S    3.64   14.4 GB   low
IQ3_M     3.76   14.8 GB   low
Q3_K_M    4      15.7 GB   low
Q3_K_L    4.3    16.9 GB   moderate
IQ4_XS    4.46   17.5 GB   moderate
Q4_K_S    4.67   18.3 GB   moderate
Q4_K_M    4.89   19.1 GB   good
Q5_K_S    5.57   21.7 GB   good
Q5_K_M    5.7    22.2 GB   good
Q6_K      6.56   25.5 GB   excellent
Q8_0      8.5    32.9 GB   lossless
FP16      16     61.5 GB   lossless
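The VRAM column tracks a simple weights-only estimate: total parameters times bits per weight. A sketch using the figures from this page (the ~0.5 GB gap versus the table is runtime overhead; the helper name is my own):

```python
PARAMS = 30.5e9  # total parameters (MoE: all experts live in VRAM, not just the active ones)

def weight_gb(bpw):
    # weights-only footprint in GB: parameters * bits-per-weight / 8 bits-per-byte
    return PARAMS * bpw / 8 / 1e9

# Q4_K_M at 4.89 bpw -> ~18.6 GB of weights (table lists 19.1 GB with overhead)
# FP16 at 16 bpw     -> ~61.0 GB of weights (table lists 61.5 GB)
```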




Benchmarks (20)

MATH-500          89.3
IFEval            78.9
MATH              59.7
BBH               58.3
MMLU-PRO          52.9
GPQA Diamond      51.6
LiveCodeBench     40.3
τ²-Bench          34.5
IFBench           32.7
BigCodeBench      32.3
AIME              29.0
AA Math           29.0
AA Long Context   29.0
SciCode           27.8
AA Intelligence   20.0
AA Coding         19.4
MUSR              19.1
GPQA              15.2
Terminal-Bench    15.2
HLE                4.0

Run this model

Easiest way to get started (Beginners):

$ curl -fsSL https://ollama.com/install.sh | sh
$ ollama run qwen3-coder:30b-a3b-q4_K_M

Downloads and runs automatically. Add --verbose for speed stats.

▸ SETUP GUIDE

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

pip install fitmyllm, then run fitmyllm.

Features: auto-detect GPU, live tok/s in chat, speed benchmarks, 9 inference engines.

GPUs that can run this model

Shown at Q4_K_M quantization (19.1 GB), sorted by minimum VRAM.

GPU                       VRAM    Bandwidth   Price
AMD RX 7900 XT            20 GB   800 GB/s    $849
NVIDIA A10M               20 GB   500 GB/s
NVIDIA RTX A4500          20 GB   640 GB/s    $2000
NVIDIA RTX 4090           24 GB   1008 GB/s   $1599
NVIDIA RTX 3090 Ti        24 GB   1008 GB/s   $999
NVIDIA RTX 3090           24 GB   936 GB/s    $850
AMD RX 7900 XTX           24 GB   960 GB/s    $999
Apple M4 Pro (24GB)       24 GB   273 GB/s    $1399
NVIDIA L4 24GB            24 GB   300 GB/s    $2500
NVIDIA A10 24GB           24 GB   600 GB/s    $3500
Apple M2 (24GB)           24 GB   100 GB/s    $999
Apple M3 (24GB)           24 GB   100 GB/s    $999
Apple M4 (24GB)           24 GB   120 GB/s    $699
NVIDIA Tesla M40 24 GB    24 GB   288 GB/s
NVIDIA Tesla P10          24 GB   694 GB/s
NVIDIA Tesla P40          24 GB   347 GB/s
NVIDIA Quadro RTX 6000    24 GB   672 GB/s    $4000
NVIDIA GeForce RTX 3090   24 GB   936 GB/s    $1499
NVIDIA A10 PCIe           24 GB   600 GB/s
NVIDIA A10G               24 GB   600 GB/s
NVIDIA RTX A5000          24 GB   768 GB/s    $2500
NVIDIA GeForce RTX 4090   24 GB   1010 GB/s   $1599
NVIDIA L40 CNX            24 GB   864 GB/s    $5000

Find the best GPU for Qwen3-Coder 30B-A3B

Build Hardware for Qwen3-Coder 30B-A3B
▸ SPEC SHEET

Qwen3-Coder 30B-A3B: 30.5B MoE.

▸ SPECIFICATIONS
PARAMETERS: 30.5B (3.3B active)
ARCHITECTURE: Mixture of Experts
CONTEXT LENGTH: 256K tokens
CAPABILITIES: coding, tool_use, reasoning
RELEASE DATE: 2025-07-31
PROVIDER: Alibaba
FAMILY: qwen
▸ VRAM REQUIREMENTS
Quant     BPW    VRAM      Quality
IQ3_XXS   3.25   12.9 GB   82%
IQ3_XS    3.5    13.8 GB   84%
Q3_K_S    3.64   14.4 GB   85%
IQ3_M     3.76   14.8 GB   86%
Q3_K_M    4      15.7 GB   88%
Q3_K_L    4.3    16.9 GB   90%
IQ4_XS    4.46   17.5 GB   92%
Q4_K_S    4.67   18.3 GB   93%
Q4_K_M    4.89   19.1 GB   94%
Q5_K_S    5.57   21.7 GB   96%
Q5_K_M    5.7    22.2 GB   96%
Q6_K      6.56   25.5 GB   97%
Q8_0      8.5    32.9 GB   100%
FP16      16     61.5 GB   100%
§ 02 RUN COMMAND

Run Qwen3-Coder 30B-A3B locally with Ollama — needs 19.1 GB VRAM at Q4_K_M:

$ ollama run qwen3-coder:30b-a3b
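Once the Ollama server is up, the same model can also be driven programmatically through Ollama's REST API (POST /api/generate on port 11434). A minimal sketch, assuming a default local install; the payload fields follow Ollama's documented API, but `build_request` is my own helper:

```python
import json
from urllib import request

def build_request(prompt, model="qwen3-coder:30b-a3b"):
    # non-streaming request body for Ollama's POST /api/generate endpoint
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return request.Request(
        "http://localhost:11434/api/generate",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
    )

# With the server running:
# resp = request.urlopen(build_request("Write a Python fizzbuzz."))
# print(json.loads(resp.read())["response"])
```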