Qwen3-Coder-Next (Alibaba)

Qwen3-Coder-Next — 80B MoE focused on coding and tool use.

Capabilities: chat, coding, reasoning, tool use (with thinking)

Parameters: 80B (3B active)
Context length: 256K
Benchmarks: 17
Quantizations: 16
HF downloads: 1.2M
Architecture: MoE (Mixture of Experts)
Released: 2026-01-30
Layers: 48
KV heads: 2
Head dim: 256
Family: qwen
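
The attention geometry above (48 layers, 2 KV heads, head dim 256) fixes the KV-cache cost per token. A sketch of the standard calculation, assuming fp16 cache entries (the model's actual cache dtype is not stated here):

```python
def kv_cache_gib(layers=48, kv_heads=2, head_dim=256, context_tokens=256 * 1024,
                 bytes_per_elem=2):
    """KV-cache size in GiB: 2 (K and V) * layers * kv_heads * head_dim * bytes per token."""
    per_token_bytes = 2 * layers * kv_heads * head_dim * bytes_per_elem
    return per_token_bytes * context_tokens / 2**30

print(kv_cache_gib())  # full 256K context -> 24.0 GiB on top of the weights
```

The 2 KV heads (grouped-query attention) keep this small: with the full 48 query heads it would be 24x larger.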

Quantization Options

Quant     Bits   VRAM       Quality
IQ2_M     2.93   29.8 GB    low
Q2_K      3.16   32.1 GB    low
IQ3_XXS   3.25   33.0 GB    low
IQ3_XS    3.5    35.5 GB    low
Q3_K_S    3.64   36.9 GB    low
IQ3_M     3.76   38.1 GB    low
Q3_K_M    4      40.5 GB    low
Q3_K_L    4.3    43.5 GB    moderate
IQ4_XS    4.46   45.1 GB    moderate
Q4_K_S    4.67   47.2 GB    moderate
Q4_K_M    4.89   49.4 GB    good
Q5_K_S    5.57   56.2 GB    good
Q5_K_M    5.7    57.5 GB    good
Q6_K      6.56   66.1 GB    excellent
Q8_0      8.5    85.5 GB    lossless
FP16      16     160.5 GB   lossless
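
The VRAM figures above track a simple formula: total parameters times bits-per-weight, divided by 8, plus a small fixed overhead. A sketch; the 0.5 GB overhead is a constant fitted to this table, not a published number:

```python
def estimate_vram_gb(params_b, bpw, overhead_gb=0.5):
    """Weight VRAM in GB: params (billions) * bits-per-weight / 8, plus overhead.

    Excludes KV cache and activation memory, which grow with context length.
    """
    return params_b * bpw / 8 + overhead_gb

print(round(estimate_vram_gb(80, 4.89), 1))  # Q4_K_M -> 49.4, matching the table
print(round(estimate_vram_gb(80, 16), 1))    # FP16   -> 160.5
```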




Benchmarks (17)

IFEval           85.9
τ²-Bench         79.5
GPQA Diamond     73.7
SWE-bench        70.6
BBH              60.5
MATH             60.1
MMLU-PRO         50.4
AA Long Context  40.0
IFBench          35.2
BigCodeBench     33.2
SciCode          32.3
AA Intelligence  28.3
AA Coding        22.9
GPQA             19.4
Terminal-Bench   18.2
MUSR             12.3
HLE               9.3

Run this model

Ollama (easiest way to get started for beginners):

curl -fsSL https://ollama.com/install.sh | sh
ollama run qwen3-coder-next:q4_k_m

Downloads and runs the model automatically. Add --verbose for speed stats.
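
Once the model is pulled, Ollama also serves a local HTTP API (default port 11434). A minimal sketch of a request body for its /api/generate endpoint; the prompt is just an example:

```python
import json

# Request body for POST http://localhost:11434/api/generate.
# "stream": False returns a single JSON response instead of streamed chunks.
payload = {
    "model": "qwen3-coder-next:q4_k_m",  # tag from the pull command above
    "prompt": "Write a Python function that reverses a string.",
    "stream": False,
}
body = json.dumps(payload)
print(body)

# To actually send it (requires a running Ollama server):
#   import urllib.request
#   req = urllib.request.Request("http://localhost:11434/api/generate",
#                                data=body.encode(),
#                                headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read().decode())
```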

Auto-setup with the fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

Install with pip install fitmyllm, then run fitmyllm. Features: auto-detect GPU, live tok/s in chat, speed benchmarks, 9 inference engines.

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.

Apple M1 Ultra (64GB): 64 GB VRAM, 800 GB/s, $2499
Apple M2 Ultra (64GB): 64 GB VRAM, 800 GB/s, $2999
Apple M4 Max (64GB): 64 GB VRAM, 546 GB/s, $2899
Apple M2 Max (64GB): 64 GB VRAM, 400 GB/s, $2299
Apple M3 Max (64GB): 64 GB VRAM, 300 GB/s, $2799
Apple M4 Pro (64GB): 64 GB VRAM, 273 GB/s, $2599
AMD Radeon Instinct MI200: 64 GB VRAM, 1640 GB/s, $10000
AMD Radeon Instinct MI210: 64 GB VRAM, 1640 GB/s, $8000
NVIDIA H100 SXM5 64 GB: 64 GB VRAM, 2020 GB/s, $25000
NVIDIA Jetson AGX Orin 64 GB: 64 GB VRAM, 205 GB/s
NVIDIA Jetson T4000: 64 GB VRAM, 273 GB/s
Apple M5 Pro (64GB): 64 GB VRAM, 200 GB/s
Apple M5 Max (64GB): 64 GB VRAM, 614 GB/s
NVIDIA RTX PRO 5000 72 GB Blackwell: 72 GB VRAM, 1340 GB/s, $6999
NVIDIA H100 SXM5 80GB: 80 GB VRAM, 3350 GB/s, $25000
NVIDIA H100 PCIe 80GB: 80 GB VRAM, 2000 GB/s, $25000
NVIDIA A100 SXM 80GB: 80 GB VRAM, 2039 GB/s, $10000
NVIDIA A100 PCIe 80GB: 80 GB VRAM, 1935 GB/s, $10000
NVIDIA A100 SXM4 80 GB: 80 GB VRAM, 2040 GB/s, $15000
NVIDIA A100 PCIe 80 GB: 80 GB VRAM, 1940 GB/s, $10000
NVIDIA A100X: 80 GB VRAM, 2040 GB/s
NVIDIA H100 PCIe 80 GB: 80 GB VRAM, 2040 GB/s, $25000
NVIDIA H100 SXM5 80 GB: 80 GB VRAM, 3360 GB/s, $25000
NVIDIA H100 CNX: 80 GB VRAM, 2040 GB/s, $25000
NVIDIA A800 PCIe 80 GB: 80 GB VRAM, 1940 GB/s
NVIDIA A800 SXM4 80 GB: 80 GB VRAM, 2040 GB/s
NVIDIA H800 PCIe 80 GB: 80 GB VRAM, 2040 GB/s
NVIDIA H800 SXM5: 80 GB VRAM, 3360 GB/s
NVIDIA RTX 6000D: 84 GB VRAM, 1570 GB/s, $7500
NVIDIA B200: 90 GB VRAM, 4100 GB/s, $30000
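
Token generation on these GPUs is typically memory-bandwidth-bound: each decoded token must stream the active weights from VRAM. A rough upper-bound sketch (a back-of-envelope formula, not this site's estimator), using this model's 3B active parameters:

```python
def decode_upper_bound_toks(bandwidth_gb_s, active_params_b=3.0, bpw=4.89):
    """Crude ceiling on decode speed: bandwidth / bytes of active weights per token.

    Ignores KV-cache reads, compute limits, and scheduling overhead,
    so real-world speeds are lower.
    """
    gb_per_token = active_params_b * bpw / 8  # GB of weights streamed per token
    return bandwidth_gb_s / gb_per_token

# Apple M1 Ultra (800 GB/s) at Q4_K_M:
print(round(decode_upper_bound_toks(800)))   # ~436 tokens/s ceiling
# NVIDIA H100 SXM5 (3350 GB/s):
print(round(decode_upper_bound_toks(3350)))  # ~1827 tokens/s ceiling
```

This is why the 3B-active MoE design matters: a dense 80B model at the same quantization would stream roughly 27x more bytes per token.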


Build Hardware for Qwen3-Coder-Next

Specifications

Parameters: 80B (3B active)
Architecture: Mixture of Experts
Context length: 256K tokens
Capabilities: chat, coding, reasoning, tool_use
Release date: 2026-01-30
Provider: Alibaba
Family: qwen
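
"Mixture of Experts" here means each token activates only a subset of expert MLPs (3B of the 80B parameters). A toy top-k router in pure Python to illustrate the mechanism; the expert count and k below are made-up numbers, not Qwen's actual configuration:

```python
import math

def top_k_route(logits, k=2):
    """Pick the k highest-scoring experts and softmax-normalize their weights."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    exps = [math.exp(logits[i]) for i in top]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top, exps)]

# 8 hypothetical experts; only 2 run for this token.
routes = top_k_route([0.1, 2.0, -1.0, 0.5, 1.5, 0.0, -0.5, 0.3], k=2)
print(routes)  # [(expert_index, weight), ...] with weights summing to 1
```

Each token's output is the weighted sum of its chosen experts' outputs, which is how total capacity stays high while per-token compute stays near a small dense model's.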
VRAM Requirements

Quant     BPW    VRAM       Quality
IQ2_M     2.93   29.8 GB    75%
Q2_K      3.16   32.1 GB    78%
IQ3_XXS   3.25   33.0 GB    82%
IQ3_XS    3.5    35.5 GB    84%
Q3_K_S    3.64   36.9 GB    85%
IQ3_M     3.76   38.1 GB    86%
Q3_K_M    4      40.5 GB    88%
Q3_K_L    4.3    43.5 GB    90%
IQ4_XS    4.46   45.1 GB    92%
Q4_K_S    4.67   47.2 GB    93%
Q4_K_M    4.89   49.4 GB    94%
Q5_K_S    5.57   56.2 GB    96%
Q5_K_M    5.7    57.5 GB    96%
Q6_K      6.56   66.1 GB    97%
Q8_0      8.5    85.5 GB    100%
FP16      16     160.5 GB   100%
Run Command

Run Qwen3-Coder-Next locally with Ollama (needs 49.4 GB of VRAM at Q4_K_M):

ollama run qwen3-coder-next