
WizardLM 2 8x22B

WizardLM 2 8x22B — Microsoft's MoE model for complex instructions.

Parameters: 141B (39B active)
Architecture: MoE (Mixture of Experts)
Context length: 64K tokens
Capabilities: chat, reasoning
Released: 2024-04-15
Layers: 56
KV Heads: 8
Head Dim: 128
Provider: WizardLM
Family: mistral
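The attention specs above (56 layers, 8 KV heads, head dim 128) pin down the KV-cache footprint, which sits on top of the weight VRAM. A back-of-the-envelope sketch, assuming an fp16 cache with no cache quantization:

```python
def kv_cache_bytes(tokens, layers=56, kv_heads=8, head_dim=128, dtype_bytes=2):
    """K and V vectors for every layer, per token, at fp16 (2 bytes each)."""
    per_token = 2 * layers * kv_heads * head_dim * dtype_bytes  # 2 = K + V
    return per_token * tokens

full_context = kv_cache_bytes(64 * 1024)   # the full 64K window
print(f"{full_context / 2**30:.1f} GiB")   # -> 14.0 GiB
```

Only the 8 KV heads are cached, not the full set of query heads, which is why a 141B model's cache at a full 64K context stays around 14 GiB rather than several times that.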

Quantization Options

Quant     Bits    VRAM       Quality
IQ2_XXS    2.38    42.4 GB   low (65%)
IQ2_M      2.93    52.1 GB   low (75%)
Q2_K       3.16    56.2 GB   low (78%)
IQ3_XXS    3.25    57.8 GB   low (82%)
IQ3_XS     3.50    62.2 GB   low (84%)
Q3_K_S     3.64    64.6 GB   low (85%)
IQ3_M      3.76    66.8 GB   low (86%)
Q3_K_M     4.00    71.0 GB   low (88%)
Q3_K_L     4.30    76.3 GB   moderate (90%)
IQ4_XS     4.46    79.1 GB   moderate (92%)
Q4_K_S     4.67    82.8 GB   moderate (93%)
Q4_K_M     4.89    86.7 GB   good (94%)
Q5_K_S     5.57    98.7 GB   good (96%)
Q5_K_M     5.70   101.0 GB   good (96%)
Q6_K       6.56   116.1 GB   excellent (97%)
Q8_0       8.50   150.3 GB   lossless (100%)
FP16      16.00   282.5 GB   lossless (100%)
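The VRAM column is essentially total parameter count times bits per weight. A rough sketch using the table's numbers (the remaining few hundred MB come from metadata and layers kept at higher precision). Note that all 141B MoE weights must be resident, even though only ~39B are active per token:

```python
def weight_vram_gb(bits_per_weight, params=141e9):
    """Approximate weight footprint in decimal GB. Every parameter is
    stored at the quantized width, MoE or not."""
    return params * bits_per_weight / 8 / 1e9

print(round(weight_vram_gb(4.89), 1))   # Q4_K_M -> 86.2  (table: 86.7 GB)
print(round(weight_vram_gb(16.0), 1))   # FP16   -> 282.0 (table: 282.5 GB)
```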


Benchmarks (7)

IFEval        84.0
BBH           52.7
MMLU-PRO      50.7
MATH          49.5
GPQA          24.9
GPQA Diamond  17.6
MUSR          17.2

Run this model

Easiest way to get started (beginners):

$ curl -fsSL https://ollama.com/install.sh | sh
$ ollama run wizardlm2:8x22b-q4_K_M

Downloads and runs automatically. Add --verbose for speed stats.
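Once the model is pulled, Ollama also serves a local REST API on port 11434. A minimal non-streaming sketch; the prompt text is illustrative, and actually calling `generate` requires a running Ollama server with the model above:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_payload(prompt, model="wizardlm2:8x22b-q4_K_M"):
    """Request body for Ollama's /api/generate, with streaming disabled."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt):
    data = json.dumps(build_payload(prompt)).encode()
    req = request.Request(OLLAMA_URL, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # needs `ollama serve` running locally
        return json.loads(resp.read())["response"]

# generate("Explain MoE routing in two sentences.") returns the model's reply
```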

Auto-setup with the fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting with zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

$ pip install fitmyllm
$ fitmyllm

Features: auto-detect GPU, live tok/s in chat, speed benchmarks, 9 inference engines.

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.

GPU                                    VRAM     Bandwidth    Price
NVIDIA B200                            90 GB    4100 GB/s    $30,000
NVIDIA H100 NVL 94 GB                  94 GB    3940 GB/s    $30,000
NVIDIA H100 SXM5 94 GB                 94 GB    3360 GB/s    $25,000
NVIDIA RTX Pro 6000                    96 GB    1792 GB/s    $8,565
NVIDIA H100 PCIe 96 GB                 96 GB    3360 GB/s    $25,000
NVIDIA H100 SXM5 96 GB                 96 GB    3360 GB/s    $25,000
Intel Data Center GPU Max 1350         96 GB    2460 GB/s    n/a
NVIDIA RTX PRO 6000 Blackwell Server   96 GB    1790 GB/s    $9,999
NVIDIA RTX PRO 6000 Blackwell          96 GB    1790 GB/s    $9,999
AMD Instinct MI300A                    120 GB   5300 GB/s    $12,000
Apple M4 Max (128GB)                   128 GB   546 GB/s     $3,999
AMD Instinct MI250X                    128 GB   3277 GB/s    $10,000
Apple M1 Ultra (128GB)                 128 GB   800 GB/s     $4,999
Apple M2 Ultra (128GB)                 128 GB   800 GB/s     $3,999
AMD Radeon Instinct MI250              128 GB   3280 GB/s    $12,000
AMD Radeon Instinct MI250X             128 GB   3280 GB/s    $15,000
AMD Radeon Instinct MI300              128 GB   6550 GB/s    $12,000
Intel Data Center GPU Max 1550         128 GB   3280 GB/s    n/a
Intel Data Center GPU Max Subsystem    128 GB   3210 GB/s    n/a
NVIDIA GB10                            128 GB   273 GB/s     n/a
NVIDIA Jetson T5000                    128 GB   273 GB/s     n/a
Apple M5 Max (128GB)                   128 GB   614 GB/s     n/a
NVIDIA H200 SXM 141GB                  140 GB   4800 GB/s    $30,000
NVIDIA H200 NVL                        141 GB   4890 GB/s    $35,000
NVIDIA H200 SXM 141 GB                 141 GB   4890 GB/s    $30,000
NVIDIA B300                            144 GB   4100 GB/s    $35,000
AMD Instinct MI300X                    192 GB   5300 GB/s    $15,000
Apple M2 Ultra (192GB)                 192 GB   800 GB/s     $5,499
Apple M3 Ultra (192GB)                 192 GB   800 GB/s     $6,999
Apple M4 Ultra (192GB)                 192 GB   1092 GB/s    $7,499
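Pairing the GPU list with the quantization table: a hypothetical helper that picks the highest-quality quant fitting a given card. The VRAM figures come from the quantization table above; the 8 GB headroom for KV cache and activations is an assumption (a full 64K context needs more):

```python
# Weight VRAM per quantization, decimal GB (from the quantization table above),
# sorted from smallest/lowest quality to largest/highest quality.
QUANTS = [
    ("IQ2_XXS", 42.4), ("IQ2_M", 52.1), ("Q2_K", 56.2), ("IQ3_XXS", 57.8),
    ("IQ3_XS", 62.2), ("Q3_K_S", 64.6), ("IQ3_M", 66.8), ("Q3_K_M", 71.0),
    ("Q3_K_L", 76.3), ("IQ4_XS", 79.1), ("Q4_K_S", 82.8), ("Q4_K_M", 86.7),
    ("Q5_K_S", 98.7), ("Q5_K_M", 101.0), ("Q6_K", 116.1), ("Q8_0", 150.3),
    ("FP16", 282.5),
]

def best_quant(gpu_vram_gb, headroom_gb=8.0):
    """Largest quant whose weights still leave `headroom_gb` free for
    KV cache and activations. Returns None if nothing fits."""
    budget = gpu_vram_gb - headroom_gb
    fitting = [name for name, need in QUANTS if need <= budget]
    return fitting[-1] if fitting else None

print(best_quant(96))    # RTX PRO 6000 class -> Q4_K_M
print(best_quant(192))   # MI300X / M3 Ultra  -> Q8_0
print(best_quant(48))    # a 48 GB card      -> None at 8 GB headroom
```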

