Mistral AI / Mixture of Experts

Mistral Large 3

Mistral Large 3 — 675B MoE flagship. Frontier-class with vision and tool use.

Capabilities: chat, coding, reasoning, vision, tool use
Parameters: 675B (39B active)
Context length: 256K
Benchmarks: 13
Quantizations: 17
HF downloads: 1K
Architecture: MoE
Released: 2025-12-02
Layers: 88
KV heads: 8
Head dim: 128
Family: mistral

Quantization Options

| Quant | Bits (BPW) | VRAM | Quality |
|---|---|---|---|
| IQ2_XXS | 2.38 | 201.3 GB | low |
| IQ2_M | 2.93 | 247.7 GB | low |
| Q2_K | 3.16 | 267.1 GB | low |
| IQ3_XXS | 3.25 | 274.7 GB | low |
| IQ3_XS | 3.5 | 295.8 GB | low |
| Q3_K_S | 3.64 | 307.6 GB | low |
| IQ3_M | 3.76 | 317.7 GB | low |
| Q3_K_M | 4 | 338.0 GB | low |
| Q3_K_L | 4.3 | 363.3 GB | moderate |
| IQ4_XS | 4.46 | 376.8 GB | moderate |
| Q4_K_S | 4.67 | 394.5 GB | moderate |
| Q4_K_M | 4.89 | 413.1 GB | good |
| Q5_K_S | 5.57 | 470.5 GB | good |
| Q5_K_M | 5.7 | 481.4 GB | good |
| Q6_K | 6.56 | 554.0 GB | excellent |
| Q8_0 | 8.5 | 717.7 GB | lossless |
| FP16 | 16 | 1350.5 GB | lossless |

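The VRAM column above scales almost linearly with bits per weight (BPW): weight memory is roughly total parameters times BPW divided by 8, with KV cache and runtime buffers on top. A minimal sketch of that rule of thumb, using the 675B figure from this page; the function name is illustrative, not part of any library:

```python
# Rough VRAM estimate for a quantized model: weight memory scales
# linearly with bits per weight (BPW). KV cache, activations, and
# runtime buffers come on top and depend on context length and engine.

def estimate_weight_gb(total_params_b: float, bpw: float) -> float:
    """Approximate weight memory in GB at `bpw` bits per weight.
    `total_params_b` is the parameter count in billions.
    (Illustrative helper, not part of any library.)"""
    return total_params_b * bpw / 8  # bits -> bytes

# Mistral Large 3: 675B total parameters.
print(round(estimate_weight_gb(675, 4.89), 1))  # Q4_K_M; the table lists 413.1 GB
print(round(estimate_weight_gb(675, 16), 1))    # FP16;   the table lists 1350.5 GB
```

The small gap between the estimate and the table values is per-quant metadata and format overhead.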



Benchmarks (13)

GPQA Diamond: 68.0
MMLU-PRO: 51.5
LiveCodeBench: 46.5
AIME: 38.0
AA Math: 38.0
IFBench: 36.2
SciCode: 36.2
AA Long Context: 34.7
τ²-Bench: 24.6
AA Intelligence: 22.8
AA Coding: 22.7
Terminal-Bench: 15.9
HLE: 4.1

Run this model

Easiest way to get started (for beginners):

curl -fsSL https://ollama.com/install.sh | sh
ollama run mistral:675b-q4_K_M

Tag may need adjustment — check ollama.com/library/mistral for available tags.
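Once `ollama run` has pulled the model, the local Ollama server (default `http://localhost:11434`) also exposes a REST API, so you can query the model from code. A minimal non-streaming sketch against the documented `/api/generate` endpoint; the model tag is the same assumption as above and may need adjusting:

```python
# Minimal client for Ollama's /api/generate endpoint (stdlib only).
# Assumes an Ollama server on localhost and that the tag below exists.
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> dict:
    """Payload for /api/generate; stream disabled so the reply
    arrives as a single JSON object."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "mistral:675b-q4_K_M",
             host: str = "http://localhost:11434") -> str:
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(f"{host}/api/generate", data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        # The non-streaming response carries the full reply in "response".
        return json.loads(resp.read())["response"]
```

Calling `generate("Explain MoE routing in one sentence.")` returns the model's reply as a plain string.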

▸ SETUP GUIDE

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

pip install fitmyllm, then run fitmyllm.
Auto-detect GPU · Live tok/s in chat · Speed benchmarks · 9 inference engines


▸ SPEC SHEET

Mistral Large 3: 675B MoE.

▸ SPECIFICATIONS
PARAMETERS: 675B (39B active)
ARCHITECTURE: Mixture of Experts
CONTEXT LENGTH: 256K tokens
CAPABILITIES: chat, coding, reasoning, vision, tool_use
RELEASE DATE: 2025-12-02
PROVIDER: Mistral AI
FAMILY: mistral
▸ VRAM REQUIREMENTS

| Quant | BPW | VRAM | Quality |
|---|---|---|---|
| IQ2_XXS | 2.38 | 201.3 GB | 65% |
| IQ2_M | 2.93 | 247.7 GB | 75% |
| Q2_K | 3.16 | 267.1 GB | 78% |
| IQ3_XXS | 3.25 | 274.7 GB | 82% |
| IQ3_XS | 3.5 | 295.8 GB | 84% |
| Q3_K_S | 3.64 | 307.6 GB | 85% |
| IQ3_M | 3.76 | 317.7 GB | 86% |
| Q3_K_M | 4 | 338.0 GB | 88% |
| Q3_K_L | 4.3 | 363.3 GB | 90% |
| IQ4_XS | 4.46 | 376.8 GB | 92% |
| Q4_K_S | 4.67 | 394.5 GB | 93% |
| Q4_K_M | 4.89 | 413.1 GB | 94% |
| Q5_K_S | 5.57 | 470.5 GB | 96% |
| Q5_K_M | 5.7 | 481.4 GB | 96% |
| Q6_K | 6.56 | 554.0 GB | 97% |
| Q8_0 | 8.5 | 717.7 GB | 100% |
| FP16 | 16 | 1350.5 GB | 100% |
§ 01 BENCHMARK SCORES
MMLU-PRO: 51.5
GPQA Diamond: 68.0
LiveCodeBench: 46.5
AIME: 38.0
HLE: 4.1
AA Intelligence: 22.8
AA Coding: 22.7
AA Math: 38.0
IFBench: 36.2
Terminal-Bench: 15.9
τ²-Bench: 24.6
SciCode: 36.2
AA Long Context: 34.7