INTEL · GENERATION 12.5

Intel Data Center GPU Max Subsystem

VRAM: 128 GB · FLAGSHIP
BANDWIDTH: 3210 GB/S
MODELS @ Q4: 302/331 (91%)
7B Q4 SPEED: ~147 TOK/S · BLAZING
▸ MODEL COVERAGE @ Q4 · 91% OF ALL MODELS
▸ ESTIMATED SPEED · BY MODEL SIZE @ Q4

Average speeds at Q4 quantization. Actual performance varies by model architecture and context length.

SIZE | EST. SPEED
3B | ~342 tok/s
7B | ~147 tok/s
14B | ~73 tok/s
32B | ~32 tok/s
70B | ~15 tok/s
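The estimates above scale roughly inverse-linearly with parameter count. A minimal sketch reproducing them — the constants are assumptions fitted to this page's numbers (about 0.61 GB per billion parameters at Q4, about 20% effective memory-bandwidth utilization), not published Intel figures:

```python
# Sketch of the inverse-linear speed scaling behind the table above.
# Assumed constants, fitted to this page: ~0.61 GB/Bparam at Q4,
# ~20% effective memory-bandwidth utilization.
BANDWIDTH_GBS = 3210  # from the spec table


def est_tok_s(params_b: float) -> float:
    """Estimated Q4 decode speed for a dense model of params_b billion."""
    vram_gb = params_b * 0.61              # assumed Q4 weight footprint
    return BANDWIDTH_GBS / vram_gb * 0.20  # assumed efficiency factor


for size in (3, 7, 14, 32, 70):
    print(f"{size}B: ~{est_tok_s(size):.0f} tok/s")
```

Under these assumptions the outputs land within a few percent of the table's ~342 / ~147 / ~73 / ~32 / ~15 tok/s figures.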
▸ SPECIFICATIONS
VRAM: 128 GB
BANDWIDTH: 3210 GB/s
FP16 COMPUTE: 52.4 TFLOPS
TDP: 2400 W
MEMORY: HBM2e
ARCHITECTURE: Generation 12.5
COMPUTE UNITS: 1024
PCIE: Gen 5 x16
266 — FAST MODELS (>30 TOK/S): Real-time chat speed
293 — USABLE (>10 TOK/S): Comfortable for all tasks
302 — TOTAL COMPATIBLE: Fit in VRAM at Q4
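The three counts above are simple filters over the model catalog: fit in VRAM at Q4, then bucket by estimated decode speed. A hypothetical sketch of that bucketing — the catalog entries here are illustrative, not the site's actual data:

```python
# Hypothetical reconstruction of the 302 / 293 / 266 bucketing:
# filter a catalog by Q4 VRAM fit, then by estimated decode speed.
VRAM_GB = 128.0

catalog = [  # (name, q4_vram_gb, est_tok_s) — illustrative entries only
    ("Qwen 2.5 0.5B",   0.4, 2054),
    ("Gemma 1 2B",      1.3,  514),
    ("Falcon 180B",   110.5,    6),
    ("Llama 3 405B",  248.7,    2),  # too big: does not fit at Q4
]

compatible = [m for m in catalog if m[1] <= VRAM_GB]  # fit in VRAM at Q4
usable     = [m for m in compatible if m[2] > 10]     # >10 tok/s tier
fast       = [m for m in compatible if m[2] > 30]     # >30 tok/s tier

print(len(compatible), len(usable), len(fast))  # → 3 2 2
```

With the full 331-model catalog, the same three filters would yield the 302 / 293 / 266 counts shown above.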
▸ RENT IT IN THE CLOUD

Buying an Intel Data Center GPU Max Subsystem costs $15k–$40k and isn’t practical for most teams. Spin one up by the hour instead:

Spin up in ~60s. Pay by the second. Cancel anytime.


▸ COMPATIBLE MODELS · 302
TIER | MODEL | SIZE | FAMILY · CTX · TAGS | TOK/S | VRAM @ Q4
S | nomic-embed-text-v1.5 100M | 0.1B | EMBEDDING · 8K CTX · CHAT | 10272 | 0%
S | GPT-2 124M | 0.124B | GPT2 · 1K CTX · CHAT | 8284 | 0%
S | SmolLM2 135M | 0.135B | SMOLLM · 2K CTX · CHAT | 7609 | 0%
S | bge-large-en-v1.5 335M | 0.335B | EMBEDDING · 1K CTX · CHAT | 3066 | 1%
S | mxbai-embed-large-v1 | 0.335B | EMBEDDING · 1K CTX · EMBEDDING | 3066 | 1%
S | Snowflake Arctic Embed L | 0.335B | EMBEDDING · 1K CTX · EMBEDDING | 3066 | 1%
S | GPT-2 Medium 345M | 0.345B | GPT2 · 1K CTX · CHAT | 2977 | 1%
S | SmolLM2 360M | 0.36B | SMOLLM · 8K CTX · CHAT | 2853 | 1%
S | Falcon-H1 0.5B | 0.5B | FALCON · 128K CTX · CHAT | 2054 | 1%
S | Qwen 1.5 0.5B | 0.5B | QWEN · 32K CTX · CHAT | 2054 | 1%
S | Qwen 2.5 0.5B | 0.5B | QWEN · 32K CTX · CHAT | 2054 | 1%
S | BGE-M3 | 0.568B | EMBEDDING · 8K CTX · EMBEDDING | 1808 | 1%
S | Qwen3 0.6B | 0.6B | QWEN · 32K CTX · CHAT · REASONING | 1712 | 1%
S | GPT-2 Large 774M | 0.774B | GPT2 · 1K CTX · CHAT | 1327 | 1%
S | Qwen 3.5 0.8B | 0.8B | QWEN · 256K CTX · CHAT · CODING · MULTILINGUAL | 1284 | 1%
S | Qwen3.5-0.8B | 0.9B | QWEN · 256K CTX · CHAT | 1141 | 1%
S | Falcon3-1B | 1B | FALCON · 32K CTX · CHAT | 1027 | 1%
S | InternLM2 1B | 1B | INTERNLM · 32K CTX · CHAT | 1027 | 1%
S | TinyLlama 1.1B | 1.1B | LLAMA · 2K CTX · CHAT | 934 | 1%
S | LFM2.5-1.2B-Thinking | 1.2B | LFM · 122K CTX · CHAT · REASONING · TOOL_USE | 856 | 1%
S | Llama-3.2-1B | 1.2B | LLAMA · 4K CTX · CHAT | 856 | 1%
S | DeepSeek Coder 1.3B | 1.3B | DEEPSEEK · 16K CTX · CODING | 790 | 1%
S | EXAONE-4.0-1.2B | 1.3B | EXAONE · 64K CTX · CHAT | 790 | 1%
S | OPT 1.3B | 1.3B | OPT · 2K CTX · CHAT | 790 | 1%
S | Phi-1 1.3B | 1.3B | PHI · 2K CTX · CODING | 790 | 1%
S | Phi-1.5 1.3B | 1.3B | PHI · 2K CTX · CHAT · CODING | 790 | 1%
S | granite-4.0-h-tiny 6.9B | 6.9B MoE | GRANITE · 128K CTX · CHAT | 685 | 4%
S | Falcon-H1 1.5B | 1.5B | FALCON · 128K CTX · CHAT · CODING | 685 | 1%
S | GPT-2 XL 1.5B | 1.5B | GPT2 · 1K CTX · CHAT | 685 | 1%
S | Qwen2.5-Coder-1.5B | 1.5B | QWEN · 32K CTX · CHAT · TOOL_USE · CODING | 685 | 1%
S | Qwen2 Math 1.5B | 1.5B | QWEN · 4K CTX · REASONING | 685 | 1%
S | Qwen 2.5 1.5B | 1.5B | QWEN · 32K CTX · CHAT · CODING | 685 | 1%
S | Yi Coder 1.5B | 1.5B | YI · 125K CTX · CODING | 685 | 1%
S | stablelm-2-1_6b | 1.6B | STABLELM · 4K CTX · CHAT | 642 | 1%
S | Qwen3 1.7B | 1.7B | QWEN · 32K CTX · CHAT · REASONING | 604 | 1%
S | SmolLM2 1.7B | 1.71B | SMOLLM · 8K CTX · CHAT | 601 | 1%
S | Qwen 1.5 1.8B | 1.8B | QWEN · 32K CTX · CHAT | 571 | 1%
S | Moondream2 1.9B | 1.9B | OTHER · 2K CTX · VISION · CHAT | 541 | 1%
S | Gemma 1 2B | 2B | GEMMA · 8K CTX · CHAT | 514 | 1%
S | Granite 3.0 2B | 2B | GRANITE · 128K CTX · CHAT · CODING | 514 | 1%
▸ NEXT STEP

Get personalized recommendations.

See ranked models with benchmark scores, run commands, and precise speed estimates for your Data Center GPU Max Subsystem.

▸ DEVICE UNDER TEST

Intel Data Center GPU Max Subsystem · 128 GB VRAM.

DATA CENTER GPU MAX SUBSYSTEM SPEC
BRAND: Intel
VRAM: 128 GB HBM2e
BANDWIDTH: 3210 GB/s
FP16 COMPUTE: 52.4 TFLOPS
FP32 COMPUTE: 52.4 TFLOPS
TDP: 2400 W
ARCHITECTURE: Generation 12.5
▸ AI CAPABILITY
302 / 331 models @ Q4

With 128 GB VRAM and 3210 GB/s bandwidth, this GPU handles models up to 180B parameters.

Speed ≈ bandwidth / model_size × efficiency. A 7B model at Q4 runs at ~147 tok/s.
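A worked check of that rule of thumb, using assumed constants fitted to this page (~0.614 GB per billion parameters at Q4, ~20% effective bandwidth); the 180B ceiling is simply the largest model whose Q4 weights fit in 128 GB:

```python
# Worked check: does a 180B model fit in 128 GB at Q4, and how fast?
# Constants are assumptions fitted to this page, not Intel specs.
BANDWIDTH_GBS, VRAM_GB = 3210, 128


def q4_vram_gb(params_b: float) -> float:
    return params_b * 0.614  # assumed GB per billion params at Q4


def est_tok_s(params_b: float) -> float:
    return BANDWIDTH_GBS / q4_vram_gb(params_b) * 0.20


assert q4_vram_gb(180) <= VRAM_GB  # ~110.5 GB — a 180B model fits
print(f"180B: {q4_vram_gb(180):.1f} GB, ~{est_tok_s(180):.0f} tok/s")
print(f"7B:   {q4_vram_gb(7):.1f} GB, ~{est_tok_s(7):.0f} tok/s")
```

The same arithmetic reproduces the table below: ~110.5 GB and ~6 tok/s for Falcon 180B, and ~147 tok/s for a 7B model.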

§ 01 · TOP MODELS FOR DATA CENTER GPU MAX SUBSYSTEM
302 FIT · SHOWING 20
MODEL | SIZE | VRAM @ Q4 | TOK/S | AVG
Falcon 180B | 180B | 110.5 GB | 6 | 51.3
bloom 176.2B | 176.2B | 108.2 GB | 6 | 15.0
dots.llm1.inst 142.8B | 142.8B | 87.8 GB | 7 | —
WizardLM 2 8x22B | 141B | 86.7 GB | 26 | 42.4
Mixtral-8x22B | 140.6B | 86.4 GB | 26 | 31.9
DBRX 132B | 132B | 81.2 GB | 29 | 46.3
Qwen3.5-122B-A10B | 125.1B | 77.0 GB | 8 | 45.5
Pixtral Large 124B | 124B | 76.3 GB | 8 | 39.3
Mistral-Large 123B | 123B | 75.7 GB | 8 | 33.5
Devstral 2 123B | 123B | 75.7 GB | 8 | 38.1
Qwen 3.5 122B A10B | 122B | 75.1 GB | 103 | 56.8
Nemotron 3 Super 120B | 120B | 73.8 GB | 86 | 57.3
Nemotron 3 Super 120B-A12B | 120B | 73.8 GB | 86 | 53.2
Mistral Small 4 119B | 119B | 73.2 GB | 158 | 50.2
GPT-OSS 120B | 117B | 72.0 GB | 201 | 54.1
Command A 111B | 111B | 68.3 GB | 9 | 27.6
GLM 4.5 Air | 110B | 67.7 GB | 86 | 51.0
Qwen 1.5 110B | 110B | 67.7 GB | 9 | 33.4
Llama 4 Scout 17B-16E | 109B | 67.1 GB | 60 | 33.9
Sarvam 105B | 105B | 64.7 GB | 10 | 48.0