NVIDIA · TURING

NVIDIA CMP 50HX

VRAM · 10 GB · ENTRY-LEVEL
BANDWIDTH · 560 GB/s
MODELS @ Q4 · 177/331 · 53%
7B Q4 SPEED · ~64 tok/s · BLAZING
▸ MODEL COVERAGE @ Q4 · 53% OF ALL MODELS
▸ ESTIMATED SPEED BY MODEL SIZE @ Q4

Average speeds at Q4 quantization. Actual performance varies by model architecture and context length.

3B · ~149 tok/s
7B · ~64 tok/s
14B · ~32 tok/s
32B · needs 18.0 GB (does not fit in 10 GB)
70B · needs 39.4 GB (does not fit in 10 GB)
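The two largest rows are ruled out by memory rather than speed: the quoted footprints (18.0 GB for 32B, 39.4 GB for 70B) work out to roughly 4.5 bits per weight at Q4. Below is a minimal sketch of that check; the 4.5 bits/weight figure and the weights-only simplification are assumptions inferred from those numbers, not something FitMyLLM publishes (the per-model VRAM figures further down the page run slightly higher, presumably due to runtime overhead).

```python
# Rough Q4 footprint check for a 10 GB card. The 4.5 bits/weight figure is
# back-solved from the 32B -> 18.0 GB and 70B -> 39.4 GB numbers above and is
# an assumption, as is ignoring KV-cache and runtime overhead.
BITS_PER_WEIGHT_Q4 = 4.5
VRAM_GB = 10.0

def q4_weights_gb(params_billions: float) -> float:
    """Approximate VRAM (GB) taken by Q4 weights alone."""
    return params_billions * BITS_PER_WEIGHT_Q4 / 8

for size_b in (3, 7, 14, 32, 70):
    need = q4_weights_gb(size_b)
    verdict = "fits" if need <= VRAM_GB else "does not fit"
    print(f"{size_b}B -> {need:.1f} GB ({verdict} in {VRAM_GB:.0f} GB)")
```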
▸ SPECIFICATIONS
VRAM · 10 GB
BANDWIDTH · 560 GB/s
FP16 COMPUTE · 22.1 TFLOPS
TDP · 250 W
MEMORY · GDDR6
ARCHITECTURE · Turing
CUDA CORES · 3,584
TENSOR CORES · 448
PCIE · Gen 1 x4
177 FAST MODELS · >30 tok/s · real-time chat speed
177 USABLE · >10 tok/s · comfortable for all tasks
177 TOTAL COMPATIBLE · fit in VRAM at Q4
▸ RENT IT IN THE CLOUD

Buying a CMP 50HX outright isn't practical for most teams. Spin one up by the hour instead:

Spin up in ~60s. Pay by the second. Cancel anytime.

Some links are affiliate links — we may earn a small commission at no extra cost to you. This helps keep FitMyLLM free and independent.

▸ COMPATIBLE MODELS · 177
S · nomic-embed-text-v1.5 100M (0.1B) · EMBEDDING · 8K CTX · CHAT · 4480 tok/s · 5% VRAM
S · GPT-2 124M (0.124B) · GPT2 · 1K CTX · CHAT · 3613 tok/s · 6% VRAM
S · SmolLM2 135M (0.135B) · SMOLLM · 2K CTX · CHAT · 3319 tok/s · 6% VRAM
S · bge-large-en-v1.5 335M (0.335B) · EMBEDDING · 1K CTX · CHAT · 1337 tok/s · 7% VRAM
S · mxbai-embed-large-v1 (0.335B) · EMBEDDING · 1K CTX · EMBEDDING · 1337 tok/s · 7% VRAM
S · Snowflake Arctic Embed L (0.335B) · EMBEDDING · 1K CTX · EMBEDDING · 1337 tok/s · 7% VRAM
S · GPT-2 Medium 345M (0.345B) · GPT2 · 1K CTX · CHAT · 1299 tok/s · 7% VRAM
S · SmolLM2 360M (0.36B) · SMOLLM · 8K CTX · CHAT · 1244 tok/s · 7% VRAM
S · Falcon-H1 0.5B (0.5B) · FALCON · 128K CTX · CHAT · 896 tok/s · 8% VRAM
S · Qwen 1.5 0.5B (0.5B) · QWEN · 32K CTX · CHAT · 896 tok/s · 8% VRAM
S · Qwen 2.5 0.5B (0.5B) · QWEN · 32K CTX · CHAT · 896 tok/s · 8% VRAM
S · BGE-M3 (0.568B) · EMBEDDING · 8K CTX · EMBEDDING · 789 tok/s · 8% VRAM
S · Qwen3 0.6B (0.6B) · QWEN · 32K CTX · CHAT · REASONING · 747 tok/s · 9% VRAM
S · GPT-2 Large 774M (0.774B) · GPT2 · 1K CTX · CHAT · 579 tok/s · 10% VRAM
S · Qwen 3.5 0.8B (0.8B) · QWEN · 256K CTX · CHAT · CODING · MULTILINGUAL · 560 tok/s · 10% VRAM
S · Qwen3.5-0.8B (0.9B) · QWEN · 256K CTX · CHAT · 498 tok/s · 10% VRAM
S · Falcon3-1B (1B) · FALCON · 32K CTX · CHAT · 448 tok/s · 11% VRAM
S · InternLM2 1B (1B) · INTERNLM · 32K CTX · CHAT · 448 tok/s · 11% VRAM
S · TinyLlama 1.1B (1.1B) · LLAMA · 2K CTX · CHAT · 407 tok/s · 12% VRAM
S · LFM2.5-1.2B-Thinking (1.2B) · LFM · 122K CTX · CHAT · REASONING · TOOL_USE · 373 tok/s · 12% VRAM
S · Llama-3.2-1B (1.2B) · LLAMA · 4K CTX · CHAT · 373 tok/s · 12% VRAM
S · DeepSeek Coder 1.3B (1.3B) · DEEPSEEK · 16K CTX · CODING · 345 tok/s · 13% VRAM
S · EXAONE-4.0-1.2B (1.3B) · EXAONE · 64K CTX · CHAT · 345 tok/s · 13% VRAM
S · OPT 1.3B (1.3B) · OPT · 2K CTX · CHAT · 345 tok/s · 13% VRAM
S · Phi-1 1.3B (1.3B) · PHI · 2K CTX · CODING · 345 tok/s · 13% VRAM
S · Phi-1.5 1.3B (1.3B) · PHI · 2K CTX · CHAT · CODING · 345 tok/s · 13% VRAM
S · granite-4.0-h-tiny 6.9B (6.9B MoE) · GRANITE · 128K CTX · CHAT · 299 tok/s · 47% VRAM
S · Falcon-H1 1.5B (1.5B) · FALCON · 128K CTX · CHAT · CODING · 299 tok/s · 14% VRAM
S · GPT-2 XL 1.5B (1.5B) · GPT2 · 1K CTX · CHAT · 299 tok/s · 14% VRAM
S · Qwen2.5-Coder-1.5B (1.5B) · QWEN · 32K CTX · CHAT · TOOL_USE · CODING · 299 tok/s · 14% VRAM
S · Qwen2 Math 1.5B (1.5B) · QWEN · 4K CTX · REASONING · 299 tok/s · 14% VRAM
S · Qwen 2.5 1.5B (1.5B) · QWEN · 32K CTX · CHAT · CODING · 299 tok/s · 14% VRAM
S · Yi Coder 1.5B (1.5B) · YI · 125K CTX · CODING · 299 tok/s · 14% VRAM
S · stablelm-2-1_6b (1.6B) · STABLELM · 4K CTX · CHAT · 280 tok/s · 15% VRAM
S · Qwen3 1.7B (1.7B) · QWEN · 32K CTX · CHAT · REASONING · 264 tok/s · 15% VRAM
S · SmolLM2 1.7B (1.71B) · SMOLLM · 8K CTX · CHAT · 262 tok/s · 15% VRAM
S · Qwen 1.5 1.8B (1.8B) · QWEN · 32K CTX · CHAT · 249 tok/s · 16% VRAM
S · Moondream2 1.9B (1.9B) · OTHER · 2K CTX · VISION · CHAT · 236 tok/s · 16% VRAM
S · Gemma 1 2B (2B) · GEMMA · 8K CTX · CHAT · 224 tok/s · 17% VRAM
S · Granite 3.0 2B (2B) · GRANITE · 128K CTX · CHAT · CODING · 224 tok/s · 17% VRAM
▸ NEXT STEP

Get personalized recommendations.

See ranked models with benchmark scores, run commands, and precise speed estimates for your CMP 50HX.

▸ DEVICE UNDER TEST

NVIDIA CMP 50HX · 10 GB VRAM.

CMP 50HX SPEC
BRAND · NVIDIA
VRAM · 10 GB GDDR6
BANDWIDTH · 560 GB/s
FP16 COMPUTE · 22.1 TFLOPS
FP32 COMPUTE · 11.1 TFLOPS
CUDA CORES · 3,584
TENSOR CORES · 448
TDP · 250 W
ARCHITECTURE · Turing
▸ AI CAPABILITY
177 / 331 models @ Q4

With 10 GB VRAM and 560 GB/s bandwidth, this GPU handles models up to 13.1B parameters.

Speed ≈ bandwidth / model_size × efficiency. A 7B model at Q4 runs at ~64 tok/s.
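A minimal sketch of that estimate follows. The 4.5 bits/weight footprint and the ~0.45 efficiency factor are back-solved from the page's own figures (7B at ~64 tok/s over 560 GB/s), so treat both as assumptions rather than a published formula.

```python
# Token rate ~= memory bandwidth / bytes read per token * efficiency.
# Each generated token streams the full set of quantized weights once.
BANDWIDTH_GBS = 560.0         # CMP 50HX memory bandwidth
EFFICIENCY = 0.45             # assumed fraction of peak bandwidth achieved
BYTES_PER_PARAM_Q4 = 4.5 / 8  # assumed Q4 footprint per weight

def est_tokens_per_s(params_billions: float) -> float:
    weight_gb = params_billions * BYTES_PER_PARAM_Q4
    return BANDWIDTH_GBS / weight_gb * EFFICIENCY

for size_b in (3, 7, 14):
    print(f"{size_b}B: ~{est_tokens_per_s(size_b):.0f} tok/s")
# prints roughly 149, 64, and 32 tok/s, matching the estimates above
```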

§ 01 · TOP MODELS FOR CMP 50HX
177 FIT · SHOWING 20
MODEL · SIZE · VRAM @ Q4 · TOK/S · AVG
LLaVA-1.5 13B · 13.1B · 8.5 GB · 34 · 52.1
Baichuan2 13B · 13B · 8.4 GB · 34 · 23.6
Llama 2 13B · 13B · 8.4 GB · 34 · 19.7
CodeLlama 13B · 13B · 8.4 GB · 34 · 19.7
Vicuna 13B · 13B · 8.4 GB · 34 · 11.8
LLaMA 1 13B · 13B · 8.4 GB · 34 · 32.9
OPT 13B · 13B · 8.4 GB · 34 · 35.8
Orca 2 13B · 13B · 8.4 GB · 34 · 25.4
LLaVA-1.6 Vicuna 13B · 13B · 8.4 GB · 34 · 37.4
WizardCoder Python 13B · 13B · 8.4 GB · 34 · 60.1
WizardLM 13B · 13B · 8.4 GB · 34 · 19.5
Mistral-Nemo 12.2B · 12.2B · 7.9 GB · 37 · 22.4
Dolly v2 12B · 12B · 7.8 GB · 37 · 6.4
gemma-3-12b · 12B · 7.8 GB · 37 · 26.6
TranslateGemma 12B · 12B · 7.8 GB · 37 · 35.1
Pixtral 12B · 12B · 7.8 GB · 37 · 52.1
StableLM 2 12B · 12B · 7.8 GB · 37 · 21.3
Falcon2 11B · 11B · 7.2 GB · 41 · 33.2
Llama-3.2-11B-Vision-Instruct · 11B · 7.2 GB · 41 · 34.4
SOLAR-10.7B · 10.7B · 7.0 GB · 42 · 28.2
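For illustration, one way a ranked list like the one above could be assembled: keep only models whose estimated Q4 weights fit in 10 GB, then order them largest first so the most capable candidates lead. The small catalog, the 4.5 bits/weight footprint, and the weights-only simplification are assumptions for the sketch (the table's VRAM column runs about 1 GB higher, presumably including runtime overhead); this is not FitMyLLM's actual pipeline.

```python
# Hypothetical mini-catalog: (model name, parameters in billions).
CATALOG = [
    ("LLaVA-1.5 13B", 13.1),
    ("Mistral-Nemo 12.2B", 12.2),
    ("SOLAR-10.7B", 10.7),
    ("Llama 2 70B", 70.0),  # too large for 10 GB at Q4, gets filtered out
]
VRAM_GB = 10.0

def q4_weights_gb(params_b: float) -> float:
    return params_b * 4.5 / 8  # assumed Q4 footprint, weights only

# Filter to models that fit, then rank largest first.
fitting = [(name, p) for name, p in CATALOG if q4_weights_gb(p) <= VRAM_GB]
for name, p in sorted(fitting, key=lambda m: m[1], reverse=True):
    print(f"{name}: {p}B, ~{q4_weights_gb(p):.1f} GB of Q4 weights")
```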