NVIDIA · AMPERE

NVIDIA PG506-232

VRAM · 24 GB · HIGH-END
BANDWIDTH · 933 GB/S
MODELS @ Q4 · 249/331 · 75%
7B Q4 SPEED · ~107 TOK/S · BLAZING
▸ MODEL COVERAGE @ Q4 · 75% OF ALL MODELS
▸ ESTIMATED SPEED · BY MODEL SIZE @ Q4

Average speeds at Q4 quantization. Actual performance varies by model architecture and context length.

SIZE · EST. SPEED @ Q4
3B · ~249 TOK/S
7B · ~107 TOK/S
14B · ~53 TOK/S
32B · ~23 TOK/S
70B · DOES NOT FIT · 39.4 GB NEEDED
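These figures are consistent with a simple bandwidth-bound model of decoding: each generated token streams the full set of quantized weights through memory once, so speed scales with bandwidth divided by model size. Below is a minimal sketch of that estimate; the ~0.625 GB-per-billion-parameters Q4 footprint and ~50% effective-bandwidth factor are constants we inferred to fit this page's tables, not values published by FitMyLLM.

```python
# Bandwidth-bound decode-speed heuristic (a sketch, not FitMyLLM's code).
BANDWIDTH_GBPS = 933   # PG506-232 memory bandwidth
GB_PER_B_Q4 = 0.625    # assumed Q4 footprint: GB per billion params, incl. overhead
EFFICIENCY = 0.5       # assumed fraction of peak bandwidth achieved in practice

def est_tok_per_s(params_b: float) -> float:
    """Estimated Q4 decode speed (tok/s) for a dense model of params_b billion."""
    return BANDWIDTH_GBPS * EFFICIENCY / (params_b * GB_PER_B_Q4)

for size in (3, 7, 14, 32):
    print(f"{size}B: ~{est_tok_per_s(size):.0f} tok/s")
# Prints ~249, ~107, ~53, ~23 — matching the size table above.
```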
▸ SPECIFICATIONS
VRAM · 24 GB
BANDWIDTH · 933 GB/s
FP16 COMPUTE · 10.3 TFLOPS
TDP · 165 W
MEMORY · HBM2
ARCHITECTURE · Ampere
CUDA CORES · 3,584
TENSOR CORES · 224
PCIE · Gen 4 x16
213 FAST MODELS · >30 TOK/S · Real-time chat speed
249 USABLE · >10 TOK/S · Comfortable for all tasks
249 TOTAL COMPATIBLE · Fit in VRAM at Q4
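These tiers follow directly from the same heuristic. A self-contained, hypothetical classifier (the thresholds are this page's; the constants remain our fitted assumptions):

```python
# Hypothetical tier classifier built on the inferred heuristic above.
BANDWIDTH_GBPS = 933
GB_PER_B_Q4 = 0.625    # assumed GB of VRAM per billion params at Q4
EFFICIENCY = 0.5       # assumed effective-bandwidth fraction

def speed_tier(params_b: float, vram_gb: float = 24.0) -> str:
    """Bucket a dense model into this page's tiers at Q4."""
    if params_b * GB_PER_B_Q4 > vram_gb:
        return "DOES NOT FIT"
    tok_s = BANDWIDTH_GBPS * EFFICIENCY / (params_b * GB_PER_B_Q4)
    if tok_s > 30:
        return "FAST"         # >30 tok/s: real-time chat speed
    if tok_s > 10:
        return "USABLE"       # >10 tok/s: comfortable for all tasks
    return "COMPATIBLE"       # fits in VRAM, but slow

print(speed_tier(7))      # FAST (~107 tok/s)
print(speed_tier(34.4))   # USABLE (~22 tok/s)
print(speed_tier(70))     # DOES NOT FIT
```

Under these assumptions, any dense model that fits in 24 GB also clears 10 tok/s, which is why the usable and total-compatible counts coincide at 249.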
▸ DON’T WANT TO BUY?

Test the PG506-232 (or anything bigger) without committing: spin up a cloud instance in ~60 s, pay by the second, and cancel anytime.

Some links are affiliate links — we may earn a small commission at no extra cost to you. This helps keep FitMyLLM free and independent.

▸ COMPATIBLE MODELS · 249
CLASS · MODEL · PARAMS · TAGS · SPEED · VRAM
S · nomic-embed-text-v1.5 100M · 0.1B · EMBEDDING · 8K CTX · CHAT · 7464 TOK/S · 2% VRAM
S · GPT-2 124M · 0.124B · GPT2 · 1K CTX · CHAT · 6019 TOK/S · 2% VRAM
S · SmolLM2 135M · 0.135B · SMOLLM · 2K CTX · CHAT · 5529 TOK/S · 2% VRAM
S · bge-large-en-v1.5 335M · 0.335B · EMBEDDING · 1K CTX · CHAT · 2228 TOK/S · 3% VRAM
S · mxbai-embed-large-v1 · 0.335B · EMBEDDING · 1K CTX · EMBEDDING · 2228 TOK/S · 3% VRAM
S · Snowflake Arctic Embed L · 0.335B · EMBEDDING · 1K CTX · EMBEDDING · 2228 TOK/S · 3% VRAM
S · GPT-2 Medium 345M · 0.345B · GPT2 · 1K CTX · CHAT · 2163 TOK/S · 3% VRAM
S · SmolLM2 360M · 0.36B · SMOLLM · 8K CTX · CHAT · 2073 TOK/S · 3% VRAM
S · Falcon-H1 0.5B · 0.5B · FALCON · 128K CTX · CHAT · 1493 TOK/S · 3% VRAM
S · Qwen 1.5 0.5B · 0.5B · QWEN · 32K CTX · CHAT · 1493 TOK/S · 3% VRAM
S · Qwen 2.5 0.5B · 0.5B · QWEN · 32K CTX · CHAT · 1493 TOK/S · 3% VRAM
S · BGE-M3 · 0.568B · EMBEDDING · 8K CTX · EMBEDDING · 1314 TOK/S · 3% VRAM
S · Qwen3 0.6B · 0.6B · QWEN · 32K CTX · CHAT · REASONING · 1244 TOK/S · 4% VRAM
S · GPT-2 Large 774M · 0.774B · GPT2 · 1K CTX · CHAT · 964 TOK/S · 4% VRAM
S · Qwen 3.5 0.8B · 0.8B · QWEN · 256K CTX · CHAT · CODING · MULTILINGUAL · 933 TOK/S · 4% VRAM
S · Qwen3.5-0.8B · 0.9B · QWEN · 256K CTX · CHAT · 829 TOK/S · 4% VRAM
S · Falcon3-1B · 1B · FALCON · 32K CTX · CHAT · 746 TOK/S · 5% VRAM
S · InternLM2 1B · 1B · INTERNLM · 32K CTX · CHAT · 746 TOK/S · 5% VRAM
S · TinyLlama 1.1B · 1.1B · LLAMA · 2K CTX · CHAT · 679 TOK/S · 5% VRAM
S · LFM2.5-1.2B-Thinking · 1.2B · LFM · 122K CTX · CHAT · REASONING · TOOL_USE · 622 TOK/S · 5% VRAM
S · Llama-3.2-1B · 1.2B · LLAMA · 4K CTX · CHAT · 622 TOK/S · 5% VRAM
S · DeepSeek Coder 1.3B · 1.3B · DEEPSEEK · 16K CTX · CODING · 574 TOK/S · 5% VRAM
S · EXAONE-4.0-1.2B · 1.3B · EXAONE · 64K CTX · CHAT · 574 TOK/S · 5% VRAM
S · OPT 1.3B · 1.3B · OPT · 2K CTX · CHAT · 574 TOK/S · 5% VRAM
S · Phi-1 1.3B · 1.3B · PHI · 2K CTX · CODING · 574 TOK/S · 5% VRAM
S · Phi-1.5 1.3B · 1.3B · PHI · 2K CTX · CHAT · CODING · 574 TOK/S · 5% VRAM
S · granite-4.0-h-tiny 6.9B · 6.9B MoE · GRANITE · 128K CTX · CHAT · 498 TOK/S · 20% VRAM
S · Falcon-H1 1.5B · 1.5B · FALCON · 128K CTX · CHAT · CODING · 498 TOK/S · 6% VRAM
S · GPT-2 XL 1.5B · 1.5B · GPT2 · 1K CTX · CHAT · 498 TOK/S · 6% VRAM
S · Qwen2.5-Coder-1.5B · 1.5B · QWEN · 32K CTX · CHAT · TOOL_USE · CODING · 498 TOK/S · 6% VRAM
S · Qwen2 Math 1.5B · 1.5B · QWEN · 4K CTX · REASONING · 498 TOK/S · 6% VRAM
S · Qwen 2.5 1.5B · 1.5B · QWEN · 32K CTX · CHAT · CODING · 498 TOK/S · 6% VRAM
S · Yi Coder 1.5B · 1.5B · YI · 125K CTX · CODING · 498 TOK/S · 6% VRAM
S · stablelm-2-1_6b · 1.6B · STABLELM · 4K CTX · CHAT · 467 TOK/S · 6% VRAM
S · Qwen3 1.7B · 1.7B · QWEN · 32K CTX · CHAT · REASONING · 439 TOK/S · 6% VRAM
S · SmolLM2 1.7B · 1.71B · SMOLLM · 8K CTX · CHAT · 436 TOK/S · 6% VRAM
S · Qwen 1.5 1.8B · 1.8B · QWEN · 32K CTX · CHAT · 415 TOK/S · 7% VRAM
S · Moondream2 1.9B · 1.9B · OTHER · 2K CTX · VISION · CHAT · 393 TOK/S · 7% VRAM
S · Gemma 1 2B · 2B · GEMMA · 8K CTX · CHAT · 373 TOK/S · 7% VRAM
S · Granite 3.0 2B · 2B · GRANITE · 128K CTX · CHAT · CODING · 373 TOK/S · 7% VRAM
▸ NEXT STEP

Get personalized recommendations.

See ranked models with benchmark scores, run commands, and precise speed estimates for your PG506-232.

▸ DEVICE UNDER TEST

NVIDIA PG506-232 · 24 GB VRAM.

PG506-232 SPEC
BRAND · NVIDIA
VRAM · 24 GB HBM2
BANDWIDTH · 933 GB/s
FP16 COMPUTE · 10.3 TFLOPS
FP32 COMPUTE · 10.3 TFLOPS
CUDA CORES · 3,584
TENSOR CORES · 224
TDP · 165 W
ARCHITECTURE · Ampere
▸ AI CAPABILITY
249/331 models @ Q4

With 24 GB VRAM and 933 GB/s bandwidth, this GPU handles models up to 34.4B parameters.

Speed ≈ bandwidth / model_size × efficiency. A 7B model at Q4 runs at ~107 tok/s.
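As a worked check of that formula (a sketch; the 0.625 GB-per-billion-parameter Q4 footprint and 50% efficiency are the constants we inferred earlier, not published values):

```python
# Worked check of the speed/fit formula above (assumed constants).
BANDWIDTH_GBPS, EFFICIENCY, GB_PER_B_Q4, VRAM_GB = 933, 0.5, 0.625, 24

# Speed: a 7B model at Q4
print(BANDWIDTH_GBPS * EFFICIENCY / (7 * GB_PER_B_Q4))  # ~106.6 -> the quoted ~107 tok/s

# Fit: Q4 VRAM needed by the largest listed model (34.4B)
print(34.4 * GB_PER_B_Q4)                                # 21.5 GB, matching the table below

# Largest dense model that fits in 24 GB at Q4 under this rule
print(VRAM_GB / GB_PER_B_Q4)                             # 38.4B cap; 34.4B is the largest
                                                         # catalog model under it
```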

§ 01 · TOP MODELS FOR PG506-232
249 FIT · SHOWING 20
MODEL · SIZE · VRAM Q4 · TOK/S · AVG SCORE
Nous Capybara 34B · 34.4B · 21.5 GB · 22 · 42.0
Yi-1.5 34B · 34.4B · 21.5 GB · 22 · 45.3
Falcon-H1 34B · 34B · 21.3 GB · 22 · 66.1
CodeLlama 34B · 34B · 21.3 GB · 22 · 25.4
Nous Hermes 2 34B · 34B · 21.3 GB · 22 · 47.0
Phind CodeLlama 34B · 34B · 21.3 GB · 22 · 68.1
LLaVA-1.6 Yi 34B · 34B · 21.3 GB · 22 · 47.4
WizardCoder Python 34B · 34B · 21.3 GB · 22 · 73.2
Yi 34B · 34B · 21.3 GB · 22 · 33.4
DeepSeek Coder 33B · 33B · 20.7 GB · 23 · 26.0
Vicuna 33B · 33B · 20.7 GB · 23 · 17.2
LLaMA 1 30B · 33B · 20.7 GB · 23 · 17.8
DeepSeek-R1-Distill-Qwen-32B · 32.8B · 20.5 GB · 23 · 46.9
Qwen3 32B · 32.8B · 20.5 GB · 23 · 54.9
Qwen2.5-32B · 32.5B · 20.4 GB · 23 · 54.3
Qwen 2.5 Coder 32B · 32.5B · 20.4 GB · 23 · 48.0
QwQ-32B · 32.5B · 20.4 GB · 23 · 45.1
OLMo-2-0325-32B · 32.2B · 20.2 GB · 23 · 59.1
Aya Expanse 32B · 32B · 20.0 GB · 23 · 35.9
Cogito 32B · 32B · 20.0 GB · 23 · 39.4