NVIDIA / Ampere

NVIDIA PG506-217

VRAM: 24 GB (High-end)
Memory Bandwidth: 933 GB/s
Models fit at Q4: 229 / 290 (79% of all models)
7B Q4 speed: ~107 tok/s (Blazing)

Specifications

VRAM: 24 GB
Bandwidth: 933 GB/s
FP16 Compute: 10.3 TFLOPS
TDP: 165W
Memory: HBM2
Architecture: Ampere
CUDA Cores: 3,584
Tensor Cores: 224
PCIe: Gen 4 x16
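Single-stream LLM decoding is typically memory-bandwidth-bound: every generated token streams the full set of weights once, so the ~107 tok/s headline for a 7B Q4 model follows roughly from the 933 GB/s figure above. A back-of-envelope sketch (the 4.5 bits/weight for a Q4-class quant and the ~0.45 efficiency factor are assumptions chosen to match the listed estimate, not measured values):

```python
def decode_tok_per_s(params_b: float, bw_gb_s: float,
                     bits_per_weight: float = 4.5,
                     efficiency: float = 0.45) -> float:
    """Estimate decode speed for a bandwidth-bound model.

    Each token requires reading all quantized weights once, so the
    theoretical ceiling is bandwidth / model size; a sub-1.0 efficiency
    factor accounts for real-world overhead (assumed, not measured).
    """
    weights_gb = params_b * bits_per_weight / 8  # params in billions -> GB
    return bw_gb_s / weights_gb * efficiency

# PG506-217: 933 GB/s bandwidth, 7B model at Q4
print(round(decode_tok_per_s(7, 933)))  # ~107, in line with the listed figure
```

The same formula explains why the smaller models in the list below scale to thousands of tokens per second: halving the weight footprint roughly doubles decode speed.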
Fast models (>30 tok/s): 197 (real-time chat speed)
Usable models (>10 tok/s): 229 (comfortable for all tasks)
Total compatible: 229 (fit in VRAM at Q4)
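"Fit in VRAM at Q4" can be sanity-checked directly: a model fits when its quantized weights plus runtime/KV-cache overhead stay under the 24 GB of VRAM. A minimal sketch (the 4.5 bits/weight and the flat 1.5 GB overhead are illustrative assumptions, not values from this page):

```python
def fits_at_q4(params_b: float, vram_gb: float = 24.0,
               bits_per_weight: float = 4.5,
               overhead_gb: float = 1.5) -> bool:
    """True if Q4-quantized weights plus a flat runtime/KV-cache
    overhead (assumed 1.5 GB) fit within the given VRAM budget."""
    weights_gb = params_b * bits_per_weight / 8  # params in billions -> GB
    return weights_gb + overhead_gb <= vram_gb

print(fits_at_q4(7))    # True: ~3.9 GB of weights plus overhead
print(fits_at_q4(70))   # False: ~39 GB of weights alone exceeds 24 GB
```

By this criterion the card comfortably holds everything in the small-model list below, and the per-model "% VRAM" column is essentially this weight footprint expressed as a fraction of 24 GB.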

Compatible Models (229)

S | GPT-2 124M | 0.124B | gpt2 | 1K ctx | chat | 6019 tok/s | 2% VRAM
S | SmolLM2 135M | 0.135B | smollm | 2K ctx | chat | 5529 tok/s | 2% VRAM
S | Nomic Embed Text v1.5 | 0.137B | embedding | 8K ctx | embedding | 5448 tok/s | 1% VRAM
S | mxbai-embed-large-v1 | 0.335B | embedding | 1K ctx | embedding | 2228 tok/s | 2% VRAM
S | Snowflake Arctic Embed L | 0.335B | embedding | 1K ctx | embedding | 2228 tok/s | 2% VRAM
S | GPT-2 Medium 345M | 0.345B | gpt2 | 1K ctx | chat | 2163 tok/s | 2% VRAM
S | SmolLM2 360M | 0.36B | smollm | 8K ctx | chat | 2073 tok/s | 2% VRAM
S | Qwen 1.5 0.5B | 0.5B | qwen | 32K ctx | chat | 1493 tok/s | 3% VRAM
S | Qwen 2.5 0.5B | 0.5B | qwen | 32K ctx | chat | 1493 tok/s | 3% VRAM
S | BGE-M3 | 0.568B | embedding | 8K ctx | embedding | 1314 tok/s | 3% VRAM
S | Qwen3 0.6B | 0.6B | qwen | 32K ctx | chat, reasoning | 1244 tok/s | 3% VRAM
S | GPT-2 Large 774M | 0.774B | gpt2 | 1K ctx | chat | 964 tok/s | 3% VRAM
S | Qwen3-0.6B | 0.8B | qwen | 40K ctx | chat | 933 tok/s | 4% VRAM
S | Qwen3.5-0.8B | 0.9B | qwen | 256K ctx | chat | 829 tok/s | 4% VRAM
S | Falcon3-1B | 1B | falcon | 32K ctx | chat | 746 tok/s | 4% VRAM
S | InternLM2 1B | 1B | internlm | 32K ctx | chat | 746 tok/s | 4% VRAM
S | TinyLlama-1.1B | 1.1B | llama | 2K ctx | chat | 679 tok/s | 5% VRAM
S | TinyLlama 1.1B | 1.1B | llama | 2K ctx | chat | 679 tok/s | 5% VRAM
S | Llama-3.2-1B | 1.2B | llama | 4K ctx | chat | 622 tok/s | 5% VRAM
S | LFM2.5-1.2B-Thinking | 1.2B | lfm | 122K ctx | chat, reasoning, tool_use | 622 tok/s | 3% VRAM
S | DeepSeek Coder 1.3B | 1.3B | deepseek | 16K ctx | coding | 574 tok/s | 5% VRAM
S | OPT 1.3B | 1.3B | opt | 2K ctx | chat | 574 tok/s | 5% VRAM
S | Phi-1 1.3B | 1.3B | phi | 2K ctx | coding | 574 tok/s | 5% VRAM
S | Phi-1.5 1.3B | 1.3B | phi | 2K ctx | chat, coding | 574 tok/s | 5% VRAM
S | GPT-2 XL 1.5B | 1.5B | gpt2 | 1K ctx | chat | 498 tok/s | 5% VRAM
S | Qwen2.5-Coder-1.5B | 1.5B | qwen | 32K ctx | coding, chat | 498 tok/s | 6% VRAM
S | Qwen2 Math 1.5B | 1.5B | qwen | 4K ctx | reasoning | 498 tok/s | 6% VRAM
S | Qwen 2.5 1.5B | 1.5B | qwen | 32K ctx | chat, coding | 498 tok/s | 6% VRAM
S | Yi Coder 1.5B | 1.5B | yi | 125K ctx | coding | 498 tok/s | 6% VRAM
S | stablelm-2-1_6b | 1.6B | stablelm | 4K ctx | chat | 467 tok/s | 6% VRAM
S | Qwen3 1.7B | 1.7B | qwen | 32K ctx | chat, reasoning | 439 tok/s | 6% VRAM
S | SmolLM2 1.7B | 1.71B | smollm | 8K ctx | chat | 436 tok/s | 6% VRAM
S | Qwen 1.5 1.8B | 1.8B | qwen | 32K ctx | chat | 415 tok/s | 6% VRAM
S | Moondream2 1.9B | 1.9B | other | 2K ctx | vision, chat | 393 tok/s | 6% VRAM
S | Granite 4.0 Tiny | 7B MoE | granite | 125K ctx | chat, coding, multilingual | 373 tok/s | 18% VRAM
S | Gemma 1 2B | 2B | gemma | 8K ctx | chat | 373 tok/s | 7% VRAM
S | Granite 3.0 2B | 2B | granite | 128K ctx | chat, coding | 373 tok/s | 7% VRAM
S | Granite 3.1 2B | 2B | granite | 128K ctx | chat, coding | 373 tok/s | 7% VRAM
S | Qwen2-VL 2B | 2.21B | qwen | 32K ctx | chat, vision | 338 tok/s | 7% VRAM
S | Qwen3.5-2B | 2.3B | qwen | 256K ctx | chat | 325 tok/s | 7% VRAM

Get personalized recommendations

See ranked models with benchmark scores, run commands, and precise speed estimates for your PG506-217.