NVIDIA L20 (Ada Lovelace)

- 48 GB VRAM (flagship)
- 864 GB/s memory bandwidth
- 253 / 290 models fit at Q4 (87% coverage)
- ~99 tok/s on a 7B model at Q4 (blazing)
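The per-model speeds on this page are consistent with a simple memory-bandwidth roofline: each generated token streams the full set of Q4 weights (~0.5 bytes per parameter), scaled by an effective-bandwidth factor of roughly 0.4. That factor is inferred by back-fitting the listed figures, not a published spec, so treat this as a sketch:

```python
def decode_speed_toks(params_b: float, bandwidth_gbs: float = 864.0,
                      bytes_per_param: float = 0.5,
                      efficiency: float = 0.4) -> float:
    """Estimate bandwidth-bound decode speed (tokens/s) at Q4.

    Each token reads every weight once; `efficiency` (~0.4) is an
    assumed fudge factor inferred from this page's numbers.
    """
    model_gb = params_b * bytes_per_param
    return efficiency * bandwidth_gbs / model_gb

# 7B at Q4 on the L20: matches the ~99 tok/s headline figure
print(round(decode_speed_toks(7.0)))  # → 99
```

The same formula reproduces the smaller entries below (e.g. a 1.5B model gives 461 tok/s, a 0.5B model 1382 tok/s), which suggests the site uses a bandwidth-bound estimate rather than measured throughput.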

Specifications

| Spec | Value |
|---|---|
| VRAM | 48 GB |
| Bandwidth | 864 GB/s |
| FP16 Compute | 59.4 TFLOPS |
| TDP | 275 W |
| Memory | GDDR6 |
| Architecture | Ada Lovelace |
| CUDA Cores | 11,776 |
| Tensor Cores | 368 |
| PCIe | Gen 4 x16 |
- 201 fast models (>30 tok/s) — real-time chat speed
- 253 usable models (>10 tok/s) — comfortable for all tasks
- 253 total compatible — fit in VRAM at Q4
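The "fit in VRAM at Q4" cutoff can be sketched as a rough capacity check: Q4 weights at ~0.5 bytes per parameter plus a fixed overhead for KV cache and activations. The 2 GB overhead here is an illustrative assumption, not a figure from this page:

```python
def fits_at_q4(params_b: float, vram_gb: float = 48.0,
               bytes_per_param: float = 0.5,
               overhead_gb: float = 2.0) -> bool:
    """Rough Q4 fit check: quantized weights plus an assumed fixed
    overhead (KV cache, activations) must fit in VRAM."""
    return params_b * bytes_per_param + overhead_gb <= vram_gb

print(fits_at_q4(7))    # 3.5 GB weights + 2 GB overhead -> True
print(fits_at_q4(120))  # 60 GB weights alone exceed 48 GB -> False
```

In practice the overhead scales with context length and batch size, which is why long-context models consume a larger share of VRAM than their weight size alone suggests.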

Compatible Models (253)

| Size | Model | Params | Family | Context | Tags | Speed (tok/s) | VRAM |
|---|---|---|---|---|---|---|---|
| S | GPT-2 124M | 0.124B | gpt2 | 1K | chat | 5574 | 1% |
| S | SmolLM2 135M | 0.135B | smollm | 2K | chat | 5120 | 1% |
| S | Nomic Embed Text v1.5 | 0.137B | embedding | 8K | embedding | 5045 | 1% |
| S | mxbai-embed-large-v1 | 0.335B | embedding | 1K | embedding | 2063 | 1% |
| S | Snowflake Arctic Embed L | 0.335B | embedding | 1K | embedding | 2063 | 1% |
| S | GPT-2 Medium 345M | 0.345B | gpt2 | 1K | chat | 2003 | 1% |
| S | SmolLM2 360M | 0.36B | smollm | 8K | chat | 1920 | 1% |
| S | Qwen 1.5 0.5B | 0.5B | qwen | 32K | chat | 1382 | 2% |
| S | Qwen 2.5 0.5B | 0.5B | qwen | 32K | chat | 1382 | 2% |
| S | BGE-M3 | 0.568B | embedding | 8K | embedding | 1217 | 1% |
| S | Qwen3 0.6B | 0.6B | qwen | 32K | chat, reasoning | 1152 | 1% |
| S | GPT-2 Large 774M | 0.774B | gpt2 | 1K | chat | 893 | 2% |
| S | Qwen3-0.6B | 0.8B | qwen | 40K | chat | 864 | 2% |
| S | Qwen3.5-0.8B | 0.9B | qwen | 256K | chat | 768 | 2% |
| S | Falcon3-1B | 1B | falcon | 32K | chat | 691 | 2% |
| S | InternLM2 1B | 1B | internlm | 32K | chat | 691 | 2% |
| S | TinyLlama-1.1B | 1.1B | llama | 2K | chat | 628 | 2% |
| S | TinyLlama 1.1B | 1.1B | llama | 2K | chat | 628 | 2% |
| S | Llama-3.2-1B | 1.2B | llama | 4K | chat | 576 | 2% |
| S | LFM2.5-1.2B-Thinking | 1.2B | lfm | 122K | chat, reasoning, tool_use | 576 | 1% |
| S | DeepSeek Coder 1.3B | 1.3B | deepseek | 16K | coding | 532 | 3% |
| S | OPT 1.3B | 1.3B | opt | 2K | chat | 532 | 2% |
| S | Phi-1 1.3B | 1.3B | phi | 2K | coding | 532 | 3% |
| S | Phi-1.5 1.3B | 1.3B | phi | 2K | chat, coding | 532 | 3% |
| S | GPT-2 XL 1.5B | 1.5B | gpt2 | 1K | chat | 461 | 3% |
| S | Qwen2.5-Coder-1.5B | 1.5B | qwen | 32K | coding, chat | 461 | 3% |
| S | Qwen2 Math 1.5B | 1.5B | qwen | 4K | reasoning | 461 | 3% |
| S | Qwen 2.5 1.5B | 1.5B | qwen | 32K | chat, coding | 461 | 3% |
| S | Yi Coder 1.5B | 1.5B | yi | 125K | coding | 461 | 3% |
| S | stablelm-2-1_6b | 1.6B | stablelm | 4K | chat | 432 | 3% |
| S | Qwen3 1.7B | 1.7B | qwen | 32K | chat, reasoning | 407 | 3% |
| S | SmolLM2 1.7B | 1.71B | smollm | 8K | chat | 404 | 3% |
| S | Qwen 1.5 1.8B | 1.8B | qwen | 32K | chat | 384 | 3% |
| S | Moondream2 1.9B | 1.9B | other | 2K | vision, chat | 364 | 3% |
| S | Granite 4.0 Tiny | 7B MoE | granite | 125K | chat, coding, multilingual | 346 | 9% |
| S | Gemma 1 2B | 2B | gemma | 8K | chat | 346 | 3% |
| S | Granite 3.0 2B | 2B | granite | 128K | chat, coding | 346 | 3% |
| S | Granite 3.1 2B | 2B | granite | 128K | chat, coding | 346 | 3% |
| S | Qwen2-VL 2B | 2.21B | qwen | 32K | chat, vision | 313 | 4% |
| S | Qwen3.5-2B | 2.3B | qwen | 256K | chat | 301 | 4% |

Get personalized recommendations

See ranked models with benchmark scores, run commands, and precise speed estimates for your L20.