
NVIDIA RTX 4000 Mobile Ada Generation

VRAM: 12 GB (entry-level)
Memory bandwidth: 432 GB/s
Models that fit at Q4: 182 of 290 (63% of all models)
7B Q4 speed: ~49 tok/s (fast)
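The ~49 tok/s figure for a 7B model at Q4 is consistent with memory-bound decoding, where each generated token streams the full weights through the memory bus once. A minimal sketch, assuming ~4.0 GB of Q4 weights for a 7B model and ~45% effective bandwidth utilization (both are assumptions chosen to match the figures above, not measured values):

```python
def decode_tok_s(bandwidth_gb_s: float, model_gb: float, efficiency: float = 0.45) -> float:
    """Rough decode speed for memory-bound LLM inference:
    each generated token reads the full weights once, so speed is
    (effective bandwidth) / (model size). Efficiency is an assumption."""
    return bandwidth_gb_s * efficiency / model_gb

# RTX 4000 Mobile Ada: 432 GB/s bandwidth; 7B at Q4 is roughly 4.0 GB of weights.
print(round(decode_tok_s(432, 4.0)))  # -> 49 tok/s
```

The same relation explains why the smaller models in the table below decode far faster: halving model size roughly doubles tokens per second on the same card.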

Specifications

VRAM: 12 GB
Bandwidth: 432 GB/s
FP16 compute: 24.7 TFLOPS
TDP: 110 W
Memory: GDDR6
Architecture: Ada Lovelace
CUDA cores: 7,424
Tensor cores: 232
PCIe: Gen 4 x16
Fast models (>30 tok/s): 150 (real-time chat speed)
Usable models (>10 tok/s): 182 (comfortable for all tasks)
Total compatible: 182 (fit in VRAM at Q4)
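Whether a model "fits in VRAM at Q4" comes down to simple arithmetic on parameter count. A rough sketch, assuming Q4_K_M-style quantization at ~4.5 bits per weight (~0.57 bytes/param) and ~1 GB reserved for the KV cache and runtime buffers; all of these constants are assumptions, not values from the page:

```python
def fits_at_q4(params_b: float, vram_gb: float = 12.0,
               bytes_per_param: float = 0.57, overhead_gb: float = 1.0) -> bool:
    """Rough Q4 fit check: quantized weights (~0.57 bytes/param for ~4.5-bit
    quantization) plus a fixed overhead for KV cache and buffers must fit in VRAM."""
    return params_b * bytes_per_param + overhead_gb <= vram_gb

print(fits_at_q4(7))   # True  (~5.0 GB total)
print(fits_at_q4(13))  # True  (~8.4 GB total)
print(fits_at_q4(34))  # False (~20.4 GB total)
```

Under these assumptions a 12 GB card tops out somewhere around the 13B class at Q4, which is consistent with 182 of 290 (mostly small-to-mid-size) models being compatible.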

Compatible Models (182)

Size | Model | Params | Family | Context | Use cases | Speed (tok/s) | VRAM used
S | GPT-2 124M | 0.124B | gpt2 | 1K | chat | 2787 | 3%
S | SmolLM2 135M | 0.135B | smollm | 2K | chat | 2560 | 3%
S | Nomic Embed Text v1.5 | 0.137B | embedding | 8K | embedding | 2523 | 2%
S | mxbai-embed-large-v1 | 0.335B | embedding | 1K | embedding | 1032 | 4%
S | Snowflake Arctic Embed L | 0.335B | embedding | 1K | embedding | 1032 | 4%
S | GPT-2 Medium 345M | 0.345B | gpt2 | 1K | chat | 1002 | 4%
S | SmolLM2 360M | 0.36B | smollm | 8K | chat | 960 | 4%
S | Qwen 1.5 0.5B | 0.5B | qwen | 32K | chat | 691 | 6%
S | Qwen 2.5 0.5B | 0.5B | qwen | 32K | chat | 691 | 6%
S | BGE-M3 | 0.568B | embedding | 8K | embedding | 608 | 5%
S | Qwen3 0.6B | 0.6B | qwen | 32K | chat, reasoning | 576 | 5%
S | GPT-2 Large 774M | 0.774B | gpt2 | 1K | chat | 447 | 6%
S | Qwen3-0.6B | 0.8B | qwen | 40K | chat | 432 | 8%
S | Qwen3.5-0.8B | 0.9B | qwen | 256K | chat | 384 | 8%
S | Falcon3-1B | 1B | falcon | 32K | chat | 346 | 8%
S | InternLM2 1B | 1B | internlm | 32K | chat | 346 | 8%
S | TinyLlama-1.1B | 1.1B | llama | 2K | chat | 314 | 9%
S | TinyLlama 1.1B | 1.1B | llama | 2K | chat | 314 | 9%
S | Llama-3.2-1B | 1.2B | llama | 4K | chat | 288 | 10%
S | LFM2.5-1.2B-Thinking | 1.2B | lfm | 122K | chat, reasoning, tool use | 288 | 6%
S | DeepSeek Coder 1.3B | 1.3B | deepseek | 16K | coding | 266 | 10%
S | OPT 1.3B | 1.3B | opt | 2K | chat | 266 | 10%
S | Phi-1 1.3B | 1.3B | phi | 2K | coding | 266 | 10%
S | Phi-1.5 1.3B | 1.3B | phi | 2K | chat, coding | 266 | 10%
S | GPT-2 XL 1.5B | 1.5B | gpt2 | 1K | chat | 230 | 11%
S | Qwen2.5-Coder-1.5B | 1.5B | qwen | 32K | coding, chat | 230 | 11%
S | Qwen2 Math 1.5B | 1.5B | qwen | 4K | reasoning | 230 | 11%
S | Qwen 2.5 1.5B | 1.5B | qwen | 32K | chat, coding | 230 | 11%
S | Yi Coder 1.5B | 1.5B | yi | 125K | coding | 230 | 11%
S | stablelm-2-1_6b | 1.6B | stablelm | 4K | chat | 216 | 12%
S | Qwen3 1.7B | 1.7B | qwen | 32K | chat, reasoning | 203 | 12%
S | SmolLM2 1.7B | 1.71B | smollm | 8K | chat | 202 | 12%
S | Qwen 1.5 1.8B | 1.8B | qwen | 32K | chat | 192 | 13%
S | Moondream2 1.9B | 1.9B | other | 2K | vision, chat | 182 | 11%
S | Granite 4.0 Tiny | 7B MoE | granite | 125K | chat, coding, multilingual | 173 | 37%
S | Gemma 1 2B | 2B | gemma | 8K | chat | 173 | 13%
S | Granite 3.0 2B | 2B | granite | 128K | chat, coding | 173 | 13%
S | Granite 3.1 2B | 2B | granite | 128K | chat, coding | 173 | 13%
S | Qwen2-VL 2B | 2.21B | qwen | 32K | chat, vision | 156 | 14%
S | Qwen3.5-2B | 2.3B | qwen | 256K | chat | 150 | 15%
