NVIDIA · AMPERE

NVIDIA A10M

VRAM: 20 GB (MID-RANGE)
BANDWIDTH: 500 GB/s
MODELS @ Q4: 212/331 (64%)
7B Q4 SPEED: ~57 tok/s (BLAZING)
▸ MODEL COVERAGE @ Q4 · 64% OF ALL MODELS
▸ ESTIMATED SPEED BY MODEL SIZE @ Q4

Average speeds at Q4 quantization. Actual performance varies by model architecture and context length.

3B: ~133 tok/s
7B: ~57 tok/s
14B: ~29 tok/s
32B: ~13 tok/s
70B: does not fit (39.4 GB needed)
▸ SPECIFICATIONS

VRAM: 20 GB
BANDWIDTH: 500 GB/s
FP16 COMPUTE: 23.4 TFLOPS
TDP: 150 W
MEMORY: GDDR6
ARCHITECTURE: Ampere
CUDA CORES: 7,168
TENSOR CORES: 224
PCIE: Gen 4 x16
FAST MODELS (>30 tok/s): 184 · Real-time chat speed
USABLE (>10 tok/s): 212 · Comfortable for all tasks
TOTAL COMPATIBLE: 212 · Fit in VRAM at Q4
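The banding above can be sketched in a few lines. The thresholds (>30 tok/s for real-time chat, >10 tok/s for comfortable use) come from this page; the function name and structure are illustrative, not FitMyLLM's actual code.

```python
def band(tok_s: float, fits_in_vram: bool) -> str:
    """Classify a model for this GPU by estimated Q4 decode speed."""
    if not fits_in_vram:
        return "incompatible"  # weights alone exceed the 20 GB VRAM budget
    if tok_s > 30:
        return "fast"    # real-time chat speed
    if tok_s > 10:
        return "usable"  # comfortable for all tasks
    return "slow"

print(band(57, True))  # 7B @ Q4 on the A10M: "fast"
print(band(13, True))  # 32B @ Q4: "usable"
```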
▸ DON’T WANT TO BUY?

Test the A10M (or anything bigger) without committing. Spin up in ~60s, pay by the second, cancel anytime.

Some links are affiliate links — we may earn a small commission at no extra cost to you. This helps keep FitMyLLM free and independent.

▸ COMPATIBLE MODELS · 212
S · nomic-embed-text-v1.5 100M · 0.1B · EMBEDDING · 8K CTX · CHAT · 4000 tok/s · 3% VRAM
S · GPT-2 124M · 0.124B · GPT2 · 1K CTX · CHAT · 3226 tok/s · 3% VRAM
S · SmolLM2 135M · 0.135B · SMOLLM · 2K CTX · CHAT · 2963 tok/s · 3% VRAM
S · bge-large-en-v1.5 335M · 0.335B · EMBEDDING · 1K CTX · CHAT · 1194 tok/s · 3% VRAM
S · mxbai-embed-large-v1 · 0.335B · EMBEDDING · 1K CTX · EMBEDDING · 1194 tok/s · 3% VRAM
S · Snowflake Arctic Embed L · 0.335B · EMBEDDING · 1K CTX · EMBEDDING · 1194 tok/s · 3% VRAM
S · GPT-2 Medium 345M · 0.345B · GPT2 · 1K CTX · CHAT · 1159 tok/s · 3% VRAM
S · SmolLM2 360M · 0.36B · SMOLLM · 8K CTX · CHAT · 1111 tok/s · 4% VRAM
S · Falcon-H1 0.5B · 0.5B · FALCON · 128K CTX · CHAT · 800 tok/s · 4% VRAM
S · Qwen 1.5 0.5B · 0.5B · QWEN · 32K CTX · CHAT · 800 tok/s · 4% VRAM
S · Qwen 2.5 0.5B · 0.5B · QWEN · 32K CTX · CHAT · 800 tok/s · 4% VRAM
S · BGE-M3 · 0.568B · EMBEDDING · 8K CTX · EMBEDDING · 704 tok/s · 4% VRAM
S · Qwen3 0.6B · 0.6B · QWEN · 32K CTX · CHAT · REASONING · 667 tok/s · 4% VRAM
S · GPT-2 Large 774M · 0.774B · GPT2 · 1K CTX · CHAT · 517 tok/s · 5% VRAM
S · Qwen 3.5 0.8B · 0.8B · QWEN · 256K CTX · CHAT · CODING · MULTILINGUAL · 500 tok/s · 5% VRAM
S · Qwen3.5-0.8B · 0.9B · QWEN · 256K CTX · CHAT · 444 tok/s · 5% VRAM
S · Falcon3-1B · 1B · FALCON · 32K CTX · CHAT · 400 tok/s · 5% VRAM
S · InternLM2 1B · 1B · INTERNLM · 32K CTX · CHAT · 400 tok/s · 5% VRAM
S · TinyLlama 1.1B · 1.1B · LLAMA · 2K CTX · CHAT · 364 tok/s · 6% VRAM
S · LFM2.5-1.2B-Thinking · 1.2B · LFM · 122K CTX · CHAT · REASONING · TOOL_USE · 333 tok/s · 6% VRAM
S · Llama-3.2-1B · 1.2B · LLAMA · 4K CTX · CHAT · 333 tok/s · 6% VRAM
S · DeepSeek Coder 1.3B · 1.3B · DEEPSEEK · 16K CTX · CODING · 308 tok/s · 6% VRAM
S · EXAONE-4.0-1.2B · 1.3B · EXAONE · 64K CTX · CHAT · 308 tok/s · 6% VRAM
S · OPT 1.3B · 1.3B · OPT · 2K CTX · CHAT · 308 tok/s · 6% VRAM
S · Phi-1 1.3B · 1.3B · PHI · 2K CTX · CODING · 308 tok/s · 6% VRAM
S · Phi-1.5 1.3B · 1.3B · PHI · 2K CTX · CHAT · CODING · 308 tok/s · 6% VRAM
S · granite-4.0-h-tiny 6.9B · 6.9B MoE · GRANITE · 128K CTX · CHAT · 267 tok/s · 24% VRAM
S · Falcon-H1 1.5B · 1.5B · FALCON · 128K CTX · CHAT · CODING · 267 tok/s · 7% VRAM
S · GPT-2 XL 1.5B · 1.5B · GPT2 · 1K CTX · CHAT · 267 tok/s · 7% VRAM
S · Qwen2.5-Coder-1.5B · 1.5B · QWEN · 32K CTX · CHAT · TOOL_USE · CODING · 267 tok/s · 7% VRAM
S · Qwen2 Math 1.5B · 1.5B · QWEN · 4K CTX · REASONING · 267 tok/s · 7% VRAM
S · Qwen 2.5 1.5B · 1.5B · QWEN · 32K CTX · CHAT · CODING · 267 tok/s · 7% VRAM
S · Yi Coder 1.5B · 1.5B · YI · 125K CTX · CODING · 267 tok/s · 7% VRAM
S · stablelm-2-1_6b · 1.6B · STABLELM · 4K CTX · CHAT · 250 tok/s · 7% VRAM
S · Qwen3 1.7B · 1.7B · QWEN · 32K CTX · CHAT · REASONING · 235 tok/s · 8% VRAM
S · SmolLM2 1.7B · 1.71B · SMOLLM · 8K CTX · CHAT · 234 tok/s · 8% VRAM
S · Qwen 1.5 1.8B · 1.8B · QWEN · 32K CTX · CHAT · 222 tok/s · 8% VRAM
S · Moondream2 1.9B · 1.9B · OTHER · 2K CTX · VISION · CHAT · 211 tok/s · 8% VRAM
S · Gemma 1 2B · 2B · GEMMA · 8K CTX · CHAT · 200 tok/s · 9% VRAM
S · Granite 3.0 2B · 2B · GRANITE · 128K CTX · CHAT · CODING · 200 tok/s · 9% VRAM
▸ NEXT STEP

Get personalized recommendations.

See ranked models with benchmark scores, run commands, and precise speed estimates for your A10M.

▸ DEVICE UNDER TEST

NVIDIA A10M · 20 GB VRAM

A10M SPEC

BRAND: NVIDIA
VRAM: 20 GB GDDR6
BANDWIDTH: 500 GB/s
FP16 COMPUTE: 23.4 TFLOPS
FP32 COMPUTE: 23.4 TFLOPS
CUDA CORES: 7,168
TENSOR CORES: 224
TDP: 150 W
ARCHITECTURE: Ampere
▸ AI CAPABILITY
212 / 331 models @ Q4

With 20 GB VRAM and 500 GB/s bandwidth, this GPU handles models up to 28B parameters.

Speed ≈ bandwidth / model_size × efficiency. A 7B model at Q4 runs at ~57 tok/s.
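The rule of thumb above can be made concrete. A minimal sketch, assuming roughly 4.5 bits (0.5625 bytes) per parameter for Q4-style quants and ~45% effective bandwidth utilization; both constants are fitted to this page's figures, not official numbers.

```python
BANDWIDTH_GB_S = 500          # A10M memory bandwidth (from the spec table)
Q4_BYTES_PER_PARAM = 0.5625   # assumption: ~4.5 bits/param for Q4-style quants
EFFICIENCY = 0.45             # assumption: fraction of peak bandwidth actually achieved

def q4_weights_gb(params_b: float) -> float:
    """Approximate Q4 weight size in GB for a params_b-billion-parameter model."""
    return params_b * Q4_BYTES_PER_PARAM

def est_tok_s(params_b: float) -> float:
    """Decode speed estimate: each generated token streams all weights from VRAM once."""
    return BANDWIDTH_GB_S / q4_weights_gb(params_b) * EFFICIENCY

print(round(est_tok_s(7)))          # ~57 tok/s, matching the figure above
print(round(est_tok_s(3)))          # ~133 tok/s
print(round(q4_weights_gb(70), 1))  # ~39.4 GB: why 70B does not fit in 20 GB
```

The same constants reproduce the whole speed-by-size table, which suggests the site uses a formula of this shape.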

§ 01 · TOP MODELS FOR A10M
212 FIT · SHOWING 20
MODEL · SIZE · VRAM @ Q4 · TOK/S · AVG
PaliGemma 2 28B · 28B · 17.6 GB · 14 · 38.6
Qwen3.5-27B · 27.8B · 17.5 GB · 14 · 59.4
gemma-3-27b · 27.4B · 17.2 GB · 15 · 27.2
gemma-2-27b · 27.2B · 17.1 GB · 15 · 34.6
TranslateGemma 27B · 27B · 17.0 GB · 15 · 38.6
Gemma 4 26B A4B · 26B · 16.4 GB · 100 · 47.9
Mistral-Small-24B · 24B · 15.2 GB · 17 · 25.0
Mistral-Small-3.1-24B · 24B · 15.2 GB · 17 · 28.8
Magistral Small 24B · 24B · 15.2 GB · 17 · 47.0
Devstral Small 2 24B · 24B · 15.2 GB · 17 · 33.4
Codestral 22B · 22.2B · 14.1 GB · 18 · 50.1
Devstral Small 22B · 22.2B · 14.1 GB · 18 · 35.5
Mistral Small 22B · 22.2B · 14.1 GB · 18 · 35.2
SOLAR-Pro 22B · 22.1B · 14.0 GB · 18 · 44.2
ERNIE 4.5 21B A3B · 21B · 13.3 GB · 133 · n/a
GPT-OSS 20B · 21B · 13.3 GB · 111 · 52.9
InternLM2 20B · 19.8B · 12.6 GB · 20 · 45.1
InternLM2.5 20B · 19.8B · 12.6 GB · 20 · 50.9
Ling-lite 16.8B · 16.8B · 10.8 GB · 167 · n/a
DeepSeek V2 Lite 16B · 16B · 10.3 GB · 167 · 38.0