NVIDIA · AMPERE

NVIDIA A100 SXM4 40 GB

VRAM
40 GB
HIGH-END
BANDWIDTH
1560
GB/S
MODELS Q4
261/331
79%
7B Q4 SPEED
~178
BLAZING
▸ MODEL COVERAGE @ Q4 · 79% OF ALL
▸ ESTIMATED SPEED · BY MODEL SIZE @ Q4

Average speeds at Q4 quantization. Actual performance varies by model architecture and context length.

3B · ~416 TOK/S
7B · ~178 TOK/S
14B · ~89 TOK/S
32B · ~39 TOK/S
70B · 39.4 GB NEEDED
▸ SPECIFICATIONS
VRAM
40 GB
BANDWIDTH
1560 GB/s
FP16 COMPUTE
78 TFLOPS
TDP
400W
MEMORY
HBM2e
ARCHITECTURE
Ampere
CUDA CORES
6,912
TENSOR CORES
432
PCIE
Gen 4 x16
MSRP
$10,000
260 · FAST MODELS · >30 TOK/S · Real-time chat speed
261 · USABLE · >10 TOK/S · Comfortable for all tasks
261 · TOTAL COMPATIBLE · Fit in VRAM at Q4
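The Q4 fit numbers above (including the 39.4 GB figure quoted for 70B) follow from a simple weights-only estimate. A minimal sketch, assuming ~4.5 bits per weight at Q4 (typical of Q4_K_M-style quants; this constant is an assumption that happens to reproduce the page's 70B figure):

```python
# Weights-only VRAM estimate at Q4 quantization.
# 4.5 bits/weight is an assumed average (typical of Q4_K_M quants);
# it reproduces the 39.4 GB figure quoted for a 70B model.
Q4_BITS_PER_WEIGHT = 4.5

def q4_weights_gb(params_billions: float) -> float:
    """Approximate GB needed to hold the quantized weights alone."""
    return params_billions * Q4_BITS_PER_WEIGHT / 8

print(f"{q4_weights_gb(70):.1f} GB")  # 39.4 GB of a 40 GB card: no headroom
                                      # for KV cache, hence the "NEEDED" flag
```

Note this counts weights only; KV cache and runtime overhead come on top, which is why a 70B model is not listed as compatible even though 39.4 GB is nominally under 40 GB.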
▸ RENT IT IN THE CLOUD

Buying an A100 SXM4 40 GB costs $15–$40k and isn’t practical for most teams. Spin one up by the hour instead:

Spin up in ~60s. Pay by the second. Cancel anytime.

Some links are affiliate links — we may earn a small commission at no extra cost to you. This helps keep FitMyLLM free and independent.

▸ COMPATIBLE MODELS · 261

MODEL (SIZE) | TAGS | SPEED · VRAM
nomic-embed-text-v1.5 100M (0.1B) | EMBEDDING · 8K CTX · CHAT | 12480 TOK/S · 1% VRAM
GPT-2 124M (0.124B) | GPT2 · 1K CTX · CHAT | 10065 TOK/S · 1% VRAM
SmolLM2 135M (0.135B) | SMOLLM · 2K CTX · CHAT | 9244 TOK/S · 1% VRAM
bge-large-en-v1.5 335M (0.335B) | EMBEDDING · 1K CTX · CHAT | 3725 TOK/S · 2% VRAM
mxbai-embed-large-v1 (0.335B) | EMBEDDING · 1K CTX · EMBEDDING | 3725 TOK/S · 2% VRAM
Snowflake Arctic Embed L (0.335B) | EMBEDDING · 1K CTX · EMBEDDING | 3725 TOK/S · 2% VRAM
GPT-2 Medium 345M (0.345B) | GPT2 · 1K CTX · CHAT | 3617 TOK/S · 2% VRAM
SmolLM2 360M (0.36B) | SMOLLM · 8K CTX · CHAT | 3467 TOK/S · 2% VRAM
Falcon-H1 0.5B (0.5B) | FALCON · 128K CTX · CHAT | 2496 TOK/S · 2% VRAM
Qwen 1.5 0.5B (0.5B) | QWEN · 32K CTX · CHAT | 2496 TOK/S · 2% VRAM
Qwen 2.5 0.5B (0.5B) | QWEN · 32K CTX · CHAT | 2496 TOK/S · 2% VRAM
BGE-M3 (0.568B) | EMBEDDING · 8K CTX · EMBEDDING | 2197 TOK/S · 2% VRAM
Qwen3 0.6B (0.6B) | QWEN · 32K CTX · CHAT · REASONING | 2080 TOK/S · 2% VRAM
GPT-2 Large 774M (0.774B) | GPT2 · 1K CTX · CHAT | 1612 TOK/S · 2% VRAM
Qwen 3.5 0.8B (0.8B) | QWEN · 256K CTX · CHAT · CODING · MULTILINGUAL | 1560 TOK/S · 2% VRAM
Qwen3.5-0.8B (0.9B) | QWEN · 256K CTX · CHAT | 1387 TOK/S · 3% VRAM
Falcon3-1B (1B) | FALCON · 32K CTX · CHAT | 1248 TOK/S · 3% VRAM
InternLM2 1B (1B) | INTERNLM · 32K CTX · CHAT | 1248 TOK/S · 3% VRAM
TinyLlama 1.1B (1.1B) | LLAMA · 2K CTX · CHAT | 1135 TOK/S · 3% VRAM
LFM2.5-1.2B-Thinking (1.2B) | LFM · 122K CTX · CHAT · REASONING · TOOL_USE | 1040 TOK/S · 3% VRAM
Llama-3.2-1B (1.2B) | LLAMA · 4K CTX · CHAT | 1040 TOK/S · 3% VRAM
DeepSeek Coder 1.3B (1.3B) | DEEPSEEK · 16K CTX · CODING | 960 TOK/S · 3% VRAM
EXAONE-4.0-1.2B (1.3B) | EXAONE · 64K CTX · CHAT | 960 TOK/S · 3% VRAM
OPT 1.3B (1.3B) | OPT · 2K CTX · CHAT | 960 TOK/S · 3% VRAM
Phi-1 1.3B (1.3B) | PHI · 2K CTX · CODING | 960 TOK/S · 3% VRAM
Phi-1.5 1.3B (1.3B) | PHI · 2K CTX · CHAT · CODING | 960 TOK/S · 3% VRAM
granite-4.0-h-tiny 6.9B (6.9B MoE) | GRANITE · 128K CTX · CHAT | 832 TOK/S · 12% VRAM
Falcon-H1 1.5B (1.5B) | FALCON · 128K CTX · CHAT · CODING | 832 TOK/S · 4% VRAM
GPT-2 XL 1.5B (1.5B) | GPT2 · 1K CTX · CHAT | 832 TOK/S · 4% VRAM
Qwen2.5-Coder-1.5B (1.5B) | QWEN · 32K CTX · CHAT · TOOL_USE · CODING | 832 TOK/S · 4% VRAM
Qwen2 Math 1.5B (1.5B) | QWEN · 4K CTX · REASONING | 832 TOK/S · 4% VRAM
Qwen 2.5 1.5B (1.5B) | QWEN · 32K CTX · CHAT · CODING | 832 TOK/S · 4% VRAM
Yi Coder 1.5B (1.5B) | YI · 125K CTX · CODING | 832 TOK/S · 4% VRAM
stablelm-2-1_6b (1.6B) | STABLELM · 4K CTX · CHAT | 780 TOK/S · 4% VRAM
Qwen3 1.7B (1.7B) | QWEN · 32K CTX · CHAT · REASONING | 734 TOK/S · 4% VRAM
SmolLM2 1.7B (1.71B) | SMOLLM · 8K CTX · CHAT | 730 TOK/S · 4% VRAM
Qwen 1.5 1.8B (1.8B) | QWEN · 32K CTX · CHAT | 693 TOK/S · 4% VRAM
Moondream2 1.9B (1.9B) | OTHER · 2K CTX · VISION · CHAT | 657 TOK/S · 4% VRAM
Gemma 1 2B (2B) | GEMMA · 8K CTX · CHAT | 624 TOK/S · 4% VRAM
Granite 3.0 2B (2B) | GRANITE · 128K CTX · CHAT | 624 TOK/S · 4% VRAM
▸ NEXT STEP

Get personalized recommendations.

See ranked models with benchmark scores, run commands, and precise speed estimates for your A100 SXM4 40 GB.

▸ DEVICE UNDER TEST

NVIDIA A100 SXM4 40 GB · 40 GB VRAM.

A100 SXM4 40 GB SPEC
BRAND
NVIDIA
VRAM
40 GB HBM2e
BANDWIDTH
1560 GB/s
FP16 COMPUTE
78 TFLOPS
FP32 COMPUTE
19.5 TFLOPS
CUDA CORES
6,912
TENSOR CORES
432
TDP
400 W
ARCHITECTURE
Ampere
MSRP
$10,000
▸ AI CAPABILITY
261 / 331 models @ Q4

With 40 GB VRAM and 1560 GB/s bandwidth, this GPU handles models up to 51.6B parameters.

Speed ≈ memory bandwidth ÷ model size in bytes × efficiency. A 7B model at Q4 runs at ~178 tok/s.
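The rule of thumb above can be sketched numerically. This is a minimal estimate, assuming ~4.5 bits per weight at Q4 and a 45% bandwidth-efficiency factor — both constants are assumptions chosen because they reproduce this page's per-size figures (416, 178, 89, 39 tok/s):

```python
# Rough decode-speed estimate for memory-bandwidth-bound inference.
# Each generated token requires reading (roughly) all weights once, so:
#   tok/s ≈ bandwidth / weight_bytes × efficiency
BANDWIDTH_GBS = 1560      # A100 SXM4 40 GB memory bandwidth
Q4_BITS_PER_PARAM = 4.5   # assumed average bits/weight at Q4
EFFICIENCY = 0.45         # assumed fraction of peak bandwidth achieved

def q4_size_gb(params_b: float) -> float:
    """Approximate Q4 weight size in GB for params_b billion parameters."""
    return params_b * Q4_BITS_PER_PARAM / 8

def est_tok_s(params_b: float) -> float:
    """Estimated decode speed in tokens/second."""
    return BANDWIDTH_GBS / q4_size_gb(params_b) * EFFICIENCY

for size in (3, 7, 14, 32):
    print(f"{size}B: ~{est_tok_s(size):.0f} tok/s")
# 3B: ~416, 7B: ~178, 14B: ~89, 32B: ~39
```

For MoE models, substituting active parameters for total parameters gives the higher speeds seen in the table below (e.g. ~96 tok/s for Mixtral's ~12.9B active), while VRAM is still governed by total parameters.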

§ 01 · TOP MODELS FOR A100 SXM4 40 GB
261 FIT · SHOWING 20

MODEL | SIZE | VRAM Q4 | TOK/S | AVG
Jamba 1.5 Mini 52B | 51.6B | 32.0 GB | 104 | 24.2
Kimi-Linear-48B-A3B | 48B | 29.8 GB | 416 | 26.6
Nemotron-H 47B | 47B | 29.2 GB | 27 | 84.6
Mixtral-8x7B | 46.7B | 29.0 GB | 96 | 18.8
Nous-Hermes-2-Mixtral-8x7B-DPO | 46.7B | 29.0 GB | 96 | 27.4
Dolphin 2.6 Mixtral 8x7B | 46.7B | 29.0 GB | 96 | 23.8
Phi-3.5 MoE 42B | 41.9B | 26.1 GB | 189 | 56.7
Falcon 40B | 40B | 24.9 GB | 31 | 20.9
Qwen3.5-35B-A3B | 36B | 22.5 GB | 35 | 48.5
c4ai-command-r-v01 35B | 35B | 21.9 GB | 36 | 27.5
Qwen 3.5 35B A3B | 35B | 21.9 GB | 416 | 53.3
Qwen 3.6 35B A3B | 35B | 21.9 GB | 416 | 62.7
Nous Capybara 34B | 34.4B | 21.5 GB | 36 | 42.0
Yi-1.5 34B | 34.4B | 21.5 GB | 36 | 45.3
Falcon-H1 34B | 34B | 21.3 GB | 37 | 66.1
CodeLlama 34B | 34B | 21.3 GB | 37 | 25.4
Nous Hermes 2 34B | 34B | 21.3 GB | 37 | 47.0
Phind CodeLlama 34B | 34B | 21.3 GB | 37 | 68.1
LLaVA-1.6 Yi 34B | 34B | 21.3 GB | 37 | 47.4
WizardCoder Python 34B | 34B | 21.3 GB | 37 | 73.2