NVIDIA · HOPPER

NVIDIA H100 SXM5 64 GB

VRAM · 64 GB · FLAGSHIP
BANDWIDTH · 2020 GB/S
MODELS @ Q4 · 281/331 · 85%
7B Q4 SPEED · ~231 TOK/S · BLAZING
▸ MODEL COVERAGE @ Q4 · 85% OF ALL MODELS
▸ ESTIMATED SPEED · BY MODEL SIZE @ Q4

Average speeds at Q4 quantization. Actual performance varies by model architecture and context length.

3B · ~539 TOK/S
7B · ~231 TOK/S
14B · ~115 TOK/S
32B · ~51 TOK/S
70B · ~23 TOK/S
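
These estimates follow a simple bandwidth-bound decode model. Below is a minimal sketch that reproduces them; the ~0.62 GB-per-billion-parameter Q4 footprint and the 50% bandwidth efficiency are assumptions fitted to this page's own numbers, not measured values:

```python
# Hypothetical reconstruction of the speed chart above. The two constants
# are assumptions fitted to this page's figures, not published parameters.
BANDWIDTH_GBPS = 2020        # H100 SXM5 memory bandwidth (spec table below)
GB_PER_B_PARAMS_Q4 = 0.62    # assumed Q4 footprint per billion params
EFFICIENCY = 0.5             # assumed fraction of peak bandwidth realized

def est_tok_per_s(params_b: float) -> float:
    """Decode speed ~= bandwidth / model bytes * efficiency."""
    return BANDWIDTH_GBPS / (params_b * GB_PER_B_PARAMS_Q4) * EFFICIENCY

for size_b in (3, 7, 14, 32, 70):
    print(f"{size_b}B: ~{est_tok_per_s(size_b):.0f} tok/s")
# 3B: ~543, 7B: ~233, 14B: ~116, 32B: ~51, 70B: ~23 -- close to the chart
```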
▸ SPECIFICATIONS
VRAM · 64 GB
BANDWIDTH · 2020 GB/s
FP16 COMPUTE · 267.6 TFLOPS
TDP · 700 W
MEMORY · HBM3
ARCHITECTURE · Hopper
CUDA CORES · 16,896
TENSOR CORES · 528
PCIE · Gen 5 x16
MSRP · $25,000
263 FAST MODELS · >30 TOK/S · Real-time chat speed
281 USABLE · >10 TOK/S · Comfortable for all tasks
281 TOTAL COMPATIBLE · Fit in VRAM at Q4
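
A hedged sketch of how such buckets could be computed; the >30 and >10 tok/s thresholds come from the cards above, while the footprint and efficiency constants are the same fitted assumptions used throughout this page:

```python
# Hypothetical bucketing of a model catalog against this GPU. Thresholds
# (>30, >10 tok/s) come from the cards above; the per-parameter footprint
# and 50% bandwidth efficiency are assumptions fitted to this page.
VRAM_GB = 64
BANDWIDTH_GBPS = 2020
GB_PER_B_PARAMS_Q4 = 0.62
EFFICIENCY = 0.5

def classify(params_b: float) -> str:
    footprint_gb = params_b * GB_PER_B_PARAMS_Q4
    if footprint_gb > VRAM_GB:
        return "too large"                      # does not fit at Q4
    speed = BANDWIDTH_GBPS / footprint_gb * EFFICIENCY
    if speed > 30:
        return "fast"                           # real-time chat speed
    return "usable" if speed > 10 else "compatible"

print(classify(7))    # fast (~233 tok/s)
print(classify(70))   # usable (~23 tok/s)
print(classify(120))  # too large (~74 GB at Q4)
```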
▸ RENT IT IN THE CLOUD

Buying an H100 SXM5 64 GB costs $15k–$40k and isn't practical for most teams. Spin one up by the hour instead:

Spin up in ~60s. Pay by the second. Cancel anytime.

Some links are affiliate links — we may earn a small commission at no extra cost to you. This helps keep FitMyLLM free and independent.

▸ COMPATIBLE MODELS · 281
S · nomic-embed-text-v1.5 100M · 0.1B · EMBEDDING · 8K CTX · CHAT · 16160 TOK/S · 1% VRAM
S · GPT-2 124M · 0.124B · GPT2 · 1K CTX · CHAT · 13032 TOK/S · 1% VRAM
S · SmolLM2 135M · 0.135B · SMOLLM · 2K CTX · CHAT · 11970 TOK/S · 1% VRAM
S · bge-large-en-v1.5 335M · 0.335B · EMBEDDING · 1K CTX · CHAT · 4824 TOK/S · 1% VRAM
S · mxbai-embed-large-v1 · 0.335B · EMBEDDING · 1K CTX · EMBEDDING · 4824 TOK/S · 1% VRAM
S · Snowflake Arctic Embed L · 0.335B · EMBEDDING · 1K CTX · EMBEDDING · 4824 TOK/S · 1% VRAM
S · GPT-2 Medium 345M · 0.345B · GPT2 · 1K CTX · CHAT · 4684 TOK/S · 1% VRAM
S · SmolLM2 360M · 0.36B · SMOLLM · 8K CTX · CHAT · 4489 TOK/S · 1% VRAM
S · Falcon-H1 0.5B · 0.5B · FALCON · 128K CTX · CHAT · 3232 TOK/S · 1% VRAM
S · Qwen 1.5 0.5B · 0.5B · QWEN · 32K CTX · CHAT · 3232 TOK/S · 1% VRAM
S · Qwen 2.5 0.5B · 0.5B · QWEN · 32K CTX · CHAT · 3232 TOK/S · 1% VRAM
S · BGE-M3 · 0.568B · EMBEDDING · 8K CTX · EMBEDDING · 2845 TOK/S · 1% VRAM
S · Qwen3 0.6B · 0.6B · QWEN · 32K CTX · CHAT · REASONING · 2693 TOK/S · 1% VRAM
S · GPT-2 Large 774M · 0.774B · GPT2 · 1K CTX · CHAT · 2088 TOK/S · 2% VRAM
S · Qwen 3.5 0.8B · 0.8B · QWEN · 256K CTX · CHAT · CODING · MULTILINGUAL · 2020 TOK/S · 2% VRAM
S · Qwen3.5-0.8B · 0.9B · QWEN · 256K CTX · CHAT · 1796 TOK/S · 2% VRAM
S · Falcon3-1B · 1B · FALCON · 32K CTX · CHAT · 1616 TOK/S · 2% VRAM
S · InternLM2 1B · 1B · INTERNLM · 32K CTX · CHAT · 1616 TOK/S · 2% VRAM
S · TinyLlama 1.1B · 1.1B · LLAMA · 2K CTX · CHAT · 1469 TOK/S · 2% VRAM
S · LFM2.5-1.2B-Thinking · 1.2B · LFM · 122K CTX · CHAT · REASONING · TOOL_USE · 1347 TOK/S · 2% VRAM
S · Llama-3.2-1B · 1.2B · LLAMA · 4K CTX · CHAT · 1347 TOK/S · 2% VRAM
S · DeepSeek Coder 1.3B · 1.3B · DEEPSEEK · 16K CTX · CODING · 1243 TOK/S · 2% VRAM
S · EXAONE-4.0-1.2B · 1.3B · EXAONE · 64K CTX · CHAT · 1243 TOK/S · 2% VRAM
S · OPT 1.3B · 1.3B · OPT · 2K CTX · CHAT · 1243 TOK/S · 2% VRAM
S · Phi-1 1.3B · 1.3B · PHI · 2K CTX · CODING · 1243 TOK/S · 2% VRAM
S · Phi-1.5 1.3B · 1.3B · PHI · 2K CTX · CHAT · CODING · 1243 TOK/S · 2% VRAM
S · granite-4.0-h-tiny · 6.9B MoE · GRANITE · 128K CTX · CHAT · 1077 TOK/S · 7% VRAM
S · Falcon-H1 1.5B · 1.5B · FALCON · 128K CTX · CHAT · CODING · 1077 TOK/S · 2% VRAM
S · GPT-2 XL 1.5B · 1.5B · GPT2 · 1K CTX · CHAT · 1077 TOK/S · 2% VRAM
S · Qwen2.5-Coder-1.5B · 1.5B · QWEN · 32K CTX · CHAT · TOOL_USE · CODING · 1077 TOK/S · 2% VRAM
S · Qwen2 Math 1.5B · 1.5B · QWEN · 4K CTX · REASONING · 1077 TOK/S · 2% VRAM
S · Qwen 2.5 1.5B · 1.5B · QWEN · 32K CTX · CHAT · CODING · 1077 TOK/S · 2% VRAM
S · Yi Coder 1.5B · 1.5B · YI · 125K CTX · CODING · 1077 TOK/S · 2% VRAM
S · stablelm-2-1_6b · 1.6B · STABLELM · 4K CTX · CHAT · 1010 TOK/S · 2% VRAM
S · Qwen3 1.7B · 1.7B · QWEN · 32K CTX · CHAT · REASONING · 951 TOK/S · 2% VRAM
S · SmolLM2 1.7B · 1.71B · SMOLLM · 8K CTX · CHAT · 945 TOK/S · 2% VRAM
S · Qwen 1.5 1.8B · 1.8B · QWEN · 32K CTX · CHAT · 898 TOK/S · 2% VRAM
S · Moondream2 1.9B · 1.9B · OTHER · 2K CTX · VISION · CHAT · 851 TOK/S · 3% VRAM
S · Gemma 1 2B · 2B · GEMMA · 8K CTX · CHAT · 808 TOK/S · 3% VRAM
S · Granite 3.0 2B · 2B · GRANITE · 128K CTX · CHAT · CODING · 808 TOK/S · 3% VRAM
▸ NEXT STEP

Get personalized recommendations.

See ranked models with benchmark scores, run commands, and precise speed estimates for your H100 SXM5 64 GB.

▸ DEVICE UNDER TEST

NVIDIA H100 SXM5 64 GB · 64 GB VRAM.

H100 SXM5 64 GB SPEC
BRAND · NVIDIA
VRAM · 64 GB HBM3
BANDWIDTH · 2020 GB/s
FP16 COMPUTE · 267.6 TFLOPS
FP32 COMPUTE · 66.9 TFLOPS
CUDA CORES · 16,896
TENSOR CORES · 528
TDP · 700 W
ARCHITECTURE · Hopper
MSRP · $25,000
▸ AI CAPABILITY
281 / 331 models @ Q4

With 64 GB VRAM and 2020 GB/s bandwidth, this GPU handles models up to 90B parameters at Q4.

Speed ≈ bandwidth / model_size × efficiency, where model_size is the model's footprint in GB at the chosen quantization. A 7B model at Q4 runs at ~231 tok/s.
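
A worked instance of that formula, assuming a Q4 footprint of roughly 0.62 GB per billion parameters (inferred from the 70B ≈ 43.6 GB rows below) and ~50% effective bandwidth:

```latex
\text{speed} \approx \frac{2020\ \text{GB/s}}{7 \times 0.62\ \text{GB}} \times 0.5 \approx 233\ \text{tok/s}
```

which matches the ~231 tok/s figure above to within rounding.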

§ 01 · TOP MODELS FOR H100 SXM5 64 GB
281 FIT · SHOWING 20
MODEL · SIZE · VRAM Q4 · TOK/S · AVG
Llama-3.2-90B-Vision-Instruct · 90B · 55.5 GB · 18 · 48.5
Hunyuan A13B · 80B · 49.4 GB · 124 · 81.1
Qwen3-Coder-Next · 80B · 49.4 GB · 539 · 43.0
Qwen2.5-72B · 72.7B · 44.9 GB · 22 · 39.7
Qwen2-VL 72B · 72.7B · 44.9 GB · 22 · 55.5
Qwen 1.5 72B · 72B · 44.5 GB · 22 · 49.7
Qwen2 Math 72B · 72B · 44.5 GB · 22 · 49.7
DeepSeek R1 Distill Llama 70B · 70.6B · 43.6 GB · 23 · 42.4
Llama 3.3 70B · 70.6B · 43.6 GB · 23 · 44.8
Llama 3.1 70B · 70.6B · 43.6 GB · 23 · 33.2
Llama 3 70B · 70.6B · 43.6 GB · 23 · 44.1
Llama-3.1-Nemotron-70B · 70.6B · 43.6 GB · 23 · 43.7
Cogito 70B · 70B · 43.3 GB · 23 · —
Llama 2 70B · 70B · 43.3 GB · 23 · 33.4
CodeLlama 70B · 70B · 43.3 GB · 23 · 45.7
Dolphin Llama 3 70B · 70B · 43.3 GB · 23 · 45.7
Tulu 3 70B · 70B · 43.3 GB · 23 · 59.4
WizardLM 70B · 70B · 43.3 GB · 23 · 28.5
OPT 66B · 66B · 40.8 GB · 24 · —
LLaMA 1 65B · 65.2B · 40.3 GB · 25 · 42.6
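
Note the two MoE rows (Hunyuan A13B, Qwen3-Coder-Next): they decode far faster than dense models of similar total size, which is consistent with speed tracking active rather than total parameters. A hedged check, reading the 13B active count off the A13B name and reusing the fitted constants from above:

```latex
\text{speed} \approx \frac{2020\ \text{GB/s}}{13 \times 0.62\ \text{GB}} \times 0.5 \approx 125\ \text{tok/s}
```

close to the 124 tok/s listed for Hunyuan A13B.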