▸ DEVICE UNDER TEST
NVIDIA H100 SXM5 — 64 GB HBM3 VRAM.
▸ H100 SXM5 64 GB SPEC
- BRAND: NVIDIA
- VRAM: 64 GB HBM3
- BANDWIDTH: 2020 GB/s
- FP16 COMPUTE: 267.6 TFLOPS
- FP32 COMPUTE: 66.9 TFLOPS
- CUDA CORES: 16,896
- TENSOR CORES: 528
- TDP: 700 W
- ARCHITECTURE: Hopper
- MSRP: $25,000
▸ AI CAPABILITY
281 / 331 models @ Q4
With 64 GB of VRAM and 2020 GB/s of memory bandwidth, this GPU can hold models up to roughly 90B parameters at Q4 quantization.
Decode speed ≈ (memory bandwidth ÷ model footprint) × efficiency. A 7B model at Q4 (~4.3 GB of weights) runs at roughly 231 tok/s.
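The estimate above can be sketched in a few lines. A minimal sketch, assuming a Q4 footprint of ~0.618 GB per billion parameters and a ~50% bandwidth-efficiency factor, both back-fitted from the table below rather than published constants:

```python
# Rough capacity/speed estimator for the H100 SXM5 64 GB figures above.
# Q4_GB_PER_B (~0.618 GB per billion params) and EFFICIENCY (~0.5) are
# assumptions inferred from the listed numbers, not official constants.
BANDWIDTH_GBS = 2020.0
VRAM_GB = 64.0
Q4_GB_PER_B = 0.618
EFFICIENCY = 0.5

def q4_vram_gb(params_b: float) -> float:
    """Approximate Q4 weight footprint in GB."""
    return params_b * Q4_GB_PER_B

def fits(params_b: float) -> bool:
    """Do the Q4 weights fit in this card's VRAM?"""
    return q4_vram_gb(params_b) <= VRAM_GB

def est_tok_s(params_b: float) -> float:
    """Decode speed: every token reads all weights once, so
    tok/s ~ bandwidth / footprint, scaled by an efficiency factor."""
    return BANDWIDTH_GBS / q4_vram_gb(params_b) * EFFICIENCY

print(fits(90.0), round(est_tok_s(70.6)))   # 90B fits; dense 70B ≈ 23 tok/s
```

Note this only holds for dense models: mixture-of-experts entries in the table (e.g. Hunyuan A13B, Qwen3-Coder-Next) read only their active parameters per token, which is why their tok/s figures far exceed this estimate.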
§ 01 TOP MODELS FOR H100 SXM5 64 GB
281 FIT · SHOWING 20

| MODEL | SIZE | VRAM @ Q4 | TOK/S | AVG |
|---|---|---|---|---|
| Llama-3.2-90B-Vision-Instruct | 90B | 55.5 GB | 18 | 48.5 |
| Hunyuan A13B | 80B | 49.4 GB | 124 | 81.1 |
| Qwen3-Coder-Next | 80B | 49.4 GB | 539 | 43.0 |
| Qwen2.5-72B | 72.7B | 44.9 GB | 22 | 39.7 |
| Qwen2-VL 72B | 72.7B | 44.9 GB | 22 | 55.5 |
| Qwen 1.5 72B | 72B | 44.5 GB | 22 | 49.7 |
| Qwen2 Math 72B | 72B | 44.5 GB | 22 | 49.7 |
| DeepSeek R1 Distill Llama 70B | 70.6B | 43.6 GB | 23 | 42.4 |
| Llama 3.3 70B | 70.6B | 43.6 GB | 23 | 44.8 |
| Llama 3.1 70B | 70.6B | 43.6 GB | 23 | 33.2 |
| Llama 3 70B | 70.6B | 43.6 GB | 23 | 44.1 |
| Llama-3.1-Nemotron-70B | 70.6B | 43.6 GB | 23 | 43.7 |
| Cogito 70B | 70B | 43.3 GB | 23 | — |
| Llama 2 70B | 70B | 43.3 GB | 23 | 33.4 |
| CodeLlama 70B | 70B | 43.3 GB | 23 | 45.7 |
| Dolphin Llama 3 70B | 70B | 43.3 GB | 23 | 45.7 |
| Tulu 3 70B | 70B | 43.3 GB | 23 | 59.4 |
| WizardLM 70B | 70B | 43.3 GB | 23 | 28.5 |
| OPT 66B | 66B | 40.8 GB | 24 | — |
| LLaMA 1 65B | 65.2B | 40.3 GB | 25 | 42.6 |