▸ DEVICE UNDER TEST
NVIDIA H800 PCIe, 80 GB HBM2e VRAM (Hopper).
▸ H800 PCIE 80 GB SPEC
| SPEC | VALUE |
|---|---|
| Brand | NVIDIA |
| VRAM | 80 GB HBM2e |
| Bandwidth | 2040 GB/s |
| FP16 compute | 204.9 TFLOPS |
| FP32 compute | 51.2 TFLOPS |
| CUDA cores | 14,592 |
| Tensor cores | 456 |
| TDP | 350 W |
| Architecture | Hopper |
▸ AI CAPABILITY
287 / 331 models @ Q4
With 80 GB VRAM and 2040 GB/s bandwidth, this GPU handles models up to 111B parameters at Q4.
Decode speed is memory-bandwidth-bound: speed ≈ (bandwidth / model size in GB) × efficiency. At ~50% bandwidth efficiency, a 7B model at Q4 (~4.4 GB of weights) runs at ~233 tok/s.
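The estimate above can be sketched in a few lines. The bandwidth figure comes from the spec table; the 0.5 efficiency factor is an assumption fitted to the dense-model rows in the table below, not a measured constant.

```python
# Rough decode-speed estimate for memory-bandwidth-bound LLM inference.
BANDWIDTH_GBPS = 2040.0   # H800 PCIe memory bandwidth (spec table)
EFFICIENCY = 0.5          # fraction of peak bandwidth realized (assumed)

def est_tok_s(model_size_gb: float) -> float:
    """tok/s ≈ bandwidth / model size × efficiency (dense models)."""
    return BANDWIDTH_GBPS / model_size_gb * EFFICIENCY

print(round(est_tok_s(43.6)))  # Llama 3.3 70B @ Q4 -> 23, matching the table
print(round(est_tok_s(68.3)))  # Command A 111B @ Q4 -> 15
```

MoE models (e.g. GLM 4.5 Air, Qwen3-Coder-Next in the table) beat this estimate because only the active experts' weights are read per token, not the full model.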
§ 01 TOP MODELS FOR H800 PCIE 80 GB
287 FIT · SHOWING 20

| MODEL | SIZE | VRAM Q4 | TOK/S | AVG |
|---|---|---|---|---|
| Command A 111B | 111B | 68.3 GB | 15 | 27.6 |
| GLM 4.5 Air | 110B | 67.7 GB | 136 | 51.0 |
| Qwen 1.5 110B | 110B | 67.7 GB | 15 | 33.4 |
| Llama 4 Scout 17B-16E | 109B | 67.1 GB | 96 | 33.9 |
| Sarvam 105B | 105B | 64.7 GB | 16 | 48.0 |
| Command-R+ 104B | 104B | 64.1 GB | 16 | 52.7 |
| Llama-3.2-90B-Vision-Instruct | 90B | 55.5 GB | 18 | 48.5 |
| Hunyuan A13B | 80B | 49.4 GB | 126 | 81.1 |
| Qwen3-Coder-Next | 80B | 49.4 GB | 544 | 43.0 |
| Qwen2.5-72B | 72.7B | 44.9 GB | 22 | 39.7 |
| Qwen2-VL 72B | 72.7B | 44.9 GB | 22 | 55.5 |
| Qwen 1.5 72B | 72B | 44.5 GB | 23 | 49.7 |
| Qwen2 Math 72B | 72B | 44.5 GB | 23 | 49.7 |
| DeepSeek R1 Distill Llama 70B | 70.6B | 43.6 GB | 23 | 42.4 |
| Llama 3.3 70B | 70.6B | 43.6 GB | 23 | 44.8 |
| Llama 3.1 70B | 70.6B | 43.6 GB | 23 | 33.2 |
| Llama 3 70B | 70.6B | 43.6 GB | 23 | 44.1 |
| Llama-3.1-Nemotron-70B | 70.6B | 43.6 GB | 23 | 43.7 |
| Cogito 70B | 70B | 43.3 GB | 23 | — |
| Llama 2 70B | 70B | 43.3 GB | 23 | 33.4 |
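The VRAM Q4 column tracks a near-constant ratio of roughly 0.617 GB per billion parameters (70.6B → 43.6 GB, 111B → 68.3 GB), i.e. about 4.9 bits/param including quantization overhead. A minimal sketch of a fit check, where the 8 GB headroom for KV cache and activations is an assumed figure:

```python
# Back-of-envelope Q4 footprint, fitted to the VRAM Q4 column above.
GB_PER_B_PARAMS_Q4 = 0.617  # empirical ratio from the table (~4.9 bits/param)

def q4_vram_gb(params_b: float) -> float:
    """Approximate Q4 weight footprint in GB for a dense model."""
    return params_b * GB_PER_B_PARAMS_Q4

def fits_in_vram(params_b: float, vram_gb: float = 80.0,
                 headroom_gb: float = 8.0) -> bool:
    # headroom_gb is an assumption; the real cutoff depends on
    # context length and batch size.
    return q4_vram_gb(params_b) + headroom_gb <= vram_gb

print(round(q4_vram_gb(111), 1))  # ~68.5 GB, close to the 68.3 GB listed
```

With these assumptions, `fits_in_vram(111)` is true while a ~130B dense model would not fit, consistent with the 111B ceiling quoted above.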