▸ DEVICE UNDER TEST
NVIDIA A100 PCIe, 40 GB HBM2 VRAM.
▸ A100 PCIE 40 GB SPEC
| SPEC | VALUE |
|---|---|
| BRAND | NVIDIA |
| VRAM | 40 GB HBM2 |
| BANDWIDTH | 1560 GB/s |
| FP16 COMPUTE | 78 TFLOPS |
| FP32 COMPUTE | 19.5 TFLOPS |
| CUDA CORES | 6,912 |
| TENSOR CORES | 432 |
| TDP | 250 W |
| ARCHITECTURE | Ampere |
| MSRP | $10,000 |
▸ AI CAPABILITY
261 / 331 models @ Q4
With 40 GB of VRAM and 1560 GB/s of memory bandwidth, this GPU handles models up to 51.6B parameters at Q4: roughly 32 GB of weights, leaving ~8 GB of headroom for KV cache and activations.
Generation speed is approximately bandwidth / model_size × efficiency, since each generated token streams the active weights from VRAM once. A 7B model at Q4 (~4.4 GB of weights) runs at ~178 tok/s, which corresponds to roughly 50% of peak bandwidth.
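A minimal sketch of that estimator in Python. The constants are assumptions, not vendor specs: ~0.62 GB per billion parameters at Q4 is inferred from the VRAM column of the table below, ~80% of VRAM usable for weights is inferred from the 51.6B fit, and 50% effective bandwidth is chosen so the output lands within rounding of the quoted ~178 tok/s.

```python
# Back-of-envelope estimator for Q4 LLM inference on a bandwidth-bound GPU.
# All constants are assumptions fitted to this page's numbers.

BANDWIDTH_GBPS = 1560    # A100 PCIe 40 GB memory bandwidth (spec table above)
VRAM_GB = 40
Q4_GB_PER_B = 0.62       # assumed Q4 footprint per billion parameters
USABLE_VRAM = 0.80       # assumed fraction of VRAM available for weights
EFFICIENCY = 0.50        # assumed fraction of peak bandwidth achieved

def q4_vram_gb(params_b: float) -> float:
    """Approximate VRAM footprint of a Q4-quantized model."""
    return params_b * Q4_GB_PER_B

def tokens_per_sec(active_params_b: float) -> float:
    """Each token streams the active weights once, so speed ~ bandwidth / size."""
    return BANDWIDTH_GBPS / q4_vram_gb(active_params_b) * EFFICIENCY

max_fit_b = VRAM_GB * USABLE_VRAM / Q4_GB_PER_B
print(f"Largest dense fit: ~{max_fit_b:.1f}B params")   # ~51.6B
print(f"7B @ Q4: {q4_vram_gb(7):.1f} GB, ~{tokens_per_sec(7):.0f} tok/s")
```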
§ 01 TOP MODELS FOR A100 PCIE 40 GB
261 FIT · SHOWING 20

| MODEL | SIZE | VRAM @ Q4 | TOK/S | AVG |
|---|---|---|---|---|
| Jamba 1.5 Mini 52B | 51.6B | 32.0 GB | 104 | 24.2 |
| Kimi-Linear-48B-A3B | 48B | 29.8 GB | 416 | 26.6 |
| Nemotron-H 47B | 47B | 29.2 GB | 27 | 84.6 |
| Mixtral-8x7B | 46.7B | 29.0 GB | 96 | 18.8 |
| Nous-Hermes-2-Mixtral-8x7B-DPO | 46.7B | 29.0 GB | 96 | 27.4 |
| Dolphin 2.6 Mixtral 8x7B | 46.7B | 29.0 GB | 96 | 23.8 |
| Phi-3.5 MoE 42B | 41.9B | 26.1 GB | 189 | 56.7 |
| Falcon 40B | 40B | 24.9 GB | 31 | 20.9 |
| Qwen3.5-35B-A3B | 36B | 22.5 GB | 35 | 48.5 |
| c4ai-command-r-v01 35B | 35B | 21.9 GB | 36 | 27.5 |
| Qwen 3.5 35B A3B | 35B | 21.9 GB | 416 | 53.3 |
| Qwen 3.6 35B A3B | 35B | 21.9 GB | 416 | 62.7 |
| Nous Capybara 34B | 34.4B | 21.5 GB | 36 | 42.0 |
| Yi-1.5 34B | 34.4B | 21.5 GB | 36 | 45.3 |
| Falcon-H1 34B | 34B | 21.3 GB | 37 | 66.1 |
| CodeLlama 34B | 34B | 21.3 GB | 37 | 25.4 |
| Nous Hermes 2 34B | 34B | 21.3 GB | 37 | 47.0 |
| Phind CodeLlama 34B | 34B | 21.3 GB | 37 | 68.1 |
| LLaVA-1.6 Yi 34B | 34B | 21.3 GB | 37 | 47.4 |
| WizardCoder Python 34B | 34B | 21.3 GB | 37 | 73.2 |
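As a sanity check, the same estimator reproduces the dense rows above to within rounding. For MoE entries, speed tracks the active rather than total parameter count; the ~3B-active figure for Kimi-Linear below is an assumption read off the "A3B" in its name.

```python
# Recompute a few table rows with the same assumed constants as above.
Q4_GB_PER_B, BW, EFF = 0.62, 1560, 0.5

rows = [
    ("Falcon 40B", 40, 40),          # dense: all params active per token
    ("CodeLlama 34B", 34, 34),       # dense
    ("Kimi-Linear-48B-A3B", 48, 3),  # MoE: ~3B active (assumed from name)
]
for name, total_b, active_b in rows:
    vram = total_b * Q4_GB_PER_B
    toks = BW / (active_b * Q4_GB_PER_B) * EFF
    print(f"{name}: {vram:.1f} GB, ~{toks:.0f} tok/s")
# Falcon 40B: 24.8 GB, ~31 tok/s            (table: 24.9 GB, 31)
# CodeLlama 34B: 21.1 GB, ~37 tok/s         (table: 21.3 GB, 37)
# Kimi-Linear-48B-A3B: 29.8 GB, ~419 tok/s  (table: 29.8 GB, 416)
```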