▸ DEVICE UNDER TEST
NVIDIA Tesla P100 PCIe 12 GB (12 GB HBM2 VRAM).
▸ TESLA P100 PCIE 12 GB SPEC
- BRAND: NVIDIA
- VRAM: 12 GB HBM2
- BANDWIDTH: 549 GB/s
- FP16 COMPUTE: 19.1 TFLOPS
- FP32 COMPUTE: 9.5 TFLOPS
- CUDA CORES: 3,584
- TDP: 250 W
- ARCHITECTURE: Pascal
▸ AI CAPABILITY
194/331 models @ Q4
With 12 GB VRAM and 549 GB/s bandwidth, this GPU handles models up to 16.8B parameters.
Decode speed is roughly memory-bound: tok/s ≈ bandwidth / model size × efficiency. A 7B model at Q4 runs at ~63 tok/s.
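The heuristic above can be sketched in a few lines. The bytes-per-parameter at Q4 and the efficiency factor are assumptions inferred from this page's numbers, not published figures:

```python
# Sketch of the page's speed heuristic: decoding is memory-bound, so each
# generated token must stream the full quantized weight set from VRAM.
# Q4_BYTES_PER_PARAM and EFFICIENCY are assumed values, tuned to roughly
# match the table below; real throughput varies by runtime and model.

BANDWIDTH_GBPS = 549         # Tesla P100 PCIe 12 GB memory bandwidth
Q4_BYTES_PER_PARAM = 0.64    # ~4.5 bits/weight quant plus overhead (assumed)
EFFICIENCY = 0.5             # fraction of peak bandwidth realized (assumed)

def q4_vram_gb(params_b: float) -> float:
    """Approximate weight footprint in GB for a Q4 model of params_b billion."""
    return params_b * Q4_BYTES_PER_PARAM

def est_tok_per_s(params_b: float) -> float:
    """tok/s ≈ bandwidth / model bytes × efficiency."""
    return BANDWIDTH_GBPS / q4_vram_gb(params_b) * EFFICIENCY

print(round(q4_vram_gb(7.0), 1))   # weight footprint for a 7B model
print(round(est_tok_per_s(7.0)))   # ballpark decode speed, tokens/second
```

With these assumed constants a 7B model lands near the ~60 tok/s ballpark quoted above; tightening `EFFICIENCY` shifts the estimate proportionally.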
§ 01 TOP MODELS FOR TESLA P100 PCIE 12 GB
194 FIT · SHOWING 20

| MODEL | SIZE | VRAM @ Q4 | TOK/S | AVG |
|---|---|---|---|---|
| Ling-lite 16.8B | 16.8B | 10.8 GB | 183 | — |
| DeepSeek V2 Lite 16B | 16B | 10.3 GB | 183 | 38.0 |
| DeepSeek-Coder-V2-Lite 15.7B | 15.7B | 10.1 GB | 183 | 43.0 |
| DeepSeek-VL2 Small 16B | 15.7B | 10.1 GB | 183 | 43.1 |
| StarCoder 15B | 15.5B | 10.0 GB | 28 | 21.0 |
| StarCoder2 15B | 15B | 9.7 GB | 29 | 26.5 |
| DeepSeek R1 Distill Qwen 14B | 14.8B | 9.5 GB | 30 | 43.9 |
| DeepCoder 14B | 14.8B | 9.5 GB | 30 | 38.7 |
| Qwen2.5-Coder-14B | 14.8B | 9.5 GB | 30 | 41.3 |
| Qwen2.5-14B | 14.8B | 9.5 GB | 30 | 41.3 |
| Qwen3 14B | 14.8B | 9.5 GB | 30 | 45.7 |
| Ministral 3 14B | 14B | 9.0 GB | 31 | 25.9 |
| Phi-3-medium-14b | 14B | 9.0 GB | 31 | 33.7 |
| phi-4 14B | 14B | 9.0 GB | 31 | 33.7 |
| Phi-4-reasoning 14B | 14B | 9.0 GB | 31 | 33.7 |
| Phi-4-multimodal 14B | 14B | 9.0 GB | 31 | 42.0 |
| Qwen 1.5 14B | 14B | 9.0 GB | 31 | 41.3 |
| LLaVA-1.5 13B | 13.1B | 8.5 GB | 34 | 52.1 |
| Baichuan2 13B | 13B | 8.4 GB | 34 | 23.6 |
| Llama 2 13B | 13B | 8.4 GB | 34 | 19.7 |
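A fit check like the one behind the "194 FIT" count can be sketched from the table's VRAM column. The headroom reserved for KV cache and runtime overhead is an assumption; the page's own margin may differ:

```python
# Filter models to those that fit on a 12 GB card with headroom left for
# KV cache, activations, and the CUDA context. HEADROOM_GB is assumed;
# the page's "194 FIT" count may use a different margin.

VRAM_GB = 12.0
HEADROOM_GB = 1.2  # reserved for KV cache and runtime overhead (assumed)

models = [  # (name, VRAM @ Q4 in GB) -- a few rows from the table above
    ("Ling-lite 16.8B", 10.8),
    ("DeepSeek V2 Lite 16B", 10.3),
    ("Qwen3 14B", 9.5),
    ("Llama 2 13B", 8.4),
]

fits = [name for name, vram in models if vram + HEADROOM_GB <= VRAM_GB]
print(fits)
```

Under this margin even the largest listed model (10.8 GB at Q4) squeezes in, which matches its appearance at the top of the table.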