▸ DEVICE UNDER TEST
AMD Instinct MI300A — 120 GB VRAM.
▸ INSTINCT MI300A SPEC
- BRAND: AMD
- VRAM: 120 GB HBM3
- BANDWIDTH: 5300 GB/s
- FP16 COMPUTE: 980.6 TFLOPS
- FP32 COMPUTE: 122.6 TFLOPS
- TDP: 550 W
- ARCHITECTURE: CDNA3
- MSRP: $12000
▸ AI CAPABILITY
300 of 331 tested models run @ Q4
With 120 GB of VRAM and 5300 GB/s of memory bandwidth, this GPU handles models up to 142.8B parameters at Q4 quantization.
Decode speed ≈ bandwidth / model_size × efficiency. A 7B model at Q4 runs at ~606 tok/s.
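The sizing math above can be sketched in a few lines. The ~0.615 GB-per-billion-parameters Q4 factor is inferred from the table below (142.8B → 87.8 GB), and the ~0.5 effective-bandwidth efficiency is an assumption chosen to roughly reproduce the ~606 tok/s figure; neither constant is stated explicitly by the source.

```python
VRAM_GB = 120.0            # from the spec table
BANDWIDTH_GBPS = 5300.0    # from the spec table
Q4_GB_PER_B = 87.8 / 142.8 # ~0.615 GB per billion params, inferred from the table
EFFICIENCY = 0.5           # assumed fraction of peak bandwidth usable at decode

def q4_vram_gb(params_b: float) -> float:
    """Approximate Q4 weight footprint in GB for a params_b-billion model."""
    return params_b * Q4_GB_PER_B

def est_tok_per_s(params_b: float) -> float:
    """Speed ≈ bandwidth / model_size × efficiency (memory-bound decode)."""
    return BANDWIDTH_GBPS / q4_vram_gb(params_b) * EFFICIENCY

def fits(params_b: float, vram_gb: float = VRAM_GB) -> bool:
    """Do the Q4 weights alone fit in VRAM (ignoring KV cache and overhead)?"""
    return q4_vram_gb(params_b) <= vram_gb
```

For example, `q4_vram_gb(142.8)` gives ~87.8 GB, matching the largest entry in the table, and `est_tok_per_s(7)` lands near the quoted ~606 tok/s for a 7B model.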
§ 01 TOP MODELS FOR INSTINCT MI300A
300 FIT · SHOWING 20

| MODEL | SIZE | VRAM Q4 | TOK/S | AVG |
|---|---|---|---|---|
| dots.llm1.inst 142.8B | 142.8B | 87.8 GB | 33 | — |
| WizardLM 2 8x22B | 141B | 86.7 GB | 121 | 42.4 |
| Mixtral-8x22B | 140.6B | 86.4 GB | 120 | 31.9 |
| DBRX 132B | 132B | 81.2 GB | 131 | 46.3 |
| Qwen3.5-122B-A10B | 125.1B | 77.0 GB | 38 | 45.5 |
| Pixtral Large 124B | 124B | 76.3 GB | 38 | 39.3 |
| Mistral-Large 123B | 123B | 75.7 GB | 38 | 33.5 |
| Devstral 2 123B | 123B | 75.7 GB | 38 | 38.1 |
| Qwen 3.5 122B A10B | 122B | 75.1 GB | 471 | 56.8 |
| Nemotron 3 Super 120B | 120B | 73.8 GB | 393 | 57.3 |
| Nemotron 3 Super 120B-A12B | 120B | 73.8 GB | 393 | 53.2 |
| Mistral Small 4 119B | 119B | 73.2 GB | 725 | 50.2 |
| GPT-OSS 120B | 117B | 72.0 GB | 924 | 54.1 |
| Command A 111B | 111B | 68.3 GB | 42 | 27.6 |
| GLM 4.5 Air | 110B | 67.7 GB | 393 | 51.0 |
| Qwen 1.5 110B | 110B | 67.7 GB | 43 | 33.4 |
| Llama 4 Scout 17B-16E | 109B | 67.1 GB | 277 | 33.9 |
| Sarvam 105B | 105B | 64.7 GB | 45 | 48.0 |
| Command-R+ 104B | 104B | 64.1 GB | 45 | 52.7 |
| Llama-3.2-90B-Vision-Instruct | 90B | 55.5 GB | 52 | 48.5 |