Llama-3.2-11B-Vision-Instruct — 11B Dense.
- PARAMETERS: 11B
- ARCHITECTURE: Dense Transformer
- CONTEXT LENGTH: 128K tokens
- CAPABILITIES: vision, chat
- RELEASE DATE: 2024-09-25
- PROVIDER: Meta
- FAMILY: llama
| QUANT | BPW (bits/weight) | VRAM | QUALITY (vs. FP16) |
|---|---|---|---|
| Q3_K_M | 4 | 6.0 GB | 88% |
| Q3_K_L | 4.3 | 6.4 GB | 90% |
| IQ4_XS | 4.46 | 6.6 GB | 92% |
| Q4_K_S | 4.67 | 6.9 GB | 93% |
| Q4_K_M | 4.89 | 7.2 GB | 94% |
| Q5_K_S | 5.57 | 8.1 GB | 96% |
| Q5_K_M | 5.7 | 8.3 GB | 96% |
| Q6_K | 6.56 | 9.5 GB | 97% |
| Q8_0 | 8.5 | 12.2 GB | 100% |
| FP16 | 16 | 22.5 GB | 100% |
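The VRAM column follows from simple arithmetic: weight footprint is parameters × bits-per-weight ÷ 8 bits-per-byte, plus headroom for the KV cache and activations. A minimal sketch of that rule of thumb (the ~0.5 GB overhead margin is an observation from this table, not an official figure):

```python
def gguf_size_gb(params_billions: float, bpw: float) -> float:
    """Approximate weight footprint: parameters x bits-per-weight / 8 bits-per-byte."""
    return params_billions * bpw / 8

# The table's VRAM figures run roughly 0.5 GB above these raw weight
# sizes, leaving headroom for the KV cache and activations.
for quant, bpw in [("Q4_K_M", 4.89), ("Q6_K", 6.56), ("Q8_0", 8.5)]:
    print(f"{quant}: ~{gguf_size_gb(11, bpw):.1f} GB weights")
```

For example, Q4_K_M gives 11 × 4.89 / 8 ≈ 6.7 GB of weights, consistent with the 7.2 GB VRAM figure above once runtime overhead is included.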
Run Llama-3.2-11B-Vision-Instruct locally with Ollama (needs 7.2 GB VRAM at Q4_K_M):

```shell
ollama run llama3.2-vision:11b
```
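Beyond the CLI, a running Ollama server also exposes an HTTP API; images go base64-encoded in a message's `images` field. A minimal sketch that only builds the request body for `POST /api/chat` (actually sending it, e.g. to `http://localhost:11434/api/chat`, is left out so the sketch works offline):

```python
import base64

def vision_chat_payload(image_path: str, prompt: str,
                        model: str = "llama3.2-vision:11b") -> dict:
    """Build the JSON body for Ollama's POST /api/chat endpoint."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": model,
        "messages": [
            # Ollama's chat API carries images alongside the text prompt.
            {"role": "user", "content": prompt, "images": [image_b64]}
        ],
        "stream": False,  # return one complete response instead of chunks
    }
```

Pair this with any HTTP client; the response's `message.content` field holds the model's reply.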