
Meta / Llama-3.2-11B-Vision-Instruct

Llama 3.2 11B Vision Instruct — instruction-tuned for visual Q&A tasks.

vision · chat
Parameters: 11B
Context length: 128K
Benchmarks: 14
Quantizations: 10
HF downloads: 1.5M
Architecture: Dense
Released: 2024-09-25
Layers: 32
KV heads: 8
Head dim: 128
Family: llama

Quantization Options

Quant    Bits   VRAM      Quality
Q3_K_M   4      6.0 GB    low
Q3_K_L   4.3    6.4 GB    moderate
IQ4_XS   4.46   6.6 GB    moderate
Q4_K_S   4.67   6.9 GB    moderate
Q4_K_M   4.89   7.2 GB    good
Q5_K_S   5.57   8.1 GB    good
Q5_K_M   5.7    8.3 GB    good
Q6_K     6.56   9.5 GB    excellent
Q8_0     8.5    12.2 GB   lossless
FP16     16     22.5 GB   lossless
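The VRAM column tracks a simple weights-plus-overhead formula. A minimal sketch, assuming VRAM is the weight size (parameters × bits-per-weight / 8) plus a fixed margin; the ~0.5 GB overhead below is back-solved from this table, not an official figure:

```python
# Rough VRAM estimate for a dense model at a given quantization.
# Assumption: total VRAM ~ weight bytes + a fixed ~0.5 GB margin
# (back-solved from the table above; real overhead varies with
# context length and inference engine).
def estimate_vram_gb(params_b: float, bpw: float, overhead_gb: float = 0.5) -> float:
    weights_gb = params_b * bpw / 8  # billions of params x bits / 8 -> GB
    return round(weights_gb + overhead_gb, 1)

print(estimate_vram_gb(11, 4.89))  # Q4_K_M -> 7.2
print(estimate_vram_gb(11, 16))   # FP16   -> 22.5
```

The estimates land within about 0.1 GB of the table's values across the listed quantizations.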




Benchmarks (14)

MMBench: 76.8
IFEval: 70.0
MMLU-PRO: 55.0
BBH: 55.0
MMMU: 50.7
HumanEval: 45.0
MATH: 35.0
GPQA: 35.0
GPQA Diamond: 22.1
MUSR: 18.0
LiveCodeBench: 11.0
HLE: 5.2
AIME: 1.7
MATH-500: 1.7

Run this model

Ollama · Easiest way to get started (beginners)
$ curl -fsSL https://ollama.com/install.sh | sh
$ ollama run llama3.2-vision:11b-instruct-q4_K_M

Downloads and runs automatically. Add --verbose for speed stats.
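Once installed, Ollama also serves a local HTTP API (default port 11434), which is how you send the model an image question programmatically. A minimal Python sketch using Ollama's `/api/generate` endpoint; the image path is a placeholder:

```python
# Ask a locally running Ollama vision model about an image.
# Assumes `ollama serve` is running on the default port and the
# model tag below has been pulled.
import base64
import json
import urllib.request

def build_payload(model: str, prompt: str, image_bytes: bytes) -> dict:
    # Vision models accept base64-encoded images in the "images" array.
    return {
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }

def ask(payload: dict) -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Example (requires the server and model from the commands above):
# with open("photo.jpg", "rb") as f:
#     print(ask(build_payload("llama3.2-vision:11b-instruct-q4_K_M",
#                             "What is in this picture?", f.read())))
```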


Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

pip install fitmyllm, then run fitmyllm
Auto-detect GPU · Live tok/s in chat · Speed benchmarks · 9 inference engines

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.

GPU                    VRAM   Bandwidth   Price
NVIDIA RTX 4060        8 GB   272 GB/s    $299
NVIDIA RTX 3070 Ti     8 GB   608 GB/s    $499
NVIDIA RTX 3070        8 GB   448 GB/s    $325
NVIDIA RTX 3060 Ti     8 GB   448 GB/s    $250
NVIDIA RTX 3050 8GB    8 GB   224 GB/s    $249
AMD RX 7600            8 GB   288 GB/s    $269
AMD RX 6650 XT         8 GB   280 GB/s    $399
Intel Arc A750         8 GB   512 GB/s    $199
Apple M1 (8GB)         8 GB   68 GB/s     $499
Apple M2 (8GB)         8 GB   100 GB/s    $599
Apple M3 (8GB)         8 GB   100 GB/s    $599
NVIDIA RTX 2080        8 GB   448 GB/s    $260
NVIDIA RTX 2070        8 GB   448 GB/s    $200
NVIDIA GTX 1080        8 GB   320 GB/s    $130
NVIDIA GTX 1070 Ti     8 GB   256 GB/s    $120
NVIDIA GTX 1070        8 GB   256 GB/s    $100
NVIDIA RTX 3060 8GB    8 GB   224 GB/s    $280
AMD RX 6600 XT         8 GB   256 GB/s    $200
AMD RX 6600            8 GB   224 GB/s    $165
AMD RX 5700 XT         8 GB   448 GB/s    $150
AMD RX 5700            8 GB   448 GB/s    $130
Intel Arc A580         8 GB   512 GB/s    $179
NVIDIA RTX 5060        8 GB   448 GB/s    $299
NVIDIA Tesla K8        8 GB   160 GB/s
NVIDIA Tesla M60       8 GB   160 GB/s
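The fit logic behind this list is just a VRAM comparison. As a sketch, using the VRAM column from the quantization table above, you can check which quantizations an 8 GB card can hold:

```python
# Which quantizations of Llama-3.2-11B-Vision-Instruct fit in a given
# VRAM budget. Figures are the VRAM column from the quantization table;
# this ignores per-engine overhead and long-context KV cache growth.
QUANT_VRAM_GB = {
    "Q3_K_M": 6.0, "Q3_K_L": 6.4, "IQ4_XS": 6.6, "Q4_K_S": 6.9,
    "Q4_K_M": 7.2, "Q5_K_S": 8.1, "Q5_K_M": 8.3, "Q6_K": 9.5,
    "Q8_0": 12.2, "FP16": 22.5,
}

def quants_that_fit(vram_gb: float) -> list[str]:
    return [q for q, need in QUANT_VRAM_GB.items() if need <= vram_gb]

print(quants_that_fit(8.0))  # an 8 GB card fits up to Q4_K_M
```

For the 8 GB cards listed here, Q4_K_M (7.2 GB) is the highest-quality quantization that fits, which is why the list is evaluated at Q4_K_M.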


▸ SPEC SHEET

Llama-3.2-11B-Vision-Instruct: 11B, Dense.

▸ SPECIFICATIONS
PARAMETERS: 11B
ARCHITECTURE: Dense Transformer
CONTEXT LENGTH: 128K tokens
CAPABILITIES: vision, chat
RELEASE DATE: 2024-09-25
PROVIDER: Meta
FAMILY: llama
▸ VRAM REQUIREMENTS
Quant    BPW    VRAM      Quality
Q3_K_M   4      6.0 GB    88%
Q3_K_L   4.3    6.4 GB    90%
IQ4_XS   4.46   6.6 GB    92%
Q4_K_S   4.67   6.9 GB    93%
Q4_K_M   4.89   7.2 GB    94%
Q5_K_S   5.57   8.1 GB    96%
Q5_K_M   5.7    8.3 GB    96%
Q6_K     6.56   9.5 GB    97%
Q8_0     8.5    12.2 GB   100%
FP16     16     22.5 GB   100%
§ 01 BENCHMARK SCORES
HumanEval: 45.0
MMLU-PRO: 55.0
MATH: 35.0
IFEval: 70.0
BBH: 55.0
MMMU: 50.7
GPQA: 35.0
MUSR: 18.0
MMBench: 76.8
LiveCodeBench: 11.0
AIME: 1.7
MATH-500: 1.7
GPQA Diamond: 22.1
HLE: 5.2
§ 02 RUN COMMAND

Run Llama-3.2-11B-Vision-Instruct locally with Ollama — needs 7.2 GB VRAM at Q4_K_M:

$ ollama run llama3.2-vision:11b