
Meta Llama-3.2-90B-Vision-Instruct

Llama 3.2 90B Vision Instruct — top open multimodal model for complex visual tasks.

vision · chat
Parameters: 90B
Context length: 128K
Benchmarks: 13
Quantizations: 16
HF downloads: 450K
Architecture: Dense
Released: 2024-09-25
Layers: 80
KV Heads: 8
Head Dim: 128
Family: llama

Quantization Options

Quant     Bits    VRAM       Quality
IQ2_M     2.93    33.5 GB    low
Q2_K      3.16    36.0 GB    low
IQ3_XXS   3.25    37.1 GB    low
IQ3_XS    3.5     39.9 GB    low
Q3_K_S    3.64    41.4 GB    low
IQ3_M     3.76    42.8 GB    low
Q3_K_M    4       45.5 GB    low
Q3_K_L    4.3     48.9 GB    moderate
IQ4_XS    4.46    50.7 GB    moderate
Q4_K_S    4.67    53.0 GB    moderate
Q4_K_M    4.89    55.5 GB    good
Q5_K_S    5.57    63.2 GB    good
Q5_K_M    5.7     64.6 GB    good
Q6_K      6.56    74.3 GB    excellent
Q8_0      8.5     96.1 GB    lossless
FP16      16      180.5 GB   lossless
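
The VRAM column tracks bits-per-weight (BPW) almost linearly: weight memory is roughly parameters × BPW / 8 bytes, and the figures above sit only about half a gigabyte higher than that. A minimal weights-only sketch of the estimate (an illustrative assumption, not the calculator behind this table):

# Back-of-envelope VRAM from parameter count and bits-per-weight (BPW).
# Weights only: the KV cache, vision encoder, and runtime buffers add a
# few GB on top of this, so treat the result as a lower bound.

def weight_memory_gb(params_billion: float, bpw: float) -> float:
    return params_billion * bpw / 8  # 1B params at 8 bpw is roughly 1 GB

for quant, bpw in [("Q4_K_M", 4.89), ("Q6_K", 6.56), ("Q8_0", 8.5), ("FP16", 16.0)]:
    print(f"{quant:7s} ~{weight_memory_gb(90, bpw):6.1f} GB")

# Q4_K_M  ~  55.0 GB   (table: 55.5 GB)
# Q6_K    ~  73.8 GB   (table: 74.3 GB)
# Q8_0    ~  95.6 GB   (table: 96.1 GB)
# FP16    ~ 180.0 GB   (table: 180.5 GB)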




Benchmarks (13)

MMBench: 85.5
IFEval: 82.0
BBH: 70.0
MMLU-PRO: 68.0
HumanEval: 65.0
MMMU: 60.3
MATH: 52.0
GPQA: 48.0
GPQA Diamond: 43.2
MUSR: 25.0
LiveCodeBench: 21.4
AIME: 5.0
HLE: 4.9

Run this model

Easiest way to get started · Beginners
curl -fsSL https://ollama.com/install.sh | sh
$ ollama run llama3.2-vision:90b-instruct-q4_K_M

Downloads and runs automatically. Add --verbose for speed stats.
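
Beyond the interactive chat, Ollama also serves a local REST API on port 11434, which is handy for scripting vision prompts. A minimal sketch with Python and requests, assuming the default endpoint and the model tag pulled above:

# Minimal vision request against a locally running Ollama instance.
# Assumes Ollama is serving on its default port (11434) and that
# llama3.2-vision:90b-instruct-q4_K_M has already been pulled.
import base64
import requests

with open("chart.png", "rb") as f:  # any local image file
    image_b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.2-vision:90b-instruct-q4_K_M",
        "messages": [{
            "role": "user",
            "content": "Describe this chart and summarize its main trend.",
            "images": [image_b64],
        }],
        "stream": False,
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])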

▸ SETUP GUIDE

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

pip install fitmyllm, then run fitmyllm.
Auto-detect GPU · Live tok/s in chat · Speed benchmarks · 9 inference engines
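
A rough version of the fit check can be done by hand: read the GPU's total VRAM, compare it against the quantization table above, and pick the largest variant that still leaves headroom for context. The NVIDIA-only sketch below uses nvidia-smi; the headroom allowance and the trimmed VRAM table are assumptions, not fitmyllm's actual recommendation logic.

# Rough "which quant fits my GPU" check, NVIDIA-only via nvidia-smi.
# fitmyllm's real logic covers more vendors and more factors.
import subprocess

QUANT_VRAM_GB = {  # subset of the quantization table above
    "Q4_K_M": 55.5, "Q5_K_M": 64.6, "Q6_K": 74.3, "Q8_0": 96.1, "FP16": 180.5,
}

out = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=memory.total", "--format=csv,noheader,nounits"],
    text=True,
)
# One line per GPU, value in MiB; convert to GiB (close enough to GB here).
total_vram_gb = sum(float(line) for line in out.splitlines() if line.strip()) / 1024

headroom_gb = 2.0  # assumed allowance for KV cache and runtime buffers
fits = [q for q, need in QUANT_VRAM_GB.items() if need + headroom_gb <= total_vram_gb]
print(f"Detected {total_vram_gb:.0f} GB VRAM; largest quant that fits: "
      f"{max(fits, key=QUANT_VRAM_GB.get) if fits else 'none, consider offloading or renting'}")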

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.

GPU                                    VRAM     Bandwidth    Price
Apple M1 Ultra (64GB)                  64 GB    800 GB/s     $2499
Apple M2 Ultra (64GB)                  64 GB    800 GB/s     $2999
Apple M4 Max (64GB)                    64 GB    546 GB/s     $2899
Apple M2 Max (64GB)                    64 GB    400 GB/s     $2299
Apple M3 Max (64GB)                    64 GB    300 GB/s     $2799
Apple M4 Pro (64GB)                    64 GB    273 GB/s     $2599
AMD Radeon Instinct MI200              64 GB    1640 GB/s    $10000
AMD Radeon Instinct MI210              64 GB    1640 GB/s    $8000
NVIDIA H100 SXM5 64 GB                 64 GB    2020 GB/s    $25000
NVIDIA Jetson AGX Orin 64 GB           64 GB    205 GB/s     n/a
NVIDIA Jetson T4000                    64 GB    273 GB/s     n/a
Apple M5 Pro (64GB)                    64 GB    200 GB/s     n/a
Apple M5 Max (64GB)                    64 GB    614 GB/s     n/a
NVIDIA RTX PRO 5000 72 GB Blackwell    72 GB    1340 GB/s    $6999
NVIDIA H100 SXM5 80GB                  80 GB    3350 GB/s    $25000
NVIDIA H100 PCIe 80GB                  80 GB    2000 GB/s    $25000
NVIDIA A100 SXM 80GB                   80 GB    2039 GB/s    $10000
NVIDIA A100 PCIe 80GB                  80 GB    1935 GB/s    $10000
NVIDIA A100 SXM4 80 GB                 80 GB    2040 GB/s    $15000
NVIDIA A100 PCIe 80 GB                 80 GB    1940 GB/s    $10000
NVIDIA A100X                           80 GB    2040 GB/s    n/a
NVIDIA H100 PCIe 80 GB                 80 GB    2040 GB/s    $25000
NVIDIA H100 SXM5 80 GB                 80 GB    3360 GB/s    $25000
NVIDIA H100 CNX                        80 GB    2040 GB/s    $25000
NVIDIA A800 PCIe 80 GB                 80 GB    1940 GB/s    n/a
NVIDIA A800 SXM4 80 GB                 80 GB    2040 GB/s    n/a
NVIDIA H800 PCIe 80 GB                 80 GB    2040 GB/s    n/a
NVIDIA H800 SXM5                       80 GB    3360 GB/s    n/a
NVIDIA RTX 6000D                       84 GB    1570 GB/s    $7500
NVIDIA B200                            90 GB    4100 GB/s    $30000
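
The bandwidth column is the main driver of single-stream generation speed: each decoded token has to stream the full set of weights from memory, so tokens per second is capped at roughly bandwidth divided by model size. A hedged upper-bound sketch (real throughput is lower once compute, KV-cache traffic, and multi-GPU overhead are included):

# Rough memory-bandwidth ceiling on decode speed: each token reads all
# weights once, so tok/s <= bandwidth / model size. Treat it as an upper
# bound; it ignores compute, KV-cache reads, and interconnect overhead.

def decode_ceiling_toks(bandwidth_gbps: float, model_gb: float) -> float:
    return bandwidth_gbps / model_gb

MODEL_GB = 55.5  # Q4_K_M size from the table above
for gpu, bw in [("Apple M2 Ultra", 800), ("NVIDIA A100 SXM 80GB", 2039),
                ("NVIDIA H100 SXM5 80GB", 3350), ("NVIDIA B200", 4100)]:
    print(f"{gpu:22s} <= {decode_ceiling_toks(bw, MODEL_GB):5.1f} tok/s")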


▸ SPEC SHEET

Llama-3.2-90B-Vision-Instruct: 90B Dense.

▸ SPECIFICATIONS
PARAMETERS: 90B
ARCHITECTURE: Dense Transformer
CONTEXT LENGTH: 128K tokens
CAPABILITIES: vision, chat
RELEASE DATE: 2024-09-25
PROVIDER: Meta
FAMILY: llama
▸ VRAM REQUIREMENTS
QUANT     BPW     VRAM       QUALITY
IQ2_M     2.93    33.5 GB    75%
Q2_K      3.16    36.0 GB    78%
IQ3_XXS   3.25    37.1 GB    82%
IQ3_XS    3.5     39.9 GB    84%
Q3_K_S    3.64    41.4 GB    85%
IQ3_M     3.76    42.8 GB    86%
Q3_K_M    4       45.5 GB    88%
Q3_K_L    4.3     48.9 GB    90%
IQ4_XS    4.46    50.7 GB    92%
Q4_K_S    4.67    53.0 GB    93%
Q4_K_M    4.89    55.5 GB    94%
Q5_K_S    5.57    63.2 GB    96%
Q5_K_M    5.7     64.6 GB    96%
Q6_K      6.56    74.3 GB    97%
Q8_0      8.5     96.1 GB    100%
FP16      16      180.5 GB   100%
§ 01 BENCHMARK SCORES
HumanEval: 65.0
MMLU-PRO: 68.0
MATH: 52.0
IFEval: 82.0
BBH: 70.0
MMMU: 60.3
GPQA: 48.0
MUSR: 25.0
MMBench: 85.5
LiveCodeBench: 21.4
AIME: 5.0
GPQA Diamond: 43.2
HLE: 4.9
§ 02 RUN COMMAND

Run Llama-3.2-90B-Vision-Instruct locally with Ollama — needs 55.5 GB VRAM at Q4_K_M:

$ ollama run llama3.2-vision:90b