Qwen2-VL 2B (Alibaba)

Qwen2-VL 2B — tiny vision-language model for image understanding.

Capabilities: chat, vision
Parameters: 2.21B
Context length: 32K
Benchmarks: 8
Quantizations: 6
HF downloads: 300K
Architecture: Dense
Released: 2024-10-03
Layers: 28
KV Heads: 2
Head Dim: 128
Family: qwen
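The layer, KV-head, head-dimension, and context figures above determine the KV-cache footprint at inference time. A minimal sketch, assuming an fp16 cache and ignoring engine-specific padding:

```python
# KV-cache size at a given context length:
# 2 tensors (K and V) x layers x kv_heads x head_dim x context_tokens x bytes_per_value.

def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   context: int, bytes_per_value: int = 2) -> int:
    """Estimate KV-cache size in bytes; bytes_per_value=2 assumes an fp16 cache."""
    return 2 * layers * kv_heads * head_dim * context * bytes_per_value

# Qwen2-VL 2B at its full 32K context:
gb = kv_cache_bytes(28, 2, 128, 32_768) / 1e9
print(f"{gb:.2f} GB")  # ~0.94 GB on top of the weights
```

With only 2 KV heads (grouped-query attention), the full-context cache stays under 1 GB, which is why the model fits on small GPUs even at long context.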

Quantization Options

Quant    Bits (BPW)  VRAM    Quality
Q4_K_M   4.89        1.8 GB  good
Q5_K_S   5.57        2.0 GB  good
Q5_K_M   5.7         2.1 GB  good
Q6_K     6.56        2.3 GB  excellent
Q8_0     8.5         2.8 GB  lossless
FP16     16          4.9 GB  lossless
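The VRAM column roughly follows parameters × bits-per-weight, plus runtime overhead. A minimal sketch (the 0.4 GB overhead constant is an assumption; estimates land within about 0.1 GB of the table values):

```python
# Rough VRAM estimate for a quantized model:
# weights = params (billions) * bits-per-weight / 8, plus a flat overhead guess.

def estimate_vram_gb(params_b: float, bpw: float, overhead_gb: float = 0.4) -> float:
    """Estimate VRAM in GB; overhead_gb is an illustrative assumption, not measured."""
    weights_gb = params_b * bpw / 8  # billions of params * bits / 8 = GB of weights
    return round(weights_gb + overhead_gb, 1)

for quant, bpw in [("Q4_K_M", 4.89), ("Q6_K", 6.56), ("Q8_0", 8.5), ("FP16", 16.0)]:
    print(quant, estimate_vram_gb(2.21, bpw), "GB")
```

For example, Q4_K_M works out to 2.21 × 4.89 / 8 ≈ 1.35 GB of weights, matching the table's 1.8 GB once overhead is added.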




Benchmarks (8)

MMBench: 74.7
IFEval: 47.7
MMMU: 41.1
MATH: 20.8
MMLU-PRO: 19.8
BBH: 18.3
MUSR: 4.0
GPQA: 0.0

Run this model

Easiest way to get started (beginners):

curl -fsSL https://ollama.com/install.sh | sh
ollama run qwen:2b-q4_K_M

Tag may need adjustment — check ollama.com/library/qwen for available tags.
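Once the model is running, it can also be queried with an image through Ollama's local REST API (`POST /api/generate`), which accepts base64-encoded images for vision models. A sketch that only builds the request body (the model tag is the possibly-inexact one from above; sending the request is left out):

```python
import base64
import json

def build_vision_payload(model: str, prompt: str, image_bytes: bytes) -> str:
    """JSON body for Ollama's /api/generate; vision models read base64 'images'."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,  # ask for one complete response instead of a token stream
    })

# POST this body to http://localhost:11434/api/generate on a running Ollama server.
body = build_vision_payload("qwen:2b-q4_K_M", "Describe this image.", b"<png bytes here>")
```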

▸ SETUP GUIDE

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

pip install fitmyllm, then run fitmyllm.
Features: auto-detect GPU, live tok/s in chat, speed benchmarks, 9 inference engines.

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.



▸ SPEC SHEET

Qwen2-VL 2B: 2.21B parameters, Dense architecture.

▸ SPECIFICATIONS
PARAMETERS: 2.21B
ARCHITECTURE: Dense Transformer
CONTEXT LENGTH: 32K tokens
CAPABILITIES: chat, vision
RELEASE DATE: 2024-10-03
PROVIDER: Alibaba
FAMILY: qwen
▸ VRAM REQUIREMENTS
QUANT    BPW    VRAM    QUALITY
Q4_K_M   4.89   1.8 GB  94%
Q5_K_S   5.57   2.0 GB  96%
Q5_K_M   5.7    2.1 GB  96%
Q6_K     6.56   2.3 GB  97%
Q8_0     8.5    2.8 GB  100%
FP16     16     4.9 GB  100%
§ 01 BENCHMARK SCORES
MMLU-PRO: 19.8
MATH: 20.8
IFEval: 47.7
BBH: 18.3
MMMU: 41.1
GPQA: 0.0
MUSR: 4.0
MMBench: 74.7