LLaVA-1.6 Vicuna 13B

Publisher: LLaVA Team
Tags: vision, chat

Parameters: 13B
Context length: 4K
Benchmarks: 9
Quantizations: 4
HF downloads: 180K
Architecture: Dense
Released: 2024-01-30
Layers: 40
KV Heads: 40
Head Dim: 128
Family: other

LLaVA Model Card

Model details

Model type: LLaVA is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture. Base LLM: lmsys/vicuna-13b-v1.5

Model date: LLaVA-v1.6-Vicuna-13B was trained in December 2023.

Paper or resources for more information: https://llava-vl.github.io/

Quantizations & VRAM

Quantization   bpw   VRAM required   Quality
Q4_K_M         4.5   8.7 GB          94%
Q6_K           6.5   11.2 GB         97%
Q8_0           8.0   13.9 GB         100%
FP16           16    26.4 GB         100%
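The VRAM figures above roughly track weight size: parameter count times bits per weight, plus overhead for the KV cache, activations, and the vision tower. A minimal sketch of that arithmetic (the fixed 1.5 GB overhead constant is an illustrative assumption, not a published figure):

```python
def estimate_vram_gb(params_b: float, bpw: float, overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate: weight bytes (params * bits / 8) plus a
    fixed overhead term for KV cache, activations, and the vision
    tower. The 1.5 GB default overhead is an assumption."""
    weights_gb = params_b * 1e9 * bpw / 8 / 1e9  # bits -> bytes -> GB
    return weights_gb + overhead_gb

# 13B at 4.5 bpw gives ~7.3 GB of weights; with overhead this lands
# near the 8.7 GB listed for Q4_K_M.
print(round(estimate_vram_gb(13, 4.5), 1))
```

The same formula tracks the other rows reasonably well (Q8_0: 13 GB of weights vs. 13.9 GB listed), though real usage varies with context length and batch size.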

Benchmarks (9)

MMBench     70.0
IFEval      62.0
MMLU-PRO    50.0
BBH         48.0
MMMU        36.4
GPQA        30.0
HumanEval   15.9
MUSR        14.0
MATH        10.2

Run with Ollama

$ ollama run llava:13b

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.
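Checking whether a GPU can run this model is a straight comparison of its VRAM against the requirements in the quantization table above. A small sketch (the 12 GB example card is illustrative; figures are nominal VRAM sizes):

```python
# VRAM requirements from the quantization table above, in GB.
QUANT_VRAM_GB = {"Q4_K_M": 8.7, "Q6_K": 11.2, "Q8_0": 13.9, "FP16": 26.4}

def quants_that_fit(gpu_vram_gb: float) -> list[str]:
    """Return the quantizations whose VRAM requirement fits the given
    GPU, highest-quality (largest) first."""
    fits = [q for q, need in QUANT_VRAM_GB.items() if need <= gpu_vram_gb]
    return sorted(fits, key=QUANT_VRAM_GB.get, reverse=True)

# A 12 GB card fits Q6_K and Q4_K_M, but not Q8_0 or FP16.
print(quants_that_fit(12))  # -> ['Q6_K', 'Q4_K_M']
```

In practice leave some headroom beyond these minimums: longer contexts and larger batches grow the KV cache beyond the listed figures.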
