
Google gemma-2-2b

Capabilities: chat
Parameters: 2.6B
Context length: 4K tokens
Benchmarks: 9
Quantizations: 6
HF downloads: 429K
Architecture: Dense
Released: 2024-04-09
Layers: 26
KV Heads: 4
Head Dim: 256
Family: gemma
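
The attention-cache footprint can be sized directly from the layer count, KV heads, and head dimension listed above. A back-of-the-envelope sketch (assuming an FP16 KV cache at 2 bytes per element and the full 4K context; real runtimes add their own overhead on top):

# Rough KV-cache estimate from the specs above (assumption: FP16 cache, full 4K context)
layers, kv_heads, head_dim, context = 26, 4, 256, 4096
bytes_per_elem = 2                                                       # FP16; an 8-bit cache would halve this
kv_bytes = 2 * layers * kv_heads * head_dim * context * bytes_per_elem   # factor 2 = keys + values
print(f"KV cache at 4K context: {kv_bytes / 1e9:.2f} GB")                # ~0.44 GB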

Quantization Options

Quant     Bits    VRAM      Quality
Q4_K_M    4.89    2.1 GB    good
Q5_K_S    5.57    2.3 GB    good
Q5_K_M    5.7     2.3 GB    good
Q6_K      6.56    2.6 GB    excellent
Q8_0      8.5     3.3 GB    lossless
FP16      16      5.7 GB    lossless
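
These VRAM figures scale almost linearly with bits per weight: weight memory is roughly parameters × bits / 8 bytes, and the listed totals add KV cache and runtime overhead on top. A minimal sanity-check sketch using the numbers from the table:

# Weight-only memory from bits-per-weight (KV cache and runtime overhead not included)
params = 2.6e9
for quant, bits in [("Q4_K_M", 4.89), ("Q5_K_S", 5.57), ("Q5_K_M", 5.7),
                    ("Q6_K", 6.56), ("Q8_0", 8.5), ("FP16", 16)]:
    print(f"{quant}: ~{params * bits / 8 / 1e9:.2f} GB weights")  # Q4_K_M -> ~1.59 GB; the table's 2.1 GB adds cache/overhead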


Benchmarks (9)

Arena Elo: 1159
IFEval: 57.5
MBPP: 27.9
HumanEval: 27.2
BBH: 19.7
MMLU-PRO: 18.8
MUSR: 15.3
MATH: 9.1
GPQA: 7.4

Run this model

Easiest way to get started:
curl -fsSL https://ollama.com/install.sh | sh
ollama run gemma2:2b-instruct-q4_K_M

Downloads and runs automatically. Add --verbose for speed stats.
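
Once the model is pulled, it can also be queried programmatically. The sketch below assumes Ollama is running on its default local endpoint (http://localhost:11434) and that you pulled the tag shown above; swap in whichever tag you actually pulled.

import requests

# Ask the local Ollama server for a single (non-streamed) completion
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma2:2b-instruct-q4_K_M",   # must match the tag you pulled
        "prompt": "Explain what a dense transformer is in two sentences.",
        "stream": False,                        # return one JSON object instead of a stream
    },
    timeout=120,
)
print(resp.json()["response"])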

Setup guide

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.


Build Hardware for gemma-2-2b

gemma-2-2b: 2.6B Parameter Dense LLM

Model Specifications

Parameters: 2.6B
Architecture: Dense Transformer
Context Length: 4K tokens
Capabilities: chat
Release Date: 2024-04-09
Provider: Google
Family: gemma

VRAM Requirements

Quantization    BPW     VRAM      Quality
Q4_K_M          4.89    2.1 GB    94%
Q5_K_S          5.57    2.3 GB    96%
Q5_K_M          5.7     2.3 GB    96%
Q6_K            6.56    2.6 GB    97%
Q8_0            8.5     3.3 GB    100%
FP16            16      5.7 GB    100%
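
Read the other way, a known VRAM budget picks out the highest-quality quantization that still fits. An illustrative helper using the table above (the 0.5 GB headroom is an assumed safety margin for context and runtime overhead, not a measured value):

# Pick the best quantization that fits a VRAM budget (illustrative sketch)
QUANTS = [                      # (name, VRAM in GB) from the table, best quality first
    ("FP16", 5.7), ("Q8_0", 3.3), ("Q6_K", 2.6),
    ("Q5_K_M", 2.3), ("Q5_K_S", 2.3), ("Q4_K_M", 2.1),
]

def best_fit(gpu_vram_gb, headroom_gb=0.5):
    for name, need in QUANTS:
        if need + headroom_gb <= gpu_vram_gb:
            return name
    return None                 # not enough VRAM even for Q4_K_M

print(best_fit(4.0))            # -> Q8_0 on a 4 GB card
print(best_fit(1.5))            # -> None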

Benchmark Scores

HumanEval: 27.2
MMLU-PRO: 18.8
MATH: 9.1
IFEval: 57.5
BBH: 19.7
GPQA: 7.4
MUSR: 15.3
MBPP: 27.9
Arena Elo: 1159.0

How to Run gemma-2-2b

Run gemma-2-2b locally with Ollama (needs 2.1 GB VRAM at Q4_K_M):

ollama run gemma2:2b
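
For scripted use, the same model can be driven from Python with the ollama client package (pip install ollama); a minimal sketch, assuming the gemma2:2b tag is already pulled and the Ollama server is running:

import ollama

# One chat turn against the locally pulled model
reply = ollama.chat(
    model="gemma2:2b",
    messages=[{"role": "user", "content": "List three things Gemma 2 2B is good for."}],
)
print(reply["message"]["content"])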

Compatible GPUs (30)

GPUs that can run gemma-2-2b at Q4_K_M quantization: