
Google Gemma 4 E2B

Gemma 4 E2B — 2.3B effective params from 5.1B total via Per-Layer Embeddings. Vision + audio, 128K context.

Tags: chat, multilingual, vision, audio
Parameters: 5.1B
Context length: 128K
Benchmarks: 5
Quantizations: 6
HF downloads: 100K
Architecture: Dense
Released: 2026-04-02
Layers: 35
KV Heads: 1
Head Dim: 256
Family: gemma
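The layer and head figures above are enough to size the KV cache per token: each layer stores keys and values of KV heads × head dim elements. A back-of-the-envelope sketch (the 2-byte FP16 cache element and the zero-overhead assumption are mine, not figures from this page):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, n_tokens, bytes_per_elem=2):
    """Approximate KV cache size: K and V each hold n_kv_heads * head_dim
    elements per layer per token (assumes an FP16 cache, no runtime overhead)."""
    return n_layers * n_kv_heads * head_dim * 2 * bytes_per_elem * n_tokens

# Card values: 35 layers, 1 KV head, head dim 256.
per_token = kv_cache_bytes(35, 1, 256, 1)
full_ctx = kv_cache_bytes(35, 1, 256, 128 * 1024)
print(per_token)          # 35840 bytes (~35 KiB) per cached token
print(full_ctx / 2**30)   # 4.375 GiB at the full 128K context
```

With a single KV head the cache stays modest even at long context; a typical multi-head cache would be several times larger.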

Quantization Options

Quant    Bits   VRAM     Quality
Q4_K_M   4.89   3.6 GB   good
Q5_K_S   5.57   4.0 GB   good
Q5_K_M   5.7    4.1 GB   good
Q6_K     6.56   4.7 GB   excellent
Q8_0     8.5    5.9 GB   lossless
FP16     16     10.7 GB  lossless
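The VRAM column tracks a simple weights-plus-overhead estimate: total parameters × bits per weight / 8, plus a flat allowance for runtime buffers. A sketch (the 0.5 GB overhead constant is my fit to this table, not a documented figure):

```python
def approx_vram_gb(n_params, bits_per_weight, overhead_gb=0.5):
    # Weights take n_params * bpw / 8 bytes; overhead_gb is a rough
    # allowance for runtime buffers (my assumption, fit to the table).
    return n_params * bits_per_weight / 8 / 1e9 + overhead_gb

# 5.1B total params at Q4_K_M's 4.89 bits per weight:
print(round(approx_vram_gb(5.1e9, 4.89), 1))  # 3.6, matching the table row
print(round(approx_vram_gb(5.1e9, 16), 1))    # 10.7, matching the FP16 row
```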


Benchmarks (5)

MMLU-PRO: 60.0
LiveCodeBench: 44.0
GPQA Diamond: 43.4
AIME: 37.5
BBH: 21.9

Run this model

Easiest way to get started (docs →):
curl -fsSL https://ollama.com/install.sh | sh
ollama run gemma4:e2b-q4_k_m

Downloads and runs automatically. Add --verbose for speed stats.

Setup guide

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.


Build Hardware for Gemma 4 E2B

Gemma 4 E2B: 5.1B Parameter Dense LLM

Model Specifications

Parameters: 5.1B
Architecture: Dense Transformer
Context Length: 128K tokens
Capabilities: chat, multilingual, vision, audio
Release Date: 2026-04-02
Provider: Google
Family: gemma

VRAM Requirements

Quantization  BPW    VRAM     Quality
Q4_K_M        4.89   3.6 GB   94%
Q5_K_S        5.57   4.0 GB   96%
Q5_K_M        5.7    4.1 GB   96%
Q6_K          6.56   4.7 GB   97%
Q8_0          8.5    5.9 GB   100%
FP16          16     10.7 GB  100%
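Given these rows, picking a quantization for a known amount of free VRAM reduces to taking the highest-quality entry that fits, preferring the smaller footprint on ties. A sketch using the card's numbers (the selection policy is mine, not part of the card):

```python
QUANTS = [  # (name, VRAM GB, quality %) from the table above
    ("Q4_K_M", 3.6, 94), ("Q5_K_S", 4.0, 96), ("Q5_K_M", 4.1, 96),
    ("Q6_K", 4.7, 97), ("Q8_0", 5.9, 100), ("FP16", 10.7, 100),
]

def best_quant(free_vram_gb):
    # Highest quality that fits; ties go to the smaller footprint.
    fitting = [q for q in QUANTS if q[1] <= free_vram_gb]
    return max(fitting, key=lambda q: (q[2], -q[1]))[0] if fitting else None

print(best_quant(8.0))  # Q8_0: best quality that fits an 8 GB card
print(best_quant(4.0))  # Q5_K_S
```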


How to Run Gemma 4 E2B

Run Gemma 4 E2B locally with Ollama (needs 3.6 GB VRAM at Q4_K_M):

ollama run gemma4:e2b

Compatible GPUs (30)

GPUs that can run Gemma 4 E2B at Q4_K_M quantization: