Google Gemma 3n E4B

Gemma 3n E4B — mobile-friendly model with vision and multilingual support.

Capabilities: chat, coding, multilingual, vision

Parameters: 8B
Context length: 32K
Benchmarks: 21
Quantizations: 6
HF downloads: 200K
Architecture: Dense
Released: 2025-06-25
Layers: 34
KV Heads: 4
Head Dim: 256
Family: gemma
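The attention specs above (34 layers, 4 KV heads, head dim 256) pin down the KV-cache footprint at a given context length. A minimal sketch, assuming a plain FP16 cache (2 bytes per element) and no sliding-window or cache-sharing tricks, which Gemma 3n runtimes may apply, so treat this as an upper bound:

```python
def kv_cache_bytes(layers, kv_heads, head_dim, context_len, bytes_per_elem=2):
    """KV cache size: 2 tensors (K and V) per layer, one vector per KV head per token."""
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per_elem

# Gemma 3n E4B: 34 layers, 4 KV heads, head dim 256, full 32K context
size = kv_cache_bytes(34, 4, 256, 32768)
print(f"{size / 1e9:.2f} GB")  # ≈ 4.56 GB on top of the weights
```

The small KV-head count (4, vs. a much larger query-head count) is grouped-query attention; it is what keeps the cache this small at 32K context.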

Quantization Options

Quant    Bits   VRAM      Quality
Q4_K_M   4.89   5.4 GB    good (94%)
Q5_K_S   5.57   6.1 GB    good (96%)
Q5_K_M   5.70   6.2 GB    good (96%)
Q6_K     6.56   7.0 GB    excellent (97%)
Q8_0     8.50   9.0 GB    lossless (100%)
FP16     16     16.5 GB   lossless (100%)
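The VRAM column tracks a simple rule of thumb: parameters × bits-per-weight / 8, plus runtime overhead. A rough sketch — the flat 0.5 GB overhead constant is an assumption that happens to reproduce this table, not a published figure:

```python
def est_vram_gb(params_billion, bits_per_weight, overhead_gb=0.5):
    """Approximate VRAM: quantized weight bytes plus a flat runtime-overhead allowance."""
    weights_gb = params_billion * bits_per_weight / 8  # bits -> bytes, in GB
    return weights_gb + overhead_gb

print(round(est_vram_gb(8, 4.89), 2))  # 5.39 — matches the table's 5.4 GB for Q4_K_M
print(round(est_vram_gb(8, 16), 2))    # 16.5 — matches the FP16 row
```

Note this excludes the KV cache, which grows with context length and comes on top of these figures.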




Benchmarks (21)

Arena Elo         1440
MATH-500          77.1
HumanEval         75.0
IFEval            73.8
MBPP              63.6
BBH               52.9
MMLU-PRO          50.6
GPQA Diamond      29.6
IFBench           27.9
MATH              23.2
MUSR              15.0
LiveCodeBench     14.6
AIME              14.3
AA Math           14.3
GPQA              14.2
SciCode            8.1
AA Intelligence    6.4
τ²-Bench           5.0
HLE                4.4
AA Coding          4.2
Terminal-Bench     2.3

Run this model

Ollama — the easiest way to get started (beginners):

$ curl -fsSL https://ollama.com/install.sh | sh
$ ollama run gemma3n:e4b-q4_K_M

Downloads and runs automatically. Add --verbose for speed stats.
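Once the model is pulled, the same Ollama instance also exposes a local REST API (default port 11434), which is handy for scripting. A minimal sketch using only the Python standard library — it assumes the Ollama server is running locally:

```python
import json
import urllib.request

def build_generate_request(prompt, model="gemma3n:e4b-q4_K_M"):
    """JSON body for Ollama's /api/generate endpoint (non-streaming)."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

def ask_gemma(prompt, host="http://localhost:11434"):
    """Send one completion request to a local Ollama server and return the text."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=build_generate_request(prompt).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

print(build_generate_request("Why is the sky blue?"))
# ask_gemma("Why is the sky blue?")  # requires the Ollama server to be up
```

With "stream": True (the API default) Ollama instead returns one JSON object per generated chunk, which is what powers live token-by-token display.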

Setup guide: auto-setup with the fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

$ pip install fitmyllm
$ fitmyllm

Features: auto-detect GPU, live tok/s in chat, speed benchmarks, 9 inference engines.

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.

NVIDIA RTX 3050 6GB:    6 GB VRAM • 168 GB/s • $169
Intel Arc A380:         6 GB VRAM • 186 GB/s • $129
NVIDIA RTX 2060 6GB:    6 GB VRAM • 336 GB/s • $150
NVIDIA GTX 1660 Ti:     6 GB VRAM • 288 GB/s • $140
NVIDIA Tesla C2070:     6 GB VRAM • 143 GB/s
NVIDIA Tesla C2075:     6 GB VRAM • 150 GB/s
NVIDIA Tesla C2090:     6 GB VRAM • 177 GB/s
NVIDIA Tesla M2070:     6 GB VRAM • 150 GB/s
NVIDIA Tesla M2070-Q:   6 GB VRAM • 150 GB/s
NVIDIA Tesla M2075:     6 GB VRAM • 150 GB/s
NVIDIA Tesla M2090:     6 GB VRAM • 177 GB/s
NVIDIA Tesla X2070:     6 GB VRAM • 177 GB/s
NVIDIA Tesla X2090:     6 GB VRAM • 177 GB/s
NVIDIA Tesla K20X:      6 GB VRAM • 250 GB/s
NVIDIA Tesla K20Xm:     6 GB VRAM • 250 GB/s
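Single-stream decoding is usually memory-bandwidth-bound, so a first-order tokens/sec estimate for any GPU above is bandwidth divided by the bytes read per token (roughly the quantized model size). A back-of-the-envelope sketch — a roofline-style upper bound that ignores compute limits, cache effects, and prompt processing:

```python
def est_decode_tps(bandwidth_gbps, model_size_gb):
    """Roofline decode-speed estimate: each generated token streams the full weights."""
    return bandwidth_gbps / model_size_gb

# Q4_K_M footprint ~5.4 GB (from the quantization table above)
for name, bw in [("RTX 3050 6GB", 168), ("Arc A380", 186), ("RTX 2060 6GB", 336)]:
    print(f"{name}: ~{est_decode_tps(bw, 5.4):.0f} tok/s upper bound")
# RTX 3050 6GB: ~31 tok/s upper bound
```

Real throughput lands below this ceiling, but the ratio explains why the 336 GB/s RTX 2060 should decode roughly twice as fast as the 168 GB/s RTX 3050 despite being an older card.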

