
Google Gemma 1 2B

Google's lightweight open model. Clean training data, great for fine-tuning.

Capabilities: chat
Parameters: 2B
Context length: 8K
Benchmarks: 8
Quantizations: 6
Architecture: Dense
Released: 2024-02-21
Layers: 18
KV Heads: 1
Head Dim: 256
Family: gemma

Quantization Options

Quant     Bits    VRAM      Quality
Q4_K_M    4.89    1.7 GB    good
Q5_K_S    5.57    1.9 GB    good
Q5_K_M    5.7     1.9 GB    good
Q6_K      6.56    2.1 GB    excellent
Q8_0      8.5     2.6 GB    lossless
FP16      16      4.5 GB    lossless
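A quantization's VRAM footprint follows largely from its bits-per-weight: weight memory is roughly parameters x bits / 8 bytes, plus headroom for the KV cache and runtime buffers. Below is a minimal Python sketch of that fit check; the ~2.5B parameter count and the flat 0.5 GB headroom are assumptions, so it only approximates the table above.

# Rough fit check: does a quantized Gemma 1 2B fit in a given VRAM budget?
# Assumptions (not from the table): ~2.5B parameters and ~0.5 GB of headroom
# for KV cache and runtime buffers; real usage varies by engine and context length.
PARAMS = 2.5e9
HEADROOM_GB = 0.5

def weights_gb(bits_per_weight: float) -> float:
    """Approximate size of the quantized weights in GB."""
    return PARAMS * bits_per_weight / 8 / 1e9

def fits(bits_per_weight: float, vram_gb: float) -> bool:
    """True if weights plus headroom fit in the given VRAM budget."""
    return weights_gb(bits_per_weight) + HEADROOM_GB <= vram_gb

for name, bpw in [("Q4_K_M", 4.89), ("Q6_K", 6.56), ("Q8_0", 8.5), ("FP16", 16.0)]:
    print(f"{name}: ~{weights_gb(bpw):.1f} GB weights, fits in 4 GB: {fits(bpw, 4.0)}")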




Benchmarks (8)

MBPP        46.6
IFEval      46.2
HumanEval   20.7
MMLU-PRO    13.9
BBH         13.2
MUSR        12.9
GPQA         4.1
MATH         3.7

Run this model

Ollama: the easiest way to get started (recommended for beginners).

curl -fsSL https://ollama.com/install.sh | sh
ollama run gemma:2b-q4_K_M

Downloads and runs automatically. Add --verbose for speed stats.
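Beyond the interactive prompt, the Ollama server also listens on http://localhost:11434, so the same model can be scripted. Here is a minimal Python sketch against the generate endpoint; the prompt text is just an example.

import json
import urllib.request

# Ollama's local generate endpoint (default port 11434).
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "gemma:2b-q4_K_M",
        "prompt": "Explain KV caching in one sentence.",
        "stream": False,  # return a single JSON object instead of a token stream
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])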

▸ SETUP GUIDE

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

Install with pip install fitmyllm, then run fitmyllm.
Auto-detect GPU · Live tok/s in chat · Speed benchmarks · 9 inference engines

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.


▸ SPEC SHEET

Gemma 1 2B: 2B parameters, Dense architecture.

▸ SPECIFICATIONS
PARAMETERS: 2B
ARCHITECTURE: Dense Transformer
CONTEXT LENGTH: 8K tokens
CAPABILITIES: chat
RELEASE DATE: 2024-02-21
PROVIDER: Google
FAMILY: gemma
▸ VRAM REQUIREMENTS
QUANT     BPW     VRAM      QUALITY
Q4_K_M    4.89    1.7 GB    94%
Q5_K_S    5.57    1.9 GB    96%
Q5_K_M    5.7     1.9 GB    96%
Q6_K      6.56    2.1 GB    97%
Q8_0      8.5     2.6 GB    100%
FP16      16      4.5 GB    100%
§ 01 BENCHMARK SCORES
HumanEval   20.7
MMLU-PRO    13.9
MATH         3.7
IFEval      46.2
BBH         13.2
GPQA         4.1
MUSR        12.9
MBPP        46.6
§ 02 RUN COMMAND

Run Gemma 1 2B locally with Ollama — needs 1.7 GB VRAM at Q4_K_M:

ollama run gemma:2b
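If you would rather drive the model from Python than from the terminal, the official ollama client package talks to the same local server. A minimal sketch, assuming pip install ollama and that the model tag above has already been pulled; the prompt is just an example.

import ollama

# Chat with the locally pulled Gemma 1 2B via the Ollama server (localhost:11434).
reply = ollama.chat(
    model="gemma:2b",
    messages=[{"role": "user", "content": "Give one good use case for a 2B model."}],
)
print(reply["message"]["content"])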