TranslateGemma 4B

TranslateGemma 4B — Google's translation model for 7 languages.

chat · multilingual · vision
4B Parameters · 128K Context length · 6 Benchmarks · 6 Quantizations · 109K HF downloads

Architecture: Dense
Released: 2026-01-13
Layers: 34
KV Heads: 4
Head Dim: 256
Family: gemma
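The layer and head counts above also determine how much extra VRAM the KV cache needs on top of the weights. A minimal sketch of that estimate, assuming an unquantized FP16 cache (2 bytes per value) and using the spec-sheet numbers (34 layers, 4 KV heads, head dim 256); this is an illustrative back-of-envelope figure, not a number from the model card:

```python
# Back-of-envelope KV-cache size from the spec-sheet numbers above.
# Assumes an FP16 cache (2 bytes/value) with no cache quantization.
def kv_cache_bytes(layers, kv_heads, head_dim, context, bytes_per_value=2):
    # 2x for keys and values, stored per layer, per KV head, per position.
    return 2 * layers * kv_heads * head_dim * context * bytes_per_value

full = kv_cache_bytes(layers=34, kv_heads=4, head_dim=256, context=128 * 1024)
print(f"KV cache at 128K context: {full / 1e9:.1f} GB")   # ~18.3 GB

short = kv_cache_bytes(layers=34, kv_heads=4, head_dim=256, context=8 * 1024)
print(f"KV cache at 8K context:   {short / 1e9:.1f} GB")  # ~1.1 GB
```

At the full 128K context the cache alone dwarfs the quantized weights, which is why runtimes typically default to a much shorter context or quantize the cache.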

Quantization Options

Quant    Bits   VRAM     Quality
Q4_K_M   4.89   2.9 GB   good
Q5_K_S   5.57   3.3 GB   good
Q5_K_M   5.7    3.3 GB   good
Q6_K     6.56   3.8 GB   excellent
Q8_0     8.5    4.7 GB   lossless
FP16     16     8.5 GB   lossless

Select your GPU above to see speed estimates and compatibility for each quantization.
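The VRAM column above roughly tracks parameters times bits-per-weight. A minimal sketch of that estimate, assuming ~4.3e9 actual parameters for a nominal "4B" Gemma-family checkpoint and a flat overhead term for buffers (both are assumptions, not figures from this card; real usage varies by runtime and context length):

```python
# Rough weights-only VRAM estimate from bits-per-weight (BPW).
# params=4.3e9 and overhead_gb=0.3 are illustrative assumptions.
def est_vram_gb(params, bpw, overhead_gb=0.3):
    weights_gb = params * bpw / 8 / 1e9  # bits -> bytes -> GB
    return weights_gb + overhead_gb

for quant, bpw in [("Q4_K_M", 4.89), ("Q8_0", 8.5), ("FP16", 16)]:
    print(f"{quant:7s} ~{est_vram_gb(4.3e9, bpw):.1f} GB")
```

Under these assumptions the estimates land within a few hundred MB of the table's figures, which is typical for weights-plus-overhead approximations.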



Benchmarks (6)

Benchmark   Score
IFEval       19.7
MMLU-PRO      3.5
BBH           3.5
MATH          2.3
MUSR          2.1
GPQA          1.7

Run this model

Easiest way to get started (recommended for beginners):

curl -fsSL https://ollama.com/install.sh | sh
ollama run translategemma:q4_k_m

Downloads and runs automatically. Add --verbose for speed stats.
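Once the model is pulled, the Ollama server also exposes a local HTTP API on port 11434. A minimal sketch of calling its /api/generate endpoint for a translation prompt (the prompt wording is illustrative; the server must be running and the model pulled first):

```python
import json
import urllib.request

def build_payload(model, prompt):
    # Ollama's /api/generate request body; stream=False returns one JSON object.
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

def ollama_generate(model, prompt, host="http://localhost:11434"):
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=build_payload(model, prompt).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming response carries the full completion in "response".
        return json.loads(resp.read())["response"]

# Example (needs the Ollama server running and the model pulled):
# print(ollama_generate("translategemma:q4_k_m",
#                       "Translate to German: The weather is nice today."))
```

Setting `"stream": False` trades incremental tokens for a single JSON reply, which keeps the client code short.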

▸ SETUP GUIDE

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

pip install fitmyllm, then run fitmyllm.

Auto-detect GPU · Live tok/s in chat · Speed benchmarks · 9 inference engines

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.

NVIDIA Tesla C2050: 3 GB VRAM • 144 GB/s
NVIDIA Tesla M2050: 3 GB VRAM • 148 GB/s
NVIDIA Tesla S2050: 3 GB VRAM • 148 GB/s



▸ SPEC SHEET

TranslateGemma 4B (4B, Dense).

▸ SPECIFICATIONS
PARAMETERS: 4B
ARCHITECTURE: Dense Transformer
CONTEXT LENGTH: 128K tokens
CAPABILITIES: chat, multilingual, vision
RELEASE DATE: 2026-01-13
PROVIDER: Google
FAMILY: gemma
▸ VRAM REQUIREMENTS
QUANT    BPW    VRAM     QUALITY
Q4_K_M   4.89   2.9 GB    94%
Q5_K_S   5.57   3.3 GB    96%
Q5_K_M   5.7    3.3 GB    96%
Q6_K     6.56   3.8 GB    97%
Q8_0     8.5    4.7 GB   100%
FP16     16     8.5 GB   100%
§ 01BENCHMARK SCORES
MMLU-PRO    3.5
MATH        2.3
IFEval     19.7
BBH         3.5
GPQA        1.7
MUSR        2.1
§ 02RUN COMMAND

Run TranslateGemma 4B locally with Ollama — needs 2.9 GB VRAM at Q4_K_M:

$ ollama run translategemma