
Google TranslateGemma 12B

TranslateGemma 12B — stronger translation model with vision support.

chat · multilingual · vision
Parameters: 12B
Context length: 128K
Benchmarks: 6
Quantizations: 10
HF downloads: 397K
Architecture: Dense
Released: 2026-01-13
Layers: 48
KV heads: 8
Head dim: 256
Family: gemma

Quantization Options

QUANT    BITS   VRAM      QUALITY
Q3_K_M   4      6.5 GB    low
Q3_K_L   4.3    6.9 GB    moderate
IQ4_XS   4.46   7.2 GB    moderate
Q4_K_S   4.67   7.5 GB    moderate
Q4_K_M   4.89   7.8 GB    good
Q5_K_S   5.57   8.8 GB    good
Q5_K_M   5.7    9.0 GB    good
Q6_K     6.56   10.3 GB   excellent
Q8_0     8.5    13.2 GB   lossless
FP16     16     24.5 GB   lossless
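The VRAM column tracks a simple rule of thumb: parameter count times bits-per-weight, plus a small fixed overhead for buffers. A hedged sketch (the 0.5 GB overhead is an assumption fitted to this table, not a published figure):

```python
def est_vram_gb(params_b, bpw, overhead_gb=0.5):
    # Weights: params (billions) * bits-per-weight / 8 -> gigabytes,
    # plus an assumed fixed overhead for KV cache and buffers.
    return params_b * bpw / 8 + overhead_gb

print(round(est_vram_gb(12, 4.89), 1))  # Q4_K_M row
print(round(est_vram_gb(12, 16), 1))    # FP16 row
```

Running context length well past the default will grow the KV cache beyond this fixed overhead, so treat the estimate as a floor.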




Benchmarks (6)

IFEval     80.5
BBH        44.2
MMLU-PRO   37.4
MATH       23.3
GPQA       12.8
MUSR       12.2

Run this model

Easiest way to get started · Beginners
$ curl -fsSL https://ollama.com/install.sh | sh
$ ollama run translategemma:q4_k_m

Downloads and runs automatically. Add --verbose for speed stats.
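Beyond the CLI, Ollama also serves a local HTTP API (default port 11434). A minimal sketch of a translation request against the tag above — the prompt text is illustrative, and the request is only built here, not sent:

```python
import json
import urllib.request

# Build a /api/generate request for the local Ollama server.
payload = {
    "model": "translategemma:q4_k_m",  # tag pulled in the step above
    "prompt": "Translate to German: The weather is nice today.",
    "stream": False,                   # return a single JSON object
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# With the server running:
#   resp = urllib.request.urlopen(req)
#   print(json.loads(resp.read())["response"])
```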

▸ SETUP GUIDE

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

pip install fitmyllm, then run fitmyllm.
Auto-detect GPU · Live tok/s in chat · Speed benchmarks · 9 inference engines

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.

GPU                    VRAM   BANDWIDTH   VENDOR   PRICE
NVIDIA RTX 4060        8 GB   272 GB/s    NVIDIA   $299
NVIDIA RTX 3070 Ti     8 GB   608 GB/s    NVIDIA   $499
NVIDIA RTX 3070        8 GB   448 GB/s    NVIDIA   $325
NVIDIA RTX 3060 Ti     8 GB   448 GB/s    NVIDIA   $250
NVIDIA RTX 3050 8GB    8 GB   224 GB/s    NVIDIA   $249
AMD RX 7600            8 GB   288 GB/s    AMD      $269
AMD RX 6650 XT         8 GB   280 GB/s    AMD      $399
Intel Arc A750         8 GB   512 GB/s    INTEL    $199
Apple M1 (8GB)         8 GB   68 GB/s     APPLE    $499
Apple M2 (8GB)         8 GB   100 GB/s    APPLE    $599
Apple M3 (8GB)         8 GB   100 GB/s    APPLE    $599
NVIDIA RTX 2080        8 GB   448 GB/s    NVIDIA   $260
NVIDIA RTX 2070        8 GB   448 GB/s    NVIDIA   $200
NVIDIA GTX 1080        8 GB   320 GB/s    NVIDIA   $130
NVIDIA GTX 1070 Ti     8 GB   256 GB/s    NVIDIA   $120
NVIDIA GTX 1070        8 GB   256 GB/s    NVIDIA   $100
NVIDIA RTX 3060 8GB    8 GB   224 GB/s    NVIDIA   $280
AMD RX 6600 XT         8 GB   256 GB/s    AMD      $200
AMD RX 6600            8 GB   224 GB/s    AMD      $165
AMD RX 5700 XT         8 GB   448 GB/s    AMD      $150
AMD RX 5700            8 GB   448 GB/s    AMD      $130
Intel Arc A580         8 GB   512 GB/s    INTEL    $179
NVIDIA RTX 5060        8 GB   448 GB/s    NVIDIA   $299
NVIDIA Tesla K8        8 GB   160 GB/s    NVIDIA
NVIDIA Tesla M60       8 GB   160 GB/s    NVIDIA
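Generation speed on these cards is roughly memory-bandwidth bound: each decoded token streams the full weight file once. A back-of-the-envelope sketch (the 0.6 efficiency factor is an assumption, not this site's estimator):

```python
def est_tokens_per_sec(bandwidth_gbs, model_gb, efficiency=0.6):
    # Decode is memory-bandwidth bound: every token reads all weights,
    # so the ceiling is bandwidth / model size. efficiency is an assumed
    # real-world fudge factor, not a measured value.
    return bandwidth_gbs / model_gb * efficiency

# RTX 4060 (272 GB/s) vs Apple M1 (68 GB/s) on the 7.8 GB Q4_K_M file:
for bw in (272, 68):
    print(f"{bw} GB/s -> ~{est_tokens_per_sec(bw, 7.8):.0f} tok/s")
```

This is why the 608 GB/s RTX 3070 Ti should decode noticeably faster than the 224 GB/s RTX 3050 despite identical VRAM.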

Find the best GPU for TranslateGemma 12B

Build Hardware for TranslateGemma 12B


▸ SPEC SHEET

TranslateGemma 12B: 12B Dense.

▸ SPECIFICATIONS
PARAMETERS: 12B
ARCHITECTURE: Dense Transformer
CONTEXT LENGTH: 128K tokens
CAPABILITIES: chat, multilingual, vision
RELEASE DATE: 2026-01-13
PROVIDER: Google
FAMILY: gemma
▸ VRAM REQUIREMENTS
QUANT    BPW    VRAM      QUALITY
Q3_K_M   4      6.5 GB    88%
Q3_K_L   4.3    6.9 GB    90%
IQ4_XS   4.46   7.2 GB    92%
Q4_K_S   4.67   7.5 GB    93%
Q4_K_M   4.89   7.8 GB    94%
Q5_K_S   5.57   8.8 GB    96%
Q5_K_M   5.7    9.0 GB    96%
Q6_K     6.56   10.3 GB   97%
Q8_0     8.5    13.2 GB   100%
FP16     16     24.5 GB   100%
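Given a VRAM budget, the table above reduces quant selection to picking the highest quality that fits. A hedged sketch (the 0.5 GB headroom for context and display overhead is an assumption):

```python
# (quant, VRAM GB, quality %) rows from the table above.
QUANTS = [
    ("Q3_K_M", 6.5, 88), ("Q3_K_L", 6.9, 90), ("IQ4_XS", 7.2, 92),
    ("Q4_K_S", 7.5, 93), ("Q4_K_M", 7.8, 94), ("Q5_K_S", 8.8, 96),
    ("Q5_K_M", 9.0, 96), ("Q6_K", 10.3, 97), ("Q8_0", 13.2, 100),
    ("FP16", 24.5, 100),
]

def best_quant(vram_gb, headroom_gb=0.5):
    # Highest-quality quant that fits the budget after reserving
    # headroom (an assumed allowance for context and display).
    fits = [q for q in QUANTS if q[1] + headroom_gb <= vram_gb]
    return max(fits, key=lambda q: q[2]) if fits else None

print(best_quant(8))   # an 8 GB card from the list above
print(best_quant(24))
```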
§ 01 BENCHMARK SCORES

MMLU-PRO   37.4
MATH       23.3
IFEval     80.5
BBH        44.2
GPQA       12.8
MUSR       12.2
§ 02 RUN COMMAND

Run TranslateGemma 12B locally with Ollama — needs 7.8 GB VRAM at Q4_K_M:

$ ollama run translategemma