Google / gemma-3-12b

Capabilities: chat, vision

Parameters: 12B
Context length: 128K
Benchmarks: 21
Quantizations: 10
Architecture: Dense
Released: 2024-06-27
Layers: 48
KV Heads: 8
Head Dim: 256
Family: gemma
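The per-layer figures above also determine how fast the KV cache grows: each cached token stores one key and one value vector per layer, so the cache costs 2 × layers × KV heads × head dim × bytes per element, per token. A quick sketch using the numbers above, assuming an fp16 (2-byte) cache:

```shell
#!/bin/sh
# KV-cache cost per token: 2 (K and V) x layers x KV heads x head dim x bytes,
# assuming an fp16 (2-byte) cache; engines that quantize the cache use less.
layers=48; kv_heads=8; head_dim=256; bytes=2; tokens=131072   # 128K context
per_token=$((2 * layers * kv_heads * head_dim * bytes))       # bytes per cached token
total=$((per_token * tokens))
echo "KV cache: $((per_token / 1024)) KiB/token, $((total / 1024 / 1024 / 1024)) GiB at 128K"
```

At the full 128K window the fp16 cache alone (48 GiB) dwarfs the quantized weights, which is why long-context runs need far more VRAM than weight-only tables suggest.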

Quantization Options

Quant     Bits   VRAM      Quality
Q3_K_M    4.0    6.5 GB    low
Q3_K_L    4.3    6.9 GB    moderate
IQ4_XS    4.46   7.2 GB    moderate
Q4_K_S    4.67   7.5 GB    moderate
Q4_K_M    4.89   7.8 GB    good
Q5_K_S    5.57   8.8 GB    good
Q5_K_M    5.7    9.0 GB    good
Q6_K      6.56   10.3 GB   excellent
Q8_0      8.5    13.2 GB   lossless
FP16      16     24.5 GB   lossless
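The VRAM column follows a simple rule of thumb: weight memory is roughly parameters × bits per weight / 8, and the table's figures add some headroom on top for activations and runtime buffers. A sketch of the calculation for Q4_K_M:

```shell
#!/bin/sh
# Rough weight memory: parameters x bits-per-weight / 8. The table's VRAM
# figures are a bit higher because they include runtime overhead.
params_b=12    # billions of parameters
bpw=4.89       # bits per weight at Q4_K_M
awk -v p="$params_b" -v b="$bpw" \
    'BEGIN { printf "~%.1f GB of weights before overhead\n", p * b / 8 }'
```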




Benchmarks (21)

Arena Elo: 1337
MATH-500: 85.3
IFEval: 80.5
MMMU: 59.6
BBH: 44.2
MMLU-PRO: 37.4
IFBench: 36.7
GPQA Diamond: 34.9
MATH: 23.3
AIME: 18.3
AA Math: 18.3
SciCode: 17.4
LiveCodeBench: 13.7
GPQA: 12.8
MUSR: 12.2
τ²-Bench: 10.8
AA Intelligence: 8.8
AA Long Context: 6.7
AA Coding: 6.3
HLE: 4.8
Terminal-Bench: 0.8

Run this model

Ollama · easiest way to get started (beginners)

$ curl -fsSL https://ollama.com/install.sh | sh
$ ollama run gemma3:12b-it-q4_K_M

Downloads and runs automatically. Add --verbose for speed stats.
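Once the model is pulled, Ollama also serves a local HTTP API (port 11434 by default) that scripts and other programs can call; the prompt below is just an illustration:

```shell
#!/bin/sh
# Query the local Ollama server's generate endpoint. Requires `ollama` to be
# running with the model already pulled; stream=false returns one JSON reply.
curl -s http://localhost:11434/api/generate -d '{
  "model": "gemma3:12b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```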

▸ SETUP GUIDE

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

Install with pip install fitmyllm, then run fitmyllm.
Auto-detect GPU · Live tok/s in chat · Speed benchmarks · 9 inference engines

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.

GPU                   VRAM    Bandwidth   Price
NVIDIA RTX 4060       8 GB    272 GB/s    $299
NVIDIA RTX 3070 Ti    8 GB    608 GB/s    $499
NVIDIA RTX 3070       8 GB    448 GB/s    $325
NVIDIA RTX 3060 Ti    8 GB    448 GB/s    $250
NVIDIA RTX 3050 8GB   8 GB    224 GB/s    $249
AMD RX 7600           8 GB    288 GB/s    $269
AMD RX 6650 XT        8 GB    280 GB/s    $399
Intel Arc A750        8 GB    512 GB/s    $199
Apple M1 (8GB)        8 GB    68 GB/s     $499
Apple M2 (8GB)        8 GB    100 GB/s    $599
Apple M3 (8GB)        8 GB    100 GB/s    $599
NVIDIA RTX 2080       8 GB    448 GB/s    $260
NVIDIA RTX 2070       8 GB    448 GB/s    $200
NVIDIA GTX 1080       8 GB    320 GB/s    $130
NVIDIA GTX 1070 Ti    8 GB    256 GB/s    $120
NVIDIA GTX 1070       8 GB    256 GB/s    $100
NVIDIA RTX 3060 8GB   8 GB    224 GB/s    $280
AMD RX 6600 XT        8 GB    256 GB/s    $200
AMD RX 6600           8 GB    224 GB/s    $165
AMD RX 5700 XT        8 GB    448 GB/s    $150
AMD RX 5700           8 GB    448 GB/s    $130
Intel Arc A580        8 GB    512 GB/s    $179
NVIDIA RTX 5060       8 GB    448 GB/s    $299
NVIDIA Tesla K8       8 GB    160 GB/s    n/a
NVIDIA Tesla M60      8 GB    160 GB/s    n/a
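The bandwidth column matters because decode speed is roughly memory-bandwidth-bound: generating each token reads every weight once, so bandwidth divided by model size gives a theoretical ceiling on tok/s. A sketch using the RTX 3070's 448 GB/s and the 7.8 GB Q4_K_M weights:

```shell
#!/bin/sh
# Back-of-envelope decode speed: each new token reads all weights once, so
# bandwidth / model size caps tok/s. Real throughput lands well below this.
bw_gbs=448      # memory bandwidth, e.g. RTX 3070 from the list above
model_gb=7.8    # Q4_K_M weights
awk -v bw="$bw_gbs" -v m="$model_gb" \
    'BEGIN { printf "~%.0f tok/s theoretical ceiling\n", bw / m }'
```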


Read the full model card for detailed information about this model.

▸ SPEC SHEET

gemma-3-12b: a 12B-parameter dense model.

▸ SPECIFICATIONS
PARAMETERS
12B
ARCHITECTURE
Dense Transformer
CONTEXT LENGTH
128K tokens
CAPABILITIES
chat, vision
RELEASE DATE
2024-06-27
PROVIDER
Google
FAMILY
gemma
▸ VRAM REQUIREMENTS
QUANT     BPW    VRAM      QUALITY
Q3_K_M    4.0    6.5 GB    88%
Q3_K_L    4.3    6.9 GB    90%
IQ4_XS    4.46   7.2 GB    92%
Q4_K_S    4.67   7.5 GB    93%
Q4_K_M    4.89   7.8 GB    94%
Q5_K_S    5.57   8.8 GB    96%
Q5_K_M    5.7    9.0 GB    96%
Q6_K      6.56   10.3 GB   97%
Q8_0      8.5    13.2 GB   100%
FP16      16     24.5 GB   100%
§ 02 RUN COMMAND

Run gemma-3-12b locally with Ollama — needs 7.8 GB VRAM at Q4_K_M:

$ ollama run gemma3:12b