
Google Gemma 4 31B

Gemma 4 31B — Google's most capable open model. Dense 31B with 256K context, vision, and thinking mode.

Tags: chat, coding, reasoning, multilingual, vision
Parameters: 31B
Context length: 256K
Benchmarks: 7
Quantizations: 14
HF downloads: 500K
Architecture: Dense
Released: 2026-04-02
Layers: 60
KV Heads: 16
Head Dim: 256
Family: gemma
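The attention geometry above fixes the KV-cache cost per token. A back-of-the-envelope sketch, assuming an FP16 (2-byte) cache and the standard layout of 2 (K and V) x layers x KV heads x head dim; actual runtimes may quantize or page the cache, which lowers this:

```python
# KV-cache size estimate from the spec table above (assumption: FP16 cache).
LAYERS, KV_HEADS, HEAD_DIM = 60, 16, 256
BYTES_PER_ELEM = 2  # FP16

# K and V, one entry per layer per KV head per head dimension.
bytes_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES_PER_ELEM
print(f"{bytes_per_token / 1e6:.2f} MB per token")

# At the full 256K context the cache dwarfs the weights, which is
# why long-context runs usually quantize or offload the KV cache.
full_context_gb = 256_000 * bytes_per_token / 1e9
print(f"~{full_context_gb:.0f} GB at 256K tokens")
```

At roughly 1 MB per token, even a 24 GB card holds only ~24K tokens of FP16 KV cache alongside nothing else, so the advertised 256K context depends heavily on the runtime's cache handling.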

Quantization Options

Quant     Bits   VRAM      Quality
IQ3_XXS   3.25   13.1 GB   low
IQ3_XS    3.5    14.1 GB   low
Q3_K_S    3.64   14.6 GB   low
IQ3_M     3.76   15.1 GB   low
Q3_K_M    4      16.0 GB   low
Q3_K_L    4.3    17.2 GB   moderate
IQ4_XS    4.46   17.8 GB   moderate
Q4_K_S    4.67   18.6 GB   moderate
Q4_K_M    4.89   19.4 GB   good
Q5_K_S    5.57   22.1 GB   good
Q5_K_M    5.7    22.6 GB   good
Q6_K      6.56   25.9 GB   excellent
Q8_0      8.5    33.4 GB   lossless
FP16      16     62.5 GB   lossless
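The VRAM column tracks weight size plus a few hundred MB of runtime overhead. A rough reconstruction, assuming exactly 31e9 parameters (the headline count is rounded) and decimal GB:

```python
# Rough weight-memory estimate: params x bits-per-weight / 8 bytes.
# Assumption: exactly 31e9 parameters. The table's VRAM figures run
# ~0.4-0.5 GB higher, covering runtime buffers and overhead.
PARAMS = 31e9

def weight_gb(bpw: float) -> float:
    return PARAMS * bpw / 8 / 1e9  # decimal GB

for quant, bpw in [("Q4_K_M", 4.89), ("Q6_K", 6.56), ("Q8_0", 8.5)]:
    print(f"{quant}: ~{weight_gb(bpw):.1f} GB of weights")
```

For Q4_K_M this gives about 18.9 GB of weights against the table's 19.4 GB, so the table appears to fold in a roughly constant overhead on top of the raw weight bytes.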


Benchmarks (7)

Arena Elo: 1452
AIME: 89.2
MMLU-PRO: 85.2
GPQA Diamond: 84.3
LiveCodeBench: 80.0
BBH: 74.4
HLE: 19.5

Run this model

Easiest way to get started (see the Ollama docs):

curl -fsSL https://ollama.com/install.sh | sh
ollama run gemma4:31b

Downloads and runs automatically. Add --verbose for speed stats.
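Once the server is running, it also exposes a local REST API. A minimal sketch, assuming Ollama's standard /api/generate endpoint on the default port 11434 and the gemma4:31b tag from this page:

```python
# Minimal client for Ollama's local REST API.
# Assumptions: Ollama's default port 11434 and the gemma4:31b tag above.
import json
import urllib.request

def build_request(prompt: str, model: str = "gemma4:31b") -> bytes:
    """Serialize a non-streaming /api/generate request body."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(prompt: str) -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=build_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Non-streaming responses carry the full text in "response".
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama server with the model pulled):
# print(generate("Why is the sky blue?"))
```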


GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.

AMD RX 7900 XT: 20 GB VRAM, 800 GB/s, $849
NVIDIA RTX 4000 Ada 20GB: 20 GB VRAM, 432 GB/s, $1250
NVIDIA A10M: 20 GB VRAM, 500 GB/s
NVIDIA GeForce RTX 3080 Ti 20 GB: 20 GB VRAM, 760 GB/s, $1199
AMD Radeon RX 7900 XT: 20 GB VRAM, 800 GB/s, $899
NVIDIA RTX 4000 Ada Generation: 20 GB VRAM, 360 GB/s, $1250
NVIDIA RTX 4000 SFF Ada Generation: 20 GB VRAM, 280 GB/s, $1250
NVIDIA RTX A4500: 20 GB VRAM, 640 GB/s, $2000
NVIDIA RTX 4090: 24 GB VRAM, 1008 GB/s, $1599
NVIDIA RTX 3090 Ti: 24 GB VRAM, 1008 GB/s, $999
NVIDIA RTX 3090: 24 GB VRAM, 936 GB/s, $850
AMD RX 7900 XTX: 24 GB VRAM, 960 GB/s, $999
Apple M4 Pro (24GB): 24 GB VRAM, 273 GB/s, $1399
NVIDIA L4 24GB: 24 GB VRAM, 300 GB/s, $2500
NVIDIA A10 24GB: 24 GB VRAM, 600 GB/s, $3500
Apple M2 (24GB): 24 GB VRAM, 100 GB/s, $999
Apple M3 (24GB): 24 GB VRAM, 100 GB/s, $999
Apple M4 (24GB): 24 GB VRAM, 120 GB/s, $699
NVIDIA Tesla M40 24 GB: 24 GB VRAM, 288 GB/s
NVIDIA Tesla P10: 24 GB VRAM, 694 GB/s
NVIDIA Tesla P40: 24 GB VRAM, 347 GB/s
NVIDIA Quadro RTX 6000: 24 GB VRAM, 672 GB/s, $4000
NVIDIA Quadro RTX 6000 Passive: 24 GB VRAM, 624 GB/s, $4000
NVIDIA GeForce RTX 3090: 24 GB VRAM, 936 GB/s, $1499
NVIDIA A10 PCIe: 24 GB VRAM, 600 GB/s
NVIDIA A10G: 24 GB VRAM, 600 GB/s
NVIDIA RTX A5000: 24 GB VRAM, 768 GB/s, $2500
NVIDIA GeForce RTX 3090 Ti: 24 GB VRAM, 1010 GB/s, $1999
NVIDIA GeForce RTX 4090: 24 GB VRAM, 1010 GB/s, $1599
NVIDIA L40 CNX: 24 GB VRAM, 864 GB/s, $5000
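The bandwidth figures matter because single-stream decoding is largely memory-bound: every generated token streams the whole quantized model through the memory bus, so tokens/s is bounded above by bandwidth divided by model size. A back-of-the-envelope sketch using the Q4_K_M footprint of 19.4 GB (real throughput lands below this once kernel efficiency and KV-cache traffic are counted):

```python
# Bandwidth-bound decode estimate: tok/s <= bandwidth / model bytes.
# This is an upper bound only; attention compute, KV-cache reads,
# and kernel efficiency all push real throughput lower.
MODEL_GB = 19.4  # Q4_K_M footprint from the table above

def max_tokens_per_s(bandwidth_gb_s: float) -> float:
    return bandwidth_gb_s / MODEL_GB

for name, bw in [("RTX 4090", 1008), ("RX 7900 XT", 800), ("Apple M4 Pro", 273)]:
    print(f"{name}: <= {max_tokens_per_s(bw):.0f} tok/s")
```

This is why two 24 GB cards in the list can differ by 3-4x in practical generation speed despite identical VRAM: the 1008 GB/s RTX 4090 tops out near 52 tok/s on this model, while the 273 GB/s M4 Pro tops out near 14.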


Gemma 4 31B: 31B-Parameter Dense LLM

Model Specifications

Parameters: 31B
Architecture: Dense Transformer
Context Length: 256K tokens
Capabilities: chat, coding, reasoning, multilingual, vision
Release Date: 2026-04-02
Provider: Google
Family: gemma

VRAM Requirements

Quantization   BPW    VRAM      Quality
IQ3_XXS        3.25   13.1 GB   82%
IQ3_XS         3.5    14.1 GB   84%
Q3_K_S         3.64   14.6 GB   85%
IQ3_M          3.76   15.1 GB   86%
Q3_K_M         4      16.0 GB   88%
Q3_K_L         4.3    17.2 GB   90%
IQ4_XS         4.46   17.8 GB   92%
Q4_K_S         4.67   18.6 GB   93%
Q4_K_M         4.89   19.4 GB   94%
Q5_K_S         5.57   22.1 GB   96%
Q5_K_M         5.7    22.6 GB   96%
Q6_K           6.56   25.9 GB   97%
Q8_0           8.5    33.4 GB   100%
FP16           16     62.5 GB   100%
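Read as a picker, the table says: take the highest-quality quantization whose VRAM figure fits your card. A small sketch over a subset of the rows above:

```python
# Pick the best quantization that fits a VRAM budget, using
# (name, VRAM GB, quality %) rows from the table above (subset).
QUANTS = [
    ("IQ3_XXS", 13.1, 82), ("Q3_K_M", 16.0, 88), ("Q4_K_M", 19.4, 94),
    ("Q5_K_M", 22.6, 96), ("Q6_K", 25.9, 97), ("Q8_0", 33.4, 100),
]

def best_quant(vram_gb: float):
    """Highest-quality entry that fits, or None if nothing does."""
    fits = [q for q in QUANTS if q[1] <= vram_gb]
    return max(fits, key=lambda q: q[2]) if fits else None

print(best_quant(24))  # a 24 GB card lands on Q5_K_M
```

Note that the budget should leave headroom beyond the weights themselves: the KV cache and runtime buffers grow with context length, so a card that exactly matches a row's VRAM figure may still fall short in practice.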

Benchmark Scores

MMLU-PRO: 85.2
BBH: 74.4
Arena Elo: 1452.0
GPQA Diamond: 84.3
HLE: 19.5
LiveCodeBench: 80.0
AIME: 89.2

How to Run Gemma 4 31B

Run Gemma 4 31B locally with Ollama (needs 19.4 GB VRAM at Q4_K_M):

ollama run gemma4:31b

Compatible GPUs (30)

GPUs that can run Gemma 4 31B at Q4_K_M quantization: