
Granite 3.3 2B

Capabilities: chat, reasoning, coding, tool_use

Parameters: 2B
Context length: 125K
Benchmarks: 1
Quantizations: 6
Architecture: Dense
Released: 2025-04-16
Layers: 28
KV heads: 8
Head dim: 128
Family: granite

Quantization Options

Quant     Bits    VRAM      Quality
Q4_K_M    4.89    1.7 GB    good
Q5_K_S    5.57    1.9 GB    good
Q5_K_M    5.7     1.9 GB    good
Q6_K      6.56    2.1 GB    excellent
Q8_0      8.5     2.6 GB    lossless
FP16      16      4.5 GB    lossless
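
The VRAM figures above roughly follow weight bytes (parameters times bits per weight) plus overhead for the KV cache and runtime buffers. Below is a minimal sketch of that arithmetic; the effective parameter count and the flat overhead are illustrative assumptions chosen to land near the table values, not published figures.

# Rough VRAM estimate: weight bytes at a given bits-per-weight (BPW) plus a
# flat allowance for KV cache and runtime buffers. The 2.25e9 parameter count
# and 0.3 GB overhead are illustrative assumptions, not published figures.
def estimate_vram_gb(params: float, bpw: float, overhead_gb: float = 0.3) -> float:
    weights_gb = params * bpw / 8 / 1e9  # bits -> bytes -> GB
    return weights_gb + overhead_gb

for quant, bpw in [("Q4_K_M", 4.89), ("Q6_K", 6.56), ("Q8_0", 8.5), ("FP16", 16.0)]:
    print(f"{quant}: ~{estimate_vram_gb(2.25e9, bpw):.1f} GB")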


Benchmarks (1)

BigCodeBench: 20.5

Run this model

Easiest way to get started:
curl -fsSL https://ollama.com/install.sh | sh
$ ollama run granite3.3:2b

Downloads and runs automatically. Add --verbose for speed stats.
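
Once the model is pulled, Ollama also serves a local REST API on its default port 11434. A minimal sketch of querying it and deriving generation speed from the response metadata; the prompt is arbitrary and the endpoint is assumed to be the local default.

# Minimal sketch: query the local Ollama server's REST API (default port 11434)
# and compute tokens per second from the response metadata.
import requests

r = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "granite3.3:2b",
        "prompt": "Explain tail recursion in two sentences.",
        "stream": False,  # single JSON response instead of a token stream
    },
    timeout=300,
)
r.raise_for_status()
data = r.json()
print(data["response"])
# eval_count is generated tokens; eval_duration is in nanoseconds.
print(f"{data['eval_count'] / (data['eval_duration'] / 1e9):.1f} tokens/s")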


GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.


Granite 3.3 2B: 2B-Parameter Dense LLM

Model Specifications

Parameters: 2B
Architecture: Dense Transformer
Context Length: 125K tokens
Capabilities: chat, reasoning, coding, tool_use (see the tool-calling sketch after this list)
Release Date: 2025-04-16
Family: granite
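
Since tool_use is listed among the capabilities, here is a minimal tool-calling sketch using the ollama Python client (pip install ollama). The get_weather schema and the city in the prompt are hypothetical, and whether the model actually emits a tool call depends on the prompt and the model.

# Hypothetical tool schema passed to the model; Ollama forwards it as a
# "function" tool definition and the model may respond with tool_calls.
import ollama

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, not a real API
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = ollama.chat(
    model="granite3.3:2b",
    messages=[{"role": "user", "content": "What's the weather in Zurich right now?"}],
    tools=tools,
)

# If the model decided to call the tool, the call(s) show up on the message.
for call in response.message.tool_calls or []:
    print(call.function.name, call.function.arguments)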

VRAM Requirements

Quantization   BPW     VRAM      Quality
Q4_K_M         4.89    1.7 GB    94%
Q5_K_S         5.57    1.9 GB    96%
Q5_K_M         5.7     1.9 GB    96%
Q6_K           6.56    2.1 GB    97%
Q8_0           8.5     2.6 GB    100%
FP16           16      4.5 GB    100%
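
Given a VRAM budget, a common rule of thumb is to pick the highest-quality quantization that still fits with some headroom left for the KV cache. A minimal sketch over the table above; the 10% headroom margin is an assumption.

# Pick the largest quantization from the VRAM table that fits a budget,
# keeping headroom for context/KV cache (the 10% margin is an assumption).
QUANTS = [  # (name, required VRAM in GB), smallest to largest, from the table above
    ("Q4_K_M", 1.7), ("Q5_K_S", 1.9), ("Q5_K_M", 1.9),
    ("Q6_K", 2.1), ("Q8_0", 2.6), ("FP16", 4.5),
]

def best_quant(vram_gb: float, headroom: float = 0.10):
    usable = vram_gb * (1 - headroom)
    fitting = [name for name, need in QUANTS if need <= usable]
    return fitting[-1] if fitting else None

print(best_quant(4.0))   # Q8_0 fits in ~3.6 GB usable
print(best_quant(8.0))   # FP16 fits comfortably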

Benchmark Scores

BigCodeBench: 20.5

How to Run Granite 3.3 2B

Run Granite 3.3 2B locally with Ollama (needs 1.7 GB VRAM at Q4_K_M):

ollama run granite3.3:2b

Compatible GPUs (30)

GPUs that can run Granite 3.3 2B at Q4_K_M quantization: