
SmolLM2 135M

HuggingFace's tiny model for edge devices and on-device inference.

Capabilities: chat · Tool Use

Parameters: 0.135B
Context length: 2K
Benchmarks: 6
Quantizations: 6
HF downloads: 100K
Architecture: Dense
Released: 2024-11-21
Layers: 30
KV Heads: 3
Head Dim: 64
Family: smollm
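The layer and head counts above are enough to estimate the fp16 KV-cache footprint at the full 2K context. A minimal back-of-the-envelope sketch (the 2-bytes-per-element fp16 assumption and the generic formula are mine, not figures from this page):

```python
# Rough fp16 KV-cache size for SmolLM2 135M at full context.
# Formula: 2 (K and V) * layers * kv_heads * head_dim * ctx_len * bytes/elem.
layers, kv_heads, head_dim = 30, 3, 64  # from the spec sheet above
ctx_len = 2048                          # 2K context length
bytes_per_elem = 2                      # fp16

kv_bytes = 2 * layers * kv_heads * head_dim * ctx_len * bytes_per_elem
kv_mb = kv_bytes / 1e6
print(f"KV cache at 2K context: ~{kv_mb:.0f} MB")  # roughly 47 MB
```

At this scale the KV cache is small next to runtime overhead, which is why the VRAM column below barely moves across quantizations.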

Quantization Options

Quant    Bits (bpw)   VRAM     Quality
Q4_K_M   4.89         0.6 GB   good
Q5_K_S   5.57         0.6 GB   good
Q5_K_M   5.70         0.6 GB   good
Q6_K     6.56         0.6 GB   excellent
Q8_0     8.50         0.6 GB   lossless
FP16     16.00        0.8 GB   lossless
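The VRAM column can be sanity-checked from the bits-per-weight figures: weight memory is roughly parameters × bpw / 8, and the listed totals then add KV cache and runtime overhead on top. A hedged sketch of that arithmetic (the interpretation of the overhead gap is an assumption, not a measured breakdown):

```python
def weight_gb(params_b: float, bpw: float) -> float:
    """Approximate weight memory in GB: billions of params * bits per weight / 8."""
    return params_b * bpw / 8

# SmolLM2 135M at Q4_K_M: 0.135B params * 4.89 bpw / 8 ≈ 0.08 GB of weights;
# the table's 0.6 GB presumably also covers KV cache and runtime overhead.
print(f"Q4_K_M weights: ~{weight_gb(0.135, 4.89):.2f} GB")
print(f"FP16 weights:   ~{weight_gb(0.135, 16):.2f} GB")
```

The same function applied to larger models gives a quick first-pass fit check against a GPU's VRAM.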




Benchmarks (6)

IFEval: 21.2
MUSR: 13.3
BBH: 3.3
MMLU-PRO: 1.4
MATH: 1.4
GPQA: 1.1

Run this model

Ollama: the easiest way to get started (beginner-friendly):

curl -fsSL https://ollama.com/install.sh | sh
ollama run smollm:0.135b-q4_K_M

Tag may need adjustment — check ollama.com/library/smollm for available tags.
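Once the Ollama daemon is running, the model can also be queried over its local REST API (POST /api/generate on port 11434). A minimal sketch in Python's standard library; the model tag mirrors the command above and may need the same adjustment:

```python
import json
import urllib.request

def build_generate_payload(model: str, prompt: str) -> dict:
    """Request body for Ollama's POST /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "smollm:0.135b-q4_K_M") -> str:
    """Send a one-shot prompt to a locally running Ollama server."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(build_generate_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Non-streaming responses return one JSON object with a "response" field.
        return json.loads(resp.read())["response"]
```

Calling generate(...) requires the Ollama server from the install step to be running locally.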

▸ SETUP GUIDE

Auto-setup with the fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting with zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

pip install fitmyllm, then run fitmyllm.

Features: auto-detect GPU · live tok/s in chat · speed benchmarks · 9 inference engines

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.


▸ SPEC SHEET

SmolLM2 135M: 0.135B, Dense.

▸ SPECIFICATIONS
PARAMETERS: 0.135B
ARCHITECTURE: Dense Transformer
CONTEXT LENGTH: 2K tokens
CAPABILITIES: chat
RELEASE DATE: 2024-11-21
PROVIDER: HuggingFace
FAMILY: smollm
▸ VRAM REQUIREMENTS
Quant    BPW     VRAM     Quality
Q4_K_M   4.89    0.6 GB   94%
Q5_K_S   5.57    0.6 GB   96%
Q5_K_M   5.70    0.6 GB   96%
Q6_K     6.56    0.6 GB   97%
Q8_0     8.50    0.6 GB   100%
FP16     16.00   0.8 GB   100%