
Liquid AI LFM2.5-1.2B-Thinking

Liquid AI LFM 2.5 — novel liquid foundation model with reasoning capabilities at 1.2B scale.

chat · reasoning · tool use · thinking
Parameters: 1.2B
Context length: 122K
Benchmarks: 8
Quantizations: 6
HF downloads: 33K
Architecture: Dense
Released: 2026-01-20
Layers: 16
KV Heads: 8
Head Dim: 64
Family: lfm
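
The layer, KV-head, and head-dimension figures above let you ballpark how much extra VRAM a long context costs on top of the weights. A minimal sketch, assuming every one of the 16 layers keeps a standard fp16 K/V cache (an upper bound; hybrid LFM-style blocks and quantized caches need less):

# Back-of-the-envelope KV-cache estimate from the specs listed above.
# Assumes a plain fp16 attention cache in all 16 layers (an upper bound).
def kv_cache_gib(layers=16, kv_heads=8, head_dim=64, tokens=122_000, dtype_bytes=2):
    per_token = 2 * layers * kv_heads * head_dim * dtype_bytes  # K and V per token
    return per_token * tokens / 2**30

print(f"~{kv_cache_gib():.1f} GiB of KV cache at the full 122K context")  # ~3.7 GiB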

Quantization Options

Quant     Bits    VRAM      Quality
Q4_K_M    4.89    1.2 GB    good
Q5_K_S    5.57    1.3 GB    good
Q5_K_M    5.7     1.3 GB    good
Q6_K      6.56    1.5 GB    excellent
Q8_0      8.5     1.8 GB    lossless
FP16      16      2.9 GB    lossless
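
The VRAM column roughly tracks parameter count times bits per weight, plus runtime overhead. A quick sanity check; the gap between raw weight size and the listed figures is assumed to come from runtime buffers, a short-context KV cache, and higher-precision embedding/output layers:

# Rough weight-memory estimate: parameters * bits-per-weight / 8 bits per byte.
params = 1.2e9
for quant, bpw in [("Q4_K_M", 4.89), ("Q6_K", 6.56), ("Q8_0", 8.5), ("FP16", 16)]:
    weights_gb = params * bpw / 8 / 1e9
    print(f"{quant}: ~{weights_gb:.2f} GB of weights, before runtime overhead")
# Q4_K_M works out to ~0.73 GB of weights; the listed 1.2 GB includes overhead.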



Benchmarks (8)

IFBench: 41.8
MMLU-PRO: 37.9
GPQA Diamond: 37.9
τ²-Bench: 19.6
AA Intelligence: 8.1
HLE: 6.1
SciCode: 4.2
AA Coding: 1.4

Run this model

Ollama: the easiest way to get started (beginners)
curl -fsSL https://ollama.com/install.sh | sh
$ ollama run lfm2.5-thinking:q4_k_m

Downloads and runs automatically. Add --verbose for speed stats.
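
Once the model is pulled, the local Ollama server also exposes an HTTP API on port 11434, which makes it easy to script. A minimal sketch in Python, assuming the default port and the model tag used above:

import requests  # pip install requests

# Ask the local Ollama server for a single non-streamed completion.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "lfm2.5-thinking:q4_k_m",   # tag from the command above
        "prompt": "Explain KV caching in two sentences.",
        "stream": False,                      # one JSON object instead of a stream
    },
    timeout=300,
)
print(resp.json()["response"])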


Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

pip install fitmyllm, then run fitmyllm.
Auto-detect GPU · Live tok/s in chat · Speed benchmarks · 9 inference engines

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.

NVIDIA Tesla C870: 2 GB VRAM • 76.8 GB/s
NVIDIA Tesla D870: 2 GB VRAM • 76.8 GB/s
NVIDIA Tesla S870: 2 GB VRAM • 76.8 GB/s


▸ SPEC SHEET

LFM2.5-1.2B-Thinking: 1.2B Dense.

▸ SPECIFICATIONS
PARAMETERS: 1.2B
ARCHITECTURE: Dense Transformer
CONTEXT LENGTH: 122K tokens
CAPABILITIES: chat, reasoning, tool_use
RELEASE DATE: 2026-01-20
PROVIDER: Liquid AI
FAMILY: lfm
▸ VRAM REQUIREMENTS
QUANT     BPW     VRAM      QUALITY
Q4_K_M    4.89    1.2 GB    94%
Q5_K_S    5.57    1.3 GB    96%
Q5_K_M    5.7     1.3 GB    96%
Q6_K      6.56    1.5 GB    97%
Q8_0      8.5     1.8 GB    100%
FP16      16      2.9 GB    100%
§ 01 BENCHMARK SCORES
MMLU-PRO: 37.9
GPQA Diamond: 37.9
HLE: 6.1
AA Intelligence: 8.1
AA Coding: 1.4
IFBench: 41.8
τ²-Bench: 19.6
SciCode: 4.2
§ 02 RUN COMMAND

Run LFM2.5-1.2B-Thinking locally with Ollama — needs 1.2 GB VRAM at Q4_K_M:

$ ollama run lfm2.5-thinking
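
For multi-turn use, the official ollama Python client wraps the same local server. A small sketch, assuming pip install ollama and the model tag from the command above; a thinking-style model may prepend its reasoning trace to the answer, depending on the chat template:

import ollama  # pip install ollama

# One-shot chat turn against the locally pulled model.
messages = [
    {"role": "user", "content": "A train leaves at 3:40 and arrives at 5:05. How long is the trip?"},
]
response = ollama.chat(model="lfm2.5-thinking", messages=messages)
print(response["message"]["content"])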