LFM2.5-1.2B-Thinking: a 1.2B-parameter dense reasoning model.
- PARAMETERS: 1.2B
- ARCHITECTURE: Dense Transformer
- CONTEXT LENGTH: 122K tokens
- CAPABILITIES: chat, reasoning, tool_use
- RELEASE DATE: 2026-01-20
- PROVIDER: Liquid AI
- FAMILY: lfm
| QUANT | BITS PER WEIGHT (BPW) | VRAM | QUALITY |
|---|---|---|---|
| Q4_K_M | 4.89 | 1.2 GB | 94% |
| Q5_K_S | 5.57 | 1.3 GB | 96% |
| Q5_K_M | 5.7 | 1.3 GB | 96% |
| Q6_K | 6.56 | 1.5 GB | 97% |
| Q8_0 | 8.5 | 1.8 GB | 100% |
| FP16 | 16 | 2.9 GB | 100% |
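The VRAM column follows from simple arithmetic: weight memory is roughly parameters × bits-per-weight / 8. A minimal sketch of that estimate, using the BPW figures from the table (the gap between these results and the table's VRAM numbers is runtime overhead such as the KV cache, which this sketch deliberately ignores):

```python
# Rough weight-memory estimate: params * bits-per-weight / 8 bytes.
# Real VRAM usage is higher because of KV cache and runtime overhead,
# which is why the table's figures exceed these estimates.

PARAMS = 1.2e9  # 1.2B parameters

def weight_gb(bpw: float) -> float:
    """Gigabytes needed for the weights alone at a given bits-per-weight."""
    return PARAMS * bpw / 8 / 1e9

for name, bpw in [("Q4_K_M", 4.89), ("Q8_0", 8.5), ("FP16", 16.0)]:
    print(f"{name}: {weight_gb(bpw):.2f} GB weights")
```

For example, Q4_K_M works out to about 0.73 GB of weights, leaving roughly 0.5 GB of the quoted 1.2 GB for cache and overhead.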
Run LFM2.5-1.2B-Thinking locally with Ollama; the Q4_K_M quant needs about 1.2 GB of VRAM:

```shell
ollama run lfm2.5-thinking
```
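Once the model is pulled, you can also talk to it programmatically over Ollama's local REST API. A minimal sketch using only the standard library, assuming Ollama is serving on its default port and the model name matches the `ollama run` command above (the prompt is illustrative):

```python
import json
import urllib.request

# Ollama's default local chat endpoint.
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_request(model: str, prompt: str) -> dict:
    # Minimal non-streaming payload for Ollama's /api/chat endpoint.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def chat(model: str, prompt: str) -> str:
    # Sends the request; requires a running Ollama server with the model pulled.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_chat_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

# Example call (needs `ollama serve` running):
# print(chat("lfm2.5-thinking", "Explain KV caching in one sentence."))
```

Setting `"stream": False` returns the full response in one JSON object instead of a stream of chunks, which keeps the client code short.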