Alibaba / Dense

Qwen3.5-0.8B

Capabilities: chat · thinking · tool use

Parameters: 0.9B
Context length: 256K
Benchmarks: 15
Quantizations: 6
HF downloads: 1.4M
Architecture: Dense
Released: 2025-06-01
Layers: 28
KV Heads: 2
Head Dim: 128
Family: qwen
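
The KV-cache footprint follows directly from the layer count, KV heads, and head dimension listed above. A minimal sketch, assuming an fp16 cache (2 bytes per value) and counting both the K and V tensors per layer; actual engines may add paging or quantize the cache:

```python
# KV-cache size for this model's listed shape: 28 layers, 2 KV heads,
# head dim 128. Per token: 2 (K and V) * layers * kv_heads * head_dim values.
def kv_cache_gb(ctx_tokens, layers=28, kv_heads=2, head_dim=128, bytes_per=2):
    per_token = 2 * layers * kv_heads * head_dim * bytes_per  # bytes/token
    return per_token * ctx_tokens / 1024**3

print(round(kv_cache_gb(8_192), 2))    # a typical chat context
print(round(kv_cache_gb(262_144), 2))  # the full 256K window -> ~7 GB
```

Note how cheap the small GQA configuration (2 KV heads) keeps short contexts, while the full 256K window still costs several gigabytes on its own.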

Quantization Options

Quant    Bits (BPW)   VRAM     Quality
Q4_K_M   4.89         1.0 GB   good
Q5_K_S   5.57         1.1 GB   good
Q5_K_M   5.7          1.1 GB   good
Q6_K     6.56         1.2 GB   excellent
Q8_0     8.5          1.4 GB   lossless
FP16     16           2.3 GB   lossless

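The BPW column converts to raw weight memory as params × bpw / 8 bytes. A quick sketch using the table's numbers; the table's VRAM figures run higher because they also budget for the KV cache and runtime overhead:

```python
# Weight-only memory from the BPW column: bytes = params * bpw / 8.
GIB = 1024**3

def weight_gb(params_b, bpw):
    """Quantized weight size in GiB for a params_b-billion-parameter model."""
    return params_b * 1e9 * bpw / 8 / GIB

for quant, bpw in [("Q4_K_M", 4.89), ("Q6_K", 6.56), ("FP16", 16)]:
    print(quant, round(weight_gb(0.9, bpw), 2))
```

For Q4_K_M this gives about 0.5 GiB of weights, so roughly half of the table's 1.0 GB figure is weights and the rest is cache and overhead.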



Benchmarks (15)

MUSR: 90.0
τ²-Bench: 65.2
MATH: 60.0
IFBench: 21.6
IFEval: 17.5
GPQA Diamond: 11.1
AA Intelligence: 10.5
BigCodeBench: 8.8
GPQA: 6.7
AA Long Context: 6.7
BBH: 3.4
SciCode: 2.9
MMLU-PRO: 2.4
HLE: 1.2
AA Coding: 0.0

Run this model

Easiest way to get started · Beginners
curl -fsSL https://ollama.com/install.sh | sh
$ ollama run qwen3:0.9b-instruct-q4_K_M

Downloads and runs automatically. Add --verbose for speed stats.
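Once the Ollama server is running, it also exposes a local REST API (default port 11434), so the model can be called from scripts. A minimal sketch against Ollama's documented /api/generate endpoint, reusing the model tag from the command above; it assumes the server is up and the model has been pulled:

```python
# Query a locally running Ollama server via its REST API.
# Assumes `ollama run`/`ollama serve` is already active on localhost:11434.
import json
import urllib.request

def generate(prompt, model="qwen3:0.9b-instruct-q4_K_M",
             host="http://localhost:11434"):
    payload = json.dumps({"model": model, "prompt": prompt,
                          "stream": False}).encode()
    req = urllib.request.Request(f"{host}/api/generate", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# print(generate("Why is the sky blue?"))  # uncomment with the server running
```

With "stream": False the server returns one JSON object instead of a token stream, which keeps the client code trivial.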

▸ SETUP GUIDE

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

Install with pip install fitmyllm, then run fitmyllm.
Auto-detect GPU · Live tok/s in chat · Speed benchmarks · 9 inference engines

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.

NVIDIA Tesla C870: 2 GB VRAM • 76.8 GB/s
NVIDIA Tesla D870: 2 GB VRAM • 76.8 GB/s
NVIDIA Tesla S870: 2 GB VRAM • 76.8 GB/s


▸ SPEC SHEET

Qwen3.5-0.8B: 0.9B Dense.

▸ SPECIFICATIONS
PARAMETERS: 0.9B
ARCHITECTURE: Dense Transformer
CONTEXT LENGTH: 256K tokens
CAPABILITIES: chat
RELEASE DATE: 2025-06-01
PROVIDER: Alibaba
FAMILY: qwen
▸ VRAM REQUIREMENTS
QUANT    BPW    VRAM     QUALITY
Q4_K_M   4.89   1.0 GB   94%
Q5_K_S   5.57   1.1 GB   96%
Q5_K_M   5.7    1.1 GB   96%
Q6_K     6.56   1.2 GB   97%
Q8_0     8.5    1.4 GB   100%
FP16     16     2.3 GB   100%
§ 01 BENCHMARK SCORES
MMLU-PRO: 2.4
MATH: 60.0
IFEval: 17.5
BBH: 3.4
MUSR: 90.0
BigCodeBench: 8.8
GPQA Diamond: 11.1
HLE: 1.2
AA Intelligence: 10.5
AA Coding: 0.0
GPQA: 6.7
IFBench: 21.6
τ²-Bench: 65.2
SciCode: 2.9
AA Long Context: 6.7
§ 02 RUN COMMAND

Run Qwen3.5-0.8B locally with Ollama — needs 1.0 GB VRAM at Q4_K_M:

$ ollama run qwen3:900m