
DeepSeek R1 Distill Qwen 14B

DeepSeek R1 at 14B — excellent reasoning that fits in a single GPU.

Tags: chat · reasoning · thinking · distilled
Parameters: 14.8B
Context length: 128K
Benchmarks: 18
Quantizations: 10
HF downloads: 2.0M
Architecture: Dense
Released: 2025-01-20
Layers: 48
KV heads: 8
Head dim: 128
Family: deepseek
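The layer and attention-head figures above are enough to estimate how much VRAM the KV cache consumes at a given context length (2 bytes per cached value in FP16). The helper below is an illustrative sketch, not part of any tool mentioned on this page:

```python
# Estimate KV-cache size for DeepSeek R1 Distill Qwen 14B (GQA attention).
# Per token, per layer: 2 tensors (K and V) x kv_heads x head_dim values.

LAYERS = 48
KV_HEADS = 8
HEAD_DIM = 128
BYTES_PER_VALUE = 2  # FP16 cache

def kv_cache_bytes(context_tokens: int) -> int:
    """Bytes of KV cache needed to hold `context_tokens` tokens."""
    per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES_PER_VALUE
    return per_token * context_tokens

for ctx in (4_096, 32_768, 131_072):
    print(f"{ctx:>7} tokens -> {kv_cache_bytes(ctx) / 1e9:.2f} GB")
```

At the full 128K window the FP16 cache alone comes to roughly 25.8 GB, far more than the quantized weights, which is why long-context runs need much more VRAM than a weights-only table suggests.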

Quantization Options

Quant    Bits   VRAM     Quality
Q3_K_M   4.0    7.9 GB   low
Q3_K_L   4.3    8.4 GB   moderate
IQ4_XS   4.46   8.7 GB   moderate
Q4_K_S   4.67   9.1 GB   moderate
Q4_K_M   4.89   9.5 GB   good
Q5_K_S   5.57   10.8 GB  good
Q5_K_M   5.7    11.0 GB  good
Q6_K     6.56   12.6 GB  excellent
Q8_0     8.5    16.2 GB  near-lossless
FP16     16     30.1 GB  lossless
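The VRAM column tracks a simple rule of thumb: weight memory ≈ parameters × bits-per-weight / 8, plus a small runtime overhead. A hypothetical estimator (the function name and structure are illustrative, not from any tool on this page):

```python
# Rough weight-memory estimate from parameter count and bits per weight.
PARAMS = 14.8e9  # DeepSeek R1 Distill Qwen 14B

def weights_gb(bits_per_weight: float) -> float:
    """GB occupied by the quantized weights alone (no KV cache, no overhead)."""
    return PARAMS * bits_per_weight / 8 / 1e9

for quant, bpw in [("Q4_K_M", 4.89), ("Q8_0", 8.5), ("FP16", 16.0)]:
    print(f"{quant}: ~{weights_gb(bpw):.1f} GB of weights")
```

The table's figures run about 0.5 GB above the bare weights, consistent with a small fixed allowance for runtime overhead.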




Benchmarks (18)

MATH-500: 94.9
MATH: 89.9
HumanEval: 80.5
IFEval: 66.3
AIME: 55.7
AA Math: 55.7
GPQA Diamond: 48.4
MMLU-PRO: 47.2
BBH: 46.5
GPQA: 42.0
LiveCodeBench: 37.6
BigCodeBench: 36.8
SciCode: 23.9
IFBench: 22.1
MUSR: 16.2
AA Intelligence: 15.8
AA Long Context: 7.0
HLE: 4.4

Run this model

Easiest way to get started (recommended for beginners):

$ curl -fsSL https://ollama.com/install.sh | sh
$ ollama run deepseek-r1:14b-q4_K_M

Ollama downloads the model and runs it automatically. Add --verbose for speed stats.
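R1-distill models emit their chain of thought inside `<think>...</think>` tags before the final answer. If you consume the output programmatically, a small helper (hypothetical, not part of Ollama) can split the two:

```python
# Separate DeepSeek R1's <think> reasoning block from its final answer.
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Return (reasoning, answer); reasoning is '' when no <think> block exists."""
    m = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not m:
        return "", text.strip()
    reasoning = m.group(1).strip()
    answer = text[m.end():].strip()
    return reasoning, answer

raw = "<think>2 + 2 is basic arithmetic.</think>\nThe answer is 4."
reasoning, answer = split_reasoning(raw)
print(answer)  # -> The answer is 4.
```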

Setup guide: auto-setup with the fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts a chat session with zero configuration. It also benchmarks your speed and contributes anonymous data to improve predictions.

$ pip install fitmyllm
$ fitmyllm

Features: auto-detects your GPU · live tok/s in chat · speed benchmarks · supports 9 inference engines.

GPUs that can run this model

At Q4_K_M quantization (9.5 GB). Sorted by minimum VRAM.

GPU                      VRAM   Bandwidth   Price
NVIDIA RTX 3080 10GB     10 GB  760 GB/s    $429
Intel Arc B570           10 GB  456 GB/s    $219
NVIDIA P102-101          10 GB  320 GB/s    n/a
NVIDIA CMP 170HX 10 GB   10 GB  1560 GB/s   n/a
NVIDIA CMP 50HX          10 GB  560 GB/s    n/a
NVIDIA CMP 90HX          10 GB  760 GB/s    n/a
NVIDIA RTX 2080 Ti       11 GB  616 GB/s    $350
NVIDIA GTX 1080 Ti       11 GB  484 GB/s    $200
NVIDIA RTX 5070          12 GB  672 GB/s    $549
NVIDIA RTX 4070 Ti       12 GB  504 GB/s    $799
NVIDIA RTX 4070 SUPER    12 GB  504 GB/s    $599
NVIDIA RTX 4070          12 GB  504 GB/s    $549
NVIDIA RTX 3080 Ti       12 GB  912 GB/s    $550
NVIDIA RTX 3080 12GB     12 GB  912 GB/s    $599
NVIDIA RTX 3060 12GB     12 GB  360 GB/s    $329
AMD RX 7700 XT           12 GB  432 GB/s    $449
AMD RX 6700 XT           12 GB  384 GB/s    $344
AMD RX 6750 XT           12 GB  432 GB/s    $299
Intel Arc B580           12 GB  456 GB/s    $249
NVIDIA Tesla K40c        12 GB  288 GB/s    n/a
NVIDIA Tesla K40d        12 GB  288 GB/s    n/a
NVIDIA Tesla K40m        12 GB  288 GB/s    n/a


Spec Sheet

DeepSeek R1 Distill Qwen 14B: 14.8B parameters, dense architecture.

Specifications

Parameters: 14.8B
Architecture: Dense Transformer
Context length: 128K tokens
Capabilities: chat, reasoning
Release date: 2025-01-20
Provider: DeepSeek
Family: deepseek
VRAM Requirements

Quant    BPW    VRAM     Quality retained
Q3_K_M   4.0    7.9 GB   88%
Q3_K_L   4.3    8.4 GB   90%
IQ4_XS   4.46   8.7 GB   92%
Q4_K_S   4.67   9.1 GB   93%
Q4_K_M   4.89   9.5 GB   94%
Q5_K_S   5.57   10.8 GB  96%
Q5_K_M   5.7    11.0 GB  96%
Q6_K     6.56   12.6 GB  97%
Q8_0     8.5    16.2 GB  100%
FP16     16     30.1 GB  100%
Run Command

Run DeepSeek R1 Distill Qwen 14B locally with Ollama (needs 9.5 GB of VRAM at Q4_K_M):

$ ollama run deepseek-r1:14b