InternLM2 20B (Shanghai AI Lab)

InternLM2 20B — powerful model for coding and complex tasks.

Tags: chat, coding, tool use
Parameters: 19.8B
Context length: 32K
Benchmarks: 7
Quantizations: 10
HF downloads: 60K
Architecture: Dense
Released: 2024-01-17
Layers: 48
KV Heads: 8
Head Dim: 128
Family: internlm
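The attention specs above (48 layers, 8 KV heads, head dim 128) are enough to estimate the KV-cache footprint at full context. A minimal sketch, assuming FP16 (2-byte) cache entries and the standard grouped-query-attention layout:

```python
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   ctx_len: int, bytes_per_elem: int = 2) -> int:
    """KV-cache size: K and V tensors (factor 2) across all layers."""
    return 2 * layers * kv_heads * head_dim * ctx_len * bytes_per_elem

# InternLM2 20B at its full 32K context:
size = kv_cache_bytes(layers=48, kv_heads=8, head_dim=128, ctx_len=32_768)
print(f"KV cache: {size / 2**30:.1f} GiB")  # 6.0 GiB on top of the weights
```

So a Q4_K_M load (~12.6 GB) plus the full 32K window leaves little headroom on a 16 GB card; shorter contexts shrink the cache linearly.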

Quantization Options

Quant    BPW    VRAM      Quality
Q3_K_M   4      10.4 GB   low
Q3_K_L   4.3    11.1 GB   moderate
IQ4_XS   4.46   11.5 GB   moderate
Q4_K_S   4.67   12.0 GB   moderate
Q4_K_M   4.89   12.6 GB   good
Q5_K_S   5.57   14.3 GB   good
Q5_K_M   5.7    14.6 GB   good
Q6_K     6.56   16.7 GB   excellent
Q8_0     8.5    21.5 GB   lossless
FP16     16     40.1 GB   lossless
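The VRAM column tracks simple weights-only arithmetic: parameters × bits-per-weight / 8, plus a small fixed margin. A sketch of that calculation (the 0.5 GB overhead constant is inferred from the table above, not a measured value):

```python
def estimate_vram_gb(params_b: float, bpw: float, overhead_gb: float = 0.5) -> float:
    """Rough VRAM estimate: weights (billions of params * bpw / 8 = GB) plus a fixed margin."""
    return params_b * bpw / 8 + overhead_gb

# Q4_K_M for 19.8B parameters at 4.89 bits/weight:
print(f"{estimate_vram_gb(19.8, 4.89):.1f} GB")  # 12.6 GB, matching the table
```

Note this excludes the KV cache, which grows with context length and must fit in the same VRAM budget.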




Benchmarks (7)

IFEval: 72.0
BBH: 62.8
HumanEval: 57.6
MATH: 52.0
MMLU-PRO: 45.0
MUSR: 16.7
GPQA: 9.5

Run this model

Easiest way to get started (beginners): install Ollama, then pull and run the model.

    curl -fsSL https://ollama.com/install.sh | sh
    ollama run internlm:20b-q4_K_M

Tag may need adjustment — check ollama.com/library/internlm for available tags.
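Beyond the interactive prompt, Ollama also serves an HTTP API on localhost:11434. A minimal stdlib-only sketch for calling it; the model tag carries the same caveat as above:

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("internlm:20b-q4_K_M", "Explain KV caching in one sentence.")
# With Ollama running, send the request and read the completion:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```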

▸ SETUP GUIDE

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

Install with pip install fitmyllm, then run fitmyllm.
Features: auto-detect GPU, live tok/s in chat, speed benchmarks, 9 inference engines.

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.

GPU                            VRAM     Bandwidth    Price
NVIDIA RTX 5080                16 GB    960 GB/s     $999
NVIDIA RTX 5070 Ti             16 GB    896 GB/s     $749
NVIDIA RTX 4080 SUPER          16 GB    736 GB/s     $999
NVIDIA RTX 4080                16 GB    717 GB/s     $1199
AMD RX 7900 GRE                16 GB    576 GB/s     $549
AMD RX 7800 XT                 16 GB    624 GB/s     $499
AMD RX 7600 XT                 16 GB    288 GB/s     $329
AMD RX 6950 XT                 16 GB    576 GB/s     $449
AMD RX 6900 XT                 16 GB    512 GB/s     $469
AMD RX 6800 XT                 16 GB    512 GB/s     $599
AMD RX 6800                    16 GB    512 GB/s     $599
Intel Arc A770 16GB            16 GB    560 GB/s     $349
Apple M1 Pro (16GB)            16 GB    200 GB/s     $999
Apple M2 Pro (16GB)            16 GB    200 GB/s     $1299
Apple M4 (16GB)                16 GB    120 GB/s     $499
NVIDIA Tesla T4 16GB           16 GB    320 GB/s     $800
NVIDIA V100 PCIe 16GB          16 GB    900 GB/s     $2000
AMD RX 9070 XT                 16 GB    640 GB/s     $599
AMD RX 9070                    16 GB    672 GB/s     $549
Apple M1 (16GB)                16 GB    68.25 GB/s   $699
Apple M2 (16GB)                16 GB    100 GB/s     $799
Apple M3 (16GB)                16 GB    100 GB/s     $799
NVIDIA Tesla P100 DGXS         16 GB    732 GB/s     -
NVIDIA Tesla P100 PCIe 16 GB   16 GB    732 GB/s     -
NVIDIA Tesla P100 SXM2         16 GB    732 GB/s     -
NVIDIA Tesla V100 PCIe 16 GB   16 GB    897 GB/s     -
NVIDIA Tesla V100 SXM2 16 GB   16 GB    1130 GB/s    -
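The bandwidth figures matter because single-stream decoding is memory-bound: each generated token re-reads the full quantized weights, so bandwidth divided by model size gives a rough upper bound on tokens per second. A sketch under that assumption (real throughput lands below this ceiling once KV-cache traffic and compute overhead are counted):

```python
def decode_tps_ceiling(bandwidth_gbps: float, model_gb: float) -> float:
    """Upper-bound decode speed: every token streams the weights once."""
    return bandwidth_gbps / model_gb

# Q4_K_M weights are ~12.1 GB (19.8B params at 4.89 bits/weight):
for name, bw in [("RTX 5080", 960.0), ("RX 7600 XT", 288.0), ("Apple M4", 120.0)]:
    print(f"{name}: ~{decode_tps_ceiling(bw, 12.1):.0f} tok/s ceiling")
```

This is why the RTX 5080 and RX 7600 XT, with identical 16 GB capacities, should differ by roughly 3x in generation speed.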


▸ SPEC SHEET

InternLM2 20B: 19.8B parameters, dense architecture.

▸ SPECIFICATIONS
Parameters: 19.8B
Architecture: Dense Transformer
Context length: 32K tokens
Capabilities: chat, coding
Release date: 2024-01-17
Provider: Shanghai AI Lab
Family: internlm

▸ VRAM REQUIREMENTS
Quant    BPW    VRAM      Quality
Q3_K_M   4      10.4 GB   88%
Q3_K_L   4.3    11.1 GB   90%
IQ4_XS   4.46   11.5 GB   92%
Q4_K_S   4.67   12.0 GB   93%
Q4_K_M   4.89   12.6 GB   94%
Q5_K_S   5.57   14.3 GB   96%
Q5_K_M   5.7    14.6 GB   96%
Q6_K     6.56   16.7 GB   97%
Q8_0     8.5    21.5 GB   100%
FP16     16     40.1 GB   100%
§ 01 BENCHMARK SCORES
HumanEval: 57.6
MMLU-PRO: 45.0
MATH: 52.0
IFEval: 72.0
BBH: 62.8
GPQA: 9.5
MUSR: 16.7