
InternLM2 1B

Shanghai AI Lab's tiny model. Good for experimentation.

Capabilities: Chat, Tool Use
Parameters: 1B
Context length: 32K
Benchmarks: 0
Quantizations: 6
HF downloads: 50K
Architecture: Dense
Released: 2024-01-17
Layers: 24
KV Heads: 8
Head Dim: 128
Family: internlm
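The figures above (24 layers, 8 KV heads, head dim 128) are enough to estimate the KV-cache footprint on top of the model weights. A minimal sketch, assuming a standard FP16 K/V cache layout (two tensors per layer); the arithmetic, not any particular runtime, is the point:

```python
# Sketch: FP16 KV-cache size from the stats above (assumed standard GQA layout).
layers, kv_heads, head_dim = 24, 8, 128
bytes_per_elem = 2  # FP16

def kv_cache_bytes(seq_len: int) -> int:
    # 2 tensors (K and V) per layer, each of shape seq_len x kv_heads x head_dim
    return 2 * layers * kv_heads * head_dim * bytes_per_elem * seq_len

print(f"~{kv_cache_bytes(32_768) / 2**30:.1f} GiB at the full 32K context")  # → ~3.0 GiB
```

This is why a "1.1 GB" quantization can still need well over 2 GB of VRAM at long context.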

Quantization Options

Quant    Bits   VRAM     Quality
Q4_K_M   4.89   1.1 GB   good
Q5_K_S   5.57   1.2 GB   good
Q5_K_M   5.7    1.2 GB   good
Q6_K     6.56   1.3 GB   excellent
Q8_0     8.5    1.6 GB   lossless
FP16     16     2.5 GB   lossless
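The VRAM column tracks bits-per-weight: weights-only size is roughly parameters × bits ÷ 8, and the table's figures include some runtime overhead on top of that. A minimal sketch of the weights-only estimate (the 1e9 parameter count is nominal; real usage adds KV cache and activations):

```python
# Sketch: weights-only VRAM estimate from bits-per-weight (BPW).
# Actual usage is higher: KV cache, activations, and runtime overhead add to this.
def weights_gb(n_params: float, bpw: float) -> float:
    return n_params * bpw / 8 / 1e9  # bits -> bytes -> GB

for quant, bpw in [("Q4_K_M", 4.89), ("Q6_K", 6.56), ("FP16", 16.0)]:
    print(f"{quant}: ~{weights_gb(1e9, bpw):.2f} GB for a nominal 1B parameters")
```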




Run this model

Easiest way to get started (recommended for beginners):

curl -fsSL https://ollama.com/install.sh | sh
ollama run internlm:1b-q4_K_M

Tag may need adjustment — check ollama.com/library/internlm for available tags.
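Once pulled, Ollama also serves the model over its local REST API on port 11434, which is handy for scripting. A minimal sketch; the model tag is the same assumption as above, so substitute whatever `ollama list` reports:

```shell
# Query the locally running model via Ollama's REST API.
# The tag below is an assumption -- use the tag shown by `ollama list`.
curl -s http://localhost:11434/api/generate -d '{
  "model": "internlm:1b-q4_K_M",
  "prompt": "Summarize what a dense transformer is in one sentence.",
  "stream": false
}'
```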

▸ SETUP GUIDE

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

pip install fitmyllm, then run fitmyllm.
Auto-detect GPU · Live tok/s in chat · Speed benchmarks · 9 inference engines

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.

NVIDIA Tesla C870: 2 GB VRAM • 76.8 GB/s
NVIDIA Tesla D870: 2 GB VRAM • 76.8 GB/s
NVIDIA Tesla S870: 2 GB VRAM • 76.8 GB/s
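The compatibility rule behind this list is simple: a GPU qualifies when its VRAM covers the quantization's estimated footprint. A minimal sketch using the VRAM figures from the table above (real headroom also depends on context length, per the KV-cache note earlier):

```python
# Sketch: naive fit check -- does a quant's estimated VRAM fit on a given GPU?
# VRAM figures are taken from the quantization table on this page.
QUANT_VRAM_GB = {"Q4_K_M": 1.1, "Q5_K_S": 1.2, "Q5_K_M": 1.2,
                 "Q6_K": 1.3, "Q8_0": 1.6, "FP16": 2.5}

def fits(gpu_vram_gb: float, quant: str) -> bool:
    return QUANT_VRAM_GB[quant] <= gpu_vram_gb

print(fits(2.0, "Q4_K_M"))  # → True  (a 2 GB card like the Tesla C870 above)
print(fits(2.0, "FP16"))    # → False
```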


▸ SPEC SHEET

InternLM2 1B: a 1B-parameter dense transformer.

▸ SPECIFICATIONS
Parameters: 1B
Architecture: Dense Transformer
Context length: 32K tokens
Capabilities: chat
Release date: 2024-01-17
Provider: Shanghai AI Lab
Family: internlm
▸ VRAM REQUIREMENTS
Quant    BPW    VRAM     Quality
Q4_K_M   4.89   1.1 GB   94%
Q5_K_S   5.57   1.2 GB   96%
Q5_K_M   5.7    1.2 GB   96%
Q6_K     6.56   1.3 GB   97%
Q8_0     8.5    1.6 GB   100%
FP16     16     2.5 GB   100%