Meta Llama-3.1-8B

An 8B-parameter dense chat model from Meta.

Parameters: 8B
Context length: 128K tokens
Architecture: Dense
Layers: 32
KV heads: 8
Head dim: 128
Capabilities: chat
Released: 2024-07-23
Family: llama
Benchmarks: 15
Quantizations: 6
HF downloads: 1.3M
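The layer count, KV-head count, and head dimension above determine the KV-cache size, which is paid on top of the weight VRAM listed in the quantization table. A minimal sketch, assuming an FP16 cache and the GQA layout listed (32 layers, 8 KV heads, head dim 128), and ignoring runtime overhead:

```python
# KV-cache size for Llama-3.1-8B from the specs above:
# 2 tensors (K and V) x layers x kv_heads x head_dim x context x bytes/elem.
def kv_cache_gib(context_len: int, layers: int = 32, kv_heads: int = 8,
                 head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    """FP16 KV-cache size in GiB; ignores any runtime overhead."""
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per_elem / 2**30

print(kv_cache_gib(4096))     # 4K context  -> 0.5
print(kv_cache_gib(131072))   # full 128K context -> 16.0
```

At short contexts the cache is negligible (~0.5 GiB at 4K), but filling the full 128K window costs about as much VRAM as the FP16 weights themselves.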

Quantization Options

Quant    Bits/weight  VRAM     Quality
Q4_K_M   4.89         5.4 GB   good (~94%)
Q5_K_S   5.57         6.1 GB   good (~96%)
Q5_K_M   5.70         6.2 GB   good (~96%)
Q6_K     6.56         7.0 GB   excellent (~97%)
Q8_0     8.50         9.0 GB   lossless (~100%)
FP16     16.00        16.5 GB  lossless (100%)

Select your GPU above to see speed estimates and compatibility for each quantization.
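The VRAM figures in the table follow directly from the bits-per-weight column. A rough sketch, assuming about 0.5 GB of fixed overhead for runtime buffers and a small KV cache (8.03B is the commonly cited exact parameter count for Llama-3.1-8B):

```python
# Rough VRAM estimate from bits-per-weight (bpw).
# Assumptions: 8.03B parameters, ~0.5 GB fixed overhead.
def est_vram_gb(bpw: float, params_b: float = 8.03, overhead_gb: float = 0.5) -> float:
    return params_b * bpw / 8 + overhead_gb

for quant, bpw in [("Q4_K_M", 4.89), ("Q8_0", 8.5), ("FP16", 16.0)]:
    print(quant, round(est_vram_gb(bpw), 1))
```

Under these assumptions the estimates land within about 0.1 GB of every row in the table, so the same formula extrapolates reasonably to quantizations not listed here.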

Benchmarks (15)

Arena Elo: 1191
HumanEval: 62.8
MBPP: 55.6
IFEval: 49.2
BigCodeBench: 32.8
MMLU-PRO: 31.1
BBH: 29.4
GPQA Diamond: 27.0
MATH-500: 21.8
MATH: 15.6
GPQA: 8.7
MUSR: 8.6
LiveCodeBench: 8.5
AA Intelligence: 7.6
HLE: 4.3

Run this model

The easiest way to get started is Ollama:

$ curl -fsSL https://ollama.com/install.sh | sh
$ ollama run llama3.1:8b-instruct-q4_K_M

Downloads and runs automatically. Add --verbose for speed stats.


GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.

GPU                                  VRAM   Bandwidth   Vendor   Price
NVIDIA RTX 3050 6GB                  6 GB   168 GB/s    NVIDIA   $169
Intel Arc A380                       6 GB   186 GB/s    Intel    $129
NVIDIA RTX 2060 6GB                  6 GB   336 GB/s    NVIDIA   $150
NVIDIA GTX 1660 SUPER                6 GB   336 GB/s    NVIDIA   $150
NVIDIA GTX 1660 Ti                   6 GB   288 GB/s    NVIDIA   $140
NVIDIA GTX 1060 6GB                  6 GB   192 GB/s    NVIDIA   $80
NVIDIA Tesla C2070                   6 GB   143 GB/s    NVIDIA   -
NVIDIA Tesla C2075                   6 GB   150 GB/s    NVIDIA   -
NVIDIA Tesla C2090                   6 GB   177 GB/s    NVIDIA   -
NVIDIA Tesla M2070                   6 GB   150 GB/s    NVIDIA   -
NVIDIA Tesla M2070-Q                 6 GB   150 GB/s    NVIDIA   -
NVIDIA Tesla M2075                   6 GB   150 GB/s    NVIDIA   -
NVIDIA Tesla M2090                   6 GB   177 GB/s    NVIDIA   -
NVIDIA Tesla X2070                   6 GB   177 GB/s    NVIDIA   -
NVIDIA Tesla X2090                   6 GB   177 GB/s    NVIDIA   -
NVIDIA Tesla K20X                    6 GB   250 GB/s    NVIDIA   -
NVIDIA Tesla K20Xm                   6 GB   250 GB/s    NVIDIA   -
NVIDIA GeForce GTX 1060 6 GB         6 GB   192 GB/s    NVIDIA   -
NVIDIA GeForce GTX 1060 6 GB 9Gbps   6 GB   217 GB/s    NVIDIA   -
NVIDIA GeForce GTX 1060 6 GB GDDR5X  6 GB   192 GB/s    NVIDIA   -
NVIDIA GeForce GTX 1060 6 GB GP104   6 GB   192 GB/s    NVIDIA   -
NVIDIA GeForce GTX 1060 6 GB Rev. 2  6 GB   192 GB/s    NVIDIA   -
NVIDIA GeForce GTX 1660              6 GB   192 GB/s    NVIDIA   -
NVIDIA GeForce GTX 1660 SUPER        6 GB   336 GB/s    NVIDIA   -
NVIDIA GeForce GTX 1660 Ti           6 GB   288 GB/s    NVIDIA   -
NVIDIA GeForce RTX 2060              6 GB   336 GB/s    NVIDIA   $140
NVIDIA GeForce RTX 2060 TU104        6 GB   336 GB/s    NVIDIA   $140
AMD Radeon RX 5600 OEM               6 GB   288 GB/s    AMD      -
AMD Radeon RX 5600 XT                6 GB   288 GB/s    AMD      $90
AMD Radeon RX 5600M                  6 GB   288 GB/s    AMD      -
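The bandwidth figures above matter because single-stream token generation is typically memory-bandwidth-bound: every generated token requires reading roughly the whole quantized model from VRAM. A rough ceiling on decode speed, assuming the 5.4 GB Q4_K_M weights (real throughput will be lower due to overheads and the KV cache):

```python
# Rough decode-speed ceiling: tokens/sec is capped near
# bandwidth / bytes-read-per-token (about the quantized model size).
def max_tok_per_s(bandwidth_gb_s: float, model_gb: float = 5.4) -> float:
    return bandwidth_gb_s / model_gb

print(round(max_tok_per_s(336), 1))   # RTX 2060 (336 GB/s): -> 62.2
print(round(max_tok_per_s(168), 1))   # RTX 3050 6GB (168 GB/s): -> 31.1
```

This is why the 336 GB/s RTX 2060 is a meaningfully faster choice than the 168 GB/s RTX 3050 6GB even though both fit the model.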
