
BGE-M3

BAAI's multilingual embedding model supporting 100+ languages.

embedding · Distilled
0.568B
Parameters
8K
Context length
1
Benchmarks
6
Quantizations
3.2M
HF downloads
Architecture
Dense
Released
2024-02-05
Layers
24
KV Heads
16
Head Dim
64
Family
embedding

Quantization Options

Quant    Bits   VRAM     Quality
Q4_K_M   4.89   0.8 GB   good
Q5_K_S   5.57   0.9 GB   good
Q5_K_M   5.7    0.9 GB   good
Q6_K     6.56   1.0 GB   excellent
Q8_0     8.5    1.1 GB   lossless
FP16     16     1.6 GB   lossless

Select your GPU above to see speed estimates and compatibility for each quantization.
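The VRAM figures above follow a simple rule of thumb: weight memory is parameters × bits-per-weight / 8, plus a roughly fixed overhead for activations and context buffers. A minimal sketch of that arithmetic (the ~0.5 GB overhead constant is an assumption fitted to this table, not a published number; real usage varies with batch size and context length):

```python
# Rough VRAM estimate: weight memory (params * bpw / 8 bits-per-byte)
# plus a fixed overhead term. The 0.5 GB overhead is an assumption that
# happens to reproduce the table above, not an official figure.
PARAMS_B = 0.568          # model size in billions of parameters
OVERHEAD_GB = 0.5         # assumed activations/context overhead (GB)

def vram_gb(bpw: float) -> float:
    weights_gb = PARAMS_B * bpw / 8  # billions of bits -> gigabytes
    return round(weights_gb + OVERHEAD_GB, 1)

for quant, bpw in [("Q4_K_M", 4.89), ("Q6_K", 6.56), ("Q8_0", 8.5), ("FP16", 16)]:
    print(quant, vram_gb(bpw))  # matches the table: 0.8, 1.0, 1.1, 1.6 GB
```
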



Benchmarks (1)

MMLU-PRO: 63.0

Run this model

Easiest way to get started · Beginners

curl -fsSL https://ollama.com/install.sh | sh
$ ollama run bge-m3:q4_K_M

Downloads and runs automatically. Add --verbose for speed stats.
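Since this is an embedding model, `ollama run` is less useful than the server's embeddings endpoint. A minimal sketch of the request shape, assuming Ollama's default localhost:11434 address and the /api/embed route (check your Ollama version's API docs before relying on the exact payload fields):

```python
import json

# Build an embeddings request for a local Ollama server. The /api/embed
# route and the {"model", "input"} payload shape are assumptions based on
# recent Ollama versions; older releases used /api/embeddings instead.
def embed_request(texts):
    url = "http://localhost:11434/api/embed"
    payload = {"model": "bge-m3", "input": texts}
    return url, json.dumps(payload)

url, body = embed_request(["hello world", "bonjour le monde"])
print(url)
print(body)
# To actually send it (requires a running Ollama server):
#   import urllib.request
#   req = urllib.request.Request(url, body.encode(),
#                                {"Content-Type": "application/json"})
#   urllib.request.urlopen(req)
```
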

▸ SETUP GUIDE

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

pip install fitmyllm, then run fitmyllm.
Auto-detect GPU · Live tok/s in chat · Speed benchmarks · 9 inference engines

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.


▸ SPEC SHEET

BGE-M3 · 0.568B · Dense.

▸ SPECIFICATIONS
PARAMETERS
0.568B
ARCHITECTURE
Dense Transformer
CONTEXT LENGTH
8K tokens
CAPABILITIES
embedding
RELEASE DATE
2024-02-05
PROVIDER
BAAI
FAMILY
embedding
▸ VRAM REQUIREMENTS
QUANT    BPW    VRAM     QUALITY
Q4_K_M   4.89   0.8 GB   94%
Q5_K_S   5.57   0.9 GB   96%
Q5_K_M   5.7    0.9 GB   96%
Q6_K     6.56   1.0 GB   97%
Q8_0     8.5    1.1 GB   100%
FP16     16     1.6 GB   100%
§ 01 BENCHMARK SCORES
MMLU-PRO: 63.0
§ 02 RUN COMMAND

Run BGE-M3 locally with Ollama — needs 0.8 GB VRAM at Q4_K_M:

$ ollama run bge-m3:latest
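Once pulled, the model returns one dense vector per input, and retrieval typically ranks documents by cosine similarity between the query vector and each document vector. A minimal, dependency-free sketch (the short vectors here are illustrative only, not real BGE-M3 output, whose dense embeddings are 1024-dimensional):

```python
import math

# Cosine similarity between two dense embedding vectors:
# dot(a, b) / (|a| * |b|), ranging from -1 to 1.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Illustrative toy vectors; real BGE-M3 dense embeddings have 1024 dims.
query     = [0.2, 0.8, 0.1]
doc_close = [0.25, 0.75, 0.05]
doc_far   = [0.9, -0.1, 0.4]
print(cosine(query, doc_close) > cosine(query, doc_far))  # closer doc ranks higher
```
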