BAAI/Dense

bge-large-en-v1.5 335M

Capabilities: embedding
Parameters: 0.335B
Context length: 1K
Benchmarks: 1
Quantizations: 6
Architecture: Dense
Released: 2024-02-10
Layers: 24
KV Heads: 16
Head Dim: 64
Family: embedding

Quantization Options

Quant     Bits (BPW)  VRAM    Quality
Q4_K_M    4.89        0.7 GB  good
Q5_K_S    5.57        0.7 GB  good
Q5_K_M    5.7         0.7 GB  good
Q6_K      6.56        0.8 GB  excellent
Q8_0      8.5         0.8 GB  lossless
FP16      16          1.2 GB  lossless
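
As a rough sanity check on the VRAM column: weight memory is approximately parameters × bits-per-weight ÷ 8, and the figures above add runtime overhead on top of that. A back-of-the-envelope sketch using the Q4_K_M row:

# Weights only: 0.335B parameters at 4.89 bits per weight ≈ 0.20 GB.
# The 0.7 GB listed above also covers runtime buffers and framework overhead.
awk 'BEGIN { printf "%.2f GB\n", 0.335e9 * 4.89 / 8 / 1e9 }'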


Benchmarks (1)

MMLU-PRO: 62.3

Run this model

Easiest way to get started (see the setup docs):
curl -fsSL https://ollama.com/install.sh | sh
$ ollama run embedding:0b-q4_k_m

Downloads and runs automatically. Add --verbose for speed stats.
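
Once installed, the model can also be queried for vectors over Ollama's local REST API. A minimal sketch, assuming the server is running on its default port (11434) and that the model tag from the run command above matches your install:

# Request an embedding vector from the locally running Ollama server.
# The model tag mirrors the run command above; substitute yours if it differs.
curl http://localhost:11434/api/embeddings -d '{
  "model": "embedding:0b-q4_k_m",
  "prompt": "The quick brown fox jumps over the lazy dog"
}'

The response is a JSON object whose "embedding" field holds the vector.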

Setup guide

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.


Build Hardware for bge-large-en-v1.5 335M

bge-large-en-v1.5 335M (0.335B Parameter Dense LLM)

Model Specifications

Parameters: 0.335B
Architecture: Dense Transformer
Context Length: 1K tokens
Capabilities: embedding
Release Date: 2024-02-10
Provider: BAAI
Family: embedding

VRAM Requirements

Quantization  BPW   VRAM    Quality
Q4_K_M        4.89  0.7 GB  94%
Q5_K_S        5.57  0.7 GB  96%
Q5_K_M        5.7   0.7 GB  96%
Q6_K          6.56  0.8 GB  97%
Q8_0          8.5   0.8 GB  100%
FP16          16    1.2 GB  100%

Benchmark Scores

MMLU-PRO: 62.3

How to Run bge-large-en-v1.5 335M

Run bge-large-en-v1.5 335M locally with Ollama (needs 0.7 GB VRAM at Q4_K_M):

ollama run embedding:0b
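
To fetch a specific quantization ahead of time and confirm its memory footprint once loaded, the standard Ollama CLI commands below work; the exact tag is assumed to follow the same pattern as the run command above and may differ on your install:

# Pull a specific quantized build (tag assumed; adjust to what your registry lists).
ollama pull embedding:0b-q4_k_m

# After a run, list loaded models and the memory each one is using.
ollama ps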

Compatible GPUs (30)

GPUs that can run bge-large-en-v1.5 335M at Q4_K_M quantization: