mixedbread-ai · Dense

mxbai-embed-large-v1

Mixedbread AI's embedding model, strong on retrieval benchmarks.

Type: embedding
Parameters: 0.335B
Context length: 1K
Benchmarks: 1
Quantizations: 6
HF downloads: 1.8M
Architecture: Dense
Released: 2024-03-07
Layers: 24
KV Heads: 16
Head Dim: 64
Family: embedding

Quantization Options

Quant     Bits   VRAM     Quality
Q4_K_M    4.89   0.7 GB   good
Q5_K_S    5.57   0.7 GB   good
Q5_K_M    5.70   0.7 GB   good
Q6_K      6.56   0.8 GB   excellent
Q8_0      8.50   0.8 GB   lossless
FP16      16     1.2 GB   lossless
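The VRAM column tracks bits per weight: weight memory is roughly parameters × bits / 8 bytes, plus a runtime buffer for activations and framework overhead. A rough sketch of that arithmetic (the flat 0.5 GB overhead is an assumption fitted to this table, not a published figure):

```python
def est_vram_gb(params_billions, bits_per_weight, overhead_gb=0.5):
    """Weight bytes (params * bpw / 8) plus an assumed flat runtime overhead."""
    return params_billions * bits_per_weight / 8 + overhead_gb

# Reproduce the order of magnitude of the table rows for a 0.335B model.
for quant, bpw in [("Q4_K_M", 4.89), ("Q8_0", 8.5), ("FP16", 16)]:
    print(f"{quant}: ~{est_vram_gb(0.335, bpw):.1f} GB")
```

For a model this small, the fixed overhead dominates, which is why Q4 through Q6 all round to the same 0.7–0.8 GB.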




Benchmarks (1)

MMLU-PRO: 64.7

Run this model

Easiest way to get started · Beginners

$ curl -fsSL https://ollama.com/install.sh | sh
$ ollama pull mxbai-embed-large:q4_K_M

Downloads the model automatically. Because this is an embedding-only model, it is queried through Ollama's embeddings API rather than run as an interactive chat session.
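Once Ollama is serving the model, you can request vectors over its local HTTP API. A minimal sketch in Python, assuming a default Ollama install listening on localhost:11434 and using Ollama's `/api/embeddings` endpoint:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # default Ollama endpoint

def embed(text, model="mxbai-embed-large", url=OLLAMA_URL):
    """Return the embedding vector for `text` from a local Ollama server."""
    payload = json.dumps({"model": model, "prompt": text}).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

# Usage (requires a running server):
#   vec = embed("Represent this sentence for searching relevant passages: hello")
#   len(vec)  # mxbai-embed-large-v1 produces 1024-dimensional vectors
```

The `Represent this sentence for searching relevant passages:` prefix is the model's documented query prompt for retrieval; documents are embedded without it.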

▸ SETUP GUIDE

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

pip install fitmyllm, then run fitmyllm.
Auto-detect GPU · Live tok/s in chat · Speed benchmarks · 9 inference engines

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.


▸ SPEC SHEET

mxbai-embed-large-v1 · 0.335B Dense.

▸ SPECIFICATIONS
PARAMETERS: 0.335B
ARCHITECTURE: Dense Transformer
CONTEXT LENGTH: 1K tokens
CAPABILITIES: embedding
RELEASE DATE: 2024-03-07
PROVIDER: mixedbread-ai
FAMILY: embedding
▸ VRAM REQUIREMENTS
QUANT     BPW    VRAM     QUALITY
Q4_K_M    4.89   0.7 GB   94%
Q5_K_S    5.57   0.7 GB   96%
Q5_K_M    5.70   0.7 GB   96%
Q6_K      6.56   0.8 GB   97%
Q8_0      8.50   0.8 GB   100%
FP16      16     1.2 GB   100%
§ 01 BENCHMARK SCORES
MMLU-PRO: 64.7
§ 02 RUN COMMAND

Pull mxbai-embed-large-v1 locally with Ollama; it needs 0.7 GB VRAM at Q4_K_M:

$ ollama pull mxbai-embed-large:latest
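Whichever quantization you pick, the model turns each input text into one vector, and retrieval ranks documents by cosine similarity to the query vector. A toy sketch of that ranking step, using made-up 4-dimensional vectors in place of the model's real 1024-dimensional embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy vectors standing in for real mxbai-embed-large-v1 embeddings.
query = [0.1, 0.3, 0.5, 0.1]
docs = {
    "doc_a": [0.1, 0.29, 0.52, 0.1],  # nearly parallel to the query
    "doc_b": [0.9, 0.0, 0.1, 0.0],    # points a different direction
}
best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)  # doc_a
```

In practice you would embed your corpus once, store the vectors, and embed only the query at search time.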