
nomic-embed-text-v1.5 100M

Capabilities: chat
Parameters: 0.1B
Context length: 8K tokens
Benchmarks: 1
Quantizations: 6
HF downloads: 9.9M
Architecture: Dense
Released: 2024-02-10
Layers: 12
KV Heads: 12
Head Dim: 64
Family: embedding
Provider: Nomic

Quantization Options

Quant    BPW    VRAM    Quality
Q4_K_M   4.89   0.5 GB  good (~94%)
Q5_K_S   5.57   0.6 GB  good (~96%)
Q5_K_M   5.70   0.6 GB  good (~96%)
Q6_K     6.56   0.6 GB  excellent (~97%)
Q8_0     8.50   0.6 GB  lossless (~100%)
FP16     16.00  0.7 GB  lossless (100%)
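The VRAM column can be sanity-checked from bits per weight (BPW): raw weight size is parameters × BPW / 8 bytes. A minimal sketch of that arithmetic (the gap between the raw weight size and the table's figures presumably covers runtime overhead such as activations and context buffers; that split is an assumption, not something this page states):

```python
def weight_size_gb(params_millions: float, bpw: float) -> float:
    """Raw weight footprint in GB: params * bits-per-weight / 8 bytes."""
    return params_millions * 1e6 * bpw / 8 / 1e9

# For the 100M-parameter model at a few quantization levels:
for quant, bpw in [("Q4_K_M", 4.89), ("Q8_0", 8.5), ("FP16", 16.0)]:
    print(f"{quant}: {weight_size_gb(100, bpw):.3f} GB of weights")
```

At Q4_K_M the weights alone are only about 0.06 GB, so most of the quoted 0.5 GB is per-session overhead rather than model weights.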


Benchmarks (1)

MMLU-PRO: 62.3

Run this model

Easiest way to get started (see the Ollama docs):
curl -fsSL https://ollama.com/install.sh | sh
$ ollama pull nomic-embed-text

This downloads the model automatically. Because it is an embedding model, it is queried through the embeddings API rather than run as an interactive chat.
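Once pulled, the model is called through Ollama's embeddings endpoint. A minimal stdlib-only sketch, assuming a local Ollama server on the default port and the nomic-embed-text tag:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local port

def build_payload(model: str, prompt: str) -> bytes:
    # Ollama's /api/embeddings endpoint takes {"model": ..., "prompt": ...}
    return json.dumps({"model": model, "prompt": prompt}).encode()

def embed(prompt: str, model: str = "nomic-embed-text") -> list:
    """POST the prompt to a local Ollama server and return the embedding vector."""
    req = request.Request(
        OLLAMA_URL + "/api/embeddings",
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]
```

Note that Nomic's model card recommends task prefixes on inputs (e.g. "search_query: ..." vs "search_document: ...") for best retrieval quality; check the upstream documentation before relying on similarity scores.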


