
Cohere c4ai-command-r-v01 35B

Chat · Tool Use
Parameters: 35B
Context length: 128K
Benchmarks: 7
Quantizations: 14
Architecture: Dense
Released: 2024-03-11
Layers: 40
KV Heads: 8
Head Dim: 128
Family: command
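
These figures give a quick way to size the KV cache: roughly 2 (K and V) × layers × KV heads × head dim × context length × bytes per element. A minimal Python sketch using the values above, assuming an FP16 cache:

# KV-cache estimate from the spec values above (FP16 cache assumed).
layers, kv_heads, head_dim = 40, 8, 128
context = 128 * 1024                 # 128K tokens
bytes_per_elem = 2                   # FP16
kv_bytes = 2 * layers * kv_heads * head_dim * context * bytes_per_elem
print(f"KV cache at full 128K context: {kv_bytes / 1024**3:.1f} GiB")   # ~20 GiB

Shorter contexts scale linearly, so an 8K window needs only about 1.25 GiB on top of the weights.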

Quantization Options

Quant      Bits   VRAM      Quality
IQ3_XXS    3.25   14.7 GB   low
IQ3_XS     3.5    15.8 GB   low
Q3_K_S     3.64   16.4 GB   low
IQ3_M      3.76   16.9 GB   low
Q3_K_M     4      18.0 GB   low
Q3_K_L     4.3    19.3 GB   moderate
IQ4_XS     4.46   20.0 GB   moderate
Q4_K_S     4.67   20.9 GB   moderate
Q4_K_M     4.89   21.9 GB   good
Q5_K_S     5.57   24.9 GB   good
Q5_K_M     5.7    25.4 GB   good
Q6_K       6.56   29.2 GB   excellent
Q8_0       8.5    37.7 GB   lossless
FP16       16     70.5 GB   lossless
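
The VRAM column tracks parameters × bits per weight almost exactly, plus a small allowance for runtime buffers. A minimal sketch of that arithmetic (the 35B parameter count comes from the spec above; the exact overhead the site adds on top is an assumption and is not reproduced here):

# Weight memory: params * bits-per-weight / 8, before KV cache and buffers.
params = 35e9
for name, bpw in [("Q4_K_M", 4.89), ("Q6_K", 6.56), ("Q8_0", 8.5), ("FP16", 16.0)]:
    weights_gb = params * bpw / 8 / 1e9
    print(f"{name}: ~{weights_gb:.1f} GB for weights")
# Q4_K_M ~21.4 GB, Q8_0 ~37.2 GB, FP16 ~70.0 GB, close to the table values.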


Benchmarks (7)

IFEval: 67.5
BigCodeBench: 37.1
BBH: 34.6
MMLU-PRO: 26.3
MUSR: 16.1
GPQA: 7.6
MATH: 3.5

Run this model

Easiest way to get started · Beginners
curl -fsSL https://ollama.com/install.sh | sh
$ ollama run command-r:35b

Downloads and runs automatically. Add --verbose for speed stats.
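
Once the model is pulled, Ollama also serves it over a local HTTP API on port 11434, so you can script against it instead of using the interactive prompt. A minimal sketch, assuming the command-r:35b tag from above is already downloaded:

import json, urllib.request

# One non-streaming generation request against the local Ollama server.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "command-r:35b",   # use whichever tag you actually pulled
        "prompt": "In one sentence, what is retrieval-augmented generation?",
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])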

▸ SETUP GUIDE

Auto-setup with fitmyllm CLI

Detects your GPU, recommends the best model, downloads it, and starts chatting — zero config. Benchmarks your speed and contributes anonymous data to improve predictions.

pip install fitmyllm, then run fitmyllm.
Auto-detect GPU · Live tok/s in chat · Speed benchmarks · 9 inference engines
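
GPU auto-detection of this kind generally starts by asking the driver how much VRAM is present. A minimal sketch of that step using nvidia-smi (an illustration of the idea only, not fitmyllm's actual implementation, and it covers NVIDIA cards only):

import subprocess

# List each NVIDIA GPU with its total VRAM (nvidia-smi reports MiB).
out = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total",
     "--format=csv,noheader,nounits"],
    capture_output=True, text=True, check=True,
).stdout
for line in out.strip().splitlines():
    name, mem_mib = [field.strip() for field in line.split(",")]
    print(f"{name}: {int(mem_mib) / 1024:.0f} GB VRAM")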

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.

NVIDIA RTX 4090: 24 GB VRAM • 1008 GB/s • $1599
NVIDIA RTX 3090 Ti: 24 GB VRAM • 1008 GB/s • $999
NVIDIA RTX 3090: 24 GB VRAM • 936 GB/s • $850
AMD RX 7900 XTX: 24 GB VRAM • 960 GB/s • $999
Apple M4 Pro (24GB): 24 GB VRAM • 273 GB/s • $1399
NVIDIA L4 24GB: 24 GB VRAM • 300 GB/s • $2500
NVIDIA A10 24GB: 24 GB VRAM • 600 GB/s • $3500
Apple M2 (24GB): 24 GB VRAM • 100 GB/s • $999
Apple M3 (24GB): 24 GB VRAM • 100 GB/s • $999
Apple M4 (24GB): 24 GB VRAM • 120 GB/s • $699
NVIDIA Tesla M40 24 GB: 24 GB VRAM • 288 GB/s
NVIDIA Tesla P10: 24 GB VRAM • 694 GB/s
NVIDIA Tesla P40: 24 GB VRAM • 347 GB/s
NVIDIA Quadro RTX 6000: 24 GB VRAM • 672 GB/s • $4000
NVIDIA GeForce RTX 3090: 24 GB VRAM • 936 GB/s • $1499
NVIDIA A10 PCIe: 24 GB VRAM • 600 GB/s
NVIDIA A10G: 24 GB VRAM • 600 GB/s
NVIDIA RTX A5000: 24 GB VRAM • 768 GB/s • $2500
NVIDIA GeForce RTX 4090: 24 GB VRAM • 1010 GB/s • $1599
NVIDIA L40 CNX: 24 GB VRAM • 864 GB/s • $5000
NVIDIA L40G: 24 GB VRAM • 864 GB/s • $5000
NVIDIA A30 PCIe: 24 GB VRAM • 933 GB/s
NVIDIA A30X: 24 GB VRAM • 1220 GB/s
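
Every card above has 24 GB, which is why Q4_K_M (21.9 GB) is the reference point: it is the best-quality quantization in the table that still leaves headroom on a 24 GB card. A minimal sketch of that selection logic, using the VRAM estimates from this page (the 1.5 GB headroom margin is an assumed value, not the site's exact rule):

# Pick the highest-quality quant whose VRAM estimate fits a given card.
QUANTS = [  # (name, estimated VRAM in GB) from the table above
    ("IQ3_XXS", 14.7), ("IQ3_XS", 15.8), ("Q3_K_S", 16.4), ("IQ3_M", 16.9),
    ("Q3_K_M", 18.0), ("Q3_K_L", 19.3), ("IQ4_XS", 20.0), ("Q4_K_S", 20.9),
    ("Q4_K_M", 21.9), ("Q5_K_S", 24.9), ("Q5_K_M", 25.4), ("Q6_K", 29.2),
    ("Q8_0", 37.7), ("FP16", 70.5),
]

def best_quant(vram_gb, headroom_gb=1.5):
    fitting = [q for q in QUANTS if q[1] + headroom_gb <= vram_gb]
    return max(fitting, key=lambda q: q[1]) if fitting else None

print(best_quant(24))   # -> ('Q4_K_M', 21.9)
print(best_quant(48))   # -> ('Q8_0', 37.7)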

Read the full model card for detailed information about this model.

▸ SPEC SHEET

c4ai-command-r-v01 35B · 35B · Dense

▸ SPECIFICATIONS
PARAMETERS: 35B
ARCHITECTURE: Dense Transformer
CONTEXT LENGTH: 128K tokens
CAPABILITIES: chat, tool use
RELEASE DATE: 2024-03-11
PROVIDER: Cohere
FAMILY: command
▸ VRAM REQUIREMENTS
Quant      BPW    VRAM      Quality
IQ3_XXS    3.25   14.7 GB   82%
IQ3_XS     3.5    15.8 GB   84%
Q3_K_S     3.64   16.4 GB   85%
IQ3_M      3.76   16.9 GB   86%
Q3_K_M     4      18.0 GB   88%
Q3_K_L     4.3    19.3 GB   90%
IQ4_XS     4.46   20.0 GB   92%
Q4_K_S     4.67   20.9 GB   93%
Q4_K_M     4.89   21.9 GB   94%
Q5_K_S     5.57   24.9 GB   96%
Q5_K_M     5.7    25.4 GB   96%
Q6_K       6.56   29.2 GB   97%
Q8_0       8.5    37.7 GB   100%
FP16       16     70.5 GB   100%
§ 01 BENCHMARK SCORES
MMLU-PRO: 26.3
MATH: 3.5
IFEval: 67.5
BBH: 34.6
GPQA: 7.6
MUSR: 16.1
BigCodeBench: 37.1
§ 02 RUN COMMAND

Run c4ai-command-r-v01 35B locally with Ollama — needs 21.9 GB VRAM at Q4_K_M:

$ ollama run command-r:35b
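
The same GGUF quantizations also load directly with llama-cpp-python if you prefer a scripted setup over Ollama. A minimal sketch, assuming you have already downloaded a Q4_K_M GGUF of this model (the file name below is a placeholder, and the 8K context keeps the KV cache small on a 24 GB card):

from llama_cpp import Llama

# Load a local GGUF and offload every layer to the GPU (-1 = all layers).
llm = Llama(
    model_path="c4ai-command-r-v01-Q4_K_M.gguf",   # placeholder path
    n_ctx=8192,          # well below the 128K maximum to limit KV-cache use
    n_gpu_layers=-1,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give one use case for a 35B chat model."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])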