Meta / Dense

CodeLlama-34b

Tags: coding, tool_use, chat

Parameters: 33.7B
Context length: 4K
Benchmarks: 9
Quantizations: 4
HF downloads: 1K
Architecture: Dense
Released: 2024-03-14
Layers: 48
KV Heads: 8
Head Dim: 128
Family: llama

Code Llama

Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the 34B instruct-tuned version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index below.

Model index (Hugging Face repositories):

  • 7B: meta-llama/CodeLlama-7b-hf (base) • meta-llama/CodeLlama-7b-Python-hf (Python) • meta-llama/CodeLlama-7b-Instruct-hf (Instruct)
  • 13B: meta-llama/CodeLlama-13b-hf (base) • meta-llama/CodeLlama-13b-Python-hf (Python) • meta-llama/CodeLlama-13b-Instruct-hf (Instruct)
  • 34B: meta-llama/CodeLlama-34b-hf (base) • meta-llama/CodeLlama-34b-Python-hf (Python) • meta-llama/CodeLlama-34b-Instruct-hf (Instruct)
  • 70B: meta-llama/CodeLlama-70b-hf (base) • meta-llama/CodeLlama-70b-Python-hf (Python) • meta-llama/CodeLlama-70b-Instruct-hf (Instruct)

Model Use

To use this model, please make sure to install transformers and accelerate:

pip install transformers accelerate
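
A minimal loading sketch using the Transformers pipeline API (a sketch, assuming enough GPU memory is available and that you have accepted the Meta license for the meta-llama/CodeLlama-34b-Instruct-hf repository listed in the index above):

import torch
from transformers import pipeline

# Load the 34B Instruct checkpoint in fp16 and let accelerate place it across available GPUs.
generator = pipeline(
    "text-generation",
    model="meta-llama/CodeLlama-34b-Instruct-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Plain code completion: the model continues the prompt.
prompt = "import socket\n\ndef ping_exponential_backoff(host: str):"
result = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.1, top_p=0.95)
print(result[0]["generated_text"])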

Model capabilities:

  • Code completion.
  • Infilling.
  • Instructions / chat (prompt format sketched below).
  • Python specialist.
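
For the instructions/chat capability, the Instruct variants were tuned on the Llama 2 chat layout ([INST] ... [/INST], with an optional <<SYS>> system block). A sketch of building such a prompt by hand, reusing the generator from the loading sketch above (the tokenizer adds the leading BOS token itself):

# Llama-2-style instruct prompt; the system block is optional.
system = "Provide answers in Python."
user = "Write a function that returns the n-th Fibonacci number."
prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

result = generator(prompt, max_new_tokens=256, do_sample=True, temperature=0.2, top_p=0.95)
print(result[0]["generated_text"])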

Model Details

Note: Use of this model is governed by the Meta license. Meta developed and publicly released the Code Llama family of large language models (LLMs).

Model Developers Meta

Variations Code Llama comes in three model sizes and three variants:

  • Code Llama: base models designed for general code synthesis and understanding
  • Code Llama - Python: designed specifically for Python
  • Code Llama - Instruct: for instruction following and safer deployment

All variants are available in sizes of 7B, 13B and 34B parameters.

This repository contains the Instruct version of the 34B-parameter model.

Input Models input text only.

Output Models generate text only.

Model Architecture Code Llama is an auto-regressive language model that uses an optimized transformer architecture.

Model Dates Code Llama and its variants have been trained between January 2023 and July 2023.

Status This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.

License A custom commercial license is available at: https://ai.meta.com/resources/models-and-libraries/llama-downloads/

Research Paper More information can be found in the paper "Code Llama: Open Foundation Models for Code" or its arXiv page.

Intended Use

Intended Use Cases Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.

Out-of-Scope Uses Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.

Hardware and Software

Training Factors We used custom training libraries. The training and fine-tuning of the released models were performed on Meta’s Research Super Cluster.

Carbon Footprint In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.
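
As a rough sanity check on those figures (the grid intensity below is an assumption, not a number from the card): 400K GPU-hours at roughly 0.4 kW per A100-80GB works out to about 160 MWh, and at around 0.4 kgCO2eq/kWh that is on the order of 64 tCO2eq, consistent with the reported 65.3 tCO2eq.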

Training Data

All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the research paper for details).

Evaluation Results

See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.

Ethical Considerations and Limitations

Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide.

Quantizations & VRAM

  • Q4_K_M (4.5 bpw): 19.4 GB VRAM required • 94% quality
  • Q6_K (6.5 bpw): 27.9 GB VRAM required • 97% quality
  • Q8_0 (8 bpw): 34.2 GB VRAM required • 100% quality
  • FP16 (16 bpw): 67.9 GB VRAM required • 100% quality
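
The VRAM figures above track the raw weight footprint, parameters × bits per weight, plus some headroom for the KV cache and runtime overhead. A rough estimate of the weight footprint alone (a sketch; the exact overhead behind the table is not specified):

# Approximate weight memory for a 33.7B-parameter model at each quantization level.
params = 33.7e9
for name, bpw in [("Q4_K_M", 4.5), ("Q6_K", 6.5), ("Q8_0", 8.0), ("FP16", 16.0)]:
    weight_gb = params * bpw / 8 / 1e9   # bits -> bytes -> GB
    print(f"{name}: ~{weight_gb:.1f} GB for the weights alone")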

Benchmarks (9)

  • Arena Elo: 1058
  • MBPP: 56.3
  • IFEval: 46.0
  • HumanEval: 43.9
  • BBH: 26.0
  • MMLU-PRO: 17.1
  • MUSR: 7.2
  • MATH: 4.3
  • GPQA: 2.6

Run with Ollama

$ ollama run codellama:34b
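
Once the model has been pulled, Ollama also serves a local HTTP API on port 11434; a minimal sketch of calling it from Python (assumes the Ollama server is running and codellama:34b is available):

import json, urllib.request

# One-shot (non-streaming) generation request against the local Ollama server.
payload = {
    "model": "codellama:34b",
    "prompt": "Write a Python function that checks whether a string is a palindrome.",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])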

GPUs that can run this model

At Q4_K_M quantization, sorted by minimum VRAM:

  • AMD RX 7900 XT: 20 GB VRAM • 800 GB/s • $849
  • NVIDIA RTX 4000 Ada 20GB: 20 GB VRAM • 432 GB/s • $1250
  • NVIDIA A10M: 20 GB VRAM • 500 GB/s
  • NVIDIA GeForce RTX 3080 Ti 20 GB: 20 GB VRAM • 760 GB/s • $1199
  • AMD Radeon RX 7900 XT: 20 GB VRAM • 800 GB/s • $899
  • NVIDIA RTX 4000 Ada Generation: 20 GB VRAM • 360 GB/s
  • NVIDIA RTX 4000 SFF Ada Generation: 20 GB VRAM • 280 GB/s
  • NVIDIA RTX A4500: 20 GB VRAM • 640 GB/s
  • NVIDIA RTX 4090: 24 GB VRAM • 1008 GB/s • $1599
  • NVIDIA RTX 3090 Ti: 24 GB VRAM • 1008 GB/s • $999
  • NVIDIA RTX 3090: 24 GB VRAM • 936 GB/s • $850
  • AMD RX 7900 XTX: 24 GB VRAM • 960 GB/s • $999
  • Apple M4 Pro (24GB): 24 GB VRAM • 273 GB/s • $1399
  • NVIDIA L4 24GB: 24 GB VRAM • 300 GB/s • $2500
  • NVIDIA A10 24GB: 24 GB VRAM • 600 GB/s • $3500
  • Apple M2 (24GB): 24 GB VRAM • 100 GB/s • $999
  • Apple M3 (24GB): 24 GB VRAM • 100 GB/s • $999
  • Apple M4 (24GB): 24 GB VRAM • 120 GB/s • $699
  • NVIDIA Tesla M40 24 GB: 24 GB VRAM • 288 GB/s
  • NVIDIA Tesla P10: 24 GB VRAM • 694 GB/s
  • NVIDIA Tesla P40: 24 GB VRAM • 347 GB/s
  • NVIDIA Quadro RTX 6000: 24 GB VRAM • 672 GB/s
  • NVIDIA Quadro RTX 6000 Passive: 24 GB VRAM • 624 GB/s
  • NVIDIA GeForce RTX 3090: 24 GB VRAM • 936 GB/s • $1499
  • NVIDIA A10 PCIe: 24 GB VRAM • 600 GB/s
  • NVIDIA A10G: 24 GB VRAM • 600 GB/s
  • NVIDIA RTX A5000: 24 GB VRAM • 768 GB/s
  • NVIDIA GeForce RTX 3090 Ti: 24 GB VRAM • 1010 GB/s • $1999
  • NVIDIA GeForce RTX 4090: 24 GB VRAM • 1010 GB/s • $1599
  • NVIDIA L40 CNX: 24 GB VRAM • 864 GB/s
