
Llama 3.3 70B

Tags: chat, coding, reasoning, tool use, thinking, distilled
Parameters: 70.6B
Context length: 128K
Benchmarks: 9
Quantizations: 4
HF downloads: 5.0M
Architecture: Dense
Released: 2024-12-06
Layers: 80
KV Heads: 8
Head Dim: 128
Family: llama

Model Information

The Meta Llama 3.3 multilingual large language model (LLM) is an instruction tuned generative model in 70B (text in/text out). The Llama 3.3 instruction tuned text only model is optimized for multilingual dialogue use cases and outperforms many of the available open source and closed chat models on common industry benchmarks.

Model developer: Meta

Model Architecture: Llama 3.3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

Llama 3.3 (text only)
  Training Data: A new mix of publicly available online data.
  Params: 70B
  Input modalities: Multilingual Text
  Output modalities: Multilingual Text and code
  Context length: 128k
  GQA: Yes
  Token count: 15T+
  Knowledge cutoff: December 2023

Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

For the Llama 3.3 model, token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
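As a rough illustration of what GQA buys at inference time, the spec figures above (80 layers, 8 KV heads, head dim 128, 128K context) pin down the KV-cache footprint. The sketch below is a back-of-envelope estimate assuming an fp16 cache, not a measured number:

```python
# Back-of-envelope KV-cache estimate for Llama 3.3 70B, using the spec
# figures above: 80 layers, 8 KV heads, head dim 128, fp16 cache (2 bytes).
LAYERS, KV_HEADS, HEAD_DIM, BYTES_FP16 = 80, 8, 128, 2

# K and V each hold kv_heads * head_dim values per layer, per token.
bytes_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES_FP16
print(bytes_per_token)  # 327680 (~320 KiB per token)

# At the full 128K context, the cache alone is substantial:
full_context_gib = bytes_per_token * 128 * 1024 / 2**30
print(full_context_gib)  # 40.0 (GiB)
```

With far more query heads than the 8 shared KV heads (as is typical for Llama-family 70B models), the cache would be several times larger without GQA.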

Model Release Date:

  • 70B Instruct: December 6, 2024

Status: This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

License: A custom commercial license, the Llama 3.3 Community License Agreement, is available at: https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/LICENSE

Where to send questions or comments about the model: Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3.3 in applications, please go here.

Intended Use

Intended Use Cases: Llama 3.3 is intended for commercial and research use in multiple languages. Instruction tuned text only models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. Llama 3.3 also supports leveraging its outputs to improve other models, including for synthetic data generation and distillation. The Llama 3.3 Community License allows for these use cases.

Out-of-scope: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3.3 Community License. Use in languages beyond those explicitly referenced as supported in this model card**.

**Note: Llama 3.3 has been trained on a broader collection of languages than the 8 supported languages. Developers may fine-tune Llama 3.3 models for languages beyond the 8 supported languages provided they comply with the Llama 3.3 Community License and the Acceptable Use Policy, and in such cases they are responsible for ensuring that any use of Llama 3.3 in additional languages is done in a safe and responsible manner.

How to use

This repository contains two versions of Llama-3.3-70B-Instruct, for use with transformers and with the original llama codebase.

Use with transformers

Starting with transformers >= 4.45.0, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.

Make sure to update your transformers installation via pip install --upgrade transformers.

See the snippet below for usage with Transformers:

import transformers
import torch

model_id = "meta-llama/Llama-3.3-70B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])

Tool use with transformers

Llama 3.3 supports multiple tool use formats. You can see a full guide to prompt formatting here.

Tool use is also supported through chat templates in Transformers. Here is a quick example showing a single simple tool:

# First, define a tool
def get_current_temperature(location: str) -> float:
    """
    Get the current temperature at a location.
    
    Args:
        location: The location to get the temperature for, in the format "City, Country"
    Returns:
        The current temperature at the specified location in the specified units, as a float.
    """
    return 22.  # A real function should probably actually get the temperature!

# Next, load the tokenizer, create a chat, and apply the chat template
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.3-70B-Instruct")

messages = [
  {"role": "system", "content": "You are a bot that responds to weather queries."},
  {"role": "user", "content": "Hey, what's the temperature in Paris right now?"}
]

inputs = tokenizer.apply_chat_template(messages, tools=[get_current_temperature], add_generation_prompt=True)

You can then generate text from this input as normal. If the model generates a tool call, you should add it to the chat like so:

tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})

and then call the tool and append the result, with the tool role, like so:

messages.append({"role": "tool", "name": "get_current_temperature", "content": "22.0"})

After that, you can generate() again to let the model use the tool result in the chat. Note that this was a very brief introduction to tool calling - for more information, see the LLaMA prompt format docs and the Transformers tool use documentation.
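Putting the pieces above together, the message-list bookkeeping alone can be sketched in plain Python. The model-generation steps are stubbed out here; a real run would call generate() and parse the tool call from the model's output:

```python
# Sketch of the full tool-calling round trip described above, with the
# model's generation steps stubbed out.

def get_current_temperature(location: str) -> float:
    """Get the current temperature at a location (stubbed)."""
    return 22.0

messages = [
    {"role": "system", "content": "You are a bot that responds to weather queries."},
    {"role": "user", "content": "Hey, what's the temperature in Paris right now?"},
]

# Suppose the model emitted this tool call; append it as an assistant turn.
tool_call = {"name": "get_current_temperature", "arguments": {"location": "Paris, France"}}
messages.append({"role": "assistant", "tool_calls": [{"type": "function", "function": tool_call}]})

# Execute the tool and feed the result back with the "tool" role.
result = get_current_temperature(**tool_call["arguments"])
messages.append({"role": "tool", "name": tool_call["name"], "content": str(result)})

print([m["role"] for m in messages])  # ['system', 'user', 'assistant', 'tool']
```

At this point the messages list is ready for another apply_chat_template() and generate() pass, so the model can phrase its answer around the tool result.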

Use with bitsandbytes

The model checkpoints can be used in 8-bit and 4-bit for further memory optimisations using bitsandbytes and transformers.

See the snippet below for usage:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.3-70B-Instruct"
quantization_config = BitsAndBytesConfig(load_in_8bit=True)

quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    quantization_config=quantization_config,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
input_text = "What are we having for dinner?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

output = quantized_model.generate(**input_ids, max_new_tokens=10)

print(tokenizer.decode(output[0], skip_special_tokens=True))

To load in 4-bit, simply pass load_in_4bit=True.
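For reference, a minimal 4-bit configuration might look like the following sketch; the bnb_4bit_* options shown are optional refinements (NF4 with bfloat16 compute is a common pairing), not required settings:

```python
# A 4-bit variant of the 8-bit configuration above (requires bitsandbytes).
import torch
from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # NormalFloat4 data type
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# Pass this config to AutoModelForCausalLM.from_pretrained() exactly as in
# the 8-bit example; weight memory drops roughly in half again.
```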

Use with llama

Please follow the instructions in the repository.

To download the original checkpoints, see the example command below leveraging huggingface-cli:

huggingface-cli download meta-llama/Llama-3.3-70B-Instruct --include "original/*" --local-dir Llama-3.3-70B-Instruct

Hardware and Software

...

Quantizations & VRAM

Quant  | bpw  | VRAM required | Quality
Q4_K_M | 4.5  | 40.2 GB       | 94%
Q6_K   | 6.5  | 57.9 GB       | 97%
Q8_0   | 8.0  | 71.1 GB       | 100%
FP16   | 16.0 | 141.7 GB      | 100%
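The VRAM figures above follow a simple rule of thumb: weights-only memory is roughly parameters × bits-per-weight / 8, with the listed numbers adding about half a gigabyte of runtime overhead. A quick sanity check:

```python
# Weights-only VRAM estimate: params * bits-per-weight / 8.
# Compare against the table above (which includes ~0.5 GB of overhead).
PARAMS = 70.6e9

estimates = {}
for name, bpw in [("Q4_K_M", 4.5), ("Q6_K", 6.5), ("Q8_0", 8.0), ("FP16", 16.0)]:
    estimates[name] = round(PARAMS * bpw / 8 / 1e9, 1)
    print(f"{name}: ~{estimates[name]} GB for weights alone")
```

For Q4_K_M this gives about 39.7 GB, close to the listed 40.2 GB; the same half-gigabyte gap holds for the other quantizations.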

Benchmarks (9)

Arena Elo: 1482
IFEval: 92.1
HumanEval: 88.4
MATH: 77.0
MMLU-PRO: 55.8
BBH: 54.9
GPQA: 46.7
BigCodeBench: 42.8
MUSR: 26.4

Run with Ollama

$ ollama run llama3.3:70b

GPUs that can run this model

At Q4_K_M quantization. Sorted by minimum VRAM.

GPU                             | VRAM  | Bandwidth | Vendor | Price
Apple M3 Max (48GB)             | 48 GB | 400 GB/s  | Apple  | $2899
Apple M4 Pro (48GB)             | 48 GB | 273 GB/s  | Apple  | $1799
Apple M4 Max (48GB)             | 48 GB | 546 GB/s  | Apple  | $2499
NVIDIA L40S 48GB                | 48 GB | 864 GB/s  | NVIDIA | $7500
NVIDIA L40 48GB                 | 48 GB | 864 GB/s  | NVIDIA | $5500
NVIDIA RTX 6000 Ada 48GB        | 48 GB | 960 GB/s  | NVIDIA | $6800
NVIDIA A40 48GB                 | 48 GB | 696 GB/s  | NVIDIA | $4650
NVIDIA RTX A6000 48GB           | 48 GB | 768 GB/s  | NVIDIA | $4650
NVIDIA Quadro RTX 8000          | 48 GB | 672 GB/s  | NVIDIA | n/a
NVIDIA Quadro RTX 8000 Passive  | 48 GB | 624 GB/s  | NVIDIA | n/a
NVIDIA A40 PCIe                 | 48 GB | 696 GB/s  | NVIDIA | n/a
NVIDIA RTX 6000 Ada Generation  | 48 GB | 960 GB/s  | NVIDIA | n/a
NVIDIA L20                      | 48 GB | 864 GB/s  | NVIDIA | n/a
AMD Radeon PRO W7800 48 GB      | 48 GB | 864 GB/s  | AMD    | n/a
AMD Radeon PRO W7900            | 48 GB | 864 GB/s  | AMD    | n/a
Intel Data Center GPU Max 1100  | 48 GB | 1230 GB/s | Intel  | n/a
NVIDIA RTX 5880 Ada Generation  | 48 GB | 864 GB/s  | NVIDIA | n/a
NVIDIA RTX PRO 5000 Blackwell   | 48 GB | 1340 GB/s | NVIDIA | n/a
AMD Radeon PRO W7900D           | 48 GB | 864 GB/s  | AMD    | n/a
Apple M1 Ultra (64GB)           | 64 GB | 800 GB/s  | Apple  | $2499
Apple M2 Ultra (64GB)           | 64 GB | 800 GB/s  | Apple  | $2999
Apple M4 Max (64GB)             | 64 GB | 546 GB/s  | Apple  | $2899
Apple M2 Max (64GB)             | 64 GB | 400 GB/s  | Apple  | $2299
Apple M3 Max (64GB)             | 64 GB | 300 GB/s  | Apple  | $2799
Apple M4 Pro (64GB)             | 64 GB | 273 GB/s  | Apple  | $2599
AMD Radeon Instinct MI200       | 64 GB | 1640 GB/s | AMD    | n/a
AMD Radeon Instinct MI210       | 64 GB | 1640 GB/s | AMD    | n/a
NVIDIA H100 SXM5 64 GB          | 64 GB | 2020 GB/s | NVIDIA | n/a
NVIDIA Jetson AGX Orin 64 GB    | 64 GB | 205 GB/s  | NVIDIA | n/a
NVIDIA Jetson T4000             | 64 GB | 273 GB/s  | NVIDIA | n/a
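The cutoff behind this list can be sketched as a simple fit check against the 40.2 GB Q4_K_M footprint from the quantization table; the headroom figure below is an illustrative assumption for context and runtime buffers, not a measured requirement:

```python
# Fit check for the Q4_K_M build: the 40.2 GB figure comes from the
# quantization table above; the 4 GB headroom is an assumed allowance
# for KV cache and runtime buffers.
REQUIRED_GB = 40.2

def fits(vram_gb: float, headroom_gb: float = 4.0) -> bool:
    """True if the quantized model plus headroom fits in the given VRAM."""
    return vram_gb >= REQUIRED_GB + headroom_gb

print(fits(48))  # True  - 48 GB cards clear the bar
print(fits(24))  # False - a single 24 GB card does not
```

This is why the list starts at 48 GB: it is the smallest common VRAM tier that leaves working room above the model weights.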
