Qwen2.5-7B

Tags: tool_use, chat

  • Vendor: Alibaba
  • Parameters: 7.6B
  • Context length: 32K
  • Benchmarks: 10
  • Quantizations: 4
  • HF downloads: 21.5M
  • Architecture: Dense
  • Released: 2024-09-16
  • Layers: 28
  • KV Heads: 4
  • Head Dim: 128
  • Family: qwen

Qwen2.5-7B-Instruct

Chat demo: https://chat.qwenlm.ai/

Introduction

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:

  • Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to our specialized expert models in these domains.
  • Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON. More resilient to the diversity of system prompts, enhancing role-play implementation and condition-setting for chatbots.
  • Long-context support up to 128K tokens, with generation of up to 8K tokens.
  • Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

This repo contains the instruction-tuned 7B Qwen2.5 model, which has the following features:

  • Type: Causal Language Models
  • Training Stage: Pretraining & Post-training
  • Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
  • Number of Parameters: 7.61B
  • Number of Parameters (Non-Embedding): 6.53B
  • Number of Layers: 28
  • Number of Attention Heads (GQA): 28 for Q and 4 for KV
  • Context Length: Full 131,072 tokens and generation 8192 tokens
    • Please refer to the Processing Long Texts section below for detailed instructions on how to deploy Qwen2.5 for handling long texts.
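
As a concrete illustration of the attention figures listed above, here is a back-of-the-envelope KV-cache estimate (a sketch only; it assumes fp16 cache entries and takes the 128 head dimension from the spec table):

layers, kv_heads, head_dim = 28, 4, 128
bytes_per_value = 2                                             # fp16 assumed
per_token = 2 * layers * kv_heads * head_dim * bytes_per_value  # keys + values
print(f"{per_token} bytes per token")                           # 57344 bytes ≈ 56 KiB
print(f"{per_token * 131072 / 2**30:.1f} GiB at full context")  # ≈ 7.0 GiB at 131,072 tokens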

For more details, please refer to our blog, GitHub, and Documentation.

Requirements

The code for Qwen2.5 has been merged into the latest Hugging Face transformers, and we advise you to use the latest version of transformers.

With transformers<4.37.0, you will encounter the following error:

KeyError: 'qwen2'
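
You can assert the minimum version up front before loading the model (a minimal sketch; packaging is already a dependency of transformers):

import transformers
from packaging import version

# Qwen2 support landed in transformers 4.37.0; older versions raise KeyError: 'qwen2'.
assert version.parse(transformers.__version__) >= version.parse("4.37.0"), (
    f"transformers {transformers.__version__} is too old for Qwen2.5; "
    "upgrade with: pip install -U transformers"
)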

Quickstart

Below is a code snippet using apply_chat_template that shows how to load the tokenizer and model and how to generate content.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B-Instruct"

# Load the model in its native dtype and spread it across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
# Render the chat into the model's prompt format, ending with the assistant turn.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated completion remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
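
If you want tokens to appear as they are generated rather than all at once, a streaming variant of the same call is possible with transformers' TextStreamer (an optional sketch, not part of the original card):

from transformers import TextStreamer

# Prints decoded tokens to stdout as generation proceeds.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**model_inputs, max_new_tokens=512, streamer=streamer)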

Processing Long Texts

The current config.json is set for context length up to 32,768 tokens. To handle extensive inputs exceeding 32,768 tokens, we utilize YaRN, a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.

For supported frameworks, you can add the following to config.json to enable YaRN:

{
  ...,
  "rope_scaling": {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn"
  }
}
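
If you would rather not edit config.json, the same values can usually be passed at load time, since from_pretrained forwards unrecognized keyword arguments to the model config (a sketch under that assumption):

from transformers import AutoModelForCausalLM

# Load-time equivalent of the config.json edit above (assumption: the
# rope_scaling kwarg is forwarded to the config rather than ignored).
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    torch_dtype="auto",
    device_map="auto",
    rope_scaling={
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
        "type": "yarn",
    },
)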

For deployment, we recommend using vLLM. Please refer to our Documentation for usage if you are not familiar with vLLM. Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the rope_scaling configuration only when processing long contexts is required.
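
For offline batch inference, a minimal vLLM sketch looks like this (assuming a recent vLLM release; the max_model_len here stays within the default 32K window, so no rope_scaling entry is needed):

from vllm import LLM, SamplingParams

# Offline inference; for contexts beyond 32K, add the rope_scaling entry
# to config.json first as described above.
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct", max_model_len=32768)
params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=512)
outputs = llm.generate(["Give me a short introduction to large language models."], params)
print(outputs[0].outputs[0].text)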

Evaluation & Performance

Detailed evaluation results are reported in this 📑 blog.

For GPU memory requirements and the corresponding throughput, see the results here.

Quantizations & VRAM

Quantization   Bits/weight   VRAM required   Quality
Q4_K_M         4.5 bpw       4.8 GB          94%
Q6_K           6.5 bpw       6.7 GB          97%
Q8_0           8 bpw         8.1 GB          100%
FP16           16 bpw        15.7 GB         100%
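
The VRAM figures track a simple rule of thumb: weight memory is roughly parameters × bits-per-weight / 8, plus runtime overhead for the KV cache, activations, and CUDA context (a sketch of that arithmetic):

# Approximate weight memory per quantization; the table's VRAM numbers
# are slightly higher because of runtime overhead.
params = 7.61e9
for name, bpw in [("Q4_K_M", 4.5), ("Q6_K", 6.5), ("Q8_0", 8.0), ("FP16", 16.0)]:
    print(f"{name}: ~{params * bpw / 8 / 1e9:.1f} GB of weights")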

Benchmarks (10)

  • Arena Elo: 1460
  • IFEval: 75.9
  • MBPP: 60.8
  • MATH: 50.0
  • HumanEval: 45.7
  • MMLU-PRO: 36.5
  • BBH: 34.9
  • BigCodeBench: 29.1
  • MUSR: 8.5
  • GPQA: 5.5

Run with Ollama

$ ollama run qwen2.5:7b

GPUs that can run this model

At Q4_K_M quantization, sorted by minimum VRAM.

GPU                                     VRAM   Bandwidth   Price
NVIDIA Tesla K20c                       5 GB   208 GB/s
NVIDIA Tesla K20m                       5 GB   208 GB/s
NVIDIA Tesla K20s                       5 GB   208 GB/s
NVIDIA GeForce GTX 1060 5 GB            5 GB   160 GB/s
NVIDIA P102-100                         5 GB   440 GB/s
NVIDIA RTX 3050 6GB                     6 GB   168 GB/s    $169
Intel Arc A380                          6 GB   186 GB/s    $129
NVIDIA RTX 2060 6GB                     6 GB   336 GB/s    $150
NVIDIA GeForce GTX 1660 SUPER           6 GB   336 GB/s    $150
NVIDIA GeForce GTX 1660 Ti              6 GB   288 GB/s    $140
NVIDIA GeForce GTX 1060 6 GB            6 GB   192 GB/s    $80
NVIDIA Tesla C2070                      6 GB   143 GB/s
NVIDIA Tesla C2075                      6 GB   150 GB/s
NVIDIA Tesla C2090                      6 GB   177 GB/s
NVIDIA Tesla M2070                      6 GB   150 GB/s
NVIDIA Tesla M2070-Q                    6 GB   150 GB/s
NVIDIA Tesla M2075                      6 GB   150 GB/s
NVIDIA Tesla M2090                      6 GB   177 GB/s
NVIDIA Tesla X2070                      6 GB   177 GB/s
NVIDIA Tesla X2090                      6 GB   177 GB/s
NVIDIA Tesla K20X                       6 GB   250 GB/s
NVIDIA Tesla K20Xm                      6 GB   250 GB/s
NVIDIA GeForce GTX 1060 6 GB 9Gbps      6 GB   217 GB/s
NVIDIA GeForce GTX 1060 6 GB GDDR5X     6 GB   192 GB/s
NVIDIA GeForce GTX 1060 6 GB GP104      6 GB   192 GB/s
NVIDIA GeForce GTX 1060 6 GB Rev. 2     6 GB   192 GB/s
NVIDIA GeForce GTX 1660                 6 GB   192 GB/s
