Qwen 1.5 4B (chat)

Vendor: Alibaba
Parameters: 4B
Context length: 32K
Architecture: Dense
Family: qwen
Released: 2024-02-04
Layers: 40
KV Heads: 20
Head Dim: 128
Benchmarks: 6
Quantizations: 4

Qwen1.5-4B-Chat

Introduction

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. Compared with the previously released Qwen, the improvements include:

  • 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and a 14B MoE model with 2.7B activated parameters;
  • Significant performance improvement in human preference for chat models;
  • Multilingual support in both base and chat models;
  • Stable support of 32K context length for models of all sizes;
  • No need for trust_remote_code.

For more details, please refer to our blog post and GitHub repo.

Model Details

Qwen1.5 is a language model series that includes decoder language models of different sizes. For each size, we release the base language model and the aligned chat model. The models are based on the Transformer architecture with SwiGLU activation, attention QKV bias, grouped-query attention, a mixture of sliding-window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code. For the beta version, we have temporarily left out GQA (except for 32B) and the mixture of SWA and full attention.

Training details

We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.

Requirements

The code for Qwen1.5 has been merged into the latest Hugging Face transformers, and we advise you to install transformers>=4.37.0; otherwise you might encounter the following error:

KeyError: 'qwen2'
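
As a quick sanity check, you can assert the installed version before loading the model. This is a minimal sketch using the packaging library (already a dependency of transformers):

import transformers
from packaging import version

# Qwen1.5 support (model type "qwen2") landed in transformers 4.37.0;
# older versions fail with KeyError: 'qwen2' when resolving the config.
assert version.parse(transformers.__version__) >= version.parse("4.37.0"), (
    f"transformers {transformers.__version__} is too old; "
    "run: pip install -U 'transformers>=4.37.0'"
)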

Quickstart

Here we provide a code snippet with apply_chat_template to show you how to load the tokenizer and model, and how to generate content.

from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda"  # the device to load the model onto

# Load the chat model and its tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-4B-Chat",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-4B-Chat")

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
# Render the conversation with the model's chat template and append the
# assistant turn marker so the model knows it should respond.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated tokens remain.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
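
If you would rather stream tokens to stdout as they are generated, transformers ships a TextStreamer that plugs into the same call; a minimal variant reusing model, tokenizer, and model_inputs from the snippet above:

from transformers import TextStreamer

# Print tokens as they are produced, skipping the echoed prompt
# and any special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(model_inputs.input_ids, max_new_tokens=512, streamer=streamer)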

For quantized models, we advise you to use the GPTQ, AWQ, and GGUF counterparts, namely Qwen1.5-4B-Chat-GPTQ-Int4, Qwen1.5-4B-Chat-GPTQ-Int8, Qwen1.5-4B-Chat-AWQ, and Qwen1.5-4B-Chat-GGUF.
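
These quantized checkpoints load through the same API. A minimal sketch for the GPTQ-Int4 variant, assuming the auto-gptq and optimum packages are installed (transformers relies on them to load GPTQ weights):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Same loading path as above, pointed at the Int4 GPTQ checkpoint.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-4B-Chat-GPTQ-Int4",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-4B-Chat-GPTQ-Int4")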

Tips

  • If you encounter code switching or other bad cases, we advise you to use the hyperparameters we provide in generation_config.json, as shown in the sketch below.
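
from_pretrained already picks up the repository's generation_config.json, but you can load and inspect it explicitly; a minimal sketch:

from transformers import GenerationConfig

# Fetch the recommended sampling hyperparameters (temperature, top_p,
# repetition_penalty, ...) and pass them to generate explicitly.
gen_config = GenerationConfig.from_pretrained("Qwen/Qwen1.5-4B-Chat")
print(gen_config)
generated_ids = model.generate(
    model_inputs.input_ids,
    generation_config=gen_config,
    max_new_tokens=512
)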

Quantizations & VRAM

Quantization   bpw   VRAM required   Quality
Q4_K_M         4.5   2.7 GB          94%
Q6_K           6.5   3.7 GB          97%
Q8_0           8.0   4.5 GB          100%
FP16           16    8.5 GB          100%
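
The VRAM figures above follow a simple rule of thumb: weight memory is roughly parameters x bits-per-weight / 8, plus headroom for the KV cache and activations. A back-of-the-envelope sketch (the 1.2x overhead factor is an assumption, so it will not match every row exactly):

def estimate_vram_gb(params_b: float, bpw: float, overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weights at bpw bits each, padded by an
    assumed overhead factor for KV cache and activations."""
    weight_gb = params_b * bpw / 8  # 4B params at 4.5 bpw -> 2.25 GB of weights
    return weight_gb * overhead

print(round(estimate_vram_gb(4.0, 4.5), 1))  # ~2.7 GB, in line with the Q4_K_M row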

Benchmarks (6)

IFEval: 31.6
BBH: 16.3
MMLU-PRO: 15.5
MUSR: 7.4
MATH: 2.8
GPQA: 2.2

Run with Ollama

$ ollama run qwen:4b
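
Once the model is pulled, Ollama also serves a local REST API on port 11434. A minimal sketch against the /api/chat endpoint (assumes the requests package and a running Ollama server):

import requests

# Send one chat turn to the local Ollama server and read the reply.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "qwen:4b",
        "messages": [{"role": "user", "content": "Introduce large language models briefly."}],
        "stream": False,  # return a single JSON object instead of a token stream
    },
)
print(resp.json()["message"]["content"])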
