ERNIE 4.5 21B A3B

Baidu · Mixture of Experts · chat · multilingual · thinking

| Key | Value |
| --- | --- |
| Parameters | 21B (3B active) |
| Context length | 128K |
| Architecture | MoE |
| Family | ernie |
| Released | 2025-06-30 |
| Layers | 28 |
| KV heads | 4 |
| Head dim | 128 |
| Quantizations | 4 |
| HF downloads | 20K |

ERNIE-4.5-21B-A3B

> [!NOTE]
> "-Paddle" models use PaddlePaddle weights, while "-PT" models use Transformer-style PyTorch weights.

ERNIE 4.5 Highlights

The advanced capabilities of the ERNIE 4.5 models, particularly the MoE-based A47B and A3B series, are underpinned by several key technical innovations:

  1. Multimodal Heterogeneous MoE Pre-Training: Our models are jointly trained on both textual and visual modalities to better capture the nuances of multimodal information and improve performance on tasks involving text understanding and generation, image understanding, and cross-modal reasoning. To achieve this without one modality hindering the learning of another, we designed a heterogeneous MoE structure, incorporated modality-isolated routing, and employed router orthogonal loss and multimodal token-balanced loss. These architectural choices ensure that both modalities are effectively represented, allowing for mutual reinforcement during training.

  2. Scaling-Efficient Infrastructure: We propose a novel heterogeneous hybrid parallelism and hierarchical load balancing strategy for efficient training of ERNIE 4.5 models. By using intra-node expert parallelism, memory-efficient pipeline scheduling, FP8 mixed-precision training, and fine-grained recomputation methods, we achieve remarkable pre-training throughput. For inference, we propose a multi-expert parallel collaboration method and a convolutional code quantization algorithm to achieve 4-bit/2-bit lossless quantization. Furthermore, we introduce PD disaggregation with dynamic role switching for effective resource utilization to enhance inference performance for ERNIE 4.5 MoE models. Built on PaddlePaddle, ERNIE 4.5 delivers high-performance inference across a wide range of hardware platforms.

  3. Modality-Specific Post-Training: To meet the diverse requirements of real-world applications, we fine-tuned variants of the pre-trained model for specific modalities. Our LLMs are optimized for general-purpose language understanding and generation. The VLMs focus on visual-language understanding and support both thinking and non-thinking modes. Each model employs a combination of Supervised Fine-tuning (SFT) and either Direct Preference Optimization (DPO) or a modified reinforcement learning method named Unified Preference Optimization (UPO) for post-training.
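
The modality-isolated routing mentioned above can be sketched in miniature: each modality gets its own router and its own expert pool, so tokens of one modality never compete with the other for expert capacity. The dimensions, random weights, and `route` helper below are illustrative assumptions, not ERNIE's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_EXPERTS, TOP_K = 8, 4, 2  # toy sizes, far smaller than ERNIE's

# Modality-isolated routing: a separate router (and expert pool) per modality,
# so learning in one modality cannot skew expert assignment in the other.
routers = {
    "text": rng.normal(size=(DIM, N_EXPERTS)),
    "vision": rng.normal(size=(DIM, N_EXPERTS)),
}

def route(token, modality, top_k=TOP_K):
    """Pick the top-k expert indices for a token within its modality's pool."""
    logits = token @ routers[modality]
    return sorted(np.argsort(logits)[-top_k:].tolist())

token = rng.normal(size=DIM)
print(route(token, "text"))
print(route(token, "vision"))  # generally differs: independent router weights
```

In a full implementation the router outputs would also feed auxiliary losses (the router orthogonal loss and token-balanced loss mentioned above) to keep expert load balanced across modalities.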

Model Overview

ERNIE-4.5-21B-A3B is a post-trained text MoE model with 21B total parameters and 3B activated parameters per token. The model configuration details are as follows:

| Key | Value |
| --- | --- |
| Modality | Text |
| Training Stage | Post-training |
| Params (Total / Activated) | 21B / 3B |
| Layers | 28 |
| Heads (Q / KV) | 20 / 4 |
| Text Experts (Total / Activated) | 64 / 6 |
| Vision Experts (Total / Activated) | 64 / 6 |
| Shared Experts | 2 |
| Context Length | 131072 |
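
As a worked example of what this configuration implies at inference time, the per-token KV-cache footprint under grouped-query attention follows directly from the layer count, KV head count, and head dimension. The bf16 cache dtype is an assumption here (a common default, not stated in the table):

```python
# KV-cache size estimate from the configuration table above.
LAYERS, KV_HEADS, HEAD_DIM, CONTEXT = 28, 4, 128, 131072
BYTES = 2  # bf16 assumed for cached keys/values

per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES  # factor 2: K and V
total = per_token * CONTEXT
print(per_token)                   # 57344 bytes (56 KiB) per token
print(f"{total / 2**30:.1f} GiB")  # 7.0 GiB at the full 131072 context
```

The small KV head count (4 KV heads shared by 20 query heads) is what keeps the full-context cache down to roughly 7 GiB.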

Quickstart

Using transformers library

Note: You'll need the transformers library (version 4.54.0 or newer) installed to use this model.

The following code snippet illustrates how to use the model to generate content from a given input.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "baidu/ERNIE-4.5-21B-A3B-PT"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], add_special_tokens=False, return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# decode the generated ids
generate_text = tokenizer.decode(output_ids, skip_special_tokens=True)
print("generate_text:", generate_text)
```

vLLM inference

Requires vLLM >= 0.10.2 (excluding 0.11.0):

```shell
vllm serve baidu/ERNIE-4.5-21B-A3B-PT
```
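
Once the server is up, it exposes vLLM's OpenAI-compatible API (port 8000 by default) and can be queried like this; the prompt and `max_tokens` value below are arbitrary choices for illustration:

```shell
# Query the OpenAI-compatible chat endpoint; "model" must match the served model.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "baidu/ERNIE-4.5-21B-A3B-PT",
    "messages": [{"role": "user", "content": "Give me a short introduction to large language model."}],
    "max_tokens": 256
  }'
```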

Quantizations & VRAM

| Quantization | Bits per weight | VRAM required | Quality |
| --- | --- | --- | --- |
| Q4_K_M | 4.5 bpw | 12.1 GB | 94% |
| Q6_K | 6.5 bpw | 17.4 GB | 97% |
| Q8_0 | 8 bpw | 21.4 GB | 100% |
| FP16 | 16 bpw | 42.3 GB | 100% |
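
The VRAM figures above track the raw weight size closely: weight bytes are roughly total parameters times bits per weight divided by 8, with the small remainder going to runtime overhead such as activations and buffers. This is a rule of thumb for a quick sanity check, not an exact accounting:

```python
# Rough weight-size estimate: params * bits-per-weight / 8, in decimal GB.
PARAMS = 21e9  # 21B total parameters

def weight_gb(bpw):
    """Approximate on-disk/in-VRAM weight size in GB for a given bpw."""
    return PARAMS * bpw / 8 / 1e9

for name, bpw in [("Q4_K_M", 4.5), ("Q6_K", 6.5), ("Q8_0", 8.0), ("FP16", 16.0)]:
    print(f"{name}: {weight_gb(bpw):.1f} GB")
# Q4_K_M: 11.8 GB, Q6_K: 17.1 GB, Q8_0: 21.0 GB, FP16: 42.0 GB --
# each slightly below the table's VRAM figures, as expected.
```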
