License Updated! We are pleased to announce our more flexible licensing terms.
Try on [FriendliAI](https://friendli.ai/suite/~/serverless-endpoints/LGAI-EXAONE/EXAONE-4.0-32B/overview) (licensed for commercial use).
*EXAONE 4.0 is officially supported by HuggingFace transformers! Please check out the guide [below](#quickstart).*
EXAONE-4.0-32B
Introduction
We introduce EXAONE 4.0, which integrates a Non-reasoning mode and Reasoning mode to achieve both the excellent usability of EXAONE 3.5 and the advanced reasoning abilities of EXAONE Deep. To pave the way for the agentic AI era, EXAONE 4.0 incorporates essential features such as agentic tool use, and its multilingual capabilities are extended to support Spanish in addition to English and Korean.
The EXAONE 4.0 model series consists of two sizes: a mid-size 32B model optimized for high performance, and a small-size 1.2B model designed for on-device applications.
In the EXAONE 4.0 architecture, we apply the following architectural changes compared to previous EXAONE models:
- Hybrid Attention: For the 32B model, we adopt a hybrid attention scheme that combines local attention (sliding window attention) with global attention (full attention) in a 3:1 ratio. We do not use RoPE (Rotary Positional Embedding) for global attention, for better global context understanding.
- QK-Reorder-Norm: We reorder the LayerNorm position relative to the traditional Pre-LN scheme by applying LayerNorm directly to the attention and MLP outputs, and we add RMS normalization right after the Q and K projections. This yields better performance on downstream tasks despite consuming more computation (a sketch of both changes follows this list).
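For illustration, here is a minimal, hypothetical sketch of the two changes above. The layer pattern, module names, and dimensions are illustrative assumptions, not the actual EXAONE 4.0 implementation:

```python
import torch
import torch.nn as nn

NUM_LAYERS = 64  # see Model Configuration below

def attention_type(layer_idx: int) -> str:
    # 3:1 local-to-global ratio; the exact position of the global layer
    # within each group of four is an assumption.
    return "global" if layer_idx % 4 == 3 else "local"

class QKNormProjections(nn.Module):
    """Q/K projections with RMSNorm applied right after each projection,
    per head, following the QK-Reorder-Norm description above."""
    def __init__(self, hidden_size: int, num_heads: int, head_dim: int):
        super().__init__()
        self.head_dim = head_dim
        self.q_proj = nn.Linear(hidden_size, num_heads * head_dim, bias=False)
        self.k_proj = nn.Linear(hidden_size, num_heads * head_dim, bias=False)
        self.q_norm = nn.RMSNorm(head_dim)  # requires torch >= 2.4
        self.k_norm = nn.RMSNorm(head_dim)

    def forward(self, x: torch.Tensor):
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, -1, self.head_dim)
        k = self.k_proj(x).view(b, t, -1, self.head_dim)
        return self.q_norm(q), self.k_norm(k)

print([attention_type(i) for i in range(8)])
# ['local', 'local', 'local', 'global', 'local', 'local', 'local', 'global']
```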
For more details, please refer to our technical report, HuggingFace paper, blog, and GitHub.
Model Configuration
- Number of Parameters (without embeddings): 30.95B
- Number of Layers: 64
- Number of Attention Heads: GQA with 40 query heads and 8 KV heads
- Vocab Size: 102,400
- Context Length: 131,072 tokens
Quickstart
You should install the transformers library version 4.54.0 or newer (e.g., `pip install "transformers>=4.54.0"`).
Non-reasoning mode
For general use, you can use the EXAONE 4.0 models with the following example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "LGAI-EXAONE/EXAONE-4.0-32B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="bfloat16",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Choose your prompt
prompt = "Explain how wonderful you are"      # English
prompt = "Explica lo increíble que eres"      # Spanish
prompt = "너가 얼마나 대단한지 설명해 봐"     # Korean

messages = [
    {"role": "user", "content": prompt}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
)

output = model.generate(
    input_ids.to(model.device),
    max_new_tokens=128,
    do_sample=False,
)
print(tokenizer.decode(output[0]))
```
Reasoning mode
The EXAONE 4.0 models have reasoning capabilities for handling complex problems. You can activate reasoning mode with the `enable_thinking=True` argument to the tokenizer, which opens a reasoning block that starts with a `<think>` tag without closing it.
```python
messages = [
    {"role": "user", "content": "Which one is bigger, 3.12 vs 3.9?"}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    enable_thinking=True,  # activates reasoning mode
)

output = model.generate(
    input_ids.to(model.device),
    max_new_tokens=128,
    do_sample=True,
    temperature=0.6,
    top_p=0.95
)
print(tokenizer.decode(output[0]))
```
[!IMPORTANT] Generation in reasoning mode can be sensitive to sampling parameters, so please refer to the Usage Guideline for better quality.
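If you need to separate the reasoning trace from the final answer during post-processing, a minimal helper along these lines can work. It assumes the model closes the reasoning block with a `</think>` tag, matching the `<think>` opening described above:

```python
# Hypothetical post-processing helper: split the generated text into the
# reasoning trace and the final answer. Assumes the reasoning block is
# closed with "</think>"; adjust if your decoded output differs.
def split_reasoning(text: str):
    marker = "</think>"
    if marker in text:
        reasoning, answer = text.split(marker, 1)
        return reasoning.replace("<think>", "").strip(), answer.strip()
    return None, text.strip()

reasoning, answer = split_reasoning(tokenizer.decode(output[0]))
print(answer)
```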
Agentic tool use
The EXAONE 4.0 models can be used as agents with their tool calling capabilities. You can provide tool schemas to the model for effective tool calling.
```python
import random

def roll_dice(max_num: int):
    return random.randint(1, max_num)

tools = [
    {
        "type": "function",
        "function": {
            "name": "roll_dice",
            "description": "Roll a dice with the number 1 to N. User can select the number N.",
            "parameters": {
                "type": "object",
                "required": ["max_num"],
                "properties": {
                    "max_num": {
                        "type": "integer",
                        "description": "Max number of the dice"
                    }
                }
            }
        }
    }
]

messages = [
    {"role": "user", "content": "Roll D6 dice twice!"}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    tools=tools,
)

output = model.generate(
    input_ids.to(model.device),
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)
print(tokenizer.decode(output[0]))
```
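The example above stops once the model emits its tool call. To complete the loop, you would parse the call, execute the function, and append the result before generating again. A minimal sketch, assuming the call is emitted as a JSON object wrapped in `<tool_call>...</tool_call>` tags and that the chat template accepts a `tool` role (both are assumptions; inspect the decoded output and the model's chat template for the actual format):

```python
import json
import re

# Hypothetical continuation: decode only the newly generated tokens.
generated = tokenizer.decode(output[0][input_ids.shape[-1]:])

# Assumed format: <tool_call>{"name": ..., "arguments": {...}}</tool_call>
match = re.search(r"<tool_call>(.*?)</tool_call>", generated, re.DOTALL)
if match:
    call = json.loads(match.group(1))
    result = roll_dice(**call["arguments"])  # dispatch to the local function
    messages.append({"role": "assistant", "content": generated})
    messages.append({"role": "tool", "content": str(result)})  # assumes a "tool" role
    # Re-apply the chat template with tools=tools and generate again so the
    # model can use the result (and issue the second roll).
```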
Deployment
TensorRT-LLM
TensorRT-LLM supports EXAONE 4.0 models in its latest commits. Until this support is included in a release, you need to clone the TensorRT-LLM repository and build from source.
```bash
git clone https://github.com/NVIDIA/TensorRT-LLM.git
```
After cloning the repository, build and install it from source. Please refer to the official documentation for a guide to building the TensorRT-LLM environment.
You can run the TensorRT-LLM server with the following steps:

1. Write an extra configuration YAML file:

   ```yaml
   # extra_llm_api_config.yaml
   kv_cache_config:
     enable_block_reuse: false
   ```

2. Run the server with the configuration:

   ```bash
   trtllm-serve serve LGAI-EXAONE/EXAONE-4.0-32B --backend pytorch --extra_llm_api_options extra_llm_api_config.yaml
   ```
For more details, please refer to the TensorRT-LLM documentation for EXAONE.
vLLM
vLLM officially supports EXAONE 4.0 models starting from version 0.10.0. You can run the vLLM server with the following command:

```bash
vllm serve LGAI-EXAONE/EXAONE-4.0-32B --enable-auto-tool-choice --tool-call-parser hermes --reasoning-parser deepseek_r1
```
For more details, please refer to the vLLM documentation.
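Once the server is up, it exposes an OpenAI-compatible API (port 8000 by default), so any OpenAI client can query it. A short example using the `openai` Python package:

```python
# Query the local vLLM server through its OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="LGAI-EXAONE/EXAONE-4.0-32B",
    messages=[{"role": "user", "content": "Which one is bigger, 3.12 vs 3.9?"}],
)
print(response.choices[0].message.content)
```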
[!NOTE] Other inference engines, including sglang, do not officially support EXAONE 4.0 yet. We will update this section as soon as these libraries are updated.
Performance
The following tables show the evaluation results of each model in reasoning and non-reasoning modes. The evaluation details can be found in the technical report.
- The check mark denotes that the model has hybrid reasoning capability, evaluated by selecting reasoning or non-reasoning mode depending on the purpose.
- To assess Korean practical and professional knowledge, we adopt both the KMMLU-Redux and KMMLU-Pro benchmarks. Both datasets are publicly released!
32B Reasoning Mode
...