Jamba 1.5 Mini 52B
Model Card
Model Information
Please note that this version will be deprecated on May 6, 2024. We encourage you to transition to the new version, which can be found here.
The AI21 Jamba 1.5 family consists of state-of-the-art, hybrid SSM-Transformer instruction-following foundation models. The Jamba models are the most powerful and efficient long-context models on the market, delivering up to 2.5X faster inference than leading models of comparable sizes.
The models demonstrate superior long context handling, speed, and quality. They mark the first time a non-Transformer model has been successfully scaled to the quality and strength of the market’s leading models.
Jamba 1.5 Mini (12B active/52B total) and Jamba 1.5 Large (94B active/398B total) are also optimized for business use cases and capabilities such as function calling, structured output (JSON), and grounded generation.
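As a brief illustration of the function-calling capability, the sketch below passes a hypothetical `get_weather` tool definition through the tokenizer's chat template; it assumes a recent transformers version whose `apply_chat_template` accepts a `tools` argument. See the white paper for the full tool-use format.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai21labs/AI21-Jamba-1.5-Mini")

# Hypothetical tool definition in the standard JSON-schema style; the chat
# template renders it into the prompt so the model can emit a function call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
prompt = tokenizer.apply_chat_template(
    messages, tools=tools, add_generation_prompt=True, tokenize=False
)
```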
The models are released under the Jamba Open Model License, a permissive license allowing full research use and commercial use under the license terms. If you need to license the model for your needs, talk to us.
For more details on this model, see the white paper and the release blog post.
Model Details
- Developed by: AI21
- Model type: Joint Attention and Mamba (Jamba)
- License: Jamba Open Model License
- Context length: 256K
- Knowledge cutoff date: March 5, 2024
- Supported languages: English, Spanish, French, Portuguese, Italian, Dutch, German, Arabic and Hebrew
Results on common benchmarks
| Benchmark | Jamba 1.5 Mini | Jamba 1.5 Large |
|---|---|---|
| Arena Hard | 46.1 | 65.4 |
| Wild Bench | 42.4 | 48.5 |
| MMLU (CoT) | 69.7 | 81.2 |
| MMLU Pro (CoT) | 42.5 | 53.5 |
| GPQA | 32.3 | 36.9 |
| ARC Challenge | 85.7 | 93.0 |
| BFCL | 80.6 | 85.5 |
| GSM-8K | 75.8 | 87.0 |
| RealToxicity (lower is better) | 8.1 | 6.7 |
| TruthfulQA | 54.1 | 58.3 |
RULER Benchmark - Effective context length
| Models | Claimed Length | Effective Length | 4K | 8K | 16K | 32K | 64K | 128K | 256K |
|---|---|---|---|---|---|---|---|---|---|
| Jamba 1.5 Large (94B/398B) | 256K | 256K | **96.7** | **96.6** | **96.4** | **96.0** | **95.4** | **95.1** | **93.9** |
| Jamba 1.5 Mini (12B/52B) | 256K | 256K | **95.7** | **95.2** | **94.7** | **93.8** | **92.7** | **89.8** | **86.1** |
| Gemini 1.5 Pro | 1M | >128K | **96.7** | **95.8** | **96.0** | **95.9** | **95.9** | **94.4** | -- |
| GPT-4 1106-preview | 128K | 64K | **96.6** | **96.3** | **95.2** | **93.2** | **87.0** | 81.2 | -- |
| Llama 3.1 70B | 128K | 64K | **96.5** | **95.8** | **95.4** | **94.8** | **88.4** | 66.6 | -- |
| Command R-plus (104B) | 128K | 32K | **95.6** | **95.2** | **94.2** | **92.0** | 84.3 | 63.1 | -- |
| Llama 3.1 8B | 128K | 32K | **95.5** | **93.8** | **91.6** | **87.4** | 84.7 | 77.0 | -- |
| Mistral Large 2 (123B) | 128K | 32K | **96.2** | **96.1** | **95.1** | **93.0** | 78.8 | 23.7 | -- |
| Mixtral 8x22B (39B/141B) | 64K | 32K | **95.6** | **94.9** | **93.4** | **90.9** | 84.7 | 31.7 | -- |
| Mixtral 8x7B (12.9B/46.7B) | 32K | 32K | **94.9** | **92.1** | **92.5** | **85.9** | 72.4 | 44.5 | -- |
Multilingual MMLU
| Language | Jamba 1.5 Large | Jamba 1.5 Mini |
|---|---|---|
| French | 75.8 | 65.9 |
| Spanish | 75.5 | 66.3 |
| Portuguese | 75.5 | 66.7 |
| Italian | 75.2 | 65.1 |
| Dutch | 74.6 | 65.0 |
| German | 73.9 | 63.8 |
| Arabic | 67.1 | 57.3 |
Usage
Prerequisites
To run the optimized Mamba implementations, you first need to install `mamba-ssm` and `causal-conv1d`:
```bash
pip install mamba-ssm "causal-conv1d>=1.2.0"
```
The model must also be on a CUDA device.
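A quick sanity check (a minimal sketch, assuming a standard CUDA-enabled PyTorch install) that a CUDA device is visible and the kernel packages import correctly:
```python
import torch

# The optimized Mamba kernels require a CUDA device.
assert torch.cuda.is_available(), "No CUDA device found; Jamba's optimized kernels need one"

# If either import fails, reinstall the packages above.
import mamba_ssm        # optimized selective-scan (Mamba) kernels
import causal_conv1d    # fused causal conv1d used inside the Mamba blocks

print("CUDA device:", torch.cuda.get_device_name(0))
```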
Run the model with vLLM
The recommended way to perform efficient inference with Jamba 1.5 Mini is using vLLM. First, make sure to install vLLM (version 0.5.4 or higher is required):
```bash
pip install "vllm>=0.5.4"
```
In the example below, `number_gpus` should match the number of GPUs you want to deploy Jamba 1.5 Mini on. A minimum of two 80GB GPUs is required.
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model = "ai21labs/AI21-Jamba-1.5-Mini"
number_gpus = 2

llm = LLM(model=model,
          max_model_len=200*1024,
          tensor_parallel_size=number_gpus)

tokenizer = AutoTokenizer.from_pretrained(model)

messages = [
    {"role": "system", "content": "You are an ancient oracle who speaks in cryptic but wise phrases, always hinting at deeper meanings."},
    {"role": "user", "content": "Hello!"},
]

prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

sampling_params = SamplingParams(temperature=0.4, top_p=0.95, max_tokens=100)
outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
# Output: Seek and you shall find. The path is winding, but the journey is enlightening. What wisdom do you seek from the ancient echoes?
```
With the default BF16 precision on two 80GB A100 GPUs and the default vLLM configuration, you'll be able to perform inference on prompts up to 200K tokens long. With more than two 80GB GPUs, you can easily fit the full 256K context.
**Note:** vLLM's main branch has some memory utilization improvements specific to the Jamba architecture that allow using the full 256K context length on two 80GB GPUs. You can build vLLM from source if you wish to make use of them.
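For example, with such a vLLM build, the only change from the example above is raising `max_model_len` (a sketch, assuming the same two-GPU setup):
```python
from vllm import LLM

# Sketch: full 256K context on 2x80GB GPUs; requires a vLLM build that
# includes the Jamba-specific memory utilization improvements.
llm = LLM(model="ai21labs/AI21-Jamba-1.5-Mini",
          max_model_len=256*1024,
          tensor_parallel_size=2)
```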
ExpertsInt8 quantization
We've developed an innovative and efficient quantization technique, ExpertsInt8, designed for MoE models deployed in vLLM, including Jamba models. Using it, you'll be able to deploy Jamba 1.5 Mini on a single 80GB GPU.
To use ExpertsInt8, you need vLLM version 0.5.5 or higher: `pip install "vllm>=0.5.5"`
With the default vLLM configuration, you can fit prompts of up to 100K tokens on a single 80GB A100 GPU:
```python
import os
os.environ['VLLM_FUSED_MOE_CHUNK_SIZE'] = '32768'   # This is a workaround for a bug in vLLM's fused_moe kernel

from vllm import LLM
llm = LLM(model="ai21labs/AI21-Jamba-1.5-Mini",
          max_model_len=100*1024,
          quantization="experts_int8")
```
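Generation with the quantized model then works exactly as in the BF16 example above; a minimal sketch reusing the `llm` object created here:
```python
from vllm import SamplingParams
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai21labs/AI21-Jamba-1.5-Mini")
messages = [{"role": "user", "content": "Give me a one-sentence summary of the Jamba architecture."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

# Reuses the `llm` object constructed above with quantization="experts_int8".
outputs = llm.generate(prompt, SamplingParams(temperature=0.4, top_p=0.95, max_tokens=100))
print(outputs[0].outputs[0].text)
```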
Run the model with transformers
The following example loads Jamba 1.5 Mini to the GPU in BF16 precision, uses optimized FlashAttention2 and Mamba kernels, and parallelizes the model across multiple GPUs using accelerate. Note that in half precision (FP16/BF16), Jamba 1.5 Mini is too large to fit on a single 80GB GPU, so you'll need at least 2 such GPUs.
...
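The elided code follows the standard transformers loading pattern; a minimal sketch, assuming `flash-attn` and `accelerate` are installed in addition to the kernel packages above:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: BF16 weights, FlashAttention2 for the attention layers, and
# accelerate's device_map="auto" to shard the model across available GPUs.
model = AutoModelForCausalLM.from_pretrained("ai21labs/AI21-Jamba-1.5-Mini",
                                             torch_dtype=torch.bfloat16,
                                             attn_implementation="flash_attention_2",
                                             device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("ai21labs/AI21-Jamba-1.5-Mini")
```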