Bunny: A family of lightweight multimodal models
📖 Technical report | 🤗 Data | 🤖 Data | 🤗 HFSpace | 🐰 Demo
Bunny-Llama-3-8B-V: 🤗 v1.1 | 🤗 v1.0 | 🤗 v1.0-GGUF
Bunny-4B: 🤗 v1.1 | 🤗 v1.0 | 🤗 v1.0-GGUF
Bunny is a family of lightweight but powerful multimodal models. It offers multiple plug-and-play vision encoders, such as EVA-CLIP and SigLIP, and language backbones, including Llama-3-8B, Phi-3-mini, Phi-1.5, StableLM-2, Qwen1.5, MiniCPM and Phi-2. To compensate for the smaller model size, we construct more informative training data through curated selection from a broader data source.
We are thrilled to introduce Bunny-Llama-3-8B-V, the first vision-language model based on Llama-3, showcasing exceptional performance. The v1.1 version accepts high-resolution images up to 1152x1152.
Moreover, our Bunny-4B model, built upon SigLIP and Phi-3-mini, outperforms state-of-the-art MLLMs, not only in comparison with models of similar size but also against larger MLLMs (7B and 13B). Its v1.1 version likewise accepts high-resolution images up to 1152x1152.
News and Updates
- 2024.07.23 🔥 The complete training strategy and data of the latest Bunny are released! Check more details about Bunny in the Technical Report, Data and Training Tutorial!
- 2024.07.21 🔥 SpatialBot, SpatialQA and SpatialBench are released! SpatialBot is an embodiment model based on Bunny, which comprehends spatial relationships by understanding and using depth information. Try the model, dataset and benchmark on GitHub!
- 2024.06.20 🔥 The MMR benchmark is released! It measures MLLMs' understanding ability and their robustness against misleading questions. Check the performance of Bunny and more details on GitHub!
- 2024.06.01 🔥 Bunny-v1.1-Llama-3-8B-V, supporting 1152x1152 resolution, is released! It is built upon SigLIP and Llama-3-8B-Instruct with the $S^2$-Wrapper. Check more details in HuggingFace and wisemodel! 🐰 Demo
- 2024.05.08 Bunny-v1.1-4B, supporting 1152x1152 resolution, is released! It is built upon SigLIP and Phi-3-Mini-4K 3.8B with the $S^2$-Wrapper. Check more details in HuggingFace! 🐰 Demo
- 2024.05.01 Bunny-v1.0-4B, a vision-language model based on Phi-3, is released! It is built upon SigLIP and Phi-3-Mini-4K 3.8B. Check more details in HuggingFace! 🤗 GGUF
- 2024.04.21 Bunny-Llama-3-8B-V, the first vision-language model based on Llama-3, is released! It is built upon SigLIP and Llama-3-8B-Instruct. Check more details in HuggingFace, ModelScope, and wisemodel! The GGUF format is available in HuggingFace and wisemodel.
- 2024.04.18 Bunny-v1.0-3B-zh, powerful in both English and Chinese, is released! It is built upon SigLIP and MiniCPM-2B. Check more details in HuggingFace, ModelScope, and wisemodel! The evaluation results are in the Evaluation section. We sincerely thank Zhenwei Shao for his kind help.
- 2024.03.15 Bunny-v1.0-2B-zh, focusing on Chinese, is released! It is built upon SigLIP and Qwen1.5-1.8B. Check more details in HuggingFace, ModelScope, and wisemodel! The evaluation results are in the Evaluation section.
- 2024.03.06 The Bunny training data is released! Check more details about Bunny-v1.0-data in HuggingFace or ModelScope!
- 2024.02.20 The Bunny technical report is ready! Check more details about Bunny here!
- 2024.02.07 Bunny is released! Bunny-v1.0-3B, built upon SigLIP and Phi-2, outperforms state-of-the-art MLLMs, not only in comparison with models of similar size but also against larger MLLMs (7B), and even achieves performance on par with LLaVA-13B! 🤗 Bunny-v1.0-3B
Quickstart
HuggingFace transformers
Here is a code snippet showing how to use Bunny-v1.1-Llama-3-8B-V, Bunny-v1.1-4B, Bunny-v1.0-3B and other released checkpoints with HuggingFace transformers.
This snippet only applies to the models above, because for users' convenience we manually merged some configuration code into a single file; compare modeling_bunny_llama.py and configuration_bunny_llama.py with their related parts in the Bunny source code to see the difference. For other models, including models trained by yourself, we recommend installing the Bunny source code and loading them with it. Alternatively, you can copy files like modeling_bunny_llama.py and configuration_bunny_llama.py into your model directory and modify auto_map in config.json, but we cannot guarantee correctness and you may need to adapt some code to fit your model.
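For illustration only, a minimal sketch of repointing auto_map after copying those files next to your checkpoint; the class names below are assumptions inferred from the file names, so check the copied files for the classes they actually define:
import json
# Load the checkpoint's config.json, repoint auto_map at the copied files, and save it back.
with open('config.json') as f:
    cfg = json.load(f)
cfg['auto_map'] = {
    # hypothetical class names -- replace with the classes defined in the copied files
    'AutoConfig': 'configuration_bunny_llama.BunnyLlamaConfig',
    'AutoModelForCausalLM': 'modeling_bunny_llama.BunnyLlamaForCausalLM',
}
with open('config.json', 'w') as f:
    json.dump(cfg, f, indent=2)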
Before running the snippet, you need to install the following dependencies:
pip install torch transformers accelerate pillow
If there is enough CUDA memory, it is faster to run this snippet with CUDA_VISIBLE_DEVICES=0.
Users, especially those in mainland China, may want to use a HuggingFace mirror site.
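For example, a minimal sketch of pointing huggingface_hub at a mirror before any download starts; the URL below is just a commonly used community mirror, not an official endpoint:
import os
# Must be set before transformers/huggingface_hub begin downloading anything.
os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'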
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image
import warnings
# disable some warnings
transformers.logging.set_verbosity_error()
transformers.logging.disable_progress_bar()
warnings.filterwarnings('ignore')
# set device
device = 'cuda' # or cpu
torch.set_default_device(device)
model_name = 'BAAI/Bunny-v1_1-Llama-3-8B-V' # or 'BAAI/Bunny-Llama-3-8B-V' or 'BAAI/Bunny-v1_1-4B' or 'BAAI/Bunny-v1_0-4B' or 'BAAI/Bunny-v1_0-3B' or 'BAAI/Bunny-v1_0-3B-zh' or 'BAAI/Bunny-v1_0-2B-zh'
offset_bos = 1 # for Bunny-v1_1-Llama-3-8B-V, Bunny-Llama-3-8B-V, Bunny-v1_1-4B, Bunny-v1_0-4B and Bunny-v1_0-3B-zh
# offset_bos = 0 for Bunny-v1_0-3B and Bunny-v1_0-2B-zh
# create model
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16, # float32 for cpu
    device_map='auto',
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    trust_remote_code=True)
# text prompt
prompt = 'Why is the image funny?'
text = f"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\n{prompt} ASSISTANT:"
text_chunks = [tokenizer(chunk).input_ids for chunk in text.split('<image>')]
input_ids = torch.tensor(text_chunks[0] + [-200] + text_chunks[1][offset_bos:], dtype=torch.long).unsqueeze(0).to(device)
# image, sample images can be found in https://huggingface.co/BAAI/Bunny-v1_1-Llama-3-8B-V/tree/main/images
image = Image.open('example_2.png')
image_tensor = model.process_images([image], model.config).to(dtype=model.dtype, device=device)
# generate
output_ids = model.generate(
    input_ids,
    images=image_tensor,
    max_new_tokens=100,
    use_cache=True,
    repetition_penalty=1.0 # increase this to reduce repetitive output
)[0]
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
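If GPU memory is tight, here is a hedged sketch of loading the model in 4-bit with bitsandbytes instead of float16 (requires pip install bitsandbytes; whether Bunny's remote-code path fully supports quantized loading has not been verified here). It reuses model_name from the snippet above:
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
# 4-bit weights with float16 compute; this replaces the explicit torch_dtype above.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16)
model_4bit = AutoModelForCausalLM.from_pretrained(
    model_name,  # model_name as defined in the snippet above
    quantization_config=quant_config,
    device_map='auto',
    trust_remote_code=True)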
ModelScope
We advise users, especially those in mainland China, to use ModelScope; snapshot_download can help resolve issues with downloading checkpoints.
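For instance, a minimal sketch of pre-downloading a checkpoint with snapshot_download before loading it (the printed path is ModelScope's local cache directory for that model):
from modelscope.hub.snapshot_download import snapshot_download
# Download the checkpoint (or reuse the cached copy) and print its local path.
local_path = snapshot_download('BAAI/Bunny-Llama-3-8B-V')
print(local_path)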
Before running the snippet, you need to install the following dependencies:
pip install torch modelscope transformers accelerate pillow
If there is enough CUDA memory, it is faster to run this snippet with CUDA_VISIBLE_DEVICES=0.
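As an alternative to setting the variable on the command line, a minimal sketch of restricting the process to one GPU from inside Python; it has to run before torch initializes CUDA:
import os
# Restrict the process to GPU 0; set this before importing torch or touching CUDA.
os.environ['CUDA_VISIBLE_DEVICES'] = '0'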
import torch
import transformers
from modelscope import AutoTokenizer, AutoModelForCausalLM
from modelscope.hub.snapshot_download import snapshot_download
from PIL import Image
import warnings
# disable some warnings
transformers.logging.set_verbosity_error()
transformers.logging.disable_progress_bar()
warnings.filterwarnings('ignore')
# set device
device = 'cuda' # or cpu
torch.set_default_device(device)
model_name = 'BAAI/Bunny-Llama-3-8B-V' # or 'BAAI/Bunny-v1.0-3B' or 'BAAI/Bunny-v1.0-3B-zh' or 'BAAI/Bunny-v1.0-2B-zh'
offset_bos = 1 # for Bunny-Llama-3-8B-V and Bunny-v1.0-3B-zh
# offset_bos = 0 for Bunny-v1.0-3B and Bunny-v1.0-2B-zh
# create model
snapshot_download(model_id='thomas/siglip-so400m-patch14-384')
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16, # float32 for cpu
    device_map='auto',
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    trust_remote_code=True)
# text prompt
prompt = 'Why is the image funny?'
text = f"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\n{prompt} ASSISTANT:"
text_chunks = [tokenizer(chunk).input_ids for chunk in text.split('<image>')]
input_ids = torch.tensor(text_chunks[0] + [-200] + text_chunks[1][offset_bos:], dtype=torch.long).unsqueeze(0).to(device)
# image, sample images can be found in images folder on https://www.modelscope.cn/models/BAAI/Bunny-Llama-3-8B-V/files
image = Image.open('example_2.png')
image_tensor = model.process_images([image], model.config).to(dtype=model.dtype, device=device)
# generate
output_ids = model.generate(
    input_ids,
    images=image_tensor,
    max_new_tokens=100,
    use_cache=True,
    repetition_penalty=1.0 # increase this to reduce repetitive output
)[0]
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
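To watch the answer appear token by token instead of waiting for the full output, here is a hedged sketch using transformers' TextStreamer (assuming Bunny's custom generate forwards standard generation keyword arguments, which has not been verified here). It reuses model, tokenizer, input_ids and image_tensor from the snippet above:
from transformers import TextStreamer
# Prints decoded tokens to stdout as they are generated.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    input_ids,
    images=image_tensor,
    max_new_tokens=100,
    use_cache=True,
    streamer=streamer)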
Model Zoo
Evaluation
Checkpoint | MME$^\text{P}$ | MME$^\text{C}$ | MMB$^\text{T/D}$ | MMB-CN$^\text{T/D}$ | SEED(-IMG) | MMMU$^\text{V/T}$ | VQA$^\text{v2}$ | GQA | SQA$^\text{I}$ | POPE |
---|---|---|---|---|---|---|---|---|---|---|
bunny-phi-1.5-eva-lora | 1213.7 | 278.9 | 60.9/56.8 | - | 56.4/64.1 | 30.0/28.4 | 76.5 | 60.4 | 58.2 | 86.1 |