
Bunny: A family of lightweight multimodal models


📖 Technical report | 🤗 Data | 🤖 Data | 🤗 HFSpace | 🐰 Demo

Bunny-Llama-3-8B-V: 🤗 v1.1 | 🤗 v1.0 | 🤗 v1.0-GGUF

Bunny-4B: 🤗 v1.1 | 🤗 v1.0 | 🤗 v1.0-GGUF

Bunny is a family of lightweight but powerful multimodal models. It offers multiple plug-and-play vision encoders, like EVA-CLIP and SigLIP, and language backbones, including Llama-3-8B, Phi-3-mini, Phi-1.5, StableLM-2, Qwen1.5, MiniCPM and Phi-2. To compensate for the decrease in model size, we construct more informative training data by curated selection from a broader data source.

We are thrilled to introduce Bunny-Llama-3-8B-V, the pioneering vision-language model based on Llama-3, showcasing exceptional performance. The v1.1 version accepts high-resolution images up to 1152x1152.

[Figure: performance comparison of Bunny-Llama-3-8B-V]

Moreover, our Bunny-4B model, built upon SigLIP and Phi-3-mini, outperforms state-of-the-art MLLMs, not only in comparison with models of similar size but also against larger MLLMs (7B and 13B). Also, the v1.1 version accepts high-resolution images up to 1152x1152.


News and Updates

  • 2024.07.23 🔥 The full training strategy and data of the latest Bunny are released! Check more details about Bunny in the Technical Report, Data and Training Tutorial!

  • 2024.07.21 🔥 SpatialBot, SpatialQA and SpatialBench are released! SpatialBot is an embodied model based on Bunny that comprehends spatial relationships by understanding and using depth information. Try the model, dataset and benchmark on GitHub!

  • 2024.06.20 🔥 The MMR benchmark is released! It measures MLLMs' understanding ability and their robustness against misleading questions. Check the performance of Bunny and more details on GitHub!

  • 2024.06.01 🔥 Bunny-v1.1-Llama-3-8B-V, supporting 1152x1152 resolution, is released! It is built upon SigLIP and Llama-3-8B-Instruct with S$^2$-Wrapper. Check more details in HuggingFace and wisemodel! 🐰 Demo

  • 2024.05.08 Bunny-v1.1-4B, supporting 1152x1152 resolution, is released! It is built upon SigLIP and Phi-3-Mini-4K 3.8B with S$^2$-Wrapper. Check more details in HuggingFace! 🐰 Demo

  • 2024.05.01 Bunny-v1.0-4B, a vision-language model based on Phi-3, is released! It is built upon SigLIP and Phi-3-Mini-4K 3.8B. Check more details in HuggingFace! 🤗 GGUF

  • 2024.04.21 Bunny-Llama-3-8B-V, the first vision-language model based on Llama-3, is released! It is built upon SigLIP and Llama-3-8B-Instruct. Check more details in HuggingFace, ModelScope, and wisemodel! The GGUF format is in HuggingFace and wisemodel.

  • 2024.04.18 Bunny-v1.0-3B-zh, strong in both English and Chinese, is released! It is built upon SigLIP and MiniCPM-2B. Check more details in HuggingFace, ModelScope, and wisemodel! The evaluation results are in the Evaluation section. We sincerely thank Zhenwei Shao for his kind help.

  • 2024.03.15 Bunny-v1.0-2B-zh, focusing on Chinese, is released! It is built upon SigLIP and Qwen1.5-1.8B. Check more details in HuggingFace, ModelScope, and wisemodel! The evaluation results are in the Evaluation section.

  • 2024.03.06 Bunny training data is released! Check more details about Bunny-v1.0-data in HuggingFace or ModelScope!

  • 2024.02.20 Bunny technical report is ready! Check more details about Bunny here!

  • 2024.02.07 Bunny is released! Bunny-v1.0-3B built upon SigLIP and Phi-2 outperforms the state-of-the-art MLLMs, not only in comparison with models of similar size but also against larger MLLMs (7B), and even achieves performance on par with LLaVA-13B! 🤗 Bunny-v1.0-3B

Quickstart

HuggingFace transformers

Here is a code snippet showing how to use Bunny-v1.1-Llama-3-8B-V, Bunny-v1.1-4B, Bunny-v1.0-3B and so on with HuggingFace transformers.

This snippet only works for the models listed above, because we manually merge some configuration code into a single file for users' convenience. For example, you can compare modeling_bunny_llama.py and configuration_bunny_llama.py with their related parts in the Bunny source code to see the difference. For other models, including models trained by yourself, we recommend loading them with the Bunny source code installed. Alternatively, you can copy files like modeling_bunny_llama.py and configuration_bunny_llama.py into your model directory and modify auto_map in config.json, as sketched below; however, we can't guarantee its correctness and you may need to modify some code to fit your model.
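As a rough sketch of that last option (the path and class names below are placeholders; take the actual class names from the files you copy), editing auto_map in config.json could look like this:

import json

# Hypothetical sketch: point auto_map at the copied remote-code files.
# The class names are assumptions; check the copied configuration_bunny_llama.py
# and modeling_bunny_llama.py for the actual ones.
config_path = 'path/to/your/model/config.json'  # placeholder path

with open(config_path) as f:
    config = json.load(f)

config['auto_map'] = {
    'AutoConfig': 'configuration_bunny_llama.BunnyLlamaConfig',
    'AutoModelForCausalLM': 'modeling_bunny_llama.BunnyLlamaForCausalLM',
}

with open(config_path, 'w') as f:
    json.dump(config, f, indent=2)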

Before running the snippet, you need to install the following dependencies:

pip install torch transformers accelerate pillow

If there is enough CUDA memory, the snippet runs faster if you set CUDA_VISIBLE_DEVICES=0 so that only a single GPU is used.

Users, especially those in mainland China, may want to use a HuggingFace mirror site.
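For example, one generic way (not specific to Bunny) is to point huggingface_hub at a mirror through the HF_ENDPOINT environment variable before transformers is imported; the mirror URL below is only an illustration, substitute whichever mirror you use.

import os

# Set the mirror endpoint before importing transformers / huggingface_hub,
# since the endpoint is typically read once at import time.
os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'  # example mirror; replace as needed

import transformers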

import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image
import warnings

# disable some warnings
transformers.logging.set_verbosity_error()
transformers.logging.disable_progress_bar()
warnings.filterwarnings('ignore')

# set device
device = 'cuda'  # or cpu
torch.set_default_device(device)

model_name = 'BAAI/Bunny-v1_1-Llama-3-8B-V' # or 'BAAI/Bunny-Llama-3-8B-V' or 'BAAI/Bunny-v1_1-4B' or 'BAAI/Bunny-v1_0-4B' or 'BAAI/Bunny-v1_0-3B' or 'BAAI/Bunny-v1_0-3B-zh' or 'BAAI/Bunny-v1_0-2B-zh'
offset_bos = 1 # for Bunny-v1_1-Llama-3-8B-V, Bunny-Llama-3-8B-V, Bunny-v1_1-4B, Bunny-v1_0-4B and Bunny-v1_0-3B-zh
# offset_bos = 0 for Bunny-v1_0-3B and Bunny-v1_0-2B-zh

# create model
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16, # float32 for cpu
    device_map='auto',
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    trust_remote_code=True)

# text prompt
prompt = 'Why is the image funny?'
text = f"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\n{prompt} ASSISTANT:"
text_chunks = [tokenizer(chunk).input_ids for chunk in text.split('<image>')]
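# -200 is the image-token placeholder index (IMAGE_TOKEN_INDEX in the Bunny source);
# the model replaces it with the encoded image features during the forward pass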
input_ids = torch.tensor(text_chunks[0] + [-200] + text_chunks[1][offset_bos:], dtype=torch.long).unsqueeze(0).to(device)

# image, sample images can be found in https://huggingface.co/BAAI/Bunny-v1_1-Llama-3-8B-V/tree/main/images
image = Image.open('example_2.png')
image_tensor = model.process_images([image], model.config).to(dtype=model.dtype, device=device)

# generate
output_ids = model.generate(
    input_ids,
    images=image_tensor,
    max_new_tokens=100,
    use_cache=True,
    repetition_penalty=1.0 # increase this to reduce repetitive output
)[0]

print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())

ModelScope

We advise users, especially those in mainland China, to use ModelScope. snapshot_download can help you solve issues concerning downloading checkpoints.
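For example, a minimal sketch of pre-downloading a checkpoint into the local ModelScope cache (using the same model id as the snippet below):

from modelscope.hub.snapshot_download import snapshot_download

# Download all files of the checkpoint once; later from_pretrained calls
# reuse the cached copy instead of re-downloading.
local_path = snapshot_download('BAAI/Bunny-Llama-3-8B-V')
print(local_path)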

Expand to see the snippet

Before running the snippet, you need to install the following dependencies:

pip install torch modelscope transformers accelerate pillow

If there is enough CUDA memory, the snippet runs faster if you set CUDA_VISIBLE_DEVICES=0 so that only a single GPU is used.

import torch
import transformers
from modelscope import AutoTokenizer, AutoModelForCausalLM
from modelscope.hub.snapshot_download import snapshot_download
from PIL import Image
import warnings

# disable some warnings
transformers.logging.set_verbosity_error()
transformers.logging.disable_progress_bar()
warnings.filterwarnings('ignore')

# set device
device = 'cuda'  # or cpu
torch.set_default_device(device)

model_name = 'BAAI/Bunny-Llama-3-8B-V' # or 'BAAI/Bunny-v1.0-3B' or 'BAAI/Bunny-v1.0-3B-zh' or 'BAAI/Bunny-v1.0-2B-zh'
offset_bos = 1 # for Bunny-Llama-3-8B-V and Bunny-v1.0-3B-zh
# offset_bos = 0 for Bunny-v1.0-3B and Bunny-v1.0-2B-zh

# create model
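# download the SigLIP vision tower ('thomas/siglip-so400m-patch14-384') from
# ModelScope first so that it is available locally when the model is built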
snapshot_download(model_id='thomas/siglip-so400m-patch14-384')
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16, # float32 for cpu
    device_map='auto',
    trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    trust_remote_code=True)

# text prompt
prompt = 'Why is the image funny?'
text = f"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\n{prompt} ASSISTANT:"
text_chunks = [tokenizer(chunk).input_ids for chunk in text.split('<image>')]
input_ids = torch.tensor(text_chunks[0] + [-200] + text_chunks[1][offset_bos:], dtype=torch.long).unsqueeze(0).to(device)

# image, sample images can be found in images folder on https://www.modelscope.cn/models/BAAI/Bunny-Llama-3-8B-V/files
image = Image.open('example_2.png')
image_tensor = model.process_images([image], model.config).to(dtype=model.dtype, device=device)

# generate
output_ids = model.generate(
    input_ids,
    images=image_tensor,
    max_new_tokens=100,
    use_cache=True,
    repetition_penalty=1.0 # increase this to reduce repetitive output
)[0]

print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())

Model Zoo

Evaluation

| Checkpoint | MME$^\text{P}$ | MME$^\text{C}$ | MMB$^{\text{T}/\text{D}}$ | MMB-CN$^{\text{T}/\text{D}}$ | SEED(-IMG) | MMMU$^{\text{V}/\text{T}}$ | VQA$^\text{v2}$ | GQA | SQA$^\text{I}$ | POPE |
| :-- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| bunny-phi-1.5-eva-lora | 1213.7 | 278.9 | 60.9/56.8 | - | 56.4/64.1 | 30.0/28.4 | 76.5 | 60.4 | 58.2 | 86.1 |