Retrieval-Augmented-Visual-Question-Answering

A visual question answering system based on fine-grained late-interaction multi-modal retrieval

This project develops a visual question answering system based on fine-grained late-interaction multi-modal retrieval. The system achieves state-of-the-art retrieval and question-answering performance on several benchmark datasets, including OK-VQA. It adopts a modular architecture built around key components such as a pre-trained mapping network, the FLMR retriever, and a BLIP2 reader. The project provides a complete codebase supporting training and evaluation, and releases pre-trained models and processed datasets to facilitate follow-up research.

Retrieval-augmented Visual Question Answering with Fine-grained Late-interaction Multi-modal Retrieval


This is the official repository of the Retrieval Augmented Visual Question Answering (RAVQA) project. The project covers RAVQA and RAVQA-v2 (equipped with Fine-grained Late-interaction Multi-modal Retrieval).

🔥🔥 News

  • [10/08/2024] We have received many requests about adding multilingual capabilities to PreFLMR. We announce that we are now training the Chinese version of PreFLMR and will release it very soon. Stay tuned!
  • [05/06/2024] 🔥🔥🔥The PreFLMR paper has been accepted to appear at ACL 2024! The camera-ready version of the paper has been updated here to include more details and analyses. Along with the acceptance, we have made some important updates to make it easier for you to use the model and extend your research:
    • Added an evaluation script that reproduces the results in the PreFLMR paper here
    • Added the updated benchmark results with the transformer implementation here
    • Added an example script to fine-tune PreFLMR on a custom retrieval dataset here
    • IMPORTANT: fixed the OVEN data splits in the M2KR benchmark, and updated each entry with a fixed instruction to ensure the evaluation result is not affected by random sampling of instructions. Please delete your local cache and download the dataset again.
  • [13/04/2024] 🔥 We highlight another valuable, concurrent piece of research on training instruction-following, universal, multi-task multi-modal retrievers: UniIR: Training and Benchmarking Universal Multimodal Information Retrievers, by researchers at the University of Waterloo. They also share the M-Beir benchmark, which can be used to train and evaluate multi-modal universal information retrievers. In the near future, we may collaborate to combine the two benchmarks to facilitate advances in this field.
  • [06/03/2024] 🔥🔥🔥The implementation based on huggingface-transformers is now available here!
  • [20/02/2024] 🔥🔥🔥 The PreFLMR project page has been launched! Explore a captivating demo showcasing PreFLMR_ViT-G, our largest model yet. Additionally, access pre-trained checkpoints and the M2KR benchmark, designed for assessing general-purpose knowledge retrievers. Stay tuned as we will soon upload a huggingface-compatible implementation along with example scripts for indexing and retrieval, providing effortless access via FLMRModelForRetrieval.from_pretrained(...) (a loading sketch follows this news list).
  • [14/02/2024] 🔥Our follow-up work, PreFLMR, is now available here! PreFLMR is a general-purpose retriever pre-trained on more than ten million multi-modal retrieval examples, and it achieves strong performance across a wide range of knowledge-intensive tasks. It can also serve as a strong foundation retrieval model that can be fine-tuned for downstream retrieval tasks. We will release the model through huggingface-transformers very soon, which allows deployment in minutes.
  • [31/01/2024] 🔥We are happy to announce that the training and testing code for FLMR is now released! For the legacy RAVQA-v1 and the code for FVQA, please check out the legacy_v1 branch or the v1.0 tag. We are also preparing a new FLMR implementation for Huggingface-transformers, which will be released as plug-and-play models.🔥
  • [03/10/2023] Our follow-up work "Fine-grained Late-interaction Multi-modal Retrieval for Retrieval Augmented Visual Question Answering" has been accepted to appear at NeurIPS 2023! The paper can be found here. If you prefer a 3-minute technical summary, look at this post. The code will be released in this repository soon. We are happy to announce that we have made a major change to our code framework so that experiment management and data processing are more flexible.
  • [01/05/2023] FVQA 2.0 is released here.
  • [08/02/2023] Our work for creating adversarial samples for the FVQA dataset is accepted to appear at EACL 2023. The dataset and codes will be released here soon.
  • [01/01/2023] We released an initial version of our work. The framework supports:
    • RA-VQA-NoDPR (T5 baseline)
    • RA-VQA-FrDPR (DPR retriever + T5 reader)
    • RA-VQA (joint training of DPR + T5)
    • TRiG (Our replication of TRiG)
    • Datasets: OK-VQA and F-VQA
  • [19/12/2022] We plan to release the code within December 2022. The author is currently overwhelmed by internship work. Thanks for waiting!
  • [12/12/2022] We plan to release the code of our reproduced TRiG system as well.
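
The [20/02/2024] item above mentions loading the retriever via FLMRModelForRetrieval.from_pretrained(...). Below is a minimal loading sketch: the FLMRModelForRetrieval name comes from that announcement, while the flmr package, the tokenizer class names, and the subfolder layout are assumptions based on the dedicated FLMR codebase and may differ in your installed version.

```python
# A minimal sketch, assuming the `flmr` package from the dedicated FLMR codebase.
# The tokenizer classes and subfolder names are assumptions and may differ
# in your installed version.
from flmr import (
    FLMRQueryEncoderTokenizer,
    FLMRContextEncoderTokenizer,
    FLMRModelForRetrieval,
)

checkpoint = "LinWeizheDragon/PreFLMR_ViT-L"
query_tokenizer = FLMRQueryEncoderTokenizer.from_pretrained(
    checkpoint, subfolder="query_tokenizer"
)
context_tokenizer = FLMRContextEncoderTokenizer.from_pretrained(
    checkpoint, subfolder="context_tokenizer"
)

# Passing both tokenizers at load time keeps the retriever self-contained
# for later indexing and searching.
model = FLMRModelForRetrieval.from_pretrained(
    checkpoint,
    query_tokenizer=query_tokenizer,
    context_tokenizer=context_tokenizer,
)
```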

Benchmarks

Benchmark Results for PreFLMR in the dedicated FLMR codebase

| Model | WIT Recall@10 | IGLUE Recall@1 | KVQA Recall@5 | MSMARCO Recall@5 | OVEN Recall@5 | LLaVA Recall@1 | EVQA Recall@5 | EVQA Pseudo Recall@5 | OKVQA Recall@5 | OKVQA Pseudo Recall@5 | Infoseek Recall@5 | Infoseek Pseudo Recall@5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LinWeizheDragon/PreFLMR_ViT-G 🤗 | 0.619 | 0.718 | 0.419 | 0.783 | 0.643 | 0.726 | 0.625 | 0.721 | 0.302 | 0.674 | 0.392 | 0.577 |
| LinWeizheDragon/PreFLMR_ViT-L 🤗 | 0.605 | 0.699 | 0.440 | 0.779 | 0.608 | 0.729 | 0.609 | 0.708 | 0.314 | 0.690 | 0.374 | 0.578 |
| LinWeizheDragon/PreFLMR_ViT-B 🤗 | 0.427 | 0.574 | 0.294 | 0.786 | 0.468 | 0.673 | 0.550 | 0.663 | 0.272 | 0.658 | 0.260 | 0.496 |

Note: We converted the checkpoints from PyTorch to Huggingface-transformers, and the benchmark results of the converted checkpoints differ slightly from the numbers reported in the original paper. You can reproduce the numbers in the paper by following the instructions in this document.

Benchmark Results for FLMR in this codebase

With the provided codebase, you should be able to obtain the following results.

| Model | Recall@5 | Notes |
|---|---|---|
| FLMR (9 ROIs) | 89.20 | |
| FLMR (9 ROIs) | 89.28 | Using the pretrained ckpt |

| Model | VQA Score | Notes |
|---|---|---|
| RA-VQA | 54.51 | In the previous paper |
| RA-VQA-v2 | 61.86 | with FLMR |

Since we refactored the codebase significantly during clean-up, these numbers may not exactly match those reported in the paper.

Resources

We host the data required to run this system on Huggingface and on Baidu Cloud (coming soon).

The data contains:

  • Packed pre-extracted data for OK-VQA (including OCR features, VinVL object detection features, Oscar captioning features)
  • FLMR with the mapping network pretrained on WIT (batch size 30, in-batch negative sampling, 1 GPU, grad accumulation 4)
  • FLMR pretrained on OK-VQA and Google Search dataset (batch size 30, in-batch negative sampling, 1 GPU, grad accumulation 4)

You can download these resources from Huggingface altogether: Combined Download on Huggingface.

```bash
# Save under a clean filename (otherwise the query string ends up in the filename), then extract.
wget -O RAVQA_v2_data.tar.gz "https://huggingface.co/datasets/BByrneLab/RAVQAV2Data/resolve/main/RAVQA_v2_data.tar.gz?download=true"
tar -xzvf RAVQA_v2_data.tar.gz
```

After downloading and extracting the tar.gz, you need to unzip all .zip files under the okvqa folder, including okvqa/pre-extracted/OCR.zip.
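
If you prefer Python over wget, a sketch using the official huggingface_hub client should work as well; the repo id and filename are taken from the download URL above.

```python
# A sketch using the official huggingface_hub client; the repo id and
# filename are taken from the wget URL above.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="BByrneLab/RAVQAV2Data",
    filename="RAVQA_v2_data.tar.gz",
    repo_type="dataset",
)
print(local_path)  # the archive lands in the local Huggingface cache
```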

After obtaining all these resources, you should:

  • Change the data paths in configs/okvqa/okvqa_data_config.libsonnet
  • Change the paths to TokenizerModelVersion in configs/okvqa/FLMR_with_ROI.jsonnet
  • Change the paths to EncoderModelVersion and TokenizerModelVersion in configs/okvqa/FLMR_base_preload_vision_features.jsonnet

By downloading the provided OK-VQA data, you agree to comply with the OK-VQA license and the MS COCO license.

Detailed Instructions

Overview

The framework was designed and implemented by Weizhe Lin, University of Cambridge. All rights reserved. Use for research purposes is allowed. The framework is designed for research, with flexibility for extension; it is, of course, not a production-ready framework.

Training and testing are backed by pytorch-lightning. The pre-trained Transformer models come from Huggingface-transformers. The training platform is PyTorch.

In this release, we designed a new framework, Runway For ML, that wraps the data processing, training, and testing utilities. It is a highly efficient framework that enables flexible experimentation and data processing. Data processing is formulated as a Directed Acyclic Graph (DAG), whose nodes the framework traverses to prepare data. This enables efficient data processing at the million scale. For more details, please refer to the README of the framework. When cloning this repository, please use the kbvqa_dev branch.
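
To illustrate the idea (a schematic sketch only; the names here are not Runway For ML's actual API), a data-processing DAG can be traversed depth-first with caching, so each node is computed once even when many downstream steps depend on it:

```python
# Schematic illustration of DAG-style data processing (not Runway For ML's
# actual API): nodes declare dependencies, and the pipeline resolves them
# depth-first, caching results so each node runs once.
from typing import Callable, Dict, List, Tuple

class DataPipeline:
    def __init__(self) -> None:
        self.nodes: Dict[str, Tuple[List[str], Callable]] = {}
        self.cache: Dict[str, object] = {}

    def register(self, name: str, deps: List[str], fn: Callable) -> None:
        self.nodes[name] = (deps, fn)

    def run(self, name: str):
        if name in self.cache:              # reuse already-computed nodes
            return self.cache[name]
        deps, fn = self.nodes[name]
        self.cache[name] = fn(*[self.run(d) for d in deps])
        return self.cache[name]

pipeline = DataPipeline()
pipeline.register("load", [], lambda: ["what is shown?", "who made this?"])
pipeline.register("tokenize", ["load"], lambda qs: [q.split() for q in qs])
print(pipeline.run("tokenize"))
```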

Indexing and searching for FLMR are supported by FAISS and ColBERT. The ColBERT engine is plugged into this project as a third-party package; we fixed many errors in this package following LI-RAGE.
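
For intuition, below is a minimal sketch of the ColBERT-style late-interaction (MaxSim) scoring rule that such retrievers compute: for each query token embedding, take the maximum similarity over all document token embeddings, then sum over query tokens. This illustrates the scoring rule only, not this repo's implementation.

```python
# Minimal sketch of ColBERT-style late interaction (MaxSim), for intuition only.
import torch

def late_interaction_score(query_embeds: torch.Tensor,
                           doc_embeds: torch.Tensor) -> torch.Tensor:
    # query_embeds: (num_query_tokens, dim); doc_embeds: (num_doc_tokens, dim)
    sim = query_embeds @ doc_embeds.T       # token-level similarity matrix
    return sim.max(dim=1).values.sum()      # MaxSim per query token, then sum

# Toy example with L2-normalised random embeddings.
q = torch.nn.functional.normalize(torch.randn(32, 128), dim=-1)
d = torch.nn.functional.normalize(torch.randn(200, 128), dim=-1)
print(late_interaction_score(q, d))
```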

Structure

The framework consists of:

  1. main.py: the main program. It loads a config file and overrides some entries with command-line arguments. It initialises a RunwayExperiment instance to execute training and testing.
  2. Data Ops: loads the data according to the configs specified in data_pipeline. The details of this feature can be found here.
  3. Datasets: automatically loaded by the data loader wrapper. .collate_fn is defined to collate the data. A decorator class, ModuleParser, is used to help generate the training inputs; it generates the input dict according to the configs (config.model_config.input_modules/decoder_input_modules/output_modules). A hypothetical sketch follows this list.
  4. Models
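
As noted in item 3, here is a hypothetical sketch of how a ModuleParser-style decorator can assemble the input dict from configured input modules; all names below are illustrative, not the framework's actual API:

```python
# Hypothetical sketch: all names here are illustrative, not the framework's
# actual API. Configured input modules are applied to a sample in order and
# joined into the final model input dict.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ModuleConfig:
    type: str                                  # e.g. "QuestionInput"
    option: Dict = field(default_factory=dict) # module-specific options

def build_input(sample: Dict, input_modules: List[ModuleConfig],
                registry: Dict[str, Callable]) -> Dict:
    parts = [registry[m.type](sample, **m.option) for m in input_modules]
    return {"input_text": " ".join(parts)}

registry = {
    "QuestionInput": lambda s, **kw: f"question: {s['question']}",
    "CaptionInput": lambda s, **kw: f"caption: {s['caption']}",
}

sample = {"question": "What sport is shown?", "caption": "a man riding a wave"}
modules = [ModuleConfig("QuestionInput"), ModuleConfig("CaptionInput")]
print(build_input(sample, modules, registry))
# -> {'input_text': 'question: What sport is shown? caption: a man riding a wave'}
```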