Retrieval-Augmented-Visual-Question-Answering

A visual question answering system with fine-grained late-interaction multi-modal retrieval

This project develops a visual question answering system based on fine-grained late-interaction multi-modal retrieval. The system achieves state-of-the-art retrieval and question-answering performance on OK-VQA and several other benchmark datasets. It uses a modular architecture whose key components include a pre-trained mapping network, the FLMR retriever, and a BLIP2 reader. The project provides a complete codebase supporting training and evaluation, and releases pre-trained models and processed datasets to facilitate follow-up research.


Retrieval-augmented Visual Question Answering with Fine-grained Late-interaction Multi-modal Retrieval


This is the official repository of the Retrieval Augmented Visual Question Answering (RAVQA) project. The project covers RAVQA and RAVQA-v2 (equipped with Fine-grained Late-interaction Multi-modal Retrieval).

🔥🔥News

  • [10/08/2024] We received many requests regarding adding multilingual abilities to PreFLMR. We announce that we are now training the Chinese version of PreFLMR and will release it very soon. Stay tuned!
  • [05/06/2024] 🔥🔥🔥The PreFLMR paper has been accepted to appear at ACL 2024! The camera-ready version of the paper has been updated here to include more details and analyses. Along with the acceptance, we have made some important updates to help you use the model and extend your research more easily:
    • Added an evaluation script that reproduces the results in the PreFLMR paper here
    • Added the updated benchmark results with the transformer implementation here
    • Added an example script to fine-tune PreFLMR on a custom retrieval dataset here
    • IMPORTANT: fixed the OVEN data splits in the M2KR benchmark, and updated each entry with a fixed instruction to ensure the evaluation result is not affected by random sampling of instructions. Please delete your local cache and download the dataset again.
  • [13/04/2024] 🔥 We highlight another valuable, concurrent piece of research on training instruction-following, universal, multi-task multi-modal retrievers: UniIR: Training and Benchmarking Universal Multimodal Information Retrievers, carried out by researchers at the University of Waterloo. They also shared the M-Beir benchmark, which can be used to train and evaluate multi-modal universal information retrievers. In the near future, we may collaborate to combine the two benchmarks to facilitate progress in this field.
  • [06/03/2024] 🔥🔥🔥The implementation based on huggingface-transformers is now available here!
  • [20/02/2024] 🔥🔥🔥 The PreFLMR project page has been launched! Explore a captivating demo showcasing PreFLMR_ViT-G, our largest model yet. Additionally, access pre-trained checkpoints and the M2KR benchmark, designed for assessing general-purpose knowledge retrievers. Stay tuned as we will soon upload a huggingface-compatible implementation along with example scripts for indexing and retrieval, providing effortless access via FLMRModelForRetrieval.from_pretrained(...).
  • [14/02/2024] 🔥Our follow-up work, PreFLMR, is now available here! PreFLMR is a general-purpose retriever that was pre-trained on more than ten million multi-modal retrieval examples and achieves strong performance across a wide range of knowledge-intensive tasks. It can also serve as a strong foundation retrieval model that can be fine-tuned for any downstream retrieval task. We will release the model through huggingface-transformers very soon, which allows quick deployment in minutes.
  • [31/01/2024] 🔥We are happy to announce that the training and testing code for FLMR is now released! For the legacy RAVQA-v1 and the code for FVQA, please check out the legacy_v1 branch or the v1.0 tag. We are also preparing a new FLMR implementation for Huggingface transformers, which will be released as plug-and-play models.🔥
  • [03/10/2023] Our follow-up work "Fine-grained Late-interaction Multi-modal Retrieval for Retrieval Augmented Visual Question Answering" has been accepted to appear at NeurIPS 2023! The paper can be found here. If you prefer a 3-minute technical summary, take a look at this post. The code will be released in this repository soon. We are happy to announce that we have made a major change to our code framework so that experiment management and data processing are more flexible.
  • [01/05/2023] FVQA 2.0 is released here.
  • [08/02/2023] Our work for creating adversarial samples for the FVQA dataset is accepted to appear at EACL 2023. The dataset and codes will be released here soon.
  • [01/01/2023] We released an initial version of our work. The framework supports:
    • RA-VQA-NoDPR (T5 baseline)
    • RA-VQA-FrDPR (DPR retriever + T5 reader)
    • RA-VQA (joint training of DPR + T5)
    • TRiG (Our replication of TRiG)
    • Datasets: OK-VQA and F-VQA
  • [19/12/2022] We plan to release the code within December 2022. The author is currently overwhelmed by internship work. Thanks for waiting!
  • [12/12/2022] We plan to release the code of our reproduced TRiG system as well.

Table of Contents

Benchmarks

Benchmark Results for PreFLMR in the dedicated FLMR codebase

| Model | WIT Recall@10 | IGLUE Recall@1 | KVQA Recall@5 | MSMARCO Recall@5 | OVEN Recall@5 | LLaVA Recall@1 | EVQA Recall@5 | EVQA Pseudo Recall@5 | OKVQA Recall@5 | OKVQA Pseudo Recall@5 | Infoseek Recall@5 | Infoseek Pseudo Recall@5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LinWeizheDragon/PreFLMR_ViT-G 🤗 | 0.619 | 0.718 | 0.419 | 0.783 | 0.643 | 0.726 | 0.625 | 0.721 | 0.302 | 0.674 | 0.392 | 0.577 |
| LinWeizheDragon/PreFLMR_ViT-L 🤗 | 0.605 | 0.699 | 0.440 | 0.779 | 0.608 | 0.729 | 0.609 | 0.708 | 0.314 | 0.690 | 0.374 | 0.578 |
| LinWeizheDragon/PreFLMR_ViT-B 🤗 | 0.427 | 0.574 | 0.294 | 0.786 | 0.468 | 0.673 | 0.550 | 0.663 | 0.272 | 0.658 | 0.260 | 0.496 |

Note: We converted the checkpoints from PyTorch to Huggingface-transformers; their benchmark results differ slightly from the numbers reported in the original paper. You can reproduce the results reported in the paper by following the instructions in this document.
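For reference, below is a minimal sketch of loading one of the checkpoints above with the transformers-based FLMR implementation. The `flmr` package name, the tokenizer classes, and the `subfolder` layout are assumptions based on the FLMR repository and may differ from the exact released API; please follow the linked instructions for authoritative usage.

```python
# Minimal sketch: load a PreFLMR checkpoint with the transformers-based FLMR
# implementation. The `flmr` package, tokenizer class names, and subfolder
# layout are assumptions; consult the FLMR repository for the exact API.
from flmr import (
    FLMRModelForRetrieval,
    FLMRQueryEncoderTokenizer,
    FLMRContextEncoderTokenizer,
)

checkpoint = "LinWeizheDragon/PreFLMR_ViT-L"

query_tokenizer = FLMRQueryEncoderTokenizer.from_pretrained(
    checkpoint, subfolder="query_tokenizer"
)
context_tokenizer = FLMRContextEncoderTokenizer.from_pretrained(
    checkpoint, subfolder="context_tokenizer"
)
model = FLMRModelForRetrieval.from_pretrained(
    checkpoint,
    query_tokenizer=query_tokenizer,
    context_tokenizer=context_tokenizer,
)
```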

Benchmark Results for FLMR in this codebase

Using the provided codebase, you should be able to reproduce the following results.

| Model | Recall@5 | Notes |
| --- | --- | --- |
| FLMR (9 ROIs) | 89.20 | |
| FLMR (9 ROIs) | 89.28 | Using the pretrained ckpt |

| Model | VQA Score | Notes |
| --- | --- | --- |
| RA-VQA | 54.51 | In the previous paper |
| RA-VQA-v2 | 61.86 | with FLMR |

Since we refactored the codebase significantly during clean-up, these numbers may not exactly match those reported in the paper.

Resources

We host the data required for running this system on Huggingface and Baidu Cloud (coming soon).

The data contains:

  • Packed pre-extracted data for OK-VQA (including OCR features, VinVL object detection features, Oscar captioning features)
  • FLMR with the mapping network pretrained on WIT (batch size 30, in-batch negative sampling, 1 GPU, grad accumulation 4)
  • FLMR pretrained on OK-VQA and Google Search dataset (batch size 30, in-batch negative sampling, 1 GPU, grad accumulation 4)

You can download all of these resources from Huggingface in one go: Combined Download on Huggingface.

```bash
wget https://huggingface.co/datasets/BByrneLab/RAVQAV2Data/resolve/main/RAVQA_v2_data.tar.gz?download=true
```

After downloading and extracting the tar.gz, you need to unzip all .zip files under the okvqa folder, including okvqa/pre-extracted/OCR.zip.
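The following is a minimal Python sketch of that extraction step; the archive name and target directory are assumptions, so adjust them to match your download location.

```python
# Minimal sketch of the extraction step described above. The archive name and
# target directory are assumptions; adjust them to your download location.
import glob
import os
import tarfile
import zipfile

data_root = "data"  # hypothetical target directory
os.makedirs(data_root, exist_ok=True)

# 1. Extract the combined archive.
with tarfile.open("RAVQA_v2_data.tar.gz", "r:gz") as tar:
    tar.extractall(data_root)

# 2. Unzip every .zip under the okvqa folder (including pre-extracted/OCR.zip).
for zip_path in glob.glob(os.path.join(data_root, "okvqa", "**", "*.zip"), recursive=True):
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(os.path.dirname(zip_path))
```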

After obtaining all these resources, you should:

  • Change the data paths in configs/okvqa/okvqa_data_config.libsonnet
  • Change the paths to TokenizerModelVersion in configs/okvqa/FLMR_with_ROI.jsonnet
  • Change the paths to EncoderModelVersion and TokenizerModelVersion in configs/okvqa/FLMR_base_preload_vision_features.jsonnet

By downloading the provided OK-VQA data, you agree to comply with the OK-VQA license and the MS COCO license.

Detailed Instructions

Overview

The framework was designed and implemented by Weizhe Lin, University of Cambridge. All rights are reserved. Use for research purposes is allowed. The framework is designed for research, with flexibility for extension; it is not, of course, a production-ready framework.

Training and testing are built on pytorch-lightning. The pre-trained Transformer models come from Huggingface-transformers. The training platform is PyTorch.

In this release, we designed a new framework that wraps the data processing/training/testing utilities - Runway For ML. It is a highly efficient framework that enables flexible experimentation and data processing. Data processing is formulated as a Directed Acyclic Graph (DAG), which the framework traverses to prepare data. This framework enables efficient data processing at the million-example scale. For more details, please refer to the README of the framework. When cloning this repository, please use the kbvqa_dev branch.

Indexing and searching in FLMR are supported by FAISS and ColBERT. The ColBERT engine is plugged into this project as a third-party package. We fixed many errors in this package, following LI-RAGE.
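To illustrate the late-interaction idea that this retrieval stack relies on (not the project's actual implementation), here is a minimal PyTorch sketch of ColBERT-style MaxSim scoring; the embedding dimension and token counts are arbitrary placeholders.

```python
# Illustrative sketch of late-interaction (MaxSim) scoring, as used by
# ColBERT-style retrievers such as FLMR. Shapes and dimensions are arbitrary;
# this is not the project's actual retrieval code.
import torch

def late_interaction_score(query_embs: torch.Tensor, doc_embs: torch.Tensor) -> torch.Tensor:
    """query_embs: (num_query_tokens, dim); doc_embs: (num_doc_tokens, dim).
    Returns a scalar relevance score: for each query token, take the maximum
    similarity over all document tokens, then sum over query tokens."""
    q = torch.nn.functional.normalize(query_embs, dim=-1)  # cosine similarity
    d = torch.nn.functional.normalize(doc_embs, dim=-1)
    sim = q @ d.T                        # (num_query_tokens, num_doc_tokens)
    return sim.max(dim=-1).values.sum()  # MaxSim, summed over query tokens

# Example: score one 32-token query against two candidate documents.
query = torch.randn(32, 128)
docs = [torch.randn(180, 128), torch.randn(220, 128)]
scores = torch.stack([late_interaction_score(query, d) for d in docs])
print(int(scores.argmax()))  # index of the best-scoring document
```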

Structure

The framework consists of:

  1. main.py: the main program. It loads a config file and overrides some entries with command-line arguments. It initialises a RunwayExperiment instance to execute training and testing.
  2. Data Ops: loads the data according to the configs specified in data_pipeline. The details of this feature can be found here.
  3. Datasets: they are automatically loaded by the data loader wrapper. .collate_fn is defined to collate the data. A decorator class, ModuleParser, is used to help generate the training inputs. This decorator class generates the input dict according to configs (config.model_config.input_modules/decoder_input_modules/output_modules).
  4. Models
