Retrieval-augmented Visual Question Answering with Fine-grained Late-interaction Multi-modal Retrieval
This is the official repository of the Retrieval Augmented Visual Question Answering (RAVQA) project. The project covers RAVQA and RAVQA-v2 (equipped with Fine-grained Late-interaction Multi-modal Retrieval).
🔥🔥News
- [10/08/2024] We received many requests regarding adding multilingual abilities to PreFLMR. We announce that we are now training the Chinese version of PreFLMR and will release it very soon. Stay tuned!
- [05/06/2024] 🔥🔥🔥The PreFLMR paper has been accepted to appear at ACL 2024! The camera-ready version of the paper has been updated here to include more details and analyses. Along with the acceptance, we have made some important updates to help you use the model and extend your research more easily:
- Added an evaluation script that reproduces the results in the PreFLMR paper here
- Added the updated benchmark results with the transformer implementation here
- Added an example script to fine-tune PreFLMR on a custom retrieval dataset here
- IMPORTANT: fixed the OVEN data splits in the M2KR benchmark, and updated each entry with a fixed instruction to ensure the evaluation result is not affected by random sampling of instructions. Please delete your local cache and download the dataset again.
- [13/04/2024] 🔥 We highlight another valuable concurrent work on training instruction-following, universal, multi-task multi-modal retrievers: UniIR: Training and Benchmarking Universal Multimodal Information Retrievers, by researchers at the University of Waterloo. They also released the M-BEIR benchmark, which can be used to train and evaluate universal multi-modal information retrievers. In the near future, we may collaborate to combine the two benchmarks to facilitate progress in this field.
- [06/03/2024] 🔥🔥🔥The implementation based on huggingface-transformers is now available here!
- [20/02/2024] 🔥🔥🔥 The PreFLMR project page has been launched! Explore a captivating demo showcasing PreFLMR_ViT-G, our largest model yet. Additionally, access pre-trained checkpoints and the M2KR benchmark, designed for assessing general-purpose knowledge retrievers. Stay tuned as we will soon upload a huggingface-compatible implementation along with example scripts for indexing and retrieval, providing effortless access via `FLMRModelForRetrieval.from_pretrained(...)`.
- [14/02/2024] 🔥Our follow-up work, PreFLMR, is now available here! PreFLMR is a general-purpose retriever pre-trained on more than ten million multi-modal retrieval examples, achieving strong performance across a wide range of knowledge-intensive tasks. It can also serve as a strong foundation retrieval model that can be fine-tuned for downstream retrieval tasks. We will release the model through huggingface-transformers very soon, allowing deployment in minutes.
- [31/01/2024] 🔥We are happy to announce that the training and testing code for FLMR is now released! For the legacy RAVQA-v1 and the code for FVQA, please check out the `legacy_v1` branch or the tag `v1.0`. We are also preparing a new FLMR implementation for Huggingface transformers, which will be released as plug-and-play models.🔥
- [03/10/2023] Our follow-up work "Fine-grained Late-interaction Multi-modal Retrieval for Retrieval Augmented Visual Question Answering" has been accepted to appear at NeurIPS 2023! The paper can be found here. If you prefer a 3-minute technical summary, look at this post. The code will be released in this repository soon. We are happy to announce that we have made a major change to our code framework so that experiment management and data processing are more flexible.
- [01/05/2023] FVQA 2.0 is released here.
- [08/02/2023] Our work for creating adversarial samples for the FVQA dataset is accepted to appear at EACL 2023. The dataset and codes will be released here soon.
- [01/01/2023] We released an initial version of our work. The framework supports:
- RA-VQA-NoDPR (T5 baseline)
- RA-VQA-FrDPR (DPR retriever + T5 reader)
- RA-VQA (joint training of DPR + T5)
- TRiG (Our replication of TRiG)
- Datasets: OK-VQA and F-VQA
- [19/12/2022] We plan to release the code within December 2022. The author is currently overwhelmed by internship work. Thanks for waiting!
- [12/12/2022] We plan to release the code of our reproduced TRiG system as well.
Table of Contents
- Retrieval-augmented Visual Question Answering with Fine-grained Late-interaction Multi-modal Retrieval
- 🔥🔥News
- Benchmarks
- Resources
- Detailed Instructions
- Some Notes
- Citation
Benchmarks
Benchmark Results for PreFLMR in the dedicated FLMR codebase
Model | WIT Recall@10 | IGLUE Recall@1 | KVQA Recall@5 | MSMARCO Recall@5 | OVEN Recall@5 | LLaVA Recall@1 | EVQA Recall@5 | EVQA Pseudo Recall@5 | OKVQA Recall@5 | OKVQA Pseudo Recall@5 | Infoseek Recall@5 | Infoseek Pseudo Recall@5 |
---|---|---|---|---|---|---|---|---|---|---|---|---|
LinWeizheDragon/PreFLMR_ViT-G🤗 | 0.619 | 0.718 | 0.419 | 0.783 | 0.643 | 0.726 | 0.625 | 0.721 | 0.302 | 0.674 | 0.392 | 0.577 |
LinWeizheDragon/PreFLMR_ViT-L🤗 | 0.605 | 0.699 | 0.440 | 0.779 | 0.608 | 0.729 | 0.609 | 0.708 | 0.314 | 0.690 | 0.374 | 0.578 |
LinWeizheDragon/PreFLMR_ViT-B🤗 | 0.427 | 0.574 | 0.294 | 0.786 | 0.468 | 0.673 | 0.550 | 0.663 | 0.272 | 0.658 | 0.260 | 0.496 |
Note: We converted the checkpoints from PyTorch to Huggingface-transformers; the benchmark results differ slightly from the numbers reported in the original paper. You can reproduce the results in the paper by following the instructions in this document.
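For reference, a minimal loading sketch for the converted checkpoints is given below. It assumes the `flmr` package provided with this repository is installed and that the converted checkpoints ship separate query/context tokenizers in subfolders; the exact class names and arguments should be checked against the evaluation script linked above.

```python
# Minimal sketch: load a converted PreFLMR checkpoint with the
# huggingface-transformers implementation. The tokenizer class names and
# the `subfolder` arguments are assumptions based on the converted
# checkpoint layout; refer to the evaluation script for the exact usage.
from flmr import (
    FLMRModelForRetrieval,
    FLMRQueryEncoderTokenizer,
    FLMRContextEncoderTokenizer,
)

checkpoint = "LinWeizheDragon/PreFLMR_ViT-L"

query_tokenizer = FLMRQueryEncoderTokenizer.from_pretrained(
    checkpoint, subfolder="query_tokenizer"
)
context_tokenizer = FLMRContextEncoderTokenizer.from_pretrained(
    checkpoint, subfolder="context_tokenizer"
)
model = FLMRModelForRetrieval.from_pretrained(
    checkpoint,
    query_tokenizer=query_tokenizer,
    context_tokenizer=context_tokenizer,
)
```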
Benchmark Results for FLMR in this codebase
Using the provided codebase, you can expect to obtain the following results.
Model | Recall@5 | Notes |
---|---|---|
FLMR (9 ROIs) | 89.20 | |
FLMR (9 ROIs) | 89.28 | Using the pretrained ckpt |
Model | VQA Score | Notes |
---|---|---|
RA-VQA | 54.51 | In the previous paper |
RA-VQA-v2 | 61.86 | with FLMR |
Since we significantly refactored the codebase during clean-up, these numbers may not exactly match those reported in the paper.
Resources
We host the data required for running this system on Huggingface and Baidu Cloud (coming soon).
The data contains:
- Packed pre-extracted data for OK-VQA (including OCR features, VinVL object detection features, Oscar captioning features)
- FLMR with the mapping network pretrained on WIT (batch size 30, in-batch negative sampling, 1 GPU, grad accumulation 4)
- FLMR pretrained on OK-VQA and Google Search dataset (batch size 30, in-batch negative sampling, 1 GPU, grad accumulation 4)
You can download all of these resources at once from Huggingface: Combined Download on Huggingface.
wget https://huggingface.co/datasets/BByrneLab/RAVQAV2Data/resolve/main/RAVQA_v2_data.tar.gz?download=true
After downloading and extracting the `tar.gz`, you need to unzip all `.zip` files under the `okvqa` folder as well as `okvqa/pre-extracted/OCR.zip`.
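If you prefer to script this step, the sketch below uses only the Python standard library; it assumes the archive was saved as `RAVQA_v2_data.tar.gz` in the current directory and that it unpacks an `okvqa` folder containing the nested `.zip` files.

```python
# Minimal sketch using only the standard library. Assumes the archive was
# saved as RAVQA_v2_data.tar.gz and that it unpacks an okvqa/ folder
# containing the nested .zip files (including pre-extracted/OCR.zip).
import tarfile
import zipfile
from pathlib import Path

archive = Path("RAVQA_v2_data.tar.gz")
extract_root = Path(".")

# Extract the top-level tarball.
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(extract_root)

# Unzip every .zip under the okvqa folder, next to each archive.
for zip_path in (extract_root / "okvqa").rglob("*.zip"):
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(zip_path.parent)
```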
After obtaining all these resources, you should:
- Change the data paths in `configs/okvqa/okvqa_data_config.libsonnet`
- Change the path to `TokenizerModelVersion` in `configs/okvqa/FLMR_with_ROI.jsonnet`
- Change the paths to `EncoderModelVersion` and `TokenizerModelVersion` in `configs/okvqa/FLMR_base_preload_vision_features.jsonnet`
By downloading the provided OK-VQA data, you agree to comply with the OK-VQA license and the MS COCO license.
Detailed Instructions
Overview
The framework was designed and implemented by Weizhe Lin, University of Cambridge. All rights reserved. Use for research purposes is allowed. The framework is designed for research, with flexibility for extension; it is not intended for production use.
Training and testing are built on pytorch-lightning. The pre-trained Transformer models come from Huggingface-transformers. The training platform is PyTorch.
In this release, we designed a new framework that wraps the data processing/training/testing utilities: Runway For ML. It is a highly efficient framework that enables flexible experimentation and data processing. Data processing is formulated as a Directed Acyclic Graph (DAG), and the framework traverses the graph's nodes to prepare data. This enables efficient data processing at the scale of millions of examples. For more details, please refer to the README of the framework.
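As a rough illustration of the idea only (this is not the Runway For ML API), a DAG-style pipeline computes each node after its dependencies and caches intermediate results so they are not recomputed:

```python
# Conceptual sketch (not the Runway For ML API): data processing as a small
# Directed Acyclic Graph, where each node runs after its dependencies and
# results are cached so repeated runs skip finished nodes.
from typing import Callable, Dict, List

class DataNode:
    def __init__(self, name: str, fn: Callable[..., object], deps: List[str] = ()):
        self.name, self.fn, self.deps = name, fn, list(deps)

def run_dag(nodes: Dict[str, DataNode], target: str, cache: Dict[str, object]):
    """Recursively resolve the dependencies of `target`, caching each result."""
    if target in cache:
        return cache[target]
    node = nodes[target]
    inputs = [run_dag(nodes, dep, cache) for dep in node.deps]
    cache[target] = node.fn(*inputs)
    return cache[target]

# Hypothetical example: load annotations -> extract features -> build examples.
nodes = {
    "load_annotations": DataNode("load_annotations", lambda: [{"q": "What is this?"}]),
    "extract_features": DataNode(
        "extract_features",
        lambda anns: [{"feat": 0.0, **a} for a in anns],
        deps=["load_annotations"],
    ),
    "build_examples": DataNode("build_examples", lambda feats: feats, deps=["extract_features"]),
}
examples = run_dag(nodes, "build_examples", cache={})
```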
When cloning this repository, please use the `kbvqa_dev` branch.
Indexing and searching in FLMR are supported by FAISS and ColBERT. The ColBERT engine is plugged into this project as a third-party package; we fixed many errors in this package following LI-RAGE.
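For intuition, late interaction scores a query against a document by matching each query token embedding to its most similar document token embedding (MaxSim) and summing over query tokens. The sketch below illustrates this scoring rule with plain PyTorch on random embeddings; it is not the ColBERT/FLMR code used in this repository.

```python
# Illustrative sketch of late-interaction (MaxSim) scoring, not the actual
# ColBERT/FLMR implementation. Token embeddings are L2-normalised so dot
# products correspond to cosine similarities.
import torch

def late_interaction_score(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    """
    query_emb: (num_query_tokens, dim)
    doc_emb:   (num_doc_tokens, dim)
    Returns a scalar relevance score: for each query token, take the maximum
    similarity to any document token, then sum over query tokens.
    """
    sim = query_emb @ doc_emb.T          # (num_query_tokens, num_doc_tokens)
    return sim.max(dim=1).values.sum()   # MaxSim per query token, then sum

# Toy example with random normalised embeddings.
torch.manual_seed(0)
q = torch.nn.functional.normalize(torch.randn(32, 128), dim=-1)   # query tokens
d = torch.nn.functional.normalize(torch.randn(200, 128), dim=-1)  # document tokens
print(late_interaction_score(q, d))
```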
Structure
The framework consists of:
- `main.py`: the main program. It loads a config file and overrides some entries with command-line arguments. It initialises a `RunwayExperiment` instance to execute training and testing.
- Data Ops: loads the data according to the configs specified in `data_pipeline`. The details of this feature can be found here.
- Datasets: automatically loaded by the data loader wrapper. `.collate_fn` is defined to collate the data. A decorator class `ModuleParser` is used to help generate the training inputs. This decorator class generates the input dict according to the configs (`config.model_config.input_modules/decoder_input_modules/output_modules`).
- **Model