CVinW Readings
``Computer Vision in the Wild (CVinW)'' is an emerging research field. This writeup provides a quick introduction to CVinW and maintains a collection of papers on the topic. If you find missing papers or resources, please open an issue or a pull request (recommended).
Table of Contents
- What is Computer Vision in the Wild (CVinW)?
- Papers on Task-level Transfer with Pre-trained Models
- Papers on Efficient Model Adaptation
- Papers on Out-of-domain Generalization
- Acknowledgements
What is Computer Vision in the Wild?
:star: Goals of CVinW
Developing a transferable foundation model/system that can effortlessly adapt to a wide range of visual tasks in the wild. It comes with two key factors: (i) the task transfer scenarios are broad, and (ii) the task transfer cost is low. The main idea is illustrated below; please see the detailed description in the ELEVATER paper.
:one: Task Transfer Scenarios are Broad
We illustrate and compare CVinW with other settings using a 2D chart in Figure 1, where the space is constructed with two orthogonal dimensions: input image distribution and output concept set. The 2D chart is divided into four quadrants, based on how the model evaluation stage differs from the model development stage. For visual recognition problems at any granularity, such as image classification, object detection and segmentation, the modeling setup can be categorized into one of the four settings. We see an emerging trend of moving towards CVinW. Interested in the various pre-trained vision models that move towards CVinW? Please check out Section :fire:``Papers on Task-level Transfer with Pre-trained Models''.
Figure 1: The comparison of CVinW with other existing settings (a brief definition with a four-quadrant chart).
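To make the ``open output concept set'' dimension concrete, below is a minimal sketch of zero-shot (open-vocabulary) image classification with a CLIP-style model. It assumes the open_clip package (see the OpenCLIP entry below) and a local image file `example.jpg`; the model name, weight tag, and class prompts are illustrative choices, not part of any benchmark protocol.

```python
# Minimal zero-shot classification sketch with a CLIP-style model.
# Assumes the open_clip package; "ViT-B-32"/"openai" and example.jpg are illustrative.
import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

# The label space is defined at evaluation time as free-form text prompts,
# so the same pre-trained model transfers to new concept sets without re-training.
prompts = ["a photo of a cat", "a photo of a dog", "a photo of a traffic light"]
image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(prompts)

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(text)
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_feat @ text_feat.T).softmax(dim=-1)

print(dict(zip(prompts, probs.squeeze(0).tolist())))
```

Changing the downstream task amounts to editing the prompt list, which is what moves a recognition problem towards the open-concept-set side of Figure 1.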
:two: Task Transfer Cost is Low
One major advantage of pre-trained models is the promise that they can transfer to downstream tasks effortlessly. The model adaptation cost is considered along two orthogonal dimensions: sample-efficiency and parameter-efficiency, as illustrated in Figure 2. The bottom-left corner and the top-right corner are the cheapest and the most expensive adaptation strategies, respectively. One may interpolate and combine within this 2D space to obtain model adaptation methods with different costs. To efficiently adapt large vision models of gradually increasing size, we see an emerging need for efficient model adaptation. Interested in contributing your efficient adaptation algorithms and seeing how they differ from existing papers? Please check out Section :snowflake:``Papers on Efficient Model Adaptation''.
Figure 2: The 2D chart of model adaptation cost (a breakdown definition of efficient model adaptation).
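As a concrete anchor for the parameter-efficiency axis, the sketch below contrasts the cheap corner (linear probing: freeze the backbone, train only a small head) with the expensive corner (full fine-tuning). The `toy_backbone`, feature dimension, and class count are placeholders for illustration; this is not the recipe of any specific paper in this list.

```python
# Sketch of the two corners of the parameter-efficiency axis in Figure 2.
# `toy_backbone`, feature_dim and num_classes are placeholders, not a real model.
import torch
import torch.nn as nn

def linear_probe(backbone: nn.Module, feature_dim: int, num_classes: int) -> nn.Module:
    """Cheap corner: freeze all backbone weights, train only a linear head."""
    for p in backbone.parameters():
        p.requires_grad = False
    return nn.Sequential(backbone, nn.Linear(feature_dim, num_classes))

def full_finetune(backbone: nn.Module, feature_dim: int, num_classes: int) -> nn.Module:
    """Expensive corner: every backbone parameter is updated on the downstream task."""
    for p in backbone.parameters():
        p.requires_grad = True
    return nn.Sequential(backbone, nn.Linear(feature_dim, num_classes))

toy_backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU())
model = linear_probe(toy_backbone, feature_dim=512, num_classes=20)

# Only trainable parameters go to the optimizer, so the linear probe updates a
# tiny fraction of the weights that full fine-tuning would touch.
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3
)
```

Prompt tuning, adapters, and the other methods collected in the :snowflake: section live between these two corners, trading off how many parameters and labeled samples each downstream task requires.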
:cinema: Benchmarks
ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models.
Chunyuan Li*, Haotian Liu*, Liunian Harold Li, Pengchuan Zhang, Jyoti Aneja, Jianwei Yang, Ping Jin, Houdong Hu, Zicheng Liu, Yong Jae Lee, Jianfeng Gao.
NeurIPS 2022 (Datasets and Benchmarks Track).
[paper] [benchmark]
:loudspeaker: News
- [09/2023] 🔥 Discover the fascinating journey of "Multimodal Foundation Models: From Specialists to General-Purpose Assistants" 🌐 Dive into the evolution of large models in #ComputerVision & #VisionLanguage! It is based on our CVPR 2023 Tutorial, where you can find videos and slides of the core chapters. For its preceding paper, please check out Vision-Language Pre-training: Basics, Recent Advances, and Future Trends.
- [02/2023] Organizing the 2nd Workshop @ CVPR2023 on Computer Vision in the Wild (CVinW), where two new challenges are hosted to evaluate the zero-shot, few-shot and full-shot performance of pre-trained vision models in downstream tasks:
- ``Segmentation in the Wild (SGinW)'' Challenge evaluates on 25 image segmentation tasks.
- ``Roboflow 100 for Object Detection in the Wild'' Challenge evaluates on 100 object detection tasks.
$\qquad$ [Workshop] $\qquad$ [SGinW Challenge] $\qquad$ [RF100 Challenge]
- [09/2022] Organizing ECCV Workshop Computer Vision in the Wild (CVinW), where two challenges are hosted to evaluate the zero-shot, few-shot and full-shot performance of pre-trained vision models in downstream tasks:
- ``Image Classification in the Wild (ICinW)'' Challenge evaluates on 20 image classification tasks.
- ``Object Detection in the Wild (ODinW)'' Challenge evaluates on 35 object detection tasks.
$\qquad$ [Workshop] $\qquad$ [ICinW Challenge] $\qquad$ [ODinW Challenge]
:fire: Papers on Task-level Transfer with Pre-trained Models
:orange_book: Image Classification in the Wild
[CLIP] Learning Transferable Visual Models From Natural Language Supervision.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
ICML 2021.
[paper] [code]
[ALIGN] Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision.
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig.
ICML 2021.
[paper]
OpenCLIP.
Gabriel Ilharco*, Mitchell Wortsman*, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, John Miller, Hongseok Namkoong, Hannaneh Hajishirzi, Ali Farhadi, Ludwig Schmidt.
Zenodo, doi:10.5281/zenodo.5143773, 2021.
[code]
Florence: A New Foundation Model for Computer Vision.
Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, Ce Liu, Mengchen Liu, Zicheng Liu, Yumao Lu, Yu Shi, Lijuan Wang, Jianfeng Wang, Bin Xiao, Zhen Xiao, Jianwei Yang, Michael Zeng, Luowei Zhou, Pengchuan Zhang.
arXiv:2111.11432, 2021.
[paper]
[UniCL] Unified Contrastive Learning in Image-Text-Label Space.
Jianwei Yang*, Chunyuan Li*, Pengchuan Zhang*, Bin Xiao*, Ce Liu, Lu Yuan, Jianfeng Gao.
CVPR 2022.
[paper] [code]
LiT: Zero-Shot Transfer with Locked-image text Tuning.
Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, Lucas Beyer.
CVPR 2022.
[paper]
[DeCLIP] Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm.
Yangguang Li, Feng Liang, Lichen Zhao, Yufeng Cui, Wanli Ouyang, Jing Shao, Fengwei Yu, Junjie Yan.
ICLR 2022.
[paper] [code]
FILIP: Fine-grained Interactive Language-Image Pre-Training.
Lewei Yao, Runhui Huang, Lu Hou, Guansong Lu, Minzhe Niu, Hang Xu, Xiaodan Liang, Zhenguo Li, Xin Jiang, Chunjing Xu.
ICLR 2022.
[paper]
SLIP: Self-supervision meets Language-Image Pre-training.
Norman Mu, Alexander Kirillov, David Wagner, Saining Xie.
ECCV 2022.
[paper] [code]
[MS-CLIP] Learning Visual Representation from Modality-Shared Contrastive Language-Image Pre-training.
Haoxuan You*, Luowei Zhou*, Bin Xiao*, Noel Codella*, Yu Cheng, Ruochen Xu, Shih-Fu Chang, Lu Yuan.
ECCV 2022.
[paper] [code]
MultiMAE: Multi-modal Multi-task Masked Autoencoders.
Roman Bachmann, David Mizrahi, Andrei Atanov, Amir Zamir.
ECCV 2022.