
CVinW Readings

"Computer Vision in the Wild (CVinW)" is an emerging research field. This writeup provides a quick introduction to CVinW and maintains a collection of papers on the topic. If you find missing papers or resources, please open an issue or a pull request (recommended).


What is Computer Vision in the Wild?

:star: Goals of CVinW

Developing a transferable foundation model/system that can effortlessly adapt to a large range of visual tasks in the wild. This goal comes with two key factors: (i) the task transfer scenarios are broad, and (ii) the task transfer cost is low. The main idea is illustrated below; please see the detailed description in the ELEVATER paper.

:one: Task Transfer Scenarios are Broad

We illustrate and compare CVinW with other settings using a 2D chart in Figure 1, where the space is constructed with two orthogonal dimensions: input image distribution and output concept set. The 2D chart is divided into four quadrants, based on how the model evaluation stage differs from the model development stage. For visual recognition problems at any granularity, such as image classification, object detection and segmentation, the modeling setup can be categorized into one of the four settings listed below (a toy sketch after Figure 1 spells out this mapping). We see an emerging trend of moving towards CVinW. Interested in the various pre-trained vision models that move towards CVinW? Please check out Section :fire:"Papers on Task-level Transfer with Pre-trained Models".

  • The Close-Set Setting. Training and evaluation distributions are consistent in both dimensions; this is the typical setting in ML/CV textbooks.
  • The Open-Set/Vocabulary/World Setting. It allows new concepts in evaluation, while the visual domain typically remains the same. Please see examples in image classification and object detection.
  • The Domain Generalization Setting. It allows a new visual domain in evaluation, while the concept pool typically remains the same. Please see examples such as DomainBed and DomainNet.
  • The Computer Vision in the Wild Setting. CVinW allows flexibility in both dimensions, so any new task/dataset in the wild essentially falls into this setting.
Figure 1: The comparison of CVinW with other existing settings.
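To make the four quadrants concrete, the toy sketch below (a hypothetical helper, not part of any toolkit listed here) maps the two axes of Figure 1 onto the four settings:

```python
def classify_setting(same_image_distribution: bool, same_concept_set: bool) -> str:
    """Map the two axes of Figure 1 onto the four evaluation settings."""
    if same_image_distribution and same_concept_set:
        return "Close-Set"                  # textbook train/test split
    if same_image_distribution:
        return "Open-Set/Vocabulary/World"  # new concepts, same visual domain
    if same_concept_set:
        return "Domain Generalization"      # new visual domain, same concept pool
    return "Computer Vision in the Wild"    # both axes may shift

# Example: evaluating on sketches of categories unseen during development.
print(classify_setting(same_image_distribution=False, same_concept_set=False))
# -> Computer Vision in the Wild
```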

:two: Task Transfer Cost is Low

One major advantage of pre-trained models is the promise that they can transfer to downstream tasks effortlessly. The model adaptation cost is considered along two orthogonal dimensions: sample-efficiency and parameter-efficiency, as illustrated in Figure 2. The bottom-left and top-right corners correspond to the cheapest and most expensive adaptation strategies, respectively. One may interpolate and combine along the two axes to obtain model adaptation methods with different costs; a code sketch after Figure 2 makes one cheap corner concrete. To efficiently adapt large vision models of gradually increasing size, we see an emerging need for efficient model adaptation. Interested in contributing your smart efficient adaptation algorithms and seeing how they differ from existing papers? Please check out Section :snowflake:"Papers on Efficient Model Adaptation".

  • Sample-efficiency: Zero-, Few-, and Full-shot. Due to the high cost of annotating data, it is often desirable to provide only a small number of labeled image-label pairs for a downstream dataset. Transferable models should reach high performance in this data-limited scenario.
  • Parameter-efficiency: Frozen Model Inference, Prompt Tuning, and Linear Probing vs. Full Model Fine-tuning. A smaller number of trainable parameters in model adaptation typically means a lower training cost on a new task.
Figure 2: The 2D chart of model adaptation cost.
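To see what the parameter-efficiency axis means in code, here is a minimal PyTorch sketch of linear probing, one of the cheap corners of Figure 2; the ResNet-50 backbone, the 20-class head, and the hyperparameters are illustrative assumptions, not a prescribed recipe. Full fine-tuning would instead leave all ~25M backbone parameters trainable.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Linear probing: freeze the pre-trained backbone, train only a new linear head.
model = resnet50(weights="IMAGENET1K_V2")        # any pre-trained checkpoint works
for p in model.parameters():
    p.requires_grad = False                      # locked backbone: no gradients

model.fc = nn.Linear(model.fc.in_features, 20)   # hypothetical 20-class downstream task

trainable = [p for p in model.parameters() if p.requires_grad]
print(f"trainable params: {sum(p.numel() for p in trainable):,}")  # ~41K vs ~25.6M total
optimizer = torch.optim.AdamW(trainable, lr=1e-3)  # optimize only the new head
```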

:cinema: Benchmarks

ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models.
Chunyuan Li*, Haotian Liu*, Liunian Harold Li, Pengchuan Zhang, Jyoti Aneja, Jianwei Yang, Ping Jin, Houdong Hu, Zicheng Liu, Yong Jae Lee, Jianfeng Gao.
NeurIPS 2022 (Datasets and Benchmarks Track). [paper] [benchmark]

:loudspeaker: News

  • [Workshop] [SGinW Challenge] [RF100 Challenge]
  • [Workshop] [ICinW Challenge] [ODinW Challenge]

:fire: Papers on Task-level Transfer with Pre-trained Models

:orange_book: Image Classification in the Wild

[CLIP] Learning Transferable Visual Models From Natural Language Supervision.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
ICML 2021. [paper] [code]
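As a concrete example of task-level transfer, the sketch below runs zero-shot classification with the Hugging Face port of CLIP; the checkpoint name is real, but the image path and prompt set are illustrative.

```python
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")                     # any local image (assumed path)
labels = ["a photo of a dog", "a photo of a cat"]   # prompt-engineered class names

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image       # (1, num_labels) similarity scores
print(labels[logits.argmax().item()])               # highest-scoring label wins
```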

[ALIGN] Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision.
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig.
ICML 2021. [paper]

OpenCLIP.
Gabriel Ilharco*, Mitchell Wortsman*, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, John Miller, Hongseok Namkoong, Hannaneh Hajishirzi, Ali Farhadi, Ludwig Schmidt.
Zenodo, doi:10.5281/zenodo.5143773, 2021. [code]

Florence: A New Foundation Model for Computer Vision.
Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, Ce Liu, Mengchen Liu, Zicheng Liu, Yumao Lu, Yu Shi, Lijuan Wang, Jianfeng Wang, Bin Xiao, Zhen Xiao, Jianwei Yang, Michael Zeng, Luowei Zhou, Pengchuan Zhang.
arXiv:2111.11432, 2021. [paper]

[UniCL] Unified Contrastive Learning in Image-Text-Label Space.
Jianwei Yang*, Chunyuan Li*, Pengchuan Zhang*, Bin Xiao*, Ce Liu, Lu Yuan, Jianfeng Gao.
CVPR 2022. [paper] [code]

LiT: Zero-Shot Transfer with Locked-image text Tuning.
Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, Lucas Beyer.
CVPR 2022. [paper]
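LiT locks the pre-trained image tower and trains only the text tower, while keeping the symmetric image-text contrastive objective shared by CLIP, ALIGN, and UniCL. Below is a minimal sketch of that objective as it is commonly implemented; the embedding dimension, batch size, and temperature are illustrative.

```python
import torch
import torch.nn.functional as F

def clip_style_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor, t: float = 0.07):
    """Symmetric InfoNCE over a batch of matched image-text pairs."""
    img_emb = F.normalize(img_emb, dim=-1)   # unit-normalize both towers
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / t       # (B, B) cosine similarities / temperature
    targets = torch.arange(logits.size(0))   # the i-th image matches the i-th text
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# LiT-style: image embeddings come from a frozen (locked) tower with no gradients;
# only the text tower producing txt_emb receives updates.
loss = clip_style_loss(torch.randn(8, 512), torch.randn(8, 512, requires_grad=True))
loss.backward()
```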

[DeCLIP] Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm.
Yangguang Li, Feng Liang, Lichen Zhao, Yufeng Cui, Wanli Ouyang, Jing Shao, Fengwei Yu, Junjie Yan.
ICLR 2022. [paper] [code]

FILIP: Fine-grained Interactive Language-Image Pre-Training.
Lewei Yao, Runhui Huang, Lu Hou, Guansong Lu, Minzhe Niu, Hang Xu, Xiaodan Liang, Zhenguo Li, Xin Jiang, Chunjing Xu.
ICLR 2022. [paper]

SLIP: Self-supervision meets Language-Image Pre-training.
Norman Mu, Alexander Kirillov, David Wagner, Saining Xie.
ECCV 2022. [paper] [code]

[MS-CLIP] Learning Visual Representation from Modality-Shared Contrastive Language-Image Pre-training.
Haoxuan You*, Luowei Zhou*, Bin Xiao*, Noel Codella*, Yu Cheng, Ruochen Xu, Shih-Fu Chang, Lu Yuan.
ECCV 2022. [paper] [code]

MultiMAE: Multi-modal Multi-task Masked Autoencoders.
Roman Bachmann, David Mizrahi, Andrei Atanov, Amir Zamir.
ECCV 2022. [paper] [code]
