Awesome CVPR 2024 Papers, Workshops, Challenges, and Tutorials!
The 2024 Conference on Computer Vision and Pattern Recognition (CVPR) received 11,532 valid paper submissions, and only 2,719 were accepted, for an overall acceptance rate of about 23.6%.
Below is a list of the papers, posters, challenges, workshops, and datasets I'm most excited about.
I'll be there with my crew from Voxel51 at Booth 1519, right next to the Meta and Amazon Science booths!
If you find the repo useful, come by and say "Hi" and I'll hook you up with some swag!
🏆 Challenges
Title | Summary |
---|---|
Agriculture-Vision Prize Challenge | The Agriculture-Vision Prize Challenge 2024 encourages the development of algorithms for recognizing agricultural patterns from aerial images and promotes sustainable agriculture practices. Semi-supervised learning techniques will be used to merge two datasets and assess model performance. Prizes are $2,500 for 1st place, $1,500 for 2nd place, and $1,000 for 3rd place. | ||
Building3D Challenge | This challenge utilizes the Building3D dataset, an urban-scale publicly available dataset with over 160,000 buildings from 16 cities in Estonia. Participants must develop algorithms that take point clouds as input and generate wireframe models. | ||
Structured Semantic 3D Reconstruction (S23DR) Challenge | Transform posed images or SfM outputs into wireframes for extracting semantically meaningful measurements. HoHo dataset provides images, point clouds, and wireframes with semantically tagged edges. $25,000 prize pool. | ||
Pixel-level Video Understanding in the Wild | The PVUW challenge includes four tracks: Video Semantic Segmentation (VSS), Video Panoptic Segmentation (VPS), Complex Video Object Segmentation, and Motion Expression guided Video Segmentation. The two new tracks, based on the MOSE and MeViS datasets, aim to foster the development of more comprehensive and robust pixel-level understanding of video scenes in complex environments and realistic scenarios. | ||
SyntaGen Competition | The SyntaGen Competition challenges participants to create high-quality synthetic datasets using Stable Diffusion and the 20 class names from PASCAL VOC 2012 for semantic segmentation. The datasets will be evaluated by training a DeepLabv3 model and assessing its performance on a private test set, with submissions ranked based on the mIoU metric. The top 2 teams will receive cash prizes and the opportunity to present their work at the workshop. | ||
SMART-101 CVPR 2024 Challenge | The SMART-101 challenge, hosted on EvalAI, targets multimodal algorithmic reasoning: models must solve visuo-linguistic puzzles from the SMART-101 dataset (Simple Multimodal Algorithmic Reasoning Task), which is built from 101 root puzzles designed to probe skills such as counting, arithmetic, spatial reasoning, and pattern matching at the level expected of children aged 6-8. The challenge evaluates how well vision-and-language models generalize these reasoning skills beyond the puzzles seen during training. | ||
Snapshot Spectral Imaging Face Anti-spoofing Challenge | New spectroscopy sensors can improve facial recognition systems' ability to identify realistic flexible masks made of silicone or latex. Snapshot Spectral Imaging (SSI) technology obtains compressed sensing spectral images in a single exposure, making it useful for incorporating spectroscopic information. Using a snapshot spectral camera, we created HySpeFAS - the first snapshot spectral face anti-spoofing dataset with 6760 hyperspectral images, each containing 30 spectral channels. This competition aims to encourage research on new spectroscopic sensor face anti-spoofing algorithms suitable for SSI images. | ||
Chalearn Face Anti-spoofing Workshop | Spoofing clues left by physical presentation attacks include color distortion, screen moiré patterns, and production traces, while forgery clues left by digital editing attacks appear as changes in pixel values. The fifth competition aims to explore the common characteristics of these attack clues and promote unified detection algorithms. It is built on UniAttackData, a unified physical-digital attack dataset with 1,800 subjects, 2 physical and 12 digital attack types, and 29,706 videos. | ||
DataCV Challenge | The DataCV Challenge focuses on searching for suitable training sets for a given object detection target. The challenge data consists of a source pool combining multiple existing detection datasets and a newly introduced target dataset with diverse detection environments recorded across 100 countries. Test set A is publicly available on GitHub, while test set B is reserved for determining the challenge awards. An evaluation server is provided for calculating test accuracy. For privacy, human faces and vehicle license plates have been blurred, and copyright was validated before the datasets were distributed. | ||
Grocery Vision | The GroceryVision Dataset is part of the RetailVision Workshop Challenge at CVPR 2024. It has two tracks that use real-world retail data collected in typical grocery store environments. Track 1 covers Temporal Action Localization (TAL) and Spatio-Temporal Action Localization (STAL). Participants are provided with 73,683 image-annotation pairs for training, and performance is evaluated by frame-mAP for TAL and tube-mAP for STAL. Track 2 is the Multi-modal Product Retrieval (MPR) challenge, in which participants must design methods that accurately retrieve product identity by measuring the similarity between images and descriptions. | ||
SoccerNet-GSR'24 Challenge | SoccerNet Game State Reconstruction (GSR) is a novel computer vision task involving the tracking and identification of sports players from a single moving camera to construct a video game-like minimap, without any specific hardware worn by the players. A new benchmark for Game State Reconstruction is introduced for this challenge, including a new dataset with 200 annotated soccer clips, a new evaluation metric, and a public baseline to serve as a starting point for the participants. Methods will be ranked according to their performance on the introduced metric on a held-out challenge set. |
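Several of the challenges above (e.g. SyntaGen and the PVUW segmentation tracks) rank submissions by mean Intersection-over-Union (mIoU). As a minimal sketch of how that metric is typically computed from predicted and ground-truth label maps (the function name and ignore-index convention are illustrative, not taken from any challenge's official evaluation kit):

```python
import numpy as np

def mean_iou(pred, target, num_classes, ignore_index=255):
    """Mean Intersection-over-Union across classes.

    pred, target: integer label maps of identical shape.
    Pixels labelled `ignore_index` in `target` are excluded.
    """
    valid = target != ignore_index
    pred, target = pred[valid], target[valid]
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:  # class absent from both maps: skip it
            continue
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))
```

A perfect prediction yields an mIoU of 1.0; official evaluation servers usually average this score over the whole private test set.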
👁️ Vision Transformers
👁️💬 Vision-Language
Title | Authors | Code / arXiv Page | Summary |
---|---|---|---|
Vlogger: Make Your Dream A Vlog | Shaobin Zhuang, Kunchang Li, Xinyuan Chen | Vlogger is an AI system that generates minute-level video blogs from user descriptions. It uses a Large Language Model (LLM) to break down the task into four stages: Script, Actor, ShowMaker, and Voicer. The ShowMaker uses a Spatial-Temporal Enhanced Block (STEB) to enhance spatial-temporal coherence. Vlogger can generate 5+ minute vlogs surpassing previous long video generation methods. | |
A Closer Look at the Few-Shot Adaptation of Large Vision-Language Models | Julio Silva-Rodríguez, Sina Hajimiri, Ismail Ben Ayed | CLIP is a powerful vision-language model for visual recognition. However, fine-tuning it for small downstream tasks with limited labeled samples is challenging. Efficient transfer learning (ETL) methods adapt VLMs with few parameters, but require careful per-task hyperparameter tuning using large validation sets. To overcome this, the authors propose CLAP, a principled approach that adapts linear probing for few-shot learning. CLAP consistently outperforms ETL methods, providing an efficient and robust approach for few-shot adaptation of large vision-language models in realistic settings where hyperparameter tuning with large validation sets is not feasible. | |
Alpha-CLIP: A CLIP Model Focusing on Wherever You Want | Zeyi Sun, Ye Fang, Tong Wu | Alpha-CLIP is an improved version of the CLIP model that focuses on specific regions of interest in images through an auxiliary alpha channel. It can enhance CLIP in different image-related tasks, including 2D and 3D image generation, captioning, and detection. Alpha-CLIP preserves CLIP's visual recognition ability and boosts zero-shot classification accuracy by 4.1% when using foreground masks. | |
CLOVA: A Closed-Loop Visual Assistant with Tool Usage and Update | Zhi Gao, Yuntao Du, Xintong Zhang | CLOVA is a system that leverages large language models (LLMs) to generate programs that can accomplish various visual tasks using off-the-shelf visual tools. To overcome the limitation of fixed tools, CLOVA has a closed-loop framework that includes an inference phase, reflection phase, and learning phase. It also uses a multimodal global-local reflection scheme and three flexible methods to collect real-time training data. CLOVA's learning capability enables it to adapt to new environments, resulting in a 5-20% better performance on VQA, multiple-image reasoning, knowledge tagging, and image editing tasks. | |
Convolutional Prompting meets Language Models for Continual Learning | Anurag Roy, Riddhiman Moulick, Vinay K. Verma | The paper introduces ConvPrompt, a novel approach for continual learning in vision transformers. ConvPrompt leverages convolutional prompts and large language models to maintain layer-wise shared embeddings and improve knowledge sharing across tasks. The method improves state-of-the-art by around 3% with significantly fewer parameters. In summary, ConvPrompt is an efficient and effective prompt-based continual learning approach that adapts the model capacity based on task similarity. | |
Improved Visual Grounding through Self-Consistent Explanations | Ruozhen He, Paola Cascante-Bonilla, Ziyan Yang | This paper presents a strategy called SelfEQ. The aim of SelfEQ is to improve the ability of vision-and-language models to locate specific objects in an image. The proposed strategy involves adding paraphrases generated by a large language model to existing text-image datasets. The model is then fine-tuned to ensure that a phrase and its paraphrase map to the same region in the image. This promotes self-consistency in visual explanations, expands the model's vocabulary, and enhances the quality of object locations highlighted by gradient-based visual explanation methods like GradCAM. | |
Learning CNN on ViT: A Hybrid Model to Explicitly Class-specific Boundaries for Domain Adaptation | Ba Hung Ngo, Nhat-Tuong Do-Tran, Tuan-Ngoc Nguyen | The paper introduces a new approach called Explicitly Class-specific Boundaries (ECB) for domain adaptation, which combines the strengths of Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) by training CNN on ViT. ECB uses ViT to determine class-specific decision boundaries and CNN to group target features based on those boundaries. This improves the quality of pseudo labels and reduces knowledge disparities. The paper also provides visualizations to demonstrate the effectiveness of the proposed ECB method. | |
Link-Context Learning for Multimodal LLMs | Yan Tai, Weichen Fan, Zhao Zhang | Link-context learning (LCL) strengthens in-context learning in multimodal LLMs by emphasizing the causal link between the support demonstrations and the query, so the model can acquire novel concepts from a few image-label pairs in the conversation and recognize unseen images. The authors also introduce the ISEKAI dataset, composed of generated image-label pairs unseen during training, to evaluate this capability. |
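Many of the papers above (Alpha-CLIP, CLAP) build on CLIP-style zero-shot classification, where an image embedding is compared against text embeddings of class prompts such as "a photo of a dog". Assuming the embeddings are already available (the arrays and temperature value below are illustrative), prediction reduces to cosine similarity followed by a softmax:

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, temperature=0.01):
    """CLIP-style zero-shot classification.

    image_emb: (d,) embedding of one image.
    text_embs: (k, d) embeddings of k class prompts.
    Returns a probability distribution over the k classes.
    """
    # L2-normalize so the dot product equals cosine similarity
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = text_embs @ image_emb / temperature
    logits -= logits.max()  # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()
```

The low temperature sharpens the distribution, mirroring the learned logit scale in CLIP; region-focused variants like Alpha-CLIP change how `image_emb` is produced, not this classification step.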