Transformer Tracking
This repository is a paper digest of Transformer-related approaches in visual tracking tasks. Currently, the tasks covered in this repository are Unified Tracking (UT), Single Object Tracking (SOT), and 3D Single Object Tracking (3DSOT). Note that some trackers involving a Non-Local attention mechanism are also collected. Papers are listed in alphabetical order of the first character.
[!NOTE] I find it hard to trace all the tasks related to tracking, including Video Object Segmentation (VOS), Multiple Object Tracking (MOT), Video Instance Segmentation (VIS), Video Object Detection (VOD), and Object Re-Identification (ReID). Hence, I discarded all other tracking tasks in a previous update. If you are interested, you can find plenty of collections in this archived version. Besides, the most recent trend shows that different tracking tasks are converging toward the same avenue.
:star2:Recommendation
It's the End of the Game
State-of-the-Art Transformer Tracker:two_hearts::two_hearts::two_hearts:
- GRM (Generalized Relation Modeling for Transformer Tracking) [paper] [code] [video]
- AiATrack (AiATrack: Attention in Attention for Transformer Visual Tracking) [paper] [code] [video]
Up-to-Date Benchmark Results:rocket::rocket::rocket:
- Image courtesy: https://arxiv.org/abs/2302.11867
Helpful Learning Resource for Tracking:thumbsup::thumbsup::thumbsup:
- (Survey) Transformers in Single Object Tracking: An Experimental Survey [paper], Visual Object Tracking with Discriminative Filters and Siamese Networks: A Survey and Outlook [paper]
- (Talk) Discriminative Appearance-Based Tracking and Segmentation [video], Deep Visual Reasoning with Optimization-Based Network Modules [video]
- (Library) PyTracking: Visual Tracking Library Based on PyTorch [code]
- (People) Martin Danelljan@ETH [web], Bin Yan@DLUT [web]
Recent Trends:fire::fire::fire:
- Target Head: Autoregressive Temporal Modeling
- Feature Backbone: Joint Feature Extraction and Interaction
  - Advantage
    - Benefit from pre-trained vision Transformer models.
    - Free from randomly initialized correlation modules.
    - More discriminative target-specific feature extraction.
    - Much faster inference and training convergence speed.
    - Simple and generic one-branch tracking framework.
  - Roadmap
    - 1st step :feet: feature interaction inside the backbone.
    - 2nd step :feet: concatenation-based feature interaction.
      - STARK [ICCV'21], SwinTrack [NeurIPS'22]
    - 3rd step :feet: joint feature extraction and interaction.
    - 4th step :feet: generalized and robust relation modeling.
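The one-branch idea behind this roadmap can be illustrated with a toy sketch: template and search-region tokens are concatenated and fed through a single self-attention layer, so feature extraction and template-search interaction happen jointly inside the backbone instead of in a separate correlation module. All names, dimensions, and the identity Q/K/V projections below are illustrative assumptions, not taken from any specific tracker.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(tokens):
    """Single-head self-attention with identity Q/K/V projections (for brevity)."""
    d = len(tokens[0])
    out = []
    for q in tokens:
        # Scaled dot-product scores of this query against every token.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in tokens]
        w = softmax(scores)
        # Output token: attention-weighted combination of all tokens.
        out.append([sum(wj * tok[i] for wj, tok in zip(w, tokens)) for i in range(d)])
    return out

def one_stream_layer(template_tokens, search_tokens):
    """Joint feature extraction and interaction: one attention over all tokens."""
    n_t = len(template_tokens)
    mixed = self_attention(template_tokens + search_tokens)
    return mixed[:n_t], mixed[n_t:]  # split back into template / search features

# Toy 2-D tokens: two template tokens and three search-region tokens.
z = [[1.0, 0.0], [0.9, 0.1]]
x = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.1]]
z_out, x_out = one_stream_layer(z, x)
```

Because every search token attends to the template tokens inside the same attention call, the resulting search features are already target-specific, which is the advantage the list above attributes to one-branch frameworks.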
:bookmark:Unified Tracking (UT)
CVPR 2024
- GLEE (General Object Foundation Model for Images and Videos at Scale) [paper] [code]
- OmniViD (OmniVid: A Generative Framework for Universal Video Understanding) [paper] [code]
CVPR 2023
- OmniTracker (OmniTracker: Unifying Object Tracking by Tracking-with-Detection) [paper] [code]
- UNINEXT (Universal Instance Perception as Object Discovery and Retrieval) [paper] [code]
ICCV 2023
- MITS (Integrating Boxes and Masks: A Multi-Object Framework for Unified Visual Tracking and Segmentation) [paper] [code]
Preprint 2023
- HQTrack (Tracking Anything in High Quality) [paper] [code]
- SAM-Track (Segment and Track Anything) [paper] [code]
- TAM (Track Anything: Segment Anything Meets Videos) [paper] [code]
CVPR 2022
ECCV 2022
:bookmark:Single Object Tracking (SOT)
CVPR 2024
- AQATrack (Autoregressive Queries for Adaptive Tracking with Spatio-Temporal Transformers) [paper] [code]
- ARTrackV2 (ARTrackV2: Prompting Autoregressive Tracker Where to Look and How to Describe) [paper] [code]
- DiffusionTrack (DiffusionTrack: Point Set Diffusion Model for Visual Object Tracking) [paper] [code]
- HDETrack (Event Stream-Based Visual Object Tracking: A High-Resolution Benchmark Dataset and A Novel Baseline) [paper] [code]
- HIPTrack (HIPTrack: Visual Tracking with Historical Prompts) [paper] [code]
- OneTracker (OneTracker: Unifying Visual Object Tracking with Foundation Models and Efficient Tuning) [paper] [code]
- QueryNLT (Context-Aware Integration of Language and Visual References for Natural Language Tracking) [paper] [code]
- SDSTrack (SDSTrack: Self-Distillation Symmetric Adapter Learning for Multi-Modal Visual Object Tracking) [paper] [code]
- Un-Track (Single-Model and Any-Modality for Video Object Tracking) [paper] [code]
ECCV 2024
- Diff-Tracker (Diff-Tracker: Text-to-Image Diffusion Models are Unsupervised Trackers) [paper] [code]
- LoRAT (Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance) [paper] [code]
AAAI 2024
- BAT (Bi-Directional Adapter for Multi-Modal Tracking) [paper] [code]
- EVPTrack (Explicit Visual Prompts for Visual Object Tracking) [paper] [code]
- ODTrack (ODTrack: Online Dense Temporal Token Learning for Visual Tracking) [paper] [code]
- STCFormer (Sequential Fusion Based Multi-Granularity Consistency for Space-Time Transformer Tracking) [paper] [code]
- TATrack (Temporal Adaptive RGBT Tracking with Modality Prompt) [paper] [code]
- UVLTrack (Unifying Visual and Vision-Language Tracking via Contrastive Learning) [paper] [code]
ICML 2024
- AVTrack (Learning Adaptive and View-Invariant Vision Transformer for Real-Time UAV Tracking) [paper] [code]
IJCAI 2024
WACV 2024
- SMAT (Separable Self and Mixed Attention Transformers for Efficient Object Tracking) [paper] [code]
- TaMOs (Beyond SOT: It's Time to Track Multiple Generic Objects at Once) [paper] [code]
ICRA 2024
Preprint 2024
- ABTrack (Adaptively Bypassing Vision Transformer Blocks for Efficient Visual Tracking) [paper] [code]
- ACTrack (ACTrack: Adding Spatio-Temporal Condition for Visual Object Tracking) [paper] [code]
- AFter (AFter: Attention-Based Fusion Router for RGBT Tracking) [paper] [code]
- AMTTrack (Long-Term Frame-Event Visual Tracking: Benchmark Dataset and Baseline) [paper] [code]
- BofN (Predicting the Best of N Visual Trackers) [paper] [code]
- CAFormer (Cross-modulated Attention Transformer for RGBT Tracking) [paper] [code]
- CRSOT (CRSOT: Cross-Resolution Object Tracking using Unaligned Frame and Event Cameras) [paper] [code]
- CSTNet (Transformer-Based RGB-T Tracking with Channel and Spatial Feature Fusion) [paper] [code]
- DyTrack (Exploring Dynamic Transformer for Efficient Object Tracking) [paper] [code]
- eMoE-Tracker (eMoE-Tracker: Environmental MoE-Based Transformer for Robust Event-Guided Object Tracking) [paper] [code]
- LoReTrack (LoReTrack: Efficient and Accurate Low-Resolution Transformer Tracking) [paper] [code]
- MAPNet (Multi-Attention Associate Prediction Network for Visual Tracking) [paper] [code]
- MDETrack (Enhanced Object Tracking by Self-Supervised Auxiliary Depth Estimation Learning) [paper] [code]
- MMMP (From Two Stream to One Stream: Efficient RGB-T Tracking via Mutual Prompt Learning and Knowledge Distillation) [paper] [code]
- M3PT (Middle Fusion and Multi-Stage, Multi-Form Prompts for Robust RGB-T Tracking) [paper] [code]
- NLMTrack (Enhancing Thermal Infrared Tracking with Natural Language Modeling and Coordinate Sequence Generation) [paper] [code]
- OIFTrack (Optimized Information Flow for Transformer