Awesome Human Activity Recognition (mainly SENSOR-based)
An up-to-date, curated list of awesome IMU-based Human Activity Recognition (ubiquitous computing) papers, methods, and resources. Please note that most of the collected research is based on IMU data.
Acknowledgment
Many thanks to the following helpful publications and repos: Jingdong Wang, Awesome-Deep-Vision, Awesome-Deep-Learning-Papers, Awesome-Self-Supervised-Learning, Awesome-Semi-Supervised-Learning and Awesome-Crowd-Counting.
Contributing
Please feel free to contribute to this list.
Conferences, Journals and Workshops
- IJCAI, ACM MultiMedia, AAAI, KDD, ICDM, TKDE, TIP, TNNLS, TPAMI, TMM, Pattern Recognition, AI, Nature Communications, Nature Digital Medicine, ICPR, Sensors, UbiComp (IMWUT journal)
- https://github.com/OxWearables/Oxford_Wearables_Activity_Recognition
Datasets
- Capture-24 [link]
- mHealth [link]
- HHAR [link]
- Opportunity [link]
- PAMAP2 [link]
- GOTOV [link]
- REALDISP [link]
- UCIDSADS [link]
- MMAct [link]
- TotalCapture [link]
- WISDM [link]
- MotionSense [link]
- MobiAct [link]
- Fenland [link]
- Salad 50 [link]
- DIP [link]
- LARa [link]
- Human Inertial Pose [link]
- Kinetics-400 dataset [link]
- UCF-101 dataset [link]
Tools
Other related tasks
- EEG analysis/prediction/modelling: https://github.com/meagmohit/EEG-Datasets
Potential Research Direction
- Large-Scale/Diverse Dataset Research
- Multi-Modality: sensor-vision, sensor-skeleton, sensor-3D pose, sensor-motion
- Window Selection
- Generative Model: e.g., cross modality data generation, IMU2Skeleton
- Handling the NULL-Class problem
- Open-World, Real-World: complex/non-repetitive activities
- Advanced Models
- Data-Centric: active learning, unsupervised learning, semi-supervised learning, self-supervised learning
- Action Segmentation
- Are the existing settings/models reliable?
- Graph Representation
- Motion-Capture, Kinetic
- Privacy related
- Interpretability
- Data Imbalance
- Domain Adaptation
- Fine-Grained
- Multi-Label
- Federated Learning
- Ensemble
- Knowledge Integration/Distillation
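Several of the directions above (e.g., window selection and action segmentation) hinge on how a continuous sensor stream is segmented before modeling. As a minimal sketch (the function name and parameters are illustrative, not taken from any listed paper), a fixed-size sliding window over a multi-channel IMU stream can look like:

```python
import numpy as np

def sliding_windows(signal, window_size, step):
    """Segment a (T, C) multi-channel IMU stream into overlapping
    fixed-length windows, returning an array of shape (N, window_size, C)."""
    starts = range(0, len(signal) - window_size + 1, step)
    return np.stack([signal[s:s + window_size] for s in starts])

# Example: 10 s of 6-axis IMU data at 50 Hz, 2 s windows with 50% overlap
x = np.zeros((500, 6))
windows = sliding_windows(x, window_size=100, step=50)
print(windows.shape)  # (9, 100, 6)
```

The window length and overlap are hyperparameters: short windows may truncate slow activities, while long windows can mix several activities into one label, which is exactly the trade-off the "window selection" direction studies.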
Papers
Surveys & Overview
- Body-Area Capacitive or Electric Field Sensing for Human Activity Recognition and Human-Computer Interaction: A Comprehensive Survey
- A Survey on Deep Learning for Human Activity Recognition (ACM Computing Surveys (CSUR)) [paper]
- Applying Machine Learning for Sensor Data Analysis in Interactive Systems: Common Pitfalls of Pragmatic Use and Ways to Avoid Them (ACM Computing Surveys (CSUR)) [paper]
- [DL4SAR] Deep Learning for Sensor-based Activity Recognition: A Survey (Pattern Recognition Letters) [paper][code]
- Deep Learning for Sensor-based Human Activity Recognition: Overview, Challenges, and Opportunities (ACM Computing Surveys (CSUR)) [paper]
- Human Action Recognition from Various Data Modalities: A Review (IEEE TPAMI 2022, top AI journal) [paper]
2024
- EarSleep: In-ear Acoustic-based Physical and Physiological Activity Recognition for Sleep Stage Detection
- AutoAugHAR: Automated Data Augmentation for Sensor-based Human Activity Recognition
- CrossHAR: Generalizing Cross-dataset Human Activity Recognition via Hierarchical Self-Supervised Pretraining
- Changing Your Tune: Lessons for Using Music to Encourage Physical Activity
- The EarSAVAS Dataset: Enabling Subject-Aware Vocal Activity Sensing on Earables
- Self-supervised learning for Human Activity Recognition Using 700,000 Person-days of Wearable Data
- IMUGPT 2.0: Language-Based Cross Modality Transfer for Sensor-Based Human Activity Recognition
- HARMamba: Efficient Wearable Sensor Human Activity Recognition Based on Bidirectional Selective SSM
- HyperHAR: Inter-sensing Device Bilateral Correlations and Hyper-correlations Learning Approach for Wearable Sensing Device Based Human Activity Recognition
- Lateralization Effects in Electrodermal Activity Data Collected Using Wearable Devices
- Body-Area Capacitive or Electric Field Sensing for Human Activity Recognition and Human-Computer Interaction: A Comprehensive Survey
- exHAR: An Interface for Helping Non-Experts Develop and Debug Knowledge-based Human Activity Recognition Systems
- Kirigami: Lightweight Speech Filtering for Privacy-Preserving Activity Recognition using Audio
- Co-Designing Sensory Feedback for Wearables to Support Physical Activity through Body Sensations
- Semantic Loss: A New Neuro-Symbolic Approach for Context-Aware Human Activity Recognition
- CAvatar: Real-time Human Activity Mesh Reconstruction via Tactile Carpets
- Deep Heterogeneous Contrastive Hyper-Graph Learning for In-the-Wild Context-Aware Human Activity Recognition
- SF-Adapter: Computational-Efficient Source-Free Domain Adaptation for Human Activity Recognition
- Spatial-Temporal Masked Autoencoder for Multi-Device Wearable Human Activity Recognition
- Optimization-Free Test-Time Adaptation for Cross-Person Activity Recognition
- TextureSight: Texture Detection for Routine Activity Awareness with Wearable Laser Speckle Imaging
- TS2ACT: Few-Shot Human Activity Sensing with Cross-Modal Co-Learning
2023
- Integrating Gaze and Mouse Via Joint Cross-Attention Fusion Net for Students' Activity Recognition in E-learning
- VAX: Using Existing Video and Audio-based Activity Recognition Models to Bootstrap Privacy-Sensitive Sensors
- LAUREATE: A Dataset for Supporting Research in Affective Computing and Human Memory Augmentation
- MMTSA: Multi-Modal Temporal Segment Attention Network for Efficient Human Activity Recognition
- HMGAN: A Hierarchical Multi-Modal Generative Adversarial Network Model for Wearable Human Activity Recognition
- TAO: Context Detection from Daily Activity Patterns Using Temporal Analysis and Ontology
- HAKE: Human Activity Knowledge Engine [link]
- PhysiQ: Off-site Quality Assessment of Exercise in Physical Therapy [link]
- SAMoSA: Sensing Activities with Motion and Subsampled Audio [link]
- Physical-aware Cross-modal Adversarial Network for Wearable Sensor-based Human Action Recognition [link]
- IMU2CLIP: Multimodal Contrastive Learning for IMU Motion Sensors from Egocentric Videos and Text
- Real-time Context-Aware Multimodal Network for Activity and Activity-Stage Recognition from Team Communication in Dynamic Clinical Settings
- X-CHAR: A Concept-based Explainable Complex Human Activity Recognition Model
- Hierarchical Clustering-based Personalized Federated Learning for Robust and Fair Human Activity Recognition
- AMIR: Active Multimodal Interaction Recognition from Video and Network Traffic in Connected Environments
- Narrative-Based Visual Feedback to Encourage Sustained Physical Activity: A Field Trial of the WhoIsZuki Mobile Health Platform
- Human Parsing with Joint Learning for Dynamic mmWave Radar Point Cloud
- RF-CM: Cross-Modal Framework for RF-enabled Few-Shot Human Activity Recognition
- PrISM-Tracker: A Framework for Multimodal Procedure Tracking Using Wearable Sensors and State Transition Information with User-Driven Handling of Errors and Uncertainty
- GLOBEM: Cross-Dataset Generalization of Longitudinal Human Behavior Modeling
- TransFloor: Transparent Floor Localization for Crowdsourcing Instant Delivery
- Understanding the Mechanism of Through-Wall Wireless Sensing: A Model-based Perspective
- Unveiling Causal Attention in Dogs' Eyes with Smart Eyewear
- MHCCL: Masked Hierarchical Cluster-wise Contrastive Learning for Multivariate Time Series
2022
- Self-Supervised Contrastive Pre-Training for Time Series via Time-Frequency Consistency
- MaeFE: Masked Autoencoders Family of Electrocardiogram for Self-supervised Pre-training and Transfer Learning
- A Simple Self-Supervised IMU Denoising Method For Inertial Aided Navigation
- Adaptive Memory Networks with Self-supervised Learning for Unsupervised Anomaly Detection
- FLAME: Federated Learning across Multi-device Environments
- Longitudinal cardio-respiratory fitness prediction through wearables in free-living environments
- Self-supervised transfer learning of physiological representations from free-living wearable data
- Learning Generalizable Physiological Representations from Large-scale Wearable Data
- Application-Driven AI Paradigm for Human Action Recognition
- A hybrid accuracy-and energy-aware human activity recognition model in IoT environment
- Predicting Performance Improvement of Human Activity Recognition Model by Additional Data Collection
- Towards Ubiquitous Personalized Music Recommendation with Smart Bracelets
- Augmented Adversarial Learning for Human Activity Recognition with Partial Sensor Sets
- Bootstrapping Human Activity Recognition Systems for Smart Homes from Scratch
- Towards a Dynamic Inter-Sensor Correlations Learning Framework for Multi-Sensor-Based Wearable Human Activity Recognition [link]
- Cosmo: Contrastive Fusion Learning with Small Data for Multimodal Human Activity Recognition
- What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks?
- ClusterFL: a similarity-aware federated learning system for human activity recognition
- Human Action Recognition from Various Data Modalities: A Review (IEEE TPAMI 2022 (top AI Journal))
- Semantic-Discriminative Mixup for Generalizable Sensor-based Cross-domain Activity Recognition
- Are You Left Out?: An Efficient and Fair Federated Learning for Personalized Profiles on Wearable Devices of Inferior Networking Conditions
- Progressive Cross-modal Knowledge Distillation for Human Action Recognition [link]
- Leveraging Sound and Wrist Motion to Detect Activities of Daily Living with Commodity Smartwatches
- I Want to Know Your Hand: Authentication on Commodity Mobile Phones Based on Your Hand's Vibrations
- CSI:DeSpy: Enabling Effortless Spy Camera Detection via Passive Sensing of User Activities and Bitrate Variations
- Acceleration-based Activity Recognition of Repetitive Works with Lightweight Ordered-work Segmentation Network
- IF-ConvTransformer: A Framework for Human Activity Recognition Using IMU Fusion and ConvTransformer
- Quali-Mat: Evaluating the Quality of Execution in Body-Weight Exercises with a Pressure Sensitive Sports Mat
- Non-Bayesian Out-of-Distribution Detection Applied to CNN Architectures for Human Activity Recognition
- Resource-Efficient Continual Learning for Sensor-Based Human Activity Recognition
- Beyond the Gates of Euclidean Space: Temporal-Discrimination-Fusions and Attention-based Graph Neural Network for Human Activity Recognition
- LiteHAR: Lightweight Human Activity Recognition from WiFi Signals with Random Convolution Kernels
- A Review on Topological Data Analysis in Human Activity Recognition
- Deep CNN-LSTM with Self-Attention Model for Human Activity Recognition using Wearable Sensor
- Zero-Shot Learning for IMU-Based Activity Recognition Using Video Embeddings
- Deep Transfer Learning with Graph Neural Network for Sensor-Based Human Activity Recognition
- Meta-learning meets the Internet of Things: Graph prototypical models for sensor-based human activity recognition
- Federated Multi-Task Learning
- Unsupervised Human Activity Recognition Using the Clustering Approach: A Review
- Hierarchical Self Attention Based Autoencoder for Open-Set Human Activity Recognition
- Assessing the State of Self-Supervised Human Activity Recognition using Wearables
- Robust and Efficient Uncertainty Aware Biosignal Classification via Early Exit Ensembles
- Machine learning detects altered spatial navigation features in outdoor behaviour of Alzheimer’s disease patients
- Evaluating Contrastive Learning on Wearable Timeseries for Downstream Clinical Outcomes
- Segmentation-free Heart Pathology Detection Using Deep Learning
- Anticipatory Detection of Compulsive Body-focused Repetitive Behaviors with Wearables
- Method and system for automatic extraction of virtual on-body inertial measurement units
- Enhancing the Security & Privacy of Wearable Brain-Computer Interfaces
- Detecting Smartwatch-Based Behavior Change in Response to a Multi-Domain Brain Health Intervention
- ColloSSL: Collaborative Self-Supervised Learning for Human Activity Recognition
- Multi-scale Deep Feature Learning for Human Activity Recognition Using Wearable Sensors
- Improving Wearable-Based Activity Recognition Using Image Representations
- Multi-sensor information fusion based on machine learning for real applications in human activity recognition: State-of-the-art and research challenges
- A recurrent neural network architecture to model physical activity energy expenditure in older people
- Application of artificial intelligence in wearable devices: Opportunities and challenges
- A Close Look into Human Activity Recognition Models using Deep Learning