
Awesome-Human-Activity-Recognition

A comprehensive resource collection for the field of human activity recognition

This project provides a comprehensive collection of resources in the field of human activity recognition (HAR), including recent research, methods, datasets, tools, and potential research directions. The resources cover inertial measurement unit (IMU)-based techniques, with important papers organized by year. Spanning foundational theory to cutting-edge applications, it offers a valuable reference for HAR researchers and developers.

Awesome Human Activity Recognition (mainly related to SENSOR data)

Awesome MIT License PRs Welcome

An up-to-date, curated list of awesome IMU-based Human Activity Recognition (Ubiquitous Computing) papers, methods, and resources. Please note that most of the collected research is based on IMU data.

Acknowledgment

Many thanks to these useful publications and repos: Jingdong Wang, Awesome-Deep-Vision, Awesome-Deep-Learning-Papers, Awesome-Self-Supervised-Learning, Awesome-Semi-Supervised-Learning, and Awesome-Crowd-Counting.

Contributing

We Need You!

Please feel free to contribute to this list.

Conferences, Journals and Workshops

Datasets

Tools

Other related tasks

Potential Research Direction

  • Large-Scale/Diverse Dataset Research
  • Multi-Modality: sensor-vision, sensor-skeleton, sensor-3DPose, sensor-motion
  • Window Selection
  • Generative Models: e.g., cross-modality data generation, IMU2Skeleton
  • Handling the NULL-Class Problem
  • Open-World, Real-World: complex/non-repetitive activities
  • Advanced Models
  • Data-Centric: active learning, unsupervised learning, semi-supervised learning, self-supervised learning
  • Action Segmentation
  • Are the existing settings/models reliable?
  • Graph Representation
  • Motion Capture, Kinetics
  • Privacy-Related
  • Interpretability
  • Data Imbalance
  • Domain Adaptation
  • Fine-Grained Recognition
  • Multi-Label Recognition
  • Federated Learning
  • Ensembles
  • Knowledge Integration/Distillation
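Several of the directions above (window selection, action segmentation) start from the same preprocessing step: slicing a continuous IMU stream into fixed-length, overlapping windows before feeding it to a classifier. As a minimal illustrative sketch (the function name and parameters are our own, not from any paper in this list):

```python
def sliding_windows(samples, window_size, step):
    """Segment a stream of IMU samples into fixed-length, possibly
    overlapping windows -- the usual preprocessing step in sensor-based HAR.

    `samples` is a sequence of per-timestep readings, e.g. (ax, ay, az)
    tuples. Trailing samples that do not fill a whole window are dropped.
    """
    windows = []
    for start in range(0, len(samples) - window_size + 1, step):
        windows.append(samples[start:start + window_size])
    return windows


# Example: 10 timesteps of fake accelerometer data, windows of 4 samples
# with 50% overlap (step = 2) -> windows starting at t = 0, 2, 4, 6.
stream = [(t, t * 0.1, t * 0.01) for t in range(10)]
wins = sliding_windows(stream, window_size=4, step=2)
```

The choice of `window_size` and `step` is exactly the "window selection" problem listed above: too short a window truncates activities, too long a window mixes several activities into one label.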

Papers

Surveys & Overview

  • Body-Area Capacitive or Electric Field Sensing for Human Activity Recognition and Human-Computer Interaction: A Comprehensive Survey

  • A Survey on Deep Learning for Human Activity Recognition (ACM Computing Surveys (CSUR)) [paper]

  • Applying Machine Learning for Sensor Data Analysis in Interactive Systems: Common Pitfalls of Pragmatic Use and Ways to Avoid Them (ACM Computing Surveys (CSUR)) [paper]

  • [DL4SAR] Deep Learning for Sensor-based Activity Recognition: A Survey (Pattern Recognition Letters) [paper][code]

  • Deep Learning for Sensor-based Human Activity Recognition: Overview, Challenges, and Opportunities (ACM Computing Surveys (CSUR)) [paper]

  • Human Action Recognition from Various Data Modalities: A Review (IEEE TPAMI 2022) [paper] top AI Journal

2024

  • EarSleep: In-ear Acoustic-based Physical and Physiological Activity Recognition for Sleep Stage Detection
  • AutoAugHAR: Automated Data Augmentation for Sensor-based Human Activity Recognition
  • CrossHAR: Generalizing Cross-dataset Human Activity Recognition via Hierarchical Self-Supervised Pretraining
  • Changing Your Tune: Lessons for Using Music to Encourage Physical Activity
  • The EarSAVAS Dataset: Enabling Subject-Aware Vocal Activity Sensing on Earables
  • Self-supervised learning for Human Activity Recognition Using 700,000 Person-days of Wearable Data
  • IMUGPT 2.0: Language-Based Cross Modality Transfer for Sensor-Based Human Activity Recognition
  • HARMamba: Efficient Wearable Sensor Human Activity Recognition Based on Bidirectional Selective SSM
  • HyperHAR: Inter-sensing Device Bilateral Correlations and Hyper-correlations Learning Approach for Wearable Sensing Device Based Human Activity Recognition
  • Lateralization Effects in Electrodermal Activity Data Collected Using Wearable Devices
  • Body-Area Capacitive or Electric Field Sensing for Human Activity Recognition and Human-Computer Interaction: A Comprehensive Survey
  • exHAR: An Interface for Helping Non-Experts Develop and Debug Knowledge-based Human Activity Recognition Systems
  • Kirigami: Lightweight Speech Filtering for Privacy-Preserving Activity Recognition using Audio
  • Co-Designing Sensory Feedback for Wearables to Support Physical Activity through Body Sensations
  • Semantic Loss: A New Neuro-Symbolic Approach for Context-Aware Human Activity Recognition
  • CAvatar: Real-time Human Activity Mesh Reconstruction via Tactile Carpets
  • Deep Heterogeneous Contrastive Hyper-Graph Learning for In-the-Wild Context-Aware Human Activity Recognition
  • SF-Adapter: Computational-Efficient Source-Free Domain Adaptation for Human Activity Recognition
  • Spatial-Temporal Masked Autoencoder for Multi-Device Wearable Human Activity Recognition
  • Optimization-Free Test-Time Adaptation for Cross-Person Activity Recognition
  • TextureSight: Texture Detection for Routine Activity Awareness with Wearable Laser Speckle Imaging
  • TS2ACT: Few-Shot Human Activity Sensing with Cross-Modal Co-Learning

2023

  • Integrating Gaze and Mouse Via Joint Cross-Attention Fusion Net for Students' Activity Recognition in E-learning
  • VAX: Using Existing Video and Audio-based Activity Recognition Models to Bootstrap Privacy-Sensitive Sensors
  • LAUREATE: A Dataset for Supporting Research in Affective Computing and Human Memory Augmentation
  • MMTSA: Multi-Modal Temporal Segment Attention Network for Efficient Human Activity Recognition
  • HMGAN: A Hierarchical Multi-Modal Generative Adversarial Network Model for Wearable Human Activity Recognition
  • TAO: Context Detection from Daily Activity Patterns Using Temporal Analysis and Ontology
  • HAKE: Human Activity Knowledge Engine [link]
  • PhysiQ: Off-site Quality Assessment of Exercise in Physical Therapy [link]
  • SAMoSA: Sensing Activities with Motion and Subsampled Audio [link]
  • Physical-aware Cross-modal Adversarial Network for Wearable Sensor-based Human Action Recognition [link]
  • IMU2CLIP: Multimodal Contrastive Learning for IMU Motion Sensors from Egocentric Videos and Text
  • Real-time Context-Aware Multimodal Network for Activity and Activity-Stage Recognition from Team Communication in Dynamic Clinical Settings
  • X-CHAR: A Concept-based Explainable Complex Human Activity Recognition Model
  • Hierarchical Clustering-based Personalized Federated Learning for Robust and Fair Human Activity Recognition
  • AMIR: Active Multimodal Interaction Recognition from Video and Network Traffic in Connected Environments
  • Narrative-Based Visual Feedback to Encourage Sustained Physical Activity: A Field Trial of the WhoIsZuki Mobile Health Platform
  • Human Parsing with Joint Learning for Dynamic mmWave Radar Point Cloud
  • RF-CM: Cross-Modal Framework for RF-enabled Few-Shot Human Activity Recognition
  • PrISM-Tracker: A Framework for Multimodal Procedure Tracking Using Wearable Sensors and State Transition Information with User-Driven Handling of Errors and Uncertainty
  • Self-supervised Learning for Human Activity Recognition Using 700,000 Person-days of Wearable Data
  • GLOBEM: Cross-Dataset Generalization of Longitudinal Human Behavior Modeling
  • TransFloor: Transparent Floor Localization for Crowdsourcing Instant Delivery
  • Understanding the Mechanism of Through-Wall Wireless Sensing: A Model-based Perspective
  • Unveiling Causal Attention in Dogs' Eyes with Smart Eyewear
  • MHCCL: Masked Hierarchical Cluster-wise Contrastive Learning for Multivariate Time Series

2022

  • Self-Supervised Contrastive Pre-Training for Time Series via Time-Frequency Consistency
  • MaeFE: Masked Autoencoders Family of Electrocardiogram for Self-supervised Pre-training and Transfer Learning
  • A Simple Self-Supervised IMU Denoising Method For Inertial Aided Navigation
  • Adaptive Memory Networks with Self-supervised Learning for Unsupervised Anomaly Detection
  • MHCCL: Masked Hierarchical Cluster-wise Contrastive Learning for Multivariate Time Series
  • FLAME: Federated Learning across Multi-device Environments
  • Longitudinal cardio-respiratory fitness prediction through wearables in free-living environments
  • Self-supervised transfer learning of physiological representations from free-living wearable data
  • Learning Generalizable Physiological Representations from Large-scale Wearable Data
  • Application-Driven AI Paradigm for Human Action Recognition
  • A hybrid accuracy-and energy-aware human activity recognition model in IoT environment
  • Predicting Performance Improvement of Human Activity Recognition Model by Additional Data Collection
  • SAMoSA: Sensing Activities with Motion and Subsampled Audio
  • Towards Ubiquitous Personalized Music Recommendation with Smart Bracelets
  • Towards a Dynamic Inter-Sensor Correlations Learning Framework for Multi-Sensor-Based Wearable Human Activity Recognition
  • Augmented Adversarial Learning for Human Activity Recognition with Partial Sensor Sets
  • Bootstrapping Human Activity Recognition Systems for Smart Homes from Scratch
  • Cosmo: Contrastive Fusion Learning with Small Data for Multimodal Human Activity Recognition
  • What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks?
  • ClusterFL: a similarity-aware federated learning system for human activity recognition
  • Human Action Recognition from Various Data Modalities: A Review (IEEE TPAMI 2022 (top AI Journal))
  • Semantic-Discriminative Mixup for Generalizable Sensor-based Cross-domain Activity Recognition
  • Are You Left Out?: An Efficient and Fair Federated Learning for Personalized Profiles on Wearable Devices of Inferior Networking Conditions
  • Progressive Cross-modal Knowledge Distillation for Human Action Recognition [link]
  • Leveraging Sound and Wrist Motion to Detect Activities of Daily Living with Commodity Smartwatches
  • I Want to Know Your Hand: Authentication on Commodity Mobile Phones Based on Your Hand's Vibrations
  • CSI:DeSpy: Enabling Effortless Spy Camera Detection via Passive Sensing of User Activities and Bitrate Variations
  • Acceleration-based Activity Recognition of Repetitive Works with Lightweight Ordered-work Segmentation Network
  • IF-ConvTransformer: A Framework for Human Activity Recognition Using IMU Fusion and ConvTransformer
  • Quali-Mat: Evaluating the Quality of Execution in Body-Weight Exercises with a Pressure Sensitive Sports Mat
  • Non-Bayesian Out-of-Distribution Detection Applied to CNN Architectures for Human Activity Recognition
  • Resource-Efficient Continual Learning for Sensor-Based Human Activity Recognition
  • Beyond the Gates of Euclidean Space: Temporal-Discrimination-Fusions and Attention-based Graph Neural Network for Human Activity Recognition
  • LiteHAR: Lightweight Human Activity Recognition from WiFi Signals with Random Convolution Kernels
  • A Review on Topological Data Analysis in Human Activity Recognition
  • Deep CNN-LSTM with Self-Attention Model for Human Activity Recognition using Wearable Sensor
  • Zero-Shot Learning for IMU-Based Activity Recognition Using Video Embeddings
  • Deep Transfer Learning with Graph Neural Network for Sensor-Based Human Activity Recognition
  • Meta-learning meets the Internet of Things: Graph prototypical models for sensor-based human activity recognition
  • Federated Multi-Task Learning
  • Unsupervised Human Activity Recognition Using the Clustering Approach: A Review
  • Hierarchical Self Attention Based Autoencoder for Open-Set Human Activity Recognition
  • Assessing the State of Self-Supervised Human Activity Recognition using Wearables
  • Robust and Efficient Uncertainty-Aware Biosignal Classification via Early Exit Ensembles
  • Machine learning detects altered spatial navigation features in outdoor behaviour of Alzheimer’s disease patients
  • Evaluating Contrastive Learning on Wearable Timeseries for Downstream Clinical Outcomes
  • Segmentation-free Heart Pathology Detection Using Deep Learning
  • Anticipatory Detection of Compulsive Body-focused Repetitive Behaviors with Wearables
  • Method and system for automatic extraction of virtual on-body inertial measurement units
  • Enhancing the Security & Privacy of Wearable Brain-Computer Interfaces
  • Detecting Smartwatch-Based Behavior Change in Response to a Multi-Domain Brain Health Intervention
  • ColloSSL: Collaborative Self-Supervised Learning for Human Activity Recognition
  • Multi-scale Deep Feature Learning for Human Activity Recognition Using Wearable Sensors
  • Improving Wearable-Based Activity Recognition Using Image Representations
  • Multi-sensor information fusion based on machine learning for real applications in human activity recognition: State-of-the-art and research challenges
  • A recurrent neural network architecture to model physical activity energy expenditure in older people
  • Application of artificial intelligence in wearable devices: Opportunities and challenges
  • A Close Look into Human Activity Recognition Models using Deep Learning