A curated list of resources for Learning with Noisy Labels
2009-NIPS - Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. [Paper] [Code]
2009-ICML - Supervised learning from multiple experts: whom to trust when everyone lies a bit. [Paper]
2011-NIPS - Bayesian Bias Mitigation for Crowdsourcing. [Paper]
2012-ICML - Learning to Label Aerial Images from Noisy Data. [Paper]
2013-NIPS - Learning with Noisy Labels. [Paper]
2014-ML - Learning from multiple annotators with varying expertise. [Paper]
2014 - A Comprehensive Introduction to Label Noise. [Paper]
2014 - Learning from Noisy Labels with Deep Neural Networks. [Paper]
2015-ICLR_W - Training Convolutional Networks with Noisy Labels. [Paper] [Code]
2015-CVPR - Learning from Massive Noisy Labeled Data for Image Classification. [Paper] [Code]
2015-CVPR - Visual recognition by learning from web data: A weakly supervised domain generalization approach. [Paper] [Code]
2015-ICLR_W - Training Deep Neural Networks on Noisy Labels with Bootstrapping. [Paper] [Loss-Code-Unofficial-1] [Loss-Code-Unofficial-2] [Code-Keras] (a minimal sketch of this loss appears after this list)
2015-ICCV - Webly supervised learning of convolutional networks. [Paper] [Project Page]
2015-TPAMI - Classification with noisy labels by importance reweighting. [Paper] [Code]
2015-NIPS - Learning with Symmetric Label Noise: The Importance of Being Unhinged. [Paper] [Loss-Code-Unofficial]
2015-Arxiv - Making Risk Minimization Tolerant to Label Noise. [Paper]
2015 - Learning Discriminative Reconstructions for Unsupervised Outlier Removal. [Paper] [Code]
2015-TNNLS - RBoost: Label Noise-Robust Boosting Algorithm Based on a Nonconvex Loss Function and the Numerically Stable Base Learners. [Paper]
2016-AAAI - Robust semi-supervised learning through label aggregation. [Paper]
2016-ICLR - Auxiliary Image Regularization for Deep CNNs with Noisy Labels. [Paper] [Code]
2016-CVPR - Seeing through the Human Reporting Bias: Visual Classifiers from Noisy Human-Centric Labels. [Paper] [Code]
2016-ICML - Loss factorization, weakly supervised learning and label noise robustness. [Paper]
2016-RL - On the convergence of a family of robust losses for stochastic gradient descent. [Paper]
2016-NC - Noise detection in the Meta-Learning Level. [Paper] [Additional information]
2016-ECCV - The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition. [Paper] [Project Page]
2016-ICASSP - Training deep neural-networks based on unreliable labels. [Paper] [Poster] [Code-Unofficial]
2016-ICDM - Learning deep networks from noisy labels with dropout regularization. [Paper] [Code]
2016-KBS - A robust multi-class AdaBoost algorithm for mislabeled noisy data. [Paper]
2017-AAAI - Robust Loss Functions under Label Noise for Deep Neural Networks. [Paper]
2017-PAKDD - On the Robustness of Decision Tree Learning under Label Noise. [Paper]
2017-ICLR - Training deep neural-networks using a noise adaptation layer. [Paper] [Code]
2017-ICLR - Who Said What: Modeling Individual Labelers Improves Classification. [Paper] [Code]
2017-CVPR - Making Deep Neural Networks Robust to Label Noise: a Loss Correction Approach. [Paper] [Code] (a minimal sketch of the forward correction appears after this list)
2017-CVPR - Learning From Noisy Large-Scale Datasets With Minimal Supervision. [Paper]
2017-CVPR - Lean crowdsourcing: Combining humans and machines in an online system. [Paper] [Code]
2017-CVPR - Attend in groups: a weakly-supervised deep learning framework for learning from web data. [Paper] [Code]
2017-ICML - Robust Probabilistic Modeling with Bayesian Data Reweighting. [Paper] [Code]
2017-ICCV - Learning From Noisy Labels With Distillation. [Paper] [Code]
2017-NIPS - Toward Robustness against Label Noise in Training Deep Discriminative Neural Networks. [Paper]
2017-NIPS - Active bias: Training more accurate neural networks by emphasizing high variance samples. [Paper] [Code]
2017-NIPS - Decoupling "when to update" from "how to update". [Paper] [Code]
2017-IEEE-TIFS - A Light CNN for Deep Face Representation with Noisy Labels. [Paper] [Code-Pytorch] [Code-Keras] [Code-Tensorflow]
2017-TNNLS - Improving Crowdsourced Label Quality Using Noise Correction. [Paper]
2017-ML - Learning to Learn from Weak Supervision by Full Supervision. [Paper] [Code]
2017-ML - Avoiding your teacher's mistakes: Training neural networks with controlled weak supervision. [Paper]
2017-Arxiv - Deep Learning is Robust to Massive Label Noise. [Paper]
2017-Arxiv - Fidelity-weighted learning. [Paper]
2017 - Self-Error-Correcting Convolutional Neural Network for Learning with Noisy Labels. [Paper]
2017-Arxiv - Learning with confident examples: Rank pruning for robust classification with noisy labels. [Paper] [Code]
2017-Arxiv - Regularizing neural networks by penalizing confident output distributions. [Paper]
2017 - Learning with Auxiliary Less-Noisy Labels. [Paper]
2018-AAAI - Deep learning from crowds. [Paper]
2018-ICLR - mixup: Beyond Empirical Risk Minimization. [Paper] [Code] (a minimal sketch appears after this list)
2018-ICLR - Learning From Noisy Singly-labeled Data. [Paper] [Code]
2018-ICLR_W - How Do Neural Networks Overcome Label Noise? [Paper]
2018-CVPR - CleanNet: Transfer Learning for Scalable Image Classifier Training with Label Noise. [Paper] [Code]
2018-CVPR - Joint Optimization Framework for Learning with Noisy Labels. [Paper] [Code] [Code-Unofficial-Pytorch]
2018-CVPR - Iterative Learning with Open-set Noisy Labels. [Paper] [Code]
2018-ICML - MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels. [Paper] [Code]
2018-ICML - Learning to Reweight Examples for Robust Deep Learning. [Paper] [Code] [Code-Unofficial-PyTorch]
2018-ICML - Dimensionality-Driven Learning with Noisy Labels. [Paper] [Code]
2018-ECCV - CurriculumNet: Weakly Supervised Learning from Large-Scale Web Images. [Paper] [Code]
2018-ECCV - Learning with Biased Complementary Labels. [Paper] [Code]
2018-ISBI - Training a neural network based on unreliable human annotation of medical images. [Paper]
2018-WACV - Iterative Cross Learning on Noisy Labels. [Paper]
2018-WACV - A semi-supervised two-stage approach to learning from noisy labels.
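
For quick reference, a few of the loss-level techniques above reduce to a handful of lines of code. The sketches below are unofficial and assume a standard multi-class PyTorch setup with raw logits and integer-coded noisy labels; they illustrate the core idea only, not the authors' full training recipes. First, the soft bootstrapping loss of Reed et al. (2015), with beta = 0.95 as in the paper's soft variant:

```python
# Unofficial sketch of the soft bootstrapping loss (Reed et al., ICLR 2015 workshop).
# The noisy one-hot target is blended with the model's own prediction, so confident
# model beliefs can partially override suspect labels.
import torch
import torch.nn.functional as F

def soft_bootstrapping_loss(logits: torch.Tensor, noisy_targets: torch.Tensor,
                            beta: float = 0.95) -> torch.Tensor:
    log_probs = F.log_softmax(logits, dim=1)                     # log q(y|x)
    probs = log_probs.exp()                                      # q(y|x)
    one_hot = F.one_hot(noisy_targets, logits.size(1)).float()   # noisy label
    blended = beta * one_hot + (1.0 - beta) * probs              # blended target
    return -(blended * log_probs).sum(dim=1).mean()
```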
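The forward loss correction of Patrini et al. (2017) is similarly compact, assuming the noise-transition matrix T is already known or has been estimated elsewhere (estimating T is the harder part and is not shown here):

```python
# Unofficial sketch of forward loss correction (Patrini et al., CVPR 2017).
# T[i, j] = P(observed label j | true label i); in practice T must be estimated.
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits: torch.Tensor, noisy_targets: torch.Tensor,
                           T: torch.Tensor) -> torch.Tensor:
    clean_probs = F.softmax(logits, dim=1)   # model's estimate of the clean label
    noisy_probs = clean_probs @ T            # implied distribution over observed labels
    return F.nll_loss(torch.log(noisy_probs + 1e-12), noisy_targets)
```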
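Finally, mixup (Zhang et al., 2018), which trains on convex combinations of example pairs and of their labels; a minimal sketch of one batch's construction and loss:

```python
# Unofficial sketch of mixup training (Zhang et al., ICLR 2018): each batch is
# replaced by convex combinations of shuffled example pairs, and the loss is the
# matching combination of the two cross-entropies.
import numpy as np
import torch
import torch.nn.functional as F

def mixup_batch(x: torch.Tensor, y: torch.Tensor, alpha: float = 0.2):
    lam = float(np.random.beta(alpha, alpha)) if alpha > 0 else 1.0
    index = torch.randperm(x.size(0))
    mixed_x = lam * x + (1.0 - lam) * x[index]
    return mixed_x, y, y[index], lam

def mixup_loss(logits: torch.Tensor, y_a: torch.Tensor, y_b: torch.Tensor, lam: float):
    return lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)
```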