
Awesome Information Bottleneck Paper List

In memory of Professor Naftali Tishby.
Last updated in October 2022. <br>

0. Introduction

To learn, you must forget. This may be one of the most intuitive lessons from Naftali Tishby's Information Bottleneck (IB) method, which grew out of the fundamental rate-distortion tradeoff in Claude Shannon's information theory and later offered a creative explanation of the learning behaviors of deep neural networks through the fitting & compression framework.
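
For readers new to the formalism, the classical IB objective makes this tradeoff explicit: compress the input $X$ into a representation $T$ while retaining information about the target $Y$ (a minimal restatement following Tishby, Pereira & Bialek, 2000):

```latex
% Classical Information Bottleneck Lagrangian:
% minimize over the stochastic encoder p(t|x); beta controls the tradeoff
% between compression, I(X;T), and prediction, I(T;Y).
\min_{p(t \mid x)} \; \mathcal{L}_{\mathrm{IB}} \;=\; I(X;T) \;-\; \beta \, I(T;Y)
```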

It has been four years since the dazzling talk on Opening the Black Box of Deep Neural Networks, and more than twenty years since the first paper on the Information Bottleneck method. It is time for us to take a look back, to celebrate what has been established, and to prepare for the future.

This repository is organized as follows:

All papers are selected and sorted by topic/conference/year/importance. Please send a pull request if you would like to add any paper.

We also made slides on the theory, applications, and controversy surrounding the initial Information Bottleneck principle in deep learning (p.s., some of the controversy has been addressed by recent publications, e.g., Lorenzen et al., 2021).

1. Classics

Agglomerative Information Bottleneck [link]
Noam Slonim, Naftali Tishby
NIPS, 1999 <br>

🐤 The Information Bottleneck Method [link]
Naftali Tishby, Fernando C. Pereira, William Bialek
Preprint, 2000 <br>

Predictability, complexity and learning [link]
William Bialek, Ilya Nemenman, Naftali Tishby
Neural Computation, 2001 <br>

Sufficient Dimensionality Reduction: A novel analysis principle [link]
Amir Globerson, Naftali Tishby
ICML, 2002 <br>

The information bottleneck: Theory and applications [link]
Noam Slonim
PhD Thesis, 2002 <br>

An Information Theoretic Tradeoff between Complexity and Accuracy [link]
Ran Gilad-Bachrach, Amir Navot, Naftali Tishby
COLT, 2003 <br>

Information Bottleneck for Gaussian Variables [link]
Gal Chechik, Amir Globerson, Naftali Tishby, Yair Weiss
NIPS, 2003 <br>

Information and Fitness [link]
Samuel F. Taylor, Naftali Tishby and William Bialek
Preprint, 2007 <br>

Efficient representation as a design principle for neural coding and computation [link]
William Bialek, Rob R. de Ruyter van Steveninck, and Naftali Tishby
Preprint, 2007 <br>

The Information Bottleneck Revisited or How to Choose a Good Distortion Measure [link]
Peter Harremoes and Naftali Tishby
ISIT, 2007 <br>

🐤 Learning and Generalization with the Information Bottleneck [link]
Ohad Shamir, Sivan Sabato, Naftali Tishby
Journal of Theoretical Computer Science, 2009 <br>

🐤 Information-Theoretic Bounded Rationality [link]
Pedro A. Ortega, Daniel A. Braun, Justin Dyer, Kee-Eung Kim, Naftali Tishby
Preprint, 2015 <br>

🐤 Opening the Black Box of Deep Neural Networks via Information [link]
Ravid Shwartz-Ziv, Naftali Tishby
ICRI, 2017 <br>

2. Reviews

Information Bottleneck and its Applications in Deep Learning [link]
Hassan Hafez-Kolahi, Shohreh Kasaei
Preprint, 2019 <br>

The Information Bottleneck Problem and Its Applications in Machine Learning [link]
Ziv Goldfeld, Yury Polyanskiy
Preprint, 2020 <br>

On the Information Bottleneck Problems: Models, Connections, Applications and Information Theoretic Views [link]
Abdellatif Zaidi, Iñaki Estella-Aguerri, Shlomo Shamai
Entropy, 2020 <br>

Information Bottleneck: Theory and Applications in Deep Learning [link]
Bernhard C. Geiger, Gernot Kubin
Entropy, 2020 <br>

On Information Plane Analyses of Neural Network Classifiers – A Review [link]
Bernhard C. Geiger
Preprint, 2021

Table 1 (p. 2) gives a nice summary of how different architectures & MI estimators affect the existence of the compression phase and the causal link between compression and generalization.

A Critical Review of Information Bottleneck Theory and its Applications to Deep Learning [link]
Mohammad Ali Alomrani
Preprint, 2021 <br>

Information Flow in Deep Neural Networks [link]
Ravid Shwartz-Ziv
PhD Thesis, 2022 <br>

3. Theories

Gaussian Lower Bound for the Information Bottleneck Limit [link]
Amichai Painsky, Naftali Tishby
JMLR, 2017 <br>

Information-theoretic analysis of generalization capability of learning algorithms [link]
Aolin Xu, Maxim Raginsky
NeurIPS, 2017 <br>

Caveats for information bottleneck in deterministic scenarios [link] [ICLR version]
Artemy Kolchinsky, Brendan D. Tracey, Steven Van Kuyk
UAI, 2018 <br>

🐤🔥 Emergence of Invariance and Disentanglement in Deep Representations [link]
Alessandro Achille, Stefano Soatto
JMLR, 2018

  • This paper is a gem. At a high level, it shows the relationship between generalization and the information bottleneck in weights (IIW).
    • Be aware of how this differs from Tishby's original information bottleneck, which is defined on representations.
  • Specifically, if we approximate SGD by a stochastic differential equation, we can see that SGD naturally leads to the minimization of IIW.
  • The authors argue that an optimal representation should have 4 properties: sufficiency, minimality, invariance, and disentanglement. Notably, the last two properties emerge naturally from minimizing the mutual information between the dataset and the network weights, i.e., the IIW (a schematic form of this objective is sketched after this entry).
<br>
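
As a rough guide, a schematic form of this weight-level objective (a sketch in simplified notation, not the paper's exact formulation) adds a penalty on the information the weights carry about the dataset $\mathcal{D}$ to the usual fitting loss:

```latex
% Schematic information-in-weights (IIW) objective (after Achille & Soatto, 2018):
% fit the data while limiting how much the weights memorize about the dataset D.
\min_{q(w \mid \mathcal{D})} \;
  \underbrace{\mathbb{E}_{q(w \mid \mathcal{D})}\!\left[ H_{p,q}(\mathcal{D} \mid w) \right]}_{\text{cross-entropy (fit)}}
  \;+\; \beta \, \underbrace{I(w; \mathcal{D})}_{\text{information in weights}}
```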

On the Information Bottleneck Theory of Deep Learning [link]
Andrew Michael Saxe, Yamini Bansal, Joel Dapello, Madhu Advani, Artemy Kolchinsky, Brendan Daniel Tracey, David Daniel Cox
ICLR, 2018 <br>

The Dual Information Bottleneck [link]
Zoe Piran, Ravid Shwartz-Ziv, Naftali Tishby
Preprint, 2019 <br>

🐤 Learnability for the Information Bottleneck [link] [slides] [poster] [journal version] [workshop version]
Tailin Wu, Ian Fischer, Isaac L. Chuang, Max Tegmark
UAI, 2019 <br>

🐤 Phase Transitions for the Information Bottleneck in Representation Learning [link] [video]
Tailin Wu, Ian Fischer
ICLR, 2020 <br>

Bottleneck Problems: Information and Estimation-Theoretic View [link]
Shahab Asoodeh, Flavio Calmon
Preprint, 2020 <br>

Information Bottleneck: Exact Analysis of (Quantized) Neural Networks [link]
Stephan Sloth Lorenzen, Christian Igel, Mads Nielsen
Preprint, 2021

  • This paper shows that different ways of binning when computing the mutual information lead to qualitatively different results (a toy illustration of such a binning-based estimate is sketched after this entry).
  • It then confirms the original IB paper's fitting & compression phases using quantized nets, for which the mutual information can be computed exactly.
<br>
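
For context, here is a minimal sketch of the kind of binning-based estimate used in information-plane analyses (the function name, bin counts, and synthetic data are illustrative, not from the paper). For a deterministic network, $H(T \mid X) = 0$, so the estimate of $I(X;T)$ collapses to the entropy of the discretized activations, which is exactly why the binning choice dominates the outcome:

```python
import numpy as np

def binned_mi_xt(activations, n_bins=30):
    """Estimate I(X; T) by uniformly binning hidden activations.

    For a deterministic network H(T | X) = 0, so I(X; T) = H(T): the
    estimate is simply the entropy of the binned activation patterns.
    """
    lo, hi = activations.min(), activations.max()
    edges = np.linspace(lo, hi, n_bins + 1)
    binned = np.digitize(activations, edges[1:-1])  # (n_samples, n_units) bin indices

    # Each row (one input's binned activation pattern) is one discrete symbol.
    _, counts = np.unique(binned, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# The same activations give very different "compression" readings per bin count.
rng = np.random.default_rng(0)
acts = np.tanh(rng.normal(size=(2000, 8)))
for bins in (10, 30, 100):
    print(f"{bins:>3} bins -> I(X;T) estimate: {binned_mi_xt(acts, bins):.2f} bits")
```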

Perturbation Theory for the Information Bottleneck [link]
Vudtiwat Ngampruetikorn, David J. Schwab
Preprint, 2021 <br>

PAC-Bayes Information Bottleneck [link]
Zifeng Wang, Shao-Lun Huang, Ercan Engin Kuruoglu, Jimeng Sun, Xi Chen, Yefeng Zheng
ICLR, 2022

  • This paper discusses using $I(w, S)$ instead of $I(T, X)$ as the information bottleneck.
  • However, activations should in effect play a crucial role in the network's generalization, yet they are not explicitly captured by $I(w, S)$.
<br>

4. Models

Deep Variational Information Bottleneck [link]
Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, Kevin Murphy
ICLR, 2017 <br>

The Deterministic Information Bottleneck [link] [UAI Version]
DJ Strouse, David J. Schwab
Neural Computation, 2017

This replaces the compression term of the original IB objective, the mutual information $I(X;T)$, with the entropy $H(T)$ (the two objectives are contrasted below).

<br>
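
For reference, the standard and deterministic IB objectives side by side (a minimal restatement; $\beta$ is the tradeoff parameter):

```latex
% Standard IB vs. Deterministic IB (Strouse & Schwab, 2017):
% DIB replaces the compression term I(X;T) with the entropy H(T),
% which drives the optimal encoder to be deterministic.
\mathcal{L}_{\mathrm{IB}}  \;=\; I(X;T) \;-\; \beta\, I(T;Y),
\qquad
\mathcal{L}_{\mathrm{DIB}} \;=\; H(T) \;-\; \beta\, I(T;Y)
```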

Learning Sparse Latent Representations with the Deep Copula Information Bottleneck [link]
Aleksander Wieczorek, Mario Wieser, Damian Murezzan, Volker Roth
ICLR, 2018 <br>

Generalization in Reinforcement Learning with Selective Noise Injection and Information Bottleneck [link]
Maximilian Igl, Kamil Ciosek, Yingzhen Li, Sebastian Tschiatschek, Cheng Zhang, Sam Devlin, Katja Hofmann
NeurIPS, 2019 <br>

Information bottleneck through variational glasses [link]
Slava Voloshynovskiy, Mouad Kondah, Shideh Rezaeifar, Olga Taran, Taras Holotyak, Danilo Jimenez Rezende
NeurIPS Bayesian Deep Learning Workshop, 2019 <br>

🐤 Variational Discriminator Bottleneck [link]
Xue Bin Peng, Angjoo Kanazawa, Sam Toyer, Pieter Abbeel, Sergey Levine
ICLR, 2019 <br>

Nonlinear Information Bottleneck [link]
Artemy Kolchinsky, Brendan Tracey, David Wolpert
Entropy, 2019

This formulation shows better performance than VIB.

<br>

General Information Bottleneck Objectives and their Applications to Machine Learning [link]
Sayandev Mukherjee
Preprint, 2019

This paper synthesizes IB and Predictive IB, and provides a new variational bound.

<br>

🐤 Graph Information Bottleneck [link] [code] [slides]
Tailin Wu, Hongyu Ren, Pan Li, Jure Leskovec
NeurIPS, 2020 <br>

🐤 Learning Optimal Representations with the Decodable Information Bottleneck [link]
Yann Dubois, Douwe Kiela, David J. Schwab, Ramakrishna Vedantam
NeurIPS, 2020 <br>

🐤 Concept Bottleneck Models [link]
Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, Percy Liang
ICML, 2020 <br>

Disentangled Representations for Sequence Data using Information Bottleneck Principle [link] [talk]
Masanori Yamada, Heecheol Kim, Kosuke Miyoshi, Tomoharu Iwata, Hiroshi Yamakawa
ICML, 2020 <br>

🐤 IBA: Restricting the Flow: Information Bottlenecks for Attribution
Karl Schulz, Leon Sixt, Federico Tombari, Tim Landgraf
ICLR, 2020 <br>
