Awesome Information Bottleneck Paper List
In memory of Professor Naftali Tishby.
Last updated in October 2022.
0. Introduction
To learn, you must forget. This is perhaps the most intuitive lesson from Naftali Tishby's Information Bottleneck (IB) method, which grew out of the fundamental rate vs. distortion tradeoff in Claude Shannon's information theory, and later offered a creative explanation of the learning behavior of deep neural networks through the fitting & compression framework.
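For reference, the classical IB objective compresses the input $X$ into a representation $T$ while keeping $T$ predictive of the target $Y$, with a Lagrange multiplier $\beta$ setting the rate-distortion-style tradeoff (standard notation, not tied to any single paper in this list):

$$
\min_{p(t \mid x)} \; I(X; T) \;-\; \beta \, I(T; Y)
$$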
It has been four years since the dazzling talk on Opening the Black Box of Deep Neural Networks, and more than twenty years since the first paper on the Information Bottleneck method. It is time for us to take a look back, to celebrate what has been established, and to prepare for the future.
This repository is organized as follows:
- Classics
- Reviews
- Theories
- Models
- Applications (General)
- Applications (RL)
- Methods for Mutual Information Estimation (😣 MI is notoriously hard to estimate!)
- Other Information Theory Driven Work (verbose)
- Citation
All papers are selected and sorted by topic/conference/year/importance. Please send a pull request if you would like to add any paper.
We also made slides on the theory, applications, and controversies of the original Information Bottleneck principle in deep learning (p.s., some of the controversies have since been addressed by recent publications, e.g., Lorenzen et al., 2021).
1. Classics
Agglomerative Information Bottleneck [link]
Noam Slonim, Naftali Tishby
NIPS, 1999
🐤 The Information Bottleneck Method [link]
Naftali Tishby, Fernando C. Pereira, William Bialek
Preprint, 2000
Predictability, complexity and learning [link]
William Bialek, Ilya Nemenman, Naftali Tishby
Neural Computation, 2001
Sufficient Dimensionality Reduction: A novel analysis principle [link]
Amir Globerson, Naftali Tishby
ICML, 2002
The information bottleneck: Theory and applications [link]
Noam Slonim
PhD Thesis, 2002
An Information Theoretic Tradeoff between Complexity and Accuracy [link]
Ran Gilad-Bachrach, Amir Navot, Naftali Tishby
COLT, 2003
Information Bottleneck for Gaussian Variables [link]
Gal Chechik, Amir Globerson, Naftali Tishby, Yair Weiss
NIPS, 2003
Information and Fitness [link]
Samuel F. Taylor, Naftali Tishby and William Bialek
Preprint, 2007
Efficient representation as a design principle for neural coding and computation [link]
William Bialek, Rob R. de Ruyter van Steveninck, and Naftali Tishby
Preprint, 2007
The Information Bottleneck Revisited or How to Choose a Good Distortion Measure [link]
Peter Harremoes and Naftali Tishby
ISIT, 2007
🐤 Learning and Generalization with the Information Bottleneck [link]
Ohad Shamir, Sivan Sabato, Naftali Tishby
Theoretical Computer Science, 2009
🐤 Information-Theoretic Bounded Rationality [link]
Pedro A. Ortega, Daniel A. Braun, Justin Dyer, Kee-Eung Kim, Naftali Tishby
Preprint, 2015
🐤 Opening the Black Box of Deep Neural Networks via Information [link]
Ravid Shwartz-Ziv, Naftali Tishby
ICRI, 2017
2. Reviews
Information Bottleneck and its Applications in Deep Learning [link]
Hassan Hafez-Kolahi, Shohreh Kasaei
Preprint, 2019
The Information Bottleneck Problem and Its Applications in Machine Learning [link]
Ziv Goldfeld, Yury Polyanskiy
Preprint, 2020
On the Information Bottleneck Problems: Models, Connections, Applications and Information Theoretic Views [link]
Abdellatif Zaidi, Iñaki Estella-Aguerri, Shlomo Shamai
Entropy, 2020
Information Bottleneck: Theory and Applications in Deep Learning [link]
Bernhard C. Geiger, Gernot Kubin
Entropy, 2020
On Information Plane Analyses of Neural Network Classifiers – A Review [link]
Bernhard C. Geiger
Preprint, 2021
Table 1 (p. 2) gives a nice summary of how different architectures & MI estimators affect the existence of the compression phase and the causal link between compression and generalization.
A Critical Review of Information Bottleneck Theory and its Applications to Deep Learning [link]
Mohammad Ali Alomrani
Preprint, 2021
Information Flow in Deep Neural Networks [link]
Ravid Shwartz-Ziv
PhD Thesis, 2022
3. Theories
Gaussian Lower Bound for the Information Bottleneck Limit [link]
Amichai Painsky, Naftali Tishby
JMLR, 2017
Information-theoretic analysis of generalization capability of learning algorithms [link]
Aolin Xu, Maxim Raginsky
NeurIPS, 2017
Caveats for information bottleneck in deterministic scenarios [link] [ICLR version]
Artemy Kolchinsky, Brendan D. Tracey, Steven Van Kuyk
UAI, 2018
🐤🔥 Emergence of Invariance and Disentanglement in Deep Representations [link]
Alessandro Achille, Stefano Soatto
JMLR, 2018
- This paper is a gem. At a high level, it shows the relationship between generalization and the information bottleneck in weights (IIW).
- Note how this differs from Tishby's original definition of the information bottleneck on representations.
- Specifically, if we approximate SGD by a stochastic differential equation, we can see that SGD naturally leads to minimization of IIW.
- The authors argue that an optimal representation should have four properties: sufficiency, minimality, invariance, and disentanglement. Notably, the last two properties emerge naturally from minimizing the mutual information between the dataset and the network weights, i.e., the IIW (see the sketch below).
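As a rough sketch in the notation common to this line of work (not the paper's exact statement), the weight-level objective trades off fitting the training set $\mathcal{D}$ against the information the weights $w$ retain about it:

$$
\mathcal{L}\big(q(w \mid \mathcal{D})\big) \;=\; \underbrace{H_{p,q}(\mathcal{D} \mid w)}_{\text{training loss (cross-entropy)}} \;+\; \beta \, \underbrace{I(w; \mathcal{D})}_{\text{information in the weights (IIW)}}
$$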
On the Information Bottleneck Theory of Deep Learning [link]
Andrew Michael Saxe, Yamini Bansal, Joel Dapello, Madhu Advani, Artemy Kolchinsky, Brendan Daniel Tracey, David Daniel Cox
ICLR, 2018
The Dual Information Bottleneck [link]
Zoe Piran, Ravid Shwartz-Ziv, Naftali Tishby
Preprint, 2019
🐤 Learnability for the Information Bottleneck [link] [slides] [poster] [journal version] [workshop version]
Tailin Wu, Ian Fischer, Isaac L. Chuang, Max Tegmark
UAI, 2019
🐤 Phase Transitions for the Information Bottleneck in Representation Learning [link] [video]
Tailin Wu, Ian Fischer
ICLR, 2020
Bottleneck Problems: Information and Estimation-Theoretic View [link]
Shahab Asoodeh, Flavio Calmon
Preprint, 2020
Information Bottleneck: Exact Analysis of (Quantized) Neural Networks [link]
Stephan Sloth Lorenzen, Christian Igel, Mads Nielsen
Preprint, 2021
- This paper shows that different binning strategies used when computing the mutual information lead to qualitatively different results.
- It then confirms the original IB paper's fitting & compression phases using quantized nets, for which the mutual information can be computed exactly (see the binning sketch below).
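For context, here is a minimal sketch of the kind of binning-based MI estimate whose bin count drives the discrepancies discussed above; the function name, defaults, and equal-width binning scheme are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def binned_mutual_information(x_ids, activations, n_bins=30):
    """Estimate I(X; T) by discretizing hidden activations T into equal-width bins.

    x_ids:       integer id per sample (each training input treated as a distinct x).
    activations: array of shape (n_samples, n_units) holding one layer's activations.
    The estimate depends heavily on n_bins, which is exactly the sensitivity at issue.
    """
    # Discretize every unit's activation into n_bins equal-width bins.
    edges = np.linspace(activations.min(), activations.max(), n_bins + 1)
    codes = np.digitize(activations, edges[1:-1])            # (n_samples, n_units)
    # Collapse the per-unit bin codes into a single discrete symbol t per sample.
    _, t_ids = np.unique(codes, axis=0, return_inverse=True)
    # Empirical joint distribution over (x, t) and its marginals.
    joint = np.zeros((x_ids.max() + 1, t_ids.max() + 1))
    np.add.at(joint, (x_ids, t_ids), 1)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    pt = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ pt)[nz])))
```

Note that when every training sample is treated as a distinct $x$ and the network is deterministic, $I(X;T)$ reduces to the entropy of the binned representation, so the choice of `n_bins` directly controls the apparent amount of compression.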
Perturbation Theory for the Information Bottleneck [link]
Vudtiwat Ngampruetikorn, David J. Schwab
Preprint, 2021
PAC-Bayes Information Bottleneck [link]
Zifeng Wang, Shao-Lun Huang, Ercan Engin Kuruoglu, Jimeng Sun, Xi Chen, Yefeng Zheng
ICLR, 2022
- This paper discusses using $I(w; S)$ instead of $I(T; X)$ as the information bottleneck.
- However, activations arguably play a crucial role in a network's generalization, yet they are not explicitly captured by $I(w; S)$.
4. Models
Deep Variational Information Bottleneck [link]
Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, Kevin Murphy
ICLR, 2017
The Deterministic Information Bottleneck [link] [UAI Version]
DJ Strouse, David J. Schwab
Neural Computation, 2017
This replaces the compression term $I(X;T)$ in the original IB objective with the entropy $H(T)$ (see the sketch below).
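Concretely, the two objectives differ only in the compression term (a sketch in the standard notation used above):

$$
\text{IB: } \min_{p(t \mid x)} I(X;T) - \beta\, I(T;Y)
\qquad\quad
\text{DIB: } \min_{p(t \mid x)} H(T) - \beta\, I(T;Y)
$$

Since $H(T) = I(X;T) + H(T \mid X)$, penalizing $H(T)$ also discourages stochasticity in the encoder, and the optimal DIB encoder turns out to be deterministic, hence the name.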
Learning Sparse Latent Representations with the Deep Copula Information Bottleneck [link]
Aleksander Wieczorek, Mario Wieser, Damian Murezzan, Volker Roth
ICLR, 2018
Generalization in Reinforcement Learning with Selective Noise Injection and Information Bottleneck [link]
Maximilian Igl, Kamil Ciosek, Yingzhen Li, Sebastian Tschiatschek, Cheng Zhang, Sam Devlin, Katja Hofmann
NeurIPS, 2019
Information bottleneck through variational glasses [link]
Slava Voloshynovskiy, Mouad Kondah, Shideh Rezaeifar, Olga Taran, Taras Holotyak, Danilo Jimenez Rezende
NeurIPS Bayesian Deep Learning Workshop, 2019
🐤 Variational Discriminator Bottleneck [link]
Xue Bin Peng, Angjoo Kanazawa, Sam Toyer, Pieter Abbeel, Sergey Levine
ICLR, 2019
Nonlinear Information Bottleneck [link]
Artemy Kolchinsky, Brendan Tracey, David Wolpert
Entropy, 2019
This formulation shows better performance than VIB.
General Information Bottleneck Objectives and their Applications to Machine Learning [link]
Sayandev Mukherjee
Preprint, 2019
This paper synthesizes IB and Predictive IB, and provides a new variational bound.
🐤 Graph Information Bottleneck [link] [code] [slides]
Tailin Wu, Hongyu Ren, Pan Li, Jure Leskovec
NeurIPS, 2020
🐤 Learning Optimal Representations with the Decodable Information Bottleneck [link]
Yann Dubois, Douwe Kiela, David J. Schwab, Ramakrishna Vedantam
NeurIPS, 2020
🐤 Concept Bottleneck Models [link]
Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, Percy Liang
ICML, 2020
Disentangled Representations for Sequence Data using Information Bottleneck Principle [link] [talk]
Masanori Yamada, Heecheol Kim, Kosuke Miyoshi, Tomoharu Iwata, Hiroshi Yamakawa
ICML, 2020
🐤 IBA: Restricting the Flow: Information Bottlenecks for Attribution