Chelsea Sidrane & Ryan Holmdahl
Since the advent of the deep learning era in roughly 2012 A.D. [9], researchers and practitioners alike have sought theory to understand the massive success of the technique. Naftali Tishby and his collaborators theorized that the information bottleneck principle might very well explain this "deep learning" stuff [1, 2, 3].
The Information Bottleneck (IB) principle relies on a few ideas that we will define. The first is compression. To compress a piece of information, such as an image, a text document, or a sequence of bits, is to represent it in an abbreviated fashion, usually with the help of a code. We take our information, encode it using our short and efficient code, perhaps store it or transmit it, and, when we need to, decode the information back to its original form [4].
A question we must answer before we begin compression is: which part of the information do we care about? For example, if I write the sentence “The quick brown fox jumped over the lazy dog” on a whiteboard:

I might assume that what really matters are the symbolic English characters, and that transcribing it into typed text preserves the information content — but this wouldn’t be the case if we were trying to predict who wrote the sentence on the board. In handwriting analysis, the way I cross my ‘t’ and the way I loop my ‘p’ are relevant pieces of information that would be lost by transcribing the sentence into typed text.
The concept of a sufficient statistic captures this idea well [4]. Let's call me, the person who wrote the sentence on the board, $Y$, and the sentence that I wrote, a sample of my handwriting, $X$. Now let's take a picture of the sentence and call that picture a function of the sample, $T = f(X)$. If the picture is high quality enough to do forensic handwriting analysis on and determine that I wrote the sentence, then we can say that $T$ is a sufficient statistic of $X$ for $Y$. The picture ($T$) captures everything about me ($Y$) that the writing on the board ($X$) does. In contrast, a typed-up version of the text I wrote, $S$, would not be as good for predicting my identity ($Y$) as the actual writing on the board ($X$) would be. This means that the typed-up version of the text ($S$) is not a sufficient statistic of the writing on the board ($X$) for predicting the author ($Y$).
Sometimes we don't want just any sufficient statistic; we want a minimal sufficient statistic, say, $M$. All this means is that $M$ is a sufficient statistic for $Y$ while using as few bits as possible, or maximally compressing $X$ [4]. For the words on the whiteboard, it's perhaps sufficient to only see the 't' and 'p' that I wrote to positively identify me as the writer, meaning that $X$ contains a lot of unnecessary information and that $M$ would be a picture of just those two letters.
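For readers who like a formal statement, the textbook definitions from [4] can be written in the notation above; nothing here is specific to the whiteboard example, and this is a standard formulation rather than anything new:

```latex
% T = f(X) is a sufficient statistic of X for Y if, once T is known,
% the raw sample X carries no additional information about Y:
p(x \mid t, y) = p(x \mid t) \quad \text{for all } x, t, y.

% A minimal sufficient statistic M is a sufficient statistic that can be
% computed from every other sufficient statistic T, i.e. it keeps only
% what every sufficient statistic must keep:
M = g(T) \ \text{for some function } g, \ \text{for every sufficient statistic } T.
```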
The last concept we'll introduce is that of mutual information. Mutual information essentially tells you how much information two random variables share. If knowing one random variable, $X$, tells you nothing useful about the other, $Y$, then their mutual information $I(X; Y)$ is zero [4]. For example, if I flip two coins and tell you the first one landed heads up, this doesn't help you figure out the second coin, because the flips are independent events and have zero mutual information. However, if I were to draw two cards from the top of a deck and show you that the first is the Queen of Hearts, then you know the second card is not the Queen of Hearts, and is now slightly less likely to be a Queen than a card of any other rank, so there is non-zero, positive mutual information between the two events.
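To make the coin and card examples concrete, here is a small Python sketch; the `mutual_information` helper is our own illustration, not a library function, and it just evaluates the definition of mutual information on a joint probability table:

```python
import numpy as np

def mutual_information(joint):
    """I(A; B) in bits, computed directly from a joint probability table
    p(a, b) given as a 2-D array."""
    pa = joint.sum(axis=1, keepdims=True)   # marginal p(a)
    pb = joint.sum(axis=0, keepdims=True)   # marginal p(b)
    nz = joint > 0                          # skip log(0) terms
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (pa * pb)[nz])))

# Two fair, independent coin flips: knowing one tells you nothing about the other.
coins = np.full((2, 2), 0.25)
print(mutual_information(coins))            # 0.0 bits

# Two cards dealt from the top of a shuffled 52-card deck (no replacement):
# p(c1, c2) = 1/(52*51) when c1 != c2, and 0 when c1 == c2.
cards = np.full((52, 52), 1.0 / (52 * 51))
np.fill_diagonal(cards, 0.0)
print(mutual_information(cards))            # ~0.028 bits: small but positive
```

The card example comes out to roughly 0.03 bits: seeing the first card tells you a little, but only a little, about the second.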
Now that we've discussed compression, sufficient statistics, and mutual information, we can get back to the Information Bottleneck theory [1]. It's a broadly applicable idea, but we will describe it in the context of a neural network [2,3]. For a classification task, we have a class $Y$, and we draw a sample from this class, which we call $X$. Since we're performing classification, $Y$ represents what is relevant about $X$. We feed $X$ into our neural network, producing activations at each layer of the network. We call the activations of the $i$-th layer $T_i$. The last layer produces a predicted class label, $\hat{Y}$. If the network follows the IB theory, then each $T_i$ attempts to compress the input $X$ as much as possible while retaining as much information as possible about the relevant variable $Y$ [1,2,3]. The resulting $T_i$ may not be a true sufficient statistic, but it would strive to approach one.

Figure: The flow of information through a neural network. Source: [3]
The IB theory states that this approximate minimally sufficient statistic is produced by maximizing the mutual information between the compressed representation and the variable defining relevance ($I(T_i; Y)$), and by minimizing the mutual information between the compressed representation and the input ($I(X; T_i)$) [1,2,3].
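Written as a single objective, this trade-off is the IB Lagrangian from [1]: writing $T$ for a generic compressed representation, one searches over stochastic encoders $p(t \mid x)$ to solve

```latex
\min_{p(t \mid x)} \; I(X; T) \;-\; \beta \, I(T; Y)
```

where the Lagrange multiplier $\beta > 0$ controls how much relevant information about $Y$ is retained relative to how aggressively $X$ is compressed.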
The theory has intuitive appeal as a description of how neural networks function: by definition, a classification network tries to learn some representation of the input from which good predictions of the class label can be made.
Tishby proposed that the theory of classification networks as IB models could be evaluated by measuring the mutual information $I(T_i; Y)$ between the intermediate layer representations $T_i$ and the true label $Y$, and the mutual information $I(X; T_i)$ between the intermediate representations $T_i$ and the input $X$ [2,3]. According to Tishby, training that begins from random weights should increase $I(T_i; Y)$ over all layers. Training should also result in learned weights such that $I(X; T_i)$ decreases as one moves from early hidden layers to later ones. The maximization of $I(T_i; Y)$ describes how, as the network trains, it learns intermediate representations $T_i$ that approach a sufficient statistic, such that $I(T_i; Y) \approx I(X; Y)$. The minimization of $I(X; T_i)$ describes how the network learns intermediate representations that shed "irrelevant features" of the input $X$.
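For reference, the layered structure already imposes an ordering on these quantities: as noted in [2,3], the network forms a Markov chain, so the data processing inequality guarantees that no layer can hold more information about $X$ or about $Y$ than the layer before it. The IB predictions above are therefore claims about how training moves these quantities within that ordering, not about the ordering itself.

```latex
Y \rightarrow X \rightarrow T_1 \rightarrow T_2 \rightarrow \cdots \rightarrow T_L \rightarrow \hat{Y}
\;\;\Longrightarrow\;\;
I(X; T_1) \ge I(X; T_2) \ge \cdots \ge I(X; T_L)
\quad\text{and}\quad
I(Y; T_1) \ge I(Y; T_2) \ge \cdots \ge I(Y; T_L).
```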
Tishby originally proposed the Information Bottleneck theory in a 2000 paper [1], but formalized the theory as applied to deep learning in a 2015 paper [2]. He and colleagues then published a paper with experiments in 2017 [3] that sparked a broad debate about how the Information Bottleneck theory may be relevant to deep learning.
This debate has raised a number of interesting questions, such as:
How do you meaningfully measure mutual information in a neural network?
This has been one of the more polarizing topics in the IB debate. The answer depends on what the true mutual information between the input and the hidden layers actually is. If the network is deterministic, then:
1. If the input $X$ is discrete, Amjad and Geiger show that the mutual information $I(X; T_i)$ is piecewise constant as you move along the layers from input to output [5].
2. If the input $X$ is discrete and the mappings from an input to the network's hidden layer activations are invertible, which Goldfeld et al argue is true for any neural network that uses sigmoid or tanh activation functions, then $X$ and $T_i$ are each fully informative of the other and $I(X; T_i)$ is completely constant as you move along the layers from input to output [6].
3. If the input $X$ is continuous, the mutual information $I(X; T_i)$ is infinite, because the conditional density $p(t_i \mid x)$ is a delta function at $t_i = f_i(x)$ (where $f_i$ is the deterministic map from input to layer $i$), as Saxe et al demonstrate [7].
In cases (2) and (3), it is argued that some source of noise in the hidden layers is necessary to give $I(X; T_i)$ a meaningful value. Tishby et al introduce noise by binning the activations, assigning them to buckets based on their value and in doing so reducing many distinct activation values to a single one. Goldfeld et al instead add small random values to each neuron's activation during training so that the mapping is no longer invertible, arguing that the noise must be part of the network's function rather than applied after the fact, as binning is. Achille and Soatto employ a similar method, adding random noise to the network's weights instead of its activations [8]. Ultimately, it remains an open question how best to derive a value of $I(X; T_i)$ that meaningfully captures compression in the neural networks employed in practice, which generally use neither of these noise sources and rely on non-saturating activations like the rectified linear unit (ReLU). It also remains to be seen whether mutual information is even the best quantity to measure in order to understand neural networks from an information-theoretic perspective.
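As a concrete illustration of the binning idea, here is a rough Python sketch. The function name, the fixed number of equal-width bins, and the choice to treat every distinct training input as its own value of $X$ are our simplifications for the post, not the exact procedure used in [3]:

```python
import numpy as np
from collections import Counter

def binned_mutual_information(x_ids, activations, n_bins=30):
    """Estimate I(X; T_i) by discretizing one layer's activations into
    equal-width bins. x_ids holds one hashable identifier per input sample;
    activations is an (n_samples, n_units) array from that hidden layer."""
    edges = np.linspace(activations.min(), activations.max(), n_bins + 1)
    binned = np.digitize(activations, edges[1:-1])     # bucket index per unit
    t_ids = [tuple(row) for row in binned]             # one discrete T per sample

    n = len(x_ids)
    p_x = Counter(x_ids)
    p_t = Counter(t_ids)
    p_xt = Counter(zip(x_ids, t_ids))

    # I(X; T) = sum_{x,t} p(x,t) * log2( p(x,t) / (p(x) * p(t)) )
    return sum(
        (c / n) * np.log2((c / n) / ((p_x[x] / n) * (p_t[t] / n)))
        for (x, t), c in p_xt.items()
    )
```

Note that when every input is distinct and the network is deterministic, this estimate collapses to the entropy of the binned activations, which is part of why critics argue that the "compression" measured this way depends heavily on the choice of bins.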
How does compression of inputs relate to model generalization?
A central idea in IB theory for neural networks is that compression of the inputs, captured by $I(X; T_i)$, helps the network to generalize. Intuitively, this makes sense: the less a hidden layer's activations tell you about the value of $X$, the more "unnecessary" features it has shed about $X$, and the fewer "unnecessary" features the model uses, the less prone it is to overfitting. While logical, this is a difficult claim to actually prove, and it would seemingly require showing a consistent correlation between lower $I(X; T_i)$ and higher model performance on withheld samples across a variety of different models.
How does clustering of activations relate to model generalization?
Goldfeld et al argue that the source of compression in neural networks is clustering of hidden layer activations. More specifically, they argue that an estimate of $I(X; T_i)$ is mostly measuring how "close together" the activations at layer $i$ are for different inputs of the same class. From this, they propose that the extent to which different models cluster their inputs, and how that clustering changes over the course of training, be explored further as a cause of model generalization. It's an interesting theory, and one that makes intuitive sense, but it would again require examination under a broader set of neural networks and with more rigorous measures of clustering.
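Goldfeld et al's own estimators are more involved, but a toy, hypothetical way to see whether a layer "clusters" same-class inputs is to compare within-class and between-class distances of its activations. Everything below (the function name and the ratio itself) is our illustration, not their method:

```python
import numpy as np

def within_between_ratio(activations, labels):
    """Toy clustering score for one layer: mean pairwise distance between
    activations of same-class inputs, divided by the mean distance between
    activations of different-class inputs. Ratios well below 1 suggest the
    layer pulls same-class inputs close together."""
    labels = np.asarray(labels)
    dists = np.linalg.norm(activations[:, None, :] - activations[None, :, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)          # drop zero self-distances
    diff = labels[:, None] != labels[None, :]
    return dists[same].mean() / dists[diff].mean()
```

Tracking a ratio like this per layer over the course of training would give a crude picture of whether clustering tightens when compression is supposed to occur.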
Conclusion
As the deep learning community continues to explore how and why neural networks perform as well as they do, the Information Bottleneck theory provides an interesting framework for guiding and interpreting the research accumulated along the way. There are still a lot of questions to tackle, but it's an approach certainly worth considering and examining in depth. If you'd like to dive into the topic in more detail, the references and extra references below include some relevant papers to help you get started:
References
[1] Tishby, Naftali, Fernando C. Pereira, and William Bialek. “The information bottleneck method.” arXiv preprint physics/0004057 (2000).
[2] Tishby, Naftali, and Noga Zaslavsky. “Deep learning and the information bottleneck principle.” In 2015 IEEE Information Theory Workshop (ITW), pp. 1-5. IEEE, 2015.
[3] Shwartz-Ziv, Ravid, and Naftali Tishby. “Opening the black box of deep neural networks via information.” arXiv preprint arXiv:1703.00810 (2017).
[4] Cover, Thomas M., and Joy A. Thomas. Elements of information theory. John Wiley & Sons, 2012.
[5] Amjad, Rana A., and Bernhard C. Geiger. "Learning Representations for Neural Network-Based Classification Using the Information Bottleneck Principle." arXiv preprint (2018).
[6] Goldfeld, Ziv, Ewout van den Berg, Kristjan Greenewald, Brian Kingsbury, Igor Melnyk, Nam Nguyen, and Yury Polyanskiy. “Estimating Information Flow in DNNs.” (2018).
[7] Saxe, Andrew Michael, Yamini Bansal, Joel Dapello, Madhu Advani, Artemy Kolchinsky, Brendan Daniel Tracey, and David Daniel Cox. “On the information bottleneck theory of deep learning.” (2018).
[8] Achille, Alessandro, and Stefano Soatto. “Emergence of invariance and disentanglement in deep representations.” The Journal of Machine Learning Research 19, no. 1 (2018): 1947-1980.
[9] Wikipedia. "Deep Learning: Deep Learning Revolution."
Extra References
Gabrié, Marylou, Andre Manoel, Clément Luneau, Nicolas Macris, Florent Krzakala, and Lenka Zdeborová. “Entropy and mutual information in models of deep neural networks.” In Advances in Neural Information Processing Systems, pp. 1826-1836. 2018.
Kolchinsky, Artemy, Brendan D. Tracey, and Steven Van Kuyk. “Caveats for information bottleneck in deterministic scenarios.” (2018).