Visualizing Information Bottleneck through Variational Inference
Abstract
The Information Bottleneck theory provides a theoretical and computational framework for finding approximate minimum sufficient statistics. Analysis of the Stochastic Gradient Descent (SGD) training of a neural network on a toy problem has shown the existence of two phases, fitting and compression. In this work, we analyze the SGD training process of a Deep Neural Network on MNIST classification and confirm the existence of two phases of SGD training. We also propose a setup for estimating the mutual information for a Deep Neural Network through Variational Inference.
1 Introduction
Deep Neural Networks (DNNs) have found wide application on large-scale tasks like visual object recognition (Krizhevsky et al. [2017]), machine translation (Wu et al. [2016]) and reinforcement learning (Silver et al. [2016]). The success of DNNs in many areas has led to a growing interest in trying to explain their performance. Tishby and Zaslavsky [2015] proposed to analyze DNNs through the Information Bottleneck lens. Shwartz-Ziv and Tishby [2017] analyzed the information plane of a small neural network on a toy problem and reported two phases of a neural network trained using SGD, fitting and compression. In our work we test the hypothesis of Information Bottleneck theory for Deep Learning on the tougher problem of image classification.
For our experiments we need a way to estimate the mutual information between the input and the output of a hidden layer. Alemi et al. [2016] proposed a variational approximation to the Information Bottleneck. We also build upon the Variational Autoencoder derivation (Kingma and Welling [2013]) to propose an alternative MI upper bound for a teacher-student model.
We use the variational estimates to test the Information Bottleneck hypothesis of two phases of DNN training, fitting and compression. We find that the two phases are present in the training of a 4-layer DNN on the MNIST classification task. Our results show that the claims of Information Bottleneck theory about Deep Learning hold true for a deep VIB network.
Our paper begins with a discussion of Information Bottleneck theory and its application to analyzing DNNs (Section 2). In Section 3 we define the classification problem and describe how we leverage variational inference to calculate mutual information; we also derive the mutual information upper bounds there. In Section 4 we discuss the experimental settings and report results on the classification task. We deliver our concluding remarks in Section 5 and suggest future research directions.
2 Information Bottleneck and Deep Learning
2.1 Mutual Information
Given any two random variables, $X$ and $Y$, with a joint distribution $p(x, y)$, their Mutual Information is defined as:

$$I(X;Y) = D_{KL}\left[\,p(x,y)\,\|\,p(x)p(y)\,\right] = \sum_{x \in X}\sum_{y \in Y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)} \tag{1}$$

$$I(X;Y) = H(X) - H(X \mid Y) \tag{2}$$

where $D_{KL}[\cdot\,\|\,\cdot]$ is the Kullback-Leibler divergence of the distributions $p(x,y)$ and $p(x)p(y)$, and $H(X)$ and $H(X \mid Y)$ are the entropy and conditional entropy of $X$ and $X$ given $Y$, respectively.
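As a concrete check of these definitions, the following minimal NumPy sketch (our illustration; the toy joint distribution is made up) evaluates Equation 1 and Equation 2 on a small discrete distribution and confirms that they agree:

```python
import numpy as np

# Toy joint distribution p(x, y) over a 3x2 alphabet (rows: x, columns: y).
p_xy = np.array([[0.10, 0.20],
                 [0.25, 0.05],
                 [0.15, 0.25]])

p_x = p_xy.sum(axis=1, keepdims=True)   # marginal p(x)
p_y = p_xy.sum(axis=0, keepdims=True)   # marginal p(y)

# Equation 1: I(X;Y) = KL( p(x,y) || p(x) p(y) )
mi_kl = np.sum(p_xy * np.log(p_xy / (p_x * p_y)))

# Equation 2: I(X;Y) = H(X) - H(X|Y)
h_x = -np.sum(p_x * np.log(p_x))
p_x_given_y = p_xy / p_y                 # p(x|y), one column per y
h_x_given_y = -np.sum(p_xy * np.log(p_x_given_y))
mi_entropy = h_x - h_x_given_y

print(mi_kl, mi_entropy)  # the two values agree (about 0.102 nats)
```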
2.2 Information Bottleneck
When analyzing representations of $X$ w.r.t. $Y$, the classical notion of minimal sufficient statistics provides good candidates for optimal representations. Sufficient statistics, in our context, are maps or partitions of $X$, $S(X)$, that capture all the information that $X$ has on $Y$. Namely, $I(S(X);Y) = I(X;Y)$.
Minimal sufficient statistics, $T(X)$, are the simplest sufficient statistics and induce the coarsest sufficient partition on $X$. A simple way of formulating this is through the Markov chain $Y \rightarrow X \rightarrow S(X) \rightarrow T(X)$, which should hold for a minimal sufficient statistic $T(X)$ with any other sufficient statistic $S(X)$. Since exact minimal sufficient statistics only exist for very special distributions (i.e., exponential families), Tishby et al. [2001] relaxed this optimization problem by first allowing the map to be stochastic, defined as an encoder $P(T \mid X)$, and then by allowing the map to capture as much as possible of $I(X;Y)$, not necessarily all of it.
This leads to the Information Bottleneck (IB) tradeoff (Tishby et al. [2001]), which provides a computational framework for finding approximate minimal sufficient statistics, or the optimal tradeoff between compression of $X$ and prediction of $Y$.
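For completeness, the resulting variational problem is usually written as a Lagrangian over stochastic encoders (standard formulation of Tishby et al. [2001], reproduced here in our notation):

```latex
% Information Bottleneck Lagrangian: compress X into T while preserving
% information about Y; beta >= 0 controls the compression/prediction tradeoff.
\min_{p(t \mid x)} \; \mathcal{L}_{\mathrm{IB}}\left[p(t \mid x)\right]
  = I(X;T) - \beta\, I(T;Y),
\qquad \text{subject to the Markov chain } Y \rightarrow X \rightarrow T .
```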
2.3 Information Plane of Neural Nets
Any representation variable, $T$, defined as a (possibly stochastic) map of the input $X$, is characterized by its joint distributions with $X$ and $Y$, or by its encoder and decoder distributions, $P(T \mid X)$ and $P(Y \mid T)$, respectively. Given $P(X, Y)$, $T$ is uniquely mapped to a point in the Information Plane with coordinates $\left(I(X;T),\, I(T;Y)\right)$.
Tishby and Zaslavsky [2015] proposed to analyze DNNs in the Information Plane and suggested that the goal of the neural network is to optimize the Information Bottleneck tradeoff between compression and prediction, successively, for each layer.
Building on this work, Shwartz-Ziv and Tishby [2017] analyzed the information plane of a network with 7 fully connected hidden layers of widths 12-10-7-5-4-3-2 neurons with hyperbolic tangent activations. The analysis showed that the Stochastic Gradient Descent (SGD) optimization has two main phases: in the first and shorter phase the layers increase the information on the input (fitting), while in the second, much longer phase the layers reduce the information on the input (compression). The tasks were chosen as binary decision rules which are invariant under rotations of the sphere, with 12 binary inputs that represent 12 uniformly distributed points on the sphere. As the network was small, they calculated the mutual information exhaustively by binning the output activations into 30 buckets and computing the joint distribution. Figure 1 shows the information plane for the described setup.
[Figure 1: Information plane dynamics of the network analyzed by Shwartz-Ziv and Tishby [2017].]
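For intuition, the binning estimator used above can be sketched as follows (our illustration, not the original authors' code; the function name, bucket count, and toy data are assumptions). Each unit's activation is discretized into 30 buckets, every row of bucket indices is treated as one discrete symbol $t$, and $I(T;X)$ is read off the empirical joint distribution:

```python
import numpy as np

def binned_mutual_information(x_ids, activations, n_bins=30):
    """Estimate I(T; X) by discretizing activations into n_bins buckets.

    x_ids:       (n_samples,) integer id of each distinct input pattern.
    activations: (n_samples, n_units) hidden-layer outputs, assumed in [-1, 1]
                 (e.g. tanh units); each unit is binned separately.
    """
    edges = np.linspace(-1.0, 1.0, n_bins + 1)
    binned = np.digitize(activations, edges[1:-1])          # (n_samples, n_units)
    # Treat each row of bin indices as one discrete symbol t.
    _, t_ids = np.unique(binned, axis=0, return_inverse=True)

    n = len(x_ids)
    joint = np.zeros((x_ids.max() + 1, t_ids.max() + 1))
    np.add.at(joint, (x_ids, t_ids), 1.0 / n)               # empirical p(x, t)

    p_x = joint.sum(axis=1, keepdims=True)
    p_t = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return np.sum(joint[nz] * np.log(joint[nz] / (p_x @ p_t)[nz]))

# Example: 4096 input patterns (as in the 12-bit toy task), 3 hidden tanh units.
rng = np.random.default_rng(0)
x_ids = rng.integers(0, 4096, size=20000)
acts = np.tanh(rng.normal(size=(20000, 3)))
print(binned_mutual_information(x_ids, acts))
```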
Anonymous [2018] showed that the results of Shwartz-Ziv and Tishby [2017] are not applicable to networks with ReLU activations, and also claims that the shape of the information plane is a result of saturating hyperbolic tangent non-linearities and that the same shape is not observed for ReLU activations. Anonymous [2018] follows a binning strategy to estimate the mutual information for ReLU activations. This method of estimating mutual information has been refuted by the authors of Shwartz-Ziv and Tishby [2017] in the ICLR 2018 OpenReview discussion (review comments and discussion available at https://goo.gl/U24Kfp).
In Section 4 we discuss how we estimate mutual information using variational inference. Using these estimates we draw the information plane for a neural network trained to classify the MNIST dataset (http://yann.lecun.com/exdb/mnist). We use ReLU non-linearities and a deeper neural network for our task. In our results we observe the two distinct phases of SGD, fitting and compression, as originally observed by Shwartz-Ziv and Tishby [2017].
3 Problem Definition
Information plane analysis is usually limited to toy problems with simple distributions; otherwise the MI calculation quickly becomes intractable. We examine approaches to estimating the information plane position using variational inference.
3.1 Task: MNIST Classification
The MNIST classification task consists of images of handwritten digits, and the task is to predict the label of each digit. An example of the task is shown in Figure 2. We consider $X$ to be the input space (images) and $Y$ to be the output label space (digit labels). We modify this task into the teacher-student setting, where the digit images are generated by a pre-trained teacher model.
[Figure 2: Example digit images from the MNIST classification task.]
3.2 Variational Information Bottleneck Model
The IB setup views the data generating process as a Markov chain $Y \rightarrow X \rightarrow Z \rightarrow \hat{Y}$. $Y$ is a signal we wish to extract from observations $X$. $Z$ is some intermediate representation computed from $X$ by the model on the way to computing the predictions $\hat{Y}$.
The VIB [Alemi et al., 2016] method is a generalization of the Variational Autoencoder [Kingma and Welling, 2013] to the supervised setting. Its objective is to maximize the rate-distortion tradeoff:

$$\max_{p(z \mid x)} \; I(Z;Y) - \beta\, I(Z;X).$$
These two quantities correspond to the position in the information plane, so computing an approximation of the loss will give us an estimate of the network's position. The MI of $Y$ (target variable) and $Z$ can be computed as the difference between the entropy and the conditional entropy (the remaining uncertainty after observing $Z$):

$$I(Z;Y) = H(Y) - H(Y \mid Z).$$
Under our setup $p(y \mid z)$ is difficult to compute, because we would need to invoke Bayes' rule on $p(z \mid y)$, which is what we have (after marginalizing over all $x$). So we introduce an approximate distribution $q(y \mid z)$:

$$I(Z;Y) = H(Y) - H(Y \mid Z) \geq H(Y) + \mathbb{E}_{p(y,z)}\left[\log q(y \mid z)\right].$$
The second term is the average log-likelihood of $y$ under $q(y \mid z)$. In practice we will use the learned decoder of the VIB model as the approximation $q(y \mid z)$. The second term can be thought of as a conditional cross entropy, which is an upper bound on the conditional entropy $H(Y \mid Z)$. To calculate $I(Z;X)$ we decompose it the same way:

$$I(Z;X) = H(Z) - H(Z \mid X).$$
$H(Z \mid X)$ is the average entropy of the conditional distribution $p(z \mid x)$ for a fixed $x$. Intuitively, this makes sense: if the entropy of the encoding is high, then there is high uncertainty in choosing $z$ from $p(z \mid x)$, so the mutual information should be low. Because $H(Z)$ is hard to compute, we substitute it with the cross-entropy of $p(z)$ and some variational approximation $r(z)$. Cross entropy is always greater than entropy, so:

$$I(Z;X) = H(Z) - H(Z \mid X) \leq \mathbb{E}_{p(x)}\, D_{KL}\left[\,p(z \mid x)\,\|\,r(z)\,\right].$$
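A minimal sketch of how these two bounds can be read off a VIB model in practice (our illustration; the layer sizes and module names are assumptions, and we use a diagonal Gaussian encoder with a standard normal $r(z)$):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence

class VIB(nn.Module):
    def __init__(self, x_dim=784, z_dim=40, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                     nn.Linear(256, 2 * z_dim))
        self.decoder = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                     nn.Linear(256, n_classes))

    def forward(self, x):
        mu, log_std = self.encoder(x).chunk(2, dim=-1)
        p_z_given_x = Normal(mu, log_std.exp())          # encoder p(z|x)
        z = p_z_given_x.rsample()                        # reparameterized sample
        return p_z_given_x, self.decoder(z)              # decoder logits for q(y|z)

def information_plane_estimates(model, x, y, h_y):
    """Return (upper bound on I(Z;X), lower bound on I(Z;Y)) in nats.

    h_y is the entropy of the label marginal, e.g. log(10) for balanced MNIST.
    """
    p_z_given_x, logits = model(x)
    r_z = Normal(torch.zeros_like(p_z_given_x.loc),
                 torch.ones_like(p_z_given_x.scale))     # variational marginal r(z)
    # I(Z;X) <= E_x KL( p(z|x) || r(z) )
    i_zx = kl_divergence(p_z_given_x, r_z).sum(-1).mean()
    # I(Z;Y) >= H(Y) - E[-log q(y|z)]  (cross entropy upper-bounds H(Y|Z))
    i_zy = h_y - F.cross_entropy(logits, y)
    return i_zx, i_zy
```

Since the VIB training loss is built from these same two terms (essentially $\beta\, I(Z;X) - I(Z;Y)$), both information plane coordinates are available at no extra cost after every epoch.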
3.3 Alternative Method
Suppose we assume that $X$ can be sampled using the generative process $U \rightarrow X$. In practice, this would mean pre-training an unsupervised latent variable model, which we will define as $p(x \mid u)$ with prior $p(u)$. We can think of this as an instance of the teacher-student setup, where the student is trained to mimic the teacher. Given this, we can use another approach to calculate $I(Z;X)$ by estimating $\log p(z)$ for a given $z$:

$$p(z) = \iint p(z \mid x)\, p(x \mid u)\, p(u)\, \mathrm{d}x\, \mathrm{d}u.$$
To perform variational inference, we introduce a distribution $q(u, x \mid z)$:

$$\log p(z) = \log \iint q(u, x \mid z)\, \frac{p(z \mid x)\, p(x \mid u)\, p(u)}{q(u, x \mid z)}\, \mathrm{d}x\, \mathrm{d}u \tag{3}$$
We can use this to lower bound $\log p(z)$ via Jensen's inequality:

$$\log p(z) \geq \mathbb{E}_{q(u, x \mid z)}\left[\log p(z \mid x) + \log p(x \mid u) + \log p(u) - \log q(u, x \mid z)\right] \tag{4}$$
If we define $q(u, x \mid z)$ to use the teacher model,

$$q(u, x \mid z) = q(u \mid z)\, p(x \mid u),$$

then the $\log p(x \mid u)$ terms in Equation 4 cancel, and the remaining latent terms simplify into a KL divergence:

$$\mathbb{E}_{q(u, x \mid z)}\left[\log p(u) - \log q(u \mid z)\right] = -D_{KL}\left[\,q(u \mid z)\,\|\,p(u)\,\right].$$
In other words, if we have access to the "real" data generating distribution $p(x \mid u)$, we can use it during variational inference. The final equation is as follows:

$$\log p(z) \geq \mathbb{E}_{q(u \mid z)}\, \mathbb{E}_{p(x \mid u)}\left[\log p(z \mid x)\right] - D_{KL}\left[\,q(u \mid z)\,\|\,p(u)\,\right] \tag{5}$$

Since $H(Z) = -\mathbb{E}_{p(z)}\left[\log p(z)\right]$, this lower bound on $\log p(z)$ yields an upper bound on $H(Z)$, and hence on $I(Z;X) = H(Z) - H(Z \mid X)$.
The variational distribution $q(u \mid z)$ is computed for a specific $z$. Algorithmically, the inference process is as follows (a code sketch of these steps is given after the list):
1. Sample $u$ and $x$ from the teacher model.
2. Run the student encoder to get $p(z \mid x)$.
3. Sample $z$ from $p(z \mid x)$.
4. Run the inference network to get $q(u \mid z)$.
5. Sample $u'$ from $q(u \mid z)$, and re-run the teacher model to get $p(x \mid u')$.
6. Sample $x'$ from this distribution to compute $\log p(z \mid x')$.
7. Use all the samples to compute an MI upper bound via Equation 5.
8. Use the upper bound as a loss function to train the inference network.
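The following sketch mirrors these steps for one batch (our illustration; `teacher_decoder`, `student_encoder`, `inference_net`, and the dimensions are placeholder names; all conditionals are taken to be diagonal Gaussians except the teacher decoder, which outputs Bernoulli pixel means, and we use those means directly as $x'$ instead of a discrete sample so the bound stays differentiable):

```python
import torch
from torch.distributions import Normal, Bernoulli, kl_divergence

def mi_upper_bound(teacher_decoder, student_encoder, inference_net,
                   batch_size=128, u_dim=20):
    """Monte Carlo estimate of the upper bound on I(Z;X) implied by Equation 5.

    teacher_decoder(u) -> Bernoulli means of p(x|u)      (pre-trained, frozen)
    student_encoder(x) -> (mu, std) of the Gaussian encoder p(z|x)
    inference_net(z)   -> (mu, std) of the variational posterior q(u|z)
    """
    # Steps 1-3: sample (u, x) from the teacher, then z from the student encoder.
    p_u = Normal(torch.zeros(batch_size, u_dim), torch.ones(batch_size, u_dim))
    u = p_u.sample()
    x = Bernoulli(teacher_decoder(u)).sample()
    mu_z, std_z = student_encoder(x)
    p_z_given_x = Normal(mu_z, std_z)
    z = p_z_given_x.sample()

    # Steps 4-6: infer q(u|z), re-run the teacher, and score z under p(z|x').
    mu_u, std_u = inference_net(z)
    q_u_given_z = Normal(mu_u, std_u)
    u_prime = q_u_given_z.rsample()
    x_prime = teacher_decoder(u_prime)      # Bernoulli means used as x' (sketch-level simplification)
    mu_zp, std_zp = student_encoder(x_prime)
    log_p_z_lower = (Normal(mu_zp, std_zp).log_prob(z).sum(-1)
                     - kl_divergence(q_u_given_z, p_u).sum(-1))   # Equation 5

    # Step 7: I(Z;X) = H(Z) - H(Z|X) <= -E[lower bound on log p(z)] - E[H(p(z|x))].
    return (-log_p_z_lower - p_z_given_x.entropy().sum(-1)).mean()
```

Step 8 then amounts to minimizing the returned value with respect to the parameters of `inference_net`.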
3.4 Hypothesis
The classification model we analyze is trained to maximize the IB tradeoff directly. However, the objective does not specify how this optimization will take place. We want to see whether the two training phases observed by Shwartz-Ziv and Tishby [2017] also appear for a VIB model, or whether it fits and compresses in an interleaved manner [Anonymous, 2018].
Our secondary goal is to compare our two approaches to estimating the mutual information of $Z$ and $X$. Both methods upper bound the true quantity $I(Z;X)$, and we would like to see which method produces a lower (tighter) result. The VIB objective has the advantage of being fast to calculate, whereas our method is optimization-based and runs over multiple iterations. However, our method might give a better result because the inference network has access to the data generating distribution.
4 Experiments and Results
We test our approach on classifying MNIST digits generated by a teacher model. The teacher is a VAE with a 20-dimensional latent space, trained for 100 epochs. The student model is a classifier trained using VIB. Its hidden state $Z$ is 40-dimensional, and we use a Gaussian with zero mean and unit variance as the variational marginal $r(z)$. The decoder is a 2-layer MLP. The encoder is either a 2-layer MLP or a 3-layer CNN.
To generate labeled training data, we reconstruct the training images using the teacher VAE and keep the original labels. After every epoch of training, we estimate the mutual information from both the VIB loss function and the inference network; the latter has access to the original data-generating VAE.
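The data generation step can be sketched as follows (our illustration; `teacher_vae` and its `reconstruct` method are placeholder names):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torchvision import datasets, transforms

@torch.no_grad()
def build_teacher_dataset(teacher_vae, batch_size=256):
    """Reconstruct every MNIST training image with the teacher VAE; keep the labels."""
    mnist = datasets.MNIST(root="data", train=True, download=True,
                           transform=transforms.ToTensor())
    loader = DataLoader(mnist, batch_size=batch_size)
    images, labels = [], []
    for x, y in loader:
        x_rec = teacher_vae.reconstruct(x.view(x.size(0), -1))  # hypothetical method
        images.append(x_rec.cpu())
        labels.append(y)
    return TensorDataset(torch.cat(images), torch.cat(labels))
```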
4.1 Zero Information Signals
To verify our implementation, we ran training on a dataset where the images are generated independently from the labels, thereby having $I(X;Y) = 0$. We would expect to see $I(Z;Y)$ and $I(Z;X)$ drop rapidly once the network has determined that the image contains no useful information for predicting the digit.
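Constructing such a dataset only requires drawing the labels independently of the generated images, for example (a minimal sketch; names are ours):

```python
import torch

def zero_information_labels(n_examples, n_classes=10, seed=0):
    """Draw labels independently of the images, so that I(X;Y) = 0 by construction."""
    g = torch.Generator().manual_seed(seed)
    return torch.randint(0, n_classes, (n_examples,), generator=g)
```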
[Figures: estimated $I(Z;X)$ and $I(Z;Y)$ during training on the zero-information dataset.]
4.2 Comparing MI Estimators
Our results show that the direct estimation method from Alemi et al. [2016] is better at bounding $I(Z;X)$ during the later stages of training. We suspect that this is because, at later stages of training, the marginal $p(z)$ is sufficiently close to the approximation $r(z)$, which produces a better estimate than the more roundabout teacher-student route. In contrast, the inference network was able to reach a lower (tighter) bound earlier, while the mutual information is still rising.
[Figures: comparison of the two $I(Z;X)$ estimates over the course of training.]
In the following sections, we will take the minimum of both estimates.
4.3 Information Plane Dynamics
[Figures: information plane dynamics of the student model over the course of training.]
4.4 Model Uncertainty
VIB models can express their uncertainty about the encoding of a given sample by enlarging the encoder variance. Surprisingly, during the compression phase we still see the variance assigned to real samples go down.
[Figures: encoder variance on real samples over the course of training.]
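The quantity tracked here is simply the mean predicted standard deviation of the encoder on real samples; a sketch of the bookkeeping, assuming the VIB model interface from the earlier sketch in Section 3.2 (placeholder names):

```python
import torch

@torch.no_grad()
def mean_encoder_std(model, loader):
    """Average predicted standard deviation of p(z|x) over a dataset of real samples."""
    stds = []
    for x, _ in loader:
        p_z_given_x, _ = model(x.view(x.size(0), -1))  # VIB.forward from the sketch above
        stds.append(p_z_given_x.scale.mean())
    return torch.stack(stds).mean()
```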
5 Conclusion
In this work we analyzed the training of a DNN on an image classification task. We confirmed the existence of the two phases of SGD in the information plane for classification models whose mutual information can be estimated through variational inference. We also proposed a mutual information upper bound for a teacher-student training setting and compared its performance to the bounds formulated by Alemi et al. [2016].
Our next step is to perform a baseline analysis of a linear model whose mutual information we can calculate exactly. Going in the other direction, we also wish to extend this technique to more difficult problems, such as harder image-related tasks, or perhaps to analyzing the discriminator of a GAN. We also want to find ways to improve our mutual information bound estimates, and to extend this analysis to deterministic neural networks which do not train to maximize mutual information directly.
References
- Alemi et al. [2016] Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy. Deep variational information bottleneck. CoRR, abs/1612.00410, 2016. URL http://arxiv.org/abs/1612.00410.
- Anonymous [2018] Anonymous. On the information bottleneck theory of deep learning. International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=ry_WPG-A-.
- Kingma and Welling [2013] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
- Krizhevsky et al. [2017] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. Commun. ACM, 60(6):84–90, 2017. doi: 10.1145/3065386. URL http://doi.acm.org/10.1145/3065386.
- Shwartz-Ziv and Tishby [2017] Ravid Shwartz-Ziv and Naftali Tishby. Opening the black box of deep neural networks via information. CoRR, abs/1703.00810, 2017. URL http://arxiv.org/abs/1703.00810.
- Silver et al. [2016] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Vedavyas Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy P. Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016. doi: 10.1038/nature16961. URL https://doi.org/10.1038/nature16961.
- Tishby and Zaslavsky [2015] Naftali Tishby and Noga Zaslavsky. Deep learning and the information bottleneck principle. CoRR, abs/1503.02406, 2015. URL http://arxiv.org/abs/1503.02406.
- Tishby et al. [2001] Naftali Tishby, Fernando C. Pereira, and William Bialek. The information bottleneck method. In Proceedings of the 37th Allerton Conference on Communication, Control and Computation, volume 49, 07 2001.
- Wu et al. [2016] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144, 2016. URL http://arxiv.org/abs/1609.08144.