
Disentangling Biological Noise in Cellular Images with a Focus on Explainability

Manik Sharma
Department of Engineering Design,
IIT Madras, Chennai
shmakn99@gmail.com
Ganapathy Krishnamurthi
Department of Engineering Design,
IIT Madras, Chennai
gankrish@smail.iitm.ac.in
Abstract

The cost of some drugs and medical treatments has risen so sharply in recent years that many patients are having to go without. An automated image-classification model could make drug-discovery researchers more efficient.

One of the more surprising reasons behind the cost is how long it takes to bring new treatments to market. Despite improvements in technology and science, research and development continues to lag. In fact, finding a new treatment takes, on average, more than 10 years and costs hundreds of millions of dollars. Speeding up this process would greatly decrease the cost of treatments and help ensure they reach patients faster. This work aims at solving a part of this problem by creating a cellular image classification model which can decipher the genetic perturbations in a cell (occurring naturally or introduced artificially). Another interesting question addressed is what makes the deep-learning model decide in a particular fashion, which can further help in demystifying the mechanism of action of certain perturbations and paves a way towards the explainability of the deep-learning model.

We show the results of Grad-CAM visualizations and make a case for the significance of certain features over others. Further, we discuss how these significant features are pivotal in extracting useful diagnostic information from the deep-learning model.

1 Introduction

It has been a human endeavour, since time immemorial, to know the inner functioning of our body, be it at the microscopic or the macroscopic scale. In his 1859 book, On the Origin of Species, Charles Darwin propounded the earth-shattering theory of Natural Selection. A reconciliation of the statistical and biological nature of this exciting new phenomenon was found posthumously in the works of Gregor Mendel, who identified factors - now called genes - as the sole carriers of traits across generations, i.e., of heredity.

Mendel’s factors led scientists on a frantic chase for the physical location of these genes, and after around half a century Alfred Hershey and Martha Chase, in the year 1952, proved definitively that the seat of the gene is DNA [1], thought of until then as a useless bio-molecule. Har Gobind Khorana’s subsequent discovery of the genetic code led to a flurry of research in the field of molecular biology that has laid bare before scientists many exciting inner workings of the cell. This research is often accompanied by a behemoth of data, be it numerical or images. A host of deep-learning and machine-learning techniques have been thrown at these data-sets to tease out patterns which can be of immense practical use to the human race.

1.1 Biological Primer

The process of formation of proteins, or the Central Dogma, is surprisingly not so much a chemical concept as an informational one [2]. The strings of coded information in the DNA provide a template for the structure of a protein, but the DNA is enslaved by its geography, i.e., it is found only inside the nucleus while proteins are found in the cytoplasm (outside the nucleus). To carry this information, another molecule called RNA (specifically messenger-RNA, or mRNA) is employed. siRNAs are single-stranded molecules which bind with mRNA (the intermediary molecule between DNA and protein) and stop the formation of the protein [3]. One way to study the function of a particular gene is by silencing it and then studying the resultant cell phenotype. This is also called loss-of-function analysis of the cell [4]. Total silencing renders the action of a particular gene useless, which is termed gene-knockout; it is a low-throughput technique compared to gene-knockdown. Another factor to keep in mind here is the off-target effects of the siRNA used, which might affect not only the mRNA it is designed to inhibit but also other mRNAs with partial matches.

2 Data-Set

To understand the data fully, we need to understand the experimental setup under which it was collected. This helps in identifying the sources of spurious effects introduced during the process of data collection (there may be other, unidentified sources as well).

2.1 Experimental Setup

Figure 1: Channel Wise visualization of a data-point

The experiment is conducted on a single cell type, chosen from the four available cell types - HUVEC (Human Umbilical Vein Endothelial Cells), RPE (Retinal Pigment Epithelium), HepG2 (Human Liver Cancer Cells) and U2OS (Human Bone Osteosarcoma Epithelial Cells). The images are taken from cell cultures.

The siRNA is the object of interest: it creates the biological variability, or, put simply, the changes which we are going to observe. Cell cultures are created in the wells of a plate which can hold 384 such wells ($16 \times 24$). In total, 51 experiments were conducted in the collection of the database. One well can be thought of as a mini test tube. There are two imaging sites per well; images from the border wells are discarded as they might be affected by environmental noise like temperature differentials. The image obtained is of size $512 \times 512 \times 6$, i.e., there are six channels per image (for comparison, a standard image has three channels - Red, Green and Blue).

2.2 Role of Controls

Controls play a crucial role in determining the performance of the model as well as in calibrating it.

A negative control is a well which is left untreated in the experiments; there is one negative control per plate. It is included in the experiment to distinguish reagent-specific effects from non-reagent-specific effects in the siRNA-treated cells [4]. When the image from the negative control well is given as an input, the model should not make confident predictions; this serves as a kind of validity check.

A positive control is a well which is treated with reagents whose effect on the cell culture is known and well studied [4]. Positive controls are used to measure how efficient the reagent is. In this context, when images from the positive control are given as input, the model should predict the reagent class with maximum confidence. This again serves as a sanity check.

2.3 Related Statistics

There are in total $N_{Total} = 56416$ images in the data-set, of which $N_{Train} = 36517$ are in the train set and $N_{Test} = 19899$ are in the test set. Since the images comprise six channels, the total number of gray-scale images in the data-set is $N_{Gray-Scale} = 338496$. There are in total $N_{Class} = 1108$ classes.

Figure 2: Class Wise Distribution of Samples

The number of samples per class, $n_{Samples/Class} = N_{Train}/N_{Class}$, is a very important number when training on the data-set. A data-set might have a high $N_{Total}$ yet a very low $n_{Samples/Class}$, in effect rendering the size of the data-set misleading. Figure 2 shows the distribution of $n_{Samples/Class}$:

In Figure 2, $S_x$ is the set of classes which have the same number of samples ($=x$). Table 1 shows the number of classes which fall in each set.

Class Set    Number of Classes
$S_{29}$     2
$S_{31}$     2
$S_{32}$     31
$S_{33}$     1065
Table 1: Number of Classes in Class Sets

2.4 Pixel Value Distribution

To get a good bearing on the distribution of pixel values in the data-set, we can look at the distribution of the mean pixel value per image for the different channels.

Figure 3: Channel-Wise mean Pixel Value Distribution

The variance in the mean pixel values is highest for Channel 1 and lowest for Channel 4 (Figure 3).

3 Methods and Techniques

In this section we discuss the selection of the backbone for the deep-learning model, the design of the loss, the prediction schema and the methods used to understand the inner workings of the optimized model.

3.1 Backbone

Densenet takes the idea of skip connections (introduced in Resnets [5]) one step further. Whereas in a Resnet each layer is connected to the next layer with a shortcut, in a Densenet each layer is connected not only to the next layer but to every subsequent layer.

Resnets:

$x_l = H_l(x_{l-1}) + x_{l-1}$ (1)

Densenets:

$x_l = H_l([x_0, x_1, \ldots, x_{l-1}])$ (2)

where $x_l$ is the feature map after the $l^{th}$ layer and $H_l(\cdot)$ is a composite function comprising activations, non-linearities, pooling, etc.

But this is where the similarity between the two networks ends. Densenet departs in the way it introduces these connections: instead of summing the input with the output, it concatenates the two [6]. In equation (2) the vector $[x_0, x_1, \ldots, x_{l-1}]$ is the concatenation of the feature maps output by layers $0, 1, \ldots, l-1$. This can only be done if the channel dimensions of the input match those of the output, so it is hard to maintain these skip connections through the entire network. The densely connected layers are therefore packed into a module called a dense block, as shown in Figure 4. A dense block is a miniature feed-forward network in itself which has the added advantage of skip connections. The dense blocks are connected using transition layers, and it is at these transition layers that the change in the dimensions of the feature map takes place.

Figure 4: Architecture of a dense-block as taken from [6]
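To make the concatenation pattern of equation (2) concrete, the following is a minimal PyTorch sketch of a dense block. The layer composition and the channel-count parameters (growth_rate, bn_size) are illustrative assumptions, not the exact torchvision implementation used later.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One H_l(.): BN -> ReLU -> 1x1 Conv -> BN -> ReLU -> 3x3 Conv."""
    def __init__(self, in_channels, growth_rate, bn_size=4):
        super().__init__()
        self.layer = nn.Sequential(
            nn.BatchNorm2d(in_channels), nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, bn_size * growth_rate, kernel_size=1, bias=False),
            nn.BatchNorm2d(bn_size * growth_rate), nn.ReLU(inplace=True),
            nn.Conv2d(bn_size * growth_rate, growth_rate, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        return self.layer(x)

class DenseBlock(nn.Module):
    """Each layer receives the concatenation of all previous feature maps (Eq. 2)."""
    def __init__(self, num_layers, in_channels, growth_rate):
        super().__init__()
        self.layers = nn.ModuleList(
            [DenseLayer(in_channels + i * growth_rate, growth_rate) for i in range(num_layers)]
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # concatenation of all previous feature maps, not summation as in a Resnet
            new_feat = layer(torch.cat(features, dim=1))
            features.append(new_feat)
        return torch.cat(features, dim=1)
```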

3.2 Losses

Loss functions are a way to get a measure of distance in the embedding space. Since the model tries to minimise this function, it encourages a certain kind of structure on the embeddings produced by the model. We will trace the evolution of loss functions for a particular class of problem - Face Recognition - because it is in this class that the current task also finds its place. The difficulty posed by problems of this class is that the intra-class variations can be larger than the inter-class differences.

The softmax loss is a go-to function which provides a useful supervision signal in object recognition problems. But due to the difficulty described above, a vanilla softmax loss gives unsatisfactory results.

$L_{softmax}(y, p) = -\frac{1}{N}\sum_{i=1}^{N}\log\left(\frac{\exp(W_{y_i}^T x_i + b_{y_i})}{\sum_{k=1}^{K}\exp(W_k^T x_i + b_k)}\right)$ (3)

where $N$ is the batch size and $K$ is the number of classes.

Based on the observations of Parde et al. [7], one can infer the quality of the input image from the L2-norm of the features learnt with the softmax loss function. This gave rise to L2-softmax [8], which proposes to enforce all the features to have the same L2-norm.

$\hat{W} = \frac{W}{||W||_2}, \quad \hat{x} = \alpha\frac{x}{||x||_2}$ (4)

Other kinds of normalization have since been proposed for both the weights and the features, and normalization has become a common strategy in softmax-based losses.

While the softmax loss did help in overcoming the difficulty at hand to some extent, practitioners in the Face Recognition community wanted the samples to be separated more strictly to avoid misclassifying the difficult samples [9]. The Arc loss provides a way to deal with the misclassification of difficult samples. It can be motivated in a very straightforward way from the softmax loss of equation (3); for simplicity we assume $b_j = 0$. Writing $W_j^T x_i = ||W_j||\,||x_i||\cos(\theta_j)$, where $\theta_j$ is the angle between the weight $W_j$ and the feature $x_i$, we fix the norm of $W_j$ to be 1 as in equation (4) and, following Wang et al. [10], also $l_2$-normalise the feature $x_i$ and re-scale it to $s$. Making these changes we get,

$L_{arc}(y, p) = -\frac{1}{N}\sum_{i=1}^{N}\log\left(\frac{\exp(s\cos\theta_{y_i})}{\exp(s\cos\theta_{y_i}) + \sum_{k=1, k\neq y_i}^{K}\exp(s\cos\theta_k)}\right)$ (5)

Further addition of an angular margin penalty $m$ between $x_i$ and $W_{y_i}$ results in intra-class compactness and inter-class discrepancy:

$L_{arc-margin}(y, p) = -\frac{1}{N}\sum_{i=1}^{N}\log\left(\frac{\exp(s\cos(\theta_{y_i} + m))}{\exp(s\cos(\theta_{y_i} + m)) + \sum_{k=1, k\neq y_i}^{K}\exp(s\cos\theta_k)}\right)$ (6)
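A minimal PyTorch sketch of the arc-margin loss of equation (6) could look as follows; the class name ArcMarginLoss and the numerical-stability clamp are our own choices, not details taken from [9].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcMarginLoss(nn.Module):
    """Additive angular margin loss of Eq. (6); s and m are the scale and margin."""
    def __init__(self, num_features, num_classes, s=30.0, m=0.5):
        super().__init__()
        self.W = nn.Parameter(torch.randn(num_classes, num_features))
        self.s, self.m = s, m

    def forward(self, x, labels):
        # cos(theta_k) between L2-normalised features and L2-normalised class templates
        cosine = F.linear(F.normalize(x), F.normalize(self.W)).clamp(-1 + 1e-7, 1 - 1e-7)
        theta = torch.acos(cosine)
        # add the margin m only to the target-class angle
        target = F.one_hot(labels, num_classes=self.W.shape[0]).bool()
        logits = torch.where(target, torch.cos(theta + self.m), cosine)
        return F.cross_entropy(self.s * logits, labels)
```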

3.3 Pseudo Labelling

Pseudo-labelling is a technique borrowed from the semi-supervised learning domain. The network is trained with labelled as well as unlabelled data simultaneously. In place of labels, pseudo-labels are used, which are obtained simply by picking the class with the maximum predicted probability [11].

Let $y^{PL}$ be the pseudo-label for a particular sample $x$ and let $p$ be the predicted probabilities for the sample $x$; then:

$y_i^{PL} = \begin{cases} 1 & \text{if } i = \arg\max_j p_j \\ 0 & \text{otherwise} \end{cases}$

The labelled and unlabelled samples are used simultaneously to calculate the loss value as follows:

$L = \frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K} loss(y_k^i, p_k^i) + \alpha(t)\frac{1}{M}\sum_{j=1}^{M}\sum_{k=1}^{K} loss({y_k^{PL}}^j, p_k^j)$ (7)

where $p_k^j$ are the predicted probabilities for the $M$ unlabelled samples and $\alpha(t)$ is a coefficient balancing the effect of labelled and unlabelled samples on the network.

It is important to achieve a proper scheduling of $\alpha(t)$: a higher value will disrupt the training and a low value will negate any benefit which can be derived from pseudo-labelling [12].

One way to schedule $\alpha(t)$ is as follows:

$\alpha(t) = \begin{cases} 0 & t \leq T_1 \\ \alpha_f\,\frac{t - T_1}{T_2 - T_1} & T_1 \leq t \leq T_2 \\ \alpha_f & T_2 \leq t \end{cases}$

where $T_1$ is the epoch at which pseudo-labelling starts; $\alpha(t)$ then increases linearly until it attains its final value $\alpha_f$ at $T_2$.
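As a rough sketch, the schedule and the combined loss of equation (7) could be implemented as below; the function names and the default values of T1, T2 and alpha_f are placeholders, and cross-entropy is assumed as the per-sample loss.

```python
import torch
import torch.nn.functional as F

def alpha_schedule(epoch, T1, T2, alpha_f):
    """Linear ramp for alpha(t): 0 before T1, alpha_f after T2."""
    if epoch <= T1:
        return 0.0
    if epoch <= T2:
        return alpha_f * (epoch - T1) / (T2 - T1)
    return alpha_f

def pseudo_label_loss(model, labelled_batch, unlabelled_batch, epoch,
                      T1=10, T2=40, alpha_f=1.0):
    """Combined supervised + pseudo-label loss of Eq. (7)."""
    x_l, y_l = labelled_batch
    x_u = unlabelled_batch

    sup_loss = F.cross_entropy(model(x_l), y_l)

    # y^PL: argmax of the current predictions, treated as a fixed target
    with torch.no_grad():
        pseudo = model(x_u).argmax(dim=1)
    unsup_loss = F.cross_entropy(model(x_u), pseudo)

    return sup_loss + alpha_schedule(epoch, T1, T2, alpha_f) * unsup_loss
```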

The reason proposed for the success of this technique is low-density separation between classes: according to the cluster assumption, decision boundaries between clusters lie in low-density regions, and pseudo-labelling helps make the network output insensitive to variations along the directions of the low-dimensional manifold [11].

3.4 CutMix

Data augmentation is a broad variety of techniques used to increase the generalisation capabilities of the network, either by increasing the available training data or by increasing the feature-capturing strength of the filters. There are standard augmentation techniques, such as flipping and rotating the input image, which try to make the network agnostic to the pose of features in the image. Even after the application of such transforms, however, the ability of the network to learn local features is not enhanced; a few techniques instead try to promote the object-localisation capabilities.

One such strategy is CutMix. A cropped part of one image is placed on top of another image and the labels are mixed accordingly: the more area one image occupies in the final image, the higher its weight in the final label. To achieve this, we first need a bounding box $B$, which is the region removed from image $x_A$. Let the coordinates of the bounding box be $B = (r_x, r_y, r_w, r_h)$. The bounding box is sampled as follows:

$r_x \sim \text{Uniform}(0, W), \quad r_w = W\sqrt{1 - \lambda}$
$r_y \sim \text{Uniform}(0, H), \quad r_h = H\sqrt{1 - \lambda}$ (8)

where $x_A, x_B \in \mathbb{R}^{W \times H \times C}$ and $\lambda$ is the CutMix coefficient, sampled from a beta distribution $Beta(\alpha, \alpha)$ with $\alpha$ usually set to 1. Note also that $\frac{r_w r_h}{WH} = 1 - \lambda$, which is the cropped-area ratio. Using this we obtain a mask $M \in \{0, 1\}^{W \times H}$ which is equal to 0 for all points inside the bounding box $B$. The final image $\hat{x}$ and label $\hat{y}$ thus become,

$\hat{x} = M \odot x_A + (1 - M) \odot x_B$
$\hat{y} = \lambda y_A + (1 - \lambda) y_B$ (9)
Figure 5: Image $x_A$ | Image $x_B$ | CutMix image
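A hedged sketch of the CutMix operation of equations (8)-(9) is given below. The box is sampled around a random centre and clipped at the image borders, a common variant of the sampling in equation (8); one-hot label tensors are assumed.

```python
import numpy as np
import torch

def cutmix(x_a, y_a, x_b, y_b, alpha=1.0):
    """CutMix of Eqs. (8)-(9): paste a random box from x_b into x_a and mix the labels.
    x_*: tensors of shape (C, H, W); y_*: one-hot label tensors."""
    lam = np.random.beta(alpha, alpha)
    C, H, W = x_a.shape

    # sample the box centre; the box size gives a cut-area ratio of (1 - lam)
    r_x, r_y = np.random.uniform(0, W), np.random.uniform(0, H)
    r_w, r_h = W * np.sqrt(1 - lam), H * np.sqrt(1 - lam)
    x1, x2 = int(np.clip(r_x - r_w / 2, 0, W)), int(np.clip(r_x + r_w / 2, 0, W))
    y1, y2 = int(np.clip(r_y - r_h / 2, 0, H)), int(np.clip(r_y + r_h / 2, 0, H))

    x_hat = x_a.clone()
    x_hat[:, y1:y2, x1:x2] = x_b[:, y1:y2, x1:x2]

    # recompute lambda from the exact pasted area (clipping may change it slightly)
    lam = 1 - ((x2 - x1) * (y2 - y1)) / (W * H)
    y_hat = lam * y_a + (1 - lam) * y_b
    return x_hat, y_hat
```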

3.5 Activation Maps

There have been various techniques [13], [14] proposed in recent times to understand the workings of deep CNNs. One major breakthrough has been the technique called Class Activation Maps (CAM) [15], which produces these visualizations using global average pooling. A better and more sophisticated version, called Grad-CAM, has been proposed based on this work.

To go forward, we have to answer a pertinent question - what makes a good visual explanation? This can be answered in two parts. First, the technique should clearly show the difference in the localization properties of the model for different classes, and it should show the discriminative behaviour in these localizations. Second, the visual explanation should have good resolution, thereby capturing the fine-grained details of the model's attention. These two points are the focus of Grad-CAM.

Grad-CAM is a gradient-weighted global-average-pooling technique [16], which works by first computing the importance weights $\alpha_k^c$,

$\alpha_k^c = \frac{1}{Z}\sum_i\sum_j \frac{\partial y^c}{\partial A_{ij}^k}$ (10)

An $\alpha_k^c$ is computed for a particular class $c$ and activation map $A^k$; the partial-derivative term $\frac{\partial y^c}{\partial A_{ij}^k}$ inside the global average pooling is the gradient of the class score $y^c$ (before the softmax) with respect to the activation map $A^k$.

After this, the sum of the activation maps, weighted by their importance, is taken and passed through a ReLU to generate the Grad-CAM visualization $L_{Grad-CAM}^c$:

$L_{Grad-CAM}^c = ReLU\left(\sum_k \alpha_k^c A^k\right)$ (11)
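A minimal sketch of equations (10) and (11) using forward and backward hooks is shown below; it assumes the model returns raw class scores and that target_layer is the convolutional layer whose activation maps $A^k$ are wanted.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, image, class_idx):
    """Grad-CAM of Eqs. (10)-(11): ReLU of the gradient-weighted sum of activation maps.
    `image` is a (C, H, W) tensor; the model is assumed to output class scores y^c."""
    activations, gradients = {}, {}

    def fwd_hook(_, __, output):
        activations["A"] = output
    def bwd_hook(_, grad_in, grad_out):
        gradients["dA"] = grad_out[0]

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)

    model.zero_grad()
    score = model(image.unsqueeze(0))[0, class_idx]   # y^c before the softmax
    score.backward()
    h1.remove()
    h2.remove()

    A, dA = activations["A"], gradients["dA"]         # shape (1, K, h, w)
    weights = dA.mean(dim=(2, 3), keepdim=True)       # alpha_k^c: global average of gradients
    cam = F.relu((weights * A).sum(dim=1, keepdim=True))
    # upsample to the input resolution so the map can be overlaid on the image
    return F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                         align_corners=False)[0, 0]
```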

4 Experiments

4.1 Backbone

Compared to a Resnet, a pre-trained Densenet (trained on the Imagenet dataset) performed better in general and was used for further training. The version used is the memory-efficient implementation of Densenet-161. The input is a six-channel image of dimensions $6 \times 512 \times 512$; the dimensions of the transforming image, along with the operation acting on it, are tracked in the following table:

Output Size    Layer Specification
$96 \times 256 \times 256$    $7 \times 7$ Conv, stride = 2
$96 \times 128 \times 128$    $3 \times 3$ MaxPool, stride = 2
$384 \times 128 \times 128$    [$1 \times 1$ Conv, stride = 1; $3 \times 3$ Conv, stride = 1] $\times$ 6
$192 \times 64 \times 64$    [$1 \times 1$ Conv, stride = 1; $2 \times 2$ AvgPool, stride = 2]
$768 \times 64 \times 64$    [$1 \times 1$ Conv, stride = 1; $3 \times 3$ Conv, stride = 1] $\times$ 12
$384 \times 32 \times 32$    [$1 \times 1$ Conv, stride = 1; $2 \times 2$ AvgPool, stride = 2]
$2112 \times 32 \times 32$    [$1 \times 1$ Conv, stride = 1; $3 \times 3$ Conv, stride = 1] $\times$ 36
$1056 \times 16 \times 16$    [$1 \times 1$ Conv, stride = 1; $2 \times 2$ AvgPool, stride = 2]
$2208 \times 16 \times 16$    [$1 \times 1$ Conv, stride = 1; $3 \times 3$ Conv, stride = 1] $\times$ 24
Table 2: Architecture of Densenet-161
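The paper does not describe how the pretrained 3-channel Densenet-161 is adapted to the 6-channel input; one plausible sketch, assuming the torchvision implementation and a simple weight-duplication heuristic, is:

```python
import torch
import torch.nn as nn
from torchvision.models import densenet161

# Pretrained Densenet-161 expects 3-channel input; widen the first convolution to
# 6 channels. Duplicating the pretrained weights and halving them is one common
# heuristic (an assumption here, not a detail stated in the paper).
model = densenet161(pretrained=True, memory_efficient=True)  # newer torchvision uses weights=
old_conv = model.features.conv0
new_conv = nn.Conv2d(6, old_conv.out_channels, kernel_size=7, stride=2,
                     padding=3, bias=False)
with torch.no_grad():
    new_conv.weight.copy_(torch.cat([old_conv.weight, old_conv.weight], dim=1) / 2)
model.features.conv0 = new_conv
```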

4.2 Inclusion of Cell Type

The way a siRNA reagent interacts with the cell to produce the end result depends, to some extent, on the cell type as well. This information is incorporated in the model after the Densenet operations. There are four unique cell types, represented as a one-hot vector $c \in \{0, 1\}^4$. The output of the Densenet (a tensor of size $2208 \times 16 \times 16$) is flattened using adaptive average pooling, bringing it down to size $2208 \times 1 \times 1$, i.e. a vector $output_{densenet} \in \mathbb{R}^{2208}$. Concatenating $c$ and $output_{densenet}$ we end up with a vector $combined \in \mathbb{R}^{2212}$.
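A small sketch of this pooling-and-concatenation step is shown below; the single linear layer mapping the 2212-dimensional vector to the 1024-dimensional embedding used by the losses is an assumption about the head, not a detail given in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CellTypeHead(nn.Module):
    """Pool the Densenet output, append the one-hot cell type, project to the embedding."""
    def __init__(self, in_channels=2208, num_cell_types=4, emb_dim=1024):
        super().__init__()
        self.fc = nn.Linear(in_channels + num_cell_types, emb_dim)

    def forward(self, feature_map, cell_type_onehot):
        # feature_map: (N, 2208, 16, 16) -> (N, 2208) via adaptive average pooling
        pooled = F.adaptive_avg_pool2d(feature_map, 1).flatten(1)
        combined = torch.cat([pooled, cell_type_onehot], dim=1)   # (N, 2212)
        return self.fc(combined)
```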

4.3 Loss Selection

The loss function acts on an embedding $emb \in \mathbb{R}^{1024}$, which is the output of the final layer of the model. The final loss function is a composite, made up of the Softmax [3.2] and Arc-Margin [3.2] loss functions. Using the functions defined in equations (3) and (6) we get the composite loss,

$L_{composite} = c\,L_{arc-margin} + (1 - c)\,L_{softmax}$ (12)

The hyperparameters in Equation (6) are $s = 30$ and $m = 0.5$; the value of $c$ is $0.2$.

A composite loss function is used to produce a stable training regime. A loss function composed entirely of $L_{arc-margin}$ resulted in oscillatory behavior; the inclusion of $L_{softmax}$ helped stabilize the training process.
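In code, the composite loss of equation (12) is simply a weighted sum; the callable signatures below are assumptions (each loss maps an embedding and labels to a scalar, as in the arc-margin sketch above).

```python
def composite_loss(emb, labels, arc_margin_loss, softmax_loss, c=0.2):
    """Eq. (12): weighted mix of the arc-margin and plain softmax losses."""
    return c * arc_margin_loss(emb, labels) + (1 - c) * softmax_loss(emb, labels)
```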

4.4 Accuracy and Loss

Figure 6: PL+CM+CD

As can be seen in Figure 6, the generalising capabilities of the network improved almost instantaneously with the introduction of Pseudo-Labelling [3.3].

As described in section 3.4, CutMix helps the model generalise better. The coefficient $\lambda$ in equation (9) was sampled from the distribution $Beta(\alpha, \alpha)$ with $\alpha = 1$.

Pseudo-labelling proved better at helping the model generalise on the validation set, though after its introduction the training loss and training accuracy (Figure 6) both took a hit. We will get a better insight into the benefits of pseudo-labelling in the next section.

4.5 Template and Feature Vector

One way to look at the deep-learning model is through the lens of templates and feature vectors. The first part of the model takes the image $I$ and applies the convolution and pooling operations, giving a feature vector $x \in \mathbb{R}^F$. A dot product is then taken between $x$ and $w_i$, the template for the $i^{th}$ class. The resulting operation $w_i^{\top}x$ can be summarised over all classes as,

$p' = Wx$ (13)

where $W = [w_1, w_2, \ldots, w_C]$ and $W \in \mathbb{R}^{C \times F}$, with $C$ the number of classes. After taking the softmax of $p'$ we get $p$. The value of $p_i$ is the probability that the image $I$ belongs to class $i$. The key takeaway here is that $p_i \propto w_i^{\top}x$.

Therefore, a model will generalise well when the $w_i$'s are spread apart. Further, all the feature vectors of the same class should form a cluster, and feature vectors of different classes should be far apart.

Figure 7: Arc-Margin W/o and With PL

In this model we have separate template vectors for the two types of losses. Figure 7 shows the difference between the template vectors obtained with and without pseudo-labelling: compared to the vectors in the left panel of Figure 7, the vectors in the right panel are more spread apart.

Figure 8: Softmax W/o and With PL

Figure 8 shows the softmax template vectors. Apart from the spread of the points, another thing to notice here are the blue points, which are templates for Junk Classes - classes for which there are no samples present in the data-set. The templates generated by the model trained with pseudo-labels form a completely separate cluster for such classes. The distinction between the Junk Classes and the actual classes is visible in all four panels of Figures 7 and 8, but it is best visible in the right panel of Figure 8.

Figure 9: Features W/o and With PL

Another interesting plot to look at is that of the feature vectors. The plot without pseudo-labelling (left panel of Figure 9) has vague clusters and an almost non-existent decision boundary. On the other hand, the clusters in the right panel of Figure 9 are much more well defined and tightly packed, with a clear decision boundary separating them. The significance is further increased by the fact that these three classes are the most prevalent in the data-set.

In the scatter plots of class template vectors and feature vectors, $w_i, x \in \mathbb{R}^{1024}$ initially; these are reduced to two dimensions using t-SNE [17].
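A sketch of this reduction with scikit-learn's t-SNE is shown below; the perplexity and the random placeholder standing in for the learned templates are illustrative only.

```python
import numpy as np
from sklearn.manifold import TSNE

# Reduce the 1024-dimensional class templates (rows of W) to 2-D for scatter plots
# like Figures 7-9; perplexity is an illustrative choice, not taken from the paper.
templates = np.random.randn(1108, 1024)           # placeholder for the learned W
templates_2d = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(templates)
```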

4.6 Convolution Layers

Convolution layers act as filters which can be tuned in the training process to pick up certain features and dampen others. The filter is usually square-shaped and extends through all the channels of the image on which it is acting. Let us consider a square input image for simplicity (the input image is in fact square-shaped in our case). Let the input image be $I_{in} \in \mathbb{R}^{W \times W \times C_{in}}$ and the convolution filter be $f \in \mathbb{R}^{n \times n \times C_{in}}$, where $C_{in}$ is the number of input channels. The action of one filter produces a new image with a single channel. When we club many such filters together, we get that many single-channel images as output, which can be stacked to form the channels of the output image $I_{out} \in \mathbb{R}^{W' \times W' \times C_{out}}$.

We are interested in filters of shape $1 \times 1 \times C_{in}$. Figure 10 shows two such filter layers. The image on the left of Figure 10 shows the $1 \times 1$ part of Denselayer 6 of Denseblock 1 (Table 2).

Another way to think of a single $1 \times 1$ convolution operation is as taking a weighted average of the channels of the input image $I_{in}$.

$I_{out,k} = w_{1,k} \times I_{in,1} + w_{2,k} \times I_{in,2} + \ldots + w_{C_{in},k} \times I_{in,C_{in}}$ (14)

where $[w_{1,k}, w_{2,k}, \ldots, w_{C_{in},k}]$ is the $k^{th}$ $1 \times 1$ filter of the convolution layer, $I_{out,k}$ is the $k^{th}$ channel of the output image and $I_{in,i}$ is the $i^{th}$ channel of the input image.
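The equivalence in equation (14) can be checked numerically with a small PyTorch snippet; the tensor sizes below are arbitrary.

```python
import torch
import torch.nn as nn

# A 1x1 convolution computes Eq. (14): each output channel is a weighted sum of the
# input channels. The check below reproduces output channel k = 0 by hand.
C_in, C_out, H, W = 6, 4, 32, 32
conv = nn.Conv2d(C_in, C_out, kernel_size=1, bias=False)
x = torch.randn(1, C_in, H, W)

out_conv = conv(x)
w = conv.weight[0, :, 0, 0]                       # [w_{1,k}, ..., w_{C_in,k}] for k = 0
out_manual = (w.view(1, C_in, 1, 1) * x).sum(dim=1)
assert torch.allclose(out_conv[:, 0], out_manual, atol=1e-6)
```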

What is important to note in both panels of Figure 10 is the vertical bands of one color, which are repeated horizontally, forming contiguous blocks of the same color.

A band of the same color signifies that instead of weighing each input channel ($I_{in,i}$) differently, the channels are multiplied by the same number - hence an average of sorts; let us call such an output $I_{avg,k}$ if this happens for the $k^{th}$ filter.

A block of the same color signifies that these average images $I_{avg,k}$ are repeated multiple times in the output image. If this happens for $z$ such filters, a lot of variability in the feature map is lost, which is not a good sign.

This phenomenon is observed predominantly in the two layers shown in Figure 10.

Figure 10: DB(1), DL(6), $1 \times 1$ and DB(2), DL(2), $1 \times 1$

4.7 Grad-CAM

Visualization techniques such as Grad-CAM [3.5] provide us tools to better understand which section of the image impacts the prediction in a positive way. This information is crucial to understanding the feature-selection capabilities of the network.

Figure 11: RPE, siRNA1131 | HEPG2, siRNA1134 | HEPG2, siRNA1138

In Figure 11, it is quite evident that the model seems to be paying attention to very specific features in the image. Further, it would not be wrong to say that these features are of significance, given that it is the Endoplasmic Reticulum on the left, Mitochondria in the middle and the Nucleolus on the right.

5 Discussion

Where does a model look when deciding on the proper classification? As seen in the previous section, thanks to advancements in visualization techniques like Grad-CAM, this question has become answerable to a certain extent.

A question that can now be put forward is - can we benefit from this information? Usually, when we train a model for an image-classification task, human capabilities are at par with the model (there may be minor differences on large-scale data-sets, where sometimes the model outperforms the human or vice-versa, but the gap is not even of one order of magnitude). For such problems, the details about where the model pays attention serve as nothing more than a sanity check. For example, a model trained to identify dogs should pay attention to the dog in the image; if the model does so, we are assured that it has indeed learned to look in the right direction.

Now consider a setting where humans cannot classify the images because there is no semantic object in the images to classify upon (which is the case in the problem at hand), and suppose we train an agent which can do this task and is really good at it too. Wouldn't it be interesting now to look at where the agent is looking?

Figure 12: Plot of Attention vs. Channel for the layer - Denseblock 4, Denselayer - 24

Figure 12 is a crude representation of where the model is looking. As can be seen, most attention is paid to Channel 5 of the image, for a given cell type and reagent label. This indicates that Channel 5 is the channel most affected by the reagent and is the channel most crucial for the prediction.

Attention in this case is defined as follows:

$Attention_i = ||L_{CAM}(Image) - Image_i||_2$ (15)

where $L_{CAM}(Image)$ is any CAM visualization and $Image_i$ is the $i^{th}$ channel of the image. The resulting vector $Attention \in \mathbb{R}^C$ ($C$ is the number of channels) is then max-normalized to get the final attention. A higher number for a channel suggests that the attention map is concordant with that channel, hence the channel is significant.
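Taking equation (15) literally, the attention vector could be computed as in the following sketch; it assumes the CAM has already been upsampled to the image resolution and scaled to a range comparable with the image channels.

```python
import torch

def channel_attention(cam, image):
    """Eq. (15): per-channel L2 distance between the CAM and each image channel,
    then max-normalised. `cam` has shape (H, W); `image` has shape (C, H, W)."""
    attention = torch.stack(
        [torch.norm(cam - image[i], p=2) for i in range(image.shape[0])]
    )
    return attention / attention.max()
```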

References

  • [1] Anthony JF Griffiths, Jeffrey H Miller, David T Suzuki, Richard C Lewontin, and William M Gelbart. Mendel’s experiments. In An Introduction to Genetic Analysis. 7th edition. WH Freeman, 2000.
  • [2] Michel Morange. The central dogma of molecular biology. Resonance, 14(3):236–247, 2009.
  • [3] Hassan Dana, Ghanbar Mahmoodi Chalbatani, Habibollah Mahmoodzadeh, Rezvan Karimloo, Omid Rezaiean, Amirreza Moradzadeh, Narges Mehmandoost, Fateme Moazzen, Ali Mazraeh, Vahid Marmari, et al. Molecular mechanisms and biological functions of sirna. International journal of biomedical science: IJBS, 13(2):48, 2017.
  • [4] Haiyong Han. Rna interference to knock down gene expression. In Disease Gene Identification, pages 293–302. Springer, 2018.
  • [5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • [6] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708, 2017.
  • [7] Connor J Parde, Carlos Castillo, Matthew Q Hill, Y Ivette Colon, Swami Sankaranarayanan, Jun-Cheng Chen, and Alice J O’Toole. Deep convolutional neural network features and the original image. arXiv preprint arXiv:1611.01751, 2016.
  • [8] Rajeev Ranjan, Carlos D Castillo, and Rama Chellappa. L2-constrained softmax loss for discriminative face verification. arXiv preprint arXiv:1703.09507, 2017.
  • [9] Wang Mei and Weihong Deng. Deep face recognition: A survey. arXiv preprint arXiv: 1804.06655, 2018.
  • [10] Hao Wang, Yitong Wang, Zheng Zhou, Xing Ji, Dihong Gong, Jingchao Zhou, Zhifeng Li, and Wei Liu. Cosface: Large margin cosine loss for deep face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5265–5274, 2018.
  • [11] Dong-Hyun Lee. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on challenges in representation learning, ICML, volume 3, page 2, 2013.
  • [12] Yves Grandvalet and Yoshua Bengio. Entropy regularization. Semi-supervised learning, pages 151–168, 2006.
  • [13] Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European conference on computer vision, pages 818–833. Springer, 2014.
  • [14] Aravindh Mahendran and Andrea Vedaldi. Understanding deep image representations by inverting them. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5188–5196, 2015.
  • [15] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2921–2929, 2016.
  • [16] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision, pages 618–626, 2017.
  • [17] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605, 2008.