
Competition-based Adaptive ReLU
for Deep Neural Networks

Junjia Chen and Zhibin Pan The authors are with the School of Electronic and Information Engineering, Xi’an Jiaotong University, Xi’an 710049, P. R. China. (e-mail: zbpan@xjtu.edu.cn)
Abstract

Activation functions introduce nonlinearity into deep neural networks. Most popular activation functions allow positive values to pass through while blocking or suppressing negative values. Starting from the idea that positive values and negative values are equally important and must compete for activation, we propose a new Competition-based Adaptive ReLU (CAReLU). CAReLU scales the input values based on the results of the competition between positive values and negative values. It defines two parameters that adjust the scaling strategy and can be trained uniformly with other network parameters. We verify the effectiveness of CAReLU on image classification, super-resolution, and natural language processing tasks. In our experiments, CAReLU performs better than other widely used activation functions; for example, replacing ReLU with CAReLU in ResNet-18 improves classification accuracy on the CIFAR-100 dataset. Its effectiveness and the new perspective on utilizing competition results between positive values and negative values make CAReLU a promising activation function.

Index Terms:
Adaptive activation function, ReLU, deep learning, competition-based

I Introduction

Deep learning has demonstrated superior performance across many applications and has been a focus of the academic and engineering communities for years. Deep learning techniques evolve rapidly, with hundreds of new models proposed each year, and they now dominate areas such as computer vision (CV) [1, 2] and natural language processing (NLP) [3, 4].

y=g(\bm{w}^{\rm T}\bm{x}+b). (1)

A deep neural network contains thousands of neural units. Every unit gathers input signals from other neurons and generates a signal for other neurons to process. The structure of a deep neural network is a directed acyclic graph in which every vertex represents a mapping from the signals of incoming arcs to new signals on outgoing arcs. Eq. (1) describes one of the most common vertices in deep neural networks, which applies an affine transformation followed by a non-linear mapping g(\cdot) to the input tensor, where \bm{x}\in\mathbb{R}^{d} is the d-dimensional input gathered from incoming arcs, \bm{w}\in\mathbb{R}^{d} is the weight vector, b\in\mathbb{R} is the bias, and y\in\mathbb{R} is the vertex's output. \bm{w} and b are obtained through training. The non-linear mapping is called the activation function, without which a deep neural network degenerates into a linear regression model.

{\rm ReLU}(\bm{w}^{\rm T}\bm{x}+b)={\rm max}(\bm{w}^{\rm T}\bm{x}+b,0). (2)

In the early stage of deep learning research, S-shaped functions such as the sigmoid and the hyperbolic tangent (tanh) were popular, but their saturating behavior leads to vanishing gradients in deep networks. The Rectified Linear Unit (ReLU) [5, 6], shown in Eq. (2), was then proposed and demonstrates fast convergence, good generalization, and ease of implementation. However, ReLU completely shuts down negative values, which led to LeakyReLU [7], which allows negative values to pass through with a small slope. Parametric ReLU (PReLU) [8] improves model fitting by making LeakyReLU's slope for the negative part trainable. Multi-phase ReLU [9] utilizes six different phases of the input. Sparse regularization [10] increases the sparsity of ReLU's input.

Activation functions with more complicated formulas have also been explored. Hendrycks and Gimpel proposed the Gaussian Error Linear Unit (GELU) and the Sigmoid Linear Unit (SiLU) [11], which are widely used in natural language processing. SiLU was independently rediscovered and named Swish [12] by Ramachandran et al. using meta-learning techniques. Inspired by SiLU's self-gating property, Mish [13] and Serf [14] were proposed to further improve performance. Yuen et al. developed the Universal Activation Function (UAF) [15], which generalizes several popular activation functions using five trainable parameters. Shen et al. proposed tanhLU [16] by integrating tanh into a linear unit. Noel et al. proposed the Growing Cosine Unit (GCU) [17], which can improve gradient flow and reduce network size.

Most activation functions process the negative part and the positive part of the input tensor differently. For example, ReLU completely blocks negative values, while LeakyReLU, PReLU, and SiLU scale them down. This implies an assumption that positive values are far more important than negative values. From the viewpoint of signal processing, however, positive and negative values are equally important, and they should compete for activation rather than negative values simply being shut down or suppressed.

In this paper, we propose a new Competition-based Adaptive ReLU (CAReLU). 1) CAReLU utilizes the results of the competition between positive values and negative values in the input tensor. 2) CAReLU has two parameters that can be trained uniformly with other network parameters. 3) Different competition indicators can be combined with CAReLU to generate different activation functions for different tasks. We evaluate our method and find that it consistently performs better than the most popular activation functions.

II Proposed Method

II-A Construction of Competition-based Adaptive ReLU

Our proposed method stems from the idea that both positive values and negative values are equally important and they must compete for activation. We choose energy as an indicator of the competition. The rule is that the competitor with higher energy wins the qualification for activation. Let \bm{z}=(z_{1},z_{2},\dots,z_{d})\in\mathbb{R}^{d} be the output of the previous affine transformation. The percentage of the positive values' energy p_{E} is defined as follows:

p_{E}=\frac{\sum_{j=1}^{d}[{\rm max}(z_{j},0)]^{2}}{\|\bm{z}\|^{2}+\epsilon}, (3)

where a small positive constant \epsilon is added to the denominator to prevent division by zero. We define s as follows:

s={\rm sgn}(2p_{E}-1), (4)

where {\rm sgn}(\cdot) is the sign function. If p_{E}<0.5, then s=-1, which means the negative values win the competition; if p_{E}>0.5, then s=1, which means the positive values win the competition. By multiplying \bm{z} by s before passing it to the ReLU, we can flip the input if the negative values have higher energy:

y_{i}={\rm ReLU}(sz_{i}). (5)
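As a concrete illustration of Eqs. (3)-(5), the short sketch below (plain Python, with illustrative numbers of our own choosing) computes p_{E}, the sign s, and the flipped ReLU output for a toy input in which the negative values carry more energy.

```python
# Worked example of the vanilla competition flip, Eqs. (3)-(5).
z = [3.0, -1.0, -2.0, -4.0]
eps = 1e-8

positive_energy = sum(max(v, 0.0) ** 2 for v in z)      # 9.0
total_energy = sum(v ** 2 for v in z)                   # 30.0
p_E = positive_energy / (total_energy + eps)            # ~0.3, so the negatives win

s = 1.0 if 2.0 * p_E - 1.0 > 0.0 else -1.0              # sgn(2 p_E - 1) = -1
y = [max(s * v, 0.0) for v in z]                        # ReLU(s * z)
print(p_E, s, y)                                        # ~0.3, -1.0, [0.0, 1.0, 2.0, 4.0]
```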

However, this vanilla version of our idea does not perform well. First, it cannot degenerate into the regular ReLU when prioritizing the higher-energy part impairs performance. Second, the sign function creates discontinuities in the model's parameter space, which harms gradient-based optimization. To address the first issue, we replace the fixed scaling factor of 2 and the bias of -1 in Eq. (4) with trainable parameters \alpha and \beta per layer:

s={\rm sgn}(\alpha p_{E}+\beta). (6)

If \alpha is close or equal to 0 after training, this mapping degenerates into ReLU with a factor of 1 or -1. Only two extra parameters are introduced per layer, which is negligible compared to the total number of weights.

To address the second issue, we replace the sign function with the tanh function as follows:

s=\tanh(\alpha p_{E}+\beta). (7)

The tanh function has a range from -1 to 1 and is a continuous function with a non-zero gradient; it can be viewed as a smooth version of the sign function.

There are indicators of the competition between positive values and negative values other than energy. The other two indicators we experiment with are:

p_{L1}=\frac{\sum_{j=1}^{d}|{\rm max}(z_{j},0)|}{\|\bm{z}\|_{1}+\epsilon}, (8)
p_{c}=\frac{\sum_{j=1}^{d}H(z_{j})}{d}, (9)

where H(\cdot) is the Heaviside step function, Eq. (8) is the percentage of the positive values' L1-norm, and Eq. (9) is the percentage of the number of positive values.

Competition-based Adaptive Scaling (CAS) can be defined as follows to adaptively scale the input \bm{z} based on the competition results:

{\rm CAS}(\bm{z})\triangleq K[\tanh(\alpha p+\beta)]\bm{z}, (10)

where K is a constant and p\in\{p_{E},p_{L1},p_{c}\}. Since |\tanh(\alpha p+\beta)| is less than 1, the magnitude of the feature vectors would shrink layer by layer in a sequential stack of layers. To counteract this, the constant K scales up the result of the tanh function. By composing CAS and ReLU, we construct the Competition-based Adaptive ReLU as follows:

{\rm CAReLU}(\bm{z})\triangleq{\rm ReLU}({\rm CAS}(\bm{z})). (11)
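A minimal PyTorch sketch of CAS and CAReLU, under our own assumptions, is given below: the module and function names are ours, the indicator is computed per sample over all feature dimensions (one plausible reading of Eqs. (3), (8), and (9)), and the parameters are initialized as in Eq. (13) later in the paper.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


def positive_ratio(z, indicator="E", eps=1e-8):
    """Competition indicator p (Eq. (3), (8), or (9)), one value per sample."""
    flat = z.flatten(start_dim=1)                # assumes a batched input (N, ...)
    flat_pos = flat.clamp(min=0.0)
    if indicator == "E":                         # energy, Eq. (3)
        p = (flat_pos ** 2).sum(dim=1) / ((flat ** 2).sum(dim=1) + eps)
    elif indicator == "L1":                      # L1 norm, Eq. (8)
        p = flat_pos.sum(dim=1) / (flat.abs().sum(dim=1) + eps)
    else:                                        # fraction of positive entries, Eq. (9)
        p = (flat > 0).float().mean(dim=1)
    return p.view(-1, *([1] * (z.dim() - 1)))    # broadcastable over feature dims


class CAS(nn.Module):
    """Competition-based Adaptive Scaling, Eq. (10)."""

    def __init__(self, indicator="E", beta0=1.0):
        super().__init__()
        self.indicator = indicator
        self.alpha = nn.Parameter(torch.zeros(1))           # alpha_0 = 0, Eq. (13)
        self.beta = nn.Parameter(torch.full((1,), beta0))   # beta_0 = 1, Eq. (13)
        self.K = 1.0 / math.tanh(beta0)                     # K ~ 1.313, identity at init

    def forward(self, z):
        p = positive_ratio(z, self.indicator)
        return self.K * torch.tanh(self.alpha * p + self.beta) * z


class CAReLU(nn.Module):
    """CAReLU(z) = ReLU(CAS(z)), Eq. (11)."""

    def __init__(self, indicator="E"):
        super().__init__()
        self.cas = CAS(indicator)

    def forward(self, z):
        return F.relu(self.cas(z))
```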

II-B Working with Batch Normalization

Batch Normalization (BN) [18] is widely used in convolutional neural networks. BN normalizes and rescales the input over a mini-batch with two parameters before activation. Since CAReLU relies on the competition between positive values and negative values, BN's normalization might impair this process. To improve our method's compatibility with BN, we define {\rm BN\text{-}CAReLU} as follows by placing Eq. (10) before BN, so that the neural network obtains the competition results before normalization:

{\rm BN\text{-}CAReLU}(\bm{z})\triangleq{\rm ReLU}({\rm BN}({\rm CAS}(\bm{z}))). (12)
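Reusing the CAS module sketched above, Eq. (12) can be written as the composition below; a 2-D BatchNorm is assumed here because the paper pairs BN-CAReLU with convolutional networks, and the class name is ours.

```python
class BNCAReLU(nn.Module):
    """BN-CAReLU(z) = ReLU(BN(CAS(z))), Eq. (12), for (N, C, H, W) feature maps."""

    def __init__(self, num_channels, indicator="E"):
        super().__init__()
        self.cas = CAS(indicator)                 # competition results computed first...
        self.bn = nn.BatchNorm2d(num_channels)    # ...then the mini-batch is normalized

    def forward(self, z):
        return F.relu(self.bn(self.cas(z)))
```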

II-C Gradients of the Competition-based Adaptive Scaling

Let \alpha_{0} be the initial value of \alpha and \beta_{0} be the initial value of \beta. Instead of running a grid search for initial values, we simply set these values as follows:

\begin{dcases}\alpha_{0}=0,\\ \beta_{0}=1,\\ K=1/\tanh(\beta_{0})\approx 1.313,\end{dcases} (13)

where \alpha_{0}=0 means that the deep neural network does not utilize competition results at the beginning of training, and \beta_{0}=1 makes the tanh function start in a non-saturating region, where the gradient is large enough to initiate updates. The constant K is set to 1/\tanh(\beta_{0}) so that CAS is initialized as an identity mapping when training begins.
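With these values, K\tanh(\alpha_{0}p+\beta_{0})=\tanh(1)/\tanh(1)=1 for every p, so CAS is exactly the identity at initialization; the short check below, using the CAS sketch above, illustrates this.

```python
# At initialization, CAS reduces to the identity mapping regardless of the input.
torch.manual_seed(0)
cas = CAS(indicator="E")                 # alpha = 0, beta = 1, K = 1 / tanh(1)
z = torch.randn(4, 8)                    # arbitrary batch of pre-activations
assert torch.allclose(cas(z), z, atol=1e-6)
```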

To update the parameters, we need to compute gradients of the loss function with respect to \alpha, \beta, and the input value z_{i}. Let \hat{\bm{z}}=(\hat{z}_{1},\hat{z}_{2},\dots,\hat{z}_{d})\in\mathbb{R}^{d} be the output of CAS as follows:

\hat{\bm{z}}={\rm CAS}(\bm{z}). (14)

Gradients can be derived from the chain rule as follows:

\frac{\partial\mathcal{L}}{\partial\alpha}=K\frac{4p}{({\rm e}^{\alpha p+\beta}+{\rm e}^{-\alpha p-\beta})^{2}}\sum_{j=1}^{d}\frac{\partial\mathcal{L}}{\partial\hat{z}_{j}}z_{j}, (15)
\frac{\partial\mathcal{L}}{\partial\beta}=K\frac{4}{({\rm e}^{\alpha p+\beta}+{\rm e}^{-\alpha p-\beta})^{2}}\sum_{j=1}^{d}\frac{\partial\mathcal{L}}{\partial\hat{z}_{j}}z_{j},
\frac{\partial\mathcal{L}}{\partial z_{i}}=K\left[\frac{4\alpha}{({\rm e}^{\alpha p+\beta}+{\rm e}^{-\alpha p-\beta})^{2}}\frac{\partial p}{\partial z_{i}}\left(\sum_{j=1}^{d}\frac{\partial\mathcal{L}}{\partial\hat{z}_{j}}z_{j}\right)+\frac{\partial\mathcal{L}}{\partial\hat{z}_{i}}\tanh(\alpha p+\beta)\right],

where \mathcal{L} is the loss function and \frac{\partial\mathcal{L}}{\partial\hat{z}_{i}} is obtained from backpropagation. \frac{\partial p}{\partial z_{i}} is the partial derivative of the competition indicator p with respect to the input. For p_{E}, p_{L1}, and p_{c}, we ignore the small positive constant \epsilon, and the gradients are derived as follows:

\frac{\partial p_{E}}{\partial z_{i}}=\frac{2{\rm max}(z_{i},0)\|\bm{z}\|^{2}-2z_{i}\sum_{j=1}^{d}[{\rm max}(z_{j},0)]^{2}}{(\sum_{j=1}^{d}z_{j}^{2})^{2}}, (16)
\frac{\partial p_{L1}}{\partial z_{i}}=\begin{dcases}\frac{\sum_{j=1}^{d}{\rm max}(z_{j},0)}{\|\bm{z}\|_{1}^{2}},&z_{i}<0,\\ \frac{\|\bm{z}\|_{1}-\sum_{j=1}^{d}{\rm max}(z_{j},0)}{\|\bm{z}\|_{1}^{2}},&z_{i}>0,\end{dcases}
\frac{\partial p_{c}}{\partial z_{i}}=0.

Thus, we have obtained all gradients of CAS needed for gradient descent and backpropagation. Since Eq. (11) and Eq. (12) are simply different compositions of CAS, ReLU, and BN, all of which are differentiable with respect to their inputs and parameters, their gradients follow from the chain rule.
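In practice, these gradients need not be implemented by hand: CAS is built from differentiable primitives, so automatic differentiation reproduces Eqs. (15) and (16) except at the measure-zero points where the max is not differentiable. A quick finite-difference consistency check on the input gradient, using the CAS sketch above (gradcheck requires double precision), might look like this.

```python
# Compare backpropagated CAS gradients with finite differences on a random input.
torch.manual_seed(0)
cas = CAS(indicator="E").double()
z = torch.randn(2, 5, dtype=torch.double, requires_grad=True)
assert torch.autograd.gradcheck(cas, (z,), eps=1e-6, atol=1e-4)
```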

III Experiments

In this section, we evaluate CAReLU across different applications. For models without BN, we directly replace the existing activation functions with Eq. (11) in every layer. For models with BN, we also replace the {\rm BN\text{-}ReLU} combination with Eq. (12) in every layer. We use {\rm CAReLU_{E}}, {\rm CAReLU_{L1}}, and {\rm CAReLU_{c}} to denote CAReLU implemented with p_{E} in Eq. (3), p_{L1} in Eq. (8), and p_{c} in Eq. (9), respectively. The following experiments show that our method outperforms other popular activation functions in multiple applications.
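One possible way to carry out this replacement in a PyTorch model is sketched below; the helper and the use of torchvision's ResNet-18 are our assumptions, since the paper only states that the activations are replaced in every layer.

```python
import torchvision


def replace_relu(module, indicator="E"):
    """Recursively swap every nn.ReLU submodule for a CAReLU (sketch)."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, CAReLU(indicator))
        else:
            replace_relu(child, indicator)


model = torchvision.models.resnet18(num_classes=100)
replace_relu(model)   # CAReLU's alpha and beta now train jointly with the other weights
```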

III-A CIFAR-100 Image Classification

We compare our method to the most popular activation functions on the CIFAR-100 image classification task [19]. CIFAR-100 is a dataset with 100 classes, containing 500 training images and 100 test images per class. We use the ResNet-18 [20], GoogLeNet [21], and VGG-13 [22] networks to evaluate our activation functions. A stochastic gradient descent (SGD) optimizer with a momentum of 0.9 and a weight decay of 5\times 10^{-4} is used to train all networks. The learning rate starts at 0.1 and is divided by 5 at the 50th, 120th, and 160th epochs. Training ends at the 200th epoch.
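Continuing from the model sketched above, the stated optimizer and schedule can be set up as follows (dividing the learning rate by 5 corresponds to MultiStepLR with gamma = 0.2; the training loop itself is omitted).

```python
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[50, 120, 160], gamma=0.2)   # lr / 5 at epochs 50, 120, 160

for epoch in range(200):
    # ... one training epoch over CIFAR-100 ...
    scheduler.step()
```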

TABLE I: Top-1 Accuracy (%) on CIFAR-100 Test Set
Methods         ResNet-18             GoogLeNet             VGG-13
ReLU            76.14 ± 0.21          78.53 ± 0.19          72.53 ± 0.20
LeakyReLU       76.18 ± 0.14          78.43 ± 0.08          72.27 ± 0.11
PReLU           74.26 ± 0.14          76.14 ± 0.37          71.02 ± 0.15
Swish-1         75.74 ± 0.15          75.74 ± 0.08          71.36 ± 0.07
Swish           76.30 ± 0.25          77.31 ± 0.24          72.10 ± 0.25
BN-CAReLU_E     76.50 ± 0.20          \bm{79.23 ± 0.13}     \bm{72.85 ± 0.19}
CAReLU_E        \bm{76.62 ± 0.23}     79.21 ± 0.15          72.62 ± 0.14
BN-CAReLU_L1    76.44 ± 0.15          78.94 ± 0.18          72.47 ± 0.32
CAReLU_L1       76.43 ± 0.15          78.89 ± 0.22          72.58 ± 0.36
BN-CAReLU_c     76.29 ± 0.28          78.60 ± 0.28          72.46 ± 0.23
CAReLU_c        76.13 ± 0.21          78.65 ± 0.11          72.24 ± 0.26

Table I shows the results of the classification experiment. Most implementations of our method perform better overall than the baseline methods. {\rm CAReLU_{E}} has the highest accuracy in the ResNet-18 experiments, and {\rm BN\text{-}CAReLU_{E}} has the highest accuracy in the GoogLeNet and VGG-13 experiments. Comparing the different implementations of our method, {\rm BN\text{-}CAReLU} performs better than {\rm CAReLU}, except for {\rm CAReLU_{E}} in the ResNet-18 experiments and {\rm CAReLU_{c}} in GoogLeNet, which supports the earlier assumption that BN's normalization impairs the effectiveness of our method. The energy-based implementation generally performs better than the other two. The reason is that the gradient of {\rm CAReLU_{E}} is a continuous function and smoother than that of {\rm CAReLU_{L1}}, while {\rm CAReLU_{c}} contains Heaviside step functions, which result in a bumpy loss landscape [23] and thus impair performance.

TABLE II: Values of \tanh(\alpha p+\beta) from the First 10 CAReLUs of the Best Trained Models
CAS    ResNet-18           GoogLeNet           VGG-13
#1     0.5418 ± 0.0172     0.1775 ± 0.0007     0.1861 ± 0.0028
#2     0.1915 ± 0.0113     0.2028 ± 0.0169     0.1447 ± 0.0078
#3     0.4540 ± 0.0218     0.1449 ± 0.0049     0.2670 ± 0.0265
#4     0.0678 ± 0.0065     0.1591 ± 0.0079     0.1489 ± 0.0077
#5     0.2266 ± 0.0060     0.1558 ± 0.0250     0.1388 ± 0.0045
#6     0.1327 ± 0.0100     0.5597 ± 0.0305     0.9923 ± 0.0000
#7     0.5954 ± 0.0067     0.3055 ± 0.0020     0.0962 ± 0.0085
#8     0.1049 ± 0.0046     0.2143 ± 0.0399     0.4725 ± 0.0060
#9     0.1795 ± 0.0033     0.1317 ± 0.0080     0.1553 ± 0.0245
#10    0.1673 ± 0.0149     0.4171 ± 0.0008     0.2692 ± 0.0849
Figure 1: Histograms of \alpha p+\beta obtained from the best trained models. (a) ResNet-18 / {\rm CAReLU_{E}}. (b) GoogLeNet / {\rm BN\text{-}CAReLU_{E}}. (c) VGG-13 / {\rm BN\text{-}CAReLU_{E}}.

We also evaluate the values of \tanh(\alpha p+\beta) produced by the CAS in each layer. Table II shows the means and standard deviations of \tanh(\alpha p+\beta) from the first 10 CASs of the best models on the test set. The small deviations indicate that the CASs of trained models do not generate significantly different values of \tanh(\alpha p+\beta) for different samples. Instead, input values are scaled roughly uniformly and then fine-tuned for each sample according to the input's competition results. Some CASs degenerate into constant scaling, such as CAS #6 of VGG-13 in Table II. Allowing degeneracy into ReLU enables our method to utilize competition results without compromising performance whenever the original approach is optimal. Histograms of \alpha p+\beta obtained from the best models on the test set are shown in Fig. 1. Although our initial design was the binary scale factor described in Eq. (4), the values of \alpha p+\beta mostly land in non-saturating regions.

III-B BSD-300 Image Super Resolution

We compare our method to the most popular activation functions on an image super-resolution task on the Berkeley segmentation dataset (BSD300), which contains 200 training images and 100 test images [24]. The network used for this experiment is an efficient sub-pixel convolutional neural network (ESPCN), which comprises several convolutional layers and a pixel-shuffle layer [25]. Training images are cropped to 256\times 256 and scaled down to (256/r)\times(256/r), where r is the upscale factor. The ESPCN network upscales the down-scaled images back to 256\times 256. An Adam optimizer [26] with a learning rate of 1\times 10^{-3} is used to train the network for 200 epochs. We run every setting 5 times and report the results in Table III.
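The PSNR values in Table III below follow the standard peak signal-to-noise ratio definition; for image tensors scaled to [0, 1], a minimal helper (ours, not the authors' evaluation code) is:

```python
def psnr(pred, target, max_val=1.0, eps=1e-12):
    """Peak signal-to-noise ratio in dB between two image tensors in [0, max_val]."""
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / (mse + eps))
```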

The data show that {\rm CAReLU_{E}} and {\rm CAReLU_{L1}} surpass the other activation functions in PSNR. {\rm CAReLU_{c}} outperforms PReLU, Swish-1, and Swish, but does not perform as well as {\rm CAReLU_{E}} and {\rm CAReLU_{L1}} because of the discontinuity introduced by the Heaviside function.

TABLE III: PSNR (dB) on the BSD300 Super-resolution Task
Methods      r = 3                   r = 4
ReLU         25.001 ± 0.014          23.634 ± 0.008
LeakyReLU    25.005 ± 0.008          23.635 ± 0.010
PReLU        24.980 ± 0.015          23.635 ± 0.014
Swish-1      24.934 ± 0.010          23.585 ± 0.010
Swish        24.947 ± 0.013          23.590 ± 0.011
CAReLU_E     \bm{25.018 ± 0.006}     23.642 ± 0.008
CAReLU_L1    25.015 ± 0.003          \bm{23.643 ± 0.009}
CAReLU_c     25.000 ± 0.012          23.631 ± 0.013

III-C Natural Language Inference on SNLI

In this section, we evaluate our method on the Stanford Natural Language Inference (SNLI) corpus [27]. The SNLI corpus is a collection of 570k human-written English sentence pairs labeled as entailment, contradiction, or neutral. Each sentence pair comprises a premise and a hypothesis. The model we use for this task comprises an embedding layer, a Long Short-Term Memory (LSTM) [28] encoder, and a sequence of fully connected layers. The hypothesis and the premise pass through the embedding layer and the encoder independently. We concatenate the encoded hypothesis and premise features and send them through the fully connected layers for classification. We use the Adam optimizer with a learning rate of 0.001 to train the parameters for 50 epochs on the SNLI training set. We run every setting 5 times and report the results in Table IV.
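A minimal sketch of the described classifier is shown below; the vocabulary size, embedding dimension, hidden dimension, and layer widths are placeholders of our own, since the paper does not specify them, and the CAReLU module from the sketch in Section II is reused as the activation.

```python
class SNLIClassifier(nn.Module):
    """Embedding -> LSTM encoder -> feature concatenation -> fully connected layers."""

    def __init__(self, vocab_size, embed_dim=300, hidden_dim=512, num_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            CAReLU(indicator="E"),
            nn.Linear(hidden_dim, num_classes),
        )

    def encode(self, tokens):
        _, (h_n, _) = self.encoder(self.embedding(tokens))
        return h_n[-1]                      # final hidden state of the last LSTM layer

    def forward(self, premise, hypothesis):
        features = torch.cat([self.encode(premise), self.encode(hypothesis)], dim=1)
        return self.classifier(features)
```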


TABLE IV: Classification Accuracy (%) on SNLI Test Set
Methods      Acc
ReLU         77.69 ± 0.25
LeakyReLU    77.12 ± 0.48
PReLU        77.25 ± 0.33
Swish-1      -
Swish        -
CAReLU_E     \bm{78.67 ± 0.25}
CAReLU_L1    78.41 ± 0.39
CAReLU_c     78.40 ± 0.25

  • "-" indicates that training with this activation function does not converge.

The data suggest that our methods perform better than the other activation functions. Swish and Swish-1 do not converge in this task. {\rm CAReLU_{E}} achieves the highest classification accuracy in this experiment. Although the mean accuracies of {\rm CAReLU_{L1}} and {\rm CAReLU_{c}} are approximately identical, {\rm CAReLU_{c}} is more robust because it has a smaller standard deviation. All three implementations of our method outperform the baseline activation functions.

IV Conclusion

Stemming from the idea that positive values and negative values are equally important and must compete for activation, we developed the Competition-based Adaptive ReLU (CAReLU) activation function by introducing a competition between positive values and negative values. A CAReLU has two parameters that can be trained uniformly with other network parameters using gradient descent. By implementing CAReLU with each of the three competition indicators, we obtained three activation functions: {\rm CAReLU_{E}}, {\rm CAReLU_{L1}}, and {\rm CAReLU_{c}}. We also developed a technique for working with Batch Normalization that yields an extra performance gain.

We evaluated our method on different tasks and achieved consistent performance improvements. {\rm CAReLU_{E}} is generally the most effective, but {\rm CAReLU_{L1}} and {\rm CAReLU_{c}} can also achieve strong performance in certain scenarios. The effectiveness and the new perspective on the competition between positive values and negative values make CAReLU promising for deep learning tasks.

References

  • [1] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM, vol. 60, no. 6, pp. 84–90, May 2017.
  • [2] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly et al., “An image is worth 16x16 words: Transformers for image recognition at scale,” arXiv preprint arXiv:2010.11929, 2020.
  • [3] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. u. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances in Neural Information Processing Systems, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds., vol. 30.   Curran Associates, Inc., 2017.
  • [4] D. W. Otter, J. R. Medina, and J. K. Kalita, “A survey of the usages of deep learning for natural language processing,” IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 2, pp. 604–624, 2021.
  • [5] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun, “What is the best multi-stage architecture for object recognition?” in 2009 IEEE 12th International Conference on Computer Vision, 2009, pp. 2146–2153.
  • [6] V. Nair and G. E. Hinton, “Rectified linear units improve restricted boltzmann machines,” in Proceedings of the 27th International Conference on Machine Learning (ICML-10), 2010, pp. 807–814.
  • [7] A. L. Maas, A. Y. Hannun, A. Y. Ng et al., “Rectifier nonlinearities improve neural network acoustic models,” in Proc. ICML, vol. 30, no. 1.   Atlanta, Georgia, USA, 2013, p. 3.
  • [8] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” in 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1026–1034.
  • [9] C. Banerjee, T. Mukherjee, and E. Pasiliao Jr, “The multi-phase relu activation function,” in Proceedings of the 2020 ACM Southeast Conference, 2020, pp. 239–242.
  • [10] H. Ide and T. Kurita, “Improvement of learning for CNN with ReLU activation by sparse regularization,” in 2017 International Joint Conference on Neural Networks (IJCNN).   IEEE, 2017, pp. 2684–2691.
  • [11] D. Hendrycks and K. Gimpel, “Gaussian error linear units (GELUs),” arXiv preprint arXiv:1606.08415, 2016.
  • [12] P. Ramachandran, B. Zoph, and Q. V. Le, “Searching for activation functions,” arXiv preprint arXiv:1710.05941, 2017.
  • [13] D. Misra, “Mish: A self regularized non-monotonic neural activation function,” arXiv preprint arXiv:1908.08681, 2019.
  • [14] S. Nag, M. Bhattacharyya, A. Mukherjee, and R. Kundu, “Serf: Towards better training of deep neural networks using log-softplus error activation function,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2023, pp. 5324–5333.
  • [15] B. Yuen, M. T. Hoang, X. Dong, and T. Lu, “Universal activation function for machine learning,” Scientific Reports, vol. 11, no. 1, pp. 1–11, 2021.
  • [16] S.-L. Shen, N. Zhang, A. Zhou, and Z.-Y. Yin, “Enhancement of neural networks with an alternative activation function tanhLU,” Expert Syst. Appl., vol. 199, no. C, Aug. 2022.
  • [17] M. M. Noel, A. Trivedi, P. Dutta et al., “Growing cosine unit: A novel oscillatory activation function that can speedup training and reduce parameters in convolutional neural networks,” arXiv preprint arXiv:2108.12943, 2021.
  • [18] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in ICML, 2015.
  • [19] A. Krizhevsky, G. Hinton et al., “Learning multiple layers of features from tiny images,” Univ. of Toronto, Tech. Rep., 2009.
  • [20] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
  • [21] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 1–9.
  • [22] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
  • [23] H. Li, Z. Xu, G. Taylor, C. Studer, and T. Goldstein, “Visualizing the loss landscape of neural nets,” in Advances in Neural Information Processing Systems, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Eds., vol. 31.   Curran Associates, Inc., 2018.
  • [24] D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001, vol. 2, 2001, pp. 416–423 vol.2.
  • [25] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 1874–1883.
  • [26] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [27] S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning, “A large annotated corpus for learning natural language inference,” in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP).   Association for Computational Linguistics, 2015.
  • [28] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.