
Compensation Learning

Rujing Yao (rjyao@mail.nankai.edu.cn), Ou Wu (wuou@tju.edu.cn)

Abstract—The weighting strategy prevails in machine learning. For example, a common approach in robust machine learning is to exert lower weights on samples which are likely to be noisy or quite hard. This study reveals another under-explored strategy, namely, compensating. Various incarnations of compensating have been utilized, but the strategy has not been explicitly identified. Learning with compensating is called compensation learning, and a systematic taxonomy is constructed for it in this study. In our taxonomy, compensation learning is categorized on the basis of the compensation targets, directions, inference manners, and granularity levels. Many existing learning algorithms, including some classical ones, can be viewed or understood at least partially as compensation techniques. Furthermore, a family of new learning algorithms can be obtained by plugging compensation learning into existing learning algorithms. Specifically, two concrete new learning algorithms are proposed for robust machine learning. Extensive experiments on image classification and text sentiment analysis verify the effectiveness of the two new algorithms. Compensation learning can also be used in various other learning scenarios, such as imbalance learning, clustering, and regression.

Index Terms—Sample weighting, Compensation learning, Robust machine learning, Learning taxonomy.

1 Introduction

In supervised learning, a loss function is defined on the training set, and the training goal is to seek optimal models by minimizing the training loss. According to the degree of training difficulty, samples can be divided into easy, medium, hard, and noisy samples. Generally, easy and medium samples are indispensable and positively influence the training. The whole training procedure can significantly benefit from medium samples if appropriate learning manners are leveraged. However, the whole training procedure is vulnerable to noisy samples and some quite hard samples.

A common practice is to introduce the weighting strategy if hard and noisy samples exist. Low weights are assigned to noisy and quite hard samples to reduce their negative influences during loss minimization. This strategy usually infers the weights and subsequently conducts training on the basis of the weighted loss [33]. Wang et al. [50] proposed a Bayesian method to infer the sample weights as latent variables. Kumar et al. [26] proposed a self-paced learning (SPL) manner that combines the two steps as a whole by using an added regularizer. Meta learning [29, 40, 54] is introduced to alternately infer weights and seek model parameters with an additional validation set.

Various robust learning methods exist that do not rely on the weighting strategy. For example, the classical support vector machine (SVM) [5] introduces slack variables to address possibly noisy samples, and robust clustering [8] introduces additional vectors to cope with noise. However, a unified theory to better explain such methods and subsequently illuminate more novel methods remains lacking. In this study, another under-explored yet widely used strategy, namely, compensating, is revealed and further investigated. Mathematically, the compensating strategy adds a perturbation term (called the compensation term in this study) to a feature vector, a logit vector, a loss, etc., whereas weighting multiplies such a quantity by a factor. Many existing learning methods, including some classical ones, can be viewed as introducing, or being partially based on, compensating. Learning with compensating is referred to as compensation learning in this study.

We conduct a pilot study for compensation learning in terms of theoretical taxonomy, connections with existing classical learning methods, and new concrete compensation learning methods. First, five compensation targets, three directions, five inference manners, and four granularity levels are defined. Second, several existing learning methods are re-explained from the viewpoint of compensation learning. Third, two concrete compensation learning algorithms are proposed, namely, logit compensation with $l_1$-regularization (LogComp) and mixed compensation (MixComp). Last, the two proposed learning algorithms are evaluated on benchmark data sets for image classification and text sentiment classification.

Our main contributions are summarized as follows:

1) An under-explored yet widely used learning strategy, namely, compensating, is identified and formalized in this study. A new learning paradigm, named compensation learning, is presented, and a taxonomy is constructed for it. In addition to robust learning, which is the main focus of this paper, other learning scenarios, such as imbalance learning, can also benefit from compensation learning.

2) Several typical learning methods are re-explained from the viewpoint of compensation learning. New insights can be obtained for these methods. Theoretically, various new methods can be generated by introducing the idea of compensating into existing methods. Sections V and VI-D present examples.

3) Two concrete new compensation learning methods are proposed. Experiments on robust learning on four benchmark sets verify their effectiveness compared with several existing classical methods.

2 Related Work

2.1 The Weighting Strategy in Machine Learning

Weighting is a widely used machine learning strategy in at least the following five areas: noise-aware learning [38], curriculum learning [1], crowdsourcing learning [7], cost-sensitive learning [4], and imbalance learning [21]. In noise-aware and curriculum learning, weights are sample-wise; in cost-sensitive learning, weights can be sample-wise, category-wise, or mixed; in imbalance learning, weights are usually category-wise.

Intuitively, the weights of medium and some hard samples should be kept or enlarged, and the weights of quite hard samples should be kept or reduced. For example, in Focal loss [31], the weights of easy samples are (relatively) reduced and those of hard samples are (relatively) enlarged (in fact, if the weights of quite hard samples are reduced, the performance will be increased [28]). Most existing studies do not assume the above division. Instead, samples are usually divided into easy/non-easy or normal/noisy. For example, in Focal loss and AdaBoost [9], the weights of non-easy samples are gradually increased.

In cost-sensitive learning, the weights are associated with the pre-determined costs. In imbalance learning, categories with lower proportions are usually negatively affected. Therefore, increasing the weights of samples in the low-proportion categories is a common practice.

The compensating strategy investigated in this study does not intend to eliminate the weighting strategy. Instead, this study summarizes various existing learning ideas that do not utilize weighting and systematically organizes them into a new learning paradigm, namely, compensation learning. The two strategies can be mutually beneficial; for example, inspired by our taxonomy for compensation learning, a sample-level weighting method (e.g., Focal loss) can be transformed into a category-level one (e.g., by replacing the sample-level prediction $y_i$ with the category-level average $y_c$). Theoretically, each concrete weighting-based learning method may correspond to a concrete compensating-based learning method. A solid and deep investigation of the weighting strategy in machine learning will significantly benefit compensation learning.

2.2 Noise-aware Machine Learning

This study investigates compensation learning mainly in learning with noisy labels. The weighting strategy prevails in this area. Two common technical solutions exist.

In the first solution, noise detection is performed, and noisy samples may be assigned lower weights in the successive model training. Koh and Liang [15] defined an influence function to measure the impact of each sample on the model training. Samples with higher influence scores are more likely to be noisy. Huang et al. [22] conducted a cyclical pre-training strategy and recorded the training losses of each sample across all cycles. The samples with higher average training losses are more likely to be noisy.

In the second solution, an end-to-end procedure is leveraged to construct a noise-robust model. Reed et al. [39] proposed a Bootstrapping loss to reduce the negative impact of samples which may be noisy. Goldberger and Ben-Reuven [12] designed a noise adaptation layer to model the relationship between observed labels that may be noisy and true latent labels.

A recent survey can be found in [14]. Compensation learning can replace weighting in both solutions above. In this study, only the second solution is considered.

2.3 Robust Machine Learning

A formal definition of robust machine learning does not exist at present. There are two typical learning scenarios for robust machine learning. The first refers to the robustness of a learning process, while the second refers to the robustness of a trained model. In the first scenario, a robust learning method should cope well with training data that may be noisy [44, 56], imbalanced [24], few-shot [53, 6], etc. In the second scenario, a robust trained model should cope well with adversarial attacks [58]. Both scenarios have received increasing attention in recent years. Both the weighting and the compensating strategies are widely used in the first scenario, whereas mainly the compensating strategy is utilized in the second.

Figure 1: Taxonomy of compensation learning.

3 A Taxonomy of Compensation Learning

Compensating can be used in many learning scenarios. This section leverages classification as the illustrative example. Given a training set $S=\{x_i, y_i\}$, $i=1,\ldots,N$, where $x_i$ is the $i$-th sample and $y_i \in \{1,\ldots,c,\ldots,C\}$ is its categorical label. In a standard supervised deep learning context, let $u_i$ be the logit vector for $x_i$ output by a deep neural network. The training loss can be written as follows:

\mathcal{L} = \sum\nolimits_{i} l(\mathbb{S}(f(x_{i})), y_{i}) = \sum\nolimits_{i} l(\mathbb{S}(u_{i}), y_{i}),  (1)

where $\mathbb{S}(\cdot)$ transforms the logit vector $u_i$ into a soft label $p_i$, $f(\cdot)$ represents a deep neural network, and $u_i = f(x_i)$.

In the weighting strategy, the loss function is usually defined as follows:

\mathcal{L} = \sum\nolimits_{i} w_{i} \cdot l(\mathbb{S}(u_{i}), y_{i}),  (2)

where $w_i$ is the weight associated with the sample $x_i$. Theoretically, the more likely a sample is noisy or quite hard, the lower its weight.

The compensating strategy investigated in this study can also increase or reduce the influences of samples in model training on the basis of their degrees of training difficulty. For instance, a negative value can be added to reduce the loss incurred by a noisy sample. As a result, the negative influence of this sample is reduced because its impact on the gradients is reduced. Conversely, when the influence should be increased, a positive value can be added to the loss incurred by the sample. In terms of mathematical computation, "weighting" relies on the multiplication operation, whereas "compensating" relies on the addition operation.
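To make the contrast concrete, the following minimal PyTorch sketch (all tensor values and weights are illustrative assumptions, not taken from the paper) applies both strategies to per-sample cross-entropy losses:

import torch
import torch.nn.functional as F

logits = torch.tensor([[3.0, 0.8, 0.2],   # confident sample
                       [0.1, 0.2, 0.1]])  # uncertain, possibly noisy sample
labels = torch.tensor([0, 1])
losses = F.cross_entropy(logits, labels, reduction="none")

# Weighting (Eq. (2)): multiply each sample loss by a weight w_i.
weights = torch.tensor([1.0, 0.2])        # down-weight the suspect sample
weighted_loss = (weights * losses).sum()

# Compensating (Eq. (7)): add a term to each sample loss; a negative term
# reduces the loss incurred by the suspect sample.
delta_l = torch.tensor([0.0, -0.8])
compensated_loss = (losses + delta_l).sum()

Note that a constant additive term alone does not change the gradient of that sample; the concrete targets introduced below (feature, logit, and label compensation) attach the term to quantities inside the loss so that the influence on training actually changes.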

Fig. 1 shows the constructed taxonomy of the whole compensation learning. This section introduces each item in the taxonomy.

3.1 Compensation Targets

Eq. (1) contains four different types of variables for each sample, namely, raw feature $x_i$, logit vector $u_i$, label $y_i$, and sample loss $l_i$ ($= l(\mathbb{S}(u_i), y_i)$). Therefore, compensation targets can be feature, logit vector, label, and loss.

(1) Compensation for feature (Feature compensation). In this kind of compensation, the raw feature vector ($x_i$) or transformed feature vector (e.g., dense feature) of each sample can have a compensation vector ($\Delta x_i$). Eq. (1) becomes

\mathcal{L} = \sum\nolimits_{i} l(\mathbb{S}(f(x_{i} + \Delta x_{i})), y_{i}) = \sum\nolimits_{i} l(\mathbb{S}(u_{i}^{\prime}), y_{i}).  (3)

(2) Compensation for logit vector (Logit compensation). In this kind of compensation, the logit vector ($u_i$) of each sample can have a compensation vector ($\Delta u_i$). Eq. (1) becomes

\mathcal{L} = \sum\nolimits_{i} l(\mathbb{S}(u_{i} + \Delta u_{i}), y_{i}).  (4)

(3) Compensation for label (Label compensation). In this kind of compensation, the label ($y_i$) of each sample can have a compensation label ($\Delta y_i$). Let $p_i = \text{softmax}(u_i)$. Eq. (1) becomes

(\text{i}) \quad \mathcal{L} = \sum\nolimits_{i} l(p_{i}, y_{i} + \Delta y_{i}) \quad \text{or} \quad (\text{ii}) \quad \mathcal{L} = \sum\nolimits_{i} l(p_{i} + \Delta y_{i}, y_{i}).  (5)

In Eq. (5-i), $\Delta y_i$ is added to the true label $y_i$, while in Eq. (5-ii) it is added to the predicted label $p_i$. Considering that a label after compensation should remain a (soft) label, $\Delta y_i$ should satisfy the following requirements:

\sum\nolimits_{c} \Delta y_{ic} = 0, \quad y_{ic} + \Delta y_{ic} \geq 0 \quad \text{or} \quad p_{ic} + \Delta y_{ic} \geq 0.  (6)

(4) Compensation for loss (Loss compensation). In this kind of compensation, the loss of each sample can have a compensation loss ($\Delta l_i$). Eq. (1) becomes

\mathcal{L} = \sum\nolimits_{i} l(\mathbb{S}(u_{i}), y_{i}) + \Delta l_{i}.  (7)

(5) Compensation for mixed targets (Mix-target compensation). In this kind of compensation, two or more of the aforementioned targets can have their compensation terms simultaneously. For example, when both feature and label compensations are utilized, Eq. (3) becomes

\mathcal{L} = \sum\nolimits_{i} l(\mathbb{S}(f(x_{i} + \Delta x_{i})), y_{i} + \Delta y_{i}),  (8)

where $\Delta x_i$ and $\Delta y_i$ are the feature and label compensations, respectively. (Lee et al. [27] combine adversarial training and label smoothing, which can be considered a mix-target (feature and label) compensation.)

Remark: The compensation variables (i.e., $\Delta x_i$, $\Delta u_i$, $\Delta y_i$, and $\Delta l_i$) are trainable during training. They are introduced to reduce the negative impact of some training samples (e.g., noisy or quite hard ones) and increase the positive impact of other samples (e.g., medium ones). For example, in raw feature-based compensation, let $m_{y_i}$ be the center vector of the category of $x_i$. Ideally, if $\Delta x_i = m_{y_i} - x_i$, the impact of $x_i$ is completely eliminated if $x_i$ is noisy.

If the loss functions defined in Eqs. (3)–(8) are directly used without any other constraints on the compensation variables, nothing can be learned because the compensation variables are trainable. For example, when the loss in Eq. (4) is directly used, a random model will be produced because, during training, $\Delta u_i$ will be learned to fit $y_i$ exactly. How to infer the compensation variables and learn with the above loss functions is described in the succeeding subsection.

There may exist other compensation candidates, such as view, structure (e.g., adjacency matrix in GCN), and gradient, which will be explored in future work.

3.2 Compensation Directions

There are three directions according to the loss variation after compensation.

(1) Positive compensation. If the compensation reduces the loss, then it is called positive compensation. Positive compensation can reduce the influence of noisy and quite hard samples during training.

(2) Negative compensation. If the compensation increases the loss, then it is called negative compensation. Negative compensation can increase the influence of easy and medium samples during training. This case will be discussed in the rest of this paper.

(3) Mix-direction compensation. If the compensation increases the losses of some training samples and decreases the losses of others simultaneously, then it is called mix-direction compensation. The logit adjustment method actually leverages this type of compensation, which will be discussed in Section IV.

3.3 Compensation Inference

In compensation learning, compensation variables in losses in Eqs. (3)–(7) should be inferred during training. There are five typical manners (maybe not exhaustive) to infer their values and optimize the whole loss.

(1) Inference with prior knowledge. In this manner, the compensation variables are inferred on the basis of prior knowledge; that is, they are fixed before the training loss is optimized. Take label compensation as an example. Given that a predicted label $y'_i$ can be obtained for each sample from another model, the label compensation can be defined as

\Delta y_{i} = \lambda(y^{\prime}_{i} - y_{i}),  (9)

where $\lambda$ is a hyper-parameter located in [0, 1]. $\Delta y_i$ defined in Eq. (9) satisfies the condition given by Eq. (6). If $y'_i$ is trustworthy, then $\Delta y_i$ is highly likely to approach zero if $x_i$ is normal and to be large if $x_i$ is noisy. Assume that $y'_i$ is the output of the model in the previous epoch. Eq. (5-i) becomes

\mathcal{L} = \sum\nolimits_{i} l(\mathbb{S}(u_{i}), y_{i} + \lambda(y^{\prime}_{i} - y_{i})),  (10)

which is exactly the Bootstrapping loss [39].
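A minimal sketch of this prior-knowledge label compensation (our reading of Eq. (10), not the authors' released code; prev_probs stands for $y'_i$, e.g., the previous epoch's softmax output) is:

import torch
import torch.nn.functional as F

def bootstrap_label_compensation_loss(logits, onehot_labels, prev_probs, lam=0.2):
    log_p = F.log_softmax(logits, dim=1)
    # Compensated label y_i + lam * (y'_i - y_i); it remains a valid soft
    # label and therefore satisfies the constraints in Eq. (6).
    target = onehot_labels + lam * (prev_probs - onehot_labels)
    return -(target * log_p).sum(dim=1).mean()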

(2) Inference with hyper-parameter tuning. In this manner, the compensation variable(s) is/are treated as hyper-parameter(s), and the optimal values are determined via hyper-parameter tuning.

(3) Inference with regularization. In this manner, a regularization term is added for the compensation variables. For example, a natural assumption is that the proportion of samples that require compensation is small. Therefore, the $l_1$-norm can be used. Take logit compensation as an example. A loss function is defined as follows:

\mathcal{L} = \sum\nolimits_{i} l(\mathbb{S}(u_{i} + \Delta u_{i}), y_{i}) + \lambda Reg(\Delta u_{i}),  (11)

where $\lambda$ is a hyper-parameter and $Reg(\cdot)$ is a regularizer. This manner is similar to self-paced learning [26]. When $\lambda \to \infty$, no compensation is allowed and compensation learning reduces to conventional learning.

(4) Inference with meta learning. In this manner, the compensation variables are inferred on the basis of a small clean validation set with meta learning. Given a clean validation set $\Omega$ comprising $M$ clean samples, take loss compensation as an example. Let $\kappa_i$ be the loss compensation variable for $x_i$ ($\in S$). We first define

\mathcal{L} = \sum\nolimits_{i \in S} l(\mathbb{S}(u_{i}), y_{i} : \Theta) + \kappa_{i},  (12)

where $\Theta$ is the model parameter set to be learned. Given $\bm{\kappa}$, $\Theta$ can be optimized on the training set $S$ by solving

\Theta^{*}(\bm{\kappa}) = \arg\min_{\Theta} \sum\nolimits_{i \in S} l(\mathbb{S}(u_{i}), y_{i} : \Theta) + \kappa_{i}.  (13)

After $\Theta$ is obtained, $\bm{\kappa}$ can be optimized on the validation set $\Omega$ by solving

\bm{\kappa}^{*} = \arg\min_{\bm{\kappa}} \sum\nolimits_{j \in \Omega} l(\mathbb{S}(u_{j}), y_{j} : \Theta^{*}(\bm{\kappa})).  (14)

These two optimizations can be performed alternately, and finally $\Theta^{*}$ and $\bm{\kappa}^{*}$ are learned. When either logit or label compensation is used, the above optimization procedure can also be utilized with slight variations.

The above inference manner is similar to that used in the meta learning-based weighting strategy for robust learning [40]. Meta learning has been widely used in robust learning, and many existing meta learning-based weighting methods [29, 54] can be leveraged for compensation learning.
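The following first-order sketch (our construction for illustration, not the authors' implementation) shows the alternating updates of Eqs. (13) and (14) for a per-sample logit compensation du on a tiny linear model; we use the logit variant here because a plain additive loss constant would not affect the gradient with respect to $\Theta$:

import torch
import torch.nn.functional as F

torch.manual_seed(0)
N, M, D, C = 32, 16, 5, 3
X, y = torch.randn(N, D), torch.randint(0, C, (N,))    # training set S
Xv, yv = torch.randn(M, D), torch.randint(0, C, (M,))  # clean set Omega

W = torch.zeros(D, C, requires_grad=True)              # model parameters Theta
du = torch.zeros(N, C, requires_grad=True)             # compensation variables
alpha, beta = 0.1, 0.1

for step in range(100):
    # Virtual inner step (Eq. (13)): update W given the current du.
    train_loss = F.cross_entropy(X @ W + du, y)
    gW, = torch.autograd.grad(train_loss, W, create_graph=True)
    W_virtual = W - alpha * gW
    # Outer step (Eq. (14)): update du on the clean validation set.
    val_loss = F.cross_entropy(Xv @ W_virtual, yv)
    gdu, = torch.autograd.grad(val_loss, du)
    with torch.no_grad():
        du -= beta * gdu
    # Actual model update with the refreshed compensations fixed.
    loss = F.cross_entropy(X @ W + du.detach(), y)
    gW, = torch.autograd.grad(loss, W)
    with torch.no_grad():
        W -= alpha * gW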

(5) Inference with adversarial learning. In both feature and logit compensations, the compensation term can be obtained by adversarial learning. Taking feature compensation as an example, the objective function in negative compensation is

\Delta x_{i}^{*} = \arg\max_{\left\|\Delta x_{i}\right\| \leq \epsilon} l(\mathbb{S}(f(x_{i} + \Delta x_{i})), y_{i}),  (15)

where $\epsilon$ is the bound. Likewise, the objective function in positive feature-level compensation can be

\Delta x_{i}^{*} = \arg\min_{\left\|\Delta x_{i}\right\| \leq \epsilon} l(\mathbb{S}(f(x_{i} + \Delta x_{i})), y_{i}).  (16)
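A one-step sign-gradient sketch (an FGSM-style approximation commonly used for such inner problems; the paper does not prescribe a specific solver here, and the model is a placeholder) of Eqs. (15) and (16) is:

import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(10, 3)                 # placeholder for f(.)
x, y = torch.randn(4, 10), torch.randint(0, 3, (4,))
eps = 0.05

x_adv = x.clone().requires_grad_(True)
F.cross_entropy(model(x_adv), y).backward()
dx_neg = eps * x_adv.grad.sign()         # negative compensation, Eq. (15)
dx_pos = -dx_neg                         # positive compensation, Eq. (16)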

(6) Inference with mixed manners. Two or more of the above five manners can be combined together to infer the compensation term in a learning task.

Remark: An existing compensation-based learning method usually adopts one of the inference manners listed above. Theoretically, the inference manner can be changed from one manner to another and a new method will subsequently be obtained.

3.4 Compensation Granularity

Compensation granularity has four levels.

(1) Sample-level compensation. All the compensation variables discussed above are for samples. Each sample has its own compensation variable.

(2) Category-level compensation. In this level, samples within the same category share the same compensation. Taking the logit vector-based compensation as an example, when category-level compensation is utilized, the loss in Eq. (4) becomes

\mathcal{L} = \sum\nolimits_{i} l(\mathbb{S}(u_{i} + \Delta u_{y_{i}}), y_{i}).  (17)

Category-level compensation mainly addresses situations in which the impact of all the samples of a category should be increased. For example, in long-tail classification, the tail categories should be emphasized in learning.

(3) Corpus-level compensation. In this level, samples within the whole training corpus share the same compensation. Taking the negative compensation described in Eq. (15) as an example, the objective function becomes

\Delta x^{*} = \arg\max_{\left\|\Delta x\right\| \leq \epsilon} \sum\nolimits_{i} l(\mathbb{S}(f(x_{i} + \Delta x)), y_{i}),  (18)

which means that all samples share the same term $\Delta x^{*}$. $\Delta x^{*}$ is exactly the universal adversarial perturbation [37].

(4) Mix-level compensation. In this level, more than one of the aforementioned three levels is utilized simultaneously. This case occurs in complex contexts, e.g., when noisy labels and category imbalance coexist. Taking label-based compensation as an example, the loss in Eq. (5-i) can be written as

\mathcal{L} = \sum\nolimits_{i} l(p_{i}, y_{i} + \Delta y_{i} + \Delta y_{y_{i}}),  (19)

where $\Delta y_{y_i}$ is the category-level label compensation.

4 Connection with Existing Learning Paradigms

The weighting strategy is straightforward and quite intuitive; hence, it has been widely used in the machine learning community. Compensating seems less straightforward than weighting. However, it can play the same or a similar role in machine learning. Both strategies have their own strengths. Compensating can be applied to the feature, logit vector, label, and loss, whereas weighting is usually applied to the loss. Weighting is usually efficient, whereas the optimization in some compensating methods (e.g., Eq. (18)) is relatively complex. A theoretical comparison between them would benefit both strategies, and we leave it as future work.

Many classical and newly proposed learning methods, which are based on distinct inspirations and theoretical motivations, can be attributed to compensation learning or explained from the viewpoint of compensation learning. We choose the following methods as illustrative examples.

(1) Robust clustering [8]. Let $m_c$ be the center of the $c$-th cluster. Let $\omega_{ic}$ ($\in \{0,1\}$) denote whether $x_i$ belongs to the $c$-th cluster. The optimization form of conventional data clustering can be written as follows:

\min_{\{m_{c}\},\{\omega_{ic}\}} \sum\nolimits_{i} \sum\nolimits_{c} \omega_{ic} \left\|x_{i} - m_{c}\right\|_{2}^{2}.  (20)

Given that outlier samples may exist, a sample-level feature compensation (denoted as $o_i$ for $x_i$) can be introduced with regularization. When the $l_2$-norm is used, (20) becomes

\min_{\{m_{c}\},\{\omega_{ic}\},\{o_{i}\}} \sum\nolimits_{i} \sum\nolimits_{c} \omega_{ic} \left( \left\|(x_{i} + o_{i}) - m_{c}\right\|_{2}^{2} + \lambda \left\|o_{i}\right\|_{2} \right),  (21)

which is the method proposed by Forero et al. [8].

(2) Adversarial training. An adversarial sample can be regarded as a negatively compensated version of the original sample. Training with adversarial samples (i.e., adversarial training) has proven useful in many applications, and various methods have been proposed [35].

Shafahi et al. [41] proposed universal adversarial training which is actually based on a corpus-level negative feature compensation. The loss on adversarial samples is

\mathcal{L}_{corpus\text{-}adv} = \max_{\left\|\delta\right\| \leq \epsilon} \sum\nolimits_{i} l(\mathbb{S}(f(x_{i} + \delta)), y_{i}).  (22)

Benz et al. [2] observed that universal adversarial perturbation does not attack all classes equally. They proposed a category-wise universal adversarial training method and the loss on adversarial samples is

\mathcal{L}_{category\text{-}adv} = \max_{\left\|\delta_{y_{i}}\right\| \leq \epsilon} \sum\nolimits_{i} l(\mathbb{S}(f(x_{i} + \delta_{y_{i}})), y_{i}),  (23)

which belongs to the category-level negative feature compensation. Motivated by our taxonomy, mixed corpus/category/sample-level adversarial perturbations can subsequently be generated. A mixed corpus/sample-level adversarial perturbation is described as an example:

\delta^{*} = \arg\max_{\delta} \sum\nolimits_{i} l(\mathbb{S}(f(x_{i} + \delta)), y_{i}), \quad \mathcal{L}_{mix\text{-}adv} = \max_{\{\delta_{i}\}} \sum\nolimits_{i} l(\mathbb{S}(f(x_{i} + \delta^{*} + \delta_{i})), y_{i}),  (24)

where $\delta^{*}$ and $\delta_i$ are the corpus-level and sample-level perturbations, respectively. A further statistical analysis of the two levels of adversarial perturbations may help us better understand the adversarial characteristics of the data.

(3) Adversarial label smoothing [11]. Label smoothing is actually a type of sample-level label compensation. Its compensation term for a sample $(x_i, y_i)$ is defined as follows:

\Delta y_{i} = \lambda(I/C - y_{i}),  (25)

where $I$ is a $C$-dimensional vector with each element equal to 1. Obviously, the compensation term is determined by pre-definition (i.e., the prior knowledge manner).

According to the inference manner in our taxonomy, adversarial learning can be utilized to pursue the compensation term. Accordingly, the term is

\Delta y_{i} = \lambda(p^{*}_{i} - y_{i}),  (26)

where

p^{*}_{i} = \arg\max_{p_{i}} l(\mathbb{S}(u_{i}), y_{i} + \lambda(p_{i} - y_{i})).  (27)

Eq. (27) has an analytic solution: $p^{*}_i$ is the one-hot vector of the category corresponding to the minimum softmax value in $\mathbb{S}(u_i)$.
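This analytic solution is easy to express in code; a minimal sketch (assuming cross-entropy as $l$) is:

import torch
import torch.nn.functional as F

def adversarial_smoothing_target(logits, onehot_labels, lam=0.1):
    p = F.softmax(logits, dim=1)
    # p*_i: one-hot vector of the class with the smallest softmax value,
    # which increases the loss the most (Eq. (27)).
    worst = F.one_hot(p.argmin(dim=1), p.size(1)).float()
    return onehot_labels + lam * (worst - onehot_labels)   # Eq. (26)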

(4) Logit adjustment-based imbalance learning [36]. In a multi-category classification problem, let $\pi_c$ be the proportion of the training samples in the $c$-th category. Let $\textbf{g} = [g(\pi_1), \ldots, g(\pi_C)]$. When the proportions are imbalanced, a corpus-level logit compensation can be introduced as follows:

\mathcal{L} = \sum\nolimits_{i} l(\mathbb{S}(u_{i} + \textbf{g}), y_{i}).  (28)

For the above loss, when $g(\cdot)$ is an increasing function, we conjecture that the influences of samples in the minority categories (i.e., $\pi_c < \frac{1}{C}$) on the loss are increased.

Figure 2: The relative loss increment ($(l'-l)/l$) for logit adjustment. Head categories are on the left and tail ones are on the right. The losses of head categories are mainly decreased, while those of tail ones are increased.

As the influences of samples in the minority categories on the loss are increased, the imbalance problem can be alleviated by the logit compensation used in Eq. (28). When $g(\pi_c) = \tau\log(\pi_c)$ ($\tau > 0$) and the cross-entropy loss are used, Eq. (28) becomes

\mathcal{L} = -\sum\nolimits_{i} \log \frac{e^{u_{i,y_{i}} + \tau\log\pi_{y_{i}}}}{\sum\nolimits_{c} e^{u_{i,c} + \tau\log\pi_{c}}},  (29)

which is exactly the logit adjusted loss [36]. We compute the relative loss variations incurred by logit adjustment for each category on the imbalanced version of the benchmark image classification data set CIFAR-100 [25]. The results are presented in Fig. 2. The loss variations of head categories (those with small class IDs) are negative, and those of tail categories (those with large class IDs) are positive. In other words, both positive and negative compensations exist in logit adjustment. Intuitively, a category-level version can be obtained via meta learning, which is discussed in Appendix A.
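A minimal sketch of the logit-adjusted loss in Eq. (29) (assuming $g(\pi_c) = \tau\log\pi_c$; class_priors is an assumed name holding the training proportions $\pi_c$) is:

import torch
import torch.nn.functional as F

def logit_adjusted_ce(logits, labels, class_priors, tau=1.0):
    # Corpus-level logit compensation g added before the softmax (Eq. (28)).
    adjusted = logits + tau * torch.log(class_priors).unsqueeze(0)
    return F.cross_entropy(adjusted, labels)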

(5) SVM [5]. This method is based on the following hinge loss:

l_{i} = \max(0, 1 - y_{i}(\textbf{w}^{T}x_{i} + b)).  (30)

To reduce the negative contributions of noisy or hard samples, the loss can be compensated as follows:

l^{\prime}_{i} = \max(0, l_{i} - \xi_{i}) = \max(0, 1 - y_{i}(\textbf{w}^{T}x_{i} + b) - \xi_{i}) \qquad (\xi_{i} \geq 0).  (31)

Then the whole training loss with max margin and the $l_1$-norm for $\xi_i$ becomes

\mathcal{L} = \frac{1}{2}\left\|\textbf{w}\right\|^{2} + \sum\nolimits_{i}\left(l^{\prime}_{i} + \lambda|\xi_{i}|\right) \qquad (\xi_{i} \geq 0).  (32)

The minimization of Eq. (32) is equivalent to the following optimization problem:

\min_{\textbf{w},b,\{\xi_{i}\}} \frac{1}{2}\left\|\textbf{w}\right\|^{2} + \lambda\sum\nolimits_{i}\xi_{i} \quad \text{s.t.} \quad 1 - y_{i}(\textbf{w}^{T}x_{i} + b) - \xi_{i} \leq 0, \quad \xi_{i} \geq 0, \; i = 1,\cdots,N,  (33)

which is the standard form of SVM (without kernel). In other words, the slack variable can be seen as a loss compensation for SVM. Naturally, other types of compensation (e.g., label compensation) may be considered in SVM.

(6) Knowledge distillation [18]. In knowledge distillation, there are two deep neural networks, called the teacher and the student, respectively. The output of the teacher model for $x_i$ is

q_{i} = \text{softmax}(z_{i}/T),  (34)

where $z_i$ is the logit vector from the teacher model and $T$ is the temperature. $q_i$ can be viewed as prior knowledge about the label compensation for the student model.

Then according to Eq. (6), the training loss of the student model with label compensation becomes

\mathcal{L} = \sum\nolimits_{i} l(p_{i}, y_{i} + \lambda(q_{i} - p^{\prime}_{i})),  (35)

where $p^{\prime}_i = \text{softmax}(u_i/T)$. Eq. (35) is exactly the loss function of knowledge distillation.
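A direct transcription of Eq. (35) with cross-entropy as $l$ (a sketch of the compensation view only, not Hinton et al.'s reference implementation; treating $p'_i$ as fixed within a step is our choice) reads:

import torch
import torch.nn.functional as F

def kd_as_label_compensation(student_logits, teacher_logits, onehot_labels,
                             T=4.0, lam=0.5):
    log_p = F.log_softmax(student_logits, dim=1)            # log p_i
    p_T = F.softmax(student_logits / T, dim=1).detach()     # p'_i
    q = F.softmax(teacher_logits / T, dim=1)                # q_i, Eq. (34)
    target = onehot_labels + lam * (q - p_T)                # compensated label
    return -(target * log_p).sum(dim=1).mean()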

(7) Implicit semantic data augmentation (ISDA)  [51]. In contrast with previous data augmentation techniques, ISDA does not produce new samples or features. Instead, ISDA transforms the semantic data augmentation problem into the optimization of a new loss defined as

\mathcal{L} = -\sum\nolimits_{i} \log \frac{\exp(u_{i,y_{i}})}{\sum_{c=1}^{C}\exp(u_{i,c} + \frac{\lambda}{2}(\textbf{w}_{c} - \textbf{w}_{y_{i}})^{T}\Sigma_{y_{i}}(\textbf{w}_{c} - \textbf{w}_{y_{i}}))},  (36)

where $\Sigma_{y_i}$ is the co-variance matrix for the $y_i$-th category; $\textbf{w}_c$ is the model parameter for the logit vectors and $u_{i,c} = \textbf{w}_{c}^{T}\tilde{x}_{i}$ ($\tilde{x}_i$ is the output of the last feature encoding layer for the sample $x_i$).

In Eq. (36), a logit compensation term is observed as follows:

u^{\prime}_{i} = u_{i} + \delta_{y_{i}},  (37)

where

\delta_{y_{i}} = \frac{\lambda}{2}\left[\begin{array}{c} (\textbf{w}_{1} - \textbf{w}_{y_{i}})^{T}\Sigma_{y_{i}}(\textbf{w}_{1} - \textbf{w}_{y_{i}}) \\ \vdots \\ (\textbf{w}_{C} - \textbf{w}_{y_{i}})^{T}\Sigma_{y_{i}}(\textbf{w}_{C} - \textbf{w}_{y_{i}}) \end{array}\right].  (38)

Obviously, the compensation is category-level and determined with prior knowledge. In addition, the compensation direction is negative, as the loss is increased for each training sample. The term heavily depends on the co-variance matrix $\Sigma_{y_i}$, which can be further optimized via meta learning by minimizing the following loss on a validation set $\Omega$:

\Sigma^{*} = \arg\min_{\Sigma} \sum\limits_{j \in \Omega} l(\mathbb{S}(u_{j}), y_{j}; \Theta^{*}(\Sigma)),  (39)

which is just the meta implicit data augmentation (MetaSAug) proposed by Li et al. [30]. MetaSAug is quite effective in long-tail classification.

(8) Arcface [32]. Arcface is a classical face recognition loss defined as follows:

\mathcal{L} = -\sum\nolimits_{i} \log \frac{\exp[s_{i}(\cos(\theta_{i,y_{i}} + m))]}{\exp[s_{i}(\cos(\theta_{i,y_{i}} + m))] + \sum\nolimits_{c \neq y_{i}} \exp[s_{i}(\cos(\theta_{i,c}))]},  (40)

where $\theta_{i,c}$ is the angle between the weight $w_c$ and the feature $\tilde{x}_i$, which are defined in the description of ISDA; $m$ is a hyper-parameter. Indeed, $m$ does not belong to the five compensation targets in our taxonomy. It is a corpus-level term determined via hyper-parameter tuning.

Wang et al. [48] proposed a new Arcface loss, namely, Balancedloss, with category-level compensation. The loss is defined as

\mathcal{L} = -\sum\nolimits_{i} \log \frac{\exp[s_{i}(\cos(\theta_{i,y_{i}} + m_{g_{i}}))]}{\exp[s_{i}(\cos(\theta_{i,y_{i}} + m_{g_{i}}))] + \sum\nolimits_{c \neq y_{i}} \exp[s_{i}(\cos(\theta_{i,c}))]},  (41)

where $g_i$ is the skin-tone category of the $i$-th sample. Obviously, $m_{g_i}$ is a category-level term. It can be optimized via meta learning:

m_{g}^{*} = \arg\min_{\{m_{g_{j}}\}} \sum\nolimits_{j \in \Omega} l(\Theta(m_{g_{j}})),  (42)

which is proven to be quite effective in the experiments conducted by Wang et al. [48].

Other numerous typical methods such as Robust nonrigid ICP [20], D2L [34], DAC [46], Deep self-learning [16], LDAM [3], MRFL [59], Robust regression [43], and Bootstrapping loss [39] can also be explained with compensation learning.

5 Two New Learning Method Examples

This section presents two new methods obtained by introducing the idea of compensation learning into existing algorithms.

5.1 $l_1$-based Logit Compensation

An example is given to explain how logit compensation works. Assume that the inferred logit vector of a noisy sample $x_i$ is $u_i = [3.0, 0.8, 0.2]$ and its (noisy) label $y_i$ is [0, 1, 0]. The cross-entropy loss incurred by this training sample is 2.36. This loss negatively affects the training because $y_i$ is noisy. To reduce the negative influence, if a compensation vector (e.g., [-1, 2, 0]) is learned, then the new logit vector becomes [2, 2.8, 0.2]. Consequently, the cross-entropy loss of $x_i$ is 0.42, which is much lower than 2.36. When the $l_1$-norm is used, the training loss is

\mathcal{L} = \sum\nolimits_{i} l(\mathbb{S}(u_{i} + v_{i}), y_{i}) + \lambda|v_{i}|,  (43)

where $v_i$ is the logit compensation vector, which is trainable during the training stage. If no noisy or quite hard samples are present, $v_i$ will approach zero for all training samples. This method is called LogComp for brevity. The detailed steps are described in Algorithm 1.
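As a complement to Algorithm 1, a minimal PyTorch sketch of Eq. (43) (our illustration, not the authors' released code) keeps one trainable compensation row per training sample:

import torch
import torch.nn as nn
import torch.nn.functional as F

class LogCompLoss(nn.Module):
    def __init__(self, num_samples, num_classes, lam=1e-3):
        super().__init__()
        # One trainable logit compensation vector v_i per training sample.
        self.v = nn.Parameter(torch.zeros(num_samples, num_classes))
        self.lam = lam

    def forward(self, logits, labels, sample_idx):
        v = self.v[sample_idx]
        ce = F.cross_entropy(logits + v, labels)
        return ce + self.lam * v.abs().sum()   # l1 penalty keeps most v_i near 0

In training, the model parameters and self.v are optimized jointly, e.g., by passing both to the same SGD optimizer, matching the joint update in Algorithm 1.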

5.2 Mixed Positive and Negative Compensation

We observed in the experiments that, during the running of LogComp, large compensations (i.e., $v_i$) concentrate on samples with large losses. Let $l_i = l(\mathbb{S}(u_i), y_i)$. Motivated by adversarial training, (43) is modified into the following form:

\mathcal{L} = \sum\limits_{i: l_{i} \geq \tau} \min_{\left\|v_{i}\right\| \leq \epsilon} l(\mathbb{S}(u_{i} + v_{i}), y_{i}) + \sum\limits_{i: l_{i} < \tau} l_{i}.  (44)

Compared with (43), (44) has one more hyper-parameter. Nevertheless, (44) is more flexible than (43). The results on image classification show that (44) is better than (43) if appropriate $\tau$ and $\epsilon$ are used.

Further, negative feature compensation can be used to increase the influences of samples whose losses are below the threshold $\tau$ in the optimization. A mixed compensation is subsequently obtained with the following loss:

\mathcal{L} = \sum\limits_{i: l_{i} \geq \tau} \min_{\left\|v_{i}\right\| \leq \epsilon_{1}} l(\mathbb{S}(u_{i} + v_{i}), y_{i}) + \sum\limits_{i: l_{i} < \tau} \max_{\left\|\delta_{i}\right\| \leq \epsilon_{2}} l(\mathbb{S}(f(x_{i} + \delta_{i})), y_{i}).  (45)

The main difference between (45) and the adversarial training loss [35] is that the losses of quite hard (including noisy) samples are no longer increased in (45). Instead, the losses of these samples are reduced. When $\tau > \max_{i}\{l_i\}$, only the maximization part exists and the whole loss becomes the adversarial training loss; when $\epsilon_2 = 0$, (45) reduces to (44).

The minimization part in both (44) and (45) can be solved with an optimization approach similar to PGD [35]. This method is called MixComp for brevity. The PGD-like optimization for the minimization part in Eqs. (44) and (45) is as follows. First, we have

\frac{\partial l(\mathbb{S}(u_{i} + v_{i}), y_{i})}{\partial v_{i}}\Big|_{v_{i}=0} = \mathbb{S}(u_{i}) - \hat{y}_{i},  (46)

where $\hat{y}_i$ is the one-hot vector of $y_i$. Therefore, $v_i$ can be calculated by

v_{i} = \eta(\hat{y}_{i} - \mathbb{S}(u_{i})),  (47)

where $\eta$ is a hyper-parameter. Accordingly, the update of $u_i$ is

u^{\prime}_{i} = u_{i} + \eta(\hat{y}_{i} - \mathbb{S}(u_{i})).  (48)

In our implementation, only one update step is used. Consequently, if the $\infty$-norm is used, then we have

|v_{i}| = |\eta(\mathbb{S}(u_{i}) - \hat{y}_{i})| \leq |\eta||\mathbb{S}(u_{i}) - \hat{y}_{i}| \leq \eta.  (49)

Therefore, we use $\eta$ to control the bound (i.e., $\epsilon_1$) of $v_i$. The detailed steps of MixComp are described in Algorithm 2.

Algorithm 1 LogComp
1: Input: Training set $S = \{x_i, y_i\}$, $i = 1,\cdots,N$; hyper-parameter $\lambda$; #Epoch; #Batch; and learning rate.
2: Output: Model $f(x, \textbf{w})$.
3: Initialization: $v_i = \textbf{0}$ for each training sample; $\textbf{w}$ as $\textbf{w}^{(0)}$;
4: repeat
5:     for $t = 1,\cdots,$ #Epoch
6:         for $k = 1,\cdots,$ #Batch
7:             Generate mini-batch $D_k$ from $S$;
8:             Calculate the loss based on Eq. (43);
9:             Update $\textbf{w}$ and $\{v_i\}$ via SGD;
10: until stable accuracy on the validation set.
Algorithm 2 MixComp
1: Input: Training set $S = \{x_i, y_i\}$, $i = 1,\cdots,N$; #Epoch; #Batch; learning rate; $\epsilon_1$; $\epsilon_2$; and $\tau$.
2: Output: Model $f(x, \textbf{w})$.
3: Initialization: $\textbf{w}$ as $\textbf{w}^{(0)}$;
4: repeat
5:     for $t = 1,\cdots,$ #Epoch
6:         for $k = 1,\cdots,$ #Batch
7:             Generate mini-batch $D_k$ from $S$;
8:             Infer $v_i$ according to Eq. (47) for samples whose losses are no lower than $\tau$;
9:             Infer $\delta_i$ for the remaining samples according to PGD optimization;
10:             Calculate the loss based on Eq. (45);
11:             Update $\textbf{w}$ via SGD;
12: until stable accuracy on the validation set.
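A minimal sketch of the closed-form one-step positive compensation used by MixComp (our transcription of Eqs. (44), (47), and (48); the adversarial branch for low-loss samples is omitted, corresponding to $\epsilon_2 = 0$):

import torch
import torch.nn.functional as F

def mixcomp_positive_loss(logits, labels, tau=2.0, eta=0.5):
    onehot = F.one_hot(labels, logits.size(1)).float()
    losses = F.cross_entropy(logits, labels, reduction="none")
    v = eta * (onehot - F.softmax(logits, dim=1))     # Eq. (47)
    mask = (losses >= tau).float().unsqueeze(1)       # compensate only l_i >= tau
    compensated = logits + mask * v.detach()          # Eq. (48)
    return F.cross_entropy(compensated, labels)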

6 Experiments

This section evaluates our methods (LogComp and MixComp) in image classification and text sentiment analysis when there are noisy labels.

6.1 Competing Methods

As our proposed methods belong to the end-to-end noise-aware solution, the following methods are compared: soft/hard Bootstrapping [39], label smoothing [45], online label smoothing [57], progressive self label correction (ProSelfLC) [49], and PGD-based adversarial training (PGD-AT) [35].

The parameter settings are detailed in the corresponding subsections. In all experiments, the average classification accuracy and standard deviation of three repeated runs are recorded for each comparison.

6.2 Image Classification

Table 1: Classification accuracies (%) on CIFAR-10.
 Random noise Pair noise
 0% 10% 20% 30% 10% 20% 30%
Base (ResNet-20) 91.79±0.31 88.78±0.33 87.55±0.32 85.85±0.37 90.32±0.19 89.28±0.14 87.06±0.23
Soft Bootstrapping 91.83±0.12 89.37±0.18 87.52±0.37 85.59±0.33 90.44±0.23 89.16±0.22 87.08±0.25
Hard Bootstrapping 92.06±0.10 89.61±0.20 88.07±0.32 86.37±0.26 90.34±0.18 89.54±0.25 86.86±0.19
Label Smoothing 92.12±0.14 90.15±0.09 88.54±0.18 86.82±0.16 90.63±0.22 90.12±0.06 88.28±0.42
Online Label Smoothing 92.18±0.15 89.84±0.14 88.19±0.15 86.08±0.22 90.65±0.18 89.52±0.08 87.68±0.16
ProSelfLC 91.80±0.16 89.90±0.16 88.84±0.22 86.78±0.31 90.40±0.23 89.76±0.17 87.11±0.20
PGD-AT 89.90±0.08 87.56±0.13 86.87±0.13 84.80±0.17 88.90±0.15 88.38±0.07 87.44±0.10
LogComp 92.42±0.09 90.99±0.06 90.20±0.16 88.81±0.18 91.17±0.17 91.13±0.08 89.72±0.15
MixComp 92.26±0.04 91.09±0.11 90.63±0.12 88.98±0.15 91.29±0.03 91.15±0.05 90.01±0.16
Table 2: Classification accuracies (%) on CIFAR-100.
 Random noise Pair noise
 0% 10% 20% 30% 10% 20% 30%
Base (ResNet-20) 67.81±0.08 63.67±0.29 60.63±0.33 57.82±0.35 63.94±0.29 61.22±0.03 55.74±0.22
Soft Bootstrapping 68.38±0.24 64.01±0.23 60.66±0.28 57.97±0.23 64.29±0.31 60.71±0.23 56.27±0.26
Hard Bootstrapping 67.62±0.29 64.28±0.33 60.32±0.22 58.09±0.19 63.96±0.26 60.69±0.29 56.18±0.17
Label Smoothing 67.54±0.10 65.04±0.18 61.84±0.27 59.06±0.08 65.43±0.24 62.71±0.24 58.92±0.19
Online Label Smoothing 67.80±0.19 64.55±0.15 61.53±0.22 59.19±0.13 64.70±0.28 62.54±0.19 57.44±0.25
ProSelfLC 68.37±0.22 64.64±0.28 62.14±0.17 58.93±0.24 65.36±0.18 62.57±0.16 59.08±0.27
PGD-AT 64.37±0.17 60.39±0.24 57.38±0.21 54.23±0.16 60.41±0.20 58.08±0.13 54.37±0.22
LogComp 68.72±0.11 65.55±0.16 62.56±0.16 59.59±0.15 66.49±0.19 64.74±0.13 61.36±0.16
MixComp 68.71±0.15 65.79±0.14 62.76±0.20 60.17±0.12 66.81±0.16 64.83±0.11 63.55±0.13

Two benchmark image classification data sets, namely, CIFAR-10 and CIFAR-100 [25], are used. CIFAR-10 contains 10 categories and CIFAR-100 contains 100 categories. The details of these two data sets are shown in [25].

The synthetic label noises are simulated on the basis of two common schemes used in [13, 15, 23]. The first is the random scheme, in which each training sample is assigned a uniformly random label with probability $p$. The second is the pair scheme, in which each training sample is assigned the category next to its true category in the category list with probability $p$. The value of $p$ is set to 10%, 20%, and 30%.

The training/testing configuration used in [49] is followed. The parameter settings are as follows. The batch size and learning rate are set as 128 and 0.1, respectively. Other parameter settings are detailed in Appendix B.

Figure 3: Samples with higher $l_1$-norm values of logit compensation whose labels seem erroneous.

The results are shown in Tables 1 and 2 when ResNet-20 [17] is used as the base neural network. Our methods, MixComp and LogComp, achieve twelve and two highest accuracies, respectively, among the fourteen comparisons. The results of MixComp are obtained when $\epsilon_2$ equals 0, indicating that only positive compensation is useful for the (clean) accuracy when noises are present. Indeed, the hyper-parameters $\epsilon_2$ and $\tau$ together balance the trade-off between positive and negative compensations.

Table 3: An ablation study of MixComp on CIFAR-10 (%).
Random noise 0% 10% 20% 30%
Baseline (ResNet-20) 91.79±0.31 88.78±0.33 87.55±0.32 85.85±0.37
Only pos. comp. ($\epsilon_2 = 0$) 92.26±0.04 91.09±0.11 90.63±0.12 88.98±0.15
Only neg. comp. ($\epsilon_1 = 0$) 91.66±0.12 88.69±0.22 87.33±0.10 85.71±0.31
Both directions 91.91±0.19 90.11±0.27 89.75±0.17 88.17±0.16
Table 4: Performance variations under different values of $\epsilon_2$.
$\epsilon_2$ 0 2/255 4/255 6/255 8/255
Clean accuracy (%) 91.09±0.11 90.11±0.27 89.77±0.18 88.67±0.15 88.30±0.19
Adversarial accuracy (%) 11.57±0.36 53.38±0.31 64.95±0.24 68.20±0.16 70.25±0.14
Table 5: Classification accuracies (%) on CIFAR-10 (0% noise).
 ResNet-32 ResNet-44 ResNet-56 ResNet-110
Base 92.50±0.26 92.82±0.15 93.03±0.34 93.51±0.18
Soft Bootstrapping 92.40±0.17 92.83±0.16 93.43±0.27 94.08±0.29
Hard Bootstrapping 92.19±0.23 92.94±0.11 93.38±0.25 94.02±0.23
Label Smoothing 92.75±0.24 92.89±0.18 93.05±0.23 93.92±0.43
Online Label Smoothing 92.61±0.19 92.93±0.34 93.41±0.20 93.54±0.18
ProSelfLC 92.87±0.22 92.98±0.28 93.21±0.19 93.58±0.37
PGD-AT 90.66±0.16 91.31±0.19 91.80±0.22 91.98±0.15
LogComp 93.42±0.11 93.59±0.09 93.80±0.17 94.40±0.12
MixComp 93.00±0.15 93.18±0.13 93.38±0.21 94.35±0.10
Table 6: Classification accuracies (%) on CIFAR-100 (0% noise).
 ResNet-32 ResNet-44 ResNet-56 ResNet-110
Base 69.16±0.19 70.02±0.19 70.38±0.34 73.18±0.12
Soft Bootstrapping 69.76±0.25 70.76±0.34 71.01±0.40 74.19±0.24
Hard Bootstrapping 69.37±0.24 70.06±0.29 70.26±0.31 73.35±0.18
Label Smoothing 69.91±0.27 70.52±0.51 71.49±0.29 74.01±0.44
Online Label Smoothing 69.53±0.22 70.05±0.79 71.06±0.26 73.59±0.19
ProSelfLC 69.54±0.29 70.39±0.35 70.49±0.32 73.42±0.24
PGD-AT 65.94±0.18 66.55±0.26 67.58±0.29 70.83±0.17
LogComp 71.41±0.21 71.48±0.18 72.73±0.24 75.54±0.14
MixComp 70.29±0.17 71.24±0.19 71.81±0.28 74.31±0.16
Table 7: Classification accuracies (%) on CIFAR-100 (20% pair noise).
 ResNet-32 ResNet-44 ResNet-56 ResNet-110
Base 62.46±0.54 62.73±0.64 63.37±0.22 67.51±0.19
Soft Bootstrapping 63.09±0.33 63.69±0.39 64.06±0.28 67.87±0.26
Hard Bootstrapping 63.03±0.41 63.57±0.32 63.99±0.34 67.40±0.23
Label Smoothing 64.45±0.28 65.72±0.27 66.50±0.74 69.43±0.36
Online Label Smoothing 63.94±0.66 65.18±0.70 65.45±0.52 68.38±0.34
ProSelfLC 64.04±0.37 65.04±0.44 63.94±0.41 68.86±0.26
PGD-AT 60.13±0.31 60.58±0.28 60.02±0.32 65.62±0.20
LogComp 66.50±0.27 66.97±0.29 68.57±0.25 71.74±0.21
MixComp 67.14±0.23 69.07±0.26 68.92±0.21 71.86±0.18

An ablation study is conducted for MixComp on CIFAR-10 (random noises), as MixComp involves both positive and negative compensations. The results in Table 3 indicate that negative compensation (i.e., adversarial training) and compensation in both directions do not improve the performance, whereas positive compensation alone achieves the best performance. Table 4 lists the clean and adversarial accuracies of MixComp under different values of $\epsilon_2$ on CIFAR-10 (10% random noises). Increasing $\epsilon_2$ improves the adversarial accuracy yet reduces the clean accuracy. Although negative compensation in MixComp does not improve the clean accuracy, it benefits the adversarial accuracy.

When LogComp and MixComp are used, some original labels with high average (positive) compensation terms are found to be erroneous. Fig. 3 shows two such samples from CIFAR-10; their labels seem wrong. Comparisons on other base networks [17], namely, ResNet-32, ResNet-44, ResNet-56, and ResNet-110, are also conducted, and the same conclusions hold. Tables 5–7 present the classification accuracies of the competing methods with the above four base networks on a subset of noise rates.

6.3 Text Sentiment Analysis

Two benchmark data sets are used, namely, IMDB and SST-2 [47]. Both are binary classification tasks, and their details can be found in [47]. Two types of label noises are added. In the first type (symmetric), the labels of the first 5%, 10%, and 20% (according to their indexes in the corpus) of training samples are flipped to simulate label noises; in the second type (asymmetric), the labels of the first 5%, 10%, and 20% of positive samples are flipped to negative. The 300-D GloVe [59] embedding is used. The values for #epochs, batch size, learning rate, and dropout rate follow the settings in [19, 52]. The data split and other parameter settings are detailed in Appendix C.

The results of the competing methods on the IMDB and SST-2 for the symmetric and asymmetric label noises are shown in Tables 8 and 9, respectively, when BiLSTM with attention [10] is used as the base network. Our proposed method, MixComp, achieves the overall best results (13 highest accuracies among 14 comparisons). When no added label noises are present (0%), both MixComp and LogComp still achieve better results than the base method BiLSTM with attention on both sets.

Table 8: Classification accuracies (%) on IMDB.
 Symmetric noise Asymmetric noise
 0% 5% 10% 20% 5% 10% 20%
Base (BiLSTM+attention) 84.39±0.34 83.04±0.17 81.90±0.61 78.13±0.13 82.35±0.88 79.53±2.68 73.74±1.14
Soft Bootstrapping 84.79±0.87 83.87±0.13 81.11±0.62 79.60±1.78 83.36±1.11 80.70±2.19 73.52±2.65
Hard Bootstrapping 84.44±0.93 84.10±0.54 83.01±0.70 80.84±1.07 82.48±1.72 81.42±1.55 75.26±1.02
Label Smoothing 84.62±0.18 83.14±0.24 82.41±0.51 80.73±0.20 82.75±0.29 82.28±0.33 74.70±0.48
Online Label Smoothing 84.83±0.51 84.14±0.37 82.09±0.54 80.91±1.17 83.78±0.77 81.35±0.92 73.75±1.38
ProSelfLC 84.79±0.39 83.21±0.44 82.17±0.47 80.42±0.41 83.22±0.91 81.58±0.85 74.96±3.01
PGD-AT 85.82±0.10 84.12±0.37 83.53±0.44 81.48±0.18 82.41±0.98 80.75±0.73 72.40±2.15
LogComp 85.17±0.16 84.53±0.20 83.75±0.46 81.64±0.22 84.45±0.39 81.44±0.36 76.87±0.30
MixComp 85.87±0.08 85.12±0.14 84.33±0.22 82.60±0.19 85.12±0.18 82.31±0.21 77.83±0.25
Table 9: Classification accuracies (%) on SST-2.
 Symmetric noise Asymmetric noise
 0% 5% 10% 20% 5% 10% 20%
Base (BiLSTM+attention) 83.85±0.02 82.71±0.05 81.12±0.29 79.72±0.03 82.07±0.45 81.46±0.19 79.49±0.39
Soft Bootstrapping 83.77±0.33 83.25±0.17 82.21±0.23 80.40±0.42 82.78±0.27 81.66±0.44 79.14±0.25
Hard Bootstrapping 83.68±0.40 83.18±0.22 81.45±0.63 80.50±0.16 82.25±0.54 81.73±0.23 79.52±0.67
Label Smoothing 83.87±0.52 82.78±0.09 82.16±0.32 80.57±0.14 82.69±0.41 81.95±0.24 79.63±0.68
Online Label Smoothing 83.67±0.19 83.34±0.14 82.03±0.22 80.61±0.39 82.58±0.33 82.20±0.32 79.57±0.72
ProSelfLC 83.81±0.05 83.07±0.16 81.92±0.28 80.28±0.33 82.42±0.43 82.03±0.21 79.27±0.79
PGD-AT 83.88±0.14 82.15±0.23 81.81±0.19 80.33±0.12 82.36±0.23 81.69±0.18 73.81±0.24
LogComp 84.10±0.08 83.18±0.13 81.83±0.15 80.42±0.05 82.85±0.10 82.23±0.15 78.87±0.23
MixComp 84.34±0.05 83.46±0.08 82.31±0.16 80.99±0.13 82.75±0.18 82.30±0.12 79.80±0.21
Table 10: An ablation study of MixComp on IMDB (%).
Symmetric noise 0% 5% 10% 20%
Baseline (BiLSTM+attention) 84.39±0.34 83.04±0.17 81.90±0.61 78.13±0.13
Only pos. comp. ($\epsilon_2 = 0$) 84.65±0.11 83.53±0.21 82.91±0.33 80.10±0.22
Only neg. comp. ($\epsilon_1 = 0$) 85.84±0.36 84.87±0.22 83.89±0.27 81.73±0.24
Both directions 85.87±0.08 85.12±0.14 84.33±0.22 82.60±0.19

An ablation study is also conducted for MixComp on IMDB. Each compensation is useful, and their combination achieves the best performance. The results are shown in Table 10. Given that judging the sentiment of some sentences is difficult, some original samples are inevitably quite hard or noisy. When LogComp is used, some original labels with high average compensation terms are found to be erroneous. For example, the sentence "Plummer steals the show without resorting to camp as nicholas' wounded and wounding uncle ralph" is labeled as positive in the original set. More examples are listed in Tables A-1 and A-2 in Appendix D.

LogComp also achieves the second-best results on IMDB. On IMDB, the base model usually converges by the second epoch, whereas LogComp usually converges in the third to fifth epoch. The validation accuracies of the first six epochs for the base model and our LogComp are shown in Fig. 4. LogComp decelerates the convergence, so the training data can be more fully exploited.

Figure 4: The validation accuracies in the first six epochs under different proportions of random noises on IMDB when using Base (left) and LogComp (right), respectively.

6.4 Discussion

More extensions and new methods can be obtained based on our taxonomy.

(1) The extension of the logit compensation described in (44). As previously mentioned, each weighting method may correspond to a compensating method. Self-paced learning (SPL) [26] is a classical sample weighting strategy in machine learning. The weights are obtained with the following objective function:

\min_{w_{i} \in \{0,1\}} \sum\nolimits_{i} w_{i} l(\mathbb{S}(u_{i}), y_{i}) - \lambda w_{i}.  (50)

The solution is

w_{i} = \begin{cases} 1 & \text{if } l(\mathbb{S}(u_{i}), y_{i}) \leq \lambda \\ 0 & \text{otherwise} \end{cases},  (51)

which indicates that the weights of samples with losses larger than $\lambda$ are set to 0. When the value of $\lambda$ is increased, more samples participate in the model training.
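A minimal sketch of this closed-form weighting (Eq. (51)) is:

import torch
import torch.nn.functional as F

def spl_weights(logits, labels, lam):
    losses = F.cross_entropy(logits, labels, reduction="none")
    return (losses <= lam).float()   # w_i in {0, 1}, Eq. (51)

# Increasing lam across epochs gradually admits harder samples; the weighted
# training loss is then (spl_weights(logits, labels, lam) * losses).sum().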

Fig. 5 shows the curves of weights for the original SPL and its variants. Logit compensation can be used to implement SPL with (44) and (52) when the hyper-parameters $\epsilon$ and $\tau$ satisfy the following conditions:

\tau^{t+1}>\tau^{t}\ \text{ and }\ \epsilon>2\max\limits_{i}\{\|u_{i}\|\}, (52)

where $t$ is the index of the current epoch. A new method is obtained and can be called self-paced logit compensation.

Figure 5: The curves of weights under different losses in SPL. “Hard” represents the original SPL [55].

With Eq. (52), similar curves can also be obtained. Fig. 6 shows the curve of loss ratios (compensated loss : original loss) when $\epsilon>2\max_{i}\{\|u_{i}\|\}$ on the CIFAR-100 data set. The curve indicates that our strategy likewise exerts high weights ($=1$) on samples with low losses and low weights ($\approx 0$) on samples with high losses.

Figure 6: Loss ratio curve of self-paced logit compensation given a fixed $\eta$ and $\tau$.
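The following sketch indicates how such a ratio curve can arise. Because Eq. (44) is not restated here, it assumes one natural instantiation in which samples whose losses exceed a threshold $\tau$ receive a compensation of $\epsilon$ on their ground-truth logits; the threshold role of $\tau$ and all names are assumptions for illustration:

import numpy as np

def ce_loss(logits, y):
    """Cross-entropy of softmax(logits) with respect to the class index y."""
    z = logits - logits.max()
    return -(z[y] - np.log(np.exp(z).sum()))

rng = np.random.default_rng(0)
logits = rng.normal(size=(200, 10))
labels = rng.integers(0, 10, size=200)
eps = 2.0 * max(np.linalg.norm(u) for u in logits) + 1e-6  # condition in Eq. (52)
tau = 1.5   # assumed loss threshold

for u, y in list(zip(logits, labels))[:8]:
    orig = ce_loss(u, y)
    comp = u.copy()
    if orig > tau:      # assumed: only high-loss samples are compensated
        comp[y] += eps  # assumed form of the logit compensation
    ratio = ce_loss(comp, y) / orig
    print(f"loss {orig:.2f} -> ratio {ratio:.3f}")  # ~1 below tau, ~0 above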

(2) The extension of MixComp. The parameters $\epsilon_{1}$ and $\epsilon_{2}$ characterize the extents of the positive and negative compensations, respectively. Intuitively, an example with a larger loss should receive a greater positive compensation, whereas an example with a smaller loss should receive a greater negative compensation. Therefore, the constraints on the compensation terms in (26) can be redefined as follows:

\delta_{i}\leq\epsilon_{1}[1+(l_{i}-\tau)/\tau]\ \text{ and }\ \delta_{i}\leq\epsilon_{2}[1+(\tau-l_{i})/\tau]. (53)
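A minimal sketch of these loss-adaptive caps follows; the function name and the toy losses are illustrative:

import numpy as np

def adaptive_caps(losses, tau, eps1, eps2):
    """Per-sample caps on the compensation terms from Eq. (53)."""
    pos_cap = eps1 * (1 + (losses - tau) / tau)  # grows with the loss
    neg_cap = eps2 * (1 + (tau - losses) / tau)  # shrinks with the loss
    return pos_cap, neg_cap

losses = np.array([0.2, 1.0, 3.0])
pos, neg = adaptive_caps(losses, tau=1.0, eps1=0.5, eps2=0.01)
print(pos)  # [0.1 0.5 1.5]: larger loss, larger positive cap
print(neg)  # [0.018 0.01 -0.01]: smaller loss, larger negative cap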

(3) The extension of Bootstrapping. The Bootstrapping loss and the online label smoothing can be unified into the following new loss:

\mathcal{L}=\sum\nolimits_{i}l\big(p_{i},\,y_{i}+\alpha(\beta\widetilde{p}_{y_{i}}+(1-\beta)p_{i}-y_{i})\big), (54)

where $\widetilde{p}_{y_{i}}$ is the category-level average prediction in the previous epoch, and $\alpha$ and $\beta$ are hyper-parameters located in [0, 1]. When $\beta$ equals 0, the above loss becomes the soft Bootstrapping loss. When $\beta$ equals 1, the loss becomes the online label smoothing loss up to a small difference. Specifically, $\widetilde{p}_{y_{i}}$ is defined as follows:

\widetilde{p}_{y_{i}}=\frac{1}{Z_{y_{i}}}\sum\limits_{j:y_{j}=y_{i}}(\textit{conf}_{j}\times p_{j}), (55)

where $\textit{conf}_{j}$ is the confidence of the prediction $p_{j}$, and $Z_{y_{i}}$ is the normalizer. Two typical definitions of $\textit{conf}_{j}$ are

\textit{conf}_{j}=1\quad\text{or}\quad\textit{conf}_{j}=\begin{cases}1&\text{if the prediction is correct}\\ 0&\text{otherwise}\end{cases}. (56)

When the second definition is used and $\beta=1$, the unified loss becomes the online label smoothing loss. Nevertheless, according to our observations, the values of $\widetilde{p}_{y_{i}}$ obtained by the two definitions are close to each other after the fifth epoch on most data sets. The unified new method can be called mixBootstrapping.
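The sketch below assembles the unified targets of Eqs. (54)–(56) with the second confidence definition; updating the class-average predictions from the current batch rather than the previous epoch is an illustrative simplification:

import numpy as np

def class_avg_predictions(probs, labels, conf, num_classes):
    """Eq. (55): confidence-weighted average prediction per class."""
    p_tilde = np.zeros((num_classes, probs.shape[1]))
    for c in range(num_classes):
        w, p_c = conf[labels == c], probs[labels == c]
        if w.sum() > 0:
            p_tilde[c] = (w[:, None] * p_c).sum(axis=0) / w.sum()
    return p_tilde

def mix_bootstrap_targets(probs, labels, p_tilde, alpha, beta):
    """Eq. (54): y_i + alpha * (beta * p_tilde_{y_i} + (1 - beta) * p_i - y_i)."""
    onehot = np.eye(probs.shape[1])[labels]
    mix = beta * p_tilde[labels] + (1 - beta) * probs
    return onehot + alpha * (mix - onehot)

# beta = 0 recovers soft Bootstrapping; beta = 1 approximates online label smoothing.
probs = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.4, 0.3]])
labels = np.array([0, 1, 0])
conf = (probs.argmax(axis=1) == labels).astype(float)  # second definition in Eq. (56)
p_tilde = class_avg_predictions(probs, labels, conf, num_classes=3)
print(mix_bootstrap_targets(probs, labels, p_tilde, alpha=0.3, beta=0.5))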

7 Conclusions

This study reveals a widely used yet under-explored machine learning strategy, namely, compensating. Machine learning methods that leverage or partially leverage compensating comprise a new learning paradigm called compensation learning. To solidify its theoretical basis, a systematic taxonomy is constructed on the basis of the compensation target, direction, inference manner, and granularity level. To demonstrate the universality of compensation learning, several existing learning methods are explained within the constructed taxonomy. Furthermore, two concrete compensation learning methods (i.e., LogComp and MixComp) are proposed. Extensive experiments suggest that the proposed methods are effective in robust learning tasks.

Acknowledgement

We thank Mr. Mengyang Li for his useful suggestions on the experiments.

References

  • [1] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In ICML, pages 41–48, 2009.
  • [2] Philipp Benz, Chaoning Zhang, Adil Karjauv, and In So Kweon. Universal adversarial training with class-wise perturbations. In ICME, pages 1–6, 2021.
  • [3] Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning imbalanced datasets with label-distribution-aware margin loss. In NeurIPS, pages 1565–1576, 2019.
  • [4] Kuang-Yu Chang, Chu-Song Chen, and Yi-Ping Hung. Ordinal hyperplanes ranker with cost sensitivities for age estimation. In CVPR, pages 585–592, 2011.
  • [5] Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine learning, 20(3):273–297, 1995.
  • [6] D. Das and C. S. G. Lee. A two-stage approach to few-shot learning for image recognition. IEEE Transactions on Image Processing, 29:3336–3350, 2019.
  • [7] Jia Deng, Jonathan Krause, and Li Fei-Fei. Fine-grained crowdsourcing for fine-grained recognition. In CVPR, pages 580–587, 2013.
  • [8] Pedro A Forero, Vassilis Kekatos, and Georgios B Giannakis. Robust clustering using outlier-sparsity regularization. IEEE Transactions on Signal Processing, 60(8):4163–4177, 2012.
  • [9] Yoav Freund and Robert E Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences, 55(1):119–139, 1997.
  • [10] Felix A Gers, Jürgen Schmidhuber, and Fred Cummins. Learning to forget: Continual prediction with lstm. Neural Computation, 12(10):2451–2471, 2000.
  • [11] Morgane Goibert and Elvis Dohmatob. Adversarial robustness via label-smoothing. arXiv preprint arXiv:1906.11567, 2019.
  • [12] Jacob Goldberger and Ehud Ben-Reuven. Training deep neural-networks using a noise adaptation layer. In ICLR, 2017.
  • [13] Sheng Guo, Weilin Huang, Haozhi Zhang, Chenfan Zhuang, Dengke Dong, Matthew R Scott, and Dinglong Huang. Curriculumnet: Weakly supervised learning from large-scale web images. In ECCV, pages 139–154, 2018.
  • [14] Bo Han, Quanming Yao, Tongliang Liu, Gang Niu, Ivor W Tsang, James T Kwok, and Masashi Sugiyama. A survey of label-noise representation learning: Past, present and future. arXiv preprint arXiv:2011.04406, 2020.
  • [15] Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor W Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In NeurIPS, pages 8536–8546, 2018.
  • [16] Jiangfan Han, Ping Luo, and Xiaogang Wang. Deep self-learning from noisy labels. In ICCV, pages 5137–5146, 2019.
  • [17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
  • [18] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
  • [19] James Hong and Michael Fang. Sentiment analysis with deeply learned distributed representations of variable length texts. Stanford University Report, pages 1–9, 2015.
  • [20] Hidekata Hontani, Takamiti Matsuno, and Yoshihide Sawada. Robust nonrigid icp using outlier-sparsity regularization. In CVPR, pages 174–181, 2012.
  • [21] Chen Huang, Yining Li, Chen Change Loy, and Xiaoou Tang. Learning deep representation for imbalanced classification. In CVPR, pages 5375–5384, 2016.
  • [22] Jinchi Huang, Lie Qu, Rongfei Jia, and Binqiang Zhao. O2u-net: A simple noisy label detection approach for deep neural networks. In ICCV, pages 3325–3333, 2019.
  • [23] Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In ICML, pages 2309–2318, 2018.
  • [24] Justin M Johnson and Taghi M Khoshgoftaar. Survey on deep learning with class imbalance. Journal of Big Data, 6(1):1–54, 2019.
  • [25] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, 2009.
  • [26] M Kumar, Benjamin Packer, and Daphne Koller. Self-paced learning for latent variable models. In NeurIPS, pages 1189–1197, 2010.
  • [27] Wonseok Lee, Hanbit Lee, and Sang-goo Lee. Semantics-preserving adversarial training. arXiv preprint arXiv:2009.10978, 2020.
  • [28] Buyu Li, Yu Liu, and Xiaogang Wang. Gradient harmonized single-stage detector. In AAAI, pages 8577–8584, 2019.
  • [29] Junnan Li, Yongkang Wong, Qi Zhao, and Mohan S Kankanhalli. Learning to learn from noisy labeled data. In CVPR, pages 5051–5059, 2019.
  • [30] Shuang Li, Kaixiong Gong, Chi Harold Liu, Yulin Wang, Feng Qiao, and Xinjing Cheng. Metasaug: Meta semantic augmentation for long-tailed visual recognition. In CVPR, pages 5212–5221, 2021.
  • [31] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In ICCV, pages 2999–3007, 2017.
  • [32] Hao Liu, Xiangyu Zhu, Zhen Lei, and Stan Z Li. Adaptiveface: Adaptive margin and sampling for face recognition. In CVPR, pages 11947–11956, 2019.
  • [33] Tongliang Liu and Dacheng Tao. Classification with noisy labels by importance reweighting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(3):447–461, 2016.
  • [34] Xingjun Ma, Yisen Wang, Michael E Houle, Shuo Zhou, Sarah Erfani, Shutao Xia, Sudanthi Wijewickrema, and James Bailey. Dimensionality-driven learning with noisy labels. In ICML, pages 3361–3370, 2018.
  • [35] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In ICLR, 2018.
  • [36] Aditya Krishna Menon, Sadeep Jayasumana, Ankit Singh Rawat, Himanshu Jain, Andreas Veit, and Sanjiv Kumar. Long-tail learning via logit adjustment. In ICLR, 2021.
  • [37] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. In CVPR, pages 86–94, 2017.
  • [38] Nagarajan Natarajan, Inderjit S Dhillon, Pradeep Ravikumar, and Ambuj Tewari. Learning with noisy labels. In NeurIPS, pages 1196–1204, 2013.
  • [39] Scott Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich. Training deep neural networks on noisy labels with bootstrapping. In ICLR, 2015.
  • [40] Mengye Ren, Wenyuan Zeng, Bin Yang, and Raquel Urtasun. Learning to reweight examples for robust deep learning. In ICML, pages 4331–4340, 2018.
  • [41] Ali Shafahi, Mahyar Najibi, Zheng Xu, John Dickerson, Larry S Davis, and Tom Goldstein. Universal adversarial training. In AAAI, pages 5636–5643, 2020.
  • [42] Jun Shu, Qi Xie, Lixuan Yi, Qian Zhao, Sanping Zhou, Zongben Xu, and Deyu Meng. Meta-weight-net: Learning an explicit mapping for sample weighting. In NeurIPS, pages 1917–1928, 2019.
  • [43] Martin Slawski, Emanuel Ben-David, et al. Linear regression with sparsely permuted data. Electronic Journal of Statistics, 13(1):1–36, 2019.
  • [44] Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, and Jae-Gil Lee. Learning from noisy labels with deep neural networks: A survey. arXiv preprint arXiv:2007.08199, 2020.
  • [45] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In CVPR, pages 2818–2826, 2016.
  • [46] Sunil Thulasidasan, Tanmoy Bhattacharya, Jeff Bilmes, Gopinath Chennupati, and Jamal Mohd-Yusof. Combating label noise in deep learning using abstention. In ICML, pages 6234–6243, 2019.
  • [47] Chenglong Wang, Feijun Jiang, and Hongxia Yang. A hybrid framework for text modeling with convolutional rnn. In KDD, pages 2061–2069, 2017.
  • [48] Mei Wang, Yaobin Zhang, and Weihong Deng. Meta balanced network for fair face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
  • [49] Xinshao Wang, Yang Hua, Elyor Kodirov, David A Clifton, and Neil M Robertson. Proselflc: Progressive self label correction for training robust deep neural networks. In CVPR, pages 752–761, 2021.
  • [50] Yixin Wang, Alp Kucukelbir, and David M Blei. Robust probabilistic modeling with bayesian data reweighting. In ICML, pages 3646–3655, 2017.
  • [51] Yulin Wang, Xuran Pan, Shiji Song, Hong Zhang, Cheng Wu, and Gao Huang. Implicit semantic data augmentation for deep networks. In NeurIPS, pages 12614–12623, 2019.
  • [52] Yequan Wang, Aixin Sun, Jialong Han, Ying Liu, and Xiaoyan Zhu. Sentiment analysis by capsules. In WWW, pages 1165–1174, 2018.
  • [53] Yaqing Wang, Quanming Yao, James T Kwok, and Lionel M Ni. Generalizing from a few examples: A survey on few-shot learning. ACM Computing Surveys (CSUR), 53(3):1–34, 2020.
  • [54] Zhen Wang, Guosheng Hu, and Qinghua Hu. Training noise-robust deep neural networks via meta-learning. In CVPR, pages 4523–4532, 2020.
  • [55] Xiaoxia Wu, Ethan Dyer, and Behnam Neyshabur. When do curricula work? In ICLR, 2021.
  • [56] Jiangchao Yao, Jiajie Wang, Ivor W Tsang, Ya Zhang, Jun Sun, Chengqi Zhang, and Rui Zhang. Deep learning from noisy image labels with quality embedding. IEEE Transactions on Image Processing, 28:1909–1922, 2019.
  • [57] Chang-Bin Zhang, Peng-Tao Jiang, Qibin Hou, Yunchao Wei, Qi Han, Zhen Li, and Ming-Ming Cheng. Delving deep into label smoothing. IEEE Transactions on Image Processing, 30:5984–5996, 2021.
  • [58] Wei Emma Zhang, Quan Z Sheng, Ahoud Alhazmi, and Chenliang Li. Adversarial attacks on deep-learning models in natural language processing: A survey. ACM Transactions on Intelligent Systems and Technology (TIST), 11(3):1–41, 2020.
  • [59] Liang Zhao, Tianyang Zhao, Tingting Sun, Zhuo Liu, and Zhikui Chen. Multi-view robust feature learning for data clustering. IEEE Signal Processing Letters, 27:1750–1754, 2020.

Appendix A Meta Logit Adjustment

In Eq. (29) of the paper, the hyper-parameter $\tau$ is fixed for all categories. A category-wise setting of $\tau$ may be useful. Therefore, a new logit adjustment with meta optimization over $\tau$ is proposed, called Meta logit adjustment. Let $\Omega$ be the validation set for meta optimization. According to Eqs. (12)–(14) in the paper, the new loss is

\mathcal{L}=-\sum\limits_{x_{i}\in S}\log\frac{e^{u_{i,y_{i}}+\tau_{y_{i}}\log\pi_{y_{i}}}}{\sum\nolimits_{y}e^{u_{i,y}+\tau_{y_{i}}\log\pi_{y}}}. (a-1)

Given a value of $\tau=\{\tau_{1},\ldots,\tau_{C}\}$, the network parameters $\Theta$ can be obtained by solving

\Theta^{*}(\tau)=\arg\min\limits_{\Theta}-\sum\limits_{x_{i}\in S}\log\frac{e^{u_{i,y_{i}}+\tau_{y_{i}}\log\pi_{y_{i}}}}{\sum\nolimits_{y}e^{u_{i,y}+\tau_{y_{i}}\log\pi_{y}}}. (a-2)

After $\Theta^{*}(\tau)$ is obtained, $\tau$ can be optimized by solving

\tau^{*}=\arg\min\limits_{\tau}\sum\limits_{x_{i}\in\Omega}l\big(\text{softmax}(f(x_{i};\Theta^{*}(\tau))),\,y_{i}\big). (a-3)

Eqs. (a-2) and (a-3) are solved alternately. The detailed optimization steps are similar to those used in MetaSAug [30], Meta-Weight-Net [42], and other meta optimization studies.
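A high-level sketch of this alternating optimization is given below. Approximating $\Theta^{*}(\tau)$ with a single differentiable SGD step follows common meta-learning practice; the linear model, the learning rates, and all names are illustrative rather than the actual implementation:

import torch
import torch.nn.functional as F

torch.manual_seed(0)
C, d = 3, 5
X, y = torch.randn(200, d), torch.randint(0, C, (200,))   # training set S
Xv, yv = torch.randn(60, d), torch.randint(0, C, (60,))   # validation set Omega
log_pi = torch.log(torch.bincount(y, minlength=C).float() / len(y))

W = torch.zeros(d, C, requires_grad=True)   # Theta: a linear model for brevity
tau = torch.zeros(C, requires_grad=True)    # category-wise tau
opt_tau = torch.optim.SGD([tau], lr=0.1)
lr_model = 0.5

for step in range(100):
    # One SGD step toward Theta*(tau) on the adjusted loss (a-2); the graph
    # is kept so that the validation loss is differentiable w.r.t. tau.
    adj = tau[y][:, None] * log_pi[None, :]        # tau_{y_i} * log pi_y
    train_loss = F.cross_entropy(X @ W + adj, y)
    grad_W, = torch.autograd.grad(train_loss, W, create_graph=True)
    W_one_step = W - lr_model * grad_W

    # Outer step (a-3): update tau on the validation loss of the updated
    # model; raw (unadjusted) logits are used at validation time.
    val_loss = F.cross_entropy(Xv @ W_one_step, yv)
    opt_tau.zero_grad()
    val_loss.backward()
    opt_tau.step()

    # Commit the model update, with tau now held out of the graph.
    with torch.no_grad():
        W -= lr_model * grad_W
        W.grad = None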

Appendix B Parameter setting in image classification

For the two data sets, the number of epochs is set to 300. The $\lambda$ in LogComp is searched in {0.25, 0.35}, and the learning rate for the compensation variable in LogComp is searched in {1.5, 3, 4.5, 6}. In MixComp, $\epsilon_{1}$ (i.e., $\eta$) is searched in {0.5, 1.5, 2, 3, 4, 5}, and $\epsilon_{2}$ is searched in {0, 8/255, 10/255, 12/255}. $\tau$ is determined according to the top-$pro$ percent of ordered losses, and the value of $pro$ is searched in {0, 5, 7.5, 15, 25, 35, 45, 50}. In Soft/Hard Bootstrapping, Label Smoothing, and Online Label Smoothing, the parameters follow the settings in [57]. In ProSelfLC, the parameters follow the settings in [49]. In PGD-AT, $\epsilon_{2}$ is searched in {8/255, 10/255, 12/255}.
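Such grids can be explored with a plain search loop. In the sketch below, train_and_validate is a hypothetical stand-in for the actual training pipeline and simply has to return a validation accuracy:

from itertools import product

eps1_grid = [0.5, 1.5, 2, 3, 4, 5]
eps2_grid = [0, 8/255, 10/255, 12/255]
pro_grid = [0, 5, 7.5, 15, 25, 35, 45, 50]

def train_and_validate(eps1, eps2, pro):
    """Hypothetical stand-in: train MixComp with this setting and return
    the validation accuracy of the resulting model."""
    return 0.0

best = max(product(eps1_grid, eps2_grid, pro_grid),
           key=lambda cfg: train_and_validate(*cfg))
print("best (eps1, eps2, pro):", best)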

Appendix C Parameter setting in text sentiment analysis

For the IMDB data set, the batch size is set to 64; the learning rate is set to 0.001; the number of epochs is set to 6; the train/val/test proportion is 4:1:5; the embedding dropout is set to 0.5; and the dimension of the hidden vectors is 100. In LogComp, the learning rate for the compensation variable is searched in {0.6, 0.7, 0.8, 0.9, 1}, and $\lambda$ is searched in {0.75, 1}. In MixComp, $\epsilon_{1}$ (i.e., $\eta$) is searched in {0, 0.075, 0.15, 0.25, 0.5, 0.75, 1}, and $\epsilon_{2}$ is searched in {0, 0.005, 0.01, 0.015}. $\tau$ is determined according to the top-$pro$ percent of ordered losses, and the value of $pro$ is searched in {0, 5, 7.5, 15, 25, 35, 45, 50}.

For the SST-2 data set, the batch size is set to 32; the learning rate is set to 0.0001; the number of epochs is set to 50; the train/val/test division follows the default split; the embedding dropout is set to 0.7; and the dimension of the hidden vectors is 256. In LogComp, the learning rate for the compensation variable is searched in {0.02, 0.025, 0.03, 0.035, 0.04}, and $\lambda$ is searched in {0.75, 1}. In MixComp, $\epsilon_{1}$ (i.e., $\eta$) is searched in {0, 0.075, 0.15, 0.25, 0.5, 0.75, 1}, and $\epsilon_{2}$ is searched in {0, 0.005, 0.01, 0.015}. $\tau$ is determined according to the top-$pro$ percent of ordered losses, and the value of $pro$ is searched in {0, 5, 7.5, 15, 25, 35, 45, 50}.

For the two data sets, in Soft/Hard Bootstrapping, Label Smoothing, and Online Label Smoothing, the parameters follow the settings in [57]. In ProSelfLC, the parameters follow the settings in [49]. In PGD-AT, $\epsilon_{2}$ is searched in {0.005, 0.01, 0.015}.

Appendix D More sentence examples

Table A-1 shows some samples with high $l_{1}$-norm values of logit compensation whose labels are erroneous. Without contexts, we believe that these labels are wrong. Some readers may consider the labels correct in certain contexts; in our view, however, it is inappropriate to assume that annotators are familiar with these contexts in advance. Table A-2 shows some samples that are difficult for machines to predict. Their $l_{1}$-norm values are also high.

Table A-1: Sentences with wrong labels.
Sample Original label Our label
the exploitative, clumsily staged violence overshadows everything, including most of the actors. 1 0
.. a fascinating curiosity piece – fascinating, that is, for about ten minutes. 0 1
this is a great movie. I love the series on tv and so I loved the movie. One of the best things in the movie is that Helga finally admits her deepest darkest secret to Arnold!!! that was great. i loved it it was pretty funny too. It’s a great movie! Doy!!! 0 1
Table A-2: Sentences that are difficult to predict by machines.
Sample Original label
it ’s a boring movie about a boring man, made watchable by a bravura performance from a consummate actor incapable of being boring. 1
she is a lioness, protecting her cub, and he a reluctant villain, incapable of controlling his crew. 1
it made me want to get made-up and go see this movie with my sisters. 1

In addition, we plot the distributions of the $l_{1}$-norm of the compensated logit vectors when using LogComp on the CIFAR-10 and CIFAR-100 data sets with no added label noises (0%). The results are shown in Fig. A-1 and Fig. A-2. Both distribution curves show a long-tail trend, which is quite reasonable.

Figure A-1: Distribution of the $l_{1}$-norm of compensated logit vectors on CIFAR-10.
Figure A-2: Distribution of the $l_{1}$-norm of compensated logit vectors on CIFAR-100.