
Mixing Classifiers to Alleviate the Accuracy-Robustness Trade-Off

Yatong Bai (yatong_bai@berkeley.edu)
Brendon G. Anderson (bganderson@berkeley.edu)
Somayeh Sojoudi (sojoudi@berkeley.edu)
University of California, Berkeley

Abstract

Deep neural classifiers have recently found tremendous success in data-driven control systems. However, existing models suffer from a trade-off between accuracy and adversarial robustness. This limitation must be overcome in the control of safety-critical systems that require both high performance and rigorous robustness guarantees. In this work, we develop classifiers that simultaneously inherit high robustness from robust models and high accuracy from standard models. Specifically, we propose a theoretically motivated formulation that mixes the output probabilities of a standard neural network and a robust neural network. Both base classifiers are pre-trained, and thus our method does not require additional training. Our numerical experiments verify that the mixed classifier noticeably improves the accuracy-robustness trade-off and identify the confidence property of the robust base classifier as the key source of this more benign trade-off. Our theoretical results prove that, under mild assumptions, when the robustness of the robust base model is certifiable, no alteration or attack within a closed-form \ell_{p} radius on an input can result in misclassification of the mixed classifier.

keywords:
Adversarial Robustness, Image Classification, Computer Vision, Model Ensemble

1 Introduction

In recent years, high-performance machine learning models have been employed in various control settings, including reinforcement learning for dynamic systems with uncertainty (Levine et al., 2016; Sutton and Barto, 2018) and autonomous driving (Bojarski et al., 2016; Wu et al., 2017). However, models such as neural networks have been shown to be vulnerable to adversarial attacks, which are imperceptibly small input data alterations maliciously designed to cause failure (Szegedy et al., 2014; Nguyen et al., 2015; Huang et al., 2017; Eykholt et al., 2018; Liu et al., 2019). This vulnerability makes such models unreliable for safety-critical control where guaranteeing robustness is necessary. In response, “adversarial training (AT)” (Kurakin et al., 2017; Goodfellow et al., 2015; Bai et al., 2022a, b; Zheng et al., 2020; Zhang et al., 2019) has been studied to alleviate this susceptibility. AT builds robust neural networks by training on adversarially attacked data.

A parallel line of work focuses on mathematically certified robustness (Anderson et al., 2020; Ma and Sojoudi, 2021; Anderson and Sojoudi, 2022a). Among these methods, “randomized smoothing (RS)” is a particularly popular one that seeks to achieve certified robustness by processing intentionally corrupted data at inference time (Cohen et al., 2019; Li et al., 2019; Pfrommer et al., 2023), and has recently been applied to robustify reinforcement learning-based control strategies (Kumar et al., 2022; Wu et al., 2022). The recent work (Anderson and Sojoudi, 2022b) has shown that “locally biased smoothing,” which robustifies the model locally based on the input test datum, outperforms traditional RS with fixed smoothing noise. However, Anderson and Sojoudi (2022b) only focus on binary classification problems, significantly limiting the applicability of the method. Moreover, Anderson and Sojoudi (2022b) rely on the robustness of a K-nearest-neighbor (K-NN) classifier, which lacks representation power when applied to harder problems and becomes a bottleneck.

While some works have shown that there exists a fundamental trade-off between accuracy and robustness (Tsipras et al., 2019; Zhang et al., 2019), recent research has argued that it should be possible to simultaneously achieve robustness and accuracy on benchmark datasets (Yang et al., 2020). To this end, variants of AT that improve the accuracy-robustness trade-off have been proposed, including TRADES (Zhang et al., 2019), Interpolated Adversarial Training (Lamb et al., 2019), and many others (Raghunathan et al., 2020; Zhang and Wang, 2019; Tramèr et al., 2018; Balaji et al., 2019). However, even with these improvements, degraded clean accuracy is often an inevitable price of achieving robustness. Moreover, standard non-robust models often achieve enormous performance gains by pre-training on larger datasets, whereas the effect of pre-training on robust classifiers is less understood and may be less prominent (Chen et al., 2020; Fan et al., 2021).

This work makes a theoretically disciplined step towards robustifying models without sacrificing clean accuracy. Specifically, we build upon locally biased smoothing and replace its underlying K-NN classifier with a robust neural network that can be obtained via various existing methods. We then modify how the standard base model (a highly accurate but possibly non-robust neural network) and the robust base model are “mixed” accordingly. The resulting formulation, to be introduced in Section 3, is a convex combination of the output probabilities from the two base classifiers. We prove that, when the robust network has a bounded Lipschitz constant or is built via RS, the mixed classifier also has a closed-form certified robust radius. More importantly, our method achieves an empirical robustness level close to that of the robust base model while approaching the standard base model’s clean accuracy. This desirable behavior significantly improves the accuracy-robustness trade-off, especially for tasks where standard models noticeably outperform robust models on clean data.

Note that we do not make any assumptions about how the standard and robust base models are obtained (can be AT, RS, or others), nor do we assume the adversarial attack type and budget. Thus, our mixed classification scheme can take advantage of pre-training on large datasets via the standard base classifier and benefit from ever-improving robust training methods via the robust base classifier.

2 Background and related works

2.1 Notations

The \ell_{p} norm is denoted by \lVert\cdot\rVert_{p}, while \lVert\cdot\rVert_{p*} denotes its dual norm. The matrix I_{d} denotes the identity matrix in {\mathbb{R}}^{d\times d}. For a scalar a, \operatorname{sgn}(a)\in\{-1,0,1\} denotes its sign. For a natural number c, the set [c] is defined as \{1,2,\dots,c\}. For an event A, the indicator function {\mathbb{I}}(A) evaluates to 1 if A takes place and 0 otherwise. The notation {\mathbb{P}}_{X\sim{\mathcal{S}}}[A(X)] denotes the probability of an event A(X) occurring, where X is a random variable drawn from the distribution {\mathcal{S}}. The normal distribution on {\mathbb{R}}^{d} with mean \overline{x} and covariance \Sigma is written as {\mathcal{N}}(\overline{x},\Sigma). We denote the cumulative distribution function of {\mathcal{N}}(0,1) on {\mathbb{R}} by \Phi and write its inverse function as \Phi^{-1}.

Consider a model g:{\mathbb{R}}^{d}\to{\mathbb{R}}^{c} with components g_{i}:{\mathbb{R}}^{d}\to{\mathbb{R}},\ i\in[c], where d is the dimension of the input and c is the number of classes. In this paper, we assume that g(\cdot) does not have the desired level of robustness and refer to it as a “standard model”, as opposed to a “robust model”, which we denote as h(\cdot). We consider \ell_{p} norm-bounded attacks on differentiable neural networks. A classifier f:{\mathbb{R}}^{d}\to[c], defined as f(x)=\operatorname*{arg\,max}_{i\in[c]}g_{i}(x), is considered robust against adversarial attacks at an input datum x\in{\mathbb{R}}^{d} if it assigns the same class to all perturbed inputs x+\delta such that \lVert\delta\rVert_{p}\leq\epsilon, where \epsilon\geq 0 is the attack radius.

2.2 Related Adversarial Attacks and Defenses

The fast gradient sign method (FGSM) and projected gradient descent (PGD) attacks based on differentiating the cross-entropy loss are highly effective and have been considered the most standard attacks for evaluating robust models (Madry et al., 2018; Goodfellow et al., 2015). To exploit the structures of the defense methods, adaptive attacks have also been introduced (Tramèr et al., 2020).

On the defense side, while AT (Madry et al., 2018) and TRADES (Zhang et al., 2019) have seen enormous success, such methods are often limited by a significantly larger amount of required training data (Schmidt et al., 2018) and a decrease in generalization capability. Initiatives that construct more effective training data via data augmentation (Rebuffi et al., 2021; Gowal et al., 2021) and generative models (Sehwag et al., 2022) have successfully produced more robust models. Improved versions of AT (Jia et al., 2022; Shafahi et al., 2019) have also been proposed.

Previous initiatives that aim to enhance the accuracy-robustness trade-off include using alternative attacks during training (Pang et al., 2022), appending early-exit side branches to a single network (Hu et al., 2020), and applying AT for regularization (Zheng et al., 2021). Moreover, ensemble-based defenses, such as random ensembles (Liu et al., 2018) and diverse ensembles (Pang et al., 2019; Alam et al., 2022), have been proposed. In comparison, this work considers two separate classifiers and uses their synergy to improve the accuracy-robustness trade-off, achieving higher performance.

2.3 Locally Biased Smoothing

Randomized smoothing, popularized by Cohen et al. (2019), achieves robustness at inference time by replacing f(x)=\operatorname*{arg\,max}_{i\in[c]}g_{i}(x) with a smoothed classifier \widetilde{f}(x)=\operatorname*{arg\,max}_{i\in[c]}\mathbb{E}_{\xi\sim{\mathcal{S}}}\left[g_{i}(x+\xi)\right], where {\mathcal{S}} is a smoothing distribution. A common choice for {\mathcal{S}} is a Gaussian distribution.
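As a concrete illustration, the following minimal sketch (our own, not tied to any released implementation) approximates the smoothed prediction \widetilde{f}(x) with a Monte-Carlo average over Gaussian noise; the callable `base_model`, the sample count, and the batching are illustrative assumptions.

```python
# Minimal sketch of Gaussian randomized smoothing at inference time:
# approximate f_tilde(x) = argmax_i E_{xi ~ N(0, sigma^2 I)}[g_i(x + xi)]
# with a Monte-Carlo average. `base_model` is a hypothetical callable that
# maps a batch of inputs to per-class outputs (e.g., probabilities).
import torch

def smoothed_predict(base_model, x, sigma=0.25, n_samples=1000, batch=200):
    """Approximate the smoothed prediction at a single input x of shape [C, H, W]."""
    outputs = []
    with torch.no_grad():
        for start in range(0, n_samples, batch):
            n = min(batch, n_samples - start)
            noise = sigma * torch.randn((n, *x.shape))   # xi ~ N(0, sigma^2 I)
            outputs.append(base_model(x.unsqueeze(0) + noise))
    mean_output = torch.cat(outputs).mean(dim=0)         # empirical E[g(x + xi)]
    return int(mean_output.argmax())
```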

Anderson and Sojoudi (2022b) have recently argued that data-invariant RS does not always achieve robustness. They have shown that in the binary classification setting, RS with an unbiased distribution is suboptimal, and an optimal smoothing procedure shifts the input point in the direction of its true class. Since the true class is generally unavailable, a “direction oracle” is used as a surrogate. This “locally biased smoothing” method is no longer randomized and outperforms traditional data-blind RS. The locally biased smoothed classifier, denoted h^{\gamma}\colon{\mathbb{R}}^{d}\to{\mathbb{R}}, is obtained via the deterministic calculation h^{\gamma}(x)=g(x)+\gamma h(x)\lVert\nabla g(x)\rVert_{p*}, where h(x)\in\{-1,1\} is the direction oracle and \gamma\geq 0 is a trade-off parameter. The direction oracle should come from an inherently robust classifier (which is often less accurate). In (Anderson and Sojoudi, 2022b), this direction oracle is chosen to be a one-nearest-neighbor classifier.

3 Using a Robust Neural Network as the Smoothing Oracle

Locally biased smoothing was designed for binary classification, restricting its practicality. Here, we first extend it to the multi-class setting by treating the output of each class, denoted as h^{\gamma}_{i}(x), independently, giving rise to:

h_{\text{smo1},i}^{\gamma}(x)\coloneqq g_{i}(x)+\gamma h_{i}(x)\lVert\nabla g_{i}(x)\rVert_{p*},\;\;\;i\in[c]. (1)

Note that if \lVert\nabla g_{i}(x)\rVert_{p*} is large for some class i, then h_{\text{smo1},i}^{\gamma}(x) can be large for class i even if both g_{i}(x) and h_{i}(x) are small, leading to incorrect predictions. To remove the effect of the gradient magnitude difference across the classes, we propose a normalized formulation as follows:

h_{\text{smo2},i}^{\gamma}(x)\coloneqq\frac{g_{i}(x)+\gamma h_{i}(x)\lVert\nabla g_{i}(x)\rVert_{p*}}{1+\gamma\lVert\nabla g_{i}(x)\rVert_{p*}},\;\;\;i\in[c]. (2)

The parameter \gamma adjusts between clean accuracy and robustness. It holds that h_{\text{smo2},i}^{\gamma}(x)\equiv g_{i}(x) when \gamma=0, and h_{\text{smo2},i}^{\gamma}(x)\to h_{i}(x) as \gamma\to\infty, for all x and all i.

With the mixing procedure generalized to the multi-class setting, we now discuss the choice of the smoothing oracle h_{i}(\cdot). While K-NN classifiers are relatively robust and can be used as the oracle, their representation power is too weak. On the CIFAR-10 image classification task (Krizhevsky, 2012), K-NN only achieves around 35\% accuracy on clean test data. In contrast, an adversarially trained ResNet can reach 50\% accuracy on attacked test data (Madry et al., 2018). This lackluster performance of K-NN becomes a significant bottleneck in the accuracy-robustness trade-off of the mixed classifier. To address this bottleneck, we replace the K-NN model with a robust neural network. The robustness of this network can be achieved via various methods, including AT, TRADES, and RS.

Further scrutinizing Eq. 2 leads to the question of whether \lVert\nabla g_{i}(x)\rVert_{p*} is the best choice for adjusting the mixture of g(\cdot) and h(\cdot). This gradient magnitude term is a result of Anderson and Sojoudi (2022b)’s assumption that h(x)\in\{-1,1\}. Here, we no longer make this assumption; instead, we assume both g(\cdot) and h(\cdot) to be differentiable. Thus, we generalize the formulation to

h_{\text{smo3},i}^{\gamma}(x)\coloneqq\frac{g_{i}(x)+\gamma R_{i}(x)h_{i}(x)}{1+\gamma R_{i}(x)},\;\;\;i\in[c], (3)

where R_{i}(x) is an extra scalar term that can potentially depend on both \nabla g_{i}(x) and \nabla h_{i}(x) to determine the “trustworthiness” of the base classifiers. Here, we empirically compare four options for R_{i}(x), namely, 1, \lVert\nabla g_{i}(x)\rVert_{p*}, \lVert\nabla\max_{j}g_{j}(x)\rVert_{p*}, and \frac{\lVert\nabla g_{i}(x)\rVert_{p*}}{\lVert\nabla h_{i}(x)\rVert_{p*}}; a code sketch comparing several of these options is given below.
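To make the comparison concrete, the sketch below (our own illustration; `std_model` and `rob_model` are assumed to map a single input to post-softmax probability vectors) evaluates Eq. 3 for a few of the candidate R_{i}(x) choices using automatic differentiation; the \lVert\nabla\max_{j}g_{j}(x)\rVert_{p*} variant is analogous.

```python
# Hypothetical sketch of the generalized mixing rule in Eq. (3) for one input x,
# with a few candidate choices of the trustworthiness term R_i(x).
import torch

def mix_smo3(std_model, rob_model, x, gamma=1.0, r_choice="one", q=1.0):
    """Return the mixed scores h_smo3(x) of Eq. (3); q is the dual-norm order p*."""
    x = x.clone().requires_grad_(True)
    g, h = std_model(x), rob_model(x)              # per-class probabilities g(x), h(x)
    mixed = []
    for i in range(g.numel()):
        if r_choice == "one":                      # R_i(x) = 1
            R = torch.tensor(1.0)
        elif r_choice == "grad_g":                 # R_i(x) = ||grad g_i(x)||_{p*}
            R = torch.autograd.grad(g[i], x, retain_graph=True)[0].norm(p=q)
        elif r_choice == "grad_ratio":             # R_i(x) = ||grad g_i|| / ||grad h_i||
            grad_g = torch.autograd.grad(g[i], x, retain_graph=True)[0]
            grad_h = torch.autograd.grad(h[i], x, retain_graph=True)[0]
            R = grad_g.norm(p=q) / grad_h.norm(p=q).clamp_min(1e-12)
        mixed.append(((g[i] + gamma * R * h[i]) / (1.0 + gamma * R)).detach())
    return torch.stack(mixed)
```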

Another design question is whether g(\cdot) and h(\cdot) should be the pre-softmax logits or the post-softmax probabilities. Note that since most attack methods are designed based on logits, the output of the mixed classifier should be logits rather than probabilities to avoid gradient masking, an undesirable phenomenon that makes it hard to evaluate the robustness properly. Thus, we have the following two options that make the mixed model compatible with existing gradient-based attacks:

  1. Use the logits for both base classifiers, g(\cdot) and h(\cdot).

  2. Use the probabilities for both base classifiers, and then convert the mixed probabilities back to logits. The required “inverse-softmax” operator is simply the natural logarithm.

Figure 1: Comparing the “attacked accuracy – clean accuracy” curves for various options for R_{i}(x). “No Softmax” represents Option 1 (use the logits for g(\cdot) and h(\cdot)), while “Softmax” represents Option 2 (use the probabilities for g(\cdot) and h(\cdot)). With the best formulation, high clean accuracy can be achieved with very little sacrifice on robustness.

Figure 1 visualizes the accuracy-robustness trade-off achieved by mixing logits or probabilities with different R_{i}(x) options. Here, the base classifiers are a pair of standard and adversarially trained ResNet-18s. This “clean accuracy versus PGD10-attacked accuracy” plot concludes that R_{i}(x)=1 gives the best accuracy-robustness trade-off, and that g(\cdot) and h(\cdot) should be probabilities. Appendix A in the supplementary materials confirms this selection by repeating Figure 1 with alternative model architectures, different robust base classifier training methods, and various attack budgets.

Our selection of R_{i}(x)=1 differs from R_{i}(x)=\lVert\nabla g_{i}(x)\rVert_{p*} used in (Anderson and Sojoudi, 2022b). Intuitively, Anderson and Sojoudi (2022b) used linear classifiers to motivate estimating the base models’ trustworthiness with their gradient magnitudes. When the base classifiers are highly nonlinear neural networks, as in our case, a base classifier’s local Lipschitzness still correlates with its robustness, but its gradient magnitude is not always a good estimator of its local Lipschitzness. Additionally, Section 3.1 offers theoretical intuitions for mixing probabilities rather than logits.

With these design choices implemented, the formulation Eq. 3 can be re-parameterized as

h_{i}^{\alpha}(x)\coloneqq\log\big((1-\alpha)g_{i}(x)+\alpha h_{i}(x)\big),\;\;i\in[c], (4)

where \alpha=\frac{\gamma}{1+\gamma}\in[0,1]. We take h^{\alpha}(\cdot) in Eq. 4, which is a convex combination of the base classifiers’ output probabilities, as our proposed mixed classifier. Note that Eq. 4 returns the mixed classifier’s logits, so it acts as a drop-in replacement for existing models, which usually produce logits. Removing the logarithm recovers the output probabilities without changing the predicted class.
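The re-parameterized formulation translates directly into code. Below is a minimal PyTorch-style sketch of Eq. 4 (our own illustration, not the authors’ released implementation); it assumes the two base models output logits, which are converted to probabilities internally.

```python
# Minimal sketch of the mixed classifier of Eq. (4): the logarithm of a convex
# combination of the base models' softmax probabilities, returned as logits so
# that existing gradient-based attacks and evaluation pipelines apply unchanged.
import torch
import torch.nn as nn

class MixedClassifier(nn.Module):
    def __init__(self, std_model: nn.Module, rob_model: nn.Module, alpha: float = 0.6):
        super().__init__()
        assert 0.0 <= alpha <= 1.0
        self.std_model, self.rob_model, self.alpha = std_model, rob_model, alpha

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = torch.softmax(self.std_model(x), dim=-1)   # standard base probabilities g(x)
        h = torch.softmax(self.rob_model(x), dim=-1)   # robust base probabilities h(x)
        mixed = (1.0 - self.alpha) * g + self.alpha * h
        return torch.log(mixed.clamp_min(1e-12))       # "inverse softmax": mixed logits
```

Since the logarithm is monotone, dropping it (or exponentiating the output) recovers the mixed probabilities without changing the predicted class.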

3.1 Theoretical Certified Robust Radius

In this section, we derive certified robust radii for the mixed classifier h^{\alpha}(\cdot) introduced in Eq. 4, given in terms of the robustness properties of h(\cdot) and the mixing parameter \alpha. The results ensure that despite being more sophisticated than a single model, h^{\alpha}(\cdot) cannot be easily conquered, even if an adversary adapts its attack to the structure of h^{\alpha}(\cdot). Such guarantees are of paramount importance for reliable deployment in safety-critical control applications.

Noticing that the base model probabilities satisfy 0\leq g_{i}(\cdot)\leq 1 and 0\leq h_{i}(\cdot)\leq 1 for all i, we introduce the following generalized and tightened notion of certified robustness.

Definition 3.1.

Consider an arbitrary input x\in{\mathbb{R}}^{d} and let y=\operatorname*{arg\,max}_{i}h_{i}(x), \mu\in[0,1], and r\geq 0. Then, h(\cdot) is said to be certifiably robust at x with margin \mu and radius r if h_{y}(x+\delta)\geq h_{i}(x+\delta)+\mu for all i\neq y and all \delta\in{\mathbb{R}}^{d} such that \lVert\delta\rVert_{p}\leq r.

Lemma 3.2.

Let x\in{\mathbb{R}}^{d} and r\geq 0. If \alpha\in[\frac{1}{2},1] and h(\cdot) is certifiably robust at x with margin \frac{1-\alpha}{\alpha} and radius r, then the mixed classifier h^{\alpha}(\cdot) is robust in the sense that \operatorname*{arg\,max}_{i}h_{i}^{\alpha}(x+\delta)=\operatorname*{arg\,max}_{i}h_{i}(x) for all \delta\in{\mathbb{R}}^{d} such that \lVert\delta\rVert_{p}\leq r.

Proof 3.3.

Suppose that h(\cdot) is certifiably robust at x with margin \frac{1-\alpha}{\alpha} and radius r. Since \alpha\in[\frac{1}{2},1], it holds that \frac{1-\alpha}{\alpha}\in[0,1]. Let y=\operatorname*{arg\,max}_{i}h_{i}(x). Consider an arbitrary i\in[c]\setminus\{y\} and \delta\in{\mathbb{R}}^{d} such that \lVert\delta\rVert_{p}\leq r. Since g_{i}(x+\delta)\in[0,1], it holds that

\exp\big(h_{y}^{\alpha}(x+\delta)\big)-\exp\big(h_{i}^{\alpha}(x+\delta)\big)
=(1-\alpha)\big(g_{y}(x+\delta)-g_{i}(x+\delta)\big)+\alpha\big(h_{y}(x+\delta)-h_{i}(x+\delta)\big)
\geq(1-\alpha)(0-1)+\alpha\big(h_{y}(x+\delta)-h_{i}(x+\delta)\big)
\geq(\alpha-1)+\alpha\left(\tfrac{1-\alpha}{\alpha}\right)=0.

Thus, it holds that h_{y}^{\alpha}(x+\delta)\geq h_{i}^{\alpha}(x+\delta) for all i\neq y, and therefore \operatorname*{arg\,max}_{i}h_{i}^{\alpha}(x+\delta)=y=\operatorname*{arg\,max}_{i}h_{i}(x).

Intuitively, Definition 3.1 ensures that all points within a radius from a nominal point have the same prediction as the nominal point, with the difference between the top and runner-up probabilities no smaller than a threshold. For practical classifiers, the robust margin can be straightforwardly estimated by calculating the confidence gap between the predicted and the runner-up classes at an adversarial input obtained with strong attacks.
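A sketch of this margin estimate (our own, with `rob_model` assumed to output probabilities for a batch of inputs) is as follows.

```python
# Estimate the empirical robust margin of the robust base model at an
# adversarially perturbed input x_adv produced by any strong attack: the gap
# between the probability of the true class and that of the runner-up class.
import torch

def empirical_margin(rob_model, x_adv: torch.Tensor, label: int) -> float:
    with torch.no_grad():
        probs = rob_model(x_adv.unsqueeze(0)).squeeze(0)   # shape [num_classes]
    others = probs.clone()
    others[label] = float("-inf")
    return float(probs[label] - others.max())              # negative if misclassified
```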

While most existing provably robust results consider the special case with zero margin, we will show that models built via common methods are also robust with non-zero margins. We specifically consider two types of popular robust classifiers: Lipschitz continuous models (Theorem˜3.5) and RS models (Theorem˜3.7). Here, Lemma 3.2 builds the foundation for proving these two theorems, which amounts to showing that Lipschitz and RS models are robust with non-zero margins and thus the mixed classifiers built with them are robust.

Lemma 3.2 provides further justification for using probabilities instead of logits in the mixing operation. Intuitively, (1-\alpha)g_{i}(\cdot) is bounded between 0 and 1-\alpha, so as long as \alpha is relatively large (specifically, at least \frac{1}{2}), the detrimental effect of g(\cdot)’s probabilities when subject to attack is bounded and can be overcome by h(\cdot). Had we used the logits of g_{i}(\cdot), this quantity would be unbounded, making it much harder to overcome the vulnerability of g(\cdot).

Since we do not make assumptions on the Lipschitzness or robustness of g(\cdot), Lemma 3.2 is tight. To see this, suppose that there exist some i\in[c]\backslash\{y\} and \delta\neq 0 with \lVert\delta\rVert_{p}\leq r such that h_{d}\coloneqq h_{y}(x+\delta)-h_{i}(x+\delta) is smaller than \frac{1-\alpha}{\alpha}, which implies -\alpha h_{d}>\alpha-1. Since the only information about g(\cdot) is that g_{i}(x+\delta)\in[0,1], the value g_{y}(x+\delta)-g_{i}(x+\delta) can be any number in [-1,1], so it is possible that (1-\alpha)\left(g_{y}(x+\delta)-g_{i}(x+\delta)\right) is smaller than -\alpha h_{d}. In this case, it holds that h_{y}^{\alpha}(x+\delta)<h_{i}^{\alpha}(x+\delta), and thus \operatorname*{arg\,max}_{i}h_{i}^{\alpha}(x+\delta)\neq\operatorname*{arg\,max}_{i}h_{i}(x).

Definition 3.4.

A function f\colon{\mathbb{R}}^{d}\to{\mathbb{R}} is called \ell_{p}-Lipschitz continuous if there exists L\in(0,\infty) such that |f(x^{\prime})-f(x)|\leq L\|x^{\prime}-x\|_{p} for all x^{\prime},x\in{\mathbb{R}}^{d}. The Lipschitz constant of such f is defined as \operatorname{Lip}_{p}(f)\coloneqq\inf\{L\in(0,\infty):|f(x^{\prime})-f(x)|\leq L\|x^{\prime}-x\|_{p}~\text{for all }x^{\prime},x\in{\mathbb{R}}^{d}\}.

Assumption 1

The classifier h(\cdot) is robust in the sense that, for all i\in[c], h_{i}(\cdot) is \ell_{p}-Lipschitz continuous with Lipschitz constant \operatorname{Lip}_{p}(h_{i}).

Theorem 3.5.

Suppose that Assumption 1 holds, and let x\in{\mathbb{R}}^{d} be arbitrary. Let y=\operatorname*{arg\,max}_{i}h_{i}(x). Then, if \alpha\in[\frac{1}{2},1], it holds that \operatorname*{arg\,max}_{i}h_{i}^{\alpha}(x+\delta)=y for all \delta\in{\mathbb{R}}^{d} such that

\lVert\delta\rVert_{p}\leq r_{p}^{\alpha}(x)\coloneqq\min_{i\neq y}\frac{\alpha\left(h_{y}(x)-h_{i}(x)\right)+\alpha-1}{\alpha\left(\operatorname{Lip}_{p}(h_{y})+\operatorname{Lip}_{p}(h_{i})\right)}. (5)
Proof 3.6.

Suppose that \alpha\in[\frac{1}{2},1], and let \delta\in{\mathbb{R}}^{d} be such that \lVert\delta\rVert_{p}\leq r_{p}^{\alpha}(x). Furthermore, let i\in[c]\setminus\{y\}. It holds that

h_{y}(x+\delta)-h_{i}(x+\delta)=h_{y}(x)-h_{i}(x)+h_{y}(x+\delta)-h_{y}(x)+h_{i}(x)-h_{i}(x+\delta)
\geq h_{y}(x)-h_{i}(x)-\operatorname{Lip}_{p}(h_{y})\lVert\delta\rVert_{p}-\operatorname{Lip}_{p}(h_{i})\lVert\delta\rVert_{p}
\geq h_{y}(x)-h_{i}(x)-\left(\operatorname{Lip}_{p}(h_{y})+\operatorname{Lip}_{p}(h_{i})\right)r_{p}^{\alpha}(x)\geq\tfrac{1-\alpha}{\alpha}.

Therefore, h(\cdot) is certifiably robust at x with margin \frac{1-\alpha}{\alpha} and radius r_{p}^{\alpha}(x). Hence, the claim holds by Lemma 3.2.

We remark that the \ell_{p} norm that Theorem 3.5 certifies may be arbitrary (e.g., \ell_{1}, \ell_{2}, or \ell_{\infty}), so long as the Lipschitz constant of the robust network h(\cdot) is computed with respect to the same norm.

Assumption 1 is not restrictive in practice. For example, Gaussian RS with smoothing variance \sigma^{2}I_{d} yields robust models with \ell_{2}-Lipschitz constant \sqrt{\nicefrac{2}{\pi\sigma^{2}}} (Salman et al., 2019). Moreover, empirically robust methods such as AT and TRADES often train locally Lipschitz continuous models, even though there may not be closed-form theoretical guarantees.
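When per-class Lipschitz constants are available (for instance, the RS constant above shared across classes), the radius in Eq. 5 is straightforward to evaluate; a hedged sketch follows.

```python
# Sketch of the Lipschitz-based certificate of Eq. (5). Assumed inputs:
# h_probs holds the robust base model's probabilities h(x), and lip holds a
# per-class l_p Lipschitz constant for each h_i.
import numpy as np

def lipschitz_certified_radius(h_probs: np.ndarray, lip: np.ndarray, alpha: float) -> float:
    y = int(h_probs.argmax())
    radii = [
        (alpha * (h_probs[y] - h_probs[i]) + alpha - 1.0) / (alpha * (lip[y] + lip[i]))
        for i in range(len(h_probs)) if i != y
    ]
    return max(min(radii), 0.0)   # clip trivial (negative) certificates to zero
```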

Assumption 1 can be relaxed to the even less restrictive scenario in which local Lipschitz constants over a neighborhood (e.g., a norm ball) around a nominal input x (i.e., a measure of how flat h(\cdot) is near x) are used as surrogates for the global Lipschitz constants. In this case, Theorem 3.5 holds for all \delta within this neighborhood. Specifically, suppose that for an input x and an \ell_{p} attack radius \epsilon, it holds that h_{y}(x)-h_{y}(x+\delta)\leq\epsilon\cdot\operatorname{Lip}_{p}^{x}(h_{y}) and h_{i}(x+\delta)-h_{i}(x)\leq\epsilon\cdot\operatorname{Lip}_{p}^{x}(h_{i}) for all i\neq y and all perturbations \delta such that \lVert\delta\rVert_{p}\leq\epsilon. Furthermore, suppose that the robust radius r_{p}^{\alpha}(x), defined as in Eq. 5 but with the local Lipschitz constants \operatorname{Lip}_{p}^{x} in place of the global constants \operatorname{Lip}_{p}, is at least \epsilon. Then, if the robust base classifier h(\cdot) is correct at the nominal point x, the mixed classifier h^{\alpha}(\cdot) is robust at x within the radius \epsilon. The proof follows that of Theorem 3.5.

The relaxed Lipschitzness defined above can be estimated for practical differentiable classifiers via an algorithm similar to the PGD attack (Yang et al., 2020). Yang et al. (2020) also showed that many existing empirically robust models, including those trained with AT or TRADES, are in fact locally Lipschitz. Note that Yang et al. (2020) evaluated the local Lipschitz constants of the logits, whereas we analyze the probabilities, whose Lipschitz constants are much smaller. Therefore, Theorem˜3.5 provides important insights into the empirical robustness of the mixed classifier.
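As a rough sketch of such a PGD-style estimator (our own simplification, not the exact procedure of Yang et al. (2020)), one can ascend the normalized change of one class probability over an \ell_{\infty} ball:

```python
# Estimate the relaxed local Lipschitzness of one output probability h_i around x:
# maximize |h_i(x + delta) - h_i(x)| / eps over ||delta||_inf <= eps with
# signed-gradient ascent. `rob_model` is assumed to map a batch to probabilities.
import torch

def local_lipschitz_estimate(rob_model, x, class_idx, eps=8/255, steps=20, lr=1e-2):
    delta = torch.zeros_like(x, requires_grad=True)
    with torch.no_grad():
        base = rob_model(x.unsqueeze(0))[0, class_idx]
    for _ in range(steps):
        out = rob_model((x + delta).unsqueeze(0))[0, class_idx]
        obj = (out - base).abs() / eps                  # |h_i(x+delta) - h_i(x)| / eps
        grad = torch.autograd.grad(obj, delta)[0]
        with torch.no_grad():
            delta += lr * grad.sign()                   # l_inf PGD-style ascent step
            delta.clamp_(-eps, eps)                     # project back onto the ball
    with torch.no_grad():
        return float((rob_model((x + delta).unsqueeze(0))[0, class_idx] - base).abs() / eps)
```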

An intuitive explanation of Theorem 3.5 is that if \alpha\to 1, then r_{p}^{\alpha}(x)\to\min_{i\neq y}\frac{h_{y}(x)-h_{i}(x)}{\operatorname{Lip}_{p}(h_{y})+\operatorname{Lip}_{p}(h_{i})}, which is the standard Lipschitz-based robust radius of h(\cdot) around x (see (Fazlyab et al., 2019; Hein and Andriushchenko, 2017) for further discussions on Lipschitz-based robustness). On the other hand, if \alpha is too small in comparison to the relative confidence of h(\cdot) and puts excessive weight on the non-robust classifier g(\cdot), namely, if there exists i\neq y such that \alpha\leq\frac{1}{1+h_{y}(x)-h_{i}(x)}, then r_{p}^{\alpha}(x)\leq 0, and in this case we cannot provide non-trivial certified robustness for h^{\alpha}(\cdot). If h(\cdot) is 100\% confident in its prediction, then h_{y}(x)-h_{i}(x)=1 for all i\neq y, so this threshold value of \alpha becomes \frac{1}{2}, leading to non-trivial certified radii for all \alpha>\frac{1}{2}. However, once we put more than \frac{1}{2} of the weight on g(\cdot), a nonzero radius around x can no longer be certified. Since no assumptions on the robustness of g(\cdot) around x have been made, this is intuitively the best one can expect.

We now move on to tightening the certified radius in the special case where h(\cdot) is an RS classifier and the robust radii are defined in terms of the \ell_{2} norm.

Assumption 2

The classifier h(\cdot) is a (Gaussian) randomized smoothing classifier, i.e., h(x)=\mathbb{E}_{\xi\sim{\mathcal{N}}(0,\sigma^{2}I_{d})}\left[\overline{h}(x+\xi)\right] for all x\in{\mathbb{R}}^{d}, where \overline{h}\colon{\mathbb{R}}^{d}\to[0,1]^{c} is a neural model that is non-robust in general. Furthermore, for all i\in[c], \overline{h}_{i}(\cdot) is neither 0 almost everywhere nor 1 almost everywhere.

Theorem 3.7.

Suppose that Assumption 2 holds, and let x\in{\mathbb{R}}^{d} be arbitrary. Let y=\operatorname*{arg\,max}_{i}h_{i}(x) and y^{\prime}=\operatorname*{arg\,max}_{i\neq y}h_{i}(x). Then, if \alpha\in[\frac{1}{2},1], it holds that \operatorname*{arg\,max}_{i}h_{i}^{\alpha}(x+\delta)=y for all \delta\in{\mathbb{R}}^{d} such that

\lVert\delta\rVert_{2}\leq r_{\sigma}^{\alpha}(x)\coloneqq\frac{\sigma}{2}\Big(\Phi^{-1}\left(\alpha h_{y}(x)\right)-\Phi^{-1}\left(\alpha h_{y^{\prime}}(x)+1-\alpha\right)\Big).

The proof of Theorem˜3.7 is provided in Appendix˜B in the supplementary materials.

To summarize our certified radii, Theorem 3.5 applies to general Lipschitz continuous robust base classifiers h(\cdot) and arbitrary \ell_{p} norms, whereas Theorem 3.7, which applies to the \ell_{2} norm and RS base classifiers, strengthens the certified radius by exploiting the stronger Lipschitzness arising from the special structure and smoothness granted by the Gaussian convolution operation. Theorems 3.5 and 3.7 guarantee that our proposed robustification cannot be easily circumvented by adaptive attacks.
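For completeness, the RS-based radius of Theorem 3.7 can be evaluated as follows (a sketch under the assumption that the smoothed probabilities of the top two classes have already been estimated, e.g., by Monte-Carlo sampling; confidence corrections for the sampling error are omitted).

```python
# Sketch of the RS-based certificate of Theorem 3.7. Phi^{-1} is the standard
# normal quantile function (scipy.stats.norm.ppf).
from scipy.stats import norm

def rs_certified_radius(h_top: float, h_runner_up: float, alpha: float, sigma: float) -> float:
    """h_top = h_y(x), h_runner_up = h_{y'}(x); both must lie in (0, 1)."""
    r = 0.5 * sigma * (norm.ppf(alpha * h_top) - norm.ppf(alpha * h_runner_up + 1.0 - alpha))
    return max(r, 0.0)
```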

4 Numerical Experiments

4.1 α\alpha’s Influence on Mixed Classifier Robustness

We first use the CIFAR-10 dataset to evaluate the mixed classifier h^{\alpha}(\cdot) with various values of \alpha. We use a ResNet18 model trained on unattacked images as the standard base model g(\cdot) and another ResNet18 trained on PGD20 data as the robust base model h(\cdot). We consider PGD20 attacks that target g(\cdot) and h(\cdot) individually (abbreviated as STD and ROB attacks; these can be regarded as transfer attacks), in addition to the adaptive PGD20 attack generated using the end-to-end gradient of h^{\alpha}(\cdot), denoted as the MIX attack.
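The adaptive MIX attack is an ordinary \ell_{\infty} PGD attack whose gradient is taken end-to-end through the mixed classifier. A hedged sketch follows (step size and random initialization follow common PGD practice and are not necessarily the exact settings used in our experiments).

```python
# l_inf PGD attack on the cross-entropy loss; passing the mixed classifier as
# `model` yields the adaptive MIX attack, while passing g or h yields the STD
# and ROB (transfer) attacks, respectively.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, steps=20, step_size=2/255):
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()           # ascend the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)  # project
    return x_adv.detach()
```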

The test accuracy of each mixed classifier is presented in Figure 2. As \alpha increases, the clean accuracy of h^{\alpha}(\cdot) converges from the clean accuracy of g(\cdot) to the clean accuracy of h(\cdot). In terms of attacked performance, when the attack targets g(\cdot), the attacked accuracy increases with \alpha. When the attack targets h(\cdot), the attacked accuracy decreases with \alpha, showing that the attack targeting h(\cdot) becomes more benign when the mixed classifier emphasizes g(\cdot). When the attack targets the mixed classifier h^{\alpha}(\cdot) itself, the attacked accuracy increases with \alpha.

When \alpha is around 0.5, the MIX-attacked accuracy of h^{\alpha}(\cdot) quickly increases from near zero to more than 30\% (two-thirds of h(\cdot)'s attacked accuracy). This observation precisely matches the theoretical intuition from Theorem 3.5. Meanwhile, when \alpha is greater than 0.5, the clean accuracy gradually decreases at a much slower rate, leading to the alleviated accuracy-robustness trade-off.

4.2 The Relationship between h^{\alpha}(\cdot)'s Robustness and h(\cdot)'s Confidence

This difference in how the clean and attacked accuracies change with \alpha can be explained by the prediction confidence of the robust base classifier h(\cdot). Specifically, Table 1 confirms that h(\cdot) makes confident correct predictions even when under attack (the average robust margin is 0.768). Moreover, h(\cdot)'s robust margin follows a long-tail distribution: the median robust margin is 0.933, much larger than the 0.768 mean. Thus, most attacked inputs correctly classified by h(\cdot) are classified with high confidence (i.e., they are robust with large margins). As Lemma 3.2 suggests, such a property is precisely what the mixed classifier relies on. Intuitively, once \alpha becomes greater than 0.5 and gives h(\cdot) more authority over g(\cdot), h(\cdot) can use its confidence to correct g(\cdot)'s mistakes under attack.

Figure 2: The accuracy of the mixed classifier h^{\alpha}(\cdot) at various \alpha values. “STD attack”, “ROB attack”, and “MIX attack” refer to the PGD20 attack generated using the gradient of g(\cdot), h(\cdot), and h^{\alpha}(\cdot), respectively, with \epsilon set to \frac{8}{255}.

Table 1: Average gap between the probabilities of the predicted class and the runner-up class.

               Clean Instances            PGD20 Instances
               Correct    Incorrect       Correct    Incorrect
g(\cdot)       0.982      0.698           0.602      0.998
h(\cdot)       0.854      0.434           0.768      0.635

On the other hand, h(\cdot) is unconfident when producing incorrect predictions on clean data, with the top two classes' output probabilities separated by merely 0.434 on average. This probability gap again forms a long-tail distribution (the median, 0.378, is less than the mean), confirming that h(\cdot) rarely makes confident incorrect predictions. Now, consider clean data that g(\cdot) correctly classifies and h(\cdot) mispredicts. Recall that we assume g(\cdot) to be more accurate but less robust, so this scenario should be common. Since g(\cdot) is confident (its average top-two probability gap is 0.982) and h(\cdot) is usually unconfident, even when \alpha>0.5 and g(\cdot) has less authority than h(\cdot) in the mixture, g(\cdot) can still correct some of the mistakes from h(\cdot).

In summary, h(\cdot) is confident when making correct predictions on attacked data while being unconfident when misclassifying clean data, and such a confidence property is the key source of the mixed classifier's improved accuracy-robustness trade-off. Additional analyses in Appendix A with alternative base models imply that multiple existing robust classifiers share this benign confidence property and thus help the mixed classifier improve the trade-off.
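The confidence-gap statistics reported in Table 1 can be reproduced with a few lines; a sketch (our own, assuming the output probabilities and labels have been collected into arrays) is given below.

```python
# Average top-two probability gap, split by prediction correctness, for one model
# on one set of (clean or attacked) inputs -- the quantity reported in Table 1.
import numpy as np

def confidence_gaps(probs: np.ndarray, labels: np.ndarray):
    """probs: [n_samples, n_classes] probabilities; labels: [n_samples] true classes."""
    top2 = np.sort(probs, axis=1)[:, -2:]                # runner-up and top probability
    gaps = top2[:, 1] - top2[:, 0]
    correct = probs.argmax(axis=1) == labels
    return gaps[correct].mean(), gaps[~correct].mean()   # (correct, incorrect) averages
```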

4.3 Visualization of the Certified Robust Radii

Next, we visualize the certified robust radii presented in Theorems 3.5 and 3.7. Since a (Gaussian) RS model with smoothing covariance matrix \sigma^{2}I_{d} has an \ell_{2}-Lipschitz constant of \sqrt{\nicefrac{2}{\pi\sigma^{2}}}, such a model can be used to simultaneously visualize both theorems, with Theorem 3.7 giving tighter certificates of robustness. Note that RS models with a larger smoothing variance certify larger radii but achieve lower clean accuracy, and vice versa. Here, we consider the CIFAR-10 dataset, select g(\cdot) to be a ConvNeXT-T model with a clean accuracy of 97.25\%, and use the RS models presented in (Zhang et al., 2019) as h(\cdot). For a fair comparison, we select an \alpha value such that the clean accuracy of the constructed mixed classifier h^{\alpha}(\cdot) matches that of another RS model h_{\text{baseline}}(\cdot) with a smaller smoothing variance. The expectation term in the RS formulation is approximated with the empirical mean over 10000 random perturbations drawn from {\mathcal{N}}(0,\sigma^{2}I_{d}), and the certified radii of h_{\text{baseline}}(\cdot) are calculated using Theorems 3.5 and 3.7 with \alpha set to 1. Figure 3 displays the resulting certified accuracy of h^{\alpha}(\cdot) and h_{\text{baseline}}(\cdot) at various attack radii. The ordinate “Accuracy” at a given abscissa “\ell_{2} radius” reflects the percentage of the test data for which the considered model gives a correct prediction as well as a certified radius at least as large as the \ell_{2} radius under consideration.
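A sketch of how such a curve can be assembled from per-point certificates (our own illustration; `correct` and `radii` are assumed to have been computed with Theorem 3.5 or 3.7 for every test point):

```python
# Certified accuracy at each radius r: the fraction of test points that are both
# correctly predicted and certified with a radius of at least r.
import numpy as np

def certified_accuracy_curve(correct: np.ndarray, radii: np.ndarray, grid: np.ndarray):
    return np.array([(correct & (radii >= r)).mean() for r in grid])
```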

Figure 3: Comparing the certified accuracy-robustness trade-off of RS models and our mixed classifier using both Lipschitz-based (Lip-based) and RS-based certificates (Theorems 3.5 and 3.7, respectively). Left: h_{\text{baseline}}(\cdot) is RS with \sigma=0.5, and h^{\alpha}(\cdot) uses \alpha=0.76 with h(\cdot) being RS with \sigma=1. Right: h_{\text{baseline}}(\cdot) is RS with \sigma=0.25, and two mixed classifiers are considered: h^{\alpha}_{a}(\cdot) with \alpha=0.76 and h_{a}(\cdot) being RS with \sigma=0.5, and h^{\alpha}_{b}(\cdot) with \alpha=0.67 and h_{b}(\cdot) being RS with \sigma=1.0. The clean accuracy is the same between h_{\text{baseline}}(\cdot) and h^{\alpha}(\cdot) in each subfigure, and the empty circles represent discontinuity in the certified accuracy at radius 0.

In both subplots of Figure 3, the certified robustness curves of h^{\alpha}(\cdot) do not connect to the clean accuracy when \alpha\to 0. This is because Theorems 3.5 and 3.7 both consider robustness with respect to h(\cdot) and do not certify test inputs at which h(\cdot) makes incorrect predictions, even though h^{\alpha}(\cdot) may correctly predict some of these points. This is reasonable because we do not assume any robustness or Lipschitzness of g(\cdot), and g(\cdot) is allowed to be arbitrarily incorrect whenever the radius is non-zero.

The Lipschitz-based bound of Theorem 3.5 allows us to visualize the performance of the mixed classifier h^{\alpha}(\cdot) when h(\cdot) is an \ell_{2}-Lipschitz model. In this case, the curves associated with h^{\alpha}(\cdot) and h_{\text{baseline}}(\cdot) intersect, with h^{\alpha}(\cdot) achieving higher certified accuracy at larger radii and h_{\text{baseline}}(\cdot) certifying more points at smaller radii. Adjusting \alpha and the Lipschitz constant of h(\cdot) can change the location of this intersection while maintaining the clean accuracy. Thus, the mixed classifier allows for optimizing the certified accuracy at a particular radius without sacrificing clean accuracy.

The RS-based bound from Theorem 3.7 captures the behavior of the mixed classifier h^{\alpha}(\cdot) when h(\cdot) is an RS model. For both h^{\alpha}(\cdot) and h_{\text{baseline}}(\cdot), the RS-based bounds certify larger radii than the corresponding Lipschitz-based bounds. Nonetheless, h_{\text{baseline}}(\cdot) can certify more points with the RS-based guarantee. Intuitively, this phenomenon suggests that RS models can yield correct but low-confidence predictions when under large-radius attack, and thus may not be best suited for our mixing operation, which relies on robustness with non-zero margins. Meanwhile, Lipschitz models, a more general and common class of models, exploit the mixing operation more effectively. Moreover, as shown in Figure 2 and Table 1, empirically robust models often yield high-confidence correct predictions when under attack, making them more suitable to be used as h^{\alpha}(\cdot)'s robust base classifier.

5 Conclusions

This work proposes to mix the predicted probabilities of an accurate classifier and a robust classifier to mitigate the accuracy-robustness trade-off. These two base classifiers can be pre-trained, and the resulting mixed classifier requires no additional training. Theoretical results certify that the mixed classifier inherits the robustness of the robust base model under realistic assumptions. Empirical evaluations show that our method approaches the high accuracy of the latest standard models while retaining the robustness of modern robust classification methods. Hence, this work provides a foundation for future research to focus on either accuracy or robustness without sacrificing the other, providing additional incentives for deploying robust models in safety-critical control.

Acknowledgments

This work was supported by grants from ONR, NSF, and C3 AI.

References

  • Alam et al. (2022) Manaar Alam, Shubhajit Datta, Debdeep Mukhopadhyay, Arijit Mondal, and Partha Pratim Chakrabarti. Resisting adversarial attacks in deep neural networks using diverse decision boundaries. arXiv preprint arXiv:2208.08697, 2022.
  • Anderson et al. (2020) Brendon Anderson, Ziye Ma, Jingqi Li, and Somayeh Sojoudi. Tightened convex relaxations for neural network robustness certification. In IEEE Conference on Decision and Control, 2020.
  • Anderson and Sojoudi (2022a) Brendon G Anderson and Somayeh Sojoudi. Data-driven certification of neural networks with random input noise. IEEE Transactions on Control of Network Systems, 2022a.
  • Anderson and Sojoudi (2022b) Brendon G. Anderson and Somayeh Sojoudi. Certified robustness via locally biased randomized smoothing. In Learning for Dynamics and Control Conference, 2022b.
  • Bai et al. (2022a) Yatong Bai, Tanmay Gautam, Yu Gai, and Somayeh Sojoudi. Practical convex formulation of robust one-hidden-layer neural network training. American Control Conference, 2022a.
  • Bai et al. (2022b) Yatong Bai, Tanmay Gautam, and Somayeh Sojoudi. Efficient global optimization of two-layer ReLU networks: Quadratic-time algorithms and adversarial training. SIAM Journal on Mathematics of Data Science, 2022b.
  • Balaji et al. (2019) Yogesh Balaji, Tom Goldstein, and Judy Hoffman. Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets. arXiv preprint arXiv:1910.08051, 2019.
  • Bojarski et al. (2016) Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D. Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016.
  • Chen et al. (2020) Tianlong Chen, Sijia Liu, Shiyu Chang, Yu Cheng, Lisa Amini, and Zhangyang Wang. Adversarial robustness: From self-supervised pre-training to fine-tuning. In IEEE Conference on Computer Vision and Pattern Recognition, 2020.
  • Cohen et al. (2019) Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning, 2019.
  • Eykholt et al. (2018) Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. Robust physical-world attacks on deep learning visual classification. In IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  • Fan et al. (2021) Lijie Fan, Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, and Chuang Gan. When does contrastive learning preserve adversarial robustness from pretraining to finetuning? In Advances in Neural Information Processing Systems, 2021.
  • Fazlyab et al. (2019) Mahyar Fazlyab, Alexander Robey, Hamed Hassani, Manfred Morari, and George Pappas. Efficient and accurate estimation of Lipschitz constants for deep neural networks. In Advances in Neural Information Processing Systems, 2019.
  • Goodfellow et al. (2015) Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015.
  • Gowal et al. (2021) Sven Gowal, Sylvestre-Alvise Rebuffi, Olivia Wiles, Florian Stimberg, Dan A. Calian, and Timothy Mann. Improving robustness using generated data. arXiv preprint arXiv:2110.09468, 2021.
  • Hein and Andriushchenko (2017) Matthias Hein and Maksym Andriushchenko. Formal guarantees on the robustness of a classifier against adversarial manipulation. In Advances in Neural Information Processing Systems, 2017.
  • Hu et al. (2020) Ting-Kuei Hu, Tianlong Chen, Haotao Wang, and Zhangyang Wang. Triple wins: Boosting accuracy, robustness and efficiency together by enabling input-adaptive inference. In International Conference on Learning Representations, 2020.
  • Huang et al. (2017) Sandy H. Huang, Nicolas Papernot, Ian J. Goodfellow, Yan Duan, and Pieter Abbeel. Adversarial attacks on neural network policies. In International Conference on Learning Representations, 2017.
  • Jia et al. (2022) Xiaojun Jia, Yong Zhang, Baoyuan Wu, Ke Ma, Jue Wang, and Xiaochun Cao. LAS-AT: Adversarial training with learnable attack strategy. In IEEE Conference on Computer Vision and Pattern Recognition, 2022.
  • Krizhevsky (2012) Alex Krizhevsky. Learning multiple layers of features from tiny images, 2012. URL https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf.
  • Kumar et al. (2022) Aounon Kumar, Alexander Levine, and Soheil Feizi. Policy smoothing for provably robust reinforcement learning. In International Conference on Learning Representations, 2022.
  • Kurakin et al. (2017) Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. Adversarial machine learning at scale. In International Conference on Learning Representations, 2017.
  • Lamb et al. (2019) Alex Lamb, Vikas Verma, Juho Kannala, and Yoshua Bengio. Interpolated adversarial training: Achieving robust neural networks without sacrificing too much accuracy. In ACM Workshop on Artificial Intelligence and Security, 2019.
  • Levine et al. (2019) Alexander Levine, Sahil Singla, and Soheil Feizi. Certifiably robust interpretation in deep learning. arXiv preprint arXiv:1905.12105, 2019.
  • Levine et al. (2016) Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17(1):1334–1373, 2016.
  • Li et al. (2019) Bai Li, Changyou Chen, Wenlin Wang, and Lawrence Carin. Certified adversarial robustness with additive noise. In Advances in Neural Information Processing Systems, 2019.
  • Liu et al. (2019) Aishan Liu, Xianglong Liu, Jiaxin Fan, Yuqing Ma, Anlan Zhang, Huiyuan Xie, and Dacheng Tao. Perceptual-sensitive GAN for generating adversarial patches. In The AAAI Conference on Artificial Intelligence, 2019.
  • Liu et al. (2018) Xuanqing Liu, Minhao Cheng, Huan Zhang, and Cho-Jui Hsieh. Towards robust neural networks via random self-ensemble. In European Conference on Computer Vision, 2018.
  • Liu et al. (2022) Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A ConvNet for the 2020s. In IEEE Conference on Computer Vision and Pattern Recognition, 2022.
  • Ma and Sojoudi (2021) Ziye Ma and Somayeh Sojoudi. A sequential framework towards an exact SDP verification of neural networks. In International Conference on Data Science and Advanced Analytics, 2021.
  • Madry et al. (2018) Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018.
  • Nguyen et al. (2015) Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In IEEE Conference on Computer Vision and Pattern Recognition, 2015.
  • Pang et al. (2019) Tianyu Pang, Kun Xu, Chao Du, Ning Chen, and Jun Zhu. Improving adversarial robustness via promoting ensemble diversity. In International Conference on Machine Learning, 2019.
  • Pang et al. (2022) Tianyu Pang, Min Lin, Xiao Yang, Jun Zhu, and Shuicheng Yan. Robustness and accuracy could be reconcilable by (proper) definition. arXiv preprint arXiv:2202.10103, 2022.
  • Pfrommer et al. (2023) Samuel Pfrommer, Brendon G Anderson, and Somayeh Sojoudi. Projected randomized smoothing for certified adversarial robustness. Transactions on Machine Learning Research, 2023.
  • Raghunathan et al. (2020) Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John C. Duchi, and Percy Liang. Understanding and mitigating the tradeoff between robustness and accuracy. In International Conference on Machine Learning, 2020.
  • Rebuffi et al. (2021) Sylvestre-Alvise Rebuffi, Sven Gowal, Dan A Calian, Florian Stimberg, Olivia Wiles, and Timothy Mann. Fixing data augmentation to improve adversarial robustness. arXiv preprint arXiv:2103.01946, 2021.
  • Salman et al. (2019) Hadi Salman, Jerry Li, Ilya Razenshteyn, Pengchuan Zhang, Huan Zhang, Sebastien Bubeck, and Greg Yang. Provably robust deep learning via adversarially trained smoothed classifiers. Advances in Neural Information Processing Systems, 2019.
  • Schmidt et al. (2018) Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, and Aleksander Madry. Adversarially robust generalization requires more data. Advances in Neural Information Processing Systems, 31, 2018.
  • Sehwag et al. (2022) Vikash Sehwag, Saeed Mahloujifar, Tinashe Handina, Sihui Dai, Chong Xiang, Mung Chiang, and Prateek Mittal. Robust learning meets generative models: Can proxy distributions improve adversarial robustness? In International Conference on Learning Representations, 2022.
  • Shafahi et al. (2019) Ali Shafahi, Mahyar Najibi, Mohammad Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S Davis, Gavin Taylor, and Tom Goldstein. Adversarial training for free! Advances in Neural Information Processing Systems, 2019.
  • Sutton and Barto (2018) Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT press, 2018.
  • Szegedy et al. (2014) Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations, 2014.
  • Tramèr et al. (2018) Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian J. Goodfellow, Dan Boneh, and Patrick D. McDaniel. Ensemble adversarial training: Attacks and defenses. In International Conference on Learning Representations, 2018.
  • Tramèr et al. (2020) Florian Tramèr, Nicholas Carlini, Wieland Brendel, and Aleksander Madry. On adaptive attacks to adversarial example defenses. In Advances in Neural Information Processing Systems, 2020.
  • Tsipras et al. (2019) Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. In International Conference on Learning Representations, 2019.
  • Wu et al. (2017) Bichen Wu, Forrest Iandola, Peter H. Jin, and Kurt Keutzer. SqueezeDet: Unified, small, low power fully convolutional neural networks for real-time object detection for autonomous driving. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2017.
  • Wu et al. (2022) Fan Wu, Linyi Li, Zijian Huang, Yevgeniy Vorobeychik, Ding Zhao, and Bo Li. CROP: Certifying robust policies for reinforcement learning through functional smoothing. In International Conference on Learning Representations, 2022.
  • Yang et al. (2020) Yao-Yuan Yang, Cyrus Rashtchian, Hongyang Zhang, Russ R. Salakhutdinov, and Kamalika Chaudhuri. A closer look at accuracy vs. robustness. In Annual Conference on Neural Information Processing Systems, 2020.
  • Zhang and Wang (2019) Haichao Zhang and Jianyu Wang. Defense against adversarial attacks using feature scattering-based adversarial training. In Annual Conference on Neural Information Processing Systems, 2019.
  • Zhang et al. (2019) Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, and Michael I. Jordan. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning, 2019.
  • Zheng et al. (2020) Haizhong Zheng, Ziqi Zhang, Juncheng Gu, Honglak Lee, and Atul Prakash. Efficient adversarial training with transferable adversarial examples. In IEEE Conference on Computer Vision and Pattern Recognition, 2020.
  • Zheng et al. (2021) Yaowei Zheng, Richong Zhang, and Yongyi Mao. Regularizing neural networks via adversarial model perturbation. In IEEE Conference on Computer Vision and Pattern Recognition, 2021.

Appendix A Additional Empirical Support for R_{i}(x)=1

Figure 4: Comparing the options for R_{i}(x) with alternative selections of base classifiers. Left: ConvNeXT-T and TRADES WRN-34 under \ell_{\infty} PGD attack. Right: standard and AT ResNet18s under \ell_{2} PGD attack.
Table 2: Experiment settings for comparing the choices of R_{i}(x).

                     Attack Budget; PGD Steps                          g(\cdot) Architecture    h(\cdot) Architecture
Figure 1             \ell_{\infty}, \epsilon=\frac{8}{255}, 10 steps   Standard ResNet18        \ell_{\infty}-adversarially-trained ResNet18
Figure 4 (left)      \ell_{\infty}, \epsilon=\frac{8}{255}, 20 steps   Standard ConvNeXT-T      TRADES WideResNet-34
Figure 4 (right)     \ell_{2}, \epsilon=0.5, 20 steps                  Standard ResNet18        \ell_{2}-adversarially-trained ResNet18

Finally, we use additional empirical evidence (the two panels of Figure 4) to show that R_{i}(x)=1 is the appropriate choice for the mixed classifier and that the probabilities should be used for the mixture. While most experiments in this paper are based on the popular ResNet architecture, our method does not depend on any ResNet-specific properties. Therefore, for the experiment in the left panel of Figure 4, we select a more modern ConvNeXT-T model (Liu et al., 2022) pre-trained on ImageNet-1k as an alternative architecture for g(\cdot). We also use a robust model trained via TRADES in place of an adversarially trained network for h(\cdot) in the interest of diversity. Additionally, although most of our experiments are based on \ell_{\infty} attacks, the proposed method applies to all \ell_{p} attack budgets. In the right panel of Figure 4, we provide an example that considers an \ell_{2} attack. The experiment settings are summarized in Table 2.

Both panels of Figure 4 confirm that setting R_{i}(x) to the constant 1 achieves the best trade-off curve between clean and attacked accuracy, and that mixing the probabilities outperforms mixing the logits. This result aligns with the conclusions of Figure 1 and our theoretical analyses.

For all three cases listed in Table 2, the mixed classifier reduces the error rate of h(\cdot) on clean data by half while maintaining 80\% of h(\cdot)'s attacked accuracy. This observation suggests that the mixed classifier noticeably alleviates the accuracy-robustness trade-off. Additionally, our method is especially suitable for applications where the clean accuracy gap between g(\cdot) and h(\cdot) is large. On easier datasets such as MNIST and CIFAR-10, this gap has been greatly reduced by the latest advancements in constructing robust classifiers. However, on harder tasks such as CIFAR-100 and ImageNet-1k, this gap remains large, even for state-of-the-art methods. For these applications, standard classifiers often benefit much more from pre-training on larger datasets than robust models do.

Appendix B Proof of Theorem˜3.7

Theorem B.5 (Restated).

Suppose that Assumption 2 holds, and let x\in{\mathbb{R}}^{d} be arbitrary. Let y=\operatorname*{arg\,max}_{i}h_{i}(x) and y^{\prime}=\operatorname*{arg\,max}_{i\neq y}h_{i}(x). Then, if \alpha\in[\frac{1}{2},1], it holds that \operatorname*{arg\,max}_{i}h_{i}^{\alpha}(x+\delta)=y for all \delta\in{\mathbb{R}}^{d} such that

\lVert\delta\rVert_{2}\leq r_{\sigma}^{\alpha}(x)\coloneqq\frac{\sigma}{2}\Big(\Phi^{-1}\left(\alpha h_{y}(x)\right)-\Phi^{-1}\left(\alpha h_{y^{\prime}}(x)+1-\alpha\right)\Big).
Proof B.6.

First, note that since every \overline{h}_{i}(\cdot) is neither 0 almost everywhere nor 1 almost everywhere, it holds that h_{i}(x)\in(0,1) for all i and all x. Now, suppose that \alpha\in[\frac{1}{2},1], and let \delta\in{\mathbb{R}}^{d} be such that \lVert\delta\rVert_{2}\leq r_{\sigma}^{\alpha}(x). Let \mu_{\alpha}\coloneqq\frac{1-\alpha}{\alpha}. Define the function \tilde{h}\colon{\mathbb{R}}^{d}\to{\mathbb{R}}^{c} by

\tilde{h}_{y}(x)=\frac{\overline{h}_{y}(x)}{1+\mu_{\alpha}},\quad\tilde{h}_{i}(x)=\frac{\overline{h}_{i}(x)+\mu_{\alpha}}{1+\mu_{\alpha}}\ \text{ for all } i\neq y.

Furthermore, define \hat{h}\colon{\mathbb{R}}^{d}\to{\mathbb{R}}^{c} by \hat{h}(x)=\mathbb{E}_{\xi\sim{\mathcal{N}}(0,\sigma^{2}I_{d})}\big[\tilde{h}(x+\xi)\big].

Then, since \tilde{h}_{y}(x)=\frac{\overline{h}_{y}(x)}{1+\mu_{\alpha}}\in(0,\frac{1}{1+\mu_{\alpha}})\subseteq(0,1) and \tilde{h}_{i}(x)=\frac{\overline{h}_{i}(x)+\mu_{\alpha}}{1+\mu_{\alpha}}\in(\frac{\mu_{\alpha}}{1+\mu_{\alpha}},1)\subseteq(0,1) for all i\neq y, it must be the case that 0<\tilde{h}_{i}(x)<1 for all i and all x, and hence, for all i, the function x\mapsto\Phi^{-1}\big(\hat{h}_{i}(x)\big) is \ell_{2}-Lipschitz continuous with Lipschitz constant \frac{1}{\sigma} (see (Levine et al., 2019, Lemma 1), or Lemma 2 in (Salman et al., 2019) and the discussion thereafter). Therefore,

\left|\Phi^{-1}\big(\hat{h}_{i}(x+\delta)\big)-\Phi^{-1}\big(\hat{h}_{i}(x)\big)\right|\leq\frac{\lVert\delta\rVert_{2}}{\sigma}\leq\frac{r_{\sigma}^{\alpha}(x)}{\sigma} (6)

for all i. Applying Eq. 6 for i=y yields that

\Phi^{-1}\big(\hat{h}_{y}(x+\delta)\big)\geq\Phi^{-1}\big(\hat{h}_{y}(x)\big)-\frac{r_{\sigma}^{\alpha}(x)}{\sigma}. (7)

Since \Phi^{-1} monotonically increases and \hat{h}_{i}(x)\leq\hat{h}_{y^{\prime}}(x) for all i\neq y, applying Eq. 6 to i\neq y gives

\Phi^{-1}\big(\hat{h}_{i}(x+\delta)\big)\leq\Phi^{-1}\big(\hat{h}_{i}(x)\big)+\frac{r_{\sigma}^{\alpha}(x)}{\sigma}\leq\Phi^{-1}\big(\hat{h}_{y^{\prime}}(x)\big)+\frac{r_{\sigma}^{\alpha}(x)}{\sigma}. (8)

Subtracting Eq. 8 from Eq. 7 gives that

\Phi^{-1}\big(\hat{h}_{y}(x+\delta)\big)-\Phi^{-1}\big(\hat{h}_{i}(x+\delta)\big)\geq\Phi^{-1}\big(\hat{h}_{y}(x)\big)-\Phi^{-1}\big(\hat{h}_{y^{\prime}}(x)\big)-\frac{2r_{\sigma}^{\alpha}(x)}{\sigma}

for all i\neq y. By the definitions of \mu_{\alpha}, r_{\sigma}^{\alpha}(x), and \hat{h}(x), the right-hand side of this inequality equals zero. Since \Phi monotonically increases, we find that \hat{h}_{y}(x+\delta)\geq\hat{h}_{i}(x+\delta) for all i\neq y. Thus,

\frac{h_{y}(x+\delta)}{1+\mu_{\alpha}}=\mathbb{E}_{\xi\sim{\mathcal{N}}(0,\sigma^{2}I_{d})}\left[\frac{\overline{h}_{y}(x+\delta+\xi)}{1+\mu_{\alpha}}\right]=\hat{h}_{y}(x+\delta)\geq\hat{h}_{i}(x+\delta)=\mathbb{E}_{\xi\sim{\mathcal{N}}(0,\sigma^{2}I_{d})}\left[\frac{\overline{h}_{i}(x+\delta+\xi)+\mu_{\alpha}}{1+\mu_{\alpha}}\right]=\frac{h_{i}(x+\delta)+\mu_{\alpha}}{1+\mu_{\alpha}}.

Hence, h_{y}(x+\delta)\geq h_{i}(x+\delta)+\mu_{\alpha} for all i\neq y, so h(\cdot) is certifiably robust at x with margin \mu_{\alpha}=\frac{1-\alpha}{\alpha} and radius r_{\sigma}^{\alpha}(x). Therefore, by Lemma 3.2, it holds that \operatorname*{arg\,max}_{i}h_{i}^{\alpha}(x+\delta)=y for all \delta\in{\mathbb{R}}^{d} such that \lVert\delta\rVert_{2}\leq r_{\sigma}^{\alpha}(x), which concludes the proof.