
1 Institute of Information Security, Yokohama, Japan
  email: mgs214505@iisec.ac.jp, otsuka@ai.iisec.ac.jp
2 NTT TechnoCross Corporation, Yokohama, Japan

Robustness bounds on the successful adversarial examples in probabilistic models: Implications from Gaussian processes

Hiroaki Maeshima 1,2    Akira Otsuka 1 (ORCID: 0000-0001-6862-2576)
Abstract

An adversarial example (AE) is an attack on machine learning crafted by adding an imperceptible perturbation to the data that induces misclassification. In this paper, we investigate the upper bound of the probability of successful AEs based on Gaussian process (GP) classification, a probabilistic inference model. We prove a new upper bound on the probability of a successful AE attack that depends on the AE's perturbation norm, the kernel function used in the GP, and the distance between the closest pair of points with different labels in the training dataset. Surprisingly, the upper bound is determined regardless of the distribution of the sample dataset. We confirm our theoretical result experimentally on ImageNet. In addition, we show that changing the parameters of the kernel function changes the upper bound of the probability of successful AEs.

Keywords:
Gaussian Processes · Adversarial Examples · Robustness Bounds

1 Introduction

1.1 Adversarial Example

Today, machine learning is widely used in various fields, and concerns about its security have emerged. Among such attacks, the adversarial example (AE) has been widely investigated [goodfellow_explaining_2015]. An AE is a sample, slightly different from a natural sample, that is misclassified by a machine learning classifier [goodfellow_explaining_2015]. AEs are often crafted against neural networks by adding a small perturbation (an adversarial perturbation) to the original input, and various methods have been proposed for making adversarial perturbations [carlini_towards_2017, goodfellow_explaining_2015].

Since the AE is an attack method against machine learning, various defenses against AEs have been investigated, including detection methods [feinman_detecting_2017] and adversarial training [cai_curriculum_2018]. However, Carlini & Wagner [carlini_adversarial_2017] reviewed ten detection methods and concluded that no defense could survive a white-box AE attack, i.e., an attack that uses knowledge of the victim's architecture. Therefore, defenses against AEs require both concrete methods and a theoretical basis.

In the current study, we investigate the theoretical basis of AEs using the Gaussian process (GP), a stochastic process whose outputs are multivariate normally distributed. GPs can be used as probabilistic models for classification and regression [rasmussen_gaussian_2006], and previous research shows that GPs are equivalent to several types of classifiers, such as linear regression and the relevance vector machine; in particular, a GP is equivalent to a certain type of neural network [williams_computing_1996]. More specifically, the activation function of the neural network corresponds to the kernel function of the GP [williams_computing_1996].

1.2 Related Research

Until now, several studies have aimed to establish a theoretical basis for AEs. Shafahi, Huang, Studer, Feizi & Goldstein [shafahi_are_2019] suggested, using a geometrical method, that AEs are inevitable under certain conditions on the distribution of the training data. Blaas et al. [blaas_adversarial_2020] showed an algorithmic method to calculate upper and lower bounds on the value of GP classification over an arbitrary set. Cardelli, Kwiatkowska, Laurenti & Patane [cardelli_robustness_2019] showed an upper bound on the probability of robustness of the GP output over an arbitrary set. Wang, Jha & Chaudhuri [wang_analyzing_2018] showed a method of creating a robust dataset when using a k-nearest-neighbor classifier. Fawzi, Fawzi & Fawzi [fawzi_adversarial_2018] showed a fundamental upper bound on the robustness of classification, on which several studies are based: Mahloujifar, Diochnos & Mahmoody [mahloujifar_curse_2019] applied those results to broader types of data distributions, and Zhang, Chen, Gu & Evans [zhang_understanding_2020] showed intrinsic robustness bounds for classifiers with a conditional generative model. More specifically, Gilmer et al. [gilmer_relationship_2018] showed a fundamental bound relating to the classification error rate for neural networks, and Bhagoji, Cullina & Mittal [bhagoji_lower_2019] showed lower bounds on adversarial classification loss using optimal transport theory.

Some studies suggest certified defense methods. Randomized smoothing [cohen_certified_2019] is a method for composing "smoothed" classifiers that are derived from arbitrary classifiers. The prediction of an arbitrary input by smoothed classifiers has a safety radius, within which the classification results of the neighborhood inputs are the same as the central input.

Our approach differs from the related studies in the following respects. First, our approach focuses on the training data of the classifier and is easier to use in practice. By contrast, other studies focus on the geometry of the classifier [shafahi_are_2019] or on upper bounds that require cumbersome calculations over all training data [blaas_adversarial_2020, cardelli_robustness_2019]. Using our approach, we can analyze how robustness changes when the distribution of the classifier's training data changes, for example, when some training data are removed from the dataset or moved within the input space. Cohen, Rosenfeld & Kolter's randomized classifier [cohen_certified_2019] needs Monte Carlo sampling to calculate the variance used in the computation of the safety radius, whereas our approach obtains the predictive variance directly from the GP. Thus, our approach can be combined with Cohen, Rosenfeld & Kolter's randomized smoothing.

Second, thanks to GP, our approach applies to a wide range of classifiers. For example, Wang, Jha & Chaudhuri's approach [wang_analyzing_2018] also focuses on how robustness changes when the distribution of the classifier's training data changes, but their approach is based on the nearest-neighbor method. Our approach is based on GP, which applies to broader inference models. Smith, Grosse, Backes & Alvarez [smith_adversarial_2023] showed adversarial vulnerability bounds for binary GP classification. However, their proof is specific to the Gaussian kernel, while our result relies only on general characteristics of kernel functions, which include the Gaussian kernel as a special case. Therefore, the current study has broader applicability than [smith_adversarial_2023].

1.3 Contributions

Our research has the following contributions.

  • Our research shows that the success probability of an AE attack on a given dataset, in GP classification with a given kernel function, is upper-bounded by a function of the distance between the nearest points with different labels.

  • We confirmed our theoretical result through experiments on ImageNet with various kernel parameters in GP classification. The experimental results agree well with the theory.

  • We showed that changing the parameters of the kernel function changes the theoretical upper bound; thus, our result gives a theoretical basis for methods of enhancing robustness.

2 Theoretical Results

2.1 Problem Formulation and Assumption

Let $\mathcal{D}$ be a dataset of $N$ samples consisting of data points $x_{i}$ and object-class labels $y_{i}$. Each data point $x_{i}$ is a $D$-dimensional vector, and each label $y_{i}$ is binary. Namely,

\mathcal{D}=\{\{x_{1},y_{1}\},\{x_{2},y_{2}\},\ldots,\{x_{N},y_{N}\}\},\quad x_{i}\in\mathbb{R}^{D},\ y_{i}\in\{+1,-1\}. (1)

For further simplicity, let

\mathcal{D}_{+}=\{\{x_{i},y_{i}\}\in\mathcal{D}\mid y_{i}=+1\},\quad \mathcal{D}_{-}=\{\{x_{i},y_{i}\}\in\mathcal{D}\mid y_{i}=-1\}. (2)

We consider binary classification of the dataset using GP regression with a kernel function $k(x,x')$ [rasmussen_gaussian_2006]. We define a GP regressor $\mathcal{R}(x):\mathbb{R}^{D}\rightarrow\mathcal{N}(y;\mu_{\mathcal{R}},\sigma^{2}_{\mathcal{R}})$; that is, $\mathcal{R}(x)$ outputs a Gaussian distribution $\mathcal{N}(y;\mu_{\mathcal{R}},\sigma^{2}_{\mathcal{R}})$. We then define a probabilistic GP classifier $\mathcal{C}(\cdot):\mathbb{R}^{D}\rightarrow\{+1,-1\}$, constructed as follows:

\mathcal{C}(x):=\begin{cases}+1&\text{if }p\geq 0,\ p\sim\mathcal{N}(x)\\-1&\text{otherwise}\end{cases} (3)

where $p$ is a value sampled from the distribution $\mathcal{N}(x)$. Intuitively, $\mathcal{C}(x)$ is a probabilistic binary classifier whose output is $+1$ when a sample from the output distribution of $\mathcal{R}(x)$ is non-negative.
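As a concrete illustration, the regressor $\mathcal{R}$ and the probabilistic classifier $\mathcal{C}$ can be sketched in a few lines of NumPy. This is a minimal sketch under our own assumptions (a Gaussian RBF kernel and noise-free GP regression); the function names are illustrative and not from the paper.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Translation-invariant Gaussian kernel: k(x, x') = exp(-gamma * ||x - x'||^2)."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def gp_posterior(x_star, X, y, kernel=rbf_kernel):
    """Noise-free GP regression: predictive mean and variance at x_star."""
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    k_star = np.array([kernel(xi, x_star) for xi in X])
    mu = k_star @ np.linalg.solve(K, y)
    var = kernel(x_star, x_star) - k_star @ np.linalg.solve(K, k_star)
    return mu, max(var, 0.0)  # clamp tiny negative values from round-off

def gp_classify(x_star, X, y, rng, kernel=rbf_kernel):
    """Probabilistic classifier C(x*): +1 iff a sample p ~ N(mu, var) is non-negative."""
    mu, var = gp_posterior(x_star, X, y, kernel)
    p = rng.normal(mu, np.sqrt(var))
    return 1 if p >= 0 else -1
```

Repeated calls to `gp_classify` at the same $x_{*}$ return $+1$ with probability $\Phi(\mu_{\mathcal{R}}/\sigma_{\mathcal{R}})$, which is the quantity the theorem below bounds from the other side.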

Let $s$ be the maximum value of the kernel function between two input points of $\mathcal{D}$ with different labels. Without loss of generality, we define $s$ as follows: for a fixed $x_{+}\in\mathcal{D}_{+}$, let $x_{-}\in\mathcal{D}_{-}$ be the point that maximizes $k(x_{+},x_{-})$. Then $s$ can be written as a function of $x_{+}$, namely

s=\max_{x_{-}\in\mathcal{D}_{-}}k(x_{+},x_{-}).

Take $x_{*}\in\mathbb{R}^{D}$ such that $k(x_{+},x_{*})=r$, where $r$ is a constant.

In this paper, we introduce the following assumption.

Assumption 1 ($\epsilon$-proximity).

Let $\mu_{\mathcal{R}}$ be the predictive mean at $x_{*}$ of the GP regression trained with $\mathcal{D}$, and let $\mu_{\mathcal{R}2}$ be the corresponding value of the GP regression trained with $\{\{x_{+},+1\},\{x_{-},-1\}\}$. Then, for all $x_{*}\in\mathbb{R}^{D}$ such that $k(x_{+},x_{*})=r$,

|\mu_{\mathcal{R}}-\mu_{\mathcal{R}2}|\leq\epsilon\quad\text{and}\quad\mu_{\mathcal{R}}>0 (4)

holds for some $\epsilon\geq 0$.

Intuitively, the first part of Assumption 1 states that the predictive mean of the GP regressor with all training data is close to that with only two training data points, implying that the effect of training data other than $x_{+}$ and $x_{-}$ is small.

2.2 Maximum Success Probability of AE

Theorem 1.

Consider $\mathcal{R}(x)$ with kernel function $k(x,x')$ trained with $\mathcal{D}$. Let $x_{+}$ be arbitrarily chosen from $\mathcal{D}_{+}$ and let $x_{-}$ be the data point of $\mathcal{D}_{-}$ that maximizes $k(x_{+},x_{-})$ for the fixed $x_{+}$. Then, for any $x_{*}\in\mathbb{R}^{D}$ such that $k(x_{+},x_{*})=r$ and $k(x_{+},x_{*})>k(x_{-},x_{*})+\epsilon(k(x_{+},x_{+})-k(x_{+},x_{-}))$, the probability that $\mathcal{C}(x_{*})=-1$ is upper-bounded by the Maximum Success Probability (MSP) function $\phi$ as

\Pr(\mathcal{C}(x_{*})=-1)<\Phi\left(-\frac{\mu}{\sigma}\right)<\frac{1}{2}\exp\left(-\frac{\mu^{2}}{2\sigma^{2}}\right)=\phi(r|\mathcal{D}), (5)

where the following notations are used:

\Phi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}\exp\left(-\frac{u^{2}}{2}\right)du, (6)

\mu=\frac{k(x_{+},x_{*max})-k(x_{-},x_{*max})}{k(x_{+},x_{+})-k(x_{+},x_{-})}-\epsilon, (7)

\sigma^{2}=k(x_{+},x_{+})-\frac{k(x_{+},x_{+})\left(k(x_{+},x_{*max})^{2}+k(x_{-},x_{*max})^{2}\right)-2k(x_{+},x_{-})k(x_{+},x_{*max})k(x_{-},x_{*max})}{k(x_{+},x_{+})^{2}-k(x_{+},x_{-})^{2}}, (8)

x_{*max}=\mathop{\mathrm{arg\,max}}_{x_{*}}\left[k(x_{-},x_{*})\right]. (9)

The above bound holds for any translation-invariant kernel function $k(x,x')$, that is, any kernel function for which $k(x,x)$ is constant for all $x\in\mathbb{R}^{D}$.

The schematic diagram is shown in Fig. 1.

Figure 1: Illustration of the conditions in Theorem 1, assuming the input space is $\mathbb{R}^{2}$. $x_{+}$ and $x_{-}$ are input data points from $\mathcal{D}_{+}$ and $\mathcal{D}_{-}$, respectively. Note that $r=k(x_{+},x_{*})$ and $s=k(x_{+},x_{-})$.

Proof sketch. First, we prove that the predictive variance of GP regression decreases as training points are added (Lemma 1). Next, we prove that $\operatorname{Erf}(0;\mu,\sigma)=\int^{0}_{-\infty}\mathcal{N}(x;\mu,\sigma^{2})dx$ increases monotonically with respect to $\sigma^{2}$ if $\mu>0$ (Lemma 2). Then we prove that the probability that $x_{*}$ is classified as $-1$ increases monotonically with respect to $k(x_{-},x_{*})$ (Lemma 3), and the theorem follows. ∎


2.3 Proof of Theorem 1

In the proof, we use the notation below.

We construct a GP regressor $\mathcal{R}(x)$ with kernel function $k(x,x')$. The mean $\mu_{\mathcal{R}}$ and the variance $\sigma_{\mathcal{R}}^{2}$ of the GP regressor at a new data point $x_{*}$ are calculated as

\mu_{\mathcal{R}}=k(x_{*})^{T}K^{-1}\mathbf{y} (10)

\sigma_{\mathcal{R}}^{2}=k(x_{*},x_{*})-k(x_{*})^{T}K^{-1}k(x_{*}) (11)

where

k(x_{*})=\left(k(x_{+},x_{*}),\cdots,k(x_{n},x_{*})\right)^{T}, (12)

K=\begin{pmatrix}k(x_{+},x_{+})&\cdots&k(x_{+},x_{n})\\\vdots&\ddots&\vdots\\k(x_{n},x_{+})&\cdots&k(x_{n},x_{n})\end{pmatrix}, (13)

\mathbf{y}=\left(y_{1},\cdots,y_{n}\right). (14)

Next, we construct a GP classifier $\mathcal{C}(x)$ whose prediction is $y_{*}=+1$ if a sample from $\mathcal{R}(x)$ is non-negative, and $y_{*}=-1$ otherwise.

For further calculation, set the notation

\theta_{1}=k(x_{*},x_{*}),\quad\theta_{r1}=k(x_{+},x_{*}),\quad\theta_{r2}=k(x_{-},x_{*}),\quad\theta_{s}=k(x_{+},x_{-}). (15)

The following lemma is a simpler version of the statement Exercise 4 of Chapter 2 from [rasmussen_gaussian_2006]. The proof is in the supplemental material.

Lemma 1.

Let $\mathrm{Var}_{n}(x_{*})$ be the predictive variance of a GP regression at $x_{*}$ given a training dataset of size $n$, and let $\mathrm{Var}_{n-1}(x_{*})$ be the corresponding variance given only the first $n-1$ points of the training dataset. Then $\mathrm{Var}_{n}(x_{*})<\mathrm{Var}_{n-1}(x_{*})$ holds.

Proof.

Let

K_{n}=\begin{pmatrix}k(x_{+},x_{+})&\cdots&k(x_{+},x_{n})\\\vdots&\ddots&\vdots\\k(x_{+},x_{n})&\cdots&k(x_{n},x_{n})\end{pmatrix}, (16)

k_{*n}=\left(k(x_{+},x_{*}),\cdots,k(x_{n},x_{*})\right)^{T}, (17)

\mathrm{Var}_{n-1}(x_{*})=k(x_{*},x_{*})-k_{*n-1}^{T}K_{n-1}^{-1}k_{*n-1}, (18)

\mathrm{Var}_{n}(x_{*})=k(x_{*},x_{*})-k_{*n}^{T}K_{n}^{-1}k_{*n}. (19)

$K_{n}$ can be written as

K_{n}=\begin{pmatrix}K_{n-1}&B\\B^{T}&k(x_{n},x_{n})\end{pmatrix}, (20)

where $B=\left(k(x_{+},x_{n}),\cdots,k(x_{n-1},x_{n})\right)^{T}$.

Therefore, $K_{n}^{-1}$ can be decomposed as

K_{n}^{-1}=\begin{pmatrix}\tilde{A}&\tilde{B}\\\tilde{B}^{T}&\tilde{D}\end{pmatrix}, (21)

where

\tilde{A}=K_{n-1}^{-1}+K_{n-1}^{-1}BMB^{T}K_{n-1}^{-1},\quad\tilde{B}=-K_{n-1}^{-1}BM,\quad\tilde{D}=M,\quad M=\left(k(x_{n},x_{n})-B^{T}K_{n-1}^{-1}B\right)^{-1}. (22)

Now

k_{*n}^{T}K_{n}^{-1}k_{*n}=k_{*n-1}^{T}\tilde{A}k_{*n-1}+2k(x_{n},x_{*})\tilde{B}^{T}k_{*n-1}+k(x_{n},x_{*})^{2}\tilde{D}. (23)

Expanding $\tilde{A}$ results in

\mathrm{Var}_{n}(x_{*})=k(x_{*},x_{*})-\left(k_{*n-1}^{T}K_{n-1}^{-1}k_{*n-1}+k_{*n-1}^{T}K_{n-1}^{-1}BMB^{T}K_{n-1}^{-1}k_{*n-1}+2k(x_{n},x_{*})\tilde{B}^{T}k_{*n-1}+k(x_{n},x_{*})^{2}\tilde{D}\right)
=\mathrm{Var}_{n-1}(x_{*})-\left(k_{*n-1}^{T}K_{n-1}^{-1}BMB^{T}K_{n-1}^{-1}k_{*n-1}+2k(x_{n},x_{*})\tilde{B}^{T}k_{*n-1}+k(x_{n},x_{*})^{2}\tilde{D}\right). (24)

Now, proving the lemma is equivalent to proving

k_{*n-1}^{T}K_{n-1}^{-1}BMB^{T}K_{n-1}^{-1}k_{*n-1}+2k(x_{n},x_{*})\tilde{B}^{T}k_{*n-1}+k(x_{n},x_{*})^{2}\tilde{D}>0. (25)

We now prove Eq. (25). Substituting $\tilde{B}=-K_{n-1}^{-1}BM$ and $\tilde{D}=M$, the left-hand side can be rewritten as

M\left(k_{*n-1}^{T}K_{n-1}^{-1}BB^{T}K_{n-1}^{-1}k_{*n-1}-2k(x_{n},x_{*})B^{T}K_{n-1}^{-1}k_{*n-1}+k(x_{n},x_{*})^{2}\right), (26)

since $M$ is a $1\times 1$ matrix and can therefore be regarded as a scalar.

Now $B^{T}K_{n-1}^{-1}k_{*n-1}$ is a scalar. Consider $\left(B^{T}K_{n-1}^{-1}k_{*n-1}-k(x_{n},x_{*})\right)^{2}$:

\left(B^{T}K_{n-1}^{-1}k_{*n-1}-k(x_{n},x_{*})\right)^{2}=k_{*n-1}^{T}K_{n-1}^{-1}BB^{T}K_{n-1}^{-1}k_{*n-1}-2k(x_{n},x_{*})B^{T}K_{n-1}^{-1}k_{*n-1}+k(x_{n},x_{*})^{2}. (27)

Now consider $M=\left(k(x_{n},x_{n})-B^{T}K_{n-1}^{-1}B\right)^{-1}$. Its inverse is the predictive variance of the GP regression at $x_{n}$ with the training dataset $\{x_{+},x_{-},\cdots,x_{n-1}\}$, which is positive. Therefore, $M>0$.

Plugging $M>0$ and $\left(B^{T}K_{n-1}^{-1}k_{*n-1}-k(x_{n},x_{*})\right)^{2}>0$ into Eq. (26),

k_{*n-1}^{T}K_{n-1}^{-1}BMB^{T}K_{n-1}^{-1}k_{*n-1}+2k(x_{n},x_{*})\tilde{B}^{T}k_{*n-1}+k(x_{n},x_{*})^{2}\tilde{D}>0 (28)

holds. Hence the lemma is proved. ∎

Thus, if the two points $x_{+},x_{-}$ in the training dataset of the GP regression are fixed, the maximum predictive variance is attained when the training dataset consists of these two points only.
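Lemma 1 is easy to sanity-check numerically. The sketch below (our own illustration, assuming an RBF kernel and NumPy; all names are ours) computes the noise-free GP predictive variance of Eq. (11) as training points are added one at a time:

```python
import numpy as np

def k(a, b, gamma=0.7):
    # Gaussian (RBF) kernel: translation-invariant, so k(x, x) = 1
    return np.exp(-gamma * np.sum((a - b) ** 2))

def pred_var(x_star, X):
    # Noise-free GP predictive variance at x_star (Eq. 11)
    K = np.array([[k(xi, xj) for xj in X] for xi in X])
    ks = np.array([k(xi, x_star) for xi in X])
    return k(x_star, x_star) - ks @ np.linalg.solve(K, ks)

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 2))   # 6 random training inputs
x_star = rng.normal(size=2)   # test point
variances = [pred_var(x_star, X[:n]) for n in range(1, 7)]
```

Each added point reduces the variance, matching $\mathrm{Var}_{n}(x_{*})<\mathrm{Var}_{n-1}(x_{*})$; the reduction can be numerically tiny when the new point is far from both the others and $x_{*}$.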

Next, we calculate the predictive mean under the condition that only $x_{+}$ and $x_{-}$ are used as the training dataset. We write this predictive mean as $\mu_{\mathcal{R}2}$.

Then $k(x_{*})$ and $\mathbf{y}$ are written as $k(x_{*})=(\theta_{r1},\theta_{r2})^{T}$ and $\mathbf{y}=(+1,-1)$. Plugging them into Eq. (10), the predictive mean is

\mu_{\mathcal{R}2}=k(x_{*})^{T}K^{-1}\mathbf{y}=\frac{\theta_{r1}-\theta_{r2}}{\theta_{1}-\theta_{s}}. (29)

The category boundary is the set of points where the predictive mean is 0; therefore, $\frac{\theta_{r1}-\theta_{r2}}{\theta_{1}-\theta_{s}}=0$ holds on the boundary, i.e., the boundary is located where $\theta_{r1}=\theta_{r2}$. This implies that the set of points satisfying $k(x_{+},x_{*})=k(x_{-},x_{*})$ is the category boundary, and Eq. (29) implies that the mean is positive where $k(x_{+},x_{*})>k(x_{-},x_{*})$.

Similarly, plugging the values into Eq. (11), the predictive variance with only $x_{+},x_{-}$ as the training dataset is

\sigma_{\mathcal{R}2}^{2}=\theta_{1}-\frac{\theta_{1}(\theta_{r1}^{2}+\theta_{r2}^{2})-2\theta_{s}\theta_{r1}\theta_{r2}}{\theta_{1}^{2}-\theta_{s}^{2}}. (30)

By Lemma 1, Eq. (30) gives the maximum predictive variance over all GP regressions whose training dataset includes $x_{+},x_{-}$.
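The two-point closed forms of Eqs. (29)-(30) can be checked against the generic posterior formulas of Eqs. (10)-(11). A minimal numerical sketch (our own illustration, assuming an RBF kernel; the concrete points are arbitrary):

```python
import numpy as np

def k(a, b, gamma=0.5):
    # Gaussian (RBF) kernel, so theta1 = k(x, x) = 1
    return np.exp(-gamma * np.sum((np.asarray(a) - np.asarray(b)) ** 2))

x_plus, x_minus = np.array([0.0, 0.0]), np.array([2.0, 0.0])
x_star = np.array([0.5, 0.3])

theta1 = k(x_plus, x_plus)
theta_s = k(x_plus, x_minus)
theta_r1, theta_r2 = k(x_plus, x_star), k(x_minus, x_star)

# Closed forms (Eqs. 29-30) for the two-point training set {(x+, +1), (x-, -1)}
mu2 = (theta_r1 - theta_r2) / (theta1 - theta_s)
var2 = theta1 - (theta1 * (theta_r1**2 + theta_r2**2)
                 - 2 * theta_s * theta_r1 * theta_r2) / (theta1**2 - theta_s**2)

# Generic GP posterior (Eqs. 10-11) with the same two training points
K = np.array([[theta1, theta_s], [theta_s, theta1]])
k_vec = np.array([theta_r1, theta_r2])
y = np.array([1.0, -1.0])
mu_full = k_vec @ np.linalg.solve(K, y)
var_full = theta1 - k_vec @ np.linalg.solve(K, k_vec)
```

The two computations agree to machine precision, which is simply the $2\times 2$ matrix inversion carried out symbolically in Eqs. (29)-(30).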

The probability that $\mathcal{C}(x_{*})=-1$ is the probability that a sample from the output distribution is below 0. Therefore, it can be calculated using the cumulative distribution function of the Gaussian distribution:

\operatorname{Erf}(0;\mu,\sigma)=\int^{0}_{-\infty}\mathcal{N}(x;\mu,\sigma^{2})dx (31)

where

\mathcal{N}(x;\mu,\sigma^{2})=\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right). (32)
Lemma 2.

$\operatorname{Erf}(0;\mu,\sigma)$ is monotonically increasing with respect to $\sigma^{2}$, under the condition $\mu>0$.

Proof.

Let

\operatorname{Erf}(0;\mu,\sigma)=\int^{0}_{-\infty}\mathcal{N}(x;\mu,\sigma^{2})dx (33)

and put

g(\sigma)=\frac{\partial}{\partial\sigma^{2}}\int^{0}_{-\infty}\mathcal{N}(x;\mu,\sigma^{2})dx. (34)

Equation (34) can be rewritten as

g(\sigma)=\int^{0}_{-\infty}\frac{\partial}{\partial\sigma^{2}}\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right)dx, (35)

and calculating the partial derivative,

g(\sigma)=\frac{1}{2}(2\pi)^{-\frac{1}{2}}\sigma^{-3}\int^{0}_{-\infty}\exp\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right)\left(\frac{(x-\mu)^{2}}{\sigma^{2}}-1\right)dx. (36)

We now show that $g(\sigma)>0$ under the condition $\mu>0$.

(i) The integral when $\mu=0$.

Let $\mu=0$. Then $g(\sigma)$ can be written as

g(\sigma)=\frac{1}{2}(2\pi)^{-\frac{1}{2}}\sigma^{-3}\int^{0}_{-\infty}\exp\left(-\frac{x^{2}}{2\sigma^{2}}\right)\left(\frac{x^{2}}{\sigma^{2}}-1\right)dx. (37)

We calculate the integral above using the following Gaussian integrals:

\int_{-\infty}^{0}\exp\left(-ax^{2}\right)dx=\frac{1}{2}\sqrt{\frac{\pi}{a}},\quad\int_{-\infty}^{0}x^{2}\exp\left(-ax^{2}\right)dx=\frac{1}{4a}\sqrt{\frac{\pi}{a}}. (38)

Now

\int^{0}_{-\infty}\exp\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right)\left(\frac{(x-\mu)^{2}}{\sigma^{2}}-1\right)dx (39)

can be written as below under the condition $\mu=0$:

\frac{1}{\sigma^{2}}\int^{0}_{-\infty}x^{2}\exp\left(-\frac{x^{2}}{2\sigma^{2}}\right)dx-\int^{0}_{-\infty}\exp\left(-\frac{x^{2}}{2\sigma^{2}}\right)dx. (40)

Applying the Gaussian integrals of Eq. (38) with $a=\frac{1}{2\sigma^{2}}$, this evaluates to

\frac{1}{\sigma^{2}}\int^{0}_{-\infty}x^{2}\exp\left(-\frac{x^{2}}{2\sigma^{2}}\right)dx-\int^{0}_{-\infty}\exp\left(-\frac{x^{2}}{2\sigma^{2}}\right)dx=\frac{1}{\sigma^{2}}\cdot\frac{\sigma^{2}}{2}\sqrt{2\pi\sigma^{2}}-\frac{1}{2}\sqrt{2\pi\sigma^{2}}=0. (41)
(ii) The integral when $\mu>0$.

Let $\mu>0$ and $\sigma>0$. The integral in Eq. (36) can be written as below, using the substitution $y=x-\mu$:

\int^{-\mu}_{-\infty}\exp\left(-\frac{y^{2}}{2\sigma^{2}}\right)\left(\frac{y^{2}}{\sigma^{2}}-1\right)\frac{dx}{dy}dy. (42)

Since $\frac{dx}{dy}=1$, this is calculated as

\int^{-\mu}_{-\infty}\exp\left(-\frac{y^{2}}{2\sigma^{2}}\right)\left(\frac{y^{2}}{\sigma^{2}}-1\right)dy
=\int^{-\mu}_{-\infty}\frac{y^{2}}{\sigma^{2}}\exp\left(-\frac{y^{2}}{2\sigma^{2}}\right)dy-\int^{-\mu}_{-\infty}\exp\left(-\frac{y^{2}}{2\sigma^{2}}\right)dy
=\int^{0}_{-\infty}\frac{y^{2}}{\sigma^{2}}\exp\left(-\frac{y^{2}}{2\sigma^{2}}\right)dy-\int^{0}_{-\infty}\exp\left(-\frac{y^{2}}{2\sigma^{2}}\right)dy-\left(\int^{0}_{-\mu}\frac{y^{2}}{\sigma^{2}}\exp\left(-\frac{y^{2}}{2\sigma^{2}}\right)dy-\int^{0}_{-\mu}\exp\left(-\frac{y^{2}}{2\sigma^{2}}\right)dy\right)
=\int^{0}_{-\mu}\exp\left(-\frac{y^{2}}{2\sigma^{2}}\right)dy-\int^{0}_{-\mu}\frac{y^{2}}{\sigma^{2}}\exp\left(-\frac{y^{2}}{2\sigma^{2}}\right)dy. (43)

This is evaluated under two conditions: a) $\mu<\sigma$ and b) $\mu>\sigma$.

a) $\mu<\sigma$

Equation (43) can be written as

\int^{0}_{-\mu}\left(1-\frac{y^{2}}{\sigma^{2}}\right)\exp\left(-\frac{y^{2}}{2\sigma^{2}}\right)dy. (44)

In the interval $-\mu<y<0$, $\left(1-\frac{y^{2}}{\sigma^{2}}\right)>0$; therefore the integrand is positive over the whole interval of integration, and the value of Eq. (44) is greater than 0 regardless of $\sigma$.

b) $\mu>\sigma$

From Eq. (41),

\int_{-\infty}^{0}\exp\left(-\frac{y^{2}}{2\sigma^{2}}\right)\left(1-\frac{y^{2}}{\sigma^{2}}\right)dy=0, (45)

therefore

\int_{-\infty}^{-\mu}\exp\left(-\frac{y^{2}}{2\sigma^{2}}\right)\left(1-\frac{y^{2}}{\sigma^{2}}\right)dy+\int_{-\mu}^{0}\exp\left(-\frac{y^{2}}{2\sigma^{2}}\right)\left(1-\frac{y^{2}}{\sigma^{2}}\right)dy=0. (46)

In the interval $y<-\mu(<-\sigma)$, the integrand is negative, thus

\int_{-\infty}^{-\mu}\exp\left(-\frac{y^{2}}{2\sigma^{2}}\right)\left(1-\frac{y^{2}}{\sigma^{2}}\right)dy<0. (47)

Equation (46) can be rewritten as

\int_{-\infty}^{-\mu}\exp\left(-\frac{y^{2}}{2\sigma^{2}}\right)\left(1-\frac{y^{2}}{\sigma^{2}}\right)dy=-\int_{-\mu}^{0}\exp\left(-\frac{y^{2}}{2\sigma^{2}}\right)\left(1-\frac{y^{2}}{\sigma^{2}}\right)dy, (48)

thus, combining Eqs. (47) and (48),

\int_{-\mu}^{0}\exp\left(-\frac{y^{2}}{2\sigma^{2}}\right)\left(1-\frac{y^{2}}{\sigma^{2}}\right)dy>0 (49)

holds.

Therefore, $g(\sigma)>0$ regardless of $\sigma$ if $\mu>0$, and thus $\operatorname{Erf}(0;\mu,\sigma)$ is monotonically increasing with respect to $\sigma^{2}$. ∎

Lemma 2 implies that the maximum $\sigma^{2}$ gives the maximum value of $\operatorname{Erf}(0;\mu,\sigma)$. Lemma 1 implies that the maximum value of $\sigma^{2}$ is given by Eq. (30); thus the maximum value of $\operatorname{Erf}(0;\mu,\sigma)$ can be calculated when $x_{*}$ is fixed.
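Lemma 2 can also be illustrated numerically with the standard normal CDF, since $\operatorname{Erf}(0;\mu,\sigma)=\Phi(-\mu/\sigma)$. A small sketch (our own illustration; `erf0` is our name for $\operatorname{Erf}(0;\mu,\sigma)$, and the grid of variances is arbitrary):

```python
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF, built from the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def erf0(mu, sigma):
    """Erf(0; mu, sigma) = Pr[N(mu, sigma^2) < 0] = Phi(-mu / sigma)."""
    return Phi(-mu / sigma)

mu = 0.8  # any mu > 0
sigma2_grid = [0.25, 0.5, 1.0, 2.0, 4.0]
vals = [erf0(mu, sqrt(s2)) for s2 in sigma2_grid]
```

For fixed $\mu>0$ the values increase with $\sigma^{2}$ and stay below $\frac{1}{2}$, as the lemma predicts.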

When $x_{*}$ satisfies the condition $\theta_{r1}=r$, then $\theta_{1},\theta_{s},\theta_{r1}$ are constants. Thus $\operatorname{Erf}(0;\mu,\sigma)$ can be regarded as a function of $\theta_{r2}$.

Lemma 3.

$\operatorname{Erf}(0;\mu,\sigma)$ is monotonically increasing with respect to $\theta_{r2}$.

Proof.

Let $\Phi(x)$ be the cumulative distribution function of $\mathcal{N}(x;0,1)$:

\Phi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}\exp\left(-\frac{u^{2}}{2}\right)du. (50)

Then $\operatorname{Erf}(0;\mu,\sigma)$ can be written as

\operatorname{Erf}(0;\mu,\sigma)=\Phi\left(-\frac{\mu}{\sigma}\right). (51)

$\Phi(x)$ is the cumulative distribution function of the Gaussian distribution, thus it is monotonically increasing with respect to $x$. Now

\mu=\frac{\theta_{r1}-\theta_{r2}}{\theta_{1}-\theta_{s}}. (52)

From the condition, $\theta_{r1}>\theta_{r2}$; therefore $\mu$ is positive. $\sigma$ is positive by definition, thus $-\frac{\mu}{\sigma}$ is negative.

Set

g(\theta_{r2})=\frac{\mu^{2}}{\sigma^{2}}=\frac{(\theta_{r1}-\theta_{r2})^{2}}{(\theta_{1}-\theta_{s})^{2}}\cdot\frac{1}{\sigma^{2}}. (53)

If $g(\theta_{r2})$ is decreasing with respect to $\theta_{r2}$, then $\frac{\mu}{\sigma}$ is decreasing with respect to $\theta_{r2}$, and thus $\Phi\left(-\frac{\mu}{\sigma}\right)$ is increasing with respect to $\theta_{r2}$.

Below we show that $g(\theta_{r2})$ is decreasing with respect to $\theta_{r2}$. The derivative of $g(\theta_{r2})$ can be written as

\frac{\partial}{\partial\theta_{r2}}g(\theta_{r2})=-\frac{2(\theta_{r1}-\theta_{r2})\left\{\theta_{1}(\theta_{1}+\theta_{s})-\theta_{r1}(\theta_{r1}+\theta_{r2})\right\}}{\left\{-\theta_{1}\theta_{r2}^{2}+2\theta_{s}\theta_{r1}\theta_{r2}+\theta_{1}(\theta_{1}^{2}-\theta_{s}^{2}-\theta_{r1}^{2})\right\}^{2}}. (54)

The inequality below holds for any kernel function because the determinant of the Gram matrix is positive:

k(x_{+},x_{+})k(x_{-},x_{-})>k(x_{+},x_{-})^{2}. (55)

Now $k(x_{+},x_{+})=k(x_{-},x_{-})$ by the translation-invariance condition; thus $k(x_{+},x_{+})>k(x_{+},x_{-})\iff\theta_{1}>\theta_{s}$ holds.

From the condition of the theorem, $\theta_{r1}-\theta_{r2}>0$ holds.

Thus,

\theta_{1}(\theta_{1}+\theta_{s})>\theta_{s}(\theta_{s}+\theta_{s})=2\theta_{s}^{2} (56)

and

\theta_{r1}(\theta_{r1}+\theta_{r2})<\theta_{r1}(\theta_{r1}+\theta_{r1})=2\theta_{r1}^{2}. (57)

From the condition of the theorem,

\theta_{r1}=r<\theta_{s} (58)

holds, therefore

\theta_{1}(\theta_{1}+\theta_{s})>2\theta_{s}^{2}>2\theta_{r1}^{2}>\theta_{r1}(\theta_{r1}+\theta_{r2}) (59)

and

\theta_{1}(\theta_{1}+\theta_{s})-\theta_{r1}(\theta_{r1}+\theta_{r2})>0 (60)

holds.

Since $\theta_{r1}-\theta_{r2}>0$ and Eq. (60) holds, the right-hand side of Eq. (54) is negative; thus $g(\theta_{r2})$ is decreasing and $\operatorname{Erf}(0;\mu,\sigma)$ is monotonically increasing with respect to $\theta_{r2}$. ∎

From the discussion above, $x_{*max}=\mathop{\mathrm{arg\,max}}_{x_{*}}\left[k(x_{-},x_{*})\right]$ gives the maximum value of $\operatorname{Erf}(0;\mu,\sigma)$.

Moreover, $\operatorname{Erf}(0;\mu,\sigma)=\Phi\left(-\frac{\mu}{\sigma}\right)$ is monotonically decreasing with respect to $\mu$; thus for any $\epsilon>0$, $\operatorname{Erf}(0;\mu,\sigma)<\operatorname{Erf}(0;\mu-\epsilon,\sigma)$ holds.

Now we prove Theorem 1. From the discussion above, the probability is upper-bounded as

\Pr(\mathcal{C}(x_{*})=-1)<\int^{0}_{-\infty}\mathcal{N}\left(x;\mu-\epsilon,\sigma^{2}\right)dx. (61)

Applying the inequality below [shafahi_are_2019],

\forall\alpha<0;\ \Phi(\alpha)\leq\frac{1}{2}\exp\left(-\alpha^{2}/2\right), (62)

to the right-hand side of Eq. (61), the inequality

\Pr(\mathcal{C}(x_{*})=-1)<\phi(r|\mathcal{D})=\frac{1}{2}\exp\left(-\frac{\mu^{2}}{2\sigma^{2}}\right) (63)

holds.

$\phi(r|\mathcal{D})$ is the maximum success probability (MSP) function, where

\mu=\frac{\theta_{r1}-\theta_{r2}}{\theta_{1}-\theta_{s}}-\epsilon,\quad\sigma^{2}=\theta_{1}-\frac{\theta_{1}(\theta_{r1}^{2}+\theta_{r2}^{2})-2\theta_{s}\theta_{r1}\theta_{r2}}{\theta_{1}^{2}-\theta_{s}^{2}}, (64)

\theta_{1}=k(x_{+},x_{+}),\quad\theta_{r1}=k(x_{+},x_{*max}),\quad\theta_{r2}=k(x_{-},x_{*max}),\quad\theta_{s}=k(x_{+},x_{-}),\quad x_{*max}=\mathop{\mathrm{arg\,max}}_{x_{*}}\left[k(x_{-},x_{*})\right].
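Putting the pieces together, the MSP function and the bound chain of Eq. (5) can be evaluated numerically. The sketch below is our own illustration; the $\theta$ values are arbitrary choices satisfying $\theta_{1}>\theta_{s}>\theta_{r1}>\theta_{r2}>0$, as required by the conditions used in Lemma 3, and `msp` is our name for the computation:

```python
from math import erf, exp, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def msp(theta1, theta_s, theta_r1, theta_r2, eps=0.0):
    """MSP bound (1/2) exp(-mu^2 / (2 sigma^2)), with mu, sigma^2 as in Eq. (64)."""
    mu = (theta_r1 - theta_r2) / (theta1 - theta_s) - eps
    var = theta1 - (theta1 * (theta_r1**2 + theta_r2**2)
                    - 2 * theta_s * theta_r1 * theta_r2) / (theta1**2 - theta_s**2)
    return 0.5 * exp(-mu * mu / (2.0 * var)), mu, var

# Illustrative values: theta1 > theta_s > theta_r1 > theta_r2 > 0
phi, mu, var = msp(theta1=1.0, theta_s=0.6, theta_r1=0.5, theta_r2=0.2)
exact = Phi(-mu / sqrt(var))  # two-point Pr(C(x*) = -1), the middle term of Eq. (5)
```

For these values the attack probability `exact` is strictly below the MSP bound `phi`, and both are below $\frac{1}{2}$, matching the chain of inequalities in Eq. (5).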

2.4 Remarks on Theorem 1

  • $x_{+}$ and $x_{-}$ indicate the closest points of the two classes in the training dataset, since the kernel function can be regarded as a function whose output is the similarity between data points. Practically, $x_{*}$ can be regarded as an AE whose original data point is $x_{+}$ and whose adversarial perturbation is $r$: an AE is formalized as a sample whose distance from the original data point (more specifically, whose adversarial perturbation norm) is a small value $r$ [carlini_towards_2017].

  • This theorem indicates that when an AE is crafted from an original input point in GP with adversarial perturbation r, the probability that the AE is classified as a different class has a non-trivial upper bound. This bound is a function of the kernel function used in the GP regression and of the distance, measured by the kernel function, between the original data point and the nearest data point of the other class.

  • When the AE is classified as a different class, it can be regarded as a successful attack. Thus, the theorem indicates that the probability of a successful attack using AE is upper-bounded.

  • We use GP regression for the classification task. Using a regressor for classification instead of a classifier was also done in [lee_deep_2018] and produced good results.

  • Intuitively, the probability of successful adversarial examples should be upper-bounded by the distance from the nearest sample to the decision boundary. However, in general, the decision boundary cannot be easily calculated. Instead of considering the decision boundary, we proved that finding the closest points from different classes is sufficient to investigate the robustness of GP.

2.5 Maximum Success Probability of AE within All Data in the Training Set

In this section, we prove an upper bound on the success probability of an AE crafted from any data point in the training set.

The problem formulation is the same as in Theorem 1. Let S=\{s_{11},s_{12},\dots,s_{mn}\}, where s_{ij}=k(x_{i},x_{j}), x_{i}\in\mathcal{D}_{+} and x_{j}\in\mathcal{D}_{-}. Let s be the maximum of S, and take x_{+}\in\mathcal{D}_{+}, x_{-}\in\mathcal{D}_{-} such that k(x_{+},x_{-})=s. Take r\in\mathbb{R} such that for all x\in\mathcal{D}_{+} and x_{*}\in\mathbb{R}^{D} with k(x,x_{*})=r, the inequality k(x,x_{*})>k(x_{-},x_{*}) holds.

Theorem 2.

Given x_{+} and x_{-} as above, and given x_{\circ} arbitrarily from \mathcal{D}_{+}, set x_{+*},x_{\circ*}\in\mathbb{R}^{D} such that k(x_{+},x_{+*})=r and k(x_{\circ},x_{\circ*})=r. Then, for any r>k(x_{-},x_{+*}), x_{\circ}\in\mathcal{D}_{+}, and x_{\circ*}\in\mathbb{R}^{D}, \Pr(\mathcal{C}(x_{\circ*})=-1) is upper-bounded by the upper bound of \Pr(\mathcal{C}(x_{+*})=-1), provided that the MSP function \phi(r|\mathcal{D}) in Theorem 1 increases monotonically with respect to k(x_{+},x_{-}). That is, \Pr(\mathcal{C}(x_{\circ*})=-1)\leq\phi(r|\mathcal{D}) holds.

The schematic diagram is shown in Fig. 2.

Refer to caption
Figure 2: Illustration of the conditions in Theorem 2 assuming the input space as 2\mathbb{R}^{2}. Blue and orange circles suggest the distributions of the input data points of 𝒟+\mathcal{D}_{+} and 𝒟\mathcal{D}_{-} respectively. x+x_{+} and xx_{-} are the nearest input data points from 𝒟+\mathcal{D}_{+} and 𝒟\mathcal{D}_{-} respectively. Note that ss is the value of the kernel function whose inputs are the nearest points from the dataset, that is, s=k(x+,x)s=k(x_{+},x_{-}).
Proof.

When r is fixed, \phi(r|\mathcal{D}) can be regarded as a function of k(x_{+},x_{-}), provided that the kernel function is translation invariant. Therefore, when \phi(r|\mathcal{D}) in Theorem 1 increases monotonically with respect to k(x_{+},x_{-}), a larger s gives a larger value of \phi(r|\mathcal{D}). By the condition above, the kernel value of the pair x_{+}, x_{-} is the greatest among the kernel values of all pairs chosen from \mathcal{D}_{+} and \mathcal{D}_{-}. Thus, the upper bound of \Pr(\mathcal{C}(x_{+*})=-1) is greater than the upper bound obtained with any other x_{\circ}\in\mathcal{D}_{+} and its corresponding point x_{\circ*}. ∎

2.6 Remarks on Theorem 2

  • Theorem 2 indicates that an AE crafted from an original data point succeeds more easily when that data point is closer to a sample of the other class, provided the kernel function satisfies the condition that \phi(r|\mathcal{D}) is monotonically increasing as a function of k(x_{+},x_{-}).

  • If the kernel function satisfies the condition that \phi(r|\mathcal{D}) increases monotonically as a function of k(x_{+},x_{-}), then one of the points of the closest pair is the original data point from which an AE is crafted most successfully in GP classification with that kernel function.
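The quantity s = max(S) in the formulation above is simply the kernel value of the closest cross-class pair. It can be found by a brute-force scan; the sketch below uses hypothetical 2-D points and assumes a Gaussian kernel:

```python
import math

def gauss_kernel(x, y, theta1=1.0, theta2=10.0):
    # Gaussian kernel as a similarity between points (an assumption for this sketch).
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return theta1 * math.exp(-d2 / theta2)

def closest_pair(D_plus, D_minus, k=gauss_kernel):
    # Return (x_plus, x_minus, s) with s the maximum of k(x_i, x_j) over all
    # cross-class pairs, i.e. s = max(S) in the formulation of Theorem 2.
    best = None
    for xi in D_plus:
        for xj in D_minus:
            v = k(xi, xj)
            if best is None or v > best[2]:
                best = (xi, xj, v)
    return best

# Hypothetical data: the pair at distance 1 has the largest kernel value.
D_plus = [(0.0, 0.0), (5.0, 5.0)]
D_minus = [(1.0, 0.0), (9.0, 9.0)]
x_plus, x_minus, s = closest_pair(D_plus, D_minus)
assert (x_plus, x_minus) == ((0.0, 0.0), (1.0, 0.0))
```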

3 Experimental Results

To justify Theorem 1 and to confirm whether the theorem holds for GP classification with a standard dataset, we conducted the following experiment.

3.1 Devices

The experiments were conducted on an Ubuntu 20.04.3 LTS machine with an Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz (72 cores, 144 threads). We used Python 3.8.10, numpy 1.23.5, and scipy 1.10.1 for the calculations.

3.2 Materials

We used ImageNet as the dataset, under the license described at https://www.image-net.org/download.php. The ImageNet data used were downloaded from https://www.kaggle.com/datasets/liusha249/imagenet10.

3.3 Procedure

We used images of 10 classes from ImageNet, labeled 0,1,\dots,9 respectively. We chose samples whose labels are either a or b, where a and b are taken pairwise from the labels \{0,1,\dots,9\}, resulting in 90 combinations. We used 500 samples for each label. The images used had 3 color channels and were randomly cropped to 224\times 224 pixels to adjust for the different aspect ratios; therefore, the input dimension of the data was 150528. We trained the GP regressor [rasmussen_gaussian_2006] using these samples as training data with the Gaussian kernel. The Gaussian kernel used in the experiment is as follows:

k(x,x)=θ1exp(xx2θ2)k(x,x^{\prime})=\theta_{1}\exp{\left(-\frac{||x-x^{\prime}||^{2}}{\theta_{2}}\right)} (65)

In the experiment, the adversarial perturbation norm was fixed to 10, and the parameters of the kernel function were varied: \theta_{1}=0.1,0.5,1 and \theta_{2}=10,50. The randomly cropped area of each image was kept the same to allow comparison across conditions.
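The GP regressor follows the standard predictive equations of [rasmussen_gaussian_2006] with the Gaussian kernel of Eq. 65. A minimal numpy sketch on toy 2-D data (the jitter term for numerical stability is our assumption, not part of the experimental description):

```python
import numpy as np

def gauss_kernel_matrix(A, B, theta1=1.0, theta2=10.0):
    # Pairwise Gaussian kernel of Eq. 65 between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return theta1 * np.exp(-d2 / theta2)

def gp_predict(X, y, x_star, theta1=1.0, theta2=10.0, jitter=1e-6):
    # Predictive mean and variance of GP regression:
    #   mean = k_*^T K^{-1} y,  var = k(x_*, x_*) - k_*^T K^{-1} k_*.
    K = gauss_kernel_matrix(X, X, theta1, theta2) + jitter * np.eye(len(X))
    k_star = gauss_kernel_matrix(X, x_star[None, :], theta1, theta2)[:, 0]
    mean = k_star @ np.linalg.solve(K, y)
    var = theta1 - k_star @ np.linalg.solve(K, k_star)  # k(x_*, x_*) = theta1
    return mean, var

# Toy training data: objective -1 for class a, +1 for class b.
X = np.array([[0.0, 0.0], [4.0, 0.0]])
y = np.array([-1.0, 1.0])
mean, var = gp_predict(X, y, np.array([0.1, 0.0]))
assert mean < 0.0  # a test point near the class-a sample is predicted negative
```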

We decided on the adversarial perturbation size considering the distance between a point and the closest point of the other class (see Figure 5). Most points are not classified as the other class even when an adversarial perturbation of size 10 is added to them.

We gave the objective variable -1 to the points with label aa, and 1 to the points with label bb.

We crafted AEs under the following conditions:

  • Each AE is crafted by adding a perturbation to an original sample of label b, resulting in 500 AEs.

  • The perturbation is obtained by moving the original sample by a fixed distance (in l_{2} norm) towards the nearest point whose label is a.
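The two conditions above amount to moving each label-b sample a fixed l2 distance along the direction of its nearest label-a neighbor; a sketch with hypothetical low-dimensional data in place of the 150528-dimensional images:

```python
import numpy as np

def craft_ae(x_b, X_a, norm=10.0):
    # Move x_b (label b) a fixed l2 distance `norm` towards its nearest
    # neighbor among X_a (label a), as in the experiment.
    dists = np.linalg.norm(X_a - x_b, axis=1)
    target = X_a[np.argmin(dists)]
    direction = (target - x_b) / np.linalg.norm(target - x_b)
    return x_b + norm * direction

# Hypothetical data: the nearest class-a point is (30, 0, 0).
x_b = np.zeros(3)
X_a = np.array([[30.0, 0.0, 0.0], [100.0, 0.0, 0.0]])
ae = craft_ae(x_b, X_a, norm=10.0)
assert np.isclose(np.linalg.norm(ae - x_b), 10.0)  # perturbation norm is exactly 10
```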

A sample of the adversarial examples crafted in this experiment is shown in Fig. 3.

Refer to caption
(a)
Refer to caption
(b)
Figure 3: A sample of adversarial examples of norm 10 crafted in the current experiment. The adversarial perturbation is directed towards the nearest point in the other class.

Next, we calculated the predicted mean \mu_{\mathcal{R}} and variance \sigma^{2}_{\mathcal{R}} using the GP regressor, and the probability that the regression result is negative (i.e., the classification result is label a), given by

\int_{-\infty}^{0}\mathcal{N}\left(x;\mu_{\mathcal{R}},\sigma^{2}_{\mathcal{R}}\right)dx. (66)

This is the empirical probability that the GP classifier classifies an AE into a class other than its original class.

Finally, we calculated the theoretical probability of classifying an AE into the other class, using \Phi\left(-\mu/\sigma\right) in Theorem 1 with \epsilon=0.

These theoretical and empirical probabilities were calculated for all 500 points with label b in each condition. In each condition, the mean and maximum of the theoretical and empirical probabilities were computed, and their means across conditions were calculated.
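Both quantities reduce to a Gaussian CDF evaluated at zero; with scipy (which the experiment uses), the empirical probability of Eq. 66 can be computed as below:

```python
from scipy.stats import norm

def empirical_prob(mu_r: float, var_r: float) -> float:
    # Probability that the GP regression output is negative (Eq. 66), i.e. the
    # CDF of N(mu_R, sigma_R^2) evaluated at 0, which equals Phi(-mu_R/sigma_R).
    return norm.cdf(0.0, loc=mu_r, scale=var_r ** 0.5)

# Closed-form equivalence with Phi(-mu/sigma).
assert abs(empirical_prob(1.0, 4.0) - norm.cdf(-0.5)) < 1e-12
```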

3.4 Result

Refer to caption
(a)
Refer to caption
(b)
Refer to caption
(c)
Refer to caption
(d)
Refer to caption
(e)
Refer to caption
(f)
Figure 4: Samples from the results of the experiment. The horizontal axis shows the distance between a point and the closest point whose label is different from that point's. The vertical axis shows the empirical and theoretical probability that the point is classified into that different label. Note that the theoretical upper bound changes according to the kernel parameters \theta_{1} and \theta_{2}.
Refer to caption
Figure 5: The histogram of the distance between a point and the nearest point with a different label. The data pool is merged across the pairwise conditions, so the number of data points is 45000 (= 500 × 90).

Fig. 4 shows a sample of the theoretical upper bound and the empirical value in the condition a=0 and b=7. The horizontal axis indicates the distance between a point and the nearest point of the other class, and the vertical axis indicates the theoretical upper bound calculated with Theorem 1 and the empirical value calculated with Eq. 66. The results are shown in Table 1, which gives the proportion of points that follow the theorem and the maximum theoretical value within each a, b combination (the values shown in Table 1 are means across the 90 combinations of a and b).

The analysis of the distance between the points and their nearest points with different labels is shown in Table 2. The mean distance is 89.72; when restricted to the points that do not follow the theorem (that is, whose theoretical values are not larger than their empirical values), the distance is smaller. The overall histogram of the distance between a point and the nearest point with a different label is shown in Figure 5.

An extreme example is shown in panel (c) of Figure 4. Under this condition of kernel parameters, the empirical probability is high, and the theoretical upper bound is tight.

The upper bound does not always hold because, in this experiment, it is calculated from Theorem 1 with \epsilon=0, while for the real data \epsilon could be positive. However, the fact that the theorem holds with high probability even when \epsilon is 0 suggests that Theorem 1 is, empirically, satisfied in practice.

Theoretically, when the bandwidth of the Gaussian distribution in the kernel function is small, the empirical decision boundary is not affected by distant points and is determined mainly by the closest points. In that condition, the theoretical upper bound (calculated from the two nearest points) is closer to the empirical probability.

Table 1: The proportion of points where the theoretical value is larger than the empirical value, and the mean (± standard deviation) of the theoretical upper bound of the closest pair in each condition. The change of the kernel parameters \theta_{1},\theta_{2} causes a change of the theoretical upper bound.
\theta_{1}    \theta_{2}    Proportion of points following the theorem    Mean ± std of the max theoretical value
0.1    10    0.9996    0.2095 ± 0.1538
0.1    50    0.9999    0.1216 ± 0.1927
0.5    10    0.9991    0.3512 ± 0.0809
0.5    50    0.9998    0.2616 ± 0.1291
1      10    0.9898    0.3928 ± 0.0585
1      50    0.9997    0.3216 ± 0.0981
Table 2: The first row shows the mean (± standard deviation) of the distance between the points that do not follow the theorem and their nearest points of the other class. The second row shows the mean distance between all points and their nearest points of the other class.
\theta_{1}    \theta_{2}    Mean ± std distance (points not following the theorem)
0.1    10    23.59 ± 5.342
0.1    50    18.55 ± 4.031
0.5    10    31.13 ± 8.524
0.5    50    24.58 ± 13.25
1      10    50.84 ± 10.99
1      50    30.13 ± 14.98
Mean distance of all points: 89.72

4 Discussion

4.1 The Interpretation of the Theorems

Our theorems show a fundamental limitation of AEs in GP: the success probability of an AE crafted by any method cannot exceed the value of the MSP function \phi in Theorem 1, and the vulnerability of a training dataset to AEs is determined by the closest pair of data points with different labels.

The theorems can be applied to other inference models because GP subsumes several of them. In particular, the theoretical bound can be computed for neural networks. Since neural networks with infinitely many hidden units and Bayesian neural networks can be regarded as GPs [gal_dropout_2016, williams_computing_1996], our results apply to neural networks, giving a fundamental limitation of AEs in neural networks, as in previous research [blaas_adversarial_2020, fawzi_adversarial_2018, shafahi_are_2019]. Compared with previous research, our result adds the viewpoint of the distance between samples.

4.2 The Choice of Kernel Functions

Since the predictive mean can be written as the sum of the terms of the kernel functions whose inputs are each training point and test point (Representer Theorem in GP [goos_generalized_2001]), changing the parameter of the kernel functions changes the shape of the decision boundary.

The results of the experiments in Section 3 suggest that the upper bound of the probability changes according to the choice of the kernel function. This suggests that changing the activation function of a neural network can improve its robustness. The change with the parameters of the Gaussian kernel (Eq. 65) shown in the experiment in Section 3 can be interpreted as follows:

  • When \theta_{1} changes, the value of the Gaussian kernel is multiplied by \theta_{1}. With regard to \mu and \sigma in Theorem 1, when \theta_{1} becomes larger, \mu remains unchanged and \sigma becomes larger. Considering that the MSP function (Eq. 5) is based on the CDF of the Gaussian distribution, which increases monotonically with respect to \sigma when \mu>0, the value of Eq. 5 is greater when \theta_{1} is greater.

  • When \theta_{2} changes, the bandwidth of the Gaussian kernel changes. Considering the form of the exponential function, if \theta_{2} is larger, the absolute value of the derivative of the Gaussian kernel with respect to \lVert x-x^{\prime}\rVert^{2} at the same point is smaller. This means that for a larger \theta_{2} the kernel value decays more slowly with \lVert x-x^{\prime}\rVert^{2}, so the distance between two points has a greater effect on the kernel value over the interval examined in the experiment.
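The \theta_{1} effect described above can be checked numerically: scaling \theta_{1} leaves \mu unchanged and scales \sigma^{2}, so the MSP bound (1/2)exp(−\mu^{2}/2\sigma^{2}) grows with \theta_{1}. A sketch with hypothetical points (x_{+}, x_{-}, x_{*max}):

```python
import math

def kernel_stats(x_plus, x_minus, x_star_max, theta1, theta2):
    # mu and sigma^2 of Theorem 1 (with eps = 0) under the Gaussian kernel of Eq. 65.
    def k(a, b):
        d2 = sum((u - v) ** 2 for u, v in zip(a, b))
        return theta1 * math.exp(-d2 / theta2)
    t1, tr1 = k(x_plus, x_plus), k(x_plus, x_star_max)
    tr2, ts = k(x_minus, x_star_max), k(x_plus, x_minus)
    mu = (tr1 - tr2) / (t1 - ts)
    var = t1 - (t1 * (tr1 ** 2 + tr2 ** 2) - 2 * ts * tr1 * tr2) / (t1 ** 2 - ts ** 2)
    return mu, var

pts = ((0.0, 0.0), (3.0, 0.0), (1.0, 0.0))  # hypothetical x+, x-, x*max
mus, bounds = [], []
for theta1 in (0.1, 0.5, 1.0):
    mu, var = kernel_stats(*pts, theta1, 10.0)
    mus.append(mu)
    bounds.append(0.5 * math.exp(-mu ** 2 / (2.0 * var)))

assert all(abs(m - mus[0]) < 1e-9 for m in mus)  # mu does not depend on theta1
assert bounds[0] < bounds[1] < bounds[2]         # the upper bound grows with theta1
```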

Since \phi(r|\mathcal{D}) is not easily differentiable with respect to k(x_{1},x_{2}), a discrete calculation is required to confirm whether a kernel function satisfies the constraint in Section 2.6 and to choose a kernel function that improves robustness. However, it is reasonable that if k(x_{1},x_{2}) is small (that is, x_{1} and x_{2} are far apart), then x_{*}, a point near x_{1}, cannot be classified into the same class as x_{2}.

Previous research showed that activation functions in neural networks are equivalent to kernel functions in GP [lee_deep_2018, williams_computing_1996]. Therefore, the theoretical result of this paper provides a theoretical basis for enhancement methods that change the activation function in neural networks. Previous studies suggest that changing the activation function in neural networks can enhance robustness against AEs [goodfellow_explaining_2015, gowal_uncovering_2021, taghanaki_kernelized_2019]. Our result shows that changing the kernel function changes the theoretical upper bound of successful AEs; considering that the activation function in neural networks is the counterpart of the kernel function in GP, such robustness-enhancement methods share the same basis as our result.

4.3 Open Problems

This paper leaves the following open problems, which further investigation should address:

  • Theorem 1 is proved under the assumption that the effect of input points other than x_{1},x_{2} on the predictive mean is less than \epsilon. When using the Gaussian kernel, this assumption is reasonable because the value of the Gaussian kernel decreases rapidly with the distance between the input points, and the experiment confirmed that even with \epsilon set to 0, over 98% of the points follow the theorem. The theorem under other kernel functions should be investigated further.

  • An experiment with neural networks should be conducted to confirm that a more robust activation function can be designed using Theorem 1.

4.4 Limitations

The research has the following limitations.

  • The problem formulation of the research is binary classification, not multi-class classification.

  • Finding the nearest point can be time-consuming as the number of input points increases. An efficient algorithm, such as a divide-and-conquer algorithm, should be investigated for the nearest-point search.

  • \epsilon in Theorem 1 depends on the distribution of the training data, and the evaluation of the size of \epsilon is yet to be investigated (but, as noted above, the fact that the theorem holds with high probability even when \epsilon is 0 suggests that the upper bound of Theorem 1 can safely be used with \epsilon=0 in practice).
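On the nearest-point search mentioned in the limitations, one candidate is a k-d tree (scipy.spatial.cKDTree), which answers nearest-neighbor queries efficiently in moderate dimensions; note, however, that its advantage degrades in very high dimensions such as the 150528-dimensional inputs used here, so the following is only a sketch on hypothetical low-dimensional data:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical data: 500 points per class in 8 dimensions.
rng = np.random.default_rng(0)
X_a = rng.normal(size=(500, 8))
X_b = rng.normal(loc=3.0, size=(500, 8))

# Nearest class-a neighbor of every class-b point via a k-d tree.
tree = cKDTree(X_a)
dists, idx = tree.query(X_b)  # l2 distance and index of the nearest point

# Brute-force check of the same distances.
brute = np.linalg.norm(X_b[:, None, :] - X_a[None, :, :], axis=-1).min(axis=1)
assert np.allclose(dists, brute)
```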

4.5 Conclusion

In this paper, we showed that in the GP classification, the probability of a successful attack of AE has an upper bound. The experiment showed that the parameter of kernel functions can change the theoretical upper bound of the probability of a successful attack of AE, suggesting that the choice of kernel functions affects the robustness against AE in the GP classification.

4.5.1 Disclosure of Interests.

The authors have no competing interests to declare that are relevant to the content of this article.

References

  • [1] Bhagoji, A.N., Cullina, D., Mittal, P.: Lower Bounds on Adversarial Robustness from Optimal Transport. 33rd Conference on Neural Information Processing Systems (Oct 2019), https://doi.org/10.48550/arXiv.1909.12272
  • [2] Blaas, A., Patane, A., Laurenti, L., Cardelli, L., Kwiatkowska, M., Roberts, S.: Adversarial Robustness Guarantees for Classification with Gaussian Processes. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, 108, 3372–3382 (Mar 2020), https://doi.org/10.48550/arXiv.1809.06452
  • [3] Cai, Q.Z., Du, M., Liu, C., Song, D.: Curriculum Adversarial Training. In: International Joint Conference on Artificial Intelligence (May 2018), https://doi.org/10.48550/arXiv.1805.04807
  • [4] Cardelli, L., Kwiatkowska, M., Laurenti, L., Patane, A.: Robustness Guarantees for Bayesian Inference with Gaussian Processes. The Thirty-Third AAAI Conference on Artificial Intelligence (2019), http://arxiv.org/abs/1809.06452
  • [5] Carlini, N., Wagner, D.: Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods. Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security pp. 3–14 (Nov 2017), https://doi.org/10.48550/arXiv.1705.07263
  • [6] Carlini, N., Wagner, D.: Towards Evaluating the Robustness of Neural Networks. In: 2017 IEEE Symposium on Security and Privacy (SP). pp. 39–57. IEEE, San Jose, CA, USA (May 2017). https://doi.org/10.1109/SP.2017.49
  • [7] Cohen, J.M., Rosenfeld, E., Kolter, J.Z.: Certified Adversarial Robustness via Randomized Smoothing (Jun 2019), https://doi.org/10.48550/arXiv.1902.02918, arXiv:1902.02918 [cs, stat]
  • [8] Fawzi, A., Fawzi, H., Fawzi, O.: Adversarial vulnerability for any classifier. Proceedings of the 32nd International Conference on Neural Information Processing Systems. pp. 1186–1195 (Nov 2018), https://doi.org/10.48550/arXiv.1802.08686
  • [9] Feinman, R., Curtin, R.R., Shintre, S., Gardner, A.B.: Detecting Adversarial Samples from Artifacts. arXiv:1703.00410 [cs, stat] (Nov 2017), https://doi.org/10.48550/arXiv.1703.00410
  • [10] Gal, Y., Ghahramani, Z.: Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. Proceedings of The 33rd International Conference on Machine Learning 48, 1050–1059 (Oct 2016), https://doi.org/10.48550/arXiv.1506.02142
  • [11] Gilmer, J., Metz, L., Faghri, F., Schoenholz, S.S., Raghu, M., Wattenberg, M., Goodfellow, I.: The Relationship Between High-Dimensional Geometry and Adversarial Examples. arXiv:1801.02774 [cs] (Sep 2018), https://doi.org/10.48550/arXiv.1801.02774
  • [12] Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and Harnessing Adversarial Examples. International Conference on Learning Representations (Mar 2015), https://doi.org/10.48550/arXiv.1412.6572
  • [13] Gowal, S., Qin, C., Uesato, J., Mann, T., Kohli, P.: Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples (Mar 2021), https://doi.org/10.48550/arXiv.2010.03593, arXiv:2010.03593 [cs, stat]
  • [14] Lee, J., Bahri, Y., Novak, R., Schoenholz, S.S., Pennington, J., Sohl-Dickstein, J.: Deep Neural Networks as Gaussian Processes. International Conference on Learning Representations (Mar 2018), https://doi.org/10.48550/arXiv.1711.00165
  • [15] Mahloujifar, S., Diochnos, D.I., Mahmoody, M.: The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure. Proceedings of the AAAI Conference on Artificial Intelligence 33, 4536–4543 (Jul 2019). https://doi.org/10.1609/aaai.v33i01.33014536
  • [16] Rasmussen, C.E., Williams, C.K.I.: Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning, MIT Press, Cambridge, Mass (2006)
  • [17] Schölkopf, B., Herbrich, R., Smola, A.J.: A Generalized Representer Theorem. In: Goos, G., Hartmanis, J., Van Leeuwen, J., Helmbold, D., Williamson, B. (eds.) Computational Learning Theory, vol. 2111, pp. 416–426. Springer Berlin Heidelberg, Berlin, Heidelberg (2001). https://doi.org/10.1007/3-540-44581-1_27, http://link.springer.com/10.1007/3-540-44581-1_27, series Title: Lecture Notes in Computer Science
  • [18] Shafahi, A., Huang, W.R., Studer, C., Feizi, S., Goldstein, T.: Are adversarial examples inevitable? 7th International Conference on Learning Representations (2019), https://doi.org/10.48550/arXiv.1809.02104
  • [19] Smith, M.T., Grosse, K., Backes, M., Álvarez, M.A.: Adversarial vulnerability bounds for Gaussian process classification. Machine Learning 112(3), 971–1009 (Mar 2023). https://doi.org/10.1007/s10994-022-06224-6
  • [20] Taghanaki, S.A., Abhishek, K., Azizi, S., Hamarneh, G.: A Kernelized Manifold Mapping to Diminish the Effect of Adversarial Perturbations. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE (2019), https://doi.org/10.48550/arXiv.1903.01015
  • [21] Wang, Y., Jha, S., Chaudhuri, K.: Analyzing the Robustness of Nearest Neighbors to Adversarial Examples. Proceedings of the 35 th International Conference on Machine Learning p. 10 (2018). https://doi.org/10.48550/arXiv.1706.03922
  • [22] Williams, C.: Computing with infinite networks. In: Mozer, M., Jordan, M., Petsche, T. (eds.) Advances in Neural Information Processing Systems. vol. 9. MIT Press (1996), https://proceedings.neurips.cc/paper_files/paper/1996/file/ae5e3ce40e0404a45ecacaaf05e5f735-Paper.pdf
  • [23] Zhang, X., Chen, J., Gu, Q., Evans, D.: Understanding the intrinsic robustness of image distributions using conditional generative models. In: Chiappa, S., Calandra, R. (eds.) Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research, vol. 108, pp. 3883–3893. PMLR (26–28 Aug 2020). https://doi.org/10.48550/arXiv.2003.00378