
Detecting Out-of-distribution Examples via Class-conditional Impressions Reappearing

Abstract

Out-of-distribution (OOD) detection aims at enhancing standard deep neural networks to distinguish anomalous inputs from original training data. Previous progress has introduced various approaches for which the in-distribution training data, and even several OOD examples, are prerequisites. However, due to privacy and security concerns, such auxiliary data tends to be unavailable in real-world scenarios. In this paper, we propose a data-free method that requires no training on natural data, called Class-Conditional Impressions Reappearing (C2IR), which utilizes image impressions from the fixed model to recover class-conditional feature statistics. Based on that, we introduce Integral Probability Metrics to estimate layer-wise class-conditional deviations and obtain layer weights by Measuring Gradient-based Importance (MGI). Experiments verify the effectiveness of our method and show that C2IR outperforms other post-hoc methods and reaches performance comparable to the full-access (ID and OOD) detection method, especially on the far-OOD dataset (SVHN).

Index Terms—  Out-of-distribution, Data-free, Model inversion

1 Introduction

Out-of-distribution (OOD) detection is crucial to ensuring the reliability and safety of AI applications. OOD examples from the open world are prone to induce over-confident predictions, which makes separating in-distribution (ID) data from OOD data a challenging task.

Previous works have proposed a variety of approaches to estimate the distribution discrepancy between training data and target OOD examples [1, 2, 3, 4]. Despite this progress, conventional OOD detection methods universally rely on ID examples, and even OOD examples, for detector training or hyper-parameter tuning. However, as privacy and security matter more nowadays, some original datasets, especially self-made datasets in business applications, have become unavailable, and the distribution of OOD examples is unpredictable. Thus, utilizing auxiliary data is impractical in real-world scenarios.

Fig. 1: Comparison between typical out-of-distribution detection methods and ours. Typical methods require both the pretrained model and the training data; * marks approaches [2, 5] that additionally assume access to OOD data. In contrast, our method requires only the fixed classifier and uses synthesized data to detect OOD examples.

In this paper, we propose C2IR, a data-free detection method via Class-Conditional Impressions Reappearing. Several recent studies have shown that pretrained models contain sufficient information about the distribution of training data [6, 7]. We further exploit more detailed virtual statistics from the pretrained model with the inversion trick [8].

Our contributions are as follows:

  • We show that image impressions recovered from the model’s BatchNorm layers share similar intermediate feature statistics with in-distribution data.

  • Based on the virtual data, we design layer-wise deviation metrics that utilize virtual class-conditional activation means.

  • To obtain the layer weights without the support of auxiliary data, we propose an attribution method by Measuring Gradient-based Importance (MGI).

2 PRELIMINARIES AND MOTIVATIONS

2.1 Out-of-distribution Detection

The goal of out-of-distribution (OOD) detection is to distinguish anomalous inputs ($\mathcal{D}_{\text{out}}$) from original training data ($\mathcal{D}_{\text{in}}$) based on a well-trained classifier $f_{\theta}$. This problem can be considered as a binary classification task with a score function $\mathcal{S}(\cdot)$. Formally, given an input sample $x$, the level-set estimation is:

$$g(x)=\begin{cases}\text{out},&\text{if }\mathcal{S}(x)>\gamma\\ \text{in},&\text{if }\mathcal{S}(x)\leq\gamma\end{cases}\quad(1)$$

In our work, lower scores indicate that the sample $x$ is more likely to be classified as in-distribution (ID), and $\gamma$ represents a threshold separating ID and OOD.
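As a concrete illustration (not the paper’s code), the decision rule in Eq. (1) reduces to a single threshold comparison; `score_fn` and `gamma` below are placeholder names for $\mathcal{S}(\cdot)$ and $\gamma$:

```python
from typing import Callable
import torch

# A minimal sketch of the level-set decision rule in Eq. (1); `score_fn`
# and `gamma` stand in for S(.) and the threshold, which are defined later.
def detect(x: torch.Tensor, score_fn: Callable[[torch.Tensor], float],
           gamma: float) -> str:
    # Lower scores indicate x is more likely in-distribution.
    return "out" if score_fn(x) > gamma else "in"
```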

2.2 BatchNorm Statistics for Approximate Estimation

Under a Gaussian assumption about feature statistics in the network, the feature representation at one layer is denoted as $\mathbf{Z}\in\mathbb{R}^{d\times d}$. For the training distribution $\mathcal{X}$, each $z\in\mathbf{Z}$ follows the Gaussian distribution $\mathcal{N}(\mu_{\text{in}},\sigma^{2}_{\text{in}})$. Normally the statistics of $\mathcal{X}$ are calculated from training samples. For estimation efficiency [9, 7] and data privacy [8], several works propose the running averages from batch normalization (BN) layers as an alternative to the original in-distribution statistics. During training, the BN expectation $\mathbb{E}_{\text{bn}}(z)$ and variance $\text{Var}_{\text{bn}}(z)$ are updated after each batch of samples $\mathcal{T}$:

$$\begin{aligned}\mathbb{E}_{\text{bn}}(z)&\leftarrow\lambda\,\mathbb{E}_{\text{bn}}(z)+(1-\lambda)\frac{1}{|\mathcal{T}|}\sum_{x_{b}\in\mathcal{T}}\mu(x_{b})\\ \text{Var}_{\text{bn}}(z)&\leftarrow\lambda\,\text{Var}_{\text{bn}}(z)+(1-\lambda)\frac{1}{|\mathcal{T}|}\sum_{x_{b}\in\mathcal{T}}\sigma^{2}(x_{b})\end{aligned}\quad(2)$$

Therefore the in-distribution statistics can be approximated: $\mathbb{E}_{\text{in}}(z)\approx\mathbb{E}_{\text{bn}}(z)$, $\text{Var}_{\text{in}}(z)\approx\text{Var}_{\text{bn}}(z)$.
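A minimal PyTorch sketch of this estimation, assuming the fixed classifier is a torchvision ResNet; the running buffers of each BatchNorm layer serve directly as the approximate ID statistics:

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Collect BN running averages from a fixed classifier as approximate
# in-distribution statistics: E_in(z) ~ E_bn(z), Var_in(z) ~ Var_bn(z).
model = models.resnet34()  # assume the pretrained classifier's weights are loaded
model.eval()

bn_stats = [
    (m.running_mean.detach().clone(), m.running_var.detach().clone())
    for m in model.modules()
    if isinstance(m, nn.BatchNorm2d)
]
```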

2.3 Motivation

Feature statistics have been applied in several advanced post-hoc methods and achieve striking performance [2, 7, 10]. Their paradigm uses feature-map statistics as a measure of deviation and computes the final score by weighting and summing per-layer scores. Prior studies obtain the weights by training on in-distribution (ID) images [7, 10] or even by additionally estimating them on some OOD datasets [2, 11]. Since training (ID) samples are unreachable in our setting, BN statistics are the readily available resource in the pretrained model $f_{\theta}$ for data-free OOD detection.

Previous works on data-free knowledge distillation [8, 12, 13] show that teacher-supervised synthesized images are as effective as natural data for knowledge transfer. A reasonable assumption is that these pseudo samples, constrained by the model’s BN statistics, recall an impression of the training data. Therefore, unlike the direct use of the BatchNorm mean [7], we consider an inverting-and-recollecting method more appropriate in the data-free scenario.

3 PROPOSED METHOD

3.1 Detection Framework

Our framework detects an anomalous input $x$ by computing layer-wise deviations $\Delta_{l}(\cdot)$. With scaling values $\alpha^{c}_{l}$ for class $c$, the score function $S(\cdot)$ is formulated as

$$S(x)=\sum_{l=1}^{L}\alpha^{c}_{l}\,\Delta_{l}(x)\quad(3)$$

where class $c$ is determined by the maximum softmax probability (MSP) [1]: $c=\operatorname{argmax}_{i\in[1,C]}f_{\theta}(x)_{i}$. Suppose unknown inputs $\{x\}$ come from a distribution $\mathcal{Q}$ and the original training data $\{x_{\text{in}}\}$ from $\mathcal{X}$. The deviation between $\mathcal{X}$ and $\mathcal{Q}$ can be defined following Integral Probability Metrics [14]: $\sup_{\psi}(\mathbb{E}_{x\sim\mathcal{Q}}[\psi(x)]-\mathbb{E}_{x_{\text{in}}\sim\mathcal{X}}[\psi(x_{\text{in}})])$, where $\psi(\cdot)$ denotes a witness function. In practice, only a single input $x$ from $\mathcal{Q}$ is measured, so we estimate the class-conditional deviation by

$$\sup_{\psi}\big(\psi(x)-\mathbb{E}_{x_{\text{in}}\sim\mathcal{X}}[\psi(x_{\text{in}})\mid y=c]\big)\quad(4)$$

As the in-distribution statistics are invisible to our data-free detection method, we use the expectation over virtual images $\{\hat{x}\}$ as a replacement. The details of the layer-wise deviations and scaling values are discussed in the following sections.
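A sketch of how these pieces combine at test time; `layer_deviations` and `alpha` are hypothetical names for the per-layer deviation routine of Section 3.2 and the precomputed scaling values of Section 3.3:

```python
import torch

# Minimal sketch of Eq. (3): pick the MSP class, then take the
# class-conditional weighted sum of layer-wise deviations.
@torch.no_grad()
def ood_score(x, model, layer_deviations, alpha):
    logits = model(x.unsqueeze(0))
    c = logits.softmax(dim=1).argmax(dim=1).item()  # MSP label c
    deltas = layer_deviations(x, c)                 # (L,) tensor of Delta_l(x)
    return (alpha[c] * deltas).sum().item()         # S(x); higher => more OOD
```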

3.2 Reappearing Data Statistics from BN Layers

Model inversion was initially designed for artistic effects on clean images [15] and has since been developed for synthesizing virtual training data in data-free knowledge distillation [8]. In this paper, we utilize this technique to excavate prior class-conditional statistics from the model’s BatchNorm layers.

Data Recovering. Given random noise $\hat{x}$ as input and a label $c$ as target, the recovering process optimizes $\hat{x}$ into a pseudo image with discernible visual information by

$$\min_{\hat{x}}\ \underbrace{\mathcal{L}_{\text{CE}}(f_{\theta}(\hat{x}),c)}_{\text{(i)}}+\underbrace{\sum_{l}\|\mu_{l}(\hat{x})-\mathbb{E}^{l}_{\text{bn}}(z)\|_{2}+\sum_{l}\|\sigma^{2}_{l}(\hat{x})-\text{Var}^{l}_{\text{bn}}(z)\|_{2}}_{\text{(ii)}}\quad(5)$$

where (i) represents the class prior loss (cross-entropy loss) and (ii) represents the feature-statistic (mean and variance) divergences between $\hat{x}$ and the BN running averages at each layer $l$.
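A rough PyTorch sketch of this recovering objective, in the spirit of DeepInversion [8]; the batch size, image size, optimizer, and step count are illustrative choices, not the paper’s settings:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Attach a hook to every BN layer that, on each forward pass, measures the
# current batch statistics of its input and compares them to the stored
# running averages (term (ii) of Eq. (5)).
def attach_bn_losses(model):
    losses = []
    def make_hook(bn):
        def hook(module, inputs, output):
            x = inputs[0]
            mu = x.mean(dim=(0, 2, 3))
            var = x.var(dim=(0, 2, 3), unbiased=False)
            losses.append((mu - bn.running_mean).norm(2)
                          + (var - bn.running_var).norm(2))
        return hook
    handles = [m.register_forward_hook(make_hook(m))
               for m in model.modules() if isinstance(m, nn.BatchNorm2d)]
    return losses, handles

def recover(model, target_class, n=64, steps=2000, lr=0.05):
    model.eval()  # keep BN buffers frozen while inverting
    x_hat = torch.randn(n, 3, 32, 32, requires_grad=True)  # CIFAR-sized noise
    opt = torch.optim.Adam([x_hat], lr=lr)
    losses, handles = attach_bn_losses(model)
    target = torch.full((n,), target_class, dtype=torch.long)
    for _ in range(steps):
        losses.clear()
        opt.zero_grad()
        # forward pass fills `losses` via the hooks: term (i) + term (ii)
        loss = F.cross_entropy(model(x_hat), target) + sum(losses)
        loss.backward()
        opt.step()
    for h in handles:
        h.remove()
    return x_hat.detach()
```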

Class-conditional Activation Mean. The discrepancy of activation units is introduced to differentiate ID and OOD examples. For each class $c$, we control the class prior loss and generate a one-class dataset $\mathcal{D}^{c}_{\text{syn}}$. Denote the $l$-layer activation map for an input $x$ as $A^{l}(x)\in\mathbb{R}^{h\times d_{l}\times d_{l}}$, where $h$ is the channel size and $d_{l}$ is the spatial dimension. Given an input $x$ and its MSP label $c$, our layer-wise deviation $\Delta_{l}(\cdot)$ transforms Eq. (4) with the channel-wise linearly weighted average $\text{C-Avg}(\cdot)$:

$$\begin{aligned}
\Delta_{l}(x) &= \sup_{\psi}\left|\psi(x)-\frac{1}{|\mathcal{D}^{c}_{\text{syn}}|}\sum_{\hat{x}\in\mathcal{D}^{c}_{\text{syn}}}\psi(\hat{x})\right| \\
&= \left|\text{C-Avg}(A^{l}(x))-\underbrace{\frac{1}{|\mathcal{D}^{c}_{\text{syn}}|}\sum_{\hat{x}\in\mathcal{D}^{c}_{\text{syn}}}\text{C-Avg}(A^{l}(\hat{x}))}_{\text{(i)}}\right| \\
&= \left|\sum_{k=1}^{h}\beta^{c}_{l,k}\cdot\frac{1}{d_{l}^{2}}\sum_{i=1}^{d_{l}}\sum_{j=1}^{d_{l}}A^{l}(x)_{k,i,j}-\frac{1}{|\mathcal{D}^{c}_{\text{syn}}|}\sum_{\hat{x}\in\mathcal{D}^{c}_{\text{syn}}}\sum_{k=1}^{h}\beta^{c}_{l,k}\cdot\frac{1}{d_{l}^{2}}\sum_{i=1}^{d_{l}}\sum_{j=1}^{d_{l}}A^{l}(\hat{x})_{k,i,j}\right|
\end{aligned}\quad(6)$$

where $\beta^{c}_{l,k}$ denotes the $k$-th channel weight for class $c$ and (i) represents the empirical $\overline{\text{C-Avg}}_{c}$ over $\mathcal{D}^{c}_{\text{syn}}$.
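A minimal sketch of computing $\Delta_{l}$ for one layer, assuming the activation map, the channel weights $\beta^{c}_{l}$, and the precomputed empirical mean (term (i)) are given:

```python
import torch

# Minimal sketch of the layer-wise deviation Delta_l(x) in Eq. (6).
# `act` is A^l(x) with shape (h, d_l, d_l); `beta` holds the channel weights
# beta^c_{l,k}; `cavg_bar` is the precomputed empirical mean, term (i).
def c_avg(act: torch.Tensor, beta: torch.Tensor) -> torch.Tensor:
    return (beta * act.mean(dim=(1, 2))).sum()  # beta-weighted channel means

def delta_l(act: torch.Tensor, beta: torch.Tensor,
            cavg_bar: torch.Tensor) -> torch.Tensor:
    return (c_avg(act, beta) - cavg_bar).abs()
```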

3.3 Measuring Gradient-based Importance (MGI)

The scaling values $\alpha^{c}_{l}$ and $\beta^{c}_{l,k}$ greatly influence the performance of OOD detection. In our data-free method, the process of image evolution naturally provides a signal of neuron importance from which to derive these weights.

Channel-wise Gradient Average. Let $y^{c}$ be the model output score for class $c$. We adopt a gradient-based attribution method [16] to compute the gradient average $\overline{w}^{c}_{l,k}$ as the weight of the $k$-th channel activation map at the $l$-th layer.

$$\overline{w}^{c}_{l,k}=\frac{1}{|\mathcal{D}^{c}_{\text{syn}}|}\sum_{\hat{x}\in\mathcal{D}^{c}_{\text{syn}}}\frac{1}{d_{l}^{2}}\sum_{i=1}^{d_{l}}\sum_{j=1}^{d_{l}}\frac{\partial y^{c}}{\partial A^{l}(\hat{x})_{k,i,j}}\quad(7)$$
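A sketch of Eq. (7) in the style of Grad-CAM [16]; `acts_of` is a hypothetical helper (e.g., built on a forward hook) that returns the logits together with the $l$-layer activation map inside the autograd graph:

```python
import torch

# Average, over the one-class synthesized dataset, the spatially-averaged
# gradient of the class score w.r.t. each channel of the l-layer activations.
def channel_grad_avg(model, syn_images, acts_of, c):
    w = []
    for x_hat in syn_images:
        logits, act = acts_of(model, x_hat.unsqueeze(0))  # act: (1, h, d_l, d_l)
        grad = torch.autograd.grad(logits[0, c], act)[0]  # dy^c / dA^l
        w.append(grad.mean(dim=(2, 3)).squeeze(0))        # spatial average, (h,)
    return torch.stack(w).mean(dim=0)                     # mean over D_syn^c
```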

Layer-wise Activation Mean Sensitivity. Denote the image sequence during the optimization procedure with $T$ iterations as $\{\hat{x}_{0},\hat{x}_{1},\dots,\hat{x}_{T}\}$. Compared with $\hat{x}_{T}$, which contains useful in-distribution features that induce high-confidence predictions [17] for the target class $c$, the random noise $\hat{x}_{0}$ is considered a baseline without any prior information. Based on the variations of their feature statistics, we define the $l$-layer activation-mean sensitivity for the $t$-iteration image $\hat{x}_{t}$ as

$$\begin{aligned}
\delta_{l}(\hat{x}_{t}) &= \frac{1}{\Delta y_{t}^{c}}\cdot\frac{\partial y_{t}^{c}}{\partial\,\text{C-Avg}(A^{l}(\hat{x}_{t}))} \\
&= \frac{1}{y^{c}_{t}-y^{c}_{t-1}}\cdot\sum_{k=1}^{h}\beta^{c}_{l,k}\frac{1}{d_{l}^{2}}\sum_{i=1}^{d_{l}}\sum_{j=1}^{d_{l}}\frac{\partial y^{c}_{t}}{\partial A^{l}(\hat{x}_{t})_{k,i,j}}
\end{aligned}\quad(8)$$

That is, a larger variation of feature statistics during the evolution of the $c$-class output score $y^{c}_{t}$ indicates that the layer is more sensitive to label shifts on the inputs. In other words, the layer’s magnitudes can better differentiate abnormal inputs. Therefore, we use the relative sensitivity $\delta_{l}(\hat{x}_{t})$ to measure each layer’s contribution to OOD detection. The total $\delta_{l}(\hat{x})$ and the empirical $\overline{\delta}^{c}_{l}$ over $\mathcal{D}^{c}_{\text{syn}}$ are computed by

$$\delta_{l}(\hat{x})=\frac{1}{T}\sum_{t=1}^{T}\delta_{l}(\hat{x}_{t}),\qquad\overline{\delta}^{c}_{l}=\frac{1}{|\mathcal{D}^{c}_{\text{syn}}|}\sum_{\hat{x}\in\mathcal{D}^{c}_{\text{syn}}}\delta_{l}(\hat{x})\quad(9)$$

Then $\beta^{c}_{l,k}$ and $\alpha^{c}_{l}$ are obtained by

$$\beta^{c}_{l,k}=\frac{\exp(\overline{w}^{c}_{l,k})}{\sum_{i=1}^{h}\exp(\overline{w}^{c}_{l,i})},\qquad\alpha^{c}_{l}=\frac{\exp(\overline{\delta}^{c}_{l})}{\sum_{i=1}^{L}\exp(\overline{\delta}^{c}_{i})}\quad(10)$$

For the sake of computational efficiency, $\alpha^{c}_{l}$, $\beta^{c}_{l,k}$, and $\overline{\text{C-Avg}}_{c}$ are calculated in advance during data recovering for each one-class dataset $\mathcal{D}^{c}_{\text{syn}}$.
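A one-line sketch of the softmax normalization in Eq. (10), given the averaged channel gradients $\overline{w}^{c}_{l}$ for one layer and the layer sensitivities $\overline{\delta}^{c}$ across all $L$ layers:

```python
import torch

# Softmax-normalize the averaged channel gradients of one layer into beta,
# and the layer sensitivities into alpha (Eq. (10)).
def normalize_weights(w_bar_l: torch.Tensor, delta_bar: torch.Tensor):
    beta_l = torch.softmax(w_bar_l, dim=-1)   # (h,) channel weights for layer l
    alpha = torch.softmax(delta_bar, dim=-1)  # (L,) layer weights
    return beta_l, alpha
```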

OOD Datasets | TNR at TPR 95% (↑) | AUROC (↑) | Detection Acc (↑) | AUPR_in (↑)
(each cell: Softmax Score [1] / ODIN [3] / EBM [4] / C2IR (Ours))
TinyImageNet (R) | 54.79 / 82.01 / 76.17 / 88.95 | 94.92 / 95.43 / 96.37 / 98.69 | 92.46 / 88.88 / 93.22 / 96.12 | 96.62 / 94.84 / 97.52 / 98.24
TinyImageNet (C) | 63.52 / 76.79 / 81.07 / 92.37 | 95.60 / 93.68 / 96.94 / 98.45 | 92.89 / 86.59 / 93.71 / 94.98 | 97.01 / 92.61 / 97.86 / 98.03
iSUN | 55.82 / 83.43 / 76.89 / 89.19 | 94.95 / 96.08 / 96.50 / 97.87 | 92.16 / 89.65 / 93.22 / 93.24 | 96.88 / 96.11 / 97.80 / 97.94
LSUN (R) | 57.42 / 80.49 / 80.13 / 92.24 | 95.15 / 95.46 / 96.62 / 99.10 | 92.93 / 88.28 / 93.90 / 95.54 | 96.81 / 95.20 / 97.76 / 98.94
LSUN (C) | 41.95 / 74.10 / 56.55 / 86.85 | 93.52 / 89.59 / 94.84 / 98.32 | 91.98 / 84.75 / 92.20 / 93.88 | 95.78 / 85.33 / 96.50 / 98.23
SVHN | 60.33 / 56.07 / 77.59 / 94.53 | 94.26 / 87.00 / 95.58 / 98.81 | 92.93 / 80.89 / 91.61 / 94.81 | 90.55 / 71.51 / 90.39 / 96.86
CIFAR100 | 41.39 / 35.31 / 50.09 / 58.91 | 87.30 / 75.29 / 86.60 / 91.13 | 82.10 / 69.52 / 80.99 / 87.74 | 85.91 / 71.10 / 83.36 / 88.39
Avg | 53.60 / 69.74 / 71.21 / 86.15 | 93.67 / 90.36 / 94.78 / 97.48 | 91.06 / 84.08 / 91.26 / 93.76 | 94.22 / 86.67 / 94.46 / 96.66
Table 1: Comparison of OOD detection performance with other post-hoc methods. Experiments use CIFAR10 as the ID dataset and ResNet34 as the backbone. For ODIN, the temperature and noise magnitude are obtained from the original training data.

4 EXPERIMENTS

4.1 Experimental Setup

We use CIFAR10 as the in-distribution (ID) dataset and ResNet34 as the base model. By default, the model is first trained on natural data and then fixed at test time to detect anomalous inputs from a mixture of 10,000 ID images (CIFAR10) and 10,000 OOD images (other natural datasets). For our data-free approach, we invert 2,500 virtual samples per class as labeled data.

We apply several post-hoc methods as baselines: Softmax Score [1], ODIN [3], and the Energy Based Method (EBM) [4]. In addition, approaches that depend only on training data (1-D [18], G-ODIN [19], Gram [10]) and the data-available reference Ma-dis. [2] are considered. Following the OOD literature [1, 2, 3, 4], we adopt threshold-free metrics [20] to evaluate performance: TNR at 95% TPR, AUROC, detection accuracy, and AUPR_in.

4.2 Main Results

As shown in Table 1, we evaluate the above metrics on 7 OOD datasets. Among these baselines, Softmax Score [1] and EBM [4] (without fine-tuning) are hyperparameter-free, and the hyperparameters of ODIN [3] come from training data. On all OOD datasets, our C2IR outperforms the other post-hoc methods without any support from natural data and, in particular, achieves an average improvement of +14.94% TNR when 95% of CIFAR10 images are correctly classified. Moreover, our method improves performance markedly more on the far-OOD dataset SVHN than on near-OOD datasets (e.g., CIFAR100).

Figures 2(a) and 2(b) show visualizations of natural bird images in CIFAR10 and impressions from the pretrained network (ResNet34). The differences of intermediate activation means are shown in Figure 2(c). While in- and out-of-distribution statistics are distinguishable at each layer, the virtual statistics are closer to the natural data than the BN running averages, demonstrating that these virtual statistics can contribute more to detecting OOD examples.

4.3 Effect of Our OOD Detection Metric

As shown in Table 2, we compare C2IR with several advanced in-distribution training methods (1-D [18], G-ODIN [19], GRAM [10]) given access only to virtual data. The data-available (both ID and OOD) method Ma-dis. [2] is also included as a reference. Distribution shifts in the synthesized images cause poor performance for the other baselines, while our method exploits the virtual data to reach 98.9% average AUROC, comparable even to the data-available method.

Methods | Data Access | T | L | S | Avg
Ma-dis. [2] | ID & OOD | 99.5 | 99.7 | 99.1 | 99.4
1-D [18] | Virtual | 37.8 | 36.2 | 64.2 | 46.1
G-ODIN [19] | Virtual | 53.9 | 62.8 | 79.4 | 65.4
GRAM [10] | Virtual | 84.7 | 82.5 | 94.2 | 87.1
C2IR (Ours) | Virtual | 98.7 | 99.1 | 98.8 | 98.9
Table 2: AUROC (%) evaluation on virtual data, with the data-available method Ma-dis. [2] as reference. T denotes TinyImageNet (R), L denotes LSUN (R), and S denotes SVHN.

4.4 Effects of MGI and Virtual Statistics

To verify the effectiveness of our MGI method and the virtual activation means, we conduct the ablation study shown in Figure 2(d). MGI is compared with three baselines: 1) using only the penultimate layer (penu.) [6]; 2) averaging all layers’ statistics without weights (mean); 3) random weights. We additionally evaluate our metrics using only BN running averages for OOD detection. The results show that our virtual class-conditional activation means outperform BN statistics under all of these strategies. Furthermore, compared with the other baselines, activation means equipped with MGI achieve the best performance.

[Figure 2: (a) Bird images in CIFAR10; (b) Impressions in the network; (c) Comparison of activation means; (d) Ablation study]
Fig. 2: 2(a)/2(b): Visualization of natural and virtual bird images. 2(c): We compare activation means at each layer among ID data (1,000 bird images from CIFAR10), OOD data (1,000 images from SVHN), virtual statistics of the bird class from our method, and BN running averages. 2(d): Ablation study on our MGI method and virtual statistics.

5 Conclusion

We propose a novel OOD detection method that requires no access to the original training dataset, utilizing image impressions to recover class-conditional feature statistics from a fixed model. Experiments indicate that our method outperforms other post-hoc baselines and achieves competitive results. For future work, we will investigate efficiency issues in data preparation; the potential of this method for continual learning is also worth exploring.

6 ACKNOWLEDGMENT

This work is supported by the Key Research and Development Program of Guangdong Province (grant No. 2021B0101400003). The corresponding author is Xiaoyang Qu (quxiaoy@gmail.com).

References

  • [1] Dan Hendrycks and Kevin Gimpel, “A baseline for detecting misclassified and out-of-distribution examples in neural networks,” in 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017.
  • [2] Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin, “A simple unified framework for detecting out-of-distribution samples and adversarial attacks,” Advances in neural information processing systems, vol. 31, 2018.
  • [3] Shiyu Liang, Yixuan Li, and R. Srikant, “Enhancing the reliability of out-of-distribution image detection in neural networks,” in 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings, 2018.
  • [4] Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li, “Energy-based out-of-distribution detection,” Advances in neural information processing systems, vol. 33, 2020.
  • [5] Dan Hendrycks, Mantas Mazeika, and Thomas Dietterich, “Deep anomaly detection with outlier exposure,” arXiv preprint arXiv:1812.04606, 2018.
  • [6] Yiyou Sun, Chuan Guo, and Yixuan Li, “React: Out-of-distribution detection with rectified activations,” Advances in Neural Information Processing Systems, vol. 34, 2021.
  • [7] Xin Dong, Junfeng Guo, Ang Li, Wei-Te Ting, Cong Liu, and HT Kung, “Neural mean discrepancy for efficient out-of-distribution detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 19217–19227.
  • [8] Hongxu Yin, Pavlo Molchanov, Jose M Alvarez, Zhizhong Li, Arun Mallya, Derek Hoiem, Niraj K Jha, and Jan Kautz, “Dreaming to distill: Data-free knowledge transfer via DeepInversion,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 8715–8724.
  • [9] Sheng Jia, Ehsan Nezhadarya, Yuhuai Wu, and Jimmy Ba, “Efficient statistical tests: A neural tangent kernel approach,” in International Conference on Machine Learning. PMLR, 2021, pp. 4893–4903.
  • [10] Chandramouli Shama Sastry and Sageev Oore, “Detecting out-of-distribution examples with gram matrices,” in International Conference on Machine Learning. PMLR, 2020, pp. 8491–8501.
  • [11] Ziqian Lin, Sreya Dutta Roy, and Yixuan Li, “Mood: Multi-level out-of-distribution detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 15313–15323.
  • [12] Hanting Chen, Yunhe Wang, Chang Xu, Zhaohui Yang, Chuanjian Liu, Boxin Shi, Chunjing Xu, Chao Xu, and Qi Tian, “Data-free learning of student networks,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 3514–3522.
  • [13] Xiaoyang Qu, Jianzong Wang, and Jing Xiao, “Enhancing data-free adversarial distillation with activation regularization and virtual interpolation,” in ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021, pp. 3340–3344.
  • [14] Alfred Müller, “Integral probability metrics and their generating classes of functions,” Advances in Applied Probability, vol. 29, no. 2, 1997.
  • [15] Alexander Mordvintsev, Christopher Olah, and Mike Tyka, “Inceptionism: Going deeper into neural networks,” 2015.
  • [16] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra, “Grad-cam: Visual explanations from deep networks via gradient-based localization,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 618–626.
  • [17] Andrew Ilyas, Shibani Santurkar, Logan Engstrom, Brandon Tran, and Aleksander Madry, “Adversarial examples are not bugs, they are features,” Advances in neural information processing systems, vol. 32, 2019.
  • [18] Alireza Zaeemzadeh, Niccolo Bisagno, Zeno Sambugaro, Nicola Conci, Nazanin Rahnavard, and Mubarak Shah, “Out-of-distribution detection using union of 1-dimensional subspaces,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 9452–9461.
  • [19] Yen-Chang Hsu, Yilin Shen, Hongxia Jin, and Zsolt Kira, “Generalized odin: Detecting out-of-distribution image without learning from out-of-distribution data,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 10951–10960.
  • [20] Jesse Davis and Mark Goadrich, “The relationship between precision-recall and roc curves,” in Proceedings of the 23rd international conference on Machine learning, 2006, pp. 233–240.