
Distributionally Robust Safe Screening

Hiroyuki Hanada (RIKEN, Wako, Saitama, Japan; hiroyuki.hanada@riken.jp)    Satoshi Akahane (Nagoya University, Nagoya, Aichi, Japan)    Tatsuya Aoyama (Nagoya University)    Tomonari Tanaka (Nagoya University)    Yoshito Okura (Nagoya University)    Yu Inatsu (Nagoya Institute of Technology, Nagoya, Aichi, Japan)    Noriaki Hashimoto (RIKEN)    Taro Murayama (DENSO CORPORATION, Kariya, Aichi, Japan)    Lee Hanju (DENSO CORPORATION)    Shinya Kojima (DENSO CORPORATION)    Ichiro Takeuchi (Nagoya University and RIKEN; ichiro.takeuchi@mae.nagoya-u.ac.jp)
Abstract

In this study, we propose a method, Distributionally Robust Safe Screening (DRSS), for identifying unnecessary samples and features within a distributionally robust (DR) covariate shift setting. This method effectively combines DR learning, a paradigm aimed at enhancing model robustness against variations in data distribution, with safe screening (SS), a sparse optimization technique designed to identify irrelevant samples and features prior to model training. The core concept of the DRSS method is to reformulate the DR covariate-shift problem as a weighted empirical risk minimization problem, where the weights are subject to uncertainty within a predetermined range. By extending the SS technique to accommodate this weight uncertainty, the DRSS method can reliably identify unnecessary samples and features under any future distribution within a specified range. We provide a theoretical guarantee for the DRSS method and validate its performance through numerical experiments on both synthetic and real-world datasets.

1 Introduction

In this study, we consider the problem of identifying unnecessary samples and features in a class of supervised learning problems within dynamically changing environments. Identifying unnecessary samples/features offers several benefits. It reduces the storage space required to keep training data for future updates of machine learning (ML) models. Moreover, in situations demanding real-time adaptation of ML models to rapid environmental changes, using fewer samples/features enables more efficient learning.

Our basic idea for tackling this problem is to effectively combine distributionally robust (DR) learning and safe screening (SS). DR learning is an ML paradigm that focuses on developing models robust to variations in the data distribution, providing performance guarantees across different distributions (see, e.g., [1]). On the other hand, SS refers to sparse optimization techniques that can identify irrelevant samples/features before model training, ensuring computational efficiency by avoiding unnecessary computations on samples/features that do not contribute to the final solution [2, 3]. The key technical idea of SS is to obtain a bound on the optimal solution before solving the optimization problem. This allows for the identification of unnecessary samples/features even without knowing the optimal solution.

As a specific scenario of a dynamically changing environment, we consider the covariate shift setting [4, 5] with an unknown test distribution. In this setting, the distribution of input features may change between the training and test phases, yet the actual nature of these changes remains unknown. An ML problem (e.g., a regression/classification problem) in the covariate shift setting can be formulated as a weighted empirical risk minimization (weighted ERM) problem, where weights are assigned based on the density ratio of each sample between the training and test distributions. Namely, by assigning higher weights to training samples that are important in the test distribution, the model can focus on learning from relevant samples and mitigate the impact of distribution differences between the training and test phases. If the distribution in the test phase is known, the weights can be uniquely fixed. However, if the test distribution is unknown, it is necessary to solve a weighted ERM problem with unknown weights.

Our main contribution is to propose a DRSS method for covariate shift setting with unknown test distribution. The proposed method can identify unnecessary samples/features regardless of how the distribution changes within a certain range in the test phase. To address this problem, we extend the existing SS methods in two stages. The first is to extend the SS for ERM so that it can be applied to weighted ERM. The second is to further extend the SS so that it can be applied to weighted ERM when the weights are unknown. While the first extension is relatively straightforward, the second extension presents a non-trivial technical challenge (Figure 1). To overcome this challenge, we derive a novel bound of the optimal solutions of the weighted ERM problem, which properly accounts for the uncertainty in weights stemming from the uncertainty of the test distribution.

In this study, we consider DRSS for samples in sample-sparse models such as the SVM [6], and DRSS for features in feature-sparse models such as the Lasso [7]. We refer to the DRSS for samples as distributionally robust safe sample screening (DRSsS) and that for features as distributionally robust safe feature screening (DRSfS).

Our contributions in this study are summarized as follows. First, by effectively combining DR and SS, we introduce a framework for identifying unnecessary samples/features under dynamically changing, uncertain environments. Second, we consider a DR covariate-shift setting where the input distribution of an ERM problem changes within a certain range. In this setting, we propose a novel method, called the DRSS method, that can identify samples/features guaranteed not to affect the optimal solution, regardless of how the distribution changes within the specified range. Finally, through numerical experiments, we verify the effectiveness of the proposed DRSS method. Although the DRSS method is developed for convex ERM problems, in order to demonstrate its applicability to deep learning models, we also present results where the DRSS method is applied in a problem setting where the final layer of the model is fine-tuned according to changes in the test distribution.

Figure 1: Schematic illustration of the proposed Distributionally Robust Safe Screening (DRSS) method. Panel A displays the training samples, each assigned equal weight, as indicated by the uniform size of the points. Panel B depicts various unknown test distributions, highlighting how the significance of training samples varies with different realizations of the test distribution. Panel C shows the outcomes of safe sample screening (SsS) across multiple realizations of test distributions. Finally, Panel D presents the results of the proposed DRSS method, demonstrating its capability to identify redundant samples regardless of the observed test distribution.

1.1 Related Works

The DR setting has been explored in various ML problems, aiming to enhance model robustness against data distribution variations. A DR learning problem is typically formulated as a worst-case optimization problem, since the goal of DR learning is to ensure model performance under the worst-case data distribution within a specified range. Hence, a variety of optimization techniques tailored to DR learning have been investigated within both the ML and optimization communities [8, 9, 1]. The proposed DRSS method is one such DR learning method, focusing specifically on the problem of sample/feature deletion. The ability to identify irrelevant samples/features is of practical significance. For example, in the context of continual learning (see, e.g., [10]), it is crucial to effectively manage data by selectively retaining and discarding samples/features, especially in anticipation of changes in future data distributions. Incorrect deletion of essential data can lead to catastrophic forgetting [11], a phenomenon where an ML model, after being trained on new data, quickly loses information previously learned from older datasets. The proposed DRSS method tackles this challenge by identifying samples/features that, regardless of future data distribution shifts, will not have any influence on any model newly trained in the future.

SS refers to optimization techniques in sparse learning that identify and exclude irrelevant samples or features from the learning process. SS can reduce computational cost without changing the final trained model. Safe feature screening (SfS) was first introduced by [2] for the Lasso, and safe sample screening (SsS) was subsequently proposed by [3] for the SVM. Among the various SS methods developed so far, the most commonly used approach is based on the duality gap [12, 13]. Our proposed DRSS method also adopts this approach. Over the past decade, SS has seen diverse developments, including methodological improvements and expanded application scopes [14, 15, 16, 17, 18, 19, 20, 21]. Unlike other SS studies that primarily focus on reducing computational costs, this study adopts SS for a different purpose: we employ SS in scenarios where the data distribution varies within a defined range, aiming to discard unnecessary samples/features. To our knowledge, no existing studies have utilized SS within the DR learning framework.

2 Preliminaries

Notations used in this paper are described in Table 1.

Table 1: Notations used in the paper. $\mathbb{R}$: all real numbers, $\mathbb{N}$: all positive integers, $n,m,p\in\mathbb{N}$: integers, $f:\mathbb{R}^{n}\to\mathbb{R}\cup\{+\infty\}$: convex function, $M\in\mathbb{R}^{n\times m}$: matrix, $\bm{v}\in\mathbb{R}^{n}$: vector.
$m_{ij}\in\mathbb{R}$ (lowercase of a matrix variable): the element at the $i^{\mathrm{th}}$ row and the $j^{\mathrm{th}}$ column of $M$
$v_{i}\in\mathbb{R}$ (nonbold font of a vector variable): the $i^{\mathrm{th}}$ element of $\bm{v}$
$M_{i:}\in\mathbb{R}^{1\times m}$: the $i^{\mathrm{th}}$ row of $M$
$M_{:j}\in\mathbb{R}^{n\times 1}$: the $j^{\mathrm{th}}$ column of $M$
$[n]$: $\{1,2,\dots,n\}$
$\mathbb{R}_{\geq 0}$: all nonnegative real numbers
$\otimes$: elementwise product
$\mathrm{diag}(\bm{v})\in\mathbb{R}^{n\times n}$: diagonal matrix with $(\mathrm{diag}(\bm{v}))_{ii}=v_{i}$ and $(\mathrm{diag}(\bm{v}))_{ij}=0$ ($i\neq j$)
$\bm{v}{\times\Box}M\in\mathbb{R}^{n\times m}$: $\mathrm{diag}(\bm{v})M$
$\bm{0}_{n}\in\mathbb{R}^{n}$: $[0,0,\dots,0]^{\top}$ (vector of size $n$)
$\bm{1}_{n}\in\mathbb{R}^{n}$: $[1,1,\dots,1]^{\top}$ (vector of size $n$)
$\|\bm{v}\|_{p}\in\mathbb{R}_{\geq 0}$: $(\sum_{i=1}^{n}|v_{i}|^{p})^{1/p}$ ($p$-norm)
$\partial f(\bm{v})\subseteq\mathbb{R}^{n}$: the set of all $\bm{g}\in\mathbb{R}^{n}$ such that $f(\bm{v}^{\prime})-f(\bm{v})\geq\bm{g}^{\top}(\bm{v}^{\prime}-\bm{v})$ for any $\bm{v}^{\prime}\in\mathbb{R}^{n}$ (subgradient)
${\cal Z}[f]\subseteq\mathbb{R}^{n}$: $\{\bm{v}^{\prime}\in\mathbb{R}^{n}\mid\partial f(\bm{v}^{\prime})=\{\bm{0}_{n}\}\}$
$f^{*}(\bm{v})\in\mathbb{R}\cup\{+\infty\}$: $\sup_{\bm{v}^{\prime}\in\mathbb{R}^{n}}(\bm{v}^{\top}\bm{v}^{\prime}-f(\bm{v}^{\prime}))$ (convex conjugate)
"$f$ is $\kappa$-strongly convex" ($\kappa>0$): $f(\bm{v})-\frac{\kappa}{2}\|\bm{v}\|_{2}^{2}$ is convex with respect to $\bm{v}$
"$f$ is $\mu$-smooth" ($\mu>0$): $\|\nabla f(\bm{v})-\nabla f(\bm{v}^{\prime})\|_{2}\leq\mu\|\bm{v}-\bm{v}^{\prime}\|_{2}$ for any $\bm{v},\bm{v}^{\prime}\in\mathbb{R}^{n}$

2.1 Weighted Regularized Empirical Risk Minimization (Weighted RERM) for Linear Prediction

We mainly assume the weighted regularized empirical risk minimization (weighted RERM) for linear prediction. This may include kernelized versions, which are discussed in Appendix C. Suppose that we learn the model parameters as linear prediction coefficients, that is, learn 𝜷(𝒘)d\bm{\beta}^{*(\bm{w})}\in\mathbb{R}^{d} such that the outcome for a sample 𝒙d\bm{x}\in\mathbb{R}^{d} is predicted as 𝒙𝜷(𝒘)\bm{x}^{\top}\bm{\beta}^{*(\bm{w})}.

Definition 2.1.

Given $n$ training samples of $d$-dimensional input variables, scalar output variables and scalar sample weights, denoted by $X\in\mathbb{R}^{n\times d}$, $\bm{y}\in\mathbb{R}^{n}$ and $\bm{w}\in\mathbb{R}_{\geq 0}^{n}$, respectively, the training computation of weighted RERM for linear prediction is formulated as follows:

\bm{\beta}^{*(\bm{w})}:=\mathop{\rm argmin}\limits_{\bm{\beta}\in\mathbb{R}^{d}}P_{\bm{w}}(\bm{\beta}),\quad\text{where}\quad P_{\bm{w}}(\bm{\beta}):=\sum_{i=1}^{n}w_{i}\ell_{y_{i}}(\check{X}_{i:}\bm{\beta})+\rho(\bm{\beta}). \quad (1)

Here, $\ell_{y}:\mathbb{R}\to\mathbb{R}$ is a convex loss function (for $\ell_{y}(t)$, we regard only $t$ as the variable of the function and treat $y$ as a constant when taking its subgradient or convex conjugate), $\rho:\mathbb{R}^{d}\to\mathbb{R}$ is a convex regularization function, and $\check{X}\in\mathbb{R}^{n\times d}$ is a matrix computed from $X$ and $\bm{y}$, determined depending on $\ell$. In this paper, unless otherwise noted, we consider binary classification ($\bm{y}\in\{-1,+1\}^{n}$) with $\check{X}:=\bm{y}{\times\Box}X$. For regression ($\bm{y}\in\mathbb{R}^{n}$) we usually set $\check{X}:=X$.
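As a concrete illustration of Definition 2.1, the following is a minimal sketch of the primal objective $P_{\bm{w}}$, assuming the hinge loss and L2 regularization that are used later in Section 4.1; the function and variable names are illustrative only and are not taken from the paper's implementation.

```python
import numpy as np

def primal_objective(beta, X, y, w, lam):
    """P_w(beta) = sum_i w_i * max(0, 1 - y_i * x_i^T beta) + (lam / 2) * ||beta||_2^2."""
    X_check = y[:, None] * X                 # X-check: row-wise scaling of X by the labels
    margins = X_check @ beta                 # X-check_{i:} beta for all i
    losses = np.maximum(0.0, 1.0 - margins)  # hinge loss l_{y_i}
    return w @ losses + 0.5 * lam * beta @ beta
```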

Remark 2.2.

We adopt the formulation $X_{:d}=\bm{1}_{n}$ so that $\beta^{*(\bm{w})}_{d}$ (the last element) represents the coefficient common to all samples (the intercept).

Since \ell and ρ\rho are convex, we can easily confirm that P𝒘(𝜷)P_{\bm{w}}(\bm{\beta}) is convex with respect to 𝜷\bm{\beta}.

Applying Fenchel’s duality theorem (Appendix A.2), we obtain the following dual problem of (1):

\bm{\alpha}^{*(\bm{w})}:=\mathop{\rm argmax}\limits_{\bm{\alpha}\in\mathbb{R}^{n}}D_{\bm{w}}(\bm{\alpha}),\quad\text{where}\quad D_{\bm{w}}(\bm{\alpha}):=-\sum_{i=1}^{n}w_{i}\ell^{*}_{y_{i}}(-\gamma_{i}\alpha_{i})-\rho^{*}(((\bm{\gamma}\otimes\bm{w}){\times\Box}\check{X})^{\top}\bm{\alpha}), \quad (2)

where $\bm{\gamma}$ is a positive-valued vector. The relationship between the original problem (1) (called the primal problem) and the dual problem (2) is described as follows:

P_{\bm{w}}(\bm{\beta}^{*(\bm{w})})=D_{\bm{w}}(\bm{\alpha}^{*(\bm{w})}), \quad (3)
\bm{\beta}^{*(\bm{w})}\in\partial\rho^{*}(((\bm{\gamma}\otimes\bm{w}){\times\Box}\check{X})^{\top}\bm{\alpha}^{*(\bm{w})}), \quad (4)
\forall i\in[n]:\quad-\gamma_{i}\alpha^{*(\bm{w})}_{i}\in\partial\ell_{y_{i}}(\check{X}_{i:}\bm{\beta}^{*(\bm{w})}). \quad (5)

2.2 Sparsity-inducing Loss Functions and Regularization Functions

In weighted RERM, we say that a loss function $\ell$ induces sample-sparsity if elements of $\bm{\alpha}^{*(\bm{w})}$ tend to become zero. Due to (5), this is achieved by $\ell$ such that $\{t\in\mathbb{R}\mid 0\in\partial\ell_{y}(t)\}$ is not a single point but an interval.

Similarly, we say that a regularization function $\rho$ induces feature-sparsity if elements of $\bm{\beta}^{*(\bm{w})}$ tend to become zero. Due to (4), this is achieved by $\rho$ such that $\{\bm{v}\in\mathbb{R}^{d}\mid\exists j\in[d-1]:~0\in[\partial\rho^{*}(\bm{v})]_{j}\}$ is not a single point but a region.

For example, the hinge loss $\ell_{y}(t)=\max\{0,1-t\}$ ($y\in\{-1,+1\}$) is a sample-sparse loss function since $\{t\in\mathbb{R}\mid 0\in\partial\ell_{y}(t)\}=[1,+\infty)$. Similarly, the L1-regularization $\rho(\bm{v})=\lambda\sum_{j=1}^{d-1}|v_{j}|$ ($\lambda>0$: hyperparameter) is a feature-sparse regularization function since $\{\bm{v}\in\mathbb{R}^{d}\mid\exists j\in[d-1]:~0\in[\partial\rho^{*}(\bm{v})]_{j}\}=\{\bm{v}\in\mathbb{R}^{d}\mid\exists j\in[d-1]:~|v_{j}|\leq\lambda,~v_{d}=0\}$. See Section 4 for examples of using them.
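A tiny sketch of the two zero-subgradient regions mentioned above, which are all that the screening rules of Section 4 need; the predicates below only restate ${\cal Z}[\ell_{y}]=(1,+\infty)$ for the hinge loss and the fact that $\partial\sigma_{j}^{*}(v)=\{0\}$ on $(-\lambda,\lambda)$ for the L1-regularization, and the function names are our own illustrative choices.

```python
def in_hinge_zero_region(t):
    # Z[l_y] = (1, +infinity): the subgradient of max(0, 1 - t) is {0} iff t > 1
    return t > 1.0

def in_l1_conjugate_zero_region(v, lam):
    # the subgradient of the conjugate of lam * |.| is {0} on the open interval (-lam, lam)
    return abs(v) < lam
```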

3 Distributionally Robust Safe Screening

In this section we derive DRSS rules for weighted RERM in two steps. First, in Sections 3.1 and 3.2, we present SS rules for weighted RERM without the DR setup; to do this, we extend existing SS rules in [13, 15]. Then we derive the DRSS rules in Section 3.3.

3.1 (Non-DR) Safe Sample Screening

We consider identifying training samples that do not affect the training result $\bm{\beta}^{*(\bm{w})}$. Due to the relationship (4), if $\alpha^{*(\bm{w})}_{i}=0$ for some $i\in[n]$, then the $i^{\mathrm{th}}$ row (sample) of $\check{X}$ does not affect $\bm{\beta}^{*(\bm{w})}$. However, since computing $\bm{\alpha}^{*(\bm{w})}$ is as costly as computing $\bm{\beta}^{*(\bm{w})}$, it is difficult to use this relationship directly. To solve the problem, SsS first identifies a region ${\cal B}^{*(\bm{w})}\subset\mathbb{R}^{d}$ such that $\bm{\beta}^{*(\bm{w})}\in{\cal B}^{*(\bm{w})}$ is assured. Then, with ${\cal B}^{*(\bm{w})}$ and (5), we can conclude that the $i^{\mathrm{th}}$ training sample does not affect the training result $\bm{\beta}^{*(\bm{w})}$ if $\bigcup_{\bm{\beta}\in{\cal B}^{*(\bm{w})}}\partial\ell_{y_{i}}(\check{X}_{i:}\bm{\beta})=\{0\}$.

First we show how to compute ${\cal B}^{*(\bm{w})}$. In this paper we adopt a computation method that is available when the regularization function $\rho$ in $P_{\bm{w}}$ of (1) (and hence $P_{\bm{w}}$ itself) is strongly convex.

Lemma 3.1.

Suppose that $\rho$ in $P_{\bm{w}}$ of (1) (and hence $P_{\bm{w}}$ itself) is $\kappa$-strongly convex. Then, for any $\hat{\bm{\beta}}\in\mathbb{R}^{d}$ and $\hat{\bm{\alpha}}\in\mathbb{R}^{n}$, we can assure $\bm{\beta}^{*(\bm{w})}\in{\cal B}^{*(\bm{w})}$ by taking

{\cal B}^{*(\bm{w})}:=\left\{\bm{\beta}\;\middle|\;\|\bm{\beta}-\hat{\bm{\beta}}\|_{2}\leq r(\bm{w},\bm{\gamma},\kappa,\hat{\bm{\beta}},\hat{\bm{\alpha}})\right\},\quad\text{where}\quad r(\bm{w},\bm{\gamma},\kappa,\hat{\bm{\beta}},\hat{\bm{\alpha}}):=\sqrt{\frac{2}{\kappa}[P_{\bm{w}}(\hat{\bm{\beta}})-D_{\bm{w}}(\hat{\bm{\alpha}})]}.

The proof is presented in Appendix A.3. The quantity $P_{\bm{w}}(\hat{\bm{\beta}})-D_{\bm{w}}(\hat{\bm{\alpha}})$ is known as the duality gap, which must be nonnegative due to (3). So we obtain the following gap safe sample screening rule from Lemma 3.1:

Lemma 3.2.

Under the same assumptions as Lemma 3.1, $\alpha_{i}^{*(\bm{w})}=0$ is assured (i.e., the $i^{\mathrm{th}}$ training sample does not affect the training result $\bm{\beta}^{*(\bm{w})}$) if there exist $\hat{\bm{\beta}}\in\mathbb{R}^{d}$ and $\hat{\bm{\alpha}}\in\mathbb{R}^{n}$ such that

[\check{X}_{i:}\hat{\bm{\beta}}-\|\check{X}_{i:}\|_{2}\,r(\bm{w},\bm{\gamma},\kappa,\hat{\bm{\beta}},\hat{\bm{\alpha}}),\ \check{X}_{i:}\hat{\bm{\beta}}+\|\check{X}_{i:}\|_{2}\,r(\bm{w},\bm{\gamma},\kappa,\hat{\bm{\beta}},\hat{\bm{\alpha}})]\subseteq{\cal Z}[\ell_{y_{i}}].

The proof is presented in Appendix A.4.
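The following is a hedged sketch of Lemmas 3.1 and 3.2 combined, specialized for concreteness to the L1-loss L2-regularized SVM used later in Section 4.1 (so $\kappa=\lambda$ and $\bm{\gamma}=\bm{1}_{n}$); $\hat{\bm{\beta}}$ may be any primal point and $\hat{\bm{\alpha}}$ any dual-feasible point ($0\leq\hat{\alpha}_{i}\leq 1$). The code is illustrative and not the paper's implementation.

```python
import numpy as np

def gap_safe_sample_screening(X, y, w, lam, beta_hat, alpha_hat):
    """Return a boolean mask: True where alpha*_i = 0 is guaranteed (sample removable)."""
    X_check = y[:, None] * X
    margins = X_check @ beta_hat
    # duality gap P_w(beta_hat) - D_w(alpha_hat) for the hinge-loss / L2 model of Section 4.1
    primal = w @ np.maximum(0.0, 1.0 - margins) + 0.5 * lam * beta_hat @ beta_hat
    v = X_check.T @ (w * alpha_hat)                     # ((w x-box X-check))^T alpha_hat
    dual = w @ alpha_hat - (v @ v) / (2.0 * lam)
    r = np.sqrt(2.0 / lam * max(primal - dual, 0.0))    # radius of Lemma 3.1 (kappa = lam)
    row_norms = np.linalg.norm(X_check, axis=1)
    # Lemma 3.2: screen sample i if the whole interval lies in Z[l_{y_i}] = (1, +infinity)
    return margins - row_norms * r > 1.0
```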

3.2 (Non-DR) Safe Feature Screening

We consider identifying j[d]j\in[d] such that βj(𝒘)=0\beta_{j}^{*(\bm{w})}=0, that is, identifying that the jthj^{\mathrm{th}} feature is not used in the prediction, even when the sample weights 𝒘\bm{w} are changed.

For simplicity, suppose that the regularization function $\rho$ is decomposable, that is, $\rho$ is represented as $\rho(\bm{\beta}):=\sum_{j=1}^{d}\sigma_{j}(\beta_{j})$ ($\sigma_{1},\sigma_{2},\dots,\sigma_{d}:\mathbb{R}\to\mathbb{R}$). Then, since $\rho^{*}(\bm{v})=\sum_{j=1}^{d}\sigma^{*}_{j}(v_{j})$ and therefore $[\partial\rho^{*}(\bm{v})]_{j}=\partial\sigma^{*}_{j}(v_{j})$, from (4) we have

\beta^{*(\bm{w})}_{j}\in\partial\sigma^{*}_{j}((\bm{\gamma}\otimes\bm{w}\otimes\check{X}_{:j})^{\top}\bm{\alpha}^{*(\bm{w})})=\partial\sigma^{*}_{j}(\check{\check{X}}_{:j}^{(\bm{\gamma},\bm{w})\top}\bm{\alpha}^{*(\bm{w})}),\quad\text{where}\quad\check{\check{X}}_{:j}^{(\bm{\gamma},\bm{w})}:=\bm{\gamma}\otimes\bm{w}\otimes\check{X}_{:j}.

If we know 𝜶(𝒘)\bm{\alpha}^{*(\bm{w})}, we can identify whether βj(𝒘)=0\beta_{j}^{*(\bm{w})}=0 holds. However, like SsS (Section 3.1), we would like to check the condition without computing 𝜶(𝒘)\bm{\alpha}^{*(\bm{w})} or 𝜷(𝒘)\bm{\beta}^{*(\bm{w})}.

So, like SsS, SfS first considers identifying the possible region 𝒜(𝒘)n{\cal A}^{*(\bm{w})}\subset\mathbb{R}^{n} such that 𝜶(𝒘)𝒜(𝒘)\bm{\alpha}^{*(\bm{w})}\in{\cal A}^{*(\bm{w})} is assured. Then we can conclude that βj(𝒘)=0\beta^{*(\bm{w})}_{j}=0 is assured if 𝜶𝒜(𝒘)σj(Xˇˇ:j(𝜸,𝒘)𝜶)={0}\bigcup_{\bm{\alpha}\in{\cal A}^{*(\bm{w})}}\partial\sigma^{*}_{j}(\check{\check{X}}_{:j}^{(\bm{\gamma},\bm{w})\top}\bm{\alpha})=\{0\}.

Then we show how to compute 𝒜(𝒘){\cal A}^{*(\bm{w})}. With Lemma A.3, we can calculate 𝒜(𝒘){\cal A}^{*(\bm{w})} as follows, if the loss function y\ell_{y} in P𝒘P_{\bm{w}} of (1) is smooth:

Lemma 3.3.

Suppose that $\ell_{y}$ in $P_{\bm{w}}$ of (1) is $\mu$-smooth. Then, for any $\hat{\bm{\beta}}\in\mathbb{R}^{d}$ and $\hat{\bm{\alpha}}\in\mathbb{R}^{n}$, we can assure $\bm{\alpha}^{*(\bm{w})}\in{\cal A}^{*(\bm{w})}$ by taking

{\cal A}^{*(\bm{w})}:=\left\{\bm{\alpha}\;\middle|\;\|\bm{\alpha}-\hat{\bm{\alpha}}\|_{2}\leq\bar{r}(\bm{w},\bm{\gamma},\mu,\hat{\bm{\beta}},\hat{\bm{\alpha}})\right\},\quad\text{where}\quad\bar{r}(\bm{w},\bm{\gamma},\mu,\hat{\bm{\beta}},\hat{\bm{\alpha}}):=\sqrt{\frac{2\mu}{\min_{i\in[n]}w_{i}\gamma_{i}^{2}}[P_{\bm{w}}(\hat{\bm{\beta}})-D_{\bm{w}}(\hat{\bm{\alpha}})]}.

The proof is presented in Appendix A.5. Similar to Lemma 3.2, we obtain the gap safe feature screening rule from Lemma 3.3:

Lemma 3.4.

Under the same assumptions as Lemma 3.3, $\beta_{j}^{*(\bm{w})}=0$ is assured (i.e., the $j^{\mathrm{th}}$ feature does not affect prediction results) if there exist $\hat{\bm{\beta}}\in\mathbb{R}^{d}$ and $\hat{\bm{\alpha}}\in\mathbb{R}^{n}$ such that

[\check{\check{X}}_{:j}^{(\bm{\gamma},\bm{w})\top}\hat{\bm{\alpha}}-\|\check{\check{X}}_{:j}^{(\bm{\gamma},\bm{w})}\|_{2}\,\bar{r}(\bm{w},\bm{\gamma},\mu,\hat{\bm{\beta}},\hat{\bm{\alpha}}),\ \check{\check{X}}_{:j}^{(\bm{\gamma},\bm{w})\top}\hat{\bm{\alpha}}+\|\check{\check{X}}_{:j}^{(\bm{\gamma},\bm{w})}\|_{2}\,\bar{r}(\bm{w},\bm{\gamma},\mu,\hat{\bm{\beta}},\hat{\bm{\alpha}})]\subseteq{\cal Z}[\sigma^{*}_{j}].

The proof is almost the same as that of Lemma 3.2.
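Analogously, here is a hedged sketch of Lemmas 3.3 and 3.4, specialized to the L2-loss L1-regularized SVM used later in Section 4.2 (so $\mu=2$, $\bm{\gamma}=\lambda\bm{1}_{n}$, ${\cal Z}[\sigma_{j}^{*}]=(-\lambda,\lambda)$, and the last feature is the unregularized intercept). The code assumes $\hat{\bm{\alpha}}$ is dual-feasible and is only an illustration.

```python
import numpy as np

def gap_safe_feature_screening(X, y, w, lam, beta_hat, alpha_hat):
    """Return a boolean mask over j in [d-1]: True where beta*_j = 0 is guaranteed."""
    X_check = y[:, None] * X
    margins = X_check @ beta_hat
    primal = w @ np.maximum(0.0, 1.0 - margins) ** 2 + lam * np.abs(beta_hat[:-1]).sum()
    dual = -lam * (w @ (lam * alpha_hat ** 2 - 4.0 * alpha_hat)) / 4.0
    gap = max(primal - dual, 0.0)
    r_bar = np.sqrt(2.0 * 2.0 / (np.min(w) * lam ** 2) * gap)   # mu = 2, gamma_i = lam
    XX = lam * (w[:, None] * X_check)      # column j is X-check-check_{:j}^{(gamma, w)}
    center = XX.T @ alpha_hat
    radius = np.linalg.norm(XX, axis=0) * r_bar
    # Lemma 3.4: screen feature j if the whole interval lies inside (-lam, lam)
    return (np.abs(center) + radius < lam)[:-1]                 # exclude the intercept column
```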

3.3 Application to Distributionally Robust Setup

In Sections 3.1 and 3.2 we presented conditions under which samples or features can be screened out. In this section we show how to use these conditions when the sample weights $\bm{w}$ change.

Definition 3.5 (weight-changing safe screening (WCSS)).

Given $X\in\mathbb{R}^{n\times d}$, $\bm{y}\in\mathbb{R}^{n}$, $\tilde{\bm{w}}\in\mathbb{R}_{\geq 0}^{n}$ and $\bm{w}\in\mathbb{R}_{\geq 0}^{n}$, suppose that $\bm{\beta}^{*(\tilde{\bm{w}})}$ in Definition 2.1 (and also $\bm{\alpha}^{*(\tilde{\bm{w}})}$) has already been computed, but $\bm{\beta}^{*(\bm{w})}$ has not. Then WCSsS (resp. WCSfS) from $\tilde{\bm{w}}$ to $\bm{w}$ is defined as finding $i\in[n]$ satisfying Lemma 3.2 (resp. $j\in[d-1]$ satisfying Lemma 3.4).

Definition 3.6 (Distributionally robust safe screening (DRSS)).

Given $X\in\mathbb{R}^{n\times d}$, $\bm{y}\in\mathbb{R}^{n}$, $\tilde{\bm{w}}\in\mathbb{R}_{\geq 0}^{n}$ and ${\cal W}\subset\mathbb{R}_{\geq 0}^{n}$, suppose that $\bm{\beta}^{*(\tilde{\bm{w}})}$ in Definition 2.1 (and also $\bm{\alpha}^{*(\tilde{\bm{w}})}$) has already been computed. Then DRSsS (resp. DRSfS) for ${\cal W}$ is defined as finding $i\in[n]$ satisfying Lemma 3.2 (resp. $j\in[d-1]$ satisfying Lemma 3.4) for any $\bm{w}\in{\cal W}$.

For Definition 3.5, we only have to apply the SS rules in Lemma 3.2 or 3.4 by setting $\hat{\bm{\beta}}\leftarrow\bm{\beta}^{*(\tilde{\bm{w}})}$ and $\hat{\bm{\alpha}}\leftarrow\bm{\alpha}^{*(\tilde{\bm{w}})}$. On the other hand, for Definition 3.6, we need to maximize or minimize the intervals in Lemma 3.2 or 3.4 over $\bm{w}\in{\cal W}$.

Theorem 3.7.

The DRSsS rule for ${\cal W}$ is calculated as:

[\check{X}_{i:}\bm{\beta}^{*(\tilde{\bm{w}})}-\|\check{X}_{i:}\|_{2}R,\ \check{X}_{i:}\bm{\beta}^{*(\tilde{\bm{w}})}+\|\check{X}_{i:}\|_{2}R]\subseteq{\cal Z}[\ell_{y_{i}}],

where $R:=\max_{\bm{w}\in{\cal W}}r(\bm{w},\bm{\gamma},\kappa,\bm{\beta}^{*(\tilde{\bm{w}})},\bm{\alpha}^{*(\tilde{\bm{w}})})$.

Similarly, the DRSfS rule for ${\cal W}$ is calculated as:

[\underline{L}-N\overline{R},\ \overline{L}+N\overline{R}]\subseteq{\cal Z}[\sigma^{*}_{j}],\quad\text{where}
\underline{L}:=\min_{\bm{w}\in{\cal W}}\check{\check{X}}_{:j}^{(\bm{\gamma},\bm{w})\top}\bm{\alpha}^{*(\tilde{\bm{w}})}=\min_{\bm{w}\in{\cal W}}(\bm{\gamma}\otimes\check{X}_{:j}\otimes\bm{\alpha}^{*(\tilde{\bm{w}})})^{\top}\bm{w},
\overline{L}:=\max_{\bm{w}\in{\cal W}}\check{\check{X}}_{:j}^{(\bm{\gamma},\bm{w})\top}\bm{\alpha}^{*(\tilde{\bm{w}})}=\max_{\bm{w}\in{\cal W}}(\bm{\gamma}\otimes\check{X}_{:j}\otimes\bm{\alpha}^{*(\tilde{\bm{w}})})^{\top}\bm{w},
N:=\max_{\bm{w}\in{\cal W}}\|\check{\check{X}}_{:j}^{(\bm{\gamma},\bm{w})}\|_{2}=\sqrt{\max_{\bm{w}\in{\cal W}}\|\bm{w}\otimes\bm{\gamma}\otimes\check{X}_{:j}\|_{2}^{2}},
\overline{R}:=\max_{\bm{w}\in{\cal W}}\bar{r}(\bm{w},\bm{\gamma},\mu,\bm{\beta}^{*(\tilde{\bm{w}})},\bm{\alpha}^{*(\tilde{\bm{w}})}).

Thus, solving the maximizations and/or minimizations in Theorem 3.7 provides the DRSsS and DRSfS rules. However, how to solve them largely depends on the choice of $\ell$, $\rho$ and ${\cal W}$. In Section 4 we show specific calculations of Theorem 3.7 for some typical setups.

4 DRSS for Typical ML Setups

In this section we show the DRSS rules derived in Section 3.3 for two typical ML setups: DRSsS for the L1-loss L2-regularized SVM (Section 4.1) and DRSfS for the L2-loss L1-regularized SVM (Section 4.2), under ${\cal W}:=\{\bm{w}\mid\|\bm{w}-\tilde{\bm{w}}\|_{2}\leq S\}$.

In these derivations, we need to solve constrained maximizations of convex functions. Although maximizations of convex functions are not easy in general (unlike minimizations), we show in Section 4.3 that the maximizations required here can be solved algorithmically.

4.1 DRSsS for L1-loss L2-regularized SVM

L1-loss L2-regularized SVM is a sample-sparse model for binary classification (𝒚{1,+1}n\bm{y}\in\{-1,+1\}^{n}) that satisfies the preconditions to apply SsS (Lemma 3.1). Detailed calculations are presented in Appendix B.1.

For the L1-loss L2-regularized SVM, we set $\rho$ and $\ell$ as:

\rho(\bm{\beta}):=\frac{\lambda}{2}\|\bm{\beta}\|_{2}^{2}\quad(\lambda>0:~\text{hyperparameter}),\qquad \ell_{y}(t):=\max\{0,1-t\}\quad(y\in\{-1,+1\}).

Then $\rho$ is $\lambda$-strongly convex. Setting $\bm{\gamma}=\bm{1}_{n}$, the dual objective function is described as

D_{\bm{w}}(\bm{\alpha})=\begin{cases}\sum_{i=1}^{n}w_{i}\alpha_{i}-\frac{1}{2\lambda}\bm{\alpha}^{\top}(\bm{w}{\times\Box}\check{X})(\bm{w}{\times\Box}\check{X})^{\top}\bm{\alpha}, & (\forall i\in[n]:0\leq\alpha_{i}\leq 1)\\ -\infty. & (\text{otherwise})\end{cases} \quad (9)

Here, since $D_{\bm{w}}$ takes $-\infty$ outside the feasible region, we may regard this problem as a maximization under the constraint “$\forall i\in[n]:0\leq\alpha_{i}\leq 1$”.

Optimality conditions (4) and (5) are described as:

\bm{\beta}^{*(\bm{w})}=\frac{1}{\lambda}(\bm{w}{\times\Box}\check{X})^{\top}\bm{\alpha}^{*(\bm{w})}, \quad (10)
\forall i\in[n]:\quad\alpha^{*(\bm{w})}_{i}\in\begin{cases}\{1\}, & (\check{X}_{i:}\bm{\beta}^{*(\bm{w})}<1)\\ [0,1], & (\check{X}_{i:}\bm{\beta}^{*(\bm{w})}=1)\\ \{0\}. & (\check{X}_{i:}\bm{\beta}^{*(\bm{w})}>1)\end{cases} \quad (11)

Noticing that ${\cal Z}[\ell_{y_{i}}]=(1,+\infty)$, by Theorem 3.7, the DRSsS rule for ${\cal W}$ is calculated as:

\check{X}_{i:}\bm{\beta}^{*(\tilde{\bm{w}})}-\|\check{X}_{i:}\|_{2}\max_{\bm{w}\in{\cal W}}r(\bm{w},\bm{\gamma},\kappa,\bm{\beta}^{*(\tilde{\bm{w}})},\bm{\alpha}^{*(\tilde{\bm{w}})})>1, \quad (12)
where
r(\bm{w},\bm{\gamma},\kappa,\bm{\beta}^{*(\tilde{\bm{w}})},\bm{\alpha}^{*(\tilde{\bm{w}})}):=\sqrt{\frac{2}{\kappa}[P_{\bm{w}}(\bm{\beta}^{*(\tilde{\bm{w}})})-D_{\bm{w}}(\bm{\alpha}^{*(\tilde{\bm{w}})})]},
P_{\bm{w}}(\bm{\beta}^{*(\tilde{\bm{w}})})-D_{\bm{w}}(\bm{\alpha}^{*(\tilde{\bm{w}})})=\sum_{i=1}^{n}w_{i}[\ell_{y_{i}}(\check{X}_{i:}\bm{\beta}^{*(\tilde{\bm{w}})})-\alpha^{*(\tilde{\bm{w}})}_{i}]+\frac{\lambda}{2}\|\bm{\beta}^{*(\tilde{\bm{w}})}\|_{2}^{2}+\frac{1}{2\lambda}\bm{w}^{\top}(\bm{\alpha}^{*(\tilde{\bm{w}})}{\times\Box}\check{X})(\bm{\alpha}^{*(\tilde{\bm{w}})}{\times\Box}\check{X})^{\top}\bm{w}.

Here, we can see that $P_{\bm{w}}(\bm{\beta}^{*(\tilde{\bm{w}})})-D_{\bm{w}}(\bm{\alpha}^{*(\tilde{\bm{w}})})$, which is the quantity we actually need to maximize, is the sum of a linear function and a convex quadratic function with respect to $\bm{w}\in{\cal W}$ (since $(\bm{\alpha}^{*(\tilde{\bm{w}})}{\times\Box}\check{X})(\bm{\alpha}^{*(\tilde{\bm{w}})}{\times\Box}\check{X})^{\top}$ is positive semidefinite, the quadratic term is convex). Although constrained maximization of a convex function is difficult in general, in this case we can maximize it algorithmically (Section 4.3); a simpler, conservative alternative is sketched below.
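Below is a hedged sketch of the DRSsS rule (12) over ${\cal W}=\{\bm{w}\mid\|\bm{w}-\tilde{\bm{w}}\|_{2}\leq S\}$. Instead of the exact maximization of Lemma 4.1, it upper-bounds the worst-case duality gap by maximizing the linear part with Lemma A.4 and bounding the quadratic part with the triangle inequality; this is conservative (it may screen fewer samples) but remains safe. All names are illustrative.

```python
import numpy as np

def drss_sample_screening(X, y, w_tilde, S, lam, beta_star, alpha_star):
    """Screen samples that stay irrelevant for every w with ||w - w_tilde||_2 <= S."""
    X_check = y[:, None] * X
    margins = X_check @ beta_star
    # linear-in-w part of the gap, maximized over the ball by Lemma A.4, plus the constant term
    a = np.maximum(0.0, 1.0 - margins) - alpha_star
    lin_max = a @ w_tilde + S * np.linalg.norm(a) + 0.5 * lam * beta_star @ beta_star
    # quadratic part (1/(2*lam)) ||M^T w||^2 with M = alpha* x-box X-check,
    # bounded by ||M^T w|| <= ||M^T w_tilde|| + S * sigma_max(M)
    M = alpha_star[:, None] * X_check
    quad_norm = np.linalg.norm(M.T @ w_tilde) + S * np.linalg.norm(M, 2)
    gap_max = max(lin_max + quad_norm ** 2 / (2.0 * lam), 0.0)
    R = np.sqrt(2.0 / lam * gap_max)                    # upper bound on R of rule (12)
    return margins - np.linalg.norm(X_check, axis=1) * R > 1.0
```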

4.2 DRSfS for L2-loss L1-regularized SVM

L2-loss L1-regularized SVM is a feature-sparse model for binary classification (𝒚{1,+1}n\bm{y}\in\{-1,+1\}^{n}) that satisfies the preconditions to apply SfS (Lemma 3.3). Detailed calculations are presented in Appendix B.2.

For the L2-loss L1-regularized SVM, we set $\sigma_{j}$ (and consequently $\rho$) and $\ell$ as:

\forall j\in[d-1]:~\sigma_{j}(\beta_{j}):=\lambda|\beta_{j}|\quad(\lambda>0:~\text{hyperparameter}),\qquad \sigma_{d}(\beta_{d}):=0,\qquad \ell_{y}(t):=(\max\{0,1-t\})^{2}\quad(y\in\{-1,+1\}).

Notice that $\sigma_{d}(\beta_{d})$ is defined as $0$ rather than $\lambda|\beta_{d}|$: the intercept is rarely regularized with L1-regularization.

Setting $\bm{\gamma}=\lambda\bm{1}_{n}$, the dual objective function is described as

D_{\bm{w}}(\bm{\alpha})=\begin{cases}-\lambda\sum_{i=1}^{n}w_{i}\frac{\lambda\alpha^{2}_{i}-4\alpha_{i}}{4}, & (\text{(14)--(16) are met})\\ -\infty, & (\text{otherwise})\end{cases} \quad (13)
where
\forall i\in[n]:~\alpha_{i}\geq 0, \quad (14)
\forall j\in[d-1]:~|(\bm{w}\otimes\check{X}_{:j})^{\top}\bm{\alpha}|\leq 1, \quad (15)
(\bm{w}\otimes\check{X}_{:d})^{\top}\bm{\alpha}=(\bm{w}\otimes\bm{y})^{\top}\bm{\alpha}=0. \quad (16)

Optimality conditions (4) and (5) are described as

\forall j\in[d-1]:~|(\bm{w}\otimes\check{X}_{:j})^{\top}\bm{\alpha}^{*(\bm{w})}|<1\Rightarrow\beta^{*(\bm{w})}_{j}=0, \quad (17)
\forall i\in[n]:\quad\alpha^{*(\bm{w})}_{i}=\frac{2}{\lambda}\max\{0,1-\check{X}_{i:}\bm{\beta}^{*(\bm{w})}\}. \quad (18)

Noticing that ${\cal Z}[\sigma_{j}^{*}]=(-\lambda,\lambda)$, by Theorem 3.7, the DRSfS rule for ${\cal W}$ is calculated as:

\underline{L}-N\overline{R}>-\lambda,\quad\overline{L}+N\overline{R}<\lambda,

where

\underline{L}:=\lambda\min_{\bm{w}\in{\cal W}}(\check{X}_{:j}\otimes\bm{\alpha}^{*(\tilde{\bm{w}})})^{\top}\bm{w},\qquad \overline{L}:=\lambda\max_{\bm{w}\in{\cal W}}(\check{X}_{:j}\otimes\bm{\alpha}^{*(\tilde{\bm{w}})})^{\top}\bm{w},
N:=\lambda\sqrt{\max_{\bm{w}\in{\cal W}}\|\bm{w}\otimes\check{X}_{:j}\|_{2}^{2}}=\lambda\sqrt{\max_{\bm{w}\in{\cal W}}\{\bm{w}^{\top}\mathrm{diag}(\check{X}_{:j}\otimes\check{X}_{:j})\bm{w}\}},
\overline{R}:=\max_{\bm{w}\in{\cal W}}\bar{r}(\bm{w},\bm{\gamma},\mu,\bm{\beta}^{*(\tilde{\bm{w}})},\bm{\alpha}^{*(\tilde{\bm{w}})}),
\bar{r}(\bm{w},\bm{\gamma},\mu,\bm{\beta}^{*(\tilde{\bm{w}})},\bm{\alpha}^{*(\tilde{\bm{w}})}):=\sqrt{\frac{2\mu}{\min_{i\in[n]}w_{i}\gamma_{i}^{2}}[P_{\bm{w}}(\bm{\beta}^{*(\tilde{\bm{w}})})-D_{\bm{w}}(\bm{\alpha}^{*(\tilde{\bm{w}})})]},
P_{\bm{w}}(\bm{\beta}^{*(\tilde{\bm{w}})})-D_{\bm{w}}(\bm{\alpha}^{*(\tilde{\bm{w}})})=\sum_{i=1}^{n}w_{i}\left[\ell_{y_{i}}(\check{X}_{i:}\bm{\beta}^{*(\tilde{\bm{w}})})+\lambda\frac{\lambda(\alpha^{*(\tilde{\bm{w}})}_{i})^{2}-4\alpha^{*(\tilde{\bm{w}})}_{i}}{4}\right]+\rho(\bm{\beta}^{*(\tilde{\bm{w}})}).

Here, the expressions in $\underline{L}$ and $\overline{L}$ are linear with respect to $\bm{w}$, and the expression inside the square root in $N$ is a convex quadratic function of $\bm{w}$. Also, $\overline{R}$ involves two factors to be maximized, $\frac{2\mu}{\min_{i\in[n]}w_{i}\gamma_{i}^{2}}$ and $P_{\bm{w}}(\bm{\beta}^{*(\tilde{\bm{w}})})-D_{\bm{w}}(\bm{\alpha}^{*(\tilde{\bm{w}})})$; the former is easily computed, while the latter is linear with respect to $\bm{w}$. So, as with the L1-loss L2-regularized SVM, we can obtain the maximization results by maximizing or minimizing the linear terms via Lemma A.4 in Appendix A, and maximizing the convex quadratic function by the method of Section 4.3; a conservative alternative for $N$ is sketched below.
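The following hedged sketch computes the pieces $\underline{L}$, $\overline{L}$ and $N$ of the DRSfS rule for a single feature $j$ over the same ball ${\cal W}$. The linear terms use Lemma A.4 exactly; $N$ is replaced by an upper bound via the triangle inequality rather than the exact maximization of Section 4.3, which keeps the rule safe but possibly conservative. The helper name is our own.

```python
import numpy as np

def drsfs_interval_pieces(x_check_col_j, alpha_star, w_tilde, S, lam):
    """Return (L_lo, L_hi, N_upper) for one feature j of the rule in Section 4.2."""
    a = x_check_col_j * alpha_star                        # coefficients of the linear term
    L_lo = lam * (a @ w_tilde - S * np.linalg.norm(a))    # Lemma A.4 (minimum over the ball)
    L_hi = lam * (a @ w_tilde + S * np.linalg.norm(a))    # Lemma A.4 (maximum over the ball)
    # ||w (.) x_j|| <= ||w_tilde (.) x_j|| + S * max_i |x_ij|, so N_upper >= the exact N
    N_upper = lam * (np.linalg.norm(w_tilde * x_check_col_j)
                     + S * np.max(np.abs(x_check_col_j)))
    return L_lo, L_hi, N_upper
```

Feature $j$ would then be screened if $L_{\mathrm{lo}}-N\overline{R}>-\lambda$ and $L_{\mathrm{hi}}+N\overline{R}<\lambda$, with $\overline{R}$ bounded analogously.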

4.3 Maximizing Linear and Convex Quadratic Functions in Hyperball Constraint

To derive the DRSS rules of Sections 4.1 and 4.2, we need to solve optimization problems of the following form:

\max_{\bm{w}\in{\cal W}}\ \bm{w}^{\top}A\bm{w}+2\bm{b}^{\top}\bm{w}, \quad (19)
\text{where}\quad{\cal W}:=\{\bm{w}\in\mathbb{R}^{n}\mid\|\bm{w}-\tilde{\bm{w}}\|_{2}\leq S\},\quad\tilde{\bm{w}}\in\mathbb{R}^{n},\quad\bm{b}\in\mathbb{R}^{n},\quad A\in\mathbb{R}^{n\times n}:~\text{symmetric, positive semidefinite, nonzero}.
Lemma 4.1.

The maximization (19) is achieved by the following procedure. First, we define $Q\in\mathbb{R}^{n\times n}$ and $\Phi:=\mathrm{diag}(\phi_{1},\phi_{2},\dots,\phi_{n})$ via the eigendecomposition of $A$ such that $A=Q^{\top}\Phi Q$, where $Q$ is orthogonal ($QQ^{\top}=Q^{\top}Q=I$). Also, let $\bm{\xi}:=-\Phi Q\tilde{\bm{w}}-Q\bm{b}\in\mathbb{R}^{n}$, and

{\cal T}(\nu)=\sum_{i=1}^{n}\left(\frac{\xi_{i}}{\nu-\phi_{i}}\right)^{2}. \quad (20)

Then, the value of the maximization (19) is equal to the largest among the following candidates:

  • For each $\nu$ such that ${\cal T}(\nu)=S^{2}$ (see Lemma 4.2), the value $\nu S^{2}+(\nu\tilde{\bm{w}}+\bm{b})^{\top}Q^{\top}(\Phi-\nu I)^{-1}\bm{\xi}+\bm{b}^{\top}\tilde{\bm{w}}$, and

  • For each $\nu\in\{\phi_{1},\phi_{2},\dots,\phi_{n}\}$ (duplication removed) such that “$\forall i\in[n]:~\phi_{i}=\nu\Rightarrow\xi_{i}=0$”, the value

    \max_{\bm{\tau}\in\mathbb{R}^{n}}[\nu S^{2}+(\nu\tilde{\bm{w}}+\bm{b})^{\top}Q^{\top}\bm{\tau}+\bm{b}^{\top}\tilde{\bm{w}}],
    \text{subject to}\quad\forall i\in{\cal F}_{\nu}:\ \tau_{i}=\frac{\xi_{i}}{\phi_{i}-\nu},\qquad\sum_{i\in{\cal U}_{\nu}}\tau_{i}^{2}=S^{2}-\sum_{i\in{\cal F}_{\nu}}\tau_{i}^{2},
    \text{where}\quad{\cal U}_{\nu}:=\{i\mid i\in[n],~\phi_{i}=\nu\},\quad{\cal F}_{\nu}:=[n]\setminus{\cal U}_{\nu}.

    (Note that the maximization is easily computed by Lemma A.4.)

The proof is presented in Appendix A.6.

Lemma 4.2.

Under the same definitions as Lemma 4.1, the equation ${\cal T}(\nu)=S^{2}$ can be solved by the following procedure. Let $\bm{e}:=[e_{1},e_{2},\dots,e_{N}]$ ($N\leq n$, $k\neq k^{\prime}\Rightarrow e_{k}\neq e_{k^{\prime}}$) be a sequence of indices such that

  1. $e_{k}\in[n]$ for any $k\in[N]$,

  2. $i\in[n]$ is included in $\bm{e}$ if and only if $\xi_{i}\neq 0$, and

  3. $\phi_{e_{1}}\leq\phi_{e_{2}}\leq\dots\leq\phi_{e_{N}}$.

Note that, if $\phi_{e_{k}}<\phi_{e_{k+1}}$ ($k\in[N-1]$), then ${\cal T}(\nu)$ is a convex function on the interval $(\phi_{e_{k}},\phi_{e_{k+1}})$ with $\lim_{\nu\to\phi_{e_{k}}+0}{\cal T}(\nu)=\lim_{\nu\to\phi_{e_{k+1}}-0}{\cal T}(\nu)=+\infty$. Then, unless $N=0$, each of the following intervals contains exactly one solution of ${\cal T}(\nu)=S^{2}$:

  • The intervals $(-\infty,\phi_{e_{1}})$ and $(\phi_{e_{N}},+\infty)$.

  • Let $\nu^{\#(k)}:={\rm argmin}_{\phi_{e_{k}}<\nu<\phi_{e_{k+1}}}{\cal T}(\nu)$. For each $k\in[N-1]$ such that $\phi_{e_{k}}<\phi_{e_{k+1}}$,

    • the intervals $(\phi_{e_{k}},\nu^{\#(k)})$ and $(\nu^{\#(k)},\phi_{e_{k+1}})$ if ${\cal T}(\nu^{\#(k)})<S^{2}$,

    • the interval $[\nu^{\#(k)},\nu^{\#(k)}]$ (i.e., a point) if ${\cal T}(\nu^{\#(k)})=S^{2}$.

It follows that 𝒯(ν)=S2{\cal T}(\nu)=S^{2} has at most 2n2n solutions.

By Lemma 4.2, in order to compute the solutions of ${\cal T}(\nu)=S^{2}$, we only have to compute each $\nu^{\#(k)}$ by Newton's method or a similar procedure, and then compute the solution in each interval, again by a standard root-finding procedure. We show an example of ${\cal T}(\nu)$ in Figure 2, and the proof in Appendix A.7.
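As a concrete reference, here is a hedged sketch of the maximization (19) in the generic ("non-hard") case where $A$ is positive semidefinite and nonzero and $\bm{\xi}$ has a nonzero entry for the largest eigenvalue; in that case the global maximizer corresponds to the unique root of ${\cal T}(\nu)=S^{2}$ on $(\phi_{\max},+\infty)$, found below by bisection instead of Newton's method. The degenerate candidates enumerated in Lemmas 4.1 and 4.2 are not handled here, and all names are illustrative.

```python
import numpy as np

def max_quadratic_over_ball(A, b, w_tilde, S, iters=100):
    """Maximize w^T A w + 2 b^T w over ||w - w_tilde||_2 <= S (generic, non-hard case)."""
    phi, V = np.linalg.eigh(A)              # ascending eigenvalues; A = V diag(phi) V^T
    Q = V.T                                 # so that A = Q^T Phi Q as in Lemma 4.1
    xi = -phi * (Q @ w_tilde) - Q @ b       # xi := -Phi Q w_tilde - Q b
    T = lambda nu: np.sum((xi / (nu - phi)) ** 2)
    phi_max = phi[-1]
    xi_top = np.max(np.abs(xi[np.isclose(phi, phi_max)]))
    assert xi_top > 0, "hard case: fall back to the full enumeration of Lemma 4.1"
    lo = phi_max + 0.5 * xi_top / S         # here T(lo) >= 4 S^2 > S^2
    hi = phi_max + np.linalg.norm(xi) / S   # here T(hi) <= S^2
    for _ in range(iters):                  # bisection for the root of T(nu) = S^2
        mid = 0.5 * (lo + hi)
        if T(mid) > S * S:
            lo = mid
        else:
            hi = mid
    nu = 0.5 * (lo + hi)
    w_star = w_tilde + Q.T @ (xi / (phi - nu))   # maximizer on the boundary of W
    return w_star @ A @ w_star + 2.0 * b @ w_star, w_star
```

For the DRSsS rule of Section 4.1, this routine would be applied (under the same illustrative assumptions) with $A=\frac{1}{2\lambda}(\bm{\alpha}^{*(\tilde{\bm{w}})}{\times\Box}\check{X})(\bm{\alpha}^{*(\tilde{\bm{w}})}{\times\Box}\check{X})^{\top}$ and $b_{i}=\frac{1}{2}[\ell_{y_{i}}(\check{X}_{i:}\bm{\beta}^{*(\tilde{\bm{w}})})-\alpha^{*(\tilde{\bm{w}})}_{i}]$, adding the $\bm{w}$-independent constant afterwards.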

Figure 2: An example of the function ${\cal T}(\nu)$ (black solid line) in Lemmas 4.1 and 4.2. Colored dashed lines denote the terms $(\xi_{e_{k}}/(\nu-\phi_{e_{k}}))^{2}$ in the summation. We can see that, on each interval $(\phi_{e_{k}},\phi_{e_{k+1}})$ ($k\in[N-1]$), the function is convex.

5 Application to Deep Learning

So far, our discussion of SS rules has primarily focused on ML models with linear predictions and convex loss and regularization functions. However, there may be scenarios where we would like to employ more complex ML models, such as deep learning (DL).

For DL models, deriving SS rules for the entire model can be challenging due to the complexity of bounding the change in model parameters against changes in sample weights. However, we can simplify the process by focusing on the fact that each layer of DL is often represented as a convex function. Therefore, we propose applying SS rules specifically to the last layer of DL models.

In this formulation, the layers preceding the last one are regarded as a fixed feature extraction process, even when the sample weights change (see Figure 3). We believe that this approach is valid when the change in sample weights is not significant. We experimentally evaluate the effectiveness of this formulation in Section 6.3; a sketch of the setup follows.
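A hedged sketch of this formulation: the frozen backbone is used only as a feature extractor, and the DRSsS rule for the L1-loss L2-regularized SVM (Section 4.1) is then applied to the extracted last-layer features. The ResNet50 below (torchvision 0.13 or later) is only illustrative of the frozen-backbone idea and is not meant to reproduce the exact architecture of Section 6.3.

```python
import numpy as np
import torch
import torchvision

def extract_last_layer_features(images):
    """images: float tensor of shape (n, 3, H, W); returns fixed features plus intercept."""
    backbone = torchvision.models.resnet50(weights=None)
    backbone.fc = torch.nn.Identity()        # drop the classification head
    backbone.eval()                          # the backbone is frozen (fixed feature extractor)
    with torch.no_grad():
        feats = backbone(images).numpy()     # (n, 2048) features, unaffected by weight changes
    # append the all-ones intercept feature of Remark 2.2
    return np.hstack([feats, np.ones((feats.shape[0], 1))])
```

The resulting feature matrix plays the role of $X$ in the screening rules of Section 4.1.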

Figure 3: Concept of how to apply SS for deep learning. SS is applied to the last layer for the final prediction.

6 Numerical Experiment

6.1 Experimental Settings

We evaluate the performances of DRSsS and DRSfS across different values of acceptable weight changes SS and hyperparameters for regularization strength λ\lambda. Performance is measured using safe screening rates, representing the ratio of screened samples or features to all samples or features. We consider three setups: DRSsS with L1-loss L2-regularized SVM (Section 4.1), DRSfS with L2-loss L1-regularized SVM (Section 4.2), and DRSsS with deep learning (Section 5) where the last layer incorporates DRSsS with L1-loss L2-regularized SVM.

In these experiments, we initialize the sample weights before the change ($\tilde{\bm{w}}$) as $\tilde{\bm{w}}=\bm{1}_{n}$. Then, we set $S$ in the DRSS for ${\cal W}:=\{\bm{w}\mid\|\bm{w}-\tilde{\bm{w}}\|_{2}\leq S\}$ (Section 4) as follows:

  • First we assume a weight change in which the weights for positive samples ($\{i\mid y_{i}=+1\}$) change from $1$ to $a$, while the weights for negative samples ($\{i\mid y_{i}=-1\}$) remain $1$.

  • Then, we define $S$ as the size of the weight change above; specifically, we set $S=\sqrt{n^{+}}|a-1|$ ($n^{+}$: number of positive samples in the training dataset).

We vary $a$ within the range $0.9\leq a\leq 1.1$, assuming a maximum change of up to 10% per sample weight; a small sketch of this construction follows.
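A small sketch of this construction, with an illustrative helper name: rescaling the weights of the $n^{+}$ positive samples from $1$ to $a$ changes $\bm{w}$ by exactly $\sqrt{n^{+}}|a-1|$ in L2-norm, which is the radius $S$ used for ${\cal W}$.

```python
import numpy as np

def weight_change_radius(y, a):
    """S = sqrt(n_plus) * |a - 1| for the perturbation that rescales positive-sample weights."""
    n_plus = int(np.sum(y == +1))
    return np.sqrt(n_plus) * abs(a - 1.0)
```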

6.2 Relationship between the Weight Changes and Safe Screening Rate

Table 2: Datasets for DRSsS/DRSfS experiments. All are binary classification datasets from the LIBSVM dataset collection [22]. The mark $\dagger$ denotes datasets with one feature removed due to computational constraints. See Appendix D.1 for details.
Task    Name               $n$    $n^{+}$  $d$
DRSsS   australian         690    307      15
        breast-cancer      683    239      11
        heart              270    120      14
        ionosphere         351    225      35
        sonar              208    97       61
        splice (train)     1000   517      61
        svmguide1 (train)  3089   2000     5
DRSfS   madelon (train)    2000   1000     500$\dagger$
        sonar              208    97       60$\dagger$
        splice (train)     1000   517      61

First, we present safe screening rates for two SVM setups. The datasets used in these experiments are detailed in Table 2. In this experiment, we adapt the regularization hyperparameter λ\lambda based on the characteristics of the data. These details are described in Appendix D.1.

As an example, for the “sonar” dataset, we show the DRSsS result in Figure 4 and the DRSfS result in Figure 5. Results for other datasets are presented in Appendix D.2.

These plots allow us to assess the tolerance for changes in sample weights. For instance, with $a=0.98$ (the weight of each positive sample is reduced by two percent, or an equivalent weight change in L2-norm), the sample screening rate is 0.31 for the L1-loss L2-regularized SVM with $\lambda=6.58\mathrm{e}{+}1$, and the feature screening rate is 0.29 for the L2-loss L1-regularized SVM with $\lambda=3.47\mathrm{e}{+}1$. This implies that, even if the weights change within such a range, a number of samples or features can still be identified as redundant for prediction.

6.3 Safe Sample Screening for Deep Learning Model

Figure 4: Ratio of screened samples by DRSsS for dataset “sonar”.
Figure 5: Ratio of screened features by DRSfS for dataset “sonar”.
Figure 6: Ratio of screened samples by DRSsS for the CIFAR-10 dataset with the DL model ResNet50.

We applied DRSsS to DL models (Section 5), assuming that all layers are fixed except for the last layer.

We utilized a neural network architecture comprising the following components: firstly, ResNet50 [23] with an output of 2,048 features, followed by a fully connected layer to reduce the features to 10, and finally, L1-loss L2-regularized SVM (Section 4.1) accompanied by the intercept feature (Remark 2.2).

For the experiment, we employed the CIFAR-10 dataset [24], a well-known benchmark dataset for image classification tasks. We configured the network to classify images into two classes: “airplane” and “automobile”. Given that there are 5,000 images for each class, we split the dataset into training:validation:testing=6:2:2, resulting in a total of 6,000 images in the training dataset.

The resulting safe sample screening rates are illustrated in Figure 6. We observed similar outcomes to those obtained with ordinary SVMs in Section 6.2.

This experiment validates the feasibility of applying DRSsS to DL models, demonstrating consistent results with traditional SVM setups.

7 Conclusion

In this paper, we discussed DRSS, which accounts for possible changes in sample weights to represent the DR setup. We developed a method for calculating SS that can handle changes in sample weights by introducing nontrivial computational techniques, such as the constrained maximization of certain convex functions (Section 4.3). Additionally, since SS is usually limited to ML models trained by minimizing convex functions, we provided an application to DL by applying SS to the last layer of the DL model. While this approach is an approximation, it holds certain validity.

As future work, we aim to explore different types of environmental changes. In this paper, we focused on the weight constraint defined by the L2-norm, $\|\bm{w}-\tilde{\bm{w}}\|_{2}\leq S$ (Section 4), due to computational considerations. However, when interpreting changes in weights, an L1-norm constraint $\|\bm{w}-\tilde{\bm{w}}\|_{1}\leq S$ may be more appropriate, as it reflects weight changes obtained by altering the number of samples. Furthermore, in the context of DRSS for DL, we are interested in loosening the constraint of fixing the network except for the last layer. Investigating this aspect could provide valuable insights into the flexibility of DRSS methodologies in DL applications.

Software and Data

The code and the data to reproduce the experiments are available as the attached file.

Potential Broader Impact

This paper contributes to machine learning in dynamically changing environments, a scenario increasingly prevalent in real-world data analyses. We believe that, in such situations, ensuring prediction performance against environmental changes and minimizing storage requirements for expanding datasets will be beneficial. The method does not present significant ethical concerns or foreseeable societal consequences because this work is theoretical and, as of now, has no direct applications that might impact society or ethical considerations.

Acknowledgements

This work was partially supported by MEXT KAKENHI (20H00601), JST CREST (JPMJCR21D3 including AIP challenge program, JPMJCR22N2), JST Moonshot R&D (JPMJMS2033-05), JST AIP Acceleration Research (JPMJCR21U2), NEDO (JPNP18002, JPNP20006) and RIKEN Center for Advanced Intelligence Project.

References

  • [1] Ruidi Chen and Ioannis Ch. Paschalidis. Distributionally robust learning. arXiv Preprint, 2021.
  • [2] Laurent El Ghaoui, Vivian Viallon, and Tarek Rabbani. Safe feature elimination for the lasso and sparse supervised learning problems. Pacific Journal of Optimization, 8(4):667–698, 2012.
  • [3] Kohei Ogawa, Yoshiki Suzuki, and Ichiro Takeuchi. Safe screening of non-support vectors in pathwise svm computation. In Proceedings of the 30th International Conference on Machine Learning, pages 1382–1390, 2013.
  • [4] Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of statistical planning and inference, 90(2):227–244, 2000.
  • [5] Masashi Sugiyama, Matthias Krauledat, and Klaus-Robert Müller. Covariate shift adaptation by importance weighted cross validation. Journal of Machine Learning Research, 8(35):985–1005, 2007.
  • [6] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20:273–297, 1995.
  • [7] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society Series B: Statistical Methodology, 58(1):267–288, 1996.
  • [8] Joel Goh and Melvyn Sim. Distributionally robust optimization and its tractable approximations. Operations Research, 58(4-1):902–917, 2010.
  • [9] Erick Delage and Yinyu Ye. Distributionally robust optimization under moment uncertainty with application to data-driven problems. Operations Research, 58(3):595–612, 2010.
  • [10] Liyuan Wang, Xingxing Zhang, Kuo Yang, Longhui Yu, Chongxuan Li, Lanqing HONG, Shifeng Zhang, Zhenguo Li, Yi Zhong, and Jun Zhu. Memory replay with data compression for continual learning. In International Conference on Learning Representations, 2022.
  • [11] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526, 2017.
  • [12] Olivier Fercoq, Alexandre Gramfort, and Joseph Salmon. Mind the duality gap: safer rules for the lasso. In Proceedings of the 32nd International Conference on Machine Learning, pages 333–342, 2015.
  • [13] Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, and Joseph Salmon. Gap safe screening rules for sparse multi-task and multi-class models. In Advances in Neural Information Processing Systems, pages 811–819, 2015.
  • [14] Shota Okumura, Yoshiki Suzuki, and Ichiro Takeuchi. Quick sensitivity analysis for incremental data modification and its application to leave-one-out cv in linear classification problems. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 885–894, 2015.
  • [15] Atsushi Shibagaki, Masayuki Karasuyama, Kohei Hatano, and Ichiro Takeuchi. Simultaneous safe screening of features and samples in doubly sparse modeling. In International Conference on Machine Learning, pages 1577–1586, 2016.
  • [16] Kazuya Nakagawa, Shinya Suzumura, Masayuki Karasuyama, Koji Tsuda, and Ichiro Takeuchi. Safe pattern pruning: An efficient approach for predictive pattern mining. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1785–1794. ACM, 2016.
  • [17] Shaogang Ren, Shuai Huang, Jieping Ye, and Xiaoning Qian. Safe feature screening for generalized lasso. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(12):2992–3006, 2018.
  • [18] Jiang Zhao, Yitian Xu, and Hamido Fujita. An improved non-parallel universum support vector machine and its safe sample screening rule. Knowledge-Based Systems, 170:79–88, 2019.
  • [19] Zhou Zhai, Bin Gu, Xiang Li, and Heng Huang. Safe sample screening for robust support vector machine. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 6981–6988, 2020.
  • [20] Hongmei Wang and Yitian Xu. A safe double screening strategy for elastic net support vector machine. Information Sciences, 582:382–397, 2022.
  • [21] Takumi Yoshida, Hiroyuki Hanada, Kazuya Nakagawa, Kouichi Taji, Koji Tsuda, and Ichiro Takeuchi. Efficient model selection for predictive pattern mining model by safe pattern pruning. Patterns, 4(12):100890, 2023.
  • [22] Chih-Chung Chang and Chih-Jen Lin. Libsvm: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):27, 2011. Datasets are provided in authors’ website: https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/.
  • [23] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
  • [24] Alex Krizhevsky. The cifar-10 dataset, 2009.
  • [25] Ralph Tyrell Rockafellar. Convex analysis. Princeton university press, 1970.
  • [26] Jean-Baptiste Hiriart-Urruty and Claude Lemaréchal. Convex Analysis and Minimization Algorithms II: Advanced Theory and Bundle Methods. Springer, 1993.

Appendix A Proofs

A.1 General Lemmas

Lemma A.1.

For a function $f:\mathbb{R}^{d}\to\mathbb{R}\cup\{+\infty\}$, $f^{**}$ is equivalent to $f$ if $f$ is convex, proper (i.e., $\exists\bm{v}\in\mathbb{R}^{d}:~f(\bm{v})<+\infty$) and lower-semicontinuous.

Proof.

See Section 12 of [25] for example. ∎

Lemma A.1 is known as the Fenchel-Moreau theorem. In particular, Lemma A.1 holds if $f$ is convex and $\forall\bm{v}\in\mathbb{R}^{d}:~f(\bm{v})<+\infty$.

Lemma A.2.

For a convex function f:d{+}f:\mathbb{R}^{d}\to\mathbb{R}\cup\{+\infty\},

  • ff^{*} is (1/ν)(1/\nu)-strongly convex if ff is proper and ν\nu-smooth.

  • ff^{*} is (1/κ)(1/\kappa)-smooth if ff is proper, lower-semicontinuous and κ\kappa-strongly convex.

Proof.

See Section X.4.2 of [26] for example. ∎

Lemma A.3.

Suppose that $f:\mathbb{R}^{d}\to\mathbb{R}\cup\{+\infty\}$ is a $\kappa$-strongly convex function, and let $\bm{v}^{*}={\rm argmin}_{\bm{v}\in\mathbb{R}^{d}}f(\bm{v})$ be the minimizer of $f$. Then, for any $\bm{v}\in\mathbb{R}^{d}$, we have

\|\bm{v}-\bm{v}^{*}\|_{2}\leq\sqrt{\frac{2}{\kappa}[f(\bm{v})-f(\bm{v}^{*})]}.
Proof.

See [13] for example. ∎

Lemma A.4.

For any vectors $\bm{a},\bm{c}\in\mathbb{R}^{n}$ and $S>0$,

\min_{\bm{v}\in\mathbb{R}^{n}:~\|\bm{v}-\bm{c}\|_{2}\leq S}\bm{a}^{\top}\bm{v}=\bm{a}^{\top}\bm{c}-S\|\bm{a}\|_{2},\qquad \max_{\bm{v}\in\mathbb{R}^{n}:~\|\bm{v}-\bm{c}\|_{2}\leq S}\bm{a}^{\top}\bm{v}=\bm{a}^{\top}\bm{c}+S\|\bm{a}\|_{2}.
Proof.

By the Cauchy-Schwarz inequality,

-\|\bm{a}\|_{2}\|\bm{v}-\bm{c}\|_{2}\leq\bm{a}^{\top}(\bm{v}-\bm{c})\leq\|\bm{a}\|_{2}\|\bm{v}-\bm{c}\|_{2}.

Note that the first inequality becomes an equality if $\exists\omega>0:~\bm{a}=-\omega(\bm{v}-\bm{c})$, while the second inequality becomes an equality if $\exists\omega^{\prime}>0:~\bm{a}=\omega^{\prime}(\bm{v}-\bm{c})$. Moreover, since $\|\bm{v}-\bm{c}\|_{2}\leq S$,

-S\|\bm{a}\|_{2}\leq\bm{a}^{\top}(\bm{v}-\bm{c})\leq S\|\bm{a}\|_{2}

also holds, with equality when $\|\bm{v}-\bm{c}\|_{2}=S$.

On the other hand, if we take $\bm{v}$ satisfying the corresponding equality condition of the Cauchy-Schwarz inequality above, that is,

  • (for the first inequality to be an equality) $\bm{v}=\bm{c}-(S/\|\bm{a}\|_{2})\bm{a}$,

  • (for the second inequality to be an equality) $\bm{v}=\bm{c}+(S/\|\bm{a}\|_{2})\bm{a}$,

then the respective inequality becomes an equality. This proves that $-S\|\bm{a}\|_{2}$ and $S\|\bm{a}\|_{2}$ are indeed the minimum and maximum of $\bm{a}^{\top}(\bm{v}-\bm{c})$, respectively. ∎

A.2 Derivation of Dual Problem by Fenchel’s Duality Theorem

As the formulation of Fenchel’s duality theorem, we follow the one in Section 31 of [25].

Lemma A.5 (A special case of Fenchel’s duality theorem: f,g<+f,g<+\infty).

Let f:nf:\mathbb{R}^{n}\to\mathbb{R} and g:dg:\mathbb{R}^{d}\to\mathbb{R} be convex functions, and An×dA\in\mathbb{R}^{n\times d} be a matrix. Moreover, we define

𝒗:=min𝒗d[f(A𝒗)+g(𝒗)],\displaystyle\bm{v}^{*}:=\min_{\bm{v}\in\mathbb{R}^{d}}[f(A\bm{v})+g(\bm{v})], (21)
𝒖:=max𝒖n[f(𝒖)g(A𝒖)].\displaystyle\bm{u}^{*}:=\max_{\bm{u}\in\mathbb{R}^{n}}[-f^{*}(-\bm{u})-g^{*}(A^{\top}\bm{u})]. (22)

Then Fenchel’s duality theorem assures that

f(A𝒗)+g(𝒗)=f(𝒖)g(A𝒖),\displaystyle f(A\bm{v}^{*})+g(\bm{v}^{*})=-f^{*}(-\bm{u}^{*})-g^{*}(A^{\top}\bm{u}^{*}),
𝒖f(A𝒗),\displaystyle-\bm{u}^{*}\in\partial f(A\bm{v}^{*}),
𝒗g(A𝒖).\displaystyle\bm{v}^{*}\in\partial g^{*}(A^{\top}\bm{u}^{*}).
Sketch of the proof.

Introducing a dummy variable 𝝍n\bm{\psi}\in\mathbb{R}^{n} and a Lagrange multiplier 𝒖n\bm{u}\in\mathbb{R}^{n}, we have

min𝒗d[f(A𝒗)+g(𝒗)]=max𝒖nmin𝒗d,𝝍n[f(𝝍)+g(𝒗)𝒖(A𝒗𝝍)]\displaystyle\min_{\bm{v}\in\mathbb{R}^{d}}[f(A\bm{v})+g(\bm{v})]=\max_{\bm{u}\in\mathbb{R}^{n}}\min_{\bm{v}\in\mathbb{R}^{d},~{}\bm{\psi}\in\mathbb{R}^{n}}[f(\bm{\psi})+g(\bm{v})-\bm{u}^{\top}(A\bm{v}-\bm{\psi})] (23)
=min𝒖nmax𝒗d,𝝍n[f(𝝍)g(𝒗)+𝒖(A𝒗𝝍)]=min𝒖nmax𝒗d,𝝍n[{(𝒖)𝝍f(𝝍)}+{(A𝒖)𝒗g(𝒗)}]\displaystyle=-\min_{\bm{u}\in\mathbb{R}^{n}}\max_{\bm{v}\in\mathbb{R}^{d},~{}\bm{\psi}\in\mathbb{R}^{n}}[-f(\bm{\psi})-g(\bm{v})+\bm{u}^{\top}(A\bm{v}-\bm{\psi})]=-\min_{\bm{u}\in\mathbb{R}^{n}}\max_{\bm{v}\in\mathbb{R}^{d},~{}\bm{\psi}\in\mathbb{R}^{n}}[\{(-\bm{u})^{\top}\bm{\psi}-f(\bm{\psi})\}+\{(A^{\top}\bm{u})^{\top}\bm{v}-g(\bm{v})\}]
=min𝒖n[f(𝒖)+g(A𝒖)]=max𝒖n[f(𝒖)g(A𝒖)].\displaystyle=-\min_{\bm{u}\in\mathbb{R}^{n}}[f^{*}(-\bm{u})+g^{*}(A^{\top}\bm{u})]=\max_{\bm{u}\in\mathbb{R}^{n}}[-f^{*}(-\bm{u})-g^{*}(A^{\top}\bm{u})]. (24)

Moreover, by the optimality conditions of the Lagrangian problem (23), its optima, denoted by \bm{v}^{*}, \bm{\psi}^{*} and \bm{u}^{*}, must satisfy

A𝒗=𝝍,A𝒖g(𝒗),𝒖f(𝝍)=f(A𝒗).\displaystyle A\bm{v}^{*}=\bm{\psi}^{*},\quad A^{\top}\bm{u}^{*}\in\partial g(\bm{v}^{*}),\quad-\bm{u}^{*}\in\partial f(\bm{\psi}^{*})=\partial f(A\bm{v}^{*}).

On the other hand, introducing a dummy variable ϕd\bm{\phi}\in\mathbb{R}^{d} and a Lagrange multiplier 𝒗d\bm{v}\in\mathbb{R}^{d} for (24), we have

max𝒖n[f(𝒖)g(A𝒖)]=min𝒗dmax𝒖n,ϕd[f(𝒖)g(ϕ)𝒗(A𝒖ϕ)]\displaystyle\max_{\bm{u}\in\mathbb{R}^{n}}[-f^{*}(-\bm{u})-g^{*}(A^{\top}\bm{u})]=\min_{\bm{v}\in\mathbb{R}^{d}}\max_{\bm{u}\in\mathbb{R}^{n},\bm{\phi}\in\mathbb{R}^{d}}[-f^{*}(-\bm{u})-g^{*}(\bm{\phi})-\bm{v}^{\top}(A^{\top}\bm{u}-\bm{\phi})] (25)
=min𝒗dmax𝒖n,ϕd[{(A𝒗)(𝒖)f(𝒖)}+{𝒗ϕg(ϕ)}]\displaystyle=\min_{\bm{v}\in\mathbb{R}^{d}}\max_{\bm{u}\in\mathbb{R}^{n},\bm{\phi}\in\mathbb{R}^{d}}[\{(A\bm{v})^{\top}(-\bm{u})-f^{*}(-\bm{u})\}+\{\bm{v}^{\top}\bm{\phi}-g^{*}(\bm{\phi})\}]
=min𝒗d[f(A𝒗)+g(𝒗)]=min𝒗d[f(A𝒗)+g(𝒗)].(Lemma A.1)\displaystyle=\min_{\bm{v}\in\mathbb{R}^{d}}[f^{**}(A\bm{v})+g^{**}(\bm{v})]=\min_{\bm{v}\in\mathbb{R}^{d}}[f(A\bm{v})+g(\bm{v})].\quad(\because~{}\text{Lemma \ref{lem:fenchel-moreau}})

Similarly, by the optimality conditions of the Lagrangian problem (25), its optima, denoted by \bm{u}^{*}, \bm{\phi}^{*} and \bm{v}^{*}, must satisfy

\displaystyle A^{\top}\bm{u}^{*}=\bm{\phi}^{*},\quad\bm{v}^{*}\in\partial g^{*}(\bm{\phi}^{*})=\partial g^{*}(A^{\top}\bm{u}^{*}),\quad A\bm{v}^{*}\in\partial f^{*}(-\bm{u}^{*}).

Lemma A.6 (Dual problem of weighted regularized empirical risk minimization (weighted RERM)).

For the minimization problem

𝜷(𝒘):=argmin𝜷dP𝒘(𝜷),whereP𝒘(𝜷):=i=1nwiyi(Xˇi:𝜷)+ρ(𝜷),\displaystyle\bm{\beta}^{*(\bm{w})}:=\mathop{\rm argmin}\limits_{\bm{\beta}\in\mathbb{R}^{d}}P_{\bm{w}}(\bm{\beta}),\quad\text{where}\quad P_{\bm{w}}(\bm{\beta}):=\sum_{i=1}^{n}w_{i}\ell_{y_{i}}(\check{X}_{i:}\bm{\beta})+\rho(\bm{\beta}), ((1) restated)

we define the dual problem as the one obtained by applying Fenchel’s duality theorem (Lemma A.5), which is defined as

\displaystyle\bm{\alpha}^{*(\bm{w})}:=\mathop{\rm argmax}\limits_{\bm{\alpha}\in\mathbb{R}^{n}}D_{\bm{w}}(\bm{\alpha}),\quad\text{where}\quad D_{\bm{w}}(\bm{\alpha}):=-\sum_{i=1}^{n}w_{i}\ell^{*}_{y_{i}}(-\gamma_{i}\alpha_{i})-\rho^{*}(((\bm{\gamma}\otimes\bm{w}){\raisebox{0.80002pt}{$\times$}\Box}\check{X})^{\top}\bm{\alpha}). ((2) restated)

Moreover, 𝛃(𝐰)\bm{\beta}^{*(\bm{w})} and 𝛂(𝐰)\bm{\alpha}^{*(\bm{w})} must satisfy

P𝒘(𝜷(𝒘))=D𝒘(𝜶(𝒘)),\displaystyle P_{\bm{w}}(\bm{\beta}^{*(\bm{w})})=D_{\bm{w}}(\bm{\alpha}^{*(\bm{w})}), ((3) restated)
𝜷(𝒘)ρ(((𝜸𝒘)×Xˇ)𝜶(𝒘)),\displaystyle\bm{\beta}^{*(\bm{w})}\in\partial\rho^{*}(((\bm{\gamma}\otimes\bm{w}){\raisebox{0.80002pt}{$\times$}\Box}\check{X})^{\top}\bm{\alpha}^{*(\bm{w})}), ((4) restated)
i[n]:γiαi(𝒘)yi(Xˇi:𝜷(𝒘)).\displaystyle\forall i\in[n]:\quad-\gamma_{i}\alpha^{*(\bm{w})}_{i}\in\partial\ell_{y_{i}}(\check{X}_{i:}\bm{\beta}^{*(\bm{w})}). ((5) restated)
Proof.

To apply Fenchel’s duality theorem, we have only to set ff, gg and AA in Lemma A.5 as

f(𝒖):=i=1nwiyi(ui),g(𝜷):=ρ(𝜷),A:=Xˇ.\displaystyle f(\bm{u}):=\sum_{i=1}^{n}w_{i}\ell_{y_{i}}(u_{i}),\quad g(\bm{\beta}):=\rho(\bm{\beta}),\quad A:=\check{X}.

Here, noticing that

f(𝒖)=sup𝒖n[𝒖𝒖i=1nwiyi(ui)]=sup𝒖ni=1n[uiuiwiyi(ui)]\displaystyle f^{*}(\bm{u})=\sup_{\bm{u}^{\prime}\in\mathbb{R}^{n}}[\bm{u}^{\top}\bm{u}^{\prime}-\sum_{i=1}^{n}w_{i}\ell_{y_{i}}(u^{\prime}_{i})]=\sup_{\bm{u}^{\prime}\in\mathbb{R}^{n}}\sum_{i=1}^{n}\left[u_{i}u^{\prime}_{i}-w_{i}\ell_{y_{i}}(u^{\prime}_{i})\right]
=sup𝒖ni=1nwi[uiwiuiyi(ui)]=i=1nwiyi(uiwi),\displaystyle=\sup_{\bm{u}^{\prime}\in\mathbb{R}^{n}}\sum_{i=1}^{n}w_{i}\left[\frac{u_{i}}{w_{i}}u^{\prime}_{i}-\ell_{y_{i}}(u^{\prime}_{i})\right]=\sum_{i=1}^{n}w_{i}\ell_{y_{i}}^{*}\left(\frac{u_{i}}{w_{i}}\right),

from (22) we have

f(𝒖)g(A𝒖)=i=1nwiyi(uiwi)ρ(Xˇ𝒖).\displaystyle-f^{*}(-\bm{u})-g^{*}(A^{\top}\bm{u})=-\sum_{i=1}^{n}w_{i}\ell_{y_{i}}^{*}\left(-\frac{u_{i}}{w_{i}}\right)-\rho^{*}(\check{X}^{\top}\bm{u}).

Replacing uiγiwiαiu_{i}\leftarrow\gamma_{i}w_{i}\alpha_{i}, that is, 𝒖(𝜸𝒘𝜶)\bm{u}\leftarrow(\bm{\gamma}\otimes\bm{w}\otimes\bm{\alpha}), we have the dual problem (2).

The relationships between the primal and the dual problem are described as follows:

𝒖f(A𝒗)𝜸𝒘𝜶(𝒘)f(Xˇ𝜷(𝒘))γiwiαi(𝒘)wiyi(Xˇi:𝜷(𝒘))\displaystyle-\bm{u}^{*}\in\partial f(A\bm{v}^{*})~{}\Rightarrow~{}-\bm{\gamma}\otimes\bm{w}\otimes\bm{\alpha}^{*(\bm{w})}\in\partial f(\check{X}\bm{\beta}^{*(\bm{w})})~{}\Rightarrow~{}-\gamma_{i}w_{i}\alpha^{*(\bm{w})}_{i}\in w_{i}\partial\ell_{y_{i}}(\check{X}_{i:}\bm{\beta}^{*(\bm{w})})
γiαi(𝒘)yi(Xˇi:𝜷(𝒘)),\displaystyle\Rightarrow-\gamma_{i}\alpha^{*(\bm{w})}_{i}\in\partial\ell_{y_{i}}(\check{X}_{i:}\bm{\beta}^{*(\bm{w})}),
𝒗g(A𝒖)𝜷(𝒘)g(Xˇ𝜸𝒘𝜶(𝒘))=g(((𝜸𝒘)×Xˇ)𝜶(𝒘)).\displaystyle\bm{v}^{*}\in\partial g^{*}(A^{\top}\bm{u}^{*})~{}\Rightarrow~{}\bm{\beta}^{*(\bm{w})}\in\partial g^{*}(\check{X}^{\top}\bm{\gamma}\otimes\bm{w}\otimes\bm{\alpha}^{*(\bm{w})})=\partial g^{*}(((\bm{\gamma}\otimes\bm{w}){\raisebox{0.80002pt}{$\times$}\Box}\check{X})^{\top}\bm{\alpha}^{*(\bm{w})}).

A.3 Proof of Lemma 3.1

Proof.

The proof follows the argument in [13]:

𝜷^𝜷(𝒘)2\displaystyle\|\hat{\bm{\beta}}-\bm{\beta}^{*(\bm{w})}\|_{2} 2λ[P𝒘(𝜷^)P𝒘(𝜷(𝒘))]\displaystyle\leq\sqrt{\frac{2}{\lambda}[P_{\bm{w}}(\hat{\bm{\beta}})-P_{\bm{w}}(\bm{\beta}^{*(\bm{w})})]} (\because setting fP𝒘f\leftarrow P_{\bm{w}} in Lemma A.3)
=2λ[P𝒘(𝜷^)D𝒘(𝜶(𝒘))]\displaystyle=\sqrt{\frac{2}{\lambda}[P_{\bm{w}}(\hat{\bm{\beta}})-D_{\bm{w}}(\bm{\alpha}^{*(\bm{w})})]} (\because (3))
2λ[P𝒘(𝜷^)D𝒘(𝜶^)].\displaystyle\leq\sqrt{\frac{2}{\lambda}[P_{\bm{w}}(\hat{\bm{\beta}})-D_{\bm{w}}(\hat{\bm{\alpha}})]}. (\because 𝜶(𝒘)\bm{\alpha}^{*(\bm{w})} is a maximizer of D𝒘D_{\bm{w}})
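In code, the radius of the ball {\cal B}^{*(\bm{w})} is simply the square root of 2/\lambda times the duality gap. A minimal Python sketch, assuming P_{\bm{w}} and D_{\bm{w}} are available as callables and that P_{\bm{w}} is \lambda-strongly convex as in Lemma 3.1:

import math

def ball_radius(P_w, D_w, beta_hat, alpha_hat, lam):
    # The duality gap at an arbitrary primal/dual pair upper-bounds P_w(beta_hat) - P_w(beta*).
    gap = P_w(beta_hat) - D_w(alpha_hat)
    return math.sqrt(2.0 * gap / lam)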

A.4 Proof of Lemma 3.2

Proof.

Due to (5), if \partial\ell_{y_{i}}(\check{X}_{i:}\bm{\beta}^{*(\bm{w})})=\{0\} is assured, then \alpha_{i}^{*(\bm{w})}=0 is assured. Since we do not know \bm{\beta}^{*(\bm{w})} but do know {\cal B}^{*(\bm{w})} (Lemma 3.1), we can assure \alpha_{i}^{*(\bm{w})}=0 if \bigcup_{\bm{\beta}\in{\cal B}^{*(\bm{w})}}\partial\ell_{y_{i}}(\check{X}_{i:}\bm{\beta})=\{0\} is assured. Noticing that \partial\ell_{y_{i}} is monotonically increasing (since \partial\ell_{y_{i}} is a multi-valued function, the monotonicity must be defined accordingly: we say that a multi-valued function F:\mathbb{R}\to 2^{\mathbb{R}} is monotonically increasing if, for any t<t^{\prime}, “\forall s\in F(t), \forall s^{\prime}\in F(t^{\prime}): s\leq s^{\prime}” holds), we have

𝜷(𝒘)yi(Xˇi:𝜷)={0}𝜷(𝒘)Xˇi:𝜷𝒵[yi][min𝜷(𝒘)Xˇi:𝜷,max𝜷(𝒘)Xˇi:𝜷]𝒵[yi]\displaystyle\bigcup_{\bm{\beta}\in{\cal B}^{*(\bm{w})}}\partial\ell_{y_{i}}(\check{X}_{i:}\bm{\beta})=\{0\}\quad\Leftrightarrow\quad\bigcup_{\bm{\beta}\in{\cal B}^{*(\bm{w})}}\check{X}_{i:}\bm{\beta}\subseteq{\cal Z}[\ell_{y_{i}}]\quad\Leftrightarrow\quad[\min_{\bm{\beta}\in{\cal B}^{*(\bm{w})}}\check{X}_{i:}\bm{\beta},\max_{\bm{\beta}\in{\cal B}^{*(\bm{w})}}\check{X}_{i:}\bm{\beta}]\subseteq{\cal Z}[\ell_{y_{i}}]
[Xˇi:𝜷^Xˇi:2r(𝒘,𝜸,κ,𝜷^,𝜶^),Xˇi:𝜷^+Xˇi:2r(𝒘,𝜸,κ,𝜷^,𝜶^)]𝒵[yi].\displaystyle\Leftrightarrow\quad\left[\check{X}_{i:}\hat{\bm{\beta}}-\|\check{X}_{i:}\|_{2}r(\bm{w},\bm{\gamma},\kappa,\hat{\bm{\beta}},\hat{\bm{\alpha}}),~{}\check{X}_{i:}\hat{\bm{\beta}}+\|\check{X}_{i:}\|_{2}r(\bm{w},\bm{\gamma},\kappa,\hat{\bm{\beta}},\hat{\bm{\alpha}})\right]\subseteq{\cal Z}[\ell_{y_{i}}]. (\because Lemma A.4)
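For concreteness, the following minimal Python sketch (numpy assumed) applies this screening rule for the L1 (hinge) loss, for which {\cal Z}[\ell_{y_{i}}]=(1,+\infty); the matrix \check{X}, a primal reference \hat{\bm{\beta}} and the radius r are assumed to be given.

import numpy as np

def screened_samples(X_chk, beta_hat, r):
    # True means alpha_i^{*(w)} = 0 is guaranteed, so sample i can be discarded.
    scores = X_chk @ beta_hat                     # X_chk[i, :] @ beta_hat
    margins = np.linalg.norm(X_chk, axis=1) * r   # ||X_chk[i, :]||_2 * r
    return scores - margins > 1.0                 # whole interval lies in (1, +inf)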

A.5 Proof of Lemma 3.3

Proof.

The proof is almost the same as that for Lemma 3.1 (see Appendix A.3), but we additionally need to show that D𝒘-D_{\bm{w}} is ((mini[n]wiγi2)/μ)((\min_{i\in[n]}w_{i}\gamma_{i}^{2})/\mu)-strongly convex (in this case D𝒘D_{\bm{w}} is called strongly concave).

Since \ell_{y_{i}} is \mu-smooth, Lemma A.2 implies that \ell^{*}_{y_{i}}(t) is (1/\mu)-strongly convex, that is, \ell^{*}_{y_{i}}(t)-(1/(2\mu))t^{2} is convex. Thus,

  • \ell^{*}_{y_{i}}(-\gamma_{i}\alpha_{i})-(1/(2\mu))(\gamma_{i}\alpha_{i})^{2} is convex with respect to \alpha_{i},

  • w_{i}\ell^{*}_{y_{i}}(-\gamma_{i}\alpha_{i})-(w_{i}\gamma_{i}^{2}/(2\mu))\alpha_{i}^{2} is convex with respect to \alpha_{i},

  • \sum_{i=1}^{n}w_{i}\ell^{*}_{y_{i}}(-\gamma_{i}\alpha_{i})-\sum_{i=1}^{n}(w_{i}\gamma_{i}^{2}/(2\mu))\alpha_{i}^{2} is convex with respect to \bm{\alpha}.

So, \sum_{i=1}^{n}w_{i}\ell^{*}_{y_{i}}(-\gamma_{i}\alpha_{i}) remains convex even after subtracting \sum_{k=1}^{n}[\min_{i\in[n]}(w_{i}\gamma_{i}^{2}/(2\mu))]\alpha_{k}^{2}=(1/2)[\min_{i\in[n]}(w_{i}\gamma_{i}^{2}/\mu)]\|\bm{\alpha}\|_{2}^{2}, that is, it is ((\min_{i\in[n]}w_{i}\gamma_{i}^{2})/\mu)-strongly convex. Adding the convex function \rho^{*}(((\bm{\gamma}\otimes\bm{w}){\raisebox{0.80002pt}{$\times$}\Box}\check{X})^{\top}\bm{\alpha}) preserves strong convexity, so -D_{\bm{w}} is ((\min_{i\in[n]}w_{i}\gamma_{i}^{2})/\mu)-strongly convex. ∎

A.6 Proof of Lemma 4.1

Lemma A.7.

For the optimization problem

max𝒘𝒲𝒘A𝒘+2𝒃𝒘,\displaystyle\max_{\bm{w}\in{\cal W}}\bm{w}^{\top}A\bm{w}+2\bm{b}^{\top}\bm{w}, ((19) restated)
subject to𝒲:={𝒘n𝒘𝒘~2S},\displaystyle\text{subject to}\quad{\cal W}:=\{\bm{w}\in\mathbb{R}^{n}\mid\|\bm{w}-\tilde{\bm{w}}\|_{2}\leq S\},
where𝒘~n,𝒃n,\displaystyle\text{where}\quad\tilde{\bm{w}}\in\mathbb{R}^{n},\quad\bm{b}\in\mathbb{R}^{n},
An×n:symmetric, positive semidefinite, nonzero,\displaystyle\phantom{\text{where}}\quad A\in\mathbb{R}^{n\times n}:~{}\text{symmetric, positive semidefinite, nonzero,}

its stationary points are obtained as the solution of the following equations with respect to 𝐰\bm{w} and ν\nu\in\mathbb{R}:

A𝒘+𝒃ν(𝒘𝒘~)=𝟎,\displaystyle A\bm{w}+\bm{b}-\nu(\bm{w}-\tilde{\bm{w}})=\bm{0}, (26)
𝒘𝒘~2=S.\displaystyle\|\bm{w}-\tilde{\bm{w}}\|_{2}=S. (27)

Also, when both (26) and (27) are satisfied, the function to be maximized is calculated as

𝒘A𝒘+2𝒃𝒘=νS2+(ν𝒘~+𝒃)(𝒘𝒘~)+𝒘~𝒃.\displaystyle\bm{w}^{\top}A\bm{w}+2\bm{b}^{\top}\bm{w}=\nu S^{2}+(\nu\tilde{\bm{w}}+\bm{b})^{\top}(\bm{w}-\tilde{\bm{w}})+\tilde{\bm{w}}^{\top}\bm{b}. (28)
Proof.

First, \bm{w}^{\top}A\bm{w}+2\bm{b}^{\top}\bm{w} is convex and, since A is nonzero and positive semidefinite, not constant. Therefore the maximum in (19) is attained in \{\bm{w}\in\mathbb{R}^{n}\mid\|\bm{w}-\tilde{\bm{w}}\|_{2}=S\}, that is, on the surface of the hyperball {\cal W} (Theorem 32.1 of [25]). This proves (27). Using this fact, we write the Lagrangian function with Lagrange multiplier \nu\in\mathbb{R} as:

L(𝒘,ν):=𝒘A𝒘+2𝒃𝒘ν(𝒘𝒘~22S2).\displaystyle L(\bm{w},\nu):=\bm{w}^{\top}A\bm{w}+2\bm{b}^{\top}\bm{w}-\nu(\|\bm{w}-\tilde{\bm{w}}\|_{2}^{2}-S^{2}).

Then, by the method of Lagrange multipliers, the stationary points of (19) are obtained as

L𝒘=2A𝒘+2𝒃2ν(𝒘𝒘~)=0,\displaystyle\frac{\partial L}{\partial\bm{w}}=2A\bm{w}+2\bm{b}-2\nu(\bm{w}-\tilde{\bm{w}})=0,
Lν=𝒘𝒘~22S2=0,\displaystyle\frac{\partial L}{\partial\nu}=\|\bm{w}-\tilde{\bm{w}}\|_{2}^{2}-S^{2}=0,

where the former yields (26).

Finally we show (28). If both (26) and (27) are satisfied,

𝒘A𝒘+2𝒃𝒘\displaystyle\bm{w}^{\top}A\bm{w}+2\bm{b}^{\top}\bm{w} =𝒘(ν(𝒘𝒘~)𝒃)+2𝒃𝒘\displaystyle=\bm{w}^{\top}(\nu(\bm{w}-\tilde{\bm{w}})-\bm{b})+2\bm{b}^{\top}\bm{w} (\because (26))
=ν𝒘(𝒘𝒘~)+𝒃𝒘\displaystyle=\nu\bm{w}^{\top}(\bm{w}-\tilde{\bm{w}})+\bm{b}^{\top}\bm{w}
=ν(𝒘𝒘~)(𝒘𝒘~)+ν𝒘~(𝒘𝒘~)+𝒃(𝒘𝒘~)+𝒃𝒘~\displaystyle=\nu(\bm{w}-\tilde{\bm{w}})^{\top}(\bm{w}-\tilde{\bm{w}})+\nu\tilde{\bm{w}}^{\top}(\bm{w}-\tilde{\bm{w}})+\bm{b}^{\top}(\bm{w}-\tilde{\bm{w}})+\bm{b}^{\top}\tilde{\bm{w}}
=νS2+ν𝒘~(𝒘𝒘~)+𝒃(𝒘𝒘~)+𝒃𝒘~\displaystyle=\nu S^{2}+\nu\tilde{\bm{w}}^{\top}(\bm{w}-\tilde{\bm{w}})+\bm{b}^{\top}(\bm{w}-\tilde{\bm{w}})+\bm{b}^{\top}\tilde{\bm{w}} (\because (27))
=νS2+(ν𝒘~+𝒃)(𝒘𝒘~)+𝒃𝒘~\displaystyle=\nu S^{2}+(\nu\tilde{\bm{w}}+\bm{b})^{\top}(\bm{w}-\tilde{\bm{w}})+\bm{b}^{\top}\tilde{\bm{w}} ((28) restated)

Proof of Lemma 4.1.

The condition (26) is calculated as

A𝒘+𝒃=ν(𝒘𝒘~),\displaystyle A\bm{w}+\bm{b}=\nu(\bm{w}-\tilde{\bm{w}}),
(AνI)(𝒘𝒘~)=A𝒘~𝒃.\displaystyle(A-\nu I)(\bm{w}-\tilde{\bm{w}})=-A\tilde{\bm{w}}-\bm{b}.

Here, let us apply eigendecomposition of AA, denoted by A=QΦQA=Q^{\top}\Phi Q, where Qn×nQ\in\mathbb{R}^{n\times n} is orthogonal (QQ=QQ=IQQ^{\top}=Q^{\top}Q=I) and Φ:=diag(ϕ1,ϕ2,,ϕn)\Phi:=\mathrm{diag}(\phi_{1},\phi_{2},\dots,\phi_{n}) is a diagonal matrix consisting of eigenvalues of AA. Such a decomposition is assured to exist since AA is assumed to be symmetric and positive semidefinite. Then,

(QΦQνI)(𝒘𝒘~)=QΦQ𝒘~𝒃,\displaystyle(Q^{\top}\Phi Q-\nu I)(\bm{w}-\tilde{\bm{w}})=-Q^{\top}\Phi Q\tilde{\bm{w}}-\bm{b},
Q(ΦνI)Q(𝒘𝒘~)=QΦQ𝒘~𝒃,\displaystyle Q^{\top}(\Phi-\nu I)Q(\bm{w}-\tilde{\bm{w}})=-Q^{\top}\Phi Q\tilde{\bm{w}}-\bm{b},
(ΦνI)𝝉=𝝃,(where𝝉:=Q(𝒘𝒘~),𝝃:=ΦQ𝒘~Q𝒃n,)\displaystyle(\Phi-\nu I)\bm{\tau}=\bm{\xi},\quad(\text{where}\quad\bm{\tau}:=Q(\bm{w}-\tilde{\bm{w}}),\quad\bm{\xi}:=-\Phi Q\tilde{\bm{w}}-Q\bm{b}\in\mathbb{R}^{n},) (29)
i[n]:(ϕiν)τi=ξi.\displaystyle\forall i\in[n]:\quad(\phi_{i}-\nu)\tau_{i}=\xi_{i}. (30)

Note that we must also take the following constraint into account:

S=𝝉2=𝝉𝝉=(𝒘𝒘~)QQ(𝒘𝒘~)=𝒘𝒘~2.\displaystyle S=\|\bm{\tau}\|_{2}=\sqrt{\bm{\tau}^{\top}\bm{\tau}}=\sqrt{(\bm{w}-\tilde{\bm{w}})^{\top}Q^{\top}Q(\bm{w}-\tilde{\bm{w}})}=\|\bm{w}-\tilde{\bm{w}}\|_{2}. (31)

Here, we consider the following two cases.

  1. 1.

    First, consider the case when (ΦνI)(\Phi-\nu I) is nonsingular, that is, when ν\nu is different from any of ϕ1,ϕ2,,ϕn\phi_{1},\phi_{2},\dots,\phi_{n}. Then, from (31) we have

    \displaystyle S^{2}=\|\bm{\tau}\|_{2}^{2}=\sum_{i=1}^{n}\tau_{i}^{2}=\sum_{i=1}^{n}\left(\frac{\xi_{i}}{\nu-\phi_{i}}\right)^{2}\quad\bigl{(}=:{\cal T}(\nu)\bigr{)}. (32)

    So, values of (19) for all stationary points with respect to 𝒘\bm{w} and ν\nu (on condition that (ΦνI)(\Phi-\nu I) is nonsingular) can be obtained by computing (28) for each ν\nu satisfying (32), that is,

    • for each such \nu, computing \bm{\tau} by (30), and

    • computing (28) as νS2+(ν𝒘~+𝒃)(𝒘𝒘~)+𝒃𝒘~=νS2+(ν𝒘~+𝒃)Q𝝉+𝒃𝒘~\nu S^{2}+(\nu\tilde{\bm{w}}+\bm{b})^{\top}(\bm{w}-\tilde{\bm{w}})+\bm{b}^{\top}\tilde{\bm{w}}=\nu S^{2}+(\nu\tilde{\bm{w}}+\bm{b})^{\top}Q^{\top}\bm{\tau}+\bm{b}^{\top}\tilde{\bm{w}}.

  2. 2.

    Secondly, consider the case when (\Phi-\nu I) is singular, that is, when \nu is equal to one of \phi_{1},\phi_{2},\dots,\phi_{n}. First, given \nu, let {\cal U}_{\nu}:=\{i\mid i\in[n],~\phi_{i}=\nu\} be the set of indices of \{\phi_{i}\}_{i} equal to \nu (this may contain more than one index), and {\cal F}_{\nu}:=[n]\setminus{\cal U}_{\nu}. Note that, by assumption, {\cal U}_{\nu} is not empty. Then, all stationary points of (19) with respect to \bm{w} and \nu (under the condition that (\Phi-\nu I) is singular) can be found by computing the following for each \nu\in\{\phi_{1},\phi_{2},\dots,\phi_{n}\} (duplication excluded):

    • If ξi0\xi_{i}\neq 0 for at least one i𝒰νi\in{\cal U}_{\nu}, the equation (30) cannot hold.

    • If \xi_{i}=0 for all i\in{\cal U}_{\nu}, the equation (30) may hold. So we calculate \bm{\tau} that maximizes (19) as follows:

      • Fix τi=ξi/(ϕiν)\tau_{i}=\xi_{i}/(\phi_{i}-\nu) for iνi\in{\cal F}_{\nu}.

      • Set the constraint i𝒰ντi2=S2iντi2\sum_{i\in{\cal U}_{\nu}}\tau_{i}^{2}=S^{2}-\sum_{i\in{\cal F}_{\nu}}\tau_{i}^{2} (due to (31)).

      • Maximize (19) with respect to {τi}i𝒰ν\{\tau_{i}\}_{i\in{\cal U}_{\nu}} under the constraints above. Here, by (28) we have only to calculate

        max𝝉n[νS2+(ν𝒘~+𝒃)(𝒘𝒘~)+𝒃𝒘~],\displaystyle\max_{\bm{\tau}\in\mathbb{R}^{n}}[\nu S^{2}+(\nu\tilde{\bm{w}}+\bm{b})^{\top}(\bm{w}-\tilde{\bm{w}})+\bm{b}^{\top}\tilde{\bm{w}}], (33)
        subject toiν:τi=ξiϕiν,\displaystyle\text{subject to}\quad\forall i\in{\cal F}_{\nu}:\quad\tau_{i}=\frac{\xi_{i}}{\phi_{i}-\nu},
        i𝒰ντi2=S2iντi2,\displaystyle\phantom{\text{subject to}}\quad\sum_{i\in{\cal U}_{\nu}}\tau_{i}^{2}=S^{2}-\sum_{i\in{\cal F}_{\nu}}\tau_{i}^{2},

        which is easily computed by Lemma A.4. The resulting maximum equals the value of (19) for the \nu specified above.

      So, collecting these results and taking the largest one, the maximization (under the condition that (\Phi-\nu I) is singular) is completed.

Taking the maximum of the two cases, we have the maximization result of (19). ∎
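For illustration, the following is a minimal Python sketch (numpy and scipy assumed) of this maximization for the non-degenerate case only, that is, assuming that the component of \bm{\xi} corresponding to the largest eigenvalue of A is nonzero. Under this assumption it can be shown, by the standard trust-region-subproblem argument, that the global maximum corresponds to the unique root of {\cal T}(\nu)=S^{2} with \nu larger than the largest eigenvalue; the degenerate case enumerated in item 2 of the proof is omitted here.

import numpy as np
from scipy.optimize import brentq

def maximize_quadratic_over_ball(A, b, w_tilde, S):
    phi, V = np.linalg.eigh(A)            # eigenvalues in ascending order; A = V diag(phi) V^T
    Q = V.T                               # so that A = Q^T Phi Q as in the proof
    xi = -phi * (Q @ w_tilde) - Q @ b     # xi := -Phi Q w_tilde - Q b, as in (29)

    def T(nu):                            # T(nu) := sum_i (xi_i / (nu - phi_i))^2, as in (32)
        return np.sum((xi / (nu - phi)) ** 2)

    # On (phi_max, +inf), T decreases from +inf to 0 in the non-degenerate case.
    lo = phi[-1] + 1e-9 * (1.0 + abs(phi[-1]))
    hi = phi[-1] + np.linalg.norm(xi) / S + 1.0    # chosen so that T(hi) < S^2
    nu = brentq(lambda v: T(v) - S ** 2, lo, hi)

    tau = xi / (phi - nu)                 # (30)
    w = w_tilde + V @ tau                 # tau = Q (w - w_tilde), hence w = w_tilde + Q^T tau
    return w, w @ A @ w + 2 * b @ w       # maximizer and maximum value of (19)

Here A, b, \tilde{\bm{w}} and S are treated as generic inputs satisfying the assumptions of Lemma A.7.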

A.7 Proof of Lemma 4.2

Proof.

We show the statement in the lemma that, if \phi_{e_{k}}<\phi_{e_{k+1}} (k\in[N-1]), then {\cal T}(\nu) is a convex function in the interval (\phi_{e_{k}},\phi_{e_{k+1}}) with \lim_{\nu\to\phi_{e_{k}}+0}{\cal T}(\nu)=\lim_{\nu\to\phi_{e_{k+1}}-0}{\cal T}(\nu)=+\infty. The conclusion then follows immediately.

The latter statement clearly holds. The former statement is proved by directly computing the derivative.

ddν𝒯(ν)=ddνi=1n(ξiνϕi)2=2i=1nξi2(νϕi)3.\displaystyle\frac{d}{d\nu}{\cal T}(\nu)=\frac{d}{d\nu}\sum_{i=1}^{n}\left(\frac{\xi_{i}}{\nu-\phi_{i}}\right)^{2}=-2\sum_{i=1}^{n}\frac{\xi_{i}^{2}}{(\nu-\phi_{i})^{3}}.

This derivative is increasing with respect to \nu as long as \nu does not coincide with any of \{\phi_{i}\}_{i=1}^{n} for which \xi_{i}\neq 0, so {\cal T} is convex in the interval \phi_{e_{k}}<\nu<\phi_{e_{k+1}}. ∎
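As a complement to the sketch given after the proof of Lemma 4.1, the convexity shown here can be used to find the (at most two) roots of {\cal T}(\nu)=S^{2} in each inner interval: first minimize the convex function {\cal T} on the interval, then bisect on each side if the minimum lies below S^{2}. A minimal Python sketch (scipy assumed; {\cal T} is passed as a callable T, and the \xi-components at both endpoints are assumed to be nonzero so that {\cal T} diverges there):

from scipy.optimize import brentq, minimize_scalar

def roots_in_interval(T, S, phi_lo, phi_hi, eps=1e-9):
    a = phi_lo + eps * (1.0 + abs(phi_lo))
    b = phi_hi - eps * (1.0 + abs(phi_hi))
    res = minimize_scalar(T, bounds=(a, b), method="bounded")   # minimum of the convex T
    if T(res.x) >= S ** 2:
        return []                                 # T stays above S^2: no root in this interval
    f = lambda v: T(v) - S ** 2
    return [brentq(f, a, res.x), brentq(f, res.x, b)]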

Appendix B Detailed Calculations

In this appendix we describe detailed calculations omitted in the main paper.

B.1 Calculations for L1-loss L2-regularized SVM (Section 4.1)

For this setup, we can calculate the following:

ρ(𝜷):=12λ𝜷22,y(t):={t,(1t0)+,(otherwise)ρ(𝜷):={1λ𝜷},y(t):={{1},(t<1)[1,0],(t=1){0}.(t>1)\displaystyle\rho^{*}(\bm{\beta}):=\frac{1}{2\lambda}\|\bm{\beta}\|_{2}^{2},\quad\ell^{*}_{y}(t):=\begin{cases}t,&(-1\leq t\leq 0)\\ +\infty,&(\text{otherwise})\end{cases}\quad\partial\rho^{*}(\bm{\beta}):=\left\{\frac{1}{\lambda}\bm{\beta}\right\},\quad\partial\ell_{y}(t):=\begin{cases}\{-1\},&(t<1)\\ [-1,0],&(t=1)\\ \{0\}.&(t>1)\end{cases}

Then we obtain the dual problem (9) in the main paper.
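For reference, these quantities can be transcribed directly into code. A small Python sketch (lam denotes \lambda; the subgradient is returned as an interval [lo, hi]):

import numpy as np

def rho_conj(beta, lam):            # rho*(beta) = ||beta||_2^2 / (2 lam)
    return np.dot(beta, beta) / (2.0 * lam)

def loss_conj(t):                   # l*_y(t) = t for -1 <= t <= 0, +inf otherwise
    return t if -1.0 <= t <= 0.0 else np.inf

def loss_subgrad(t):                # dl_y(t): {-1} if t < 1, [-1, 0] if t = 1, {0} if t > 1
    if t < 1.0:
        return (-1.0, -1.0)
    return (-1.0, 0.0) if t == 1.0 else (0.0, 0.0)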

B.2 Calculations for L2-loss L1-regularized SVM (Section 4.2)

For this setup, we can calculate the following:

ρ(𝜷):={0,(βd=0,j[d1]:|βj|λ)+,(otherwise)y(t):={t2+4t4,(t0)+,(otherwise)\displaystyle\rho^{*}(\bm{\beta}):=\begin{cases}0,&(\beta_{d}=0,~{}\forall j\in[d-1]:~{}|\beta_{j}|\leq\lambda)\\ +\infty,&(\text{otherwise})\end{cases}\quad\ell^{*}_{y}(t):=\begin{cases}\frac{t^{2}+4t}{4},&(t\leq 0)\\ +\infty,&(\text{otherwise})\end{cases}
j[d1]:[ρ(𝜷)]j:={,(βj<λ)[,0],(βj=λ)0,(|βj|<λ)[0,+],(βj=λ)+,(βj>λ)[ρ(𝜷)]d:={,(βd<0)[,+],(βd=0)+,(βd>0)\displaystyle\forall j\in[d-1]:~{}[\partial\rho^{*}(\bm{\beta})]_{j}:=\begin{cases}-\infty,&(\beta_{j}<-\lambda)\\ [-\infty,0],&(\beta_{j}=-\lambda)\\ 0,&(|\beta_{j}|<\lambda)\\ [0,+\infty],&(\beta_{j}=\lambda)\\ +\infty,&(\beta_{j}>\lambda)\end{cases}\quad[\partial\rho^{*}(\bm{\beta})]_{d}:=\begin{cases}-\infty,&(\beta_{d}<0)\\ [-\infty,+\infty],&(\beta_{d}=0)\\ +\infty,&(\beta_{d}>0)\end{cases}
y(t):=2max{0,1t}.\displaystyle\partial\ell_{y}(t):=-2\max\{0,1-t\}.

Then, setting γi=λ\gamma_{i}=\lambda for all i[n]i\in[n], the dual objective function is described as

\displaystyle D_{\bm{w}}(\bm{\alpha})=\begin{cases}-\sum_{i=1}^{n}w_{i}\frac{\lambda^{2}\alpha^{2}_{i}-4\lambda\alpha_{i}}{4},&(\text{if (35) are satisfied})\\ -\infty,&(\text{otherwise})\end{cases} (34)

where

λαi0αi0,\displaystyle\lambda\alpha_{i}\geq 0\Leftrightarrow\alpha_{i}\geq 0, (35a)
j[d1]:|((λ𝟏n𝒘)Xˇ:j)𝜶|λ|(𝒘Xˇ:j)𝜶|1,\displaystyle\forall j\in[d-1]:~{}|((\lambda\bm{1}_{n}\otimes\bm{w})\otimes\check{X}_{:j})^{\top}\bm{\alpha}|\leq\lambda\Leftrightarrow|(\bm{w}\otimes\check{X}_{:j})^{\top}\bm{\alpha}|\leq 1, (35b)
((λ𝟏n𝒘)Xˇ:d)𝜶=0(𝒘Xˇ:d)𝜶=0.\displaystyle((\lambda\bm{1}_{n}\otimes\bm{w})\otimes\check{X}_{:d})^{\top}\bm{\alpha}=0\Leftrightarrow(\bm{w}\otimes\check{X}_{:d})^{\top}\bm{\alpha}=0. (35c)

Optimality conditions (4) and (5) are described as

j[d1]:|(λ𝟏n𝒘Xˇ:j)𝜶(𝒘)|<λ|(𝒘Xˇ:j)𝜶(𝒘)|<1βj(𝒘)=0,\displaystyle\forall j\in[d-1]:~{}|(\lambda\bm{1}_{n}\otimes\bm{w}\otimes\check{X}_{:j})^{\top}\bm{\alpha}^{*(\bm{w})}|<\lambda\Leftrightarrow|(\bm{w}\otimes\check{X}_{:j})^{\top}\bm{\alpha}^{*(\bm{w})}|<1\Rightarrow\beta^{*(\bm{w})}_{j}=0, (36)
i[n]:λαi(𝒘)=2max{0,1Xˇi:𝜷(𝒘)}.\displaystyle\forall i\in[n]:\quad\lambda\alpha^{*(\bm{w})}_{i}=2\max\{0,1-\check{X}_{i:}\bm{\beta}^{*(\bm{w})}\}. (37)
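A minimal Python sketch of the dual objective (34) together with the feasibility conditions (35) (numpy assumed; the last column of \check{X} is the intercept feature, \gamma_{i}=\lambda for all i, and infeasible points are assigned -\infty for the maximization):

import numpy as np

def dual_objective_l2loss_l1reg(alpha, w, X_chk, lam, tol=1e-8):
    if np.any(alpha < -tol):                                     # (35a): alpha_i >= 0
        return -np.inf
    s = X_chk.T @ (w * alpha)                                    # s_j = (w (x) X_chk[:, j])^T alpha
    if np.any(np.abs(s[:-1]) > 1.0 + tol) or abs(s[-1]) > tol:   # (35b) and (35c)
        return -np.inf
    return -np.sum(w * (lam ** 2 * alpha ** 2 - 4.0 * lam * alpha) / 4.0)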

Appendix C Application of Safe Sample Screening to Kernelized Features

The kernel method in ML refers to computation techniques for the case where the input vector of a sample \bm{x}\in\mathbb{R}^{d} cannot be obtained explicitly (including the case when d is infinite), whereas the inner product \bm{x}^{\top}\bm{x}^{\prime} can be obtained for any two samples \bm{x},\bm{x}^{\prime}\in\mathbb{R}^{d}. In such a case we cannot discuss SfS, since we cannot obtain each feature explicitly; however, we can still discuss SsS.

We show that the SsS rules for L1-loss L2-regularized SVM (Section 4.1) can be applied even if the features are kernelized.

First, if the features are kernelized, we cannot obtain either X or \bm{\beta}^{*(\tilde{\bm{w}})} explicitly. However, since we can obtain \bm{\alpha}^{*(\tilde{\bm{w}})}, with (10) we have

\displaystyle\forall\bm{x}\in\mathbb{R}^{d}:~\bm{x}^{\top}\bm{\beta}^{*(\tilde{\bm{w}})}=\frac{1}{\lambda}\bm{x}^{\top}(\tilde{\bm{w}}{\raisebox{0.80002pt}{$\times$}\Box}\check{X})^{\top}\bm{\alpha}^{*(\tilde{\bm{w}})}=\frac{1}{\lambda}\sum_{i=1}^{n}\tilde{w}_{i}\alpha_{i}^{*(\tilde{\bm{w}})}(\bm{x}^{\top}\check{X}_{i:}). (38)

This means that we can calculate the inner product of 𝜷(𝒘~)\bm{\beta}^{*(\tilde{\bm{w}})} and any vector.

Then, in order to evaluate the quantity (12) to conduct SsS, it suffices to note the following:

  • Xˇi:𝜷(𝒘~)\check{X}_{i:}\bm{\beta}^{*(\tilde{\bm{w}})} can be calculated by (38),

  • Xˇi:2=Xˇi:Xˇi:\|\check{X}_{i:}\|_{2}=\sqrt{\check{X}_{i:}^{\top}\check{X}_{i:}} is obtained as the kernel value, and

  • P_{\bm{w}}(\bm{\beta}^{*(\tilde{\bm{w}})})-D_{\bm{w}}(\bm{\alpha}^{*(\tilde{\bm{w}})}) can be calculated from (38) and kernel values, since the two quantities that cannot be obtained explicitly (\check{X} and \bm{\beta}^{*(\tilde{\bm{w}})}) appear only through inner products.

So, all values needed to derive SsS rules (12) can be computed even if features are kernelized.
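A minimal Python sketch of these computations (numpy assumed; K is a precomputed kernel matrix with K[i, j] = \check{X}_{i:}\check{X}_{j:}^{\top}, and \tilde{\bm{w}} and \bm{\alpha}^{*(\tilde{\bm{w}})} are given):

import numpy as np

def kernelized_scores_and_norms(K, w_tilde, alpha_star, lam):
    # X_chk[i, :] @ beta^{*(w_tilde)} via (38): the features enter only through kernel values.
    scores = K @ (w_tilde * alpha_star) / lam
    norms = np.sqrt(np.diag(K))       # ||X_chk[i, :]||_2 = sqrt(K[i, i])
    return scores, norms

These scores and norms can then be plugged into the SsS rule of Lemma 3.2.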

Appendix D Details of Experiments

D.1 Detailed Experimental Setup

The criteria for selecting the datasets (Table 2) and the detailed setups are as follows:

  • All of the datasets were downloaded from the LIBSVM dataset repository [22]. We used the scaled versions for the datasets used in DRSfS and for those for which only scaled versions are provided (“ionosphere”, “sonar” and “splice”). When a dataset is provided as separate training and test sets (“splice”, “svmguide1” and “madelon”), we used only the training set.

  • For DRSsS, we selected datasets from the LIBSVM repository containing 100 to 10,000 samples and 100 or fewer features, and for which the area under the receiver operating characteristic curve (ROC-AUC) is 0.9 or higher for the regularization strengths \lambda we examined, since such datasets tend to facilitate more effective sample screening.

  • For DRSfS, we selected datasets from the LIBSVM repository containing 50 to 1,000 features, 10,000 or fewer samples, and no categorical features. In addition, due to computational constraints, we excluded features containing at least one zero value (marked “\dagger” in Table 2). As a result, one feature from “madelon” and one from “sonar” were excluded.

  • In the table, the column “dd” denotes the number of features including the intercept feature (Remark 2.2).

The choice of regularization hyperparameter λ\lambda, based on the characteristics of the data, is as follows:

  • For DRSsS, we set λ\lambda as nn, n×100.5n\times 10^{-0.5}, n×101.0n\times 10^{-1.0}, \ldots, n×103.0n\times 10^{-3.0}. (For DRSsS with DL, we set 1000 instead of nn.) This is because the effect of λ\lambda gets weaker for larger nn.

  • For DRSfS, we determine \lambda based on {\lambda_{\mathrm{max}}}, defined as the smallest \lambda for which \beta^{*(\bm{w})}_{j}=0 for any j\in[d-1], as explained below. We then set \lambda as {\lambda_{\mathrm{max}}}, {\lambda_{\mathrm{max}}}\times 10^{-1/3}, {\lambda_{\mathrm{max}}}\times 10^{-2/3}, \ldots, {\lambda_{\mathrm{max}}}\times 10^{-2}.

Finally, we show the calculation of {\lambda_{\mathrm{max}}} for the L2-loss L1-regularized SVM; a code sketch is given after the list below. By (17), we would like to find \lambda such that |(\bm{w}\otimes\check{X}_{:j})^{\top}\bm{\alpha}^{*(\bm{w})}|<1 for all j\in[d-1]. In order to check this, we need \bm{\alpha}^{*(\bm{w})}, which is calculated as follows:

  • Solve the primal problem (1) for L2-loss L1-regularized SVM by fixing βj(𝒘)=0\beta^{*(\bm{w})}_{j}=0 for any j[d1]j\in[d-1], that is,

    βd(𝒘)=argminβdi=1nwiyi(xˇidβd)=argminβdi=1nwi(max{0,1yiβd})2\displaystyle\beta^{*(\bm{w})}_{d}=\mathop{\rm argmin}\limits_{\beta_{d}}\sum_{i=1}^{n}w_{i}\ell_{y_{i}}(\check{x}_{id}\beta_{d})=\mathop{\rm argmin}\limits_{\beta_{d}}\sum_{i=1}^{n}w_{i}(\max\{0,1-y_{i}\beta_{d}\})^{2}
    =argminβdi[n],yi=+1wi(max{0,1βd})2+i[n],yi=1wi(max{0,1+βd})2\displaystyle=\mathop{\rm argmin}\limits_{\beta_{d}}\sum_{i\in[n],~{}y_{i}=+1}w_{i}(\max\{0,1-\beta_{d}\})^{2}+\sum_{i\in[n],~{}y_{i}=-1}w_{i}(\max\{0,1+\beta_{d}\})^{2}
    =i[n],yi=+1wii[n],yi=1wii=1nwi.\displaystyle=\frac{\sum_{i\in[n],~{}y_{i}=+1}w_{i}-\sum_{i\in[n],~{}y_{i}=-1}w_{i}}{\sum_{i=1}^{n}w_{i}}.
  • With βd(𝒘)\beta^{*(\bm{w})}_{d} computed above and βj(𝒘)=0\beta^{*(\bm{w})}_{j}=0 for any j[d1]j\in[d-1], calculate 𝜶$=λ𝜶(𝒘)=[2max{0,1Xˇi:𝜷(𝒘)}]i=1n\bm{\alpha}^{\$}=\lambda\bm{\alpha}^{*(\bm{w})}=[2\max\{0,1-\check{X}_{i:}\bm{\beta}^{*(\bm{w})}\}]_{i=1}^{n} by (18).

  • If |(\bm{w}\otimes\check{X}_{:j})^{\top}\bm{\alpha}^{\$}|<\lambda for all j\in[d-1], then \beta^{*(\bm{w})}_{j}=0 for any j\in[d-1]. So, we set {\lambda_{\mathrm{max}}}=\max_{j\in[d-1]}|(\bm{w}\otimes\check{X}_{:j})^{\top}\bm{\alpha}^{\$}|.
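The following minimal Python sketch (numpy assumed; the last column of \check{X} is the intercept feature and the labels satisfy y_{i}\in\{-1,+1\}) implements the three steps above.

import numpy as np

def lambda_max(X_chk, y, w):
    # Step 1: closed-form beta_d^{*(w)}; all other coefficients are fixed to zero.
    beta = np.zeros(X_chk.shape[1])
    beta[-1] = (w[y == +1].sum() - w[y == -1].sum()) / w.sum()
    # Step 2: alpha^$ = lambda * alpha^{*(w)} by (18).
    alpha_dollar = 2.0 * np.maximum(0.0, 1.0 - X_chk @ beta)
    # Step 3: lambda_max = max_j |(w (x) X_chk[:, j])^T alpha^$| over j in [d-1].
    return np.max(np.abs(X_chk[:, :-1].T @ (w * alpha_dollar)))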

D.2 All Experimental Results of Section 6.2

For the experiment of Section 6.2, the ratios of screened samples under the DRSsS setup are presented in Figure 7, while the ratios of screened features under the DRSfS setup are presented in Figure 8.

Figure 7: Ratios of screened samples by DRSsS (datasets: australian, breast-cancer, heart, ionosphere, sonar, splice, svmguide1).
Figure 8: Ratios of screened features by DRSfS (datasets: madelon, sonar, splice).