
Sharp oracle inequalities for aggregation of affine estimators

Arnak S. Dalalyan (arnak.dalalyan@ensae.fr, http://www.arnak-dalalyan.fr) and Joseph Salmon (salmon@math.jussieu.fr, http://josephsalmon.eu)

ENSAE-Crest, Université Paris Est and Université Paris Diderot

ENSAE-Crest, 3 Avenue Pierre Larousse, 92245 Malakoff Cedex, France
Université Paris Diderot, 2 Pl. Jussieu BP 7012, 75251 Paris Cedex 05, France

(2012; received April 2011; revised June 2012)
Abstract

We consider the problem of combining a (possibly uncountably infinite) set of affine estimators in the nonparametric regression model with heteroscedastic Gaussian noise. Focusing on the exponentially weighted aggregate, we prove a PAC-Bayesian type inequality that leads to sharp oracle inequalities in discrete but also in continuous settings. The framework is general enough to cover the combinations of various procedures such as least squares regression, kernel ridge regression, shrinking estimators and many other estimators used in the literature on statistical inverse problems. As a consequence, we show that the proposed aggregate provides an adaptive estimator in the exact minimax sense without discretizing the range of tuning parameters or splitting the set of observations. We also illustrate numerically the good performance achieved by the exponentially weighted aggregate.

AMS subject classifications: 62G08, 62C20, 62G05, 62G20.

Keywords: aggregation, regression, oracle inequalities, model selection, minimax risk, exponentially weighted aggregation.

doi: 10.1214/12-AOS1038. Volume 40, Issue 4.

Supported in part by ANR Parcimonie.

1 Introduction

There is growing empirical evidence of the superiority of aggregated statistical procedures, also referred to as blending, stacked generalization or ensemble methods, over “pure” ones. Since their introduction in the 1990s, famous aggregation procedures such as Boosting Freund90 , Bagging Breiman96b or Random Forest AmitGeman97 have been successfully used in practice for a large variety of applications. Moreover, recent machine learning competitions such as the Pascal VOC or the Netflix challenge have been won by procedures combining different types of classifiers/predictors/estimators. It is therefore of central interest to understand, from a theoretical point of view, what kind of aggregation strategies should be used for getting the best possible combination of the available statistical procedures.

1.1 Historical remarks and motivation

In the statistical literature, to the best of our knowledge, the theoretical foundations of aggregation procedures were first studied by Nemirovski (Nemirovski Nemirovski00 , Juditsky and Nemirovski JuditskyNemirovski00 ) and, independently, in a series of papers by Catoni (see Catoni04 for an account) and by Yang Yang00 , Yang03 , Yang04a . For the regression model, significant progress was achieved by Tsybakov Tsybakov03 , who introduced the notion of optimal rates of aggregation and proposed aggregation-rate-optimal procedures for the tasks of linear, convex and model selection aggregation. This line of work was further developed in Lounici07 , RigolletTsybakov07 , BuneaTsybakovWegkamp07 , especially in the high-dimensional context with sparsity constraints, and in Rigollet09 for Kullback–Leibler aggregation. However, it should be noted that the procedures proposed in Tsybakov03 that provably achieve the lower bounds in convex and linear aggregation require full knowledge of the design distribution. This limitation was overcome in the recent work Wang2011 .

From a practical point of view, an important limitation of the previously cited results on aggregation is that they are valid under the assumption that the constituent procedures are deterministic (or random, but independent of the data used for aggregation). The generality of those results—almost no restriction on the constituent estimators—compensates for this practical limitation.

In the Gaussian sequence model, a breakthrough was achieved by Leung and Barron LeungBarron06 . Building on very elegant but not very well-known results by George George86a (Corollary 2 in George86a coincides with Theorem 1 in LeungBarron06 in the case of exponential weights with temperature $\beta=2\sigma^{2}$; cf. equation (4) below for a precise definition of exponential weights. Furthermore, to the best of our knowledge, George86a is the first reference using the Stein lemma for evaluating the expected risk of the exponentially weighted aggregate), they established sharp oracle inequalities for the exponentially weighted aggregate (EWA) when the constituent estimators are obtained from the data vector by orthogonally projecting it onto some linear subspaces. Dalalyan and Tsybakov DalalyanTsybakov07 , DalalyanTsybakov08 showed that the result of LeungBarron06 remains valid under more general (non-Gaussian) noise distributions and when the constituent estimators are independent of the data used for the aggregation. A natural question is whether a similar result can be proved for a larger family of constituent estimators containing projection estimators and deterministic ones as special cases. The main aim of the present paper is to answer this question by considering families of affine estimators.

Our interest in affine estimators is motivated by several reasons. First, affine estimators encompass many popular estimators such as smoothing splines, the Pinsker estimator Pinsker80 , EfromovichPinsker96 , local polynomial estimators, nonlocal means BuadesCollMorel05 , SalmonLepennec09b , etc. For instance, it is known that if the underlying (unobserved) signal belongs to a Sobolev ball, then the (linear) Pinsker estimator is asymptotically minimax up to the optimal constant, while the best projection estimator is only rate-minimax. A second motivation is that—as proved by Juditsky and Nemirovski JuditskyNemirovski09 —the set of signals that are well estimated by linear estimators is very rich. It contains, for instance, sampled smooth functions, sampled modulated smooth functions and sampled harmonic functions. One can add to this set the family of piecewise constant functions as well, as demonstrated in PolzehlSpokoiny00 , with natural applications in magnetic resonance imaging. It is worth noting that oracle inequalities for the penalized empirical risk minimizer were also proved by Golubev Golubev10 , and for model selection by Arlot and Bach ArlotBach09 and by Baraud, Giraud and Huet BaraudGiraudHuet10 .

In the present work, we establish sharp oracle inequalities in the model of heteroscedastic regression, under various conditions on the constituent estimators assumed to be affine functions of the data. Our results provide theoretical guarantees of optimality, in terms of expected loss, for the exponentially weighted aggregate. They have the advantage of covering in a unified fashion the particular cases of frozen estimators considered in DalalyanTsybakov08 and of projection estimators treated in LeungBarron06 .

We focus on theoretical guarantees expressed in terms of oracle inequalities for the expected squared loss. Interestingly, although several recent papers ArlotBach09 , BaraudGiraudHuet10 , GoldenshlugerLepski08 discuss the paradigm of competing against the best linear procedure from a given family, none of them provides oracle inequalities with leading constant equal to one. Furthermore, most existing results involve constants depending on various parameters of the setup. In contrast, the oracle inequality that we prove herein has leading constant one and admits a simple formulation. It is established for (suitably symmetrized, if necessary) exponentially weighted aggregates George86a , Catoni04 , DalalyanTsybakov07 with an arbitrary prior and a temperature parameter that is not too small. The result is nonasymptotic but leads to an asymptotically optimal residual term when the sample size, as well as the cardinality of the family of constituent estimators, tends to infinity. In its general form, the residual term is similar to those obtained in the PAC-Bayes setting Mcallester98 , Langford02 , Seeger03 in that it is proportional to the Kullback–Leibler divergence between two probability distributions.

The problem of competing against the best procedure in a given family was extensively studied in the context of online learning and prediction with expert advice KivinenWarmuth99 , Cesa-BianchiLugosi06 . A connection between the results on online learning and statistical oracle inequalities was established by Gerchinovitz Gerchinovitz11 .

1.2 Notation and examples of linear estimators

Throughout this work, we focus on the heteroscedastic regression model with Gaussian additive noise. We assume we are given a vector $\mathbf{Y}=(y_{1},\ldots,y_{n})^{\top}\in\mathbb{R}^{n}$ obeying the model

$y_{i}=f_{i}+\xi_{i}$ for $i=1,\ldots,n$, (1)

where $\boldsymbol{\xi}=(\xi_{1},\ldots,\xi_{n})^{\top}$ is a centered Gaussian random vector, $f_{i}=\mathbf{f}(x_{i})$, where $\mathbf{f}:\mathcal{X}\rightarrow\mathbb{R}$ is an unknown function and $x_{1},\ldots,x_{n}\in\mathcal{X}$ are deterministic points. Here, no assumption is made on the set $\mathcal{X}$. Our objective is to recover the vector $\mathbf{f}=(f_{1},\ldots,f_{n})^{\top}$, often referred to as the signal, based on the data $y_{1},\ldots,y_{n}$. In our work, the noise covariance matrix $\Sigma=\mathbb{E}[\boldsymbol{\xi}\boldsymbol{\xi}^{\top}]$ is assumed to be finite with a known upper bound on its spectral norm $|\!|\!|\Sigma|\!|\!|$. We denote by $\langle\cdot|\cdot\rangle_{n}$ the empirical inner product in $\mathbb{R}^{n}$: $\langle\mathbf{u}|\mathbf{v}\rangle_{n}=(1/n)\sum_{i=1}^{n}u_{i}v_{i}$. We measure the performance of an estimator $\hat{\mathbf{f}}$ by its expected empirical quadratic loss: $r=\mathbb{E}[\|\mathbf{f}-\hat{\mathbf{f}}\|_{n}^{2}]$, where $\|\mathbf{f}-\hat{\mathbf{f}}\|_{n}^{2}=\frac{1}{n}\sum_{i=1}^{n}(f_{i}-\hat{f}_{i})^{2}$.

We only focus on the task of aggregating affine estimators $\hat{\mathbf{f}}_{\lambda}$ indexed by some parameter $\lambda\in\Lambda$. These estimators can be written as affine transforms of the data $\mathbf{Y}=(y_{1},\ldots,y_{n})^{\top}\in\mathbb{R}^{n}$. Using the convention that all vectors are one-column matrices, we have $\hat{\mathbf{f}}_{\lambda}=A_{\lambda}\mathbf{Y}+\mathbf{b}_{\lambda}$, where the $n\times n$ real matrix $A_{\lambda}$ and the vector $\mathbf{b}_{\lambda}\in\mathbb{R}^{n}$ are deterministic. This means that the entries of $A_{\lambda}$ and $\mathbf{b}_{\lambda}$ may depend on the points $x_{1},\ldots,x_{n}$ but not on the data $\mathbf{Y}$. Let us now describe different families of linear and affine estimators successfully used in the statistical literature. Our results apply to all these families, leading to a procedure that behaves nearly as well as the best (unknown) one of the family.

Ordinary least squares. Let $\{\mathcal{S}_{\lambda}:\lambda\in\Lambda\}$ be a set of linear subspaces of $\mathbb{R}^{n}$. A well-known family of affine estimators, successfully used in the context of model selection BarronBirgeMassart99 , is the set of orthogonal projections onto $\mathcal{S}_{\lambda}$. In the case of a family of linear regression models with design matrices $X_{\lambda}$, one has $A_{\lambda}=X_{\lambda}(X_{\lambda}^{\top}X_{\lambda})^{+}X_{\lambda}^{\top}$, where $(X_{\lambda}^{\top}X_{\lambda})^{+}$ stands for the Moore–Penrose pseudo-inverse of $X_{\lambda}^{\top}X_{\lambda}$.

Diagonal filters. Other common estimators are the so-called diagonal filters, corresponding to diagonal matrices $A=\operatorname{diag}(a_{1},\ldots,a_{n})$. Examples include the following (a short code sketch follows the list):

  • Ordered projections: $a_{k}=\mathbf{1}_{(k\leq\lambda)}$ for some integer $\lambda$ [$\mathbf{1}_{(\cdot)}$ is the indicator function]. Those weights are also called truncated SVD (Singular Value Decomposition) or spectral cut-off. In this case a natural parametrization is $\Lambda=\{1,\ldots,n\}$, indexing the number of elements conserved.

  • Block projections: $a_{k}=\mathbf{1}_{(k\leq w_{1})}+\sum_{j=1}^{m-1}\lambda_{j}\mathbf{1}_{(w_{j}\leq k\leq w_{j+1})}$, $k=1,\ldots,n$, where $\lambda_{j}\in\{0,1\}$. Here the natural parametrization is $\Lambda=\{0,1\}^{m-1}$, indexing subsets of $\{1,\ldots,m-1\}$.

  • Tikhonov–Philipps filter: $a_{k}=\frac{1}{1+(k/w)^{\alpha}}$, where $w,\alpha>0$. In this case, $\Lambda=(\mathbb{R}_{+}^{*})^{2}$, indexing continuously the smoothing parameters.

  • Pinsker filter: $a_{k}=(1-\frac{k^{\alpha}}{w})_{+}$, where $x_{+}=\max(x,0)$ and $(w,\alpha)=\lambda\in\Lambda=(\mathbb{R}_{+}^{*})^{2}$.
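As a small illustration of these filter families, the following numpy sketch computes the weight sequences $(a_k)$; the grid size and the parameter values ($n$, $\lambda$, $w$, $\alpha$) are illustrative choices, not values taken from the paper.

```python
import numpy as np

n = 64
k = np.arange(1, n + 1)               # coordinate index k = 1, ..., n

# Ordered projection (spectral cut-off): keep the first lam coefficients.
lam = 10
a_ordered = (k <= lam).astype(float)

# Block projection on blocks delimited by w_1 < ... < w_m (lambda_j in {0, 1}).
w = np.array([8, 16, 32, 64])
lam_blocks = np.array([1, 0, 1])      # keep the first and third blocks after w_1
a_block = (k <= w[0]).astype(float)
for j, keep in enumerate(lam_blocks):
    a_block += keep * ((k > w[j]) & (k <= w[j + 1]))

# Tikhonov-Philipps filter: a_k = 1 / (1 + (k / w)^alpha).
w_tik, alpha_tik = 20.0, 2.0
a_tikhonov = 1.0 / (1.0 + (k / w_tik) ** alpha_tik)

# Pinsker filter: a_k = (1 - k^alpha / w)_+.
w_pin, alpha_pin = 30.0, 1.0
a_pinsker = np.clip(1.0 - k ** alpha_pin / w_pin, 0.0, None)

# Each filter acts on Y through the diagonal matrix A = diag(a_1, ..., a_n).
A_pinsker = np.diag(a_pinsker)
```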

Kernel ridge regression. Assume that we have a positive definite kernel $k:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}$ and we aim at estimating the true function $f$ in the associated reproducing kernel Hilbert space $(\mathcal{H}_{k},\|\cdot\|_{k})$. The kernel ridge estimator is obtained by minimizing the criterion $\|\mathbf{Y}-\mathbf{f}\|^{2}_{n}+\lambda\|\mathbf{f}\|_{k}^{2}$ w.r.t. $f\in\mathcal{H}_{k}$ (see Shawe-TaylorChristianini00 , page 118). Denoting by $K$ the $n\times n$ kernel matrix with entries $K_{i,j}=k(x_{i},x_{j})$, the unique solution $\hat{\mathbf{f}}$ is a linear estimate of the data, $\hat{\mathbf{f}}=A_{\lambda}\mathbf{Y}$, with $A_{\lambda}=K(K+n\lambda I_{n\times n})^{-1}$, where $I_{n\times n}$ is the $n\times n$ identity matrix.
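A minimal sketch of the matrix $A_{\lambda}=K(K+n\lambda I_{n\times n})^{-1}$; the Gaussian kernel, its bandwidth and the test signal are assumptions made for illustration, since the formula holds for any positive definite kernel.

```python
import numpy as np

def kernel_ridge_matrix(X, lam, bandwidth=1.0):
    """Hat matrix A_lam = K (K + n*lam*I)^{-1} for a Gaussian kernel (illustrative choice)."""
    n = X.shape[0]
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / (2.0 * bandwidth ** 2))
    return K @ np.linalg.inv(K + n * lam * np.eye(n))

# The kernel ridge fit is then the linear estimate f_hat = A_lam @ Y.
X = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
Y = np.sin(2 * np.pi * X[:, 0]) + 0.3 * np.random.randn(50)
f_hat = kernel_ridge_matrix(X, lam=0.1) @ Y
```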

Multiple kernel learning. As described in ArlotBach09 , it is possible to handle the case of several kernels $k_{1},\ldots,k_{M}$, with associated positive definite matrices $K_{1},\ldots,K_{M}$. For a parameter $\lambda=(\lambda_{1},\ldots,\lambda_{M})\in\Lambda=\mathbb{R}_{+}^{M}$, one can define the estimators $\hat{\mathbf{f}}_{\lambda}=A_{\lambda}\mathbf{Y}$ with

$A_{\lambda}=\bigl(\sum_{m=1}^{M}\lambda_{m}K_{m}\bigr)\bigl(\sum_{m=1}^{M}\lambda_{m}K_{m}+nI_{n\times n}\bigr)^{-1}$. (2)

It is worth mentioning that the formulation in equation (2) can be linked to the group Lasso YuanLin06 and to the multiple kernel learning introduced in LanckrietCristianiniBartlettElGhaouiJordan03 —see ArlotBach09 for more details.

Moving averages. If we think of the coordinates of $\mathbf{f}$ as values assigned to the vertices of an undirected graph, satisfying the property that two nodes are connected if the corresponding values of $\mathbf{f}$ are close, then it is natural to estimate $f_{i}$ by averaging out the values $Y_{j}$ for indices $j$ that are connected to $i$. The resulting estimator is a linear one with a matrix $A=(a_{ij})_{i,j=1}^{n}$ such that $a_{ij}=\mathbf{1}_{V_{i}}(j)/n_{i}$, where $V_{i}$ is the set of neighbors of the node $i$ in the graph and $n_{i}$ is the cardinality of $V_{i}$.
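A minimal sketch of such a moving-average matrix on a small graph; including each node in its own neighborhood is an assumption made here for simplicity.

```python
import numpy as np

def moving_average_matrix(adjacency):
    """Row-normalized averaging matrix a_ij = 1_{V_i}(j) / n_i.

    `adjacency` is a symmetric 0/1 matrix; here V_i is taken to contain the
    node i itself (an assumption of this sketch, not a requirement of the text).
    """
    V = adjacency + np.eye(adjacency.shape[0])
    return V / V.sum(axis=1, keepdims=True)

# Path graph on 5 nodes: f_i is estimated by the mean of Y over i and its neighbors.
adj = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
A = moving_average_matrix(adj)
```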

1.3 Organization of the paper

In Section 2 we introduce EWA and state a PAC-Bayes type bound in expectation assessing the optimality properties of EWA in combining affine estimators. The strengths and limitations of the results are discussed in Section 3. The extension of these results to the case of grouped aggregation—in relation with ill-posed inverse problems—is developed in Section 4. As a consequence, we provide in Section 5 sharp oracle inequalities in various setups, ranging from finite to continuous families of constituent estimators and including sparse scenarios. In Section 6 we apply our main results to prove that combining Pinsker-type filters with EWA leads to asymptotically sharp adaptive procedures over Sobolev ellipsoids. Section 7 is devoted to a numerical comparison of EWA with other classical filters (soft thresholding, blockwise shrinking, etc.) and illustrates the potential benefits of aggregating. The conclusion is given in Section 8, while the proofs of some technical results (Propositions 2–6) are provided in the supplementary material DalalyanSalmonsupp .

2 Aggregation of estimators: Main results

In this section we describe the statistical framework for aggregating estimators and we introduce the exponentially weighted aggregate. The task of aggregation consists in estimating $\mathbf{f}$ by a suitable combination of the elements of a family $\mathcal{F}_{\Lambda}=(\hat{\mathbf{f}}_{\lambda})_{\lambda\in\Lambda}$ of constituent estimators in $\mathbb{R}^{n}$. The target objective of the aggregation is to build an aggregate $\hat{\mathbf{f}}_{\mathrm{aggr}}$ that mimics the performance of the best constituent estimator, called the oracle (because of its dependence on the unknown function $\mathbf{f}$). In what follows, we assume that $\Lambda$ is a measurable subset of $\mathbb{R}^{M}$, for some $M\in\mathbb{N}$.

The theoretical tool commonly used for evaluating the quality of an aggregation procedure is the oracle inequality (OI), generally written

$\mathbb{E}[\|\hat{\mathbf{f}}_{\mathrm{aggr}}-\mathbf{f}\|^{2}_{n}]\leq C_{n}\inf_{\lambda\in\Lambda}\mathbb{E}[\|\hat{\mathbf{f}}_{\lambda}-\mathbf{f}\|^{2}_{n}]+R_{n}$, (3)

with a residual term $R_{n}$ tending to zero as $n\to\infty$ and a bounded leading constant $C_{n}$. OIs with leading constant one are of central theoretical interest since they allow one to bound the excess risk and to assess aggregation-rate-optimality. They are often referred to as sharp OIs.

2.1 Exponentially weighted aggregate (EWA)

Let $r_{\lambda}=\mathbb{E}[\|\hat{\mathbf{f}}_{\lambda}-\mathbf{f}\|_{n}^{2}]$ denote the risk of the estimator $\hat{\mathbf{f}}_{\lambda}$, for any $\lambda\in\Lambda$, and let $\hat{r}_{\lambda}$ be an estimator of $r_{\lambda}$. The precise form of $\hat{r}_{\lambda}$ strongly depends on the nature of the constituent estimators. For any probability distribution $\pi$ over $\Lambda$ and for any $\beta>0$, we define the probability measure of exponential weights, $\hat{\pi}$, by

$\hat{\pi}(d\lambda)=\theta(\lambda)\pi(d\lambda)$ with $\theta(\lambda)=\frac{\exp(-n\hat{r}_{\lambda}/\beta)}{\int_{\Lambda}\exp(-n\hat{r}_{\omega}/\beta)\pi(d\omega)}$. (4)

The corresponding exponentially weighted aggregate, henceforth denoted by $\hat{\mathbf{f}}_{\mathrm{EWA}}$, is the expectation of $\hat{\mathbf{f}}_{\lambda}$ w.r.t. the probability measure $\hat{\pi}$:

$\hat{\mathbf{f}}_{\mathrm{EWA}}=\int_{\Lambda}\hat{\mathbf{f}}_{\lambda}\,\hat{\pi}(d\lambda)$. (5)

We will frequently use the terminology of Bayesian statistics: the measure $\pi$ is called the prior, the measure $\hat{\pi}$ is called the posterior and the aggregate $\hat{\mathbf{f}}_{\mathrm{EWA}}$ is then the posterior mean. The parameter $\beta$ will be referred to as the temperature parameter. In the framework of aggregating statistical procedures, the use of such an aggregate can be traced back to George George86a .

The interpretation of the weights $\theta(\lambda)$ is simple: they give larger weight to estimators whose performance, measured in terms of the risk estimate $\hat{r}_{\lambda}$, is better. The temperature parameter reflects the confidence we have in this criterion: if the temperature is small ($\beta\approx 0$), the distribution concentrates on the estimators achieving the smallest value of $\hat{r}_{\lambda}$, assigning almost zero weights to the other estimators. On the other hand, if $\beta\rightarrow+\infty$, then the probability distribution over $\Lambda$ is simply the prior $\pi$, and the data do not influence our confidence in the estimators.
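For a finite family $\Lambda=\{1,\ldots,M\}$ the weights (4) and the aggregate (5) can be computed directly; a minimal sketch follows, in which the log-sum-exp stabilization is an implementation detail, not part of the paper's definition.

```python
import numpy as np

def ewa(estimates, risk_estimates, beta, prior=None, n=None):
    """Exponentially weighted aggregate over a finite family, cf. (4)-(5).

    estimates      : (M, n) array, row m is the m-th constituent estimator
    risk_estimates : (M,) array of risk estimates r_hat_m
    beta           : temperature parameter
    prior          : prior weights pi_m (uniform if None)
    """
    M, n_obs = estimates.shape
    n = n_obs if n is None else n
    prior = np.full(M, 1.0 / M) if prior is None else np.asarray(prior, float)
    # Exponential weights, normalized stably by subtracting the max exponent.
    logw = -n * np.asarray(risk_estimates, float) / beta + np.log(prior)
    logw -= logw.max()
    theta = np.exp(logw)
    theta /= theta.sum()
    return theta @ estimates, theta   # posterior mean and posterior weights
```

Consistently with the discussion above, a small `beta` makes `theta` concentrate on the estimator with the smallest risk estimate, while a very large `beta` returns essentially the prior weights.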

2.2 Main results

In this paper we only focus on affine estimators

$\hat{\mathbf{f}}_{\lambda}=A_{\lambda}\mathbf{Y}+\mathbf{b}_{\lambda}$, (6)

where the $n\times n$ real matrix $A_{\lambda}$ and the vector $\mathbf{b}_{\lambda}\in\mathbb{R}^{n}$ are deterministic. Furthermore, we will assume that an unbiased estimator $\widehat{\Sigma}$ of the noise covariance matrix $\Sigma$ is available. It is well known (cf. the Appendix for details) that the risk of the estimator (6) is given by

$r_{\lambda}=\mathbb{E}[\|\hat{\mathbf{f}}_{\lambda}-\mathbf{f}\|_{n}^{2}]=\|(A_{\lambda}-I_{n\times n})\mathbf{f}+\mathbf{b}_{\lambda}\|_{n}^{2}+\frac{\operatorname{Tr}(A_{\lambda}\Sigma A_{\lambda}^{\top})}{n}$ (7)

and that $\hat{r}_{\lambda}^{\mathrm{unb}}$, defined by

$\hat{r}_{\lambda}^{\mathrm{unb}}=\|\mathbf{Y}-\hat{\mathbf{f}}_{\lambda}\|^{2}_{n}+\frac{2}{n}\operatorname{Tr}(\widehat{\Sigma}A_{\lambda})-\frac{1}{n}\operatorname{Tr}[\widehat{\Sigma}]$, (8)

is an unbiased estimator of $r_{\lambda}$. Along with $\hat{r}_{\lambda}^{\mathrm{unb}}$, we will use another estimator of the risk, which we call the adjusted risk estimate and define by

$\hat{r}_{\lambda}^{\mathrm{adj}}=\underbrace{\|\mathbf{Y}-\hat{\mathbf{f}}_{\lambda}\|^{2}_{n}+\frac{2}{n}\operatorname{Tr}(\widehat{\Sigma}A_{\lambda})-\frac{1}{n}\operatorname{Tr}[\widehat{\Sigma}]}_{\hat{r}_{\lambda}^{\mathrm{unb}}}+\frac{1}{n}\mathbf{Y}^{\top}(A_{\lambda}-A_{\lambda}^{2})\mathbf{Y}$. (9)

One can notice that the adjusted risk estimate $\hat{r}_{\lambda}^{\mathrm{adj}}$ coincides with the unbiased risk estimate $\hat{r}_{\lambda}^{\mathrm{unb}}$ if and only if the matrix $A_{\lambda}$ is an orthogonal projector.
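The two risk estimates (8) and (9) translate directly into code; the sketch below only assumes the empirical norm $\|\mathbf{v}\|_{n}^{2}=(1/n)\sum_{i}v_{i}^{2}$ defined above.

```python
import numpy as np

def risk_estimates(Y, A, b, Sigma_hat):
    """Unbiased (8) and adjusted (9) risk estimates for f_hat = A Y + b."""
    n = Y.shape[0]
    f_hat = A @ Y + b
    r_unb = (np.sum((Y - f_hat) ** 2)
             + 2.0 * np.trace(Sigma_hat @ A)
             - np.trace(Sigma_hat)) / n
    # Adjustment term (1/n) Y^T (A - A^2) Y; it vanishes when A is an
    # orthogonal projector, in which case r_adj = r_unb.
    r_adj = r_unb + (Y @ ((A - A @ A) @ Y)) / n
    return r_unb, r_adj
```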

To state our main results, we denote by $\mathcal{P}_{\Lambda}$ the set of all probability measures on $\Lambda$ and by $\mathcal{K}(p,p')$ the Kullback–Leibler divergence between two probability measures $p,p'\in\mathcal{P}_{\Lambda}$:

$\mathcal{K}(p,p')=\begin{cases}\int_{\Lambda}\log\bigl(\frac{dp}{dp'}(\lambda)\bigr)p(d\lambda), & \text{if } p \text{ is absolutely continuous w.r.t. } p',\\ +\infty, & \text{otherwise.}\end{cases}$

We write $S_{1}\preceq S_{2}$ (resp., $S_{1}\succeq S_{2}$) for two symmetric matrices $S_{1}$ and $S_{2}$ when $S_{2}-S_{1}$ (resp., $S_{1}-S_{2}$) is positive semi-definite.

Theorem 1

Let all the matrices $A_{\lambda}$ be symmetric and let $\widehat{\Sigma}$ be unbiased and independent of $\mathbf{Y}$.

(i) Assume that for all $\lambda,\lambda'\in\Lambda$ it holds that $A_{\lambda}A_{\lambda'}=A_{\lambda'}A_{\lambda}$, $A_{\lambda}\Sigma+\Sigma A_{\lambda}\succeq 0$ and $\mathbf{b}_{\lambda}=0$. If $\beta\geq 8|\!|\!|\Sigma|\!|\!|$, then the aggregate $\hat{\mathbf{f}}_{\mathrm{EWA}}$ defined by equations (4), (5) and the unbiased risk estimate $\hat{r}_{\lambda}=\hat{r}_{\lambda}^{\mathrm{unb}}$ (8) satisfies

$\mathbb{E}[\|\hat{\mathbf{f}}_{\mathrm{EWA}}-\mathbf{f}\|_{n}^{2}]\leq\inf_{p\in\mathcal{P}_{\Lambda}}\bigl\{\int_{\Lambda}\mathbb{E}[\|\hat{\mathbf{f}}_{\lambda}-\mathbf{f}\|_{n}^{2}]\,p(d\lambda)+\frac{\beta}{n}\mathcal{K}(p,\pi)\bigr\}$. (10)

(ii) Assume that, for all $\lambda\in\Lambda$, $A_{\lambda}\preceq I_{n\times n}$ and $A_{\lambda}\mathbf{b}_{\lambda}=0$. If $\beta\geq 4|\!|\!|\Sigma|\!|\!|$, then the aggregate $\hat{\mathbf{f}}_{\mathrm{EWA}}$ defined by equations (4), (5) and the adjusted risk estimate $\hat{r}_{\lambda}=\hat{r}_{\lambda}^{\mathrm{adj}}$ (9) satisfies

$\mathbb{E}[\|\hat{\mathbf{f}}_{\mathrm{EWA}}-\mathbf{f}\|_{n}^{2}]\leq\inf_{p\in\mathcal{P}_{\Lambda}}\bigl\{\int_{\Lambda}\mathbb{E}[\|\hat{\mathbf{f}}_{\lambda}-\mathbf{f}\|_{n}^{2}]\,p(d\lambda)+\frac{\beta}{n}\mathcal{K}(p,\pi)+\frac{1}{n}\int_{\Lambda}\bigl(\mathbf{f}^{\top}(A_{\lambda}-A_{\lambda}^{2})\mathbf{f}+\operatorname{Tr}[\Sigma(A_{\lambda}-A_{\lambda}^{2})]\bigr)p(d\lambda)\bigr\}$.

The simplest setting in which all the conditions of part (i) of Theorem 1 are fulfilled is when the matrices $A_{\lambda}$ and $\Sigma$ are all diagonal, or diagonalizable in a common basis. This result, as we will see in Section 6, leads to a new estimator which is adaptive, in the exact minimax sense, over the collection of all Sobolev ellipsoids. It also suggests a new method for efficiently combining varying-block-shrinkage estimators, as described in Section 5.4.

However, part (i) of Theorem 1 leaves open the issue of aggregating affine estimators defined via noncommuting matrices. In particular, it does not allow us to evaluate the MSE of EWA when each $A_{\lambda}$ is a convex or linear combination of a fixed family of projection matrices onto nonorthogonal linear subspaces. These kinds of situations may be handled via the result of part (ii) of Theorem 1. One can observe that in the particular case of a finite collection of projection estimators (i.e., $A_{\lambda}=A_{\lambda}^{2}$ and $\mathbf{b}_{\lambda}=0$ for every $\lambda$), the result of part (ii) offers an extension of LeungBarron06 , Corollary 6, to the case of general noise covariances (LeungBarron06 deals only with i.i.d. noise).

An important situation covered by part (ii) of Theorem 1, but not by part (i), concerns the case when the signals of interest $\mathbf{f}$ are smooth or sparse in a basis $\mathcal{B}_{\mathrm{sig}}$ which is different from the basis $\mathcal{B}_{\mathrm{noise}}$ orthogonalizing the covariance matrix $\Sigma$. In such a context, one may be interested in considering matrices $A_{\lambda}$ that are diagonalizable in the basis $\mathcal{B}_{\mathrm{sig}}$ and which, in general, do not commute with $\Sigma$.

Remark 1.

While the results in LeungBarron06 yield a sharp oracle inequality in the case of projection matrices $A_{\lambda}$, they are of no help in the case when the matrices $A_{\lambda}$ are only nearly idempotent and not exactly so. Assertion (ii) of Theorem 1 fills this gap by showing that if $\max_{\lambda}\operatorname{Tr}[A_{\lambda}-A_{\lambda}^{2}]\leq\delta$, then $\mathbb{E}[\|\hat{\mathbf{f}}_{\mathrm{EWA}}-\mathbf{f}\|_{n}^{2}]$ is bounded by

$\inf_{p\in\mathcal{P}_{\Lambda}}\bigl\{\int_{\Lambda}\mathbb{E}[\|\hat{\mathbf{f}}_{\lambda}-\mathbf{f}\|_{n}^{2}]\,p(d\lambda)+\frac{\beta}{n}\mathcal{K}(p,\pi)\bigr\}+\delta\bigl(\|\mathbf{f}\|_{n}^{2}+n^{-1}|\!|\!|\Sigma|\!|\!|\bigr)$.
Remark 2.

We have focused only on Gaussian errors to emphasize that it is possible to efficiently aggregate almost any family of affine estimators. We believe that, by a suitable adaptation of the approach developed in DalalyanTsybakov08 , the claims of Theorem 1 can be generalized—at least when the $\xi_{i}$ are independent with known variances—to some other common noise distributions.

The results presented so far concern the situation when the matrices $A_{\lambda}$ are symmetric. However, using the last part of Theorem 1, it is possible to propose an estimator of $\mathbf{f}$ that is almost as accurate as the best affine estimator $A_{\lambda}\mathbf{Y}+\mathbf{b}_{\lambda}$ even if the matrices $A_{\lambda}$ are not symmetric. Interestingly, the estimator enjoying this property is not obtained by aggregating the original estimators $\hat{\mathbf{f}}_{\lambda}=A_{\lambda}\mathbf{Y}+\mathbf{b}_{\lambda}$ but the “symmetrized” estimators $\tilde{\mathbf{f}}_{\lambda}=\tilde{A}_{\lambda}\mathbf{Y}+\mathbf{b}_{\lambda}$, where $\tilde{A}_{\lambda}=A_{\lambda}+A_{\lambda}^{\top}-A_{\lambda}^{\top}A_{\lambda}$. Besides symmetry, an advantage of the matrices $\tilde{A}_{\lambda}$, as compared to the $A_{\lambda}$'s, is that they automatically satisfy the contraction condition $\tilde{A}_{\lambda}\preceq I_{n\times n}$ required by part (ii) of Theorem 1. We will refer to this method as Symmetrized Exponentially Weighted Aggregates (or SEWA) DalalyanSalmon11b .
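A quick sketch of the symmetrization step; the numerical check at the end illustrates why the contraction condition holds automatically, since $I-\tilde{A}_{\lambda}=(I-A_{\lambda})^{\top}(I-A_{\lambda})\succeq 0$.

```python
import numpy as np

def symmetrize(A):
    """SEWA constituent matrix A_tilde = A + A^T - A^T A.

    I - A_tilde equals (I - A)^T (I - A), which is positive semi-definite,
    so A_tilde <= I holds for any A.
    """
    return A + A.T - A.T @ A

# Sanity check on a random non-symmetric matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)) / 5.0
A_tilde = symmetrize(A)
assert np.all(np.linalg.eigvalsh(np.eye(5) - A_tilde) >= -1e-12)
```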

Theorem 2

Assume that the matrices $A_{\lambda}$ and the vectors $\mathbf{b}_{\lambda}$ satisfy $A_{\lambda}\mathbf{b}_{\lambda}=A_{\lambda}^{\top}\mathbf{b}_{\lambda}=0$ for every $\lambda\in\Lambda$. Assume in addition that $\widehat{\Sigma}$ is an unbiased estimator of $\Sigma$ and is independent of $\mathbf{Y}$. Let $\tilde{\mathbf{f}}_{\mathrm{SEWA}}$ denote the exponentially weighted aggregate of the (symmetrized) estimators $\tilde{\mathbf{f}}_{\lambda}=(A_{\lambda}+A_{\lambda}^{\top}-A_{\lambda}^{\top}A_{\lambda})\mathbf{Y}+\mathbf{b}_{\lambda}$ with the weights (4) defined via the risk estimate $\hat{r}_{\lambda}^{\mathrm{unb}}$. Then, under the conditions $\beta\geq 4|\!|\!|\Sigma|\!|\!|$ and

$\pi\{\lambda\in\Lambda:\operatorname{Tr}(\widehat{\Sigma}A_{\lambda})\leq\operatorname{Tr}(\widehat{\Sigma}A_{\lambda}^{\top}A_{\lambda})\}=1$ a.s., (C)

it holds that

$\mathbb{E}[\|\tilde{\mathbf{f}}_{\mathrm{SEWA}}-\mathbf{f}\|_{n}^{2}]\leq\inf_{p\in\mathcal{P}_{\Lambda}}\bigl\{\int_{\Lambda}\mathbb{E}[\|\hat{\mathbf{f}}_{\lambda}-\mathbf{f}\|_{n}^{2}]\,p(d\lambda)+\frac{\beta}{n}\mathcal{K}(p,\pi)\bigr\}$. (2.9)

To understand the scope of condition (C), let us present several cases of widely used linear estimators for which this condition is satisfied:

  • The simplest class of matrices $A_{\lambda}$ for which condition (C) holds true is that of orthogonal projections. Indeed, if $A_{\lambda}$ is a projection matrix, it satisfies $A_{\lambda}^{\top}A_{\lambda}=A_{\lambda}$ and, therefore, $\operatorname{Tr}(\widehat{\Sigma}A_{\lambda})=\operatorname{Tr}(\widehat{\Sigma}A_{\lambda}^{\top}A_{\lambda})$.

  • When the matrix $\widehat{\Sigma}$ is diagonal, a sufficient condition for (C) is $a_{ii}\leq\sum_{j=1}^{n}a_{ji}^{2}$ for all $i$. Consequently, (C) holds true for matrices having only zeros on the main diagonal. For instance, the $k$NN filter in which the weight of the observation $Y_{i}$ is replaced by zero, that is, $a_{ij}=\mathbf{1}_{j\in\{j_{i,1},\ldots,j_{i,k}\}}/k$, satisfies this condition.

  • Under the slightly more stringent assumption of homoscedasticity, that is, when $\widehat{\Sigma}=\widehat{\sigma}^{2}I_{n\times n}$, if the matrices $A_{\lambda}$ are such that all the nonzero elements of each row are equal and sum up to one (or to a quantity larger than one), then $\operatorname{Tr}(A_{\lambda})\leq\operatorname{Tr}(A_{\lambda}^{\top}A_{\lambda})$ and (C) is fulfilled. Notable examples of linear estimators that satisfy this condition are Nadaraya–Watson estimators with rectangular kernel and nearest neighbor filters.

3 Discussion

Before elaborating on the main results stated in the previous section, by extending them to inverse problems and by deriving adaptive procedures, let us discuss some aspects of the presented OIs.

3.1 Assumptions on $\Sigma$

In some rare situations, the matrix $\Sigma$ is known and it is natural to use $\widehat{\Sigma}=\Sigma$ as an unbiased estimator. Besides this not very realistic situation, there are at least two contexts in which it is reasonable to assume that an unbiased estimator of $\Sigma$, independent of $\mathbf{Y}$, is available.

The first case corresponds to problems in which a signal can be recorded several times by the same device, or once but by several identical devices. For instance, this is the case when an object is photographed many times by the same digital camera during a short time period. Let $\mathbf{Z}_{1},\ldots,\mathbf{Z}_{N}$ be the available signals, which can be considered as i.i.d. copies of an $n$-dimensional Gaussian vector with mean $\mathbf{f}$ and covariance matrix $\Sigma_{Z}$. Then, defining $\mathbf{Y}=(\mathbf{Z}_{1}+\cdots+\mathbf{Z}_{N})/N$ and $\widehat{\Sigma}_{Z}=(N-1)^{-1}(\mathbf{Z}_{1}\mathbf{Z}_{1}^{\top}+\cdots+\mathbf{Z}_{N}\mathbf{Z}_{N}^{\top}-N\mathbf{Y}\mathbf{Y}^{\top})$, we find ourselves within the framework covered by the previous theorems. Indeed, $\mathbf{Y}\sim\mathcal{N}_{n}(\mathbf{f},\Sigma_{Y})$ with $\Sigma_{Y}=\Sigma_{Z}/N$, and $\widehat{\Sigma}_{Y}=\widehat{\Sigma}_{Z}/N$ is an unbiased estimate of $\Sigma_{Y}$, independent of $\mathbf{Y}$. Note that our theory applies in this setting for every integer $N\geq 2$.
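A short sketch of this construction; the shape convention (one recording per row of `Z`) is an assumption of the sketch.

```python
import numpy as np

def pooled_data_and_covariance(Z):
    """Average recording Y and unbiased covariance estimate, as in Section 3.1.

    Z has shape (N, n): N independent recordings of the same n-dimensional signal.
    Returns Y = (Z_1 + ... + Z_N) / N and Sigma_hat_Y = Sigma_hat_Z / N, which is
    an unbiased estimator of Cov(Y), independent of Y for Gaussian noise.
    """
    N, n = Z.shape
    Y = Z.mean(axis=0)
    Sigma_hat_Z = (Z.T @ Z - N * np.outer(Y, Y)) / (N - 1)
    return Y, Sigma_hat_Z / N
```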

The second case is when the dominating part of the noise comes from the device which is used for recording the signal. In this case, the practitioner can use the device to record a known signal, $\mathbf{g}$. In digital image processing, $\mathbf{g}$ can be a black picture. This provides a noisy signal $\mathbf{Z}$ drawn from the Gaussian distribution $\mathcal{N}_{n}(\mathbf{g},\Sigma)$, independent of $\mathbf{Y}$, which is the signal of interest. Setting $\widehat{\Sigma}=(\mathbf{Z}-\mathbf{g})(\mathbf{Z}-\mathbf{g})^{\top}$, one ends up with an unbiased estimator of $\Sigma$ which is independent of $\mathbf{Y}$.

3.2 OI in expectation versus OI with high probability

All the results stated in this work provide sharp nonasymptotic bounds on the expected risk of EWA. It would be insightful to complement this study by risk bounds that hold true with high probability. However, it was recently proved in DaiRigZha12 that EWA is deviation suboptimal: there exist a family of constituent estimators and a constant $C>0$ such that the difference between the risk of EWA and that of the best constituent estimator is larger than $C/\sqrt{n}$ with probability at least $0.06$. Nevertheless, several empirical studies (see, e.g., DaiZhang11 ) demonstrated that EWA often has a smaller risk than some of its competitors, such as the empirical star procedure Audibert07 , which are provably optimal in the sense of OIs with high probability. Furthermore, the numerical experiments carried out in Section 7 show that the standard deviation of the risk of EWA is of the order of $1/n$. This suggests that under some conditions on the constituent estimators it might be possible to establish OIs for EWA that are similar to (10) but hold true with high probability. A step toward proving this kind of result was made in LecueMendelson10 , Theorem C, for the model of regression with random design.

3.3 Relation to previous work and limits of our results

The OIs of the previous section require various conditions on the constituent estimators $\hat{\mathbf{f}}_{\lambda}=A_{\lambda}\mathbf{Y}+\mathbf{b}_{\lambda}$. One may wonder how general these conditions are and whether it is possible to extend these OIs to more general $\hat{\mathbf{f}}_{\lambda}$'s. Although this work does not answer this question, we can sketch some elements of a response.

First of all, we stress that the conditions of the present paper relax significantly those of previous results in the statistical literature. For instance, Kneip Kneip94 considered only linear estimators, that is, $\mathbf{b}_{\lambda}\equiv 0$ and, more importantly, only ordered sets of commuting matrices $A_{\lambda}$. The ordering assumption is dropped in Leung and Barron LeungBarron06 , in the case of projection matrices. Note that neither of these assumptions is satisfied for the families of Pinsker and Tikhonov–Philipps estimators. The present work strengthens existing results by considering more general, affine estimators, covering both projection matrices and ordered commuting matrices.

Despite the advances achieved in this work, there are still interesting cases that are not covered by our theory. We now introduce a family of estimators commonly used in image processing that do not satisfy our assumptions. In recent years, nonlocal means (NLM) became quite popular in image processing BuadesCollMorel05 . This method of signal denoising, shown to be tied in with EWA SalmonLepennec09b , removes noise by exploiting a signal's self-similarities. We briefly define the NLM procedure in the case of one-dimensional signals.

Assume that a vector $\mathbf{Y}=(y_{1},\ldots,y_{n})^{\top}$ given by (1) is observed with $f_{i}=\mathbf{f}(i/n)$, $i=1,\ldots,n$, for some function $\mathbf{f}:[0,1]\to\mathbb{R}$. For a fixed “patch size” $k\in\{1,\ldots,n\}$, let us define $\mathbf{f}_{[i]}=(f_{i},f_{i+1},\ldots,f_{i+k-1})^{\top}$ and $\mathbf{Y}_{[i]}=(y_{i},y_{i+1},\ldots,y_{i+k-1})^{\top}$ for every $i=1,\ldots,n-k+1$. The vectors $\mathbf{f}_{[i]}$ and $\mathbf{Y}_{[i]}$ are, respectively, called the true patch and the noisy patch. The NLM consists in regarding the noisy patches $\mathbf{Y}_{[i]}$ as constituent estimators for estimating the true patch $\mathbf{f}_{[i_{0}]}$ by applying EWA. One easily checks that the constituent estimators $\mathbf{Y}_{[i]}$ are affine in $\mathbf{Y}_{[i_{0}]}$, that is, $\mathbf{Y}_{[i]}=A_{i}\mathbf{Y}_{[i_{0}]}+\mathbf{b}_{i}$ with $A_{i}$ and $\mathbf{b}_{i}$ independent of $\mathbf{Y}_{[i_{0}]}$. Indeed, if the distance between $i$ and $i_{0}$ is larger than $k$, then $\mathbf{Y}_{[i]}$ is independent of $\mathbf{Y}_{[i_{0}]}$ and, therefore, $A_{i}=0$ and $\mathbf{b}_{i}=\mathbf{Y}_{[i]}$. If $|i-i_{0}|<k$, then the matrix $A_{i}$ is a suitably chosen shift matrix and $\mathbf{b}_{i}$ is the projection of $\mathbf{Y}_{[i]}$ onto the orthogonal complement of the image of $A_{i}$. Unfortunately, these matrices $\{A_{i}\}$ and vectors $\{\mathbf{b}_{i}\}$ do not fit our framework, that is, the assumption $A_{i}\mathbf{b}_{i}=A_{i}^{\top}\mathbf{b}_{i}=0$ is not satisfied.

Finally, our proof technique is specific to affine estimators. Its extension to estimators defined as a more complex function of the data will certainly require additional tools and is a challenging problem for future research. Yet, it seems unlikely to get sharp OIs with optimal remainder term for a fairly general family of constituent estimators (without data-splitting), since this generality inherently increases the risk of overfitting.

4 Ill-posed inverse problems and group-weighting

As explained in CavalierGolubevPicardTsybakov02 , Cavalier08 , the model of heteroscedastic regression is well suited for describing inverse problems. In fact, let $T$ be a known linear operator on some Hilbert space $\mathcal{H}$, with inner product $\langle\cdot|\cdot\rangle_{\mathcal{H}}$. For some $h\in\mathcal{H}$, let $Y$ be the random process indexed by $g\in\mathcal{H}$ such that

$Y=Th+\varepsilon\xi\quad\Longleftrightarrow\quad\bigl(Y(g)=\langle Th|g\rangle_{\mathcal{H}}+\varepsilon\xi(g),\ \forall g\in\mathcal{H}\bigr)$, (10)

where $\varepsilon>0$ is the noise magnitude and $\xi$ is a white Gaussian noise on $\mathcal{H}$, that is, for any $g_{1},\ldots,g_{k}\in\mathcal{H}$ the vector $(Y(g_{1}),\ldots,Y(g_{k}))$ is Gaussian with zero mean and covariance matrix $\{\langle g_{i}|g_{j}\rangle_{\mathcal{H}}\}$. The problem is then the following: estimate the element $h$, assuming the value of $Y$ can be measured for any given $g$. It is customary to use as $g$ the eigenvectors of the adjoint $T^{*}$ of $T$. Under the condition that the operator $T^{*}T$ is compact, the SVD yields $T\phi_{k}=b_{k}\psi_{k}$ and $T^{*}\psi_{k}=b_{k}\phi_{k}$, for $k\in\mathbb{N}$, where the $b_{k}$ are the singular values, $\{\psi_{k}\}$ is an orthonormal basis in $\operatorname{Range}(T)\subset\mathcal{H}$ and $\{\phi_{k}\}$ is the corresponding orthonormal basis in $\mathcal{H}$. In view of (10), it holds that

$Y(\psi_{k})=\langle h|\phi_{k}\rangle_{\mathcal{H}}b_{k}+\varepsilon\xi(\psi_{k}),\qquad k\in\mathbb{N}$. (11)

Since in practice only a finite number of measurements can be computed, it is natural to assume that the values $Y(\psi_{k})$ are available only for $k$ smaller than some integer $n$. Under the assumption that $b_{k}\neq 0$, the last equation is equivalent to (1) with $f_{i}=\langle h|\phi_{i}\rangle_{\mathcal{H}}$ and $\Sigma=\operatorname{diag}(\sigma_{i}^{2},i=1,2,\ldots)$ for $\sigma_{i}=\varepsilon b_{i}^{-1}$. Examples of inverse problems to which this statistical model has been successfully applied are derivative estimation, deconvolution with known kernel and computerized tomography—see Cavalier08 and the references therein for more applications.
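A minimal sketch of this reduction: dividing the first $n$ measurements (11) by the singular values yields observations following model (1) with $\sigma_{i}=\varepsilon b_{i}^{-1}$. The function and argument names are illustrative.

```python
import numpy as np

def svd_to_regression(y_svd, singular_values, eps):
    """Map the first n SVD measurements (11) to the regression model (1).

    y_svd[k] plays the role of Y(psi_k); dividing by b_k gives an observation
    of f_k = <h, phi_k> with independent Gaussian noise of variance (eps/b_k)^2.
    """
    b = np.asarray(singular_values, dtype=float)
    Y = np.asarray(y_svd, dtype=float) / b
    Sigma = np.diag((eps / b) ** 2)   # heteroscedastic diagonal covariance
    return Y, Sigma
```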

For very mildly ill-posed inverse problems, that is, when the singular values $b_{k}$ of $T$ tend to zero not faster than any negative power of $k$, the approach presented in Section 2 will lead to satisfactory results. Indeed, by choosing $\beta=8|\!|\!|\Sigma|\!|\!|$ or $\beta=4|\!|\!|\Sigma|\!|\!|$, the remainder term in (10) and (2.9) becomes—up to a logarithmic factor—proportional to $\max_{1\leq k\leq n}b_{k}^{-2}/n$, which is the optimal rate in the case of very mild ill-posedness.

However, even for mildly ill-posed inverse problems, the approach developed in the previous section becomes obsolete, since the remainder blows up as $n$ increases to infinity. Furthermore, this is not an artifact of our theoretical results, but rather a drawback of the aggregation strategy adopted in the previous section. Indeed, the posterior probability measure $\hat{\pi}$ defined by (4) can be seen as the solution of the entropy-penalized empirical risk minimization problem:

$\hat{\pi}_{n}=\arg\inf_{p}\bigl\{\int_{\Lambda}\hat{r}_{\lambda}\,p(d\lambda)+\frac{\beta}{n}\mathcal{K}(p,\pi)\bigr\}$, (12)

where the $\inf$ is taken over the set of all probability distributions. This means that the same regularization parameter $\beta$ is employed for estimating both the coefficients $f_{i}=\langle h|\phi_{i}\rangle_{\mathcal{H}}$ corrupted by noise of small magnitude and those corrupted by large noise. Since we place ourselves in the setting of a known operator $T$ and, therefore, known noise levels, such a uniform treatment of all coefficients is unreasonable. It is more natural to upweight the regularization term in the case of large noise, downweighting the data fidelity term, and, conversely, to downweight the regularization in the case of small noise. This motivates our interest in the grouped EWA (or GEWA).

Let us consider a partition $B_{1},\ldots,B_{J}$ of the set $\{1,\ldots,n\}$: $B_{j}=\{T_{j}+1,\ldots,T_{j+1}\}$, for some integers $0=T_{1}<T_{2}<\cdots<T_{J+1}=n$. To each element $B_{j}$ of this partition, we associate the data sub-vector $\mathbf{Y}^{j}=(Y_{i}:i\in B_{j})$ and the sub-vector of the true function $\mathbf{f}^{j}=(f_{i}:i\in B_{j})$. As in previous sections, we are concerned with the aggregation of affine estimators $\hat{\mathbf{f}}_{\lambda}=A_{\lambda}\mathbf{Y}+\mathbf{b}_{\lambda}$, but here we will assume the matrices $A_{\lambda}$ are block-diagonal:

$A_{\lambda}=\operatorname{diag}(A_{\lambda}^{1},A_{\lambda}^{2},\ldots,A_{\lambda}^{J})$ with $A_{\lambda}^{j}\in\mathbb{R}^{(T_{j+1}-T_{j})\times(T_{j+1}-T_{j})}$.

Similarly, we define $\hat{\mathbf{f}}_{\lambda}^{j}$ and $\mathbf{b}_{\lambda}^{j}$ as the sub-vectors of $\hat{\mathbf{f}}_{\lambda}$ and $\mathbf{b}_{\lambda}$, respectively, corresponding to the indices belonging to $B_{j}$. We will also assume that the noise covariance matrix $\Sigma$ and its unbiased estimate $\widehat{\Sigma}$ are block-diagonal with $(T_{j+1}-T_{j})\times(T_{j+1}-T_{j})$ blocks $\Sigma^{j}$ and $\widehat{\Sigma}^{j}$, respectively. This notation implies, in particular, that $\hat{\mathbf{f}}_{\lambda}^{j}=A_{\lambda}^{j}\mathbf{Y}^{j}+\mathbf{b}_{\lambda}^{j}$ for every $j=1,\ldots,J$. Moreover, the unbiased risk estimate $\hat{r}_{\lambda}^{\mathrm{unb}}$ of $\hat{\mathbf{f}}_{\lambda}$ can be decomposed into the sum of the unbiased risk estimates $\hat{r}_{\lambda}^{j,\mathrm{unb}}$ of the $\hat{\mathbf{f}}_{\lambda}^{j}$, namely, $\hat{r}_{\lambda}^{\mathrm{unb}}=\sum_{j=1}^{J}\hat{r}_{\lambda}^{j,\mathrm{unb}}$, where

$\hat{r}_{\lambda}^{j,\mathrm{unb}}=\|\mathbf{Y}^{j}-\hat{\mathbf{f}}_{\lambda}^{j}\|_{n}^{2}+\frac{2}{n}\operatorname{Tr}(\widehat{\Sigma}^{j}A_{\lambda}^{j})-\frac{1}{n}\operatorname{Tr}[\widehat{\Sigma}^{j}],\qquad j=1,\ldots,J$.

To state the analogues of Theorems 1 and 2, we introduce the following two settings.

Setting 1. For all $\lambda,\lambda'\in\Lambda$ and $j\in\{1,\ldots,J\}$, the $A_{\lambda}^{j}$ are symmetric and satisfy $A_{\lambda}^{j}A_{\lambda'}^{j}=A_{\lambda'}^{j}A_{\lambda}^{j}$, $A_{\lambda}^{j}\Sigma^{j}+\Sigma^{j}A_{\lambda}^{j}\succeq 0$ and $\mathbf{b}_{\lambda}^{j}=0$. For a temperature vector $\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{J})^{\top}$ and a prior $\pi$, we define GEWA as $\hat{\mathbf{f}}_{\mathrm{GEWA}}^{j}=\int_{\Lambda}\hat{\mathbf{f}}_{\lambda}^{j}\,\hat{\pi}^{j}(d\lambda)$, where $\hat{\pi}^{j}(d\lambda)=\theta^{j}(\lambda)\pi(d\lambda)$ with

$\theta^{j}(\lambda)=\frac{\exp(-n\hat{r}_{\lambda}^{j,\mathrm{unb}}/\beta_{j})}{\int_{\Lambda}\exp(-n\hat{r}_{\omega}^{j,\mathrm{unb}}/\beta_{j})\pi(d\omega)}$. (13)

Setting 2. For every $j=1,\ldots,J$ and for every $\lambda$ belonging to a set of $\pi$-measure one, the matrices $A_{\lambda}$ satisfy a.s. the inequality $\operatorname{Tr}(\widehat{\Sigma}^{j}A_{\lambda}^{j})\leq\operatorname{Tr}(\widehat{\Sigma}^{j}(A_{\lambda}^{j})^{\top}A_{\lambda}^{j})$, while the vectors $\mathbf{b}_{\lambda}$ are such that $A_{\lambda}^{j}\mathbf{b}_{\lambda}^{j}=(A_{\lambda}^{j})^{\top}\mathbf{b}_{\lambda}^{j}=0$. In this case, for a temperature vector $\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{J})^{\top}$ and a prior $\pi$, we define GEWA as $\hat{\mathbf{f}}_{\mathrm{GEWA}}^{j}=\int_{\Lambda}\tilde{\mathbf{f}}_{\lambda}^{j}\,\hat{\pi}^{j}(d\lambda)$, where $\tilde{\mathbf{f}}_{\lambda}^{j}=(A_{\lambda}^{j}+(A_{\lambda}^{j})^{\top}-(A_{\lambda}^{j})^{\top}A_{\lambda}^{j})\mathbf{Y}^{j}+\mathbf{b}_{\lambda}^{j}$ and $\hat{\pi}^{j}$ is defined by (13). Note that this setting is the grouped version of SEWA.
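A sketch of GEWA over a finite family, computing one set of weights (13) per block; the groupwise risk estimates are assumed precomputed, and under setting 2 the rows of `estimates` would be the symmetrized estimators $\tilde{\mathbf{f}}_{\lambda}$.

```python
import numpy as np

def gewa(estimates, group_risk_estimates, betas, blocks, prior=None):
    """Grouped EWA: one vector of exponential weights (13) per block B_j.

    estimates            : (M, n) array, row m is the m-th constituent estimator
    group_risk_estimates : (M, J) array, entry (m, j) is r_hat_m^{j, unb}
    betas                : (J,) array of temperatures beta_j
    blocks               : list of J index arrays partitioning {0, ..., n-1}
    """
    M, n = estimates.shape
    prior = np.full(M, 1.0 / M) if prior is None else np.asarray(prior, float)
    f_gewa = np.empty(n)
    for j, B in enumerate(blocks):
        logw = -n * group_risk_estimates[:, j] / betas[j] + np.log(prior)
        logw -= logw.max()                    # numerical stabilization
        theta_j = np.exp(logw)
        theta_j /= theta_j.sum()
        f_gewa[B] = theta_j @ estimates[:, B]  # blockwise posterior mean
    return f_gewa
```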

Theorem 3

Assume that $\widehat{\Sigma}$ is unbiased and independent of $\mathbf{Y}$. Under setting 1, if $\beta_{j}\geq 8|\!|\!|\Sigma^{j}|\!|\!|$ for all $j=1,\ldots,J$, then

$\mathbb{E}[\|\hat{\mathbf{f}}_{\mathrm{GEWA}}-\mathbf{f}\|_{n}^{2}]\leq\sum_{j=1}^{J}\inf_{p_{j}}\bigl\{\int_{\Lambda}\mathbb{E}\|\hat{\mathbf{f}}_{\lambda}^{j}-\mathbf{f}^{j}\|_{n}^{2}\,p_{j}(d\lambda)+\frac{\beta_{j}}{n}\mathcal{K}(p_{j},\pi)\bigr\}$. (14)

Under setting 2, this inequality holds true if $\beta_{j}\geq 4|\!|\!|\Sigma^{j}|\!|\!|$ for every $j=1,\ldots,J$.

As we shall see in Section 6, this theorem allows us to propose an estimator of the unknown signal which is adaptive w.r.t. the smoothness properties of the underlying signal and achieves the minimax rates and constants over the Sobolev ellipsoids, provided that the operator $T$ is mildly ill-posed, that is, its singular values decrease at most polynomially.

5 Examples of sharp oracle inequalities

In this section we discuss consequences of the main result for specific choices of prior measures. For conveying the main messages of this section, it is enough to focus on settings 1 and 2 in the case of only one group ($J=1$).

5.1 Discrete oracle inequality

In order to demonstrate that inequality (14) can be reformulated in terms of an OI as defined by (3), let us consider the case when the prior $\pi$ is discrete, that is, $\pi(\Lambda_{0})=1$ for a countable set $\Lambda_{0}\subset\Lambda$, and, w.l.o.g., $\Lambda_{0}=\mathbb{N}$. Then the following result holds true.

Proposition 1.

Let $\widehat{\Sigma}$ be unbiased and independent of $\mathbf{Y}$, and let $\pi$ be supported by $\mathbb{N}$. Under setting 1 with $J=1$ and $\beta=\beta_{1}\geq 8|\!|\!|\Sigma|\!|\!|$, the aggregate $\hat{\mathbf{f}}_{\mathrm{GEWA}}$ satisfies the inequality

$\mathbb{E}[\|\hat{\mathbf{f}}_{\mathrm{GEWA}}-\mathbf{f}\|_{n}^{2}]\leq\inf_{\ell\in\mathbb{N}:\pi_{\ell}>0}\bigl(\mathbb{E}[\|\hat{\mathbf{f}}_{\ell}-\mathbf{f}\|_{n}^{2}]+\frac{\beta\log(1/\pi_{\ell})}{n}\bigr)$. (15)

Furthermore, (15) holds true under setting 2 for $\beta\geq 4|\!|\!|\Sigma|\!|\!|$.

Proof. It suffices to apply Theorem 3 and to upper-bound the right-hand side by the minimum over all Dirac measures $p=\delta_{\ell}$ such that $\pi_{\ell}>0$. This inequality can be compared to Corollary 2 in BaraudGiraudHuet10 , Section 4.3. Our result has the advantage of having factor one in front of the expectation of the left-hand side, while in BaraudGiraudHuet10 a constant much larger than 1 appears. However, it should be noted that the assumptions on the (estimated) noise covariance matrix are much weaker in BaraudGiraudHuet10 .

5.2 Continuous oracle inequality

It may be useful in practice to combine a family of affine estimators indexed by an open subset of $\mathbb{R}^{M}$ for some $M\in\mathbb{N}$ (e.g., to build an estimator nearly as accurate as the best kernel estimator with fixed kernel and varying bandwidth). To state an oracle inequality in such a “continuous” setup, let us denote by $d_{2}(\boldsymbol{\lambda},\partial\Lambda)$ the largest real $\tau>0$ such that the ball centered at $\boldsymbol{\lambda}$ with radius $\tau$—hereafter denoted by $B_{\boldsymbol{\lambda}}(\tau)$—is included in $\Lambda$. Let $\operatorname{Leb}(\cdot)$ be the Lebesgue measure in $\mathbb{R}^{M}$.

Proposition 2.

Let $\widehat{\Sigma}$ be unbiased and independent of $\mathbf{Y}$. Let $\Lambda\subset\mathbb{R}^{M}$ be an open and bounded set and let $\pi$ be the uniform distribution on $\Lambda$. Assume that the mapping $\boldsymbol{\lambda}\mapsto r_{\boldsymbol{\lambda}}$ is Lipschitz continuous, that is, $|r_{\boldsymbol{\lambda}'}-r_{\boldsymbol{\lambda}}|\leq L_{r}\|\boldsymbol{\lambda}'-\boldsymbol{\lambda}\|_{2}$ for all $\boldsymbol{\lambda},\boldsymbol{\lambda}'\in\Lambda$. Under setting 1 with $J=1$ and $\beta=\beta_{1}\geq 8|\!|\!|\Sigma|\!|\!|$, the aggregate $\hat{\mathbf{f}}_{\mathrm{GEWA}}$ satisfies the inequality

$\mathbb{E}\|\hat{\mathbf{f}}_{\mathrm{GEWA}}-\mathbf{f}\|_{n}^{2}\leq\inf_{\boldsymbol{\lambda}\in\Lambda}\bigl\{\mathbb{E}[\|\hat{\mathbf{f}}_{\boldsymbol{\lambda}}-\mathbf{f}\|_{n}^{2}]+\frac{\beta M}{n}\log\bigl(\frac{\sqrt{M}}{2\min(n^{-1},d_{2}(\boldsymbol{\lambda},\partial\Lambda))}\bigr)\bigr\}+\frac{L_{r}+\beta\log(\operatorname{Leb}(\Lambda))}{n}$. (16)

Furthermore, (16) holds true under setting 2 for every $\beta\geq 4|\!|\!|\Sigma|\!|\!|$.

Proof. It suffices to apply assertion (i) of Theorem 1 and to upper-bound the right-hand side of inequality (10) by the minimum over all measures having density $p_{\boldsymbol{\lambda}^{*},\tau^{*}}(\boldsymbol{\lambda})=\mathbf{1}_{B_{\boldsymbol{\lambda}^{*}}(\tau^{*})}(\boldsymbol{\lambda})/\operatorname{Leb}(B_{\boldsymbol{\lambda}^{*}}(\tau^{*}))$. Choosing $\tau^{*}=\min(n^{-1},d_{2}(\boldsymbol{\lambda}^{*},\partial\Lambda))$ so that $B_{\boldsymbol{\lambda}^{*}}(\tau^{*})\subset\Lambda$, the measure $p_{\boldsymbol{\lambda}^{*},\tau^{*}}(\boldsymbol{\lambda})\,d\boldsymbol{\lambda}$ is absolutely continuous w.r.t. the uniform prior $\pi$ and the Kullback–Leibler divergence between these two measures equals $\log\{\operatorname{Leb}(\Lambda)/\operatorname{Leb}(B_{\boldsymbol{\lambda}^{*}}(\tau^{*}))\}$. Using $\operatorname{Leb}(B_{\boldsymbol{\lambda}^{*}}(\tau^{*}))\geq(2\tau^{*}/\sqrt{M})^{M}$ and the Lipschitz condition, we get the desired inequality.

Note that it is not very stringent to require the risk function $r_{\boldsymbol{\lambda}}$ to be Lipschitz continuous, especially since this condition need not be satisfied uniformly in $\mathbf{f}$. Let us consider ridge regression: for a given design matrix $X\in\mathbb{R}^{n\times p}$, $A_{\lambda}=X(X^{\top}X+\gamma_{n}\lambda I_{p\times p})^{-1}X^{\top}$ and $\mathbf{b}_{\lambda}=0$ with $\lambda\in[\lambda_{*},\lambda^{*}]$, $\gamma_{n}$ being a given normalization factor typically set to $n$ or $\sqrt{n}$, $\lambda_{*}>0$ and $\lambda^{*}\in[\lambda_{*},\infty]$. One can easily check the Lipschitz property of the risk function with $L_{r}=L_{r}(f)=4\lambda_{*}^{-1}\|\mathbf{f}\|_{n}^{2}+(2/n)\operatorname{Tr}(\Sigma)$.
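A sketch of this ridge family; setting `gamma_n = n` follows one of the normalizations mentioned above, and the identity matrix is taken $p\times p$ to match the dimensions of $X^{\top}X$.

```python
import numpy as np

def ridge_family_matrix(X, lam, gamma_n=None):
    """Ridge hat matrix A_lam = X (X^T X + gamma_n * lam * I)^{-1} X^T."""
    n, p = X.shape
    gamma_n = n if gamma_n is None else gamma_n
    return X @ np.linalg.inv(X.T @ X + gamma_n * lam * np.eye(p)) @ X.T
```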

5.3 Sparsity oracle inequality

The continuous oracle inequality stated in the previous subsection is well adapted to problems in which the dimension $M$ of $\Lambda$ is small w.r.t. the sample size $n$ (or, more precisely, the signal-to-noise ratio $n/|\!|\!|\Sigma|\!|\!|$). When this is not the case, the choice of the prior should be done more carefully. For instance, consider $\Lambda\subset\mathbb{R}^{M}$ with large $M$ under the sparsity scenario: there is a sparse vector $\boldsymbol{\lambda}^{*}\in\Lambda$ such that the risk of $\hat{\mathbf{f}}_{\boldsymbol{\lambda}^{*}}$ is small. Then it is natural to choose a prior that favors sparse $\boldsymbol{\lambda}$'s. This can be done in the same vein as in DalalyanTsybakov07 , DalalyanTsybakov08 , DalalyanTsybakov12a , DalalyanTsybakov12b , by means of the heavy-tailed prior

$\pi(d\boldsymbol{\lambda})\propto\prod_{m=1}^{M}\frac{1}{(1+|\lambda_{m}/\tau|^{2})^{2}}\,\mathbf{1}_{\Lambda}(\boldsymbol{\lambda})$, (17)

where $\tau>0$ is a tuning parameter.
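A small sketch evaluating the log of the prior density (17) up to an additive constant; the normalizing constant is not needed when only ratios of exponential weights are required.

```python
import numpy as np

def log_sparsity_prior(lam, tau):
    """Log-density of the heavy-tailed prior (17), up to an additive constant.

    pi(lambda) is proportional to prod_m (1 + |lambda_m / tau|^2)^{-2}; its
    heavy tails penalize large coordinates only logarithmically, which favors
    vectors with a few large entries and many near-zero ones.
    """
    lam = np.asarray(lam, dtype=float)
    return -2.0 * np.sum(np.log1p((lam / tau) ** 2))
```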

Proposition 3.

Let Σ^\widehat{\Sigma} be unbiased, independent of 𝐘\mathbf{Y}. Let Λ=M\Lambda=\mathbb{R}^{M} and let π\pi be defined by (17). Assume that the mapping \boldsλr\boldsλ\bolds{\lambda}\mapsto r_{\bolds{\lambda}} is continuously differentiable and, for some M×MM\times M matrix \mathcal{M}, satisfies

r\boldsλr\boldsλr\boldsλ(\boldsλ\boldsλ)(\boldsλ\boldsλ)(\boldsλ\boldsλ)\boldsλ,\boldsλΛ.r_{\bolds{\lambda}}-r_{\bolds{\lambda}^{\prime}}-\nabla r_{\bolds{\lambda}^{\prime}}^{\top}\bigl{(}\bolds{\lambda}-\bolds{\lambda}^{\prime}\bigr{)}\leq\bigl{(}\bolds{\lambda}-\bolds{\lambda}^{\prime}\bigr{)}^{\top}\mathcal{M}\bigl{(}\bolds{\lambda}-\bolds{\lambda}^{\prime}\bigr{)}\qquad\forall\bolds{\lambda},\bolds{\lambda}^{\prime}\in\Lambda. (18)

Under setting 1 if β8|Σ|\beta\geq 8|\!|\!|{\Sigma}|\!|\!|, then the aggregate 𝐟^EWA=𝐟^GEWA{\hat{\mathbf{f}}}_{\mathrm{EWA}}={\hat{\mathbf{f}}}_{\mathrm{GEWA}} satisfies

\mathbb{E}\bigl[\|\hat{\mathbf{f}}_{\mathrm{GEWA}}-\mathbf{f}\|_n^2\bigr]\leq\inf_{\bolds{\lambda}\in\mathbb{R}^M}\Biggl\{\mathbb{E}\|\hat{\mathbf{f}}_{\bolds{\lambda}}-\mathbf{f}\|_n^2+\frac{4\beta}{n}\sum_{m=1}^{M}\log\biggl(1+\frac{|\lambda_m|}{\tau}\biggr)\Biggr\}+\operatorname{Tr}(\mathcal{M})\tau^2.

Moreover, (3) holds true under setting 2 if $\beta\geq 4|\!|\!|\Sigma|\!|\!|$.

Let us discuss some consequences of this sparsity oracle inequality. First, consider the case of (linearly) combining frozen estimators, that is, when $\hat{\mathbf{f}}_{\bolds{\lambda}}=\sum_{j=1}^{M}\lambda_j\varphi_j$ for some known functions $\varphi_j$. Then it is clear that $r_{\bolds{\lambda}}-r_{\bolds{\lambda}'}-\nabla r_{\bolds{\lambda}'}^\top(\bolds{\lambda}-\bolds{\lambda}')=2(\bolds{\lambda}-\bolds{\lambda}')^\top\Phi(\bolds{\lambda}-\bolds{\lambda}')$, where $\Phi$ is the Gram matrix defined by $\Phi_{i,j}=\langle\varphi_i|\varphi_j\rangle_n$. So the condition in Proposition 3 amounts to bounding the Gram matrix of the atoms $\varphi_j$. Let us remark that in this case (see, for instance, DalalyanTsybakov08, DalalyanTsybakov12b) $\operatorname{Tr}(\mathcal{M})$ is on the order of $M$, and the choice $\tau=\sqrt{\beta/(nM)}$ ensures that the last term on the right-hand side of equation (3) decreases at the parametric rate $1/n$. This is the choice we recommend for practical applications.

As a second example, let us consider a large number of linear estimators $\hat{\mathbf{g}}_1=G_1\mathbf{Y},\ldots,\hat{\mathbf{g}}_M=G_M\mathbf{Y}$ satisfying the conditions of setting 1 and such that $\max_{m=1,\ldots,M}|\!|\!|G_m|\!|\!|\leq 1$. Assume we aim at proposing an estimator mimicking the behavior of the best possible convex combination of a pair of estimators chosen among $\hat{\mathbf{g}}_1,\ldots,\hat{\mathbf{g}}_M$. This task can be accomplished in our framework by setting $\Lambda=\mathbb{R}^M$ and $\hat{\mathbf{f}}_{\bolds{\lambda}}=\lambda_1\hat{\mathbf{g}}_1+\cdots+\lambda_M\hat{\mathbf{g}}_M$, where $\bolds{\lambda}=(\lambda_1,\ldots,\lambda_M)$. Remark that if $\{\hat{\mathbf{g}}_m\}$ satisfies the conditions of setting 1, so does $\{\hat{\mathbf{f}}_{\bolds{\lambda}}\}$. Moreover, the mapping $\bolds{\lambda}\mapsto r_{\bolds{\lambda}}$ is quadratic with Hessian matrix $\nabla^2 r_{\bolds{\lambda}}$ given by the entries $2\langle G_m\mathbf{f}|G_{m'}\mathbf{f}\rangle_n+\frac{2}{n}\operatorname{Tr}(G_{m'}\Sigma G_m)$, $m,m'=1,\ldots,M$. It follows that inequality (18) holds with $\mathcal{M}=\nabla^2 r_{\bolds{\lambda}}/2$. Therefore, denoting by $\sigma_i^2$ the $i$th diagonal entry of $\Sigma$ and setting $\bolds{\sigma}=(\sigma_1,\ldots,\sigma_n)$, we get $\operatorname{Tr}(\mathcal{M})\leq|\!|\!|\sum_{m=1}^M G_m^2|\!|\!|\,[\|\mathbf{f}\|_n^2+\|\bolds{\sigma}\|_n^2]\leq M[\|\mathbf{f}\|_n^2+\|\bolds{\sigma}\|_n^2]$. Applying Proposition 3 with $\tau=\sqrt{\beta/(nM)}$, we get

\mathbb{E}\bigl[\|\hat{\mathbf{f}}_{\mathrm{EWA}}-\mathbf{f}\|_n^2\bigr]\leq\inf_{\alpha,m,m'}\mathbb{E}\bigl[\bigl\|\alpha\hat{\mathbf{g}}_m+(1-\alpha)\hat{\mathbf{g}}_{m'}-\mathbf{f}\bigr\|_n^2\bigr]+\frac{8\beta}{n}\log\biggl(1+\biggl[\frac{Mn}{\beta}\biggr]^{1/2}\biggr)+\frac{\beta}{n}\bigl[\|\mathbf{f}\|_n^2+\|\bolds{\sigma}\|_n^2\bigr],

where the $\inf$ is taken over all $\alpha\in[0,1]$ and $m,m'\in\{1,\ldots,M\}$. This inequality is derived from (3) by upper-bounding the $\inf_{\bolds{\lambda}\in\mathbb{R}^M}$ by the infimum over $\bolds{\lambda}$'s having at most two nonzero coefficients, $\lambda_{m_0}$ and $\lambda_{m'_0}$, that are nonnegative and sum to one: $\lambda_{m_0}+\lambda_{m'_0}=1$. To get (5.3), one simply notes that only two terms of the sum $\sum_m\log(1+|\lambda_m|\tau^{-1})$ are nonzero and each of them is not larger than $\log(1+\tau^{-1})$. Thus, using EWA one can achieve the best possible risk over the convex combinations of a pair of linear estimators, selected from a large (but finite) family, at the price of a residual term that decreases at the parametric rate up to a log factor.

5.4 Oracle inequalities for varying-block-shrinkage estimators

Let us now consider the problem of aggregating two-block shrinkage estimators. This means that the constituent estimators have the following form: for $\bolds{\lambda}=(a,b,k)\in[0,1]^2\times\{1,\ldots,n\}:=\Lambda$, $\hat{\mathbf{f}}_{\bolds{\lambda}}=A_{\bolds{\lambda}}\mathbf{Y}$, where $A_{\bolds{\lambda}}=\operatorname{diag}(a\mathbh{1}(i\leq k)+b\mathbh{1}(i>k),\ i=1,\ldots,n)$. Let us choose the prior $\pi$ as uniform on $\Lambda$.
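As an illustration (with names of our own choosing), the constituent estimators of this subsection can be coded in a couple of lines.

```python
import numpy as np

def two_block_estimate(Y, a, b, k):
    """Two-block shrinkage estimator f_hat_lambda = A_lambda Y, lambda = (a, b, k):
    the first k coordinates of Y are multiplied by a, the remaining ones by b."""
    d = np.full(len(Y), float(b))
    d[:k] = a
    return d * Y
```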

Proposition 4.

Let $\hat{\mathbf{f}}_{\mathrm{EWA}}$ be the exponentially weighted aggregate having as constituent estimators the two-block shrinkage estimators $A_{\bolds{\lambda}}\mathbf{Y}$. If $\Sigma$ is diagonal, then for any $\bolds{\lambda}\in\Lambda$ and for any $\beta\geq 8|\!|\!|\Sigma|\!|\!|$,

\mathbb{E}\bigl[\|\hat{\mathbf{f}}_{\mathrm{EWA}}-\mathbf{f}\|_n^2\bigr]\leq\mathbb{E}\bigl[\|\hat{\mathbf{f}}_{\bolds{\lambda}}-\mathbf{f}\|_n^2\bigr]+\frac{\beta}{n}\biggl\{1+\log\biggl(\frac{n^2\|\mathbf{f}\|_n^2+n\operatorname{Tr}(\Sigma)}{12\beta}\biggr)\biggr\}.\qquad (21)

In the case $\Sigma=I_{n\times n}$, this result is comparable to Leung, page 20, Theorem 2.49, which states that in the homoscedastic regression model ($\Sigma=I_{n\times n}$) EWA acting on two-block positive-part James–Stein estimators satisfies, for any $\bolds{\lambda}\in\Lambda$ such that $3\leq k\leq n-3$ and for $\beta=8$, the oracle inequality

\mathbb{E}\bigl[\|\hat{\mathbf{f}}_{\mathrm{Leung}}-\mathbf{f}\|_n^2\bigr]\leq\mathbb{E}\bigl[\|\hat{\mathbf{f}}_{\bolds{\lambda}}-\mathbf{f}\|_n^2\bigr]+\frac{9}{n}+\frac{8}{n}\min_{K>0}\biggl\{K\vee\biggl(\log\frac{n-6}{K}-1\biggr)\biggr\}.\qquad (22)

6 Application to minimax adaptive estimation

Pinsker proved in his celebrated paper Pinsker80 that in the model (1) the minimax risk over ellipsoids can be asymptotically attained by a linear estimator. Let us denote by $\theta_k(\mathbf{f})=\langle\mathbf{f}|\varphi_k\rangle_n$ the coefficients of the (orthogonal) discrete cosine transform (DCT) of $\mathbf{f}$, hereafter denoted by $\mathcal{D}\mathbf{f}$. (The results of this section hold true not only for the discrete cosine transform, but also for any linear transform $\mathcal{D}$ such that $\mathcal{D}\mathcal{D}^\top=\mathcal{D}^\top\mathcal{D}=n^{-1}I_{n\times n}$.) Pinsker's result, restricted to Sobolev ellipsoids $\mathcal{F}_{\mathcal{D}}(\alpha,R)=\{\mathbf{f}\in\mathbb{R}^n:\sum_{k=1}^n k^{2\alpha}\theta_k(\mathbf{f})^2\leq R\}$, states that, as $n\to\infty$, the equivalences

\inf_{\hat{\mathbf{f}}}\sup_{\mathbf{f}\in\mathcal{F}_{\mathcal{D}}(\alpha,R)}\mathbb{E}\bigl[\|\hat{\mathbf{f}}-\mathbf{f}\|_n^2\bigr]\sim\inf_{A}\sup_{\mathbf{f}\in\mathcal{F}_{\mathcal{D}}(\alpha,R)}\mathbb{E}\bigl[\|A\mathbf{Y}-\mathbf{f}\|_n^2\bigr]\qquad (23)
{}\sim\inf_{w>0}\sup_{\mathbf{f}\in\mathcal{F}_{\mathcal{D}}(\alpha,R)}\mathbb{E}\bigl[\|A_{\alpha,w}\mathbf{Y}-\mathbf{f}\|_n^2\bigr]\qquad (24)

hold (Tsybakov09, Theorem 3.2), where the first $\inf$ is taken over all possible estimators $\hat{\mathbf{f}}$ and $A_{\alpha,w}=\mathcal{D}^\top\operatorname{diag}((1-k^\alpha/w)_+;\ k=1,\ldots,n)\mathcal{D}$ is the Pinsker filter in the discrete cosine basis. In simple words, this implies that the (asymptotically) minimax estimator can be chosen from the quite narrow class of linear estimators with Pinsker's filter. However, it should be emphasized that the minimax linear estimator depends on the parameters $\alpha$ and $R$, which are generally unknown. An (adaptive) estimator that does not depend on $(\alpha,R)$ and is asymptotically minimax over a large scale of Sobolev ellipsoids has been proposed by Efromovich and Pinsker EfromovichPinsker84. The next result, which is a direct consequence of Theorem 1, shows that EWA with linear constituent estimators is also asymptotically sharp adaptive over Sobolev ellipsoids.
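For concreteness, here is a minimal numpy/scipy sketch of the Pinsker filter acting in a discrete cosine basis. It uses scipy's orthonormal DCT, whereas the transform $\mathcal{D}$ above is normalized so that $\mathcal{D}\mathcal{D}^\top=n^{-1}I_{n\times n}$; the sketch is only meant to illustrate the shrinkage pattern $(1-k^\alpha/w)_+$, and the function names are ours.

```python
import numpy as np
from scipy.fft import dct, idct

def pinsker_weights(n, alpha, w):
    """Diagonal weights (1 - k^alpha / w)_+ , k = 1, ..., n, of Pinsker's filter."""
    k = np.arange(1, n + 1)
    return np.clip(1.0 - k**alpha / w, 0.0, None)

def pinsker_estimate(Y, alpha, w):
    """f_hat_{alpha,w} = A_{alpha,w} Y: shrink the cosine coefficients of Y by
    the Pinsker weights and transform back (orthonormal DCT convention)."""
    theta = dct(Y, norm='ortho')
    return idct(pinsker_weights(len(Y), alpha, w) * theta, norm='ortho')
```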

Proposition 5.

Let $\bolds{\lambda}=(\alpha,w)\in\Lambda=\mathbb{R}_+^2$ and consider the prior

\pi(d\bolds{\lambda})=\frac{2n_\sigma^{-\alpha/(2\alpha+1)}}{(1+n_\sigma^{-\alpha/(2\alpha+1)}w)^3}e^{-\alpha}\,d\alpha\,dw,\qquad (25)

where $n_\sigma=n/\sigma^2$. Then, in model (1) with homoscedastic errors, the aggregate $\hat{\mathbf{f}}_{\mathrm{EWA}}$ based on the temperature $\beta=8\sigma^2$ and the constituent estimators $\hat{\mathbf{f}}_{\alpha,w}=A_{\alpha,w}\mathbf{Y}$ (with $A_{\alpha,w}$ being the Pinsker filter) is adaptive in the exact minimax sense (see Tsybakov09, Definition 3.8) on the family of classes $\{\mathcal{F}_{\mathcal{D}}(\alpha,R):\alpha>0,R>0\}$.

It is worth noting that the exact minimax adaptivity of our estimator $\hat{\mathbf{f}}_{\mathrm{EWA}}$ is achieved without any tuning parameter. All previously proposed methods that are provably adaptive in the exact minimax sense depend on some parameters, such as the lengths of the blocks for the blockwise Stein CavalierTsybakov02 and Efromovich–Pinsker EfromovichPinsker96 estimators, or the step of discretization and the maximal value of the bandwidth CavalierGolubevPicardTsybakov02. Another nice property of the estimator $\hat{\mathbf{f}}_{\mathrm{EWA}}$ is that it does not require any pilot estimator based on the data-splitting device GaiffasLecue11.

We now turn to the setup of heteroscedastic regression, which corresponds to ill-posed inverse problems as described in Section 4. To achieve adaptivity in the exact minimax sense, we make use of $\hat{\mathbf{f}}_{\mathrm{GEWA}}$, the grouped version of the exponentially weighted aggregate. We assume hereafter that the matrix $\Sigma$ is diagonal, with diagonal entries $\sigma_1^2,\ldots,\sigma_n^2$ satisfying the following property:

\exists\sigma_*,\gamma>0\qquad\mbox{such that }\sigma_k^2=\sigma_*^2 k^{2\gamma}\bigl(1+o_k(1)\bigr)\qquad\mbox{as }k\to\infty.\qquad (26)

This kind of problem arises when $T$ is a differential operator or the Radon transform (Cavalier08, Section 1.3). To handle such situations, we define the groups in the same spirit as the weakly geometrically increasing blocks in CavalierTsybakov01. Let $\nu=\nu_n$ be a positive integer that increases as $n\to\infty$. Set $\rho_n=\nu_n^{-1/3}$ and define

T_j=\begin{cases}(1+\nu_n)^{j-1}-1, & j=1,2,\\ T_{j-1}+\bigl\lfloor\nu_n\rho_n(1+\rho_n)^{j-2}\bigr\rfloor, & j=3,4,\ldots,\end{cases}\qquad (27)

where $\lfloor x\rfloor$ stands for the largest integer strictly smaller than $x$. Let $J$ be the smallest integer $j$ such that $T_j\geq n$. We redefine $T_{J+1}=n$ and set $B_j=\{T_j+1,\ldots,T_{j+1}\}$ for all $j=1,\ldots,J$.
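A short sketch of the block construction (27) may help. It caps the last boundary at $n$ so that the $B_j$ partition $\{1,\ldots,n\}$, and it uses the ordinary integer part instead of the "largest integer strictly smaller than $x$" convention, a detail that does not affect the asymptotics; the function name is ours.

```python
def weakly_geometric_blocks(n, nu):
    """Blocks B_j = {T_j + 1, ..., T_{j+1}} built from (27): T_1 = 0, T_2 = nu,
    then T_j = T_{j-1} + floor(nu * rho * (1 + rho)**(j - 2)) with rho = nu**(-1/3)."""
    rho = nu ** (-1.0 / 3.0)
    T = [0, min(nu, n)]
    j = 2
    while T[-1] < n:
        j += 1
        T.append(min(T[-1] + int(nu * rho * (1 + rho) ** (j - 2)), n))
    return [list(range(T[i] + 1, T[i + 1] + 1)) for i in range(len(T) - 1)]
```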

Proposition 6.

Let the groups $B_1,\ldots,B_J$ be defined as above with $\nu_n$ satisfying $\log\nu_n/\log n\to 0$ and $\nu_n\to\infty$ as $n\to\infty$. Let $\bolds{\lambda}=(\alpha,w)\in\Lambda=\mathbb{R}_+^2$ and consider the prior

\pi(d\bolds{\lambda})=\frac{2n^{-\alpha/(2\alpha+2\gamma+1)}}{(1+n^{-\alpha/(2\alpha+2\gamma+1)}w)^3}e^{-\alpha}\,d\alpha\,dw.\qquad (28)

Then, in model (1) with diagonal covariance matrix $\Sigma=\operatorname{diag}(\sigma_k^2;1\leq k\leq n)$ satisfying condition (26), the aggregate $\hat{\mathbf{f}}_{\mathrm{GEWA}}$ (under setting 1) based on the temperatures $\beta_j=8\max_{i\in B_j}\sigma_i^2$ and the constituent estimators $\hat{\mathbf{f}}_{\alpha,w}=A_{\alpha,w}\mathbf{Y}$ (with $A_{\alpha,w}$ being the Pinsker filter) is adaptive in the exact minimax sense on the family of classes $\{\mathcal{F}(\alpha,R):\alpha>0,R>0\}$.

Note that this result provides an estimator attaining the optimal constant in the minimax sense when the unknown signal lies in an ellipsoid. This property holds because minimax estimators over ellipsoids are linear. For other subsets of $\mathbb{R}^n$, such as hyperrectangles, Besov bodies and so on, this is no longer true. However, as proved by Donoho, Liu and MacGibbon DonohoLiuMacGibbon90, for orthosymmetric quadratically convex sets the minimax linear estimators have a risk which is within $25\%$ of the minimax risk among all estimators. Therefore, following the approach developed here, it is also possible to prove that GEWA can lead to an adaptive estimator whose risk is within $25\%$ of the minimax risk for a broad class of hyperrectangles.

Figure 1: Test signals used in our experiments: Piece-Regular, Ramp, Piece-Polynomial, HeaviSine, Doppler and Blocks. (a) nonsmooth (Experiment I) and (b) smooth (Experiment II).

7 Experiments

In this section we present some numerical experiments on synthetic data, focusing only on the case of homoscedastic Gaussian noise ($\Sigma=\sigma^2 I_{n\times n}$) with known variance. A toolbox is freely available for download at http://josephsalmon.eu/code/index_codes.php. Additional details and numerical experiments can be found in DalalyanSalmon11b, SalmonDalalyan11c.

We evaluate different estimation routines on several 1D signals considered as benchmarks in the signal processing literature DonohoJohnstone94. The six signals retained for our experiments because of their diversity are depicted in Figure 1. Since these signals are nonsmooth, we have also carried out experiments on their smoothed versions, obtained by taking the antiderivative. Experiments on nonsmooth (resp., smooth) signals are referred to as Experiment I (resp., Experiment II). In both cases, prior to applying the estimation routines, we normalize the (true) sampled signal to have an empirical norm equal to one and use the DCT, denoted by $\bolds{\theta}(\mathbf{Y})=(\theta_1(\mathbf{Y}),\ldots,\theta_n(\mathbf{Y}))^\top$.

The four tested estimation routines—including EWA—are detailed below.

Soft-Thresholding (ST) DonohoJohnstone94: for a given shrinkage parameter $t$, the soft-thresholding estimator is $\widehat{\theta}_k=\operatorname{sgn}(\theta_k(\mathbf{Y}))(|\theta_k(\mathbf{Y})|-\sigma t)_+$. We use the data-driven threshold minimizing the Stein unbiased risk estimate DonohoJohnstone95.
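The following numpy sketch shows one standard way of computing the SURE-minimizing threshold by scanning the observed coefficient magnitudes; the exact routine used in our toolbox may differ in details, and the function name is ours.

```python
import numpy as np

def sure_soft_threshold(theta, sigma):
    """Soft-threshold the coefficients theta with the threshold (in noise units)
    minimizing Stein's unbiased risk estimate over the candidates |theta_k|/sigma."""
    n = len(theta)
    a = np.sort(np.abs(theta) / sigma)          # candidate thresholds
    cum = np.cumsum(a**2)
    k = np.arange(1, n + 1)
    # SURE(t) = n - 2*#{|theta_k| <= sigma t} + sum_k min(|theta_k|/sigma, t)^2
    sure = n - 2 * k + cum + (n - k) * a**2
    t = a[np.argmin(sure)]
    return np.sign(theta) * np.maximum(np.abs(theta) - sigma * t, 0.0)
```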

Blockwise James–Stein (BJS) shrinkage Cai99: the set $\{1,\ldots,n\}$ is partitioned into $N=[n/\log(n)]$ blocks $B_1,B_2,\ldots,B_N$ of nearly equal size $L$. The corresponding blocks of true coefficients $\theta_{B_k}(\mathbf{f})=(\theta_j(\mathbf{f}))_{j\in B_k}$ are then estimated by $\widehat{\theta}_{B_k}=(1-\frac{\lambda L\sigma^2}{S_k^2(\mathbf{Y})})_+\theta_{B_k}(\mathbf{Y})$, $k=1,\ldots,N$, where $\theta_{B_k}(\mathbf{Y})$ denotes the block of noisy coefficients, $S_k^2=\|\theta_{B_k}(\mathbf{Y})\|_2^2$ and $\lambda=4.50524$.
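A sketch of the BJS routine just described, with the block splitting and the constant $\lambda=4.50524$ as above (the helper name is ours):

```python
import numpy as np

def block_james_stein(theta, sigma, lam=4.50524):
    """Blockwise James-Stein shrinkage: N = [n / log n] blocks of nearly equal
    size, each block shrunk by the factor (1 - lam * L * sigma^2 / S_k^2)_+."""
    n = len(theta)
    N = max(int(n / np.log(n)), 1)
    out = np.empty_like(theta, dtype=float)
    for B in np.array_split(np.arange(n), N):
        L, S2 = len(B), np.sum(theta[B] ** 2)
        shrink = max(1.0 - lam * L * sigma**2 / S2, 0.0) if S2 > 0 else 0.0
        out[B] = shrink * theta[B]
    return out
```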

Unbiased risk estimate (URE) minimization with Pinsker's filters CavalierGolubevPicardTsybakov02: a Pinsker filter with data-driven parameters $\alpha$ and $w$ selected by minimizing an unbiased estimate of the risk over a suitably chosen grid of values of $\alpha$ and $w$. Here we use geometric grids ranging from $0.1$ to $100$ for $\alpha$ and from $1$ to $n$ for $w$.
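A sketch of the URE routine in the homoscedastic case, combining the Pinsker weights with the unbiased risk formula $\hat r=\|\mathbf{Y}-A\mathbf{Y}\|_n^2+(2\sigma^2/n)\operatorname{Tr}(A)-\sigma^2$; the orthonormal DCT convention and the grid handling are ours.

```python
import numpy as np
from scipy.fft import dct, idct

def ure_pinsker(Y, sigma, alphas, ws):
    """Pick (alpha, w) minimizing the unbiased risk estimate of A_{alpha,w} Y
    over a grid, then return the corresponding estimate (illustrative sketch)."""
    n = len(Y)
    theta = dct(Y, norm='ortho')
    k = np.arange(1, n + 1)
    best, best_ure = None, np.inf
    for alpha in alphas:
        for w in ws:
            d = np.clip(1.0 - k**alpha / w, 0.0, None)   # Pinsker weights
            ure = np.sum((1 - d)**2 * theta**2) / n + 2 * sigma**2 * d.sum() / n - sigma**2
            if ure < best_ure:
                best, best_ure = (alpha, w), ure
    alpha, w = best
    d = np.clip(1.0 - k**alpha / w, 0.0, None)
    return idct(d * theta, norm='ortho'), best
```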

EWA on Pinsker's filters: we consider the same finite family of linear filters (defined by Pinsker's filters) as in the URE routine described above. According to Proposition 1, this leads to an estimator nearly as accurate as the best Pinsker estimator in the given family.
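For a finite family, the exponential weights themselves reduce to a softmax of the rescaled unbiased risk estimates; a minimal, numerically stable sketch (names ours):

```python
import numpy as np

def ewa_weights(risk_estimates, n, beta):
    """Normalized exponential weights proportional to exp(-n * r_hat / beta)."""
    z = -n * np.asarray(risk_estimates, dtype=float) / beta
    z -= z.max()                      # subtract the max before exponentiating
    w = np.exp(z)
    return w / w.sum()

# f_hat_EWA is then sum_j w[j] * f_hat_j over the constituent (Pinsker) estimators,
# with, e.g., beta = 8 * sigma**2 as suggested by the theory.
```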

To report the results of our experiments, we have also computed the best linear smoother, hereafter referred to as the oracle, based on a Pinsker filter chosen among the candidates used for defining URE and EWA. By best smoother we mean the one minimizing the squared error (it can be computed since we know the ground truth). The results summarized in Table 1 for Experiment I and Table 2 for Experiment II correspond to the average, over 1000 trials, of the mean squared error (MSE) from which we subtract the MSE of the oracle, the difference being multiplied by the sample size. We report the results for $\sigma=0.33$ and for $n\in\{2^8,2^9,2^{10},2^{11}\}$.

The simulations show that EWA and URE have very comparable performance and are significantly more accurate than soft-thresholding and blockwise James–Stein (cf. Table 1) for every signal size $n$ considered. The improvements are particularly important when the signals have large peaks or discontinuities. In most cases EWA also outperforms URE, but the differences are less pronounced. One can also observe that, for smooth signals, the difference of MSEs between EWA and the oracle, multiplied by $n$, remains nearly constant as $n$ varies. This is in agreement with our theoretical results, in which the residual term decreases to zero inversely proportionally to $n$.

Of course, the soft-thresholding and blockwise James–Stein procedures were designed to be applied to the wavelet transform of a Besov-smooth function rather than to the Fourier transform of a Sobolev-smooth function. However, the point here is not to demonstrate the superiority of EWA over the ST and BJS procedures, but to stress the importance of sharp adaptivity, up to an optimal constant, rather than adaptivity merely in the sense of the rate of convergence. Indeed, the procedures ST and BJS are provably rate-adaptive when applied to the Fourier transform of a Sobolev-smooth function, but they are not sharp adaptive (they do not attain the optimal constant), whereas EWA and URE do.

Table 1: Evaluation of 4 adaptive methods on 6 (nonsmooth) signals. For each sample size and each method, we report the average value of $n(\operatorname{MSE}-\operatorname{MSE}_{\mathrm{Oracle}})$ and the corresponding standard deviation (in parentheses), for 1000 replications of the experiment.

                 Blocks                                Doppler
   n      EWA     URE     BJS      ST        EWA     URE     BJS      ST
  256    0.051   0.245   9.617   4.846      0.062   0.212  13.233   6.036
        (0.42)  (0.39)  (1.78)  (1.29)     (0.35)  (0.31)  (2.11)  (1.23)
  512   -0.052   0.302  13.807   9.256     -0.100   0.205  17.080  12.620
        (0.35)  (0.50)  (2.16)  (1.70)     (0.30)  (0.39)  (2.29)  (1.75)
 1024   -0.050   0.299  19.984  17.569     -0.107   0.270  21.862  23.006
        (0.36)  (0.46)  (2.68)  (2.17)     (0.35)  (0.41)  (2.92)  (2.35)
 2048   -0.007   0.362  28.948  30.447     -0.150   0.234  28.733  38.671
        (0.42)  (0.57)  (3.31)  (2.96)     (0.34)  (0.42)  (3.19)  (3.02)

                 HeaviSine                             Piece-Regular
   n      EWA     URE     BJS      ST        EWA     URE     BJS      ST
  256   -0.060   0.247   1.155   3.966     -0.069   0.248   8.883   4.879
        (0.19)  (0.42)  (0.57)  (1.12)     (0.32)  (0.40)  (1.76)  (1.20)
  512   -0.079   0.215   2.064   5.889     -0.105   0.237  12.147   9.793
        (0.19)  (0.39)  (0.86)  (1.36)     (0.30)  (0.37)  (2.28)  (1.64)
 1024   -0.059   0.240   3.120   8.685     -0.092   0.291  15.207  16.798
        (0.23)  (0.36)  (1.20)  (1.64)     (0.34)  (0.46)  (2.18)  (2.13)
 2048   -0.051   0.278   4.858  12.667     -0.059   0.283  21.543  27.387
        (0.25)  (0.48)  (1.42)  (2.03)     (0.34)  (0.54)  (2.47)  (2.77)

                 Ramp                                  Piece-Polynomial
   n      EWA     URE     BJS      ST        EWA     URE     BJS      ST
  256    0.038   0.294   6.933   5.644      0.017   0.203  12.201   3.988
        (0.37)  (0.47)  (1.54)  (1.20)     (0.37)  (0.37)  (1.81)  (1.19)
  512    0.010   0.293   9.712   9.977     -0.078   0.312  17.765   9.031
        (0.36)  (0.51)  (1.76)  (1.67)     (0.35)  (0.49)  (2.72)  (1.62)
 1024   -0.002   0.300  13.656  16.790     -0.026   0.321  23.321  17.565
        (0.30)  (0.45)  (2.25)  (2.06)     (0.38)  (0.48)  (2.96)  (2.28)
 2048    0.007   0.312  19.113  27.315     -0.007   0.314  31.550  29.461
        (0.34)  (0.50)  (2.68)  (2.61)     (0.41)  (0.49)  (3.05)  (2.95)
Table 2: Evaluation of 4 adaptive methods on 6 smoothed signals. For each sample size and each method, we report the average value of $n(\operatorname{MSE}-\operatorname{MSE}_{\mathrm{Oracle}})$ and the corresponding standard deviation (in parentheses), for 1000 replications of the experiment.

                 Blocks                                Doppler
   n      EWA     URE     BJS      ST        EWA     URE     BJS      ST
  256    0.387   0.216   0.216   2.278      0.214   0.237   1.608   2.777
        (0.43)  (0.40)  (0.24)  (0.98)     (0.23)  (0.40)  (0.73)  (1.04)
  512    0.170   0.209   0.650   3.193      0.165   0.250   1.200   3.682
        (0.20)  (0.41)  (0.25)  (1.07)     (0.20)  (0.44)  (0.48)  (1.24)
 1024    0.162   0.226   1.282   4.507      0.147   0.229   1.842   5.043
        (0.18)  (0.41)  (0.44)  (1.28)     (0.19)  (0.45)  (0.86)  (1.43)
 2048    0.120   0.220   1.574   6.107      0.138   0.229   1.864   6.584
        (0.17)  (0.37)  (0.55)  (1.55)     (0.20)  (0.40)  (1.07)  (1.58)

                 HeaviSine                             Piece-Regular
   n      EWA     URE     BJS      ST        EWA     URE     BJS      ST
  256    0.217   0.207   1.399   2.496      0.269   0.279   2.120   2.053
        (0.16)  (0.42)  (0.54)  (0.96)     (0.27)  (0.49)  (1.09)  (0.95)
  512    0.206   0.221   0.024   3.045      0.216   0.248   2.045   2.883
        (0.18)  (0.43)  (0.26)  (1.10)     (0.20)  (0.45)  (1.17)  (1.13)
 1024    0.179   0.200   0.113   3.905      0.183   0.228   1.251   3.780
        (0.18)  (0.50)  (0.27)  (1.27)     (0.20)  (0.41)  (0.70)  (1.37)
 2048    0.162   0.189   0.421   5.019      0.145   0.223   1.650   4.992
        (0.15)  (0.37)  (0.27)  (1.53)     (0.19)  (0.42)  (1.12)  (1.42)

                 Ramp                                  Piece-Polynomial
   n      EWA     URE     BJS      ST        EWA     URE     BJS      ST
  256    0.162   0.200   0.339   2.770      0.215   0.257   1.486   2.649
        (0.16)  (0.38)  (0.24)  (1.00)     (0.25)  (0.48)  (0.68)  (1.01)
  512    0.150   0.215   0.425   3.658      0.170   0.243   1.865   3.683
        (0.18)  (0.38)  (0.23)  (1.20)     (0.20)  (0.46)  (0.84)  (1.20)
 1024    0.146   0.211   0.935   4.815      0.179   0.236   1.547   5.017
        (0.18)  (0.39)  (0.33)  (1.35)     (0.20)  (0.47)  (1.02)  (1.38)
 2048    0.141   0.221   1.316   6.432      0.165   0.210   2.246   6.628
        (0.20)  (0.43)  (0.42)  (1.54)     (0.20)  (0.39)  (1.15)  (1.70)

8 Summary and future work

In this paper we have addressed the problem of aggregating a set of affine estimators in the context of regression with fixed design and heteroscedastic noise. Under some assumptions on the constituent estimators, we have proven that EWA with a suitably chosen temperature parameter satisfies a PAC-Bayesian type inequality, from which different types of oracle inequalities have been deduced. All these inequalities have leading constant one and a rate-optimal residual term. As an application of our results, we have shown that EWA acting on Pinsker's estimators produces an adaptive estimator in the exact minimax sense.

The next step on our agenda is to carry out an experimental evaluation of the proposed aggregate using the approximation schemes described by Dalalyan and Tsybakov DalalyanTsybakov12b, Rigollet and Tsybakov RigolletTsybakov11, RigolletTsybakov11b and Alquier and Lounici AlquierLounici10, with a special focus on problems involving large scale data.

Although we do not assume the covariance matrix $\Sigma$ of the noise to be known, our approach relies on an unbiased estimator of $\Sigma$ that is independent of the observed signal, as well as on an upper bound on the largest singular value of $\Sigma$. In some applications such information may be hard to obtain, and it can be helpful to relax the assumptions on $\widehat{\Sigma}$. This is another interesting avenue for future research, for which, we believe, the approach developed by Giraud Giraud08 can provide valuable guidance.

Appendix: Proofs of main theorems

We now develop the detailed proofs of the results stated in the manuscript.

.1 Stein’s lemma

The proofs of our main results rely on Stein's lemma Stein73, recalled below, providing an unbiased risk estimate for any estimator that depends sufficiently smoothly on the data vector $\mathbf{Y}$.

Lemma 1.

Let $\mathbf{Y}$ be a random vector drawn from the Gaussian distribution $\mathcal{N}_n(\mathbf{f},\Sigma)$. If the estimator $\hat{\mathbf{f}}$ is a.e. differentiable in $\mathbf{Y}$ and the elements of the matrix $\bolds{\nabla}\cdot\hat{\mathbf{f}}^\top:=(\partial_i\hat{\mathbf{f}}_j)$ have finite first moment, then

\hat{r}=\|\mathbf{Y}-\hat{\mathbf{f}}\|_n^2+\frac{2}{n}\operatorname{Tr}\bigl[\Sigma\bigl(\bolds{\nabla}\cdot\hat{\mathbf{f}}^\top\bigr)\bigr]-\frac{1}{n}\operatorname{Tr}[\Sigma]

is an unbiased estimate of $r$, that is, $\mathbb{E}[\hat{r}]=r$.

The proof can be found in Tsybakov09, page 157. We apply Stein's lemma to the affine estimators $\hat{\mathbf{f}}_\lambda=A_\lambda\mathbf{Y}+\mathbf{b}_\lambda$, with $A_\lambda$ an $n\times n$ deterministic real matrix and $\mathbf{b}_\lambda\in\mathbb{R}^n$ a deterministic vector. We get that if $\widehat{\Sigma}$ is an unbiased estimator of $\Sigma$, then $\hat{r}_\lambda^{\mathrm{unb}}=\|\mathbf{Y}-\hat{\mathbf{f}}_\lambda\|_n^2+\frac{2}{n}\operatorname{Tr}[\widehat{\Sigma}A_\lambda]-\frac{1}{n}\operatorname{Tr}[\widehat{\Sigma}]$ is an unbiased estimator of the risk $r_\lambda=\mathbb{E}[\|\hat{\mathbf{f}}_\lambda-\mathbf{f}\|_n^2]=\|(A_\lambda-I_{n\times n})\mathbf{f}+\mathbf{b}_\lambda\|_n^2+\frac{1}{n}\operatorname{Tr}[A_\lambda\Sigma A_\lambda^\top]$.
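Purely as an illustration of the last two formulas (and not part of the proof), a numpy sketch of the unbiased risk estimate of an affine estimator could read as follows; the function name is ours.

```python
import numpy as np

def unbiased_risk_affine(Y, A, b, Sigma_hat):
    """r_hat^unb = ||Y - A Y - b||_n^2 + (2/n) Tr(Sigma_hat A) - (1/n) Tr(Sigma_hat)."""
    n = len(Y)
    resid = Y - A @ Y - b
    return resid @ resid / n + (2 * np.trace(Sigma_hat @ A) - np.trace(Sigma_hat)) / n
```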

.2 An auxiliary result

Prior to proceeding with the proofs of the main theorems, we establish an auxiliary result that is the central ingredient of those proofs.

Lemma 2.

Let the assumptions of Lemma 1 be satisfied. Let $\{\hat{\mathbf{f}}_\lambda:\lambda\in\Lambda\}$ be a family of estimators of $\mathbf{f}$ and $\{\hat{r}_\lambda:\lambda\in\Lambda\}$ a family of risk estimates such that the mapping $\mathbf{Y}\mapsto(\hat{\mathbf{f}}_\lambda,\hat{r}_\lambda)$ is a.e. differentiable for every $\lambda\in\Lambda$. Let $\hat{r}_\lambda^{\mathrm{unb}}$ be the unbiased risk estimate of $\hat{\mathbf{f}}_\lambda$ given by Stein's lemma.

(1) For every $\pi\in\mathcal{P}_\Lambda$ and for any $\beta>0$, the estimator $\hat{\mathbf{f}}_{\mathrm{EWA}}$ defined as the average of $\hat{\mathbf{f}}_\lambda$ w.r.t. the probability measure

\hat{\pi}(\mathbf{Y},d\lambda)=\theta(\mathbf{Y},\lambda)\pi(d\lambda)\qquad\mbox{with }\theta(\mathbf{Y},\lambda)\propto\exp\bigl\{-n\hat{r}_\lambda(\mathbf{Y})/\beta\bigr\}

admits

\hat{r}_{\mathrm{EWA}}=\int_\Lambda\biggl(\hat{r}_\lambda^{\mathrm{unb}}-\|\hat{\mathbf{f}}_\lambda-\hat{\mathbf{f}}_{\mathrm{EWA}}\|_n^2-\frac{2n}{\beta}\bigl\langle\nabla_{\mathbf{Y}}\hat{r}_\lambda|\Sigma(\hat{\mathbf{f}}_\lambda-\hat{\mathbf{f}}_{\mathrm{EWA}})\bigr\rangle_n\biggr)\hat{\pi}(d\lambda)

as an unbiased estimator of the risk.

(2) If, furthermore, $\hat{r}_\lambda\geq\hat{r}_\lambda^{\mathrm{unb}}$ for all $\lambda\in\Lambda$ and $\int_\Lambda\langle n\nabla_{\mathbf{Y}}\hat{r}_\lambda|\Sigma(\hat{\mathbf{f}}_\lambda-\hat{\mathbf{f}}_{\mathrm{EWA}})\rangle_n\hat{\pi}(d\lambda)\geq-a\int_\Lambda\|\hat{\mathbf{f}}_\lambda-\hat{\mathbf{f}}_{\mathrm{EWA}}\|_n^2\hat{\pi}(d\lambda)$ for some constant $a>0$, then for every $\beta\geq 2a$ it holds that

\mathbb{E}\bigl[\|\hat{\mathbf{f}}_{\mathrm{EWA}}-\mathbf{f}\|_n^2\bigr]\leq\inf_{p\in\mathcal{P}_\Lambda}\biggl\{\int_\Lambda\mathbb{E}[\hat{r}_\lambda]p(d\lambda)+\frac{\beta\mathcal{K}(p,\pi)}{n}\biggr\}.\qquad (29)
Proof.

According to Stein's lemma, the quantity

\hat{r}_{\mathrm{EWA}}=\|\mathbf{Y}-\hat{\mathbf{f}}_{\mathrm{EWA}}\|_n^2+\frac{2}{n}\operatorname{Tr}\bigl[\Sigma\bigl(\bolds{\nabla}\cdot\hat{\mathbf{f}}_{\mathrm{EWA}}(\mathbf{Y})\bigr)\bigr]-\frac{1}{n}\operatorname{Tr}[\Sigma]\qquad (30)

is an unbiased estimate of the risk $r_n=\mathbb{E}[\|\hat{\mathbf{f}}_{\mathrm{EWA}}-\mathbf{f}\|_n^2]$. Using simple algebra, one checks that

\|\mathbf{Y}-\hat{\mathbf{f}}_{\mathrm{EWA}}\|_n^2=\int_\Lambda\bigl(\|\mathbf{Y}-\hat{\mathbf{f}}_\lambda\|_n^2-\|\hat{\mathbf{f}}_\lambda-\hat{\mathbf{f}}_{\mathrm{EWA}}\|_n^2\bigr)\hat{\pi}(d\lambda).\qquad (31)

By interchanging the integral and differential operators, we get the following relation: $\partial_{y_i}\hat{\mathbf{f}}_{\mathrm{EWA},j}=\int_\Lambda\{(\partial_{y_i}\hat{f}_{\lambda,j}(\mathbf{Y}))\theta(\mathbf{Y},\lambda)+\hat{f}_{\lambda,j}(\mathbf{Y})(\partial_{y_i}\theta(\mathbf{Y},\lambda))\}\pi(d\lambda)$. Then, combining this equality with equations (30) and (31) implies that

\hat{r}_{\mathrm{EWA}}=\int_\Lambda\bigl(\hat{r}_\lambda^{\mathrm{unb}}-\|\hat{\mathbf{f}}_\lambda-\hat{\mathbf{f}}_{\mathrm{EWA}}\|_n^2\bigr)\hat{\pi}(d\lambda)+\frac{2}{n}\int_\Lambda\operatorname{Tr}\bigl[\Sigma\hat{\mathbf{f}}_\lambda\nabla_{\mathbf{Y}}\theta(\mathbf{Y},\lambda)^\top\bigr]\pi(d\lambda).

After having interchanged differentiation and integration, we obtain that $\int_\Lambda\hat{\mathbf{f}}_{\mathrm{EWA}}(\nabla_{\mathbf{Y}}\theta(\mathbf{Y},\lambda))^\top\pi(d\lambda)=\hat{\mathbf{f}}_{\mathrm{EWA}}\nabla_{\mathbf{Y}}(\int_\Lambda\theta(\mathbf{Y},\lambda)\pi(d\lambda))=0$ and, therefore, we come up with the following expression for $\hat{r}_{\mathrm{EWA}}$:

\hat{r}_{\mathrm{EWA}}=\int_\Lambda\bigl(\hat{r}_\lambda^{\mathrm{unb}}-\|\hat{\mathbf{f}}_\lambda-\hat{\mathbf{f}}_{\mathrm{EWA}}\|_n^2+2\bigl\langle\nabla_{\mathbf{Y}}\log\theta(\lambda)|\Sigma(\hat{\mathbf{f}}_\lambda-\hat{\mathbf{f}}_{\mathrm{EWA}})\bigr\rangle_n\bigr)\hat{\pi}(d\lambda)
{}=\int_\Lambda\bigl(\hat{r}_\lambda^{\mathrm{unb}}-\|\hat{\mathbf{f}}_\lambda-\hat{\mathbf{f}}_{\mathrm{EWA}}\|_n^2-2n\beta^{-1}\bigl\langle\nabla_{\mathbf{Y}}\hat{r}_\lambda|\Sigma(\hat{\mathbf{f}}_\lambda-\hat{\mathbf{f}}_{\mathrm{EWA}})\bigr\rangle_n\bigr)\hat{\pi}(d\lambda).

This completes the proof of the first assertion of the lemma.

To prove the second assertion, let us observe that under the required condition and in view of the first assertion, for every $\beta\geq 2a$ it holds that $\hat{r}_{\mathrm{EWA}}\leq\int_\Lambda\hat{r}_\lambda^{\mathrm{unb}}\hat{\pi}(d\lambda)\leq\int_\Lambda\hat{r}_\lambda\hat{\pi}(d\lambda)\leq\int_\Lambda\hat{r}_\lambda\hat{\pi}(d\lambda)+\frac{\beta}{n}\mathcal{K}(\hat{\pi},\pi)$. To conclude, it suffices to remark that $\hat{\pi}$ is the probability measure minimizing the criterion $\int_\Lambda\hat{r}_\lambda p(d\lambda)+\frac{\beta}{n}\mathcal{K}(p,\pi)$ among all $p\in\mathcal{P}_\Lambda$. Thus, for every $p\in\mathcal{P}_\Lambda$, we have

\hat{r}_{\mathrm{EWA}}\leq\int_\Lambda\hat{r}_\lambda p(d\lambda)+\frac{\beta}{n}\mathcal{K}(p,\pi).

Taking the expectation of both sides yields the desired result.

.3 Proof of Theorem 1

Assertion (i). In what follows, we use the matrix shorthand $I=I_{n\times n}$ and $A_{\mathrm{EWA}}\triangleq\int_\Lambda A_\lambda\hat{\pi}(d\lambda)$. We apply Lemma 2 with $\hat{r}_\lambda=\hat{r}_\lambda^{\mathrm{unb}}$. To check the conditions of the second part of Lemma 2, note that in view of equations (6) and (8), as well as the assumptions $A_\lambda^\top=A_\lambda$ and $A_{\lambda'}\mathbf{b}_\lambda=0$, we get

\nabla_{\mathbf{Y}}\hat{r}_\lambda^{\mathrm{unb}}=\frac{2}{n}(I-A_\lambda)^\top(I-A_\lambda)\mathbf{Y}-\frac{2}{n}(I-A_\lambda)^\top\mathbf{b}_\lambda=\frac{2}{n}(I-A_\lambda)^2\mathbf{Y}-\frac{2}{n}\mathbf{b}_\lambda.

Recall now that for any pair of commuting matrices $P$ and $Q$ the identity $(I-P)^2=(I-Q)^2+2(I-\frac{P+Q}{2})(Q-P)$ holds true. Applying this identity to $P=A_\lambda$ and $Q=A_{\mathrm{EWA}}$ (in view of the commuting property of the $A_\lambda$'s), we get the relation $\langle(I-A_\lambda)^2\mathbf{Y}|\Sigma(A_\lambda-A_{\mathrm{EWA}})\mathbf{Y}\rangle_n=\langle(I-A_{\mathrm{EWA}})^2\mathbf{Y}|\Sigma(A_\lambda-A_{\mathrm{EWA}})\mathbf{Y}\rangle_n-2\langle(I-\frac{A_{\mathrm{EWA}}+A_\lambda}{2})(A_{\mathrm{EWA}}-A_\lambda)\mathbf{Y}|\Sigma(A_{\mathrm{EWA}}-A_\lambda)\mathbf{Y}\rangle_n$. When one integrates over $\Lambda$ with respect to the measure $\hat{\pi}$, the term involving the first scalar product on the right-hand side of the last equation vanishes. On the other hand,

\bigl\langle A_\lambda(A_{\mathrm{EWA}}-A_\lambda)\mathbf{Y}|\Sigma(A_{\mathrm{EWA}}-A_\lambda)\mathbf{Y}\bigr\rangle_n=\bigl\langle A_\lambda(\hat{\mathbf{f}}_{\mathrm{EWA}}-\hat{\mathbf{f}}_\lambda)|\Sigma(\hat{\mathbf{f}}_{\mathrm{EWA}}-\hat{\mathbf{f}}_\lambda)\bigr\rangle_n=\bigl\langle(\hat{\mathbf{f}}_{\mathrm{EWA}}-\hat{\mathbf{f}}_\lambda)|A_\lambda\Sigma(\hat{\mathbf{f}}_{\mathrm{EWA}}-\hat{\mathbf{f}}_\lambda)\bigr\rangle_n=\frac{1}{2n}(\hat{\mathbf{f}}_{\mathrm{EWA}}-\hat{\mathbf{f}}_\lambda)^\top(A_\lambda\Sigma+\Sigma A_\lambda)(\hat{\mathbf{f}}_{\mathrm{EWA}}-\hat{\mathbf{f}}_\lambda)\geq 0.

Since the positive semidefiniteness of the matrices $\Sigma A_\lambda+A_\lambda\Sigma$ implies that of the matrix $\Sigma A_{\mathrm{EWA}}+A_{\mathrm{EWA}}\Sigma$, we also have $\langle A_{\mathrm{EWA}}(A_{\mathrm{EWA}}-A_\lambda)\mathbf{Y}|\Sigma(A_{\mathrm{EWA}}-A_\lambda)\mathbf{Y}\rangle_n\geq 0$. Therefore,

\biggl\langle\biggl(I-\frac{A_{\mathrm{EWA}}+A_\lambda}{2}\biggr)(A_{\mathrm{EWA}}-A_\lambda)\mathbf{Y}\Big|\Sigma(A_{\mathrm{EWA}}-A_\lambda)\mathbf{Y}\biggr\rangle_n\leq\bigl\langle(\hat{\mathbf{f}}_{\mathrm{EWA}}-\hat{\mathbf{f}}_\lambda)|\Sigma(\hat{\mathbf{f}}_{\mathrm{EWA}}-\hat{\mathbf{f}}_\lambda)\bigr\rangle_n=\bigl\|\Sigma^{1/2}(\hat{\mathbf{f}}_{\mathrm{EWA}}-\hat{\mathbf{f}}_\lambda)\bigr\|_n^2.

This inequality implies that

\int_\Lambda\bigl\langle n\nabla_{\mathbf{Y}}\hat{r}_\lambda^{\mathrm{unb}}|\Sigma(\hat{\mathbf{f}}_\lambda-\hat{\mathbf{f}}_{\mathrm{EWA}})\bigr\rangle_n\hat{\pi}(d\lambda)\geq-4\int_\Lambda\bigl\|\Sigma^{1/2}(\hat{\mathbf{f}}_\lambda-\hat{\mathbf{f}}_{\mathrm{EWA}})\bigr\|_n^2\hat{\pi}(d\lambda).

Therefore, the claim of Theorem 1 holds true for every $\beta\geq 8|\!|\!|\Sigma|\!|\!|$.

Assertion (ii). Let now $\hat{\mathbf{f}}_\lambda=A_\lambda\mathbf{Y}+\mathbf{b}_\lambda$ with symmetric $A_\lambda\preceq I_{n\times n}$ and $\mathbf{b}_\lambda\in\operatorname{Ker}(A_\lambda)$. Using the definition $\hat{r}_\lambda^{\mathrm{adj}}=\hat{r}_\lambda^{\mathrm{unb}}+\frac{1}{n}\mathbf{Y}^\top(A_\lambda-A_\lambda^2)\mathbf{Y}$, one easily checks that $\hat{r}_\lambda^{\mathrm{adj}}\geq\hat{r}_\lambda^{\mathrm{unb}}$ for every $\lambda$ and that

\int_\Lambda\bigl\langle n\nabla\hat{r}_\lambda^{\mathrm{adj}}|\Sigma(\hat{\mathbf{f}}_\lambda-\hat{\mathbf{f}}_{\mathrm{EWA}})\bigr\rangle_n\hat{\pi}(d\lambda)=\int_\Lambda\bigl\langle 2(\mathbf{Y}-\hat{\mathbf{f}}_\lambda)|\Sigma(\hat{\mathbf{f}}_\lambda-\hat{\mathbf{f}}_{\mathrm{EWA}})\bigr\rangle_n\hat{\pi}(d\lambda)=-2\int_\Lambda\bigl\|\Sigma^{1/2}(\hat{\mathbf{f}}_\lambda-\hat{\mathbf{f}}_{\mathrm{EWA}})\bigr\|_n^2\hat{\pi}(d\lambda).

Therefore, if $\beta\geq 4|\!|\!|\Sigma|\!|\!|$, all the conditions required in the second part of Lemma 2 are fulfilled. Applying this lemma, we get the desired result.

.4 Proof of Theorem 2

We apply the result of assertion (ii) of Theorem 1 with the prior $\pi(d\lambda)$ replaced by the probability measure proportional to $e^{(2/\beta)\operatorname{Tr}[\widehat{\Sigma}(A_\lambda-A_\lambda^\top A_\lambda)]}\pi(d\lambda)$. This leads to

\mathbb{E}\bigl[\|\tilde{\mathbf{f}}_{\mathrm{SEWA}}-\mathbf{f}\|_n^2\bigr]\leq\inf_{p\in\mathcal{P}_\Lambda}\biggl\{\int_\Lambda\mathbb{E}\bigl[\|\hat{\mathbf{f}}_\lambda-\mathbf{f}\|_n^2\bigr]p(d\lambda)+\frac{\beta}{n}\mathcal{K}(p,\pi)\biggr\}+\frac{\beta}{n}\mathbb{E}\biggl[\log\int_\Lambda e^{(2/\beta)\operatorname{Tr}[\widehat{\Sigma}(A_\lambda-A_\lambda^\top A_\lambda)]}\pi(d\lambda)\biggr].

Condition (C) entails that the last term is always nonnegative and the result follows.

.5 Proof of Theorem 3

Let us place ourselves in setting 1. It is clear that $\mathbb{E}[\|\hat{\mathbf{f}}_{\mathrm{GEWA}}-\mathbf{f}\|_n^2]=\sum_{j=1}^J\mathbb{E}[\|\hat{\mathbf{f}}_{\mathrm{GEWA}}^j-\mathbf{f}^j\|_n^2]$. For each $j\in\{1,\ldots,J\}$, since $\beta_j\geq 8|\!|\!|\Sigma^j|\!|\!|$, one can apply assertion (i) of Theorem 1, which leads to the desired result. The case of setting 2 is handled in the same manner.

Acknowledgment

The authors thank Pierre Alquier for fruitful discussions.

Supplementary Material

Proofs of some propositions (DOI: 10.1214/12-AOS1038SUPP; .pdf, aos1038_supp.pdf). In this supplement we present the detailed proofs of Propositions 2–6.

References

  • (1) {barticle}[mr] \bauthor\bsnmAlquier, \bfnmPierre\binitsP. and \bauthor\bsnmLounici, \bfnmKarim\binitsK. (\byear2011). \btitlePAC-Bayesian bounds for sparse regression estimation with exponential weights. \bjournalElectron. J. Stat. \bvolume5 \bpages127–145. \biddoi=10.1214/11-EJS601, issn=1935-7524, mr=2786484 \bptnotecheck year\bptokimsref \endbibitem
  • (2) {barticle}[author] \bauthor\bsnmAmit, \bfnmY.\binitsY. and \bauthor\bsnmGeman, \bfnmD.\binitsD. (\byear1997). \btitleShape quantization and recognition with randomized trees. \bjournalNeural Comput. \bvolume9 \bpages1545–1588. \bptokimsref \endbibitem
  • (3) {binproceedings}[author] \bauthor\bsnmArlot, \bfnmS.\binitsS. and \bauthor\bsnmBach, \bfnmF.\binitsF. (\byear2009). \btitleData-driven calibration of linear estimators with minimal penalties. In \bbooktitleNIPS \bpages46–54. \bpublisherMIT Press, \blocationVancouver. \bptokimsref \endbibitem
  • (4) {binproceedings}[author] \bauthor\bsnmAudibert, \bfnmJ-Y.\binitsJ.-Y. (\byear2007). \btitleProgressive mixture rules are deviation suboptimal. In \bbooktitleNIPS \bpages41–48. \bpublisherMIT Press, \blocationVancouver. \bptokimsref \endbibitem
  • (5) {bmisc}[author] \bauthor\bsnmBaraud, \bfnmY.\binitsY., \bauthor\bsnmGiraud, \bfnmC.\binitsC. and \bauthor\bsnmHuet, \bfnmS.\binitsS. (\byear2010). \bhowpublishedEstimator selection in the Gaussian setting. Unpublished manuscript. \bptokimsref \endbibitem
  • (6) {barticle}[mr] \bauthor\bsnmBarron, \bfnmAndrew\binitsA., \bauthor\bsnmBirgé, \bfnmLucien\binitsL. and \bauthor\bsnmMassart, \bfnmPascal\binitsP. (\byear1999). \btitleRisk bounds for model selection via penalization. \bjournalProbab. Theory Related Fields \bvolume113 \bpages301–413. \biddoi=10.1007/s004400050210, issn=0178-8051, mr=1679028 \bptokimsref \endbibitem
  • (7) {barticle}[author] \bauthor\bsnmBreiman, \bfnmL.\binitsL. (\byear1996). \btitleBagging predictors. \bjournalMach. Learn. \bvolume24 \bpages123–140. \bptokimsref \endbibitem
  • (8) {barticle}[mr] \bauthor\bsnmBuades, \bfnmA.\binitsA., \bauthor\bsnmColl, \bfnmB.\binitsB. and \bauthor\bsnmMorel, \bfnmJ. M.\binitsJ. M. (\byear2005). \btitleA review of image denoising algorithms, with a new one. \bjournalMultiscale Model. Simul. \bvolume4 \bpages490–530. \biddoi=10.1137/040616024, issn=1540-3459, mr=2162865 \bptokimsref \endbibitem
  • (9) {barticle}[mr] \bauthor\bsnmBunea, \bfnmFlorentina\binitsF., \bauthor\bsnmTsybakov, \bfnmAlexandre B.\binitsA. B. and \bauthor\bsnmWegkamp, \bfnmMarten H.\binitsM. H. (\byear2007). \btitleAggregation for Gaussian regression. \bjournalAnn. Statist. \bvolume35 \bpages1674–1697. \biddoi=10.1214/009053606000001587, issn=0090-5364, mr=2351101 \bptokimsref \endbibitem
  • (10) {barticle}[mr] \bauthor\bsnmCai, \bfnmT. Tony\binitsT. T. (\byear1999). \btitleAdaptive wavelet estimation: A block thresholding and oracle inequality approach. \bjournalAnn. Statist. \bvolume27 \bpages898–924. \biddoi=10.1214/aos/1018031262, issn=0090-5364, mr=1724035 \bptokimsref \endbibitem
  • (11) {bbook}[mr] \bauthor\bsnmCatoni, \bfnmOlivier\binitsO. (\byear2004). \btitleStatistical Learning Theory and Stochastic Optimization. \bseriesLecture Notes in Math. \bvolume1851. \bpublisherSpringer, \blocationBerlin. \biddoi=10.1007/b99352, mr=2163920 \bptokimsref \endbibitem
  • (12) {barticle}[mr] \bauthor\bsnmCavalier, \bfnmL.\binitsL. (\byear2008). \btitleNonparametric statistical inverse problems. \bjournalInverse Problems \bvolume24 \bpages19. \biddoi=10.1088/0266-5611/24/3/034004, issn=0266-5611, mr=2421941 \bptokimsref \endbibitem
  • (13) {barticle}[mr] \bauthor\bsnmCavalier, \bfnmL.\binitsL., \bauthor\bsnmGolubev, \bfnmG. K.\binitsG. K., \bauthor\bsnmPicard, \bfnmD.\binitsD. and \bauthor\bsnmTsybakov, \bfnmA. B.\binitsA. B. (\byear2002). \btitleOracle inequalities for inverse problems. \bjournalAnn. Statist. \bvolume30 \bpages843–874. \biddoi=10.1214/aos/1028674843, issn=0090-5364, mr=1922543 \bptokimsref \endbibitem
  • (14) {barticle}[mr] \bauthor\bsnmCavalier, \bfnmLaurent\binitsL. and \bauthor\bsnmTsybakov, \bfnmAlexandre\binitsA. (\byear2002). \btitleSharp adaptation for inverse problems with random noise. \bjournalProbab. Theory Related Fields \bvolume123 \bpages323–354. \biddoi=10.1007/s004400100169, issn=0178-8051, mr=1918537 \bptokimsref \endbibitem
  • (15) {barticle}[mr] \bauthor\bsnmCavalier, \bfnmL.\binitsL. and \bauthor\bsnmTsybakov, \bfnmA. B.\binitsA. B. (\byear2001). \btitlePenalized blockwise Stein’s method, monotone oracles and sharp adaptive estimation. \bjournalMath. Methods Statist. \bvolume10 \bpages247–282. \bidissn=1066-5307, mr=1867161 \bptokimsref \endbibitem
  • (16) {bbook}[mr] \bauthor\bsnmCesa-Bianchi, \bfnmNicolò\binitsN. and \bauthor\bsnmLugosi, \bfnmGábor\binitsG. (\byear2006). \btitlePrediction, Learning, and Games. \bpublisherCambridge Univ. Press, \blocationCambridge. \biddoi=10.1017/CBO9780511546921, mr=2409394 \bptokimsref \endbibitem
  • (17) {bmisc}[author] \bauthor\bsnmDai, \bfnmD.\binitsD., \bauthor\bsnmRigollet, \bfnmP.\binitsP. and \bauthor\bsnmZhang, \bfnmT.\binitsT. (\byear2012). \bhowpublishedDeviation optimal learning using greedy QQ-aggregation. Ann. Statist. To appear. Available at arXiv:\arxivurl1203.2507. \bptokimsref \endbibitem
  • (18) {bincollection}[author] \bauthor\bsnmDai, \bfnmD.\binitsD. and \bauthor\bsnmZhang, \bfnmT.\binitsT. (\byear2011). \btitleGreedy model averaging. In \bbooktitleNIPS \bpages1242–1250. \bpublisherMIT Press, \blocationGranada. \bptokimsref \endbibitem
  • (19) {binproceedings}[author] \bauthor\bsnmDalalyan, \bfnmA. S.\binitsA. S. and \bauthor\bsnmSalmon, \bfnmJ.\binitsJ. (\byear2011). \btitleCompeting against the best nearest neighbor filter in regression. In \bbooktitleALT. \bseriesLecture Notes in Computer Science \bvolume6925 \bpages129–143. \bpublisherSpringer, \blocationBerlin. \bptokimsref \endbibitem
  • (20) {bmisc}[author] \bauthor\bsnmDalalyan, \bfnmA. S.\binitsA. S. and \bauthor\bsnmSalmon, \bfnmJ.\binitsJ. (\byear2012). \bhowpublishedSupplement to “Sharp oracle inequalities for aggregation of affine estimators.” DOI:\doiurl10.1214/12-AOS1038SUPP. \bptokimsref \endbibitem
  • (21) {bincollection}[mr] \bauthor\bsnmDalalyan, \bfnmArnak S.\binitsA. S. and \bauthor\bsnmTsybakov, \bfnmAlexandre B.\binitsA. B. (\byear2007). \btitleAggregation by exponential weighting and sharp oracle inequalities. In \bbooktitleLearning Theory. \bseriesLecture Notes in Computer Science \bvolume4539 \bpages97–111. \bpublisherSpringer, \blocationBerlin. \biddoi=10.1007/978-3-540-72927-3_9, mr=2397581 \bptokimsref \endbibitem
  • (22) {barticle}[author] \bauthor\bsnmDalalyan, \bfnmA. S.\binitsA. S. and \bauthor\bsnmTsybakov, \bfnmA. B.\binitsA. B. (\byear2008). \btitleAggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity. \bjournalMach. Learn. \bvolume72 \bpages39–61. \bptokimsref \endbibitem
  • (23) {barticle}[mr] \bauthor\bsnmDalalyan, \bfnmA. S.\binitsA. S. and \bauthor\bsnmTsybakov, \bfnmA. B.\binitsA. B. (\byear2012). \btitleSparse regression learning by aggregation and Langevin Monte-Carlo. \bjournalJ. Comput. System Sci. \bvolume78 \bpages1423–1443. \biddoi=10.1016/j.jcss.2011.12.023, issn=0022-0000, mr=2926142 \bptokimsref \endbibitem
  • (24) {barticle}[author] \bauthor\bsnmDalalyan, \bfnmA. S.\binitsA. S. and \bauthor\bsnmTsybakov, \bfnmA. B.\binitsA. B. (\byear2012). \btitleMirror averaging with sparsity priors. \bjournalBernoulli \bvolume18 \bpages914–944. \bptokimsref \endbibitem
  • (25) {barticle}[mr] \bauthor\bsnmDonoho, \bfnmDavid L.\binitsD. L. and \bauthor\bsnmJohnstone, \bfnmIain M.\binitsI. M. (\byear1994). \btitleIdeal spatial adaptation by wavelet shrinkage. \bjournalBiometrika \bvolume81 \bpages425–455. \biddoi=10.1093/biomet/81.3.425, issn=0006-3444, mr=1311089 \bptokimsref \endbibitem
  • (26) {barticle}[mr] \bauthor\bsnmDonoho, \bfnmDavid L.\binitsD. L. and \bauthor\bsnmJohnstone, \bfnmIain M.\binitsI. M. (\byear1995). \btitleAdapting to unknown smoothness via wavelet shrinkage. \bjournalJ. Amer. Statist. Assoc. \bvolume90 \bpages1200–1224. \bidissn=0162-1459, mr=1379464 \bptokimsref \endbibitem
  • (27) {barticle}[mr] \bauthor\bsnmDonoho, \bfnmDavid L.\binitsD. L., \bauthor\bsnmLiu, \bfnmRichard C.\binitsR. C. and \bauthor\bsnmMacGibbon, \bfnmBrenda\binitsB. (\byear1990). \btitleMinimax risk over hyperrectangles, and implications. \bjournalAnn. Statist. \bvolume18 \bpages1416–1437. \biddoi=10.1214/aos/1176347758, issn=0090-5364, mr=1062717 \bptokimsref \endbibitem
  • (28) {barticle}[mr] \bauthor\bsnmEfromovich, \bfnmSam\binitsS. and \bauthor\bsnmPinsker, \bfnmMark\binitsM. (\byear1996). \btitleSharp-optimal and adaptive estimation for heteroscedastic nonparametric regression. \bjournalStatist. Sinica \bvolume6 \bpages925–942. \bidissn=1017-0405, mr=1422411 \bptokimsref \endbibitem
  • (29) {barticle}[mr] \bauthor\bsnmEfroĭmovich, \bfnmS. Yu.\binitsS. Y. and \bauthor\bsnmPinsker, \bfnmM. S.\binitsM. S. (\byear1984). \btitleA self-training algorithm for nonparametric filtering. \bjournalAvtomat. i Telemekh. \bvolume11 \bpages58–65. \bidissn=0005-2310, mr=0797991 \bptokimsref \endbibitem
  • (30) {binproceedings}[author] \bauthor\bsnmFreund, \bfnmY.\binitsY. (\byear1990). \btitleBoosting a weak learning algorithm by majority. In \bbooktitleCOLT \bpages202–216. \bpublisherMorgan Kaufmann, \blocationRochester. \bptokimsref \endbibitem
  • (31) {barticle}[mr] \bauthor\bsnmGaïffas, \bfnmStéphane\binitsS. and \bauthor\bsnmLecué, \bfnmGuillaume\binitsG. (\byear2011). \btitleHyper-sparse optimal aggregation. \bjournalJ. Mach. Learn. Res. \bvolume12 \bpages1813–1833. \bidissn=1532-4435, mr=2819018 \bptokimsref \endbibitem
  • (32) {barticle}[mr] \bauthor\bsnmGeorge, \bfnmEdward I.\binitsE. I. (\byear1986). \btitleMinimax multiple shrinkage estimation. \bjournalAnn. Statist. \bvolume14 \bpages188–205. \biddoi=10.1214/aos/1176349849, issn=0090-5364, mr=0829562 \bptokimsref \endbibitem
  • (33) {barticle}[author] \bauthor\bsnmGerchinovitz, \bfnmS.\binitsS. (\byear2011). \btitleSparsity regret bounds for individual sequences in online linear regression. \bjournalJ. Mach. Learn. Res. \bvolume19 \bpages377–396. \bptokimsref \endbibitem
  • (34) {barticle}[mr] \bauthor\bsnmGiraud, \bfnmChristophe\binitsC. (\byear2008). \btitleMixing least-squares estimators when the variance is unknown. \bjournalBernoulli \bvolume14 \bpages1089–1107. \biddoi=10.3150/08-BEJ135, issn=1350-7265, mr=2543587 \bptokimsref \endbibitem
  • (35) {barticle}[mr] \bauthor\bsnmGoldenshluger, \bfnmAlexander\binitsA. and \bauthor\bsnmLepski, \bfnmOleg\binitsO. (\byear2008). \btitleUniversal pointwise selection rule in multivariate function estimation. \bjournalBernoulli \bvolume14 \bpages1150–1190. \biddoi=10.3150/08-BEJ144, issn=1350-7265, mr=2543590 \bptokimsref \endbibitem
  • (36) {barticle}[mr] \bauthor\bsnmGolubev, \bfnmYuri\binitsY. (\byear2010). \btitleOn universal oracle inequalities related to high-dimensional linear models. \bjournalAnn. Statist. \bvolume38 \bpages2751–2780. \biddoi=10.1214/10-AOS803, issn=0090-5364, mr=2722455 \bptokimsref \endbibitem
  • (37) {barticle}[mr] \bauthor\bsnmJuditsky, \bfnmAnatoli\binitsA. and \bauthor\bsnmNemirovski, \bfnmArkadii\binitsA. (\byear2000). \btitleFunctional aggregation for nonparametric regression. \bjournalAnn. Statist. \bvolume28 \bpages681–712. \biddoi=10.1214/aos/1015951994, issn=0090-5364, mr=1792783 \bptokimsref \endbibitem
  • (38) {barticle}[mr] \bauthor\bsnmJuditsky, \bfnmAnatoli\binitsA. and \bauthor\bsnmNemirovski, \bfnmArkadi\binitsA. (\byear2009). \btitleNonparametric denoising of signals with unknown local structure. I. Oracle inequalities. \bjournalAppl. Comput. Harmon. Anal. \bvolume27 \bpages157–179. \biddoi=10.1016/j.acha.2009.02.001, issn=1063-5203, mr=2543191 \bptokimsref \endbibitem
  • (39) {bincollection}[mr] \bauthor\bsnmKivinen, \bfnmJyrki\binitsJ. and \bauthor\bsnmWarmuth, \bfnmManfred K.\binitsM. K. (\byear1999). \btitleAveraging expert predictions. In \bbooktitleComputational Learning Theory (Nordkirchen, 1999). \bseriesLecture Notes in Computer Science \bvolume1572 \bpages153–167. \bpublisherSpringer, \blocationBerlin. \biddoi=10.1007/3-540-49097-3_13, mr=1724987 \bptokimsref \endbibitem
  • (40) {barticle}[mr] \bauthor\bsnmKneip, \bfnmAlois\binitsA. (\byear1994). \btitleOrdered linear smoothers. \bjournalAnn. Statist. \bvolume22 \bpages835–866. \biddoi=10.1214/aos/1176325498, issn=0090-5364, mr=1292543 \bptokimsref \endbibitem
  • (41) {barticle}[mr] \bauthor\bsnmLanckriet, \bfnmGert R. G.\binitsG. R. G., \bauthor\bsnmCristianini, \bfnmNello\binitsN., \bauthor\bsnmBartlett, \bfnmPeter\binitsP., \bauthor\bsnmEl Ghaoui, \bfnmLaurent\binitsL. and \bauthor\bsnmJordan, \bfnmMichael I.\binitsM. I. (\byear2003/04). \btitleLearning the kernel matrix with semidefinite programming. \bjournalJ. Mach. Learn. Res. \bvolume5 \bpages27–72. \bidissn=1532-4435, mr=2247973 \bptokimsref \endbibitem
  • (42) {binproceedings}[author] \bauthor\bsnmLangford, \bfnmJ.\binitsJ. and \bauthor\bsnmShawe-Taylor, \bfnmJ.\binitsJ. (\byear2002). \btitlePAC-Bayes & margins. In \bbooktitleNIPS \bpages423–430. \bpublisherMIT Press, \blocationVancouver. \bptokimsref \endbibitem
  • (43) {bmisc}[author] \bauthor\bsnmLecué, \bfnmG.\binitsG. and \bauthor\bsnmMendelson, \bfnmS.\binitsS. (\byear2012). \bhowpublishedOn the optimality of the aggregate with exponential weights for low temperatures. Bernoulli. To appear. \bptokimsref \endbibitem
  • (44) {bmisc}[author] \bauthor\bsnmLeung, \bfnmG.\binitsG. (\byear2004). \bhowpublishedInformation theory and mixing least squares regression. Ph.D. thesis, Yale Univ. \bptokimsref \endbibitem
  • (45) {barticle}[mr] \bauthor\bsnmLeung, \bfnmGilbert\binitsG. and \bauthor\bsnmBarron, \bfnmAndrew R.\binitsA. R. (\byear2006). \btitleInformation theory and mixing least-squares regressions. \bjournalIEEE Trans. Inform. Theory \bvolume52 \bpages3396–3410. \biddoi=10.1109/TIT.2006.878172, issn=0018-9448, mr=2242356 \bptokimsref \endbibitem
  • (46) {barticle}[mr] \bauthor\bsnmLounici, \bfnmK.\binitsK. (\byear2007). \btitleGeneralized mirror averaging and DD-convex aggregation. \bjournalMath. Methods Statist. \bvolume16 \bpages246–259. \biddoi=10.3103/S1066530707030040, issn=1066-5307, mr=2356820 \bptokimsref \endbibitem
  • (47) {binproceedings}[mr] \bauthor\bsnmMcAllester, \bfnmDavid A.\binitsD. A. (\byear1998). \btitleSome PAC-Bayesian theorems. In \bbooktitleProceedings of the Eleventh Annual Conference on Computational Learning Theory (Madison, WI, 1998) \bpages230–234 (electronic). \bpublisherACM, \blocationNew York. \biddoi=10.1145/279943.279989, mr=1811587 \bptokimsref \endbibitem
  • (48) {bincollection}[mr] \bauthor\bsnmNemirovski, \bfnmArkadi\binitsA. (\byear2000). \btitleTopics in non-parametric statistics. In \bbooktitleLectures on Probability Theory and Statistics (Saint-Flour, 1998). \bseriesLecture Notes in Math. \bvolume1738 \bpages85–277. \bpublisherSpringer, \blocationBerlin. \bidmr=1775640 \bptokimsref \endbibitem
  • (49) {barticle}[mr] \bauthor\bsnmPinsker, \bfnmM. S.\binitsM. S. \btitleOptimal filtration of square-integrable signals in Gaussian noise. \bjournalProbl. Peredachi Inf. \bvolume16 \bpages52–68. \bidmr=0624591 \bptokimsref \endbibitem
  • (50) Polzehl, J. and Spokoiny, V. G. (2000). Adaptive weights smoothing with applications to image restoration. J. R. Stat. Soc. Ser. B Stat. Methodol. 62 335–354. doi:10.1111/1467-9868.00235, MR1749543.
  • (51) Rigollet, P. (2012). Kullback–Leibler aggregation and misspecified generalized linear models. Ann. Statist. 40 639–665.
  • (52) Rigollet, P. and Tsybakov, A. (2011). Exponential screening and optimal rates of sparse estimation. Ann. Statist. 39 731–771. doi:10.1214/10-AOS854, MR2816337.
  • (53) Rigollet, P. and Tsybakov, A. B. (2007). Linear and convex aggregation of density estimators. Math. Methods Statist. 16 260–280. doi:10.3103/S1066530707030052, MR2356821.
  • (54) Rigollet, P. and Tsybakov, A. B. (2011). Sparse estimation by exponential weighting. Unpublished manuscript.
  • (55) Salmon, J. and Dalalyan, A. S. (2011). Optimal aggregation of affine estimators. J. Mach. Learn. Res. 19 635–660.
  • (56) Salmon, J. and Le Pennec, E. (2009). NL-Means and aggregation procedures. In ICIP 2977–2980. IEEE, Cairo.
  • (57) Seeger, M. (2003). PAC-Bayesian generalisation error bounds for Gaussian process classification. J. Mach. Learn. Res. 3 233–269. doi:10.1162/153244303765208377, MR1971338.
  • (58) Shawe-Taylor, J. and Cristianini, N. (2000). An Introduction to Support Vector Machines: And Other Kernel-Based Learning Methods. Cambridge Univ. Press, Cambridge.
  • (59) Stein, C. M. (1973). Estimation of the mean of a multivariate distribution. In Proc. Prague Symp. Asymptotic Statist. Charles Univ., Prague.
  • (60) Tsybakov, A. B. (2003). Optimal rates of aggregation. In COLT 303–313. Springer, Washington, DC.
  • (61) Tsybakov, A. B. (2009). Introduction to Nonparametric Estimation. Springer, New York. doi:10.1007/b13794, MR2724359.
  • (62) Wang, Z., Paterlini, S., Gao, F. and Yang, Y. (2012). Adaptive minimax estimation over sparse ℓq-hulls. Technical report. Available at arXiv:1108.1961v4 [math.ST].
  • (63) Yang, Y. (2000). Combining different procedures for adaptive regression. J. Multivariate Anal. 74 135–161. doi:10.1006/jmva.1999.1884, MR1790617.
  • (64) Yang, Y. (2003). Regression with multiple candidate models: Selecting or mixing? Statist. Sinica 13 783–809. MR1997174.
  • (65) Yang, Y. (2004). Aggregating regression procedures to improve performance. Bernoulli 10 25–47. doi:10.3150/bj/1077544602, MR2044592.
  • (66) Yuan, M. and Lin, Y. (2006). Model selection and estimation in regression with grouped variables. J. R. Stat. Soc. Ser. B Stat. Methodol. 68 49–67. doi:10.1111/j.1467-9868.2005.00532.x, MR2212574.