
A Note on Parameter Estimation for Misspecified Regression Models with Heteroskedastic Errors

James P. Long (jlong@stat.tamu.edu)
Department of Statistics, Texas A&M University
3143 TAMU, College Station, TX 77843-3143
(2017)
Abstract

Misspecified models often provide useful information about the true data generating distribution. For example, if $y$ is a non–linear function of $x$, the least squares estimator $\widehat{\beta}$ is an estimate of $\beta$, the slope of the best linear approximation to the non–linear function. Motivated by problems in astronomy, we study how to incorporate observation measurement error variances into fitting parameters of misspecified models. Our asymptotic theory focuses on the particular case of linear regression where often weighted least squares procedures are used to account for heteroskedasticity. We find that when the response is a non–linear function of the independent variable, the standard procedure of weighting by the inverse of the observation variances can be counter–productive. In particular, ordinary least squares may have lower asymptotic variance. We construct an adaptive estimator which has lower asymptotic variance than either OLS or standard WLS. We demonstrate our theory in a small simulation and apply these ideas to the problem of estimating the period of a periodic function using a sinusoidal model.

MSC classifications: 62J05, 62F10.

Keywords: heteroskedasticity, model misspecification, approximate models, weighted least squares, sandwich estimators, astrostatistics.

The author thanks the Editor and two reviewers for their constructive comments.

Long's work was supported by a faculty startup grant from Texas A&M University.

1 Introduction

Misspecified models are common. In prediction problems, simple, misspecified models may be used instead of complex models with many parameters in order to avoid overfitting. In big data problems, true models may be computationally intractable, leading to model simplifications which induce some level of misspecification. In many scientific domains there exist sets of well established models with fast computer implementations. A practitioner with a particular data set may have to choose between using one of these models (even when none are exactly appropriate) and devising, testing and implementing a new model. Pressed for time, the practitioner may use an existing misspecified model. In this work we study how to fit a misspecified linear regression model with heteroskedastic measurement error. Problems involving heteroskedastic measurement error and misspecified models are common in astronomy. We discuss an example in Section 2.

Suppose $x_{i} \in \mathbb{R}^{p} \sim F_{X}$, independent across $i$, and $\sigma_{i} \in \mathbb{R} \sim F_{\sigma}$, independent across $i$, for $1 \leq i \leq n$. Suppose

$$y_{i} = f(x_{i}) + \sigma_{i}\epsilon_{i}$$

where $\epsilon_{i} \sim F_{\epsilon}$ with $\mathbb{E}[\epsilon_{i}] = 0$ and $\text{Var}(\epsilon_{i}) = 1$ for all $i$, independent across $i$ and independent of $x_{i}$ and $\sigma_{i}$. Define

$$\beta \equiv \underset{\beta}{\operatorname{argmin}}\ \mathbb{E}[(f(x) - x^{T}\beta)^{2}] = \mathbb{E}[xx^{T}]^{-1}\mathbb{E}[xf(x)].$$

The parameter $\beta$ is the slope of the best fitting least squares line. The parameter $\beta$ may be of interest in several situations. For example, $\beta$ minimizes mean squared error in predicting $y$ from $x$ among all linear functions, i.e. $\beta = \underset{\beta}{\operatorname{argmin}}\ \mathbb{E}[(y - x^{T}\beta)^{2}]$. Define $g(x) = f(x) - x^{T}\beta$. The function $g$ is the non–linear component of $f$.
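To make the definition concrete, here is a minimal Monte Carlo sketch (our own illustration, not the paper's code) that approximates $\beta = \mathbb{E}[xx^{T}]^{-1}\mathbb{E}[xf(x)]$ for the quadratic $f$ and uniform design used later in Section 4.1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Best linear approximation to f(x) = x^2 with an intercept and x ~ Unif(0, 1).
# For this f, beta = (-1/6, 1), as stated in the simulation of Section 4.1.
n = 200_000
x = rng.uniform(0.0, 1.0, size=n)
X = np.column_stack([np.ones(n), x])   # rows are x_i^T (intercept, slope)
fx = x ** 2                            # true regression function f(x)

ExxT = X.T @ X / n                     # Monte Carlo estimate of E[x x^T]
Exf = X.T @ fx / n                     # Monte Carlo estimate of E[x f(x)]
print(np.linalg.solve(ExxT, Exf))      # approximately [-1/6, 1]
```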

When the model is correctly specified (i.e. $g(x) \equiv 0$), weighted least squares (WLS) using the inverse of the observation variances as weights is asymptotically normal and has minimum asymptotic variance among all WLS estimators. In the case with model misspecification and $x_{i}$, $\sigma_{i}$ independent, we show that WLS estimators remain asymptotically normal. However, weighting by the inverse of the observation variances can result in a larger asymptotic variance than other weightings, including ordinary least squares. Using the asymptotic variance formula, we determine an optimal weighting which has lower asymptotic variance than standard WLS (using the inverse of the observation variances as weights) and OLS. The optimal weighting function has the form $w(\sigma) = (\sigma^{2} + \Delta)^{-1}$ where $\Delta \geq 0$ is a function of the degree of model misspecification and the design. We find adaptive estimators for $w$ in the cases where the error variances are assumed known and where the error variances belong to one of $M$ groups with group membership known. We also briefly consider the case where $x_{i}$ and $\sigma_{i}$ are dependent. In this setting the OLS estimator is consistent but weighted estimators are generally not.

This work is organized as follows. In Section 2 we introduce a motivating problem from astronomy and offer some heuristic thinking about misspecified models and heteroskedasticity. For those readers primarily interested in the statistical theory, Section 2 can be skipped. In Section 3 we review some relevant literature and develop asymptotic results for the linear model. We present results for simulated data and the astronomy application in Section 4. We conclude in Section 5.

2 Misspecified Models and Heteroskedastic Error in Astronomy

Periodic variables are stars that vary in brightness periodically over time. Figure 1a shows the brightness of a single periodic variable star over time. This is known as the light curve of the star. Two sigma uncertainties are plotted as vertical bars around each point. Magnitude is inversely proportional to brightness, so lower magnitudes are plotted higher on the y–axis. This is a periodic variable so the changes in brightness over time are periodic. Using this data one may estimate a period for the star. When we plot the brightness measurements as time modulo period (Figure 1b), the pattern in brightness variation becomes clear. Periodic variables play an important role in several areas of astronomy including extra–galactic distance determination and estimation of the Hubble constant [26, 21]. Modern surveys, such as OGLE-III, have collected hundreds of thousands of periodic variable star light curves [28].

Figure 1: (a) Unfolded light curve: an SDSS-III RR Lyrae light curve. (b) Folded light curve (x–axis is time modulo period) after estimating the period using the data in (a).

Accurate period estimation algorithms are necessary for creating the folded light curve (Figure 1b). A common procedure for determining the period is to perform maximum likelihood estimation using some parametric model for light curve variation. One popular model choice is a sinusoid with $K$ harmonics. Let the data for a single periodic variable be $D = \{(t_{i}, y_{i}, \sigma_{i})\}_{i=1}^{n}$ where $y_{i}$ is the brightness at time $t_{i}$, measured with known uncertainty $\sigma_{i}$. Magnitude variation is modeled as

$$y_{i} = \beta_{0} + \sum_{k=1}^{K} a_{k}\sin(k\omega t_{i} + \phi_{k}) + \sigma_{i}\epsilon_{i} \qquad (2.1)$$

where $\epsilon_{i} \sim N(0,1)$ independent across $i$. Here $\omega$ is the frequency, $a_{k}$ is the amplitude of the $k$th harmonic, and $\phi_{k}$ is the phase of the $k$th harmonic. Let $a = (a_{1},\ldots,a_{K})$ and $\phi = (\phi_{1},\ldots,\phi_{K})$. Let $\Omega$ be a grid of possible frequencies. The maximum likelihood estimate for frequency is

$$\widehat{\omega} = \underset{\omega\in\Omega}{\operatorname{argmin}}\ \min_{a,\phi,\beta_{0}} \sum_{i=1}^{n}\left(\frac{y_{i} - \beta_{0} - \sum_{k=1}^{K} a_{k}\sin(k\omega t_{i} + \phi_{k})}{\sigma_{i}}\right)^{2}. \qquad (2.2)$$

Generalized Lomb–Scargle (GLS) is equivalent to this estimator with $K = 1$ [32]. The analysis of variance periodogram in [23] uses this model with a fast algorithm for computing $\widehat{\omega}$.

We used estimator (2.2) with $K = 1, 2$ to determine the period of the light curve in Figure 1a. The estimates for period were essentially the same for both $K = 1$ and $K = 2$, so in Figure 1b we folded the light curve using the $K = 1$ estimate. The solid orange line is the maximum likelihood fit for the $K = 1$ model (notice the sinusoidal shape). The blue dashed line is for the $K = 2$ model.

While the period estimates are accurate, both models are misspecified. In particular, note that the vertical lines around the brightness measurements are four standard deviations ($4\sigma_{i}$) in width. If the model is correct, we would expect about 95% of these intervals to contain the maximum likelihood fitted curves. For the $K = 1$ model, 10% of the intervals contain the fitted curve. For $K = 2$, 37% of the intervals contain the ML fitted curve. The source of model misspecification is the light curve shape, which cannot be perfectly represented by a sinusoid with $K = 1, 2$ harmonics. The light curve has a long, slow decline and a sudden, sharp increase in brightness.

The parameter fits of misspecified models are estimates of an approximation. In the $K = 1$ case, the parameter fits are the orange line in Figure 1b and the approximation is the sinusoid which is closest to the true light curve shape. In many cases this approximation may be useful. For example, the period of the approximation may match the period of the light curve.

When fitting a misspecified model with heteroskedastic measurement error, one should choose a weighting which ensures the estimator has small variance and thus is likely close to the approximation. The use of the inverse of the observation variances as weights (in Equation (2.3)) is motivated by maximum likelihood theory under the assumption that the model is correct. However as we show in Section 3 for the linear model, these weights are generally not optimal when there is model misspecification.

As a thought experiment, consider the case where one observation has extremely small variance and other observations have much larger variance. The maximum likelihood fitted curve for this data will be very close to the observation with small variance. However the best sinusoidal approximation to the true function at this point may not be particularly close to the true function. Thus using the inverse of observation variances as weights may overweight observations with small variance in the case of model misspecification. We make these ideas precise in Section 3.3.

The choice of weights is not critical for the light curve in Figure 1a because it is well sampled ($n > 50$), so the period is easy to determine. However, in many other cases light curves are more poorly sampled ($n \approx 20$), in which case weighting may affect period estimation accuracy.

2.1 Sinusoidal Fit and Linear Models

Finding the best fitting sinusoid is closely related to fitting a linear model. Using the sine angle addition formula we can rewrite the maximum likelihood estimator from Equation (2.2) as

$$\underset{\omega\in\Omega}{\operatorname{argmin}}\ \min_{a,\phi,\beta_{0}} \sum_{i=1}^{n}\left(\frac{y_{i} - \sum_{k=1}^{K}(a_{k}\cos(\phi_{k})\sin(k\omega t_{i}) + a_{k}\sin(\phi_{k})\cos(k\omega t_{i})) - \beta_{0}}{\sigma_{i}}\right)^{2}.$$

The sum over $i$ can be simplified by noting the linearity of the model and reparameterizing. Let $Y = (y_{1},\ldots,y_{n})^{T}$. Let $\beta_{k1} = a_{k}\cos(\phi_{k})$ and $\beta_{k2} = a_{k}\sin(\phi_{k})$. Define $\beta = (\beta_{0}, \beta_{11}, \beta_{12}, \ldots, \beta_{K1}, \beta_{K2})^{T} \in \mathbb{R}^{2K+1}$. Let $\Sigma$ be an $n\times n$ diagonal matrix where $\Sigma_{ii} = \sigma_{i}^{2}$. Define

$$X(\omega) = \begin{pmatrix} 1 & \sin(\omega t_{1}) & \cos(\omega t_{1}) & \dots & \sin(K\omega t_{1}) & \cos(K\omega t_{1}) \\ 1 & \sin(\omega t_{2}) & \cos(\omega t_{2}) & \dots & \sin(K\omega t_{2}) & \cos(K\omega t_{2}) \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 1 & \sin(\omega t_{n}) & \cos(\omega t_{n}) & \dots & \sin(K\omega t_{n}) & \cos(K\omega t_{n}) \end{pmatrix} \in \mathbb{R}^{n\times(2K+1)}.$$

We rewrite the ML estimator as

$$\widehat{\omega} = \underset{\omega\in\Omega}{\operatorname{argmin}}\ \min_{\beta}\,(Y - X(\omega)\beta)^{T}\Sigma^{-1}(Y - X(\omega)\beta).$$

Every frequency $\omega$ in the grid of frequencies $\Omega$ determines a design matrix $X(\omega)$. At a particular $\omega$, the $\beta$ which minimizes the objective function is the weighted least squares estimator

$$\widehat{\beta}(\omega) = (X(\omega)^{T}\Sigma^{-1}X(\omega))^{-1}X(\omega)^{T}\Sigma^{-1}Y. \qquad (2.3)$$

The frequency estimate may then be written as

$$\widehat{\omega} = \underset{\omega\in\Omega}{\operatorname{argmin}}\ (Y - X(\omega)\widehat{\beta}(\omega))^{T}\Sigma^{-1}(Y - X(\omega)\widehat{\beta}(\omega)). \qquad (2.4)$$

Thus estimating frequency involves performing a weighted least squares regression (Equation (2.3)) at every frequency in the grid $\Omega$. The motivation for the procedure is maximum likelihood. As discussed earlier, in cases where the model is misspecified, there is no theoretical support for using $\Sigma^{-1}$ as the weight matrix in either Equation (2.3) or (2.4).
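To make the procedure concrete, here is a minimal Python sketch of the grid search in Equations (2.3) and (2.4) (our own illustration; the simulated light curve, frequency grid, and function names are invented for the example):

```python
import numpy as np

def design(t, omega, K):
    """The X(omega) of Section 2.1: columns 1, sin(k*omega*t), cos(k*omega*t)."""
    cols = [np.ones_like(t)]
    for k in range(1, K + 1):
        cols += [np.sin(k * omega * t), np.cos(k * omega * t)]
    return np.column_stack(cols)

def estimate_frequency(t, y, sigma, grid, K=1):
    """Weighted least squares at each omega (Equation (2.3)), then pick the
    omega minimizing the weighted residual sum of squares (Equation (2.4))."""
    w = 1.0 / sigma ** 2                       # standard weights Sigma^{-1}
    best_omega, best_rss = None, np.inf
    for omega in grid:
        X = design(t, omega, K)
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)   # (X^T W X)^{-1} X^T W y
        r = y - X @ beta
        rss = np.sum(w * r ** 2)
        if rss < best_rss:
            best_omega, best_rss = omega, rss
    return best_omega

# Simulated sinusoidal light curve with heteroskedastic noise (illustrative only).
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, size=60))
sigma = rng.choice([0.05, 0.2], size=t.size)
y = 1.0 + 0.8 * np.sin(2.0 * t + 0.3) + sigma * rng.normal(size=t.size)
print(estimate_frequency(t, y, sigma, np.linspace(0.5, 5.0, 2000)))  # near 2.0
```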

3 Asymptotic Theory

3.1 Problem Setup and Related Literature

Let $X \in \mathbb{R}^{n\times p}$ be the matrix with row $i$ equal to $x_{i}^{T}$. Let $Y = (y_{1},\ldots,y_{n})^{T}$. Let $\Sigma$ be the diagonal matrix of observation variances such that $\Sigma_{ii} = \sigma_{i}^{2}$. Let $\widehat{W}$ be a diagonal positive definite matrix. The weighted least squares estimator is

$$\widehat{\beta}(\widehat{W}) = (X^{T}\widehat{W}X)^{-1}X^{T}\widehat{W}Y.$$

In this work we seek $\widehat{W}$ which minimizes error in estimating $\beta = \mathbb{E}[xx^{T}]^{-1}\mathbb{E}[xf(x)]$.

There is a long history of studying estimators for misspecified models, often in the context of sandwich estimators for asymptotic variances. In [10], it was shown that when the true data generating distribution $\theta_{t}$ is not in the model, the MLE converges to the distribution $\theta_{0}$ in the model $\Theta$ which minimizes Kullback–Leibler divergence, i.e.

$$\theta_{0} = \underset{\theta\in\Theta}{\operatorname{argmin}}\ \mathbb{E}_{\theta_{t}}\left[\log\frac{f_{\theta_{t}}(X)}{f_{\theta}(X)}\right].$$

The asymptotic variance has a “sandwich” form which is not the inverse of the information matrix. [30] and [31] studied this behavior in the context of the linear regression model and the OLS estimator, proposing consistent estimators of the asymptotic variance. See [18] and [15] for sandwich estimators with improved finite sample performance and [27] for recent work on sandwich estimators in a Bayesian context. [2] provides a summary of sandwich estimators and proposes a bootstrap estimator for the asymptotic variance. By specializing our asymptotic theory from the weighted to the unweighted case, we rederive some of these results. However our focus is different in that we find weightings for least squares which minimize asymptotic variance, rather than estimating the asymptotic variance of unweighted procedures.

Other work has focused on correcting model misspecification, often by modeling deviations from a parametric regression function with some non–parametric model. [1] studied model misspecification when response variances are known up to a constant due to repeated measurements, i.e. $\text{Var}(y_{i}) = \sigma^{2}/m_{i}$ where $m_{i}$ is known. A Gaussian prior was placed on $\beta$ and the non–linear component $g$ was modeled as being drawn from a Gaussian process. See [13] for an example with homoskedastic errors in the context of computer simulations. See [6] for an example in astronomy with known heteroskedastic errors. Our focus here is different in that instead of correcting model misspecification we consider how weighting observations affects estimation of the linear component of $f$.

Heteroskedasticity in the partial linear model

$$y_{i} = x_{i}^{T}\beta + h(z_{i}) + \epsilon_{i}$$

is studied in [17] and [16]. Here $\text{Var}(\epsilon_{i}) = \xi(x_{i}, z_{i})$ for some function $\xi$, and $h$ is some unknown function. The response $y$ depends on the $x$ covariates linearly and the $z$ covariates nonlinearly. When $h$ is estimated poorly, weighting by the inverse of the observation variances causes parameter estimates of $\beta$ to be inconsistent. In contrast, ignoring observation variances leads to consistent estimates of $\beta$. Qualitatively these conclusions are similar to our own in that they caution against using weights in the standard way.

3.2 Asymptotic Results

Our asymptotic theory makes assumptions on the form of the weight matrix.

Assumptions 1 (Weight Matrix).

Suppose $\widehat{W} \in \mathbb{R}^{n\times n}$ is a positive definite diagonal matrix with elements

$$\widehat{W}_{ii} = w(\sigma_{i}) + n^{-1/2}\delta_{nm_{i}}h(\sigma_{i}) + n^{-1}d(\sigma_{i},\delta_{n})$$

where $w(\sigma) > 0$, $\mathbb{E}[w(\sigma)^{4}] < \infty$, $h$ is a bounded function, $m_{i} \in \{1,\ldots,M\}$ is a discrete random variable independent of $x_{i}$ and $\epsilon_{i}$, $\delta_{nm_{i}}$ is $O_{P}(1)$ for all $n \in \mathbb{Z}^{+}$ and $m_{i}$, and $d(\sigma,\delta_{n})$ is bounded above, uniformly in $\sigma$, by an $O_{P}(1)$ random variable (i.e. $\sup_{\sigma}|d(\sigma,\delta_{n})| < \delta_{n}^{\prime}$ where $\delta_{n}^{\prime}$ is $O_{P}(1)$).

These assumptions include both the ordinary least squares (OLS) estimator, where $\widehat{W}_{ii} = w(\sigma_{i}) = 1$, and the standard weighted least squares estimator, where $\widehat{W}_{ii} = w(\sigma_{i}) = \sigma_{i}^{-2}$ (assuming $\mathbb{E}[\sigma^{-8}] < \infty$). In both these cases $\delta_{nm_{i}} = 0$ and $d = 0$ for all $n, m$. The additional terms are used in Sections 3.5 and 3.6 to construct adaptive estimators for the known and unknown variance cases.

Assumptions 2 (Moment Conditions).

Suppose $x$ and $\sigma$ are independent, the design $\mathbb{E}[xx^{T}]$ is full rank, and $\mathbb{E}[x_{j}^{4}x_{k}^{4}] < \infty$ for all $1 \leq j,k \leq p$. Assume $\mathbb{E}[g(x)^{4}] < \infty$, $\mathbb{E}[\sigma^{4}] < \infty$, $\mathbb{E}[\epsilon^{4}] < \infty$, and the variances are bounded below by a positive constant $\sigma_{min}^{2} \equiv \inf\{\sigma^{2} : F_{\sigma}(\sigma) > 0\} > 0$.

The major assumption here is independence between $x$ and $\sigma$. We address dependence in Section 3.7.

Theorem 3.1.

Under Assumptions 1 and 2

$$\sqrt{n}(\widehat{\beta}(\widehat{W}) - \beta) \overset{d}{\to} N(0, \nu(w))$$

where

$$\nu(w) = \frac{\mathbb{E}[w^{2}]\,\mathbb{E}[xx^{T}]^{-1}\mathbb{E}[g^{2}(x)xx^{T}]\mathbb{E}[xx^{T}]^{-1} + \mathbb{E}[\sigma^{2}w^{2}]\,\mathbb{E}[xx^{T}]^{-1}}{\mathbb{E}[w]^{2}}. \qquad (3.1)$$

See Section A.1 for a proof. If the response is linear ($g \equiv 0$) then the variance is

$$\nu(w) = \frac{\mathbb{E}[\sigma^{2}w^{2}]}{\mathbb{E}[w]^{2}}\mathbb{E}[xx^{T}]^{-1}.$$

Setting $w(\sigma) = \sigma^{-2}$ we have $\frac{\mathbb{E}[\sigma^{2}w^{2}]}{\mathbb{E}[w]^{2}} = (\mathbb{E}[\sigma^{-2}])^{-1}$. This is the standard weighted least squares estimator. This $w$ can be shown to minimize the variance using the Cauchy–Schwarz inequality. With $w(\sigma) = 1$, the asymptotic variance can be rewritten

$$\mathbb{E}[xx^{T}]^{-1}\mathbb{E}[(g^{2}(x) + \sigma^{2})xx^{T}]\mathbb{E}[xx^{T}]^{-1}. \qquad (3.2)$$

This is the sandwich form of the covariance for OLS derived in [30] and [31] (see [2], specifically Equations 1–3), valid even when $\sigma$ and $x$ are not independent.
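The following sketch evaluates Equation (3.1) by Monte Carlo (our own illustration; the design and error distribution match the simulation of Section 4.1), making it easy to compare weightings numerically:

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo evaluation of nu(w) from Equation (3.1) for f(x) = x^2,
# x ~ Unif(0, 1) with an intercept, and a three-point sigma distribution.
N = 500_000
x = rng.uniform(0, 1, size=N)
X = np.column_stack([np.ones(N), x])
g = x ** 2 - X @ np.array([-1 / 6, 1.0])           # non-linear component g(x)
sigma = rng.choice([0.01, 0.1, 1.0], p=[0.05, 0.9, 0.05], size=N)

B = np.linalg.inv(X.T @ X / N)                      # B = E[xx^T]^{-1}
A = B @ ((X * (g ** 2)[:, None]).T @ X / N) @ B     # A = B E[g^2(x) xx^T] B

def nu(w):
    """Asymptotic variance of Equation (3.1) for weights w = w(sigma)."""
    return (np.mean(w ** 2) * A + np.mean(sigma ** 2 * w ** 2) * B) / np.mean(w) ** 2

print(np.trace(nu(np.ones(N))))       # OLS:          trace of nu(I)
print(np.trace(nu(sigma ** -2.0)))    # standard WLS: trace of nu(Sigma^{-1})
```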

3.3 OLS and Standard WLS

For notational simplicity define

$$B = \mathbb{E}[xx^{T}]^{-1}, \qquad A = B^{T}\mathbb{E}[g^{2}(x)xx^{T}]B.$$

The asymptotic variances for OLS ($\widehat{\beta}(I)$) and standard WLS ($\widehat{\beta}(\Sigma^{-1})$) are

$$\nu(I) = A + \mathbb{E}[\sigma^{2}]B, \qquad \nu(\Sigma^{-1}) = \frac{\mathbb{E}[\sigma^{-4}]}{\mathbb{E}[\sigma^{-2}]^{2}}A + \frac{1}{\mathbb{E}[\sigma^{-2}]}B.$$

Each of these asymptotic variances is composed of the same two terms. The $A$ term is caused by model misspecification while the $B$ term is the standard asymptotic variance in the case of no model misspecification. The coefficient on $A$ is larger for $W = \Sigma^{-1}$ because $\mathbb{E}[\sigma^{-4}]/\mathbb{E}[\sigma^{-2}]^{2} \geq 1$ by Jensen's inequality. The coefficient on $B$ is larger for $W = I$ because $\mathbb{E}[\sigma^{2}] \geq 1/\mathbb{E}[\sigma^{-2}]$. The relative merits of OLS and standard WLS depend on the size of the coefficients and the precise values of $A$ and $B$. However, qualitatively, OLS and standard WLS suffer from high asymptotic variance in opposite situations which depend on the distribution of the errors. To make matters concrete, consider error distributions of the form

$$P(\sigma = c^{-1}) = \delta_{1}, \qquad P(\sigma = 1) = 1 - \delta_{1} - \delta_{2}, \qquad P(\sigma = c) = \delta_{2}$$

where $\delta_{1}, \delta_{2}$ are small non–negative numbers and $c > 1$ is large. Note that $A$ and $B$ do not depend on $F_{\sigma}$.

  • $\delta_{1} = 0, \delta_{2} > 0$: In this situation the error standard deviation is usually $1$ and occasionally some large value $c$. The result is large asymptotic variance for OLS. Since $\mathbb{E}[\sigma^{2}] > c^{2}\delta_{2}$,

    $$\nu(I) \succeq A + c^{2}\delta_{2}B.$$

    For large $c$ this will be large. In contrast, the coefficients on $A$ and $B$ for standard WLS can be bounded. For the coefficient on $B$ we have $\mathbb{E}[\sigma^{-2}]^{-1} \leq (1-\delta_{2})^{-1}$. The coefficient on $A$ with $c > 1$ is

    $$\frac{\mathbb{E}[\sigma^{-4}]}{\mathbb{E}[\sigma^{-2}]^{2}} = \frac{\delta_{2}c^{-4} + (1-\delta_{2})}{\delta_{2}^{2}c^{-4} + 2\delta_{2}c^{-2}(1-\delta_{2}) + (1-\delta_{2})^{2}} < \frac{1}{1-\delta_{2}}.$$

    Therefore

    $$\nu(\Sigma^{-1}) \preceq (1-\delta_{2})^{-1}(A+B).$$

    In summary, standard WLS performs better than OLS when there are a small number of observations with large variance.

  • $\delta_{1} > 0, \delta_{2} = 0$: In this situation the error standard deviation is usually $1$ and occasionally some small value $c^{-1}$. For standard WLS with $c$ large and $\delta_{1}$ small, the coefficient for $A$ is

    $$\frac{\mathbb{E}[\sigma^{-4}]}{\mathbb{E}[\sigma^{-2}]^{2}} = \frac{\delta_{1}c^{4} + (1-\delta_{1})}{\delta_{1}^{2}c^{4} + 2\delta_{1}c^{2}(1-\delta_{1}) + (1-\delta_{1})^{2}} \approx \frac{1}{\delta_{1}}.$$

    Thus the asymptotic variance induced by model misspecification will be large for standard WLS. In contrast, we can bound the asymptotic variance above for OLS, independently of $c$ and $\delta_{1}$. Since $c > 1$, $\mathbb{E}[\sigma^{2}] < 1$ and

    $$\nu(I) \preceq A + B.$$

The case where both $\delta_{1}$ and $\delta_{2}$ are non–zero presents problems for both OLS and standard WLS. For example, if $\delta = \delta_{1} = \delta_{2}$, both OLS and standard WLS can be made to have large asymptotic variance by setting $\delta$ small and $c$ large. In the following section we construct an adaptive weighting which improves upon both OLS and standard WLS.
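A quick numeric check of these coefficients (our own illustration; the values $\delta_{1} = \delta_{2} = 0.01$ and $c = 10$ are arbitrary choices):

```python
import numpy as np

# Coefficients on A and B in nu(I) and nu(Sigma^{-1}) for the three-point
# error distribution above.
d1, d2, c = 0.01, 0.01, 10.0
s = np.array([1 / c, 1.0, c])            # support of sigma
p = np.array([d1, 1 - d1 - d2, d2])      # probabilities

E = lambda h: float(np.sum(p * h(s)))
print("OLS:  coef(A) = 1.0, coef(B) =", E(lambda v: v ** 2))           # ~1.98
print("WLS:  coef(A) =", E(lambda v: v ** -4) / E(lambda v: v ** -2) ** 2,
      "coef(B) =", 1 / E(lambda v: v ** -2))                           # ~25.8, ~0.51
```

Neither weighting keeps both coefficients small here, which is what motivates the adaptive weighting of the next section.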

3.4 Improving on OLS and Standard WLS

Let $\Gamma$ be a linear function from the set of $p\times p$ matrices to $\mathbb{R}$ such that $\Gamma(C) > 0$ whenever $C$ is positive definite. We seek some weighting $w = w(\sigma)$ for which $\Gamma(\nu(w))$ (recall that $\nu$ is the asymptotic variance) is lower than for OLS and standard WLS. Natural choices for $\Gamma$ include the trace (minimize the sum of variances of the parameter estimates) and $\Gamma(C) = C_{jj}$ (minimize the variance of one of the parameter estimates).

Theorem 3.2.

Under Assumptions 1 and 2, every function in the set

$$\underset{w(\sigma)}{\operatorname{argmin}}\ \Gamma(\nu(w))$$

is proportional to

$$w_{min}(\sigma) = (\sigma^{2} + \Gamma(A)\Gamma(B)^{-1})^{-1} \qquad (3.3)$$

with probability $1$.

Section A.2 contains a proof. The proportionality is due to the fact that the estimator is invariant to multiplicative scaling of the weights.

Corollary 3.1.

Under Assumptions 2,

$$\Gamma(\nu(w_{min})) \leq \min(\Gamma(\nu(I)), \Gamma(\nu(\Sigma^{-1})))$$

with strict inequality if $\mathbb{E}[g^{2}(x)xx^{T}]$ is positive definite and the distribution of $\sigma$ is not a point mass.

A proof is contained in Section A.3. Thus if we can construct a weight matrix $\widehat{W}$ which satisfies Assumptions 1 with $w(\sigma) = w_{min}(\sigma)$, then by the preceding theorem the associated weighted estimator will have lower asymptotic variance than either OLS or standard WLS. We now construct such a weighting in the cases of known and unknown error variances.

3.5 Known Error Variances

With the $\sigma_{i}$ known, we only need to estimate $A$ and $B$ in $w_{min}$ in Equation (3.3). Let $\Delta = \Gamma(A)\Gamma(B)^{-1}$. Let

$$\widehat{B} = \left(\frac{1}{n}X^{T}X\right)^{-1}.$$

Let $\widehat{\beta}(\widehat{W})$ be a root $n$ consistent estimator of $\beta$ (e.g. $\widehat{W} = I$ is root $n$ consistent by Theorem 3.1) and let

$$\widehat{g}(x_{i})^{2} = (y_{i} - x_{i}^{T}\widehat{\beta}(\widehat{W}))^{2} - \sigma_{i}^{2}.$$

Let

$$\widehat{A} = \widehat{B}^{T}\left(\sum\sigma_{i}^{-4}\right)^{-1}\left(\sum x_{i}x_{i}^{T}\widehat{g}(x_{i})^{2}\sigma_{i}^{-4}\right)\widehat{B}.$$

Then we have

$$\widehat{\Delta} = \max(\Gamma(\widehat{A})\Gamma(\widehat{B})^{-1}, 0). \qquad (3.4)$$

The estimated optimal weighting matrix is the diagonal matrix $\widehat{W}_{min}$ with diagonal elements

$$\widehat{W}_{min,ii} = \frac{1}{\sigma_{i}^{2} + \widehat{\Delta}}. \qquad (3.5)$$

A few notes on this estimator:

  • The term $x_{i}x_{i}^{T}\widehat{g}(x_{i})^{2}$ is an estimate of $x_{i}x_{i}^{T}g(x_{i})^{2}$. These estimates are weighted by $\sigma_{i}^{-4}$. The term $(\sum\sigma_{i}^{-4})^{-1}$ normalizes the weights. This weighting is motivated by the fact that

    $$\begin{aligned} x_{i}x_{i}^{T}\widehat{g}(x_{i})^{2} &= x_{i}x_{i}^{T}((y_{i} - x_{i}^{T}\widehat{\beta}(\widehat{W}))^{2} - \sigma_{i}^{2}) \\ &= x_{i}x_{i}^{T}((y_{i} - x_{i}^{T}\beta)^{2} - \sigma_{i}^{2}) + O(n^{-1/2}) \\ &= x_{i}x_{i}^{T}((g(x_{i}) + \sigma_{i}\epsilon_{i})^{2} - \sigma_{i}^{2}) + O(n^{-1/2}). \end{aligned}$$

    Analysis of the first order term shows

    $$\mathbb{E}[x_{i}x_{i}^{T}((g(x_{i}) + \sigma_{i}\epsilon_{i})^{2} - \sigma_{i}^{2})\,|\,x_{i},\sigma_{i}] = x_{i}x_{i}^{T}g^{2}(x_{i})$$

    and

    $$\text{Var}(x_{i}x_{i}^{T}((g(x_{i}) + \sigma_{i}\epsilon_{i})^{2} - \sigma_{i}^{2})\,|\,x_{i},\sigma_{i})_{jk} = x_{ij}^{2}x_{ik}^{2}(\sigma_{i}^{4}(\mathbb{E}[\epsilon^{4}] - 1) + 4g(x_{i})^{2}\sigma_{i}^{2} + 4g(x_{i})\sigma_{i}^{3}\mathbb{E}[\epsilon^{3}]).$$

    Thus by weighting the estimates by $\sigma_{i}^{-4}$, we can somewhat account for the different variances. Unfortunately, since the variance depends on $g$, $\mathbb{E}[\epsilon^{3}]$, and $\mathbb{E}[\epsilon^{4}]$, which are unknown, it is not possible to weight by exactly the inverse of the variances. Other weightings are possible and in general adaptivity will hold.

  • Since $A$ and $B$ are positive semi–definite, $\Gamma(A)\Gamma(B)^{-1} \geq 0$. Thus for estimating $\Delta$, we use the maximum of a plug–in estimator and $0$ (Equation (3.4)).

Theorem 3.3.

Under Assumptions 2, $\widehat{W}_{min}$ from Equation (3.5) satisfies Assumptions 1 with $w(\sigma) = w_{min}(\sigma)$.

See Section A.4 for a proof. Theorem 3.3 shows it is possible to construct better estimators than both OLS and standard WLS. In practice, it may be best to iteratively update estimates of $\widehat{W}_{min}$, starting with a known root $n$ consistent estimator such as $\widehat{W} = I$. We take this approach in our numerical simulations in Section 4.1.
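A minimal numpy sketch of this iterative scheme (the function names are ours, and `Gamma` defaults to the trace as in the simulation of Section 4.1):

```python
import numpy as np

def wls(X, y, w):
    """beta_hat(W) = (X^T W X)^{-1} X^T W y for diagonal weights w."""
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

def adaptive_weights_known(X, y, sigma, n_iter=2, Gamma=np.trace):
    """Sketch of Equations (3.4)-(3.5): start from OLS (root n consistent),
    plug in A-hat and B-hat, and iterate the weights."""
    n = X.shape[0]
    w = np.ones(n)                                 # initial weights: OLS
    B = np.linalg.inv(X.T @ X / n)                 # B-hat
    for _ in range(n_iter):
        beta = wls(X, y, w)
        g2 = (y - X @ beta) ** 2 - sigma ** 2      # g-hat(x_i)^2
        wt = sigma ** -4.0                         # sigma_i^{-4} weights for A-hat
        A = B.T @ ((X * (wt * g2)[:, None]).T @ X / wt.sum()) @ B
        Delta = max(Gamma(A) / Gamma(B), 0.0)      # Delta-hat, Equation (3.4)
        w = 1.0 / (sigma ** 2 + Delta)             # W-hat_min, Equation (3.5)
    return w, wls(X, y, w)
```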

For the purposes of making confidence regions we need estimators of the asymptotic variance. Above we developed consistent estimators for $A$ and $B$. We take a plug–in approach to estimating the asymptotic variance for a particular weighting $W$. Specifically,

$$\widehat{\nu}_{1}(\widehat{W}) = \frac{n(1^{T}\widehat{W}^{2}1)\widehat{A} + n(1^{T}\widehat{W}\Sigma\widehat{W}1)\widehat{B}}{(1^{T}\widehat{W}1)^{2}}. \qquad (3.6)$$

We also define the oracle $\widehat{\nu}_{OR}(\widehat{W})$, which is the same as $\widehat{\nu}_{1}$ but uses $A$ and $B$ rather than $\widehat{A}$ and $\widehat{B}$. While $\widehat{\nu}_{OR}$ cannot be used in practice, it is useful for evaluating the performance of $\widehat{\nu}_{1}$ in simulations.

Finally, suppose the error variance is known up to a constant, i.e. $\sigma_{i}^{2} = k\tau_{i}^{2}$ where $\tau_{i}^{2}$ is known but $k$ and $\sigma_{i}^{2}$ are unknown. In the case without model misspecification, one can simply use weights $\tau_{i}^{-2}$ since the weighted estimator is invariant to rescaling of the weights. The situation is more complicated when model misspecification is present. Simulations and informal mathematical derivations (not included in this work) suggest that replacing the $\sigma_{i}$ with $\tau_{i}$ in Equation (3.5) results in weights that are suboptimal. In particular, when $k > 1$ (underestimated errors), the resulting weights are closer to OLS than optimal, while if $k < 1$ (overestimated errors), the resulting weights are closer to standard WLS than optimal.

3.6 Unknown Error Variances

Suppose for observation $i$ we observe $m_{i} \in \{1,\ldots,M\}$, the group membership of observation $i$. Observations in group $m$ have the same (unknown) variance $\sigma_{m}^{2} > 0$. See [8], [5], and [9] for work on grouped error models in the case where the response is linear.

The $m_{i}$ are assumed independent of $x_{i}$ and $\epsilon_{i}$, with probability mass function $f_{m}$ (supported on $\{1,\ldots,M\}$). While the $\sigma_{m}$ for $m = 1,\ldots,M$ are fixed unknown parameters, the probability mass function $f_{m}$ induces the probability distribution function $F_{\sigma}$ on $\sigma$. So we can define

$$\mathbb{E}[h(\sigma)] = \sum_{m=1}^{M} h(\sigma_{m})f_{m}(m)$$

for any function $h$.

Theorem 3.1 shows that even if the $\sigma_{m}$ were known, standard weighted least squares is not generally optimal for estimating $\beta$ in this model. It is not possible to estimate $w_{min}$ as proposed in Section 3.5 because that method requires knowledge of the $\sigma_{m}$. However, we can re–express the optimal weight function as

$$w_{min}(m) = \frac{1}{\sigma_{m}^{2} + \frac{\Gamma(B^{T}\mathbb{E}[g^{2}(x)xx^{T}]B)}{\Gamma(B)}} = \frac{\Gamma(B)}{\Gamma(B^{T}\mathbb{E}[(g^{2}(x) + \sigma_{m}^{2})xx^{T}]B)} = \frac{\Gamma(B)}{\Gamma(B^{T}C_{m}B)}$$

where the last equality defines $C_{m}$. Note that $\sigma_{m}$ is a fixed unknown parameter, not a random variable. One can estimate $B$ with $\widehat{B} = (n^{-1}X^{T}X)^{-1}$ and $C_{m}$ with

$$\widehat{C}_{m} = \frac{1}{\sum_{i=1}^{n}\mathbbm{1}_{m_{i}=m}}\sum_{i=1}^{n}(y_{i} - x_{i}^{T}\widehat{\beta}(\widehat{W}))^{2}x_{i}x_{i}^{T}\mathbbm{1}_{m_{i}=m}$$

where $\widehat{\beta}(\widehat{W})$ is a root $n$ consistent estimator of $\beta$ (for example, $\widehat{W} = I$ suffices by Theorem 3.1). The estimated weight matrix $\widehat{W}_{min}$ is diagonal with

$$\widehat{W}_{min,ii} = \frac{\Gamma(\widehat{B})}{\Gamma(\widehat{B}^{T}\widehat{C}_{m_{i}}\widehat{B})}. \qquad (3.7)$$
Theorem 3.4.

Under Assumptions 2, $\widehat{W}_{min}$ from Equation (3.7) satisfies Assumptions 1 with $w(\sigma) = w_{min}(m)$.

See Section A.5 for a proof. Thus in the case of unknown error variances it is possible to construct an estimator which outperforms standard WLS and OLS. As in the case with known errors, one can iteratively update $\widehat{W}_{min}$, starting with some (possibly inefficient) root $n$ consistent estimate of $\beta$.
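A corresponding sketch for the grouped case (our own illustration of Equation (3.7); only the group labels $m_{i}$ are used, never the $\sigma_{i}$ themselves):

```python
import numpy as np

def adaptive_weights_grouped(X, y, groups, n_iter=2, Gamma=np.trace):
    """Sketch of Equation (3.7): weights built from the estimated group-level
    matrices C-hat_m, starting from the root n consistent OLS fit."""
    n = X.shape[0]
    w = np.ones(n)                                     # initial weights: OLS
    B = np.linalg.inv(X.T @ X / n)                     # B-hat
    for _ in range(n_iter):
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)     # beta-hat(W-hat)
        r2 = (y - X @ beta) ** 2
        for m in np.unique(groups):
            idx = groups == m
            Cm = (X[idx] * r2[idx, None]).T @ X[idx] / idx.sum()   # C-hat_m
            w[idx] = Gamma(B) / Gamma(B.T @ Cm @ B)                # Equation (3.7)
    return w
```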

For estimating the asymptotic variance we cannot use Equation (3.6) because that method requires an estimate of $A$, a quantity for which we do not have an estimate in the unknown error variance setting. Instead note that the asymptotic variance of Equation (3.1) may be rewritten

$$\nu(w) = \frac{B\,\mathbb{E}[(g^{2}(x) + \sigma^{2})w^{2}xx^{T}]\,B}{\mathbb{E}[w]^{2}} = \frac{B\,\mathbb{E}[(y - x^{T}\beta)^{2}w^{2}xx^{T}]\,B}{\mathbb{E}[w]^{2}}.$$

Thus a natural estimator for the asymptotic variance is

$$\widehat{\nu}_{2}(\widehat{W}) = \frac{n\widehat{B}\left(\sum_{i=1}^{n}(y_{i} - x_{i}^{T}\widehat{\beta}(\widehat{W}))^{2}\widehat{W}_{ii}^{2}x_{i}x_{i}^{T}\right)\widehat{B}}{(1^{T}\widehat{W}1)^{2}}. \qquad (3.8)$$
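In code, Equation (3.8) is a short sandwich computation (a sketch; `nu2` is our name for it):

```python
import numpy as np

def nu2(X, y, beta, w):
    """Plug-in asymptotic variance estimator of Equation (3.8)."""
    n = X.shape[0]
    B = np.linalg.inv(X.T @ X / n)                     # B-hat
    meat = (X * (((y - X @ beta) ** 2) * w ** 2)[:, None]).T @ X
    return n * B @ meat @ B / w.sum() ** 2
```

Confidence regions then follow from the normal limit $\sqrt{n}(\widehat{\beta}(\widehat{W}) - \beta) \approx N(0, \widehat{\nu}_{2}(\widehat{W}))$.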

3.7 Dependent Errors

Suppose one drops the independence assumption between $x$ and $\sigma$. This will be the case whenever the error variance is a function of $x$, a common assumption in the heteroskedasticity literature [3, 4, 12]. We require the weight matrix $W$ to be diagonal positive definite with diagonal elements $W_{ii} = w(\sigma_{i})$, some function of the error variance. The estimator for $\beta$ is

$$\widehat{\beta}(W) = (X^{T}WX)^{-1}X^{T}WY.$$

Recalling that we write $w$ for $w(\sigma)$, we have the following result.

Theorem 3.5.

Assuming $\mathbb{E}[xx^{T}w]$, $\mathbb{E}[wxf(x)]$, and $\mathbb{E}[xw\sigma]$ exist and $\mathbb{E}[xx^{T}]$ is positive definite,

$$\widehat{\beta}(W) \rightarrow_{a.s.} \mathbb{E}[xx^{T}w]^{-1}\mathbb{E}[wxf(x)]. \qquad (3.9)$$

See Section A.6 for a proof. If $x$ and $\sigma$ are independent then the r.h.s. is $\mathbb{E}[xx^{T}]^{-1}\mathbb{E}[xf(x)]$ and the estimator is consistent (as demonstrated by Theorem 3.1). Interestingly, the estimator is also consistent if one lets $w(\sigma) = 1$ (OLS), regardless of the dependence structure between $x$ and $\sigma$. However, weighted estimators will not generally be consistent (including standard WLS). This observation suggests the OLS estimator may be preferred in the case of dependent errors. We show an example of this situation in the simulations of Section 4.1.

4 Numerical Experiments

4.1 Simulation

Figure 2: Parameter estimates using (a) standard WLS ($\widehat{W} = \Sigma^{-1}$), (b) OLS ($\widehat{W} = I$), (c) estimated weights assuming the $\sigma_{i}$ are known ($\widehat{W} = (\Sigma + \widehat{\Delta})^{-1}$), and (d) estimated weights using only the group membership of the variances ($\widehat{W}_{min,ii} = \Gamma(\widehat{B})/\Gamma(\widehat{B}^{T}\widehat{C}_{m_{i}}\widehat{B})$). The red ellipses are the asymptotic variances of the various methods.
                        WLS     OLS     $(\Sigma+\widehat{\Delta})^{-1}$     $\Gamma(\widehat{B})/\Gamma(\widehat{B}^{T}\widehat{C}_{m_{i}}\widehat{B})$
$\widehat{\nu}_{1}$     0.536   0.945   0.807                                —
$\widehat{\nu}_{2}$     0.393   0.96    0.843                                0.759
$\widehat{\nu}_{OR}$    0.925   0.945   0.956                                —

Table 1: Fraction of times $\beta$ is in the 95% confidence region.

We conduct a small simulation study to demonstrate some of the ideas presented in the last section. (Code to reproduce the work in this section can be accessed at http://stat.tamu.edu/~jlong/hetero.zip or by contacting the author.) Consider modeling the function $f(x) = x^{2}$ using linear regression with an intercept term. Let $x \sim \text{Unif}(0,1)$. The best linear approximation to $f$ is $\beta_{1} + \beta_{2}x$ where $\beta_{1} = -1/6$ and $\beta_{2} = 1$. We first suppose $\sigma$ is drawn independently of $x$ from a discrete probability distribution such that $P(\sigma = 0.01) = P(\sigma = 1) = 0.05$ and $P(\sigma = 0.1) = 0.9$. Since $\sigma$ has support on a finite set of values, we can consider the cases where $\sigma_{i}$ is known (Section 3.5) and where only the group $m_{i}$ of observation $i$ is known (Section 3.6). We let $\Gamma$ be the trace of the matrix.

We generate samples of size $n = 100$, $N = 1000$ times, and make scatterplots of the parameter estimates using weights $W = \Sigma^{-1}$ (standard WLS), $W = I$ (OLS), $\widehat{W}_{min} = (\Sigma + \widehat{\Delta})^{-1}$, and $\widehat{W}_{min,ii} = \Gamma(\widehat{B})/\Gamma(\widehat{B}^{T}\widehat{C}_{m_{i}}\widehat{B})$. The OLS estimator does not require any knowledge about the $\sigma_{i}$. The fourth estimator uses only the group $m_{i}$ of observation $i$. For the two adaptive estimators, we use $\widehat{\beta}(I)$ as an initial root $n$ consistent estimator of $\beta$ and iterate twice to obtain the weights.
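The simulation can be sketched as follows (our reconstruction, not the author's code; it assumes the `wls` and `adaptive_weights_known` helpers from the Section 3.5 sketch are in scope):

```python
import numpy as np

rng = np.random.default_rng(4)

n, N = 100, 1000
beta_true = np.array([-1 / 6, 1.0])       # best linear approximation to x^2
sq_err = {"WLS": [], "OLS": [], "adaptive": []}
for _ in range(N):
    x = rng.uniform(0, 1, size=n)
    X = np.column_stack([np.ones(n), x])
    sigma = rng.choice([0.01, 0.1, 1.0], p=[0.05, 0.9, 0.05], size=n)
    y = x ** 2 + sigma * rng.normal(size=n)
    sq_err["WLS"].append(np.sum((wls(X, y, sigma ** -2.0) - beta_true) ** 2))
    sq_err["OLS"].append(np.sum((wls(X, y, np.ones(n)) - beta_true) ** 2))
    sq_err["adaptive"].append(
        np.sum((adaptive_weights_known(X, y, sigma)[1] - beta_true) ** 2))
for k, v in sq_err.items():
    print(k, np.mean(v))                  # mean squared estimation error
```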

Results are shown in Figure 2. The red ellipses are the asymptotic variances. The results show that OLS outperforms standard WLS. Estimating the optimal weighting with or without knowledge of the variances outperforms both OLS and standard WLS. Exact knowledge of the weights (c) somewhat outperforms only knowing the group membership of the variances (d).

We construct 95% confidence regions using estimates of the asymptotic variance and determine the fraction of times (out of the $N$ simulations) that the true parameters are in the confidence regions. Recall that in Section 3.5 we proposed $\widehat{\nu}_{1}$ (Equation (3.6)) as well as the oracle $\widehat{\nu}_{OR}$ for estimating the asymptotic variance when the error variances are known. In Section 3.6 we proposed $\widehat{\nu}_{2}$ (Equation (3.8)) when the error variances are unknown. The estimator $\widehat{\nu}_{2}$ can also be used when the error variances are known. We use all three of these methods for constructing confidence regions for standard WLS, OLS, and $\widehat{W} = (\Sigma + \widehat{\Delta})^{-1}$. For $\widehat{W}_{ii} = \Gamma(\widehat{B})/\Gamma(\widehat{B}^{T}\widehat{C}_{m_{i}}\widehat{B})$ we use only $\widehat{\nu}_{2}$ because $\widehat{\nu}_{1}$ requires knowledge of $\Sigma$. Table 1 contains the results. While for OLS the nominal coverage probability is approximately attained, the other methods are anti–conservative for $\widehat{\nu}_{1}$ and $\widehat{\nu}_{2}$. Estimates for WLS are especially poor. The performance of the oracle is rather good, suggesting that the problem lies in estimating $A$ and $B$.

Figure 3: Parameter estimates using (a) standard WLS ($\widehat{W} = \Sigma^{-1}$) and (b) OLS ($\widehat{W} = I$) when there is dependence between $x$ and $\sigma$. We see that WLS is no longer consistent. The red point in each plot is the true parameter values. The orange $\times$ in the left plot is the value to which standard WLS is converging (r.h.s. of Equation (3.9)). The red ellipse is the OLS sandwich asymptotic variance for the dependent case, Equation (3.2).

To illustrate the importance of the $\sigma$, $x$ independence assumption, we now consider the case where $\sigma$ is a function of $x$. Specifically,

$$\sigma = \begin{cases} 0.01 & : x < 0.05 \\ 0.1 & : 0.05 \leq x \leq 0.95 \\ 1 & : x > 0.95. \end{cases}$$

All other parameters in the simulation are the same as before. Note that the marginal distribution of $\sigma$ is the same as in the first simulation. We know from Section 3.7 that weighted estimators may no longer be consistent. In Figure 3 we show a scatterplot of parameter estimates using standard WLS and OLS. We see that the WLS estimator has low variance but is highly biased. The OLS estimator is strongly preferred.
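A short sketch of this dependent case (our own illustration of the inconsistency implied by Equation (3.9)):

```python
import numpy as np

rng = np.random.default_rng(5)

# sigma is the step function of x above, so standard WLS converges to
# E[x x^T w]^{-1} E[w x f(x)] (Equation (3.9)) rather than to beta = (-1/6, 1).
n = 200_000
x = rng.uniform(0, 1, size=n)
sigma = np.select([x < 0.05, x > 0.95], [0.01, 1.0], default=0.1)
X = np.column_stack([np.ones(n), x])
y = x ** 2 + sigma * rng.normal(size=n)

w = sigma ** -2.0
Xw = X * w[:, None]
print(np.linalg.solve(X.T @ Xw, Xw.T @ y))     # WLS: biased away from (-1/6, 1)
print(np.linalg.lstsq(X, y, rcond=None)[0])    # OLS: consistent for (-1/6, 1)
```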

4.2 Analysis of Astronomy Data

[25] identified 483 RR Lyrae periodic variable stars in Stripe 82 of the Sloan Digital Sky Survey III. We obtained 450 of these light curves from a publicly available database [11]. (We use only the g–band data for determining periods.) Figure 1a shows one of these light curves. These light curves are well observed ($n > 50$), so it is fairly easy to estimate periods. For example, [25] used a method based on the Supersmoother algorithm of [7]. However, there is interest in astronomy in developing period estimation algorithms that work well on poorly sampled light curves [29, 19, 14, 24]. Well sampled light curves offer an opportunity to test period estimation algorithms because ground truth is known and they can be artificially downsampled to create realistic simulations of poorly sampled light curves.

As discussed in Section 2, each light curve can be represented as $\{(t_{i}, y_{i}, \sigma_{i})\}_{i=1}^{n}$ where $t_{i}$ is the time of the $y_{i}$ brightness measurement made with uncertainty $\sigma_{i}$. In Figure 4 we plot magnitude error ($\sigma_{i}$) against magnitude ($y_{i}$) for all observations of all 450 light curves. For higher magnitudes (less bright observations), the observation uncertainty is larger. In an attempt to ensure the independence between $\sigma$ and $x$ assumed by our asymptotic theory, we use only the bright stars in which all magnitudes are below 18 (left of the vertical black line in Figure 4). In this region, magnitude and magnitude error are approximately independent. This reduces the sample to 238 stars. We also ran our methods on the larger set of stars; qualitatively, the results which follow are similar.

Figure 4: Magnitude error versus magnitude scatterplot. As magnitude increases (the observation is less bright), the uncertainty rises. We use only stars where all photometric measurements are less than 18 magnitudes. In this region magnitude error and magnitude are approximately independent.

In order to simulate challenging period recovery settings, we downsample each of these light curves to have $n = 10, 20, 30, 40$. We estimate periods using sinusoidal models with $K = 1, 2, 3$ harmonics. For each model we consider three methods for incorporating the error variances. In the first two methods, we weight by the inverse of the observation variances ($\Sigma^{-1}$), as suggested by maximum likelihood for correctly specified models, and by the identity matrix ($I$). Since this is not a linear model, it is not possible to directly use the weighting idea proposed in Section 3.5. We propose a modification for the light curve scenario. We first fit the model using identity weights and determine a best fit period. We then determine the optimal weighting at this period following the procedure of Section 3.5. Recall from Section 2 that at a fixed period, the sinusoidal models are linear. Using the new weights, we then refit the model and estimate the period. A period estimate is considered correct if it is within 1% of the true value.

        $K=1$                                   $K=2$                                   $K=3$
$n$     $\Sigma^{-1}$   $I$     $\Delta$        $\Sigma^{-1}$   $I$     $\Delta$        $\Sigma^{-1}$   $I$     $\Delta$
10      0.09            0.16    0.15            0.13            0.11    0.11            0.03            0.03    0.03
20      0.46            0.58    0.59            0.63            0.68    0.69            0.69            0.77    0.77
30      0.64            0.78    0.79            0.71            0.82    0.83            0.82            0.86    0.85
40      0.75            0.79    0.79            0.80            0.85    0.85            0.87            0.92    0.92

Table 2: Fraction of periods estimated correctly using different weightings for models with $K = 1, 2, 3$ harmonics. Ignoring the observation uncertainties ($I$) in the fitting is superior to using them ($\Sigma^{-1}$). The strategy for determining an optimal weight function ($\Delta$) does not provide much improvement over ignoring the weights. More complex models ($K=3$) perform worse than simple models ($K=1$) when there is limited data ($n=10$), but better when the functions are better sampled ($n=40$). The standard errors on these accuracies are no larger than $\sqrt{0.5(1-0.5)/238} \approx 0.032$.

The fraction of periods estimated correctly is contained in Table 2. In nearly all cases, ignoring observation uncertainties ($I$) outperforms using the inverse of the observation variances as weights ($\Sigma^{-1}$). The improvement is greatest for the $K=1$ model and least for the $K=3$ model, possibly due to the decreasing model misspecification as the number of harmonics increases. The very poor performance of the $K=3$ models with 10 magnitude measurements is due to overfitting. With $K=3$, there are 8 parameters, which is too complex a model for 10 observations. Optimizing the observation weights does not appear to improve performance over not using weights. This is potentially due to the fact that the model is highly misspecified (see Figure 1b).

5 Discussion

5.1 Other Problems in Astronomy

Heteroskedastic measurement error is ubiquitous in astronomy problems. In many cases some degree of model misspecification is present. In this work, we focused on the problem of estimating periods of light curves. Other problems include:

  • [22] observe the brightness of galaxies through several photometric filters. Variances on the brightness measurements are heteroskedastic. The brightness measurements for each galaxy are matched to a set of templates. Assuming a normal measurement error model, maximum likelihood would suggest weighting the difference between observed brightness and template brightness by the inverse of the observation variance. In personal communication, [22] stated that galaxy templates contained some level of misspecification. [22] addressed this issue by inflating observation variances, using weights of $(\sigma^{2} + \Delta)^{-1}$ instead of $\sigma^{-2}$. The choice of $\Delta > 0$ was based on qualitative analysis of model fits. Section 3.3 provides a theoretical justification for this practice.

  • [20] models spectra of galaxies as linear combinations of simple stellar populations (SSP) and non–linear distortions. While parameters which define an SSP are continuous, a discrete set of SSPs are selected as prototypes and the galaxies are modeled as linear combinations of the prototypes. This is done for computational efficiency and to avoid overfitting. However prototype selection introduces some degree of model misspecification as the prototypes may not be able to perfectly reconstruct all galaxy spectra. Galaxy spectra are observed with heteroskedastic measurement error and the inverse of the observation variances are used as weights when fitting the model (see Equation 2.2 in [20]).

5.2 Conclusions

We have shown that WLS estimators can perform poorly when the response is not a linear function of the predictors because observations with small variance have too much influence on the fit. In the misspecified model setting, OLS suffers from the usual problem that observations with large variance induce large asymptotic variance in the parameter estimates. For cases in which some observations have very small variance and other observations have very large variance, procedures which optimize the weights may achieve significant performance improvements as shown in the simulation in Section 4.1.

This work primarily focused on the case where $x$ and $\sigma$ are independent. However, results from Section 3.7 showed that when independence fails, weighted estimators will typically be biased. This additional complication makes OLS more attractive relative to weighted procedures.

For practitioners we recommend caution in using the inverse of the observation variances as weights when model misspecification is present. As a check, practitioners could fit models twice, with and without weights, and compare performance based on some metric. More sophisticated methods, such as specifically tuning weights for optimal performance, may be attempted. Our asymptotic theory provides guidance on how to do this in the case of the linear model.

Appendix A Technical Notes

A.1 Proof of Theorem 3.1

Let $g(X) \in \mathbb{R}^{n}$ be the function $g$ applied to the rows of $X$. We sometimes write $w$ for $w(\sigma)$. We have

$$\begin{aligned} \widehat{\beta}(\widehat{W}) &= (X^{T}\widehat{W}X)^{-1}X^{T}\widehat{W}Y \\ &= (X^{T}\widehat{W}X)^{-1}X^{T}\widehat{W}(X\beta + g(X) + \Sigma^{1/2}\epsilon) \\ &= \beta + \underbrace{((1/n)X^{T}\widehat{W}X)^{-1}}_{\equiv q}\,\underbrace{(1/n)X^{T}\widehat{W}(g(X) + \Sigma^{1/2}\epsilon)}_{\equiv z}. \end{aligned}$$

In part 1 we show that

$$q \overset{P}{\to} \mathbb{E}[xx^{T}]^{-1}\mathbb{E}[w]^{-1}.$$

In part 2 we show that

$$\sqrt{n}z \overset{d}{\to} N(0, \mathbb{E}[w^{2}]\mathbb{E}[g^{2}xx^{T}] + \mathbb{E}[\sigma^{2}w^{2}]\mathbb{E}[xx^{T}]).$$

Thus by Slutsky’s Theorem

$$\sqrt{n}(\widehat{\beta}(\widehat{W}) - \beta) = q\sqrt{n}z \overset{d}{\to} N\left(0, \mathbb{E}[w]^{-2}(\mathbb{E}[w^{2}]\mathbb{E}[xx^{T}]^{-1}\mathbb{E}[g^{2}(x)xx^{T}]\mathbb{E}[xx^{T}]^{-1} + \mathbb{E}[\sigma^{2}w^{2}]\mathbb{E}[xx^{T}]^{-1})\right).$$
  1. Show $q \overset{P}{\to} \mathbb{E}[xx^{T}]^{-1}\mathbb{E}[w]^{-1}$: Recall that by Assumptions 1

    $$\widehat{W}_{ii} = w(\sigma_{i}) + n^{-1/2}\delta_{nm_{i}}h(\sigma_{i}) + n^{-1}d(\sigma_{i},\delta_{n})$$

    where $h$ is a bounded function, the $\delta_{nm_{i}}$ are $O_{P}(1)$, and $d$ is uniformly (in $\sigma$) bounded by an $O_{P}(1)$ random variable.

    $$\begin{aligned} q^{-1} &= (1/n)X^{T}\widehat{W}X = \frac{1}{n}\sum x_{i}x_{i}^{T}\widehat{W}_{ii} \\ &= \frac{1}{n}\sum x_{i}x_{i}^{T}w(\sigma_{i}) + \underbrace{\frac{1}{n^{3/2}}\sum x_{i}x_{i}^{T}h(\sigma_{i})\delta_{nm_{i}}}_{\equiv R_{1}} + \underbrace{\frac{1}{n^{2}}\sum x_{i}x_{i}^{T}d(\sigma_{i},\delta_{n})}_{\equiv R_{2}}. \end{aligned}$$

    We show that $R_{1}, R_{2} \overset{P}{\to} 0$. Noting that $\mathbb{E}[|x_{ij}x_{ik}h(\sigma_{i})\mathbbm{1}_{m_{i}=m}|] < \infty$, because $h$ is bounded and the $x$ have second moments, we have

    $$|R_{1jk}| = n^{-1/2}\left|\sum_{m=1}^{M}\delta_{nm}\left(n^{-1}\sum_{i=1}^{n}x_{ij}x_{ik}h(\sigma_{i})\mathbbm{1}_{m_{i}=m}\right)\right| \overset{P}{\to} 0.$$

    Using the fact that $|d(\sigma_{i},\delta_{n})| < \delta_{n}^{\prime}$ where $\delta_{n}^{\prime}$ is $O_{P}(1)$, we have

    $$|R_{2jk}| \leq n^{-1}\delta_{n}^{\prime}\left(\frac{1}{n}\sum_{i=1}^{n}|x_{ij}x_{ik}|\right) \overset{P}{\to} 0.$$

    Thus

    $$q^{-1} \overset{P}{\to} \mathbb{E}[xx^{T}w] = \mathbb{E}[xx^{T}]\mathbb{E}[w]$$

    where the last equality follows from the fact that $\sigma$ and $x$ are independent. The desired result follows from the continuous mapping theorem.

  2. Show $\sqrt{n}z \overset{d}{\to} N(0, \mathbb{E}[w^{2}]\mathbb{E}[g^{2}xx^{T}] + \mathbb{E}[\sigma^{2}w^{2}]\mathbb{E}[xx^{T}])$:

    $$\begin{aligned} \sqrt{n}z &= n^{-1/2}\sum_{i=1}^{n}(g(x_{i}) + \sigma_{i}\epsilon_{i})\widehat{W}_{ii}x_{i} \\ &= n^{-1/2}\sum_{i=1}^{n}\underbrace{(g(x_{i}) + \sigma_{i}\epsilon_{i})w(\sigma_{i})x_{i}}_{\equiv a_{i}} + \underbrace{n^{-1}\sum_{i=1}^{n}(g(x_{i}) + \sigma_{i}\epsilon_{i})x_{i}\delta_{nm_{i}}h(\sigma_{i})}_{\equiv R_{3}} + \underbrace{n^{-3/2}\sum_{i=1}^{n}(g(x_{i}) + \sigma_{i}\epsilon_{i})d(\sigma_{i},\delta_{n})x_{i}}_{\equiv R_{4}}. \end{aligned}$$

    $\mathbb{E}[a_{i}] = \mathbb{E}[(g(x_{i}) + \sigma_{i}\epsilon_{i})w(\sigma_{i})x_{i}] = 0$ because $\mathbb{E}[g(x_{i})x_{i}] = 0$ and $\epsilon_{i}$ is independent of all other terms and mean $0$. We have

    $$\begin{aligned} Cov(a_{i})_{jk} &= \mathbb{E}[a_{ij}a_{ik}] = \mathbb{E}[(g(x) + \sigma\epsilon)^{2}w^{2}x_{j}x_{k}] \\ &= \mathbb{E}[g^{2}(x)w^{2}x_{j}x_{k}] + 2\mathbb{E}[g(x)\sigma\epsilon w^{2}x_{j}x_{k}] + \mathbb{E}[\sigma^{2}\epsilon^{2}w^{2}x_{j}x_{k}] \\ &= \mathbb{E}[w^{2}]\mathbb{E}[g^{2}(x)x_{j}x_{k}] + \mathbb{E}[\sigma^{2}w^{2}]\mathbb{E}[x_{j}x_{k}]. \end{aligned}$$

    So $Cov(a_{i}) = \mathbb{E}[w^{2}]\mathbb{E}[g^{2}xx^{T}] + \mathbb{E}[\sigma^{2}w^{2}]\mathbb{E}[xx^{T}]$. The desired result now follows from the CLT and showing that $R_{3}, R_{4} \overset{P}{\to} 0$. Note that

    $$\mathbb{E}[(g(x_{i}) + \sigma_{i}\epsilon_{i})x_{i}h(\sigma_{i})\mathbbm{1}_{m_{i}=m}] = \mathbb{E}[g(x_{i})x_{i}]\mathbb{E}[h(\sigma_{i})\mathbbm{1}_{m_{i}=m}] + \mathbb{E}[\sigma_{i}\epsilon_{i}x_{i}h(\sigma_{i})\mathbbm{1}_{m_{i}=m}] = 0.$$

    Thus

    $$R_{3} = \sum_{m=1}^{M}\left(\delta_{nm}n^{-1}\sum_{i=1}^{n}(g(x_{i}) + \sigma_{i}\epsilon_{i})x_{i}h(\sigma_{i})\mathbbm{1}_{m_{i}=m}\right) \overset{P}{\to} 0$$

    because the terms inside the $i$ summand are i.i.d. with expectation $0$. Finally, recalling that $d(\sigma_{i},\delta_{n})$ is bounded above, uniformly in $\sigma$, by $\delta^{\prime}_{n}$ which is $O_{P}(1)$, we have

    $$|R_{4}| \leq n^{-1/2}\delta^{\prime}_{n}\frac{1}{n}\sum_{i=1}^{n}\left|(g(x_{i}) + \sigma_{i}\epsilon_{i})x_{i}\right| \overset{P}{\to} 0.$$

A.2 Proof of Theorem 3.2

Since $w > 0$, by the Cauchy–Schwarz inequality

$$\Gamma(\nu(w)) = \frac{\mathbb{E}[w^{2}(\Gamma(A) + \sigma^{2}\Gamma(B))]}{\mathbb{E}[w]^{2}} \geq \mathbb{E}[(\Gamma(A) + \sigma^{2}\Gamma(B))^{-1}]^{-1}$$

with equality iff

$$w(\sigma) \propto \frac{1}{\Gamma(A) + \sigma^{2}\Gamma(B)} \propto (\sigma^{2} + \Gamma(A)\Gamma(B)^{-1})^{-1}$$

with probability 1.

A.3 Proof of Corollary 3.1

We must show

$$\Gamma(\nu(w_{min})) \leq \min(\Gamma(\nu(I)), \Gamma(\nu(\Sigma^{-1})))$$

with strict inequality if $\mathbb{E}[g^{2}(x)xx^{T}]$ is positive definite and the distribution of $\sigma$ is not a point mass. The inequality follows from Theorem 3.2. By Theorem 3.2, the inequality is strict whenever the functions $w(\sigma) = 1$ and $w(\sigma) = \sigma^{-2}$ are not proportional to $w_{min}(\sigma) = (\sigma^{2} + \Gamma(A)\Gamma(B)^{-1})^{-1}$ with probability $1$. Since $B \succ 0$ and $A = B^{T}\mathbb{E}[xx^{T}g(x)^{2}]B \succ 0$, $\Gamma(A)\Gamma(B)^{-1} > 0$. So if $\sigma$ is not constant with probability $1$, $P(w_{min}(\sigma) = c) < 1$ for any $c$. Therefore $w_{min}$ is not proportional to $w(\sigma) = 1$ with probability $1$. Similarly, for $w_{min}$ to be proportional to $w(\sigma) = \sigma^{-2}$, there must exist a $c$ such that

$$1 = P(\sigma^{2} + \Gamma(A)\Gamma(B)^{-1} = c\sigma^{2}) = P(\Gamma(A)\Gamma(B)^{-1} = \sigma^{2}(c-1)).$$

However, since the constant $\Gamma(A)\Gamma(B)^{-1} > 0$ and $\sigma$ is not a point mass, such a $c$ does not exist.

A.4 Proof of Theorem 3.3

Let Δ=Γ(A)Γ(B)1\Delta=\Gamma(A)\Gamma(B)^{-1}. In part 1 we show that

\[
\widehat{\Delta}=\Delta+n^{-1/2}\delta_n
\]

where $\delta_n$ is $O_P(1)$. In part 2 we show that

\[
\frac{1}{\sigma_i^2+\widehat{\Delta}}=\underbrace{\frac{1}{\sigma_i^2+\Delta}}_{\equiv w(\sigma_i)}+n^{-1/2}\delta_nh(\sigma_i)+n^{-1}d(\sigma_i,\delta_n)
\]

where $\delta_n$ is $O_P(1)$, $d(\sigma_i,\delta_n)$ is bounded uniformly by an $O_P(1)$ random variable, and $h$ is a bounded function. Thus the weight matrix $\widehat{W}$ with diagonal elements $\widehat{W}_{ii}=(\sigma_i^2+\widehat{\Delta})^{-1}$ satisfies Assumptions 1 with $w(\sigma)=w_{min}(\sigma)$.

  1.

    Recall $B=\mathbb{E}[xx^T]^{-1}$. Let $\delta_n$ be an $O_P(1)$ quantity whose definition may change at each appearance. Define $\widehat{B}^{-1}=n^{-1}X^TX$. By the delta method we have

    \[
    \widehat{B}=B+n^{-1/2}\delta_n \tag{A.1}
    \]

    and

    \[
    \Gamma(\widehat{B})=\Gamma(B)+n^{-1/2}\delta_n. \tag{A.2}
    \]

    By assumption $\widehat{\beta}(\widehat{W})=\beta+n^{-1/2}\delta_n$, so

    \begin{align*}
    &\left(\sum\sigma_i^{-4}\right)^{-1}\sum\sigma_i^{-4}x_ix_i^T\widehat{g}(x_i)^2\\
    &\qquad=\left(\sum\sigma_i^{-4}\right)^{-1}\sum\sigma_i^{-4}x_ix_i^T\left((y_i-x_i^T\widehat{\beta}(\widehat{W}))^2-\sigma_i^2\right)\\
    &\qquad=\left(\sum\sigma_i^{-4}\right)^{-1}\sum\sigma_i^{-4}x_ix_i^T\left((y_i-x_i^T\beta)^2-\sigma_i^2\right)+n^{-1/2}\delta_n\\
    &\qquad=\frac{\mathbb{E}[\sigma^{-4}]}{n^{-1}\sum\sigma_i^{-4}}\,\frac{1}{n}\sum\mathbb{E}[\sigma^{-4}]^{-1}\sigma_i^{-4}x_ix_i^T\left((y_i-x_i^T\beta)^2-\sigma_i^2\right)+n^{-1/2}\delta_n.
    \end{align*}

    Note that $\mathbb{E}[\sigma^{-4}](n^{-1}\sum\sigma_i^{-4})^{-1}\overset{P}{\to}1$. Further, the terms $\mathbb{E}[\sigma^{-4}]^{-1}\sigma_i^{-4}x_ix_i^T((y_i-x_i^T\beta)^2-\sigma_i^2)$ are i.i.d. with expectation $\mathbb{E}[xx^Tg(x)^2]$. Thus by the CLT and Slutsky's Theorem

    \[
    \left(\sum\sigma_i^{-4}\right)^{-1}\sum\sigma_i^{-4}x_ix_i^T\widehat{g}(x_i)^2=\mathbb{E}[xx^Tg(x)^2]+n^{-1/2}\delta_n. \tag{A.3}
    \]

    Since $\widehat{A}=\widehat{B}^T\left(\sum\sigma_i^{-4}\right)^{-1}\left(\sum\sigma_i^{-4}x_ix_i^T\widehat{g}(x_i)^2\right)\widehat{B}$, by Equations (A.1) and (A.3) we have

    \[
    \widehat{A}=A+n^{-1/2}\delta_n,
    \]

    which implies

    \[
    \Gamma(\widehat{A})=\Gamma(A)+n^{-1/2}\delta_n.
    \]

    Combining this result with Equation (A.2) we have

    \[
    \Gamma(\widehat{A})\Gamma(\widehat{B})^{-1}=\underbrace{\Gamma(A)\Gamma(B)^{-1}}_{\equiv\Delta}+n^{-1/2}\delta_n.
    \]

    Since $A$ and $B$ are p.s.d., $\Delta\geq 0$. Therefore

    \[
    |\Delta-\underbrace{\max(\Gamma(\widehat{A})\Gamma(\widehat{B})^{-1},0)}_{\equiv\widehat{\Delta}}|\leq|\Delta-\Gamma(\widehat{A})\Gamma(\widehat{B})^{-1}|.
    \]

    Thus

    \[
    \widehat{\Delta}=\Delta+n^{-1/2}\delta_n.
    \]
  2.

    From part 1, using the fact that $(1-x)^{-1}=1+x+x^2(1-x)^{-1}$, we have

    \begin{align*}
    \frac{1}{\sigma_i^2+\widehat{\Delta}}&=\frac{1}{\sigma_i^2+\Delta+n^{-1/2}\delta_n}\\
    &=\left(\frac{1}{\sigma_i^2+\Delta}\right)\left(\frac{1}{1-\left(\frac{-n^{-1/2}\delta_n}{\sigma_i^2+\Delta}\right)}\right)\\
    &=\left(\frac{1}{\sigma_i^2+\Delta}\right)\left(1-\frac{n^{-1/2}\delta_n}{\sigma_i^2+\Delta}+\frac{\frac{n^{-1}\delta_n^2}{(\sigma_i^2+\Delta)^2}}{1+\frac{n^{-1/2}\delta_n}{\sigma_i^2+\Delta}}\right)\\
    &=\frac{1}{\sigma_i^2+\Delta}-n^{-1/2}\delta_n\underbrace{\frac{1}{(\sigma_i^2+\Delta)^2}}_{\equiv h(\sigma_i)}+n^{-1}\underbrace{\frac{\delta_n^2(\sigma_i^2+\Delta)^{-2}}{(\sigma_i^2+\Delta)+n^{-1/2}\delta_n}}_{\equiv d(\sigma_i,\delta_n)}.
    \end{align*}

    The function $h$ is bounded because the $\sigma_i$ are bounded below by a positive constant and $\Delta\geq 0$. Note that since $\sigma_i\geq\sigma_{min}>0$ we have

    \[
    d(\sigma_i,\delta_n)\leq\frac{\delta_n^2/\sigma_{min}^4}{\sigma_{min}^2+n^{-1/2}\delta_n}
    \]

    where the right hand side is $O_P(1)$.
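The proof suggests a direct plug-in implementation of the adaptive weights. The Python sketch below (an illustration under assumptions, not the paper's code) computes $\widehat{B}$, $\widehat{A}$, and $\widehat{\Delta}=\max(\Gamma(\widehat{A})\Gamma(\widehat{B})^{-1},0)$ from the data and a root-$n$ consistent initial fit; taking $\Gamma$ to be the trace is a placeholder for whichever functional $\Gamma$ the paper fixes.

import numpy as np

def adaptive_weights(X, y, sigma, beta_init, Gamma=np.trace):
    """Plug-in weights w_hat_i = 1 / (sigma_i^2 + Delta_hat), following A.4."""
    n = X.shape[0]
    B_hat = np.linalg.inv(X.T @ X / n)                 # B_hat = (n^{-1} X^T X)^{-1}
    resid2 = (y - X @ beta_init)**2
    w4 = sigma**-4
    # (sum sigma_i^{-4})^{-1} sum sigma_i^{-4} x_i x_i^T ((y_i - x_i^T beta)^2 - sigma_i^2)
    M = (X * (w4 * (resid2 - sigma**2))[:, None]).T @ X / w4.sum()
    A_hat = B_hat.T @ M @ B_hat
    Delta_hat = max(Gamma(A_hat) / Gamma(B_hat), 0.0)  # truncate at 0, as in the proof
    return 1.0 / (sigma**2 + Delta_hat)

A natural usage is to take beta_init from an initial OLS or WLS fit and then refit weighted least squares with the returned weights.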

A.5 Proof of Theorem 3.4

Let $\delta_n,\delta_{nm}$ be $O_P(1)$ quantities whose definitions may change at each appearance. From Equations (A.1) and (A.2) in Appendix A.4 we have

\begin{align*}
\widehat{B}&=B+n^{-1/2}\delta_n\\
\Gamma(\widehat{B})&=\Gamma(B)+n^{-1/2}\delta_n.
\end{align*}

We have

\begin{align*}
\widehat{C}_m&=\frac{1}{\sum_{i=1}^n\mathbbm{1}_{m_i=m}}\sum_{i=1}^n(y_i-x_i^T\widehat{\beta}(\widehat{W}))^2x_ix_i^T\mathbbm{1}_{m_i=m}\\
&=\frac{nf_m(m)}{\sum_{i=1}^n\mathbbm{1}_{m_i=m}}\left(\frac{1}{n}\sum_{i=1}^n\frac{(y_i-x_i^T\beta)^2x_ix_i^T\mathbbm{1}_{m_i=m}}{f_m(m)}\right)+n^{-1/2}\delta_{nm}\\
&=C_m+n^{-1/2}\delta_{nm},
\end{align*}

where the last equality follows from the facts that the terms inside the sum are i.i.d. with expectation $C_m=\mathbb{E}[(g^2(x)+\sigma_m^2)xx^T]$ and that $nf_m(m)(\sum_{i=1}^n\mathbbm{1}_{m_i=m})^{-1}\overset{P}{\to}1$. Thus we have

\[
\widehat{W}_{min,ii}=w_{min}(m_i)+\delta_{nm_i}n^{-1/2},
\]

which satisfies the form of Assumptions 1.
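When $\sigma$ takes finitely many values, the group-wise quantities above admit a simple plug-in. The sketch below is one possible implementation under assumptions: since $B^TC_mB=A+\sigma_m^2B$ (using $B=\mathbb{E}[xx^T]^{-1}$), for a linear functional $\Gamma$ such as the trace, $\Gamma(\widehat{B}^T\widehat{C}_m\widehat{B})^{-1}$ is proportional to $w_{min}(m)=(\sigma_m^2+\Gamma(A)\Gamma(B)^{-1})^{-1}$; the choice $\Gamma=\operatorname{tr}$ is again a placeholder.

import numpy as np

def groupwise_weights(X, y, m, beta_init, Gamma=np.trace):
    """Weights proportional to w_min(m_i) via the group-wise C_hat_m of A.5."""
    n = X.shape[0]
    B_hat = np.linalg.inv(X.T @ X / n)
    resid2 = (y - X @ beta_init)**2
    w = np.empty(n)
    for grp in np.unique(m):
        idx = (m == grp)
        # C_hat_m: average of (y_i - x_i^T beta_hat)^2 x_i x_i^T over group m
        C_hat = (X[idx] * resid2[idx][:, None]).T @ X[idx] / idx.sum()
        w[idx] = 1.0 / Gamma(B_hat.T @ C_hat @ B_hat)
    return w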

A.6 Proof of Theorem 3.5

\[
\widehat{\beta}(W)=(X^TWX)^{-1}X^TWY=\left(\frac{1}{n}\sum x_ix_i^Tw(\sigma_i)\right)^{-1}\left(\frac{1}{n}\sum x_iw(\sigma_i)y_i\right).
\]

By the SLLN and the continuous mapping theorem

\[
\left(\frac{1}{n}\sum x_ix_i^Tw(\sigma_i)\right)^{-1}\rightarrow_{a.s.}\mathbb{E}[xx^Tw(\sigma)]^{-1}.
\]

Note that

\[
\frac{1}{n}\sum x_iw(\sigma_i)y_i=\frac{1}{n}\sum x_iw(\sigma_i)f(x_i)+\frac{1}{n}\sum x_iw(\sigma_i)\epsilon_i\sigma_i.
\]

The summands in the second term on the r.h.s. are i.i.d. with expectation 0. Therefore

\[
\frac{1}{n}\sum x_iw(\sigma_i)y_i\rightarrow_{a.s.}\mathbb{E}[xw(\sigma)f(x)].
\]

Combining the two limits yields $\widehat{\beta}(W)\rightarrow_{a.s.}\mathbb{E}[xx^Tw(\sigma)]^{-1}\mathbb{E}[xw(\sigma)f(x)]$.
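This limit is easy to see in simulation. The sketch below (with assumed choices of $f$, $F_X$, $F_\sigma$, and $w(\sigma)=\sigma^{-2}$) shows $\widehat{\beta}(W)$ stabilizing as $n$ grows; because $x$ and $\sigma$ are drawn independently here, the weights factor out and the limit equals $\beta$ itself.

import numpy as np

rng = np.random.default_rng(2)
for n in (10**3, 10**5, 10**6):
    x1 = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x1])
    sigma = rng.uniform(0.5, 2.0, size=n)
    y = np.sin(x1) + sigma * rng.normal(size=n)      # f(x) = sin(x1), assumed
    w = sigma**-2                                    # w(sigma) = 1/sigma^2
    # beta_hat(W) = (X^T W X)^{-1} X^T W y with W = diag(w)
    beta_hat = np.linalg.solve((X * w[:, None]).T @ X,
                               (X * w[:, None]).T @ y)
    print(n, np.round(beta_hat, 4))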
