
Empirical eigenvalue based testing for structural breaks in linear panel data models

Lajos Horváth Department of Mathematics, University of Utah, Salt Lake City, UT, USA  and  Gregory Rice Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, ON, Canada
Abstract.

Testing for stability in linear panel data models has become an important topic in both the statistics and econometrics research communities. The available methodologies address testing for changes in the mean/linear trend, or testing for breaks in the covariance structure by checking for the constancy of common factor loadings. When an external shock induces a change to the stochastic structure of panel data, it is unclear whether the change would be reflected in the mean, the covariance structure, or both. In this paper, we develop a test for structural stability of linear panel data models that is based on monitoring for changes in the largest eigenvalue of the sample covariance matrix. The asymptotic distribution of the proposed test statistic is established under the null hypothesis that the mean and covariance structure of the panel data's cross sectional units remain stable during the observation period. We show that the test is consistent under common breaks in the mean or in the factor loadings. These results are investigated by means of a Monte Carlo simulation study, and their usefulness is demonstrated with an application to U.S. treasury yield curve data, in which some interesting features of the 2007-2008 subprime crisis are illuminated.

Key words and phrases:
panel data, change point detection, time series, empirical eigenvalues, CUSUM process, weak convergence

1. Introduction

We consider in this paper the problem of testing for the presence of a structural break in linear panel data models. Structural breaks in panel data may result from any of a number of sources. For example, if the data under consideration consists of U.S. macroeconomic indicators, then the onset of a recession, or the introduction of a new technology, may be evidenced by changes in the correlations between indicators or linear model parameters fitted from the data.

Change point analysis has been extensively developed to study such features in data; we refer to Aue and Horváth (2012) for a recent survey of the field in the context of time series. Adapting change point methodology to the panel data setting presents a difficulty since the dimension, or number of cross sectional units ($N$), may be larger in relation to the sample size ($T$) than is typical in classical change point analysis. This encourages asymptotic frameworks in which both $N$ and $T$ tend jointly to infinity.

Most of the literature in this direction addresses either testing for changes in the mean, or testing for changes in the correlation structure as measured by changes in common factor loadings. With regards to testing for and estimating changes in the mean, we refer to Bai (2010), who derives a least squares change point estimator. Kim (2011, 2014) and Baltagi et al. (2015) extend this methodology to account for changes in linear trends in the presence of cross sectional dependence modeled by common factors. Horváth and Hušková (2012) develop a test for a structural change in the mean based on the CUSUM estimator. Li et al. (2014) and Qian and Su (2014) consider multiple structural breaks in panel data, and Kao et al. (2014) consider break testing under cointegration.

Estimating and testing for changes in the covariance of scalar and vector valued time series of a fixed dimension are considered in Galeano and Peña (2007), Aue et al. (2009), and Wied et al. (2012). With regards to testing for changes in the factor structure of panel data, Breitung and Eickmeier (2011) develop methodology that relies on testing for constancy of the least squares estimates obtained by regression on the principal component factors. Their test depends on estimating the number of common factors according to the information criterion developed in Bai and Ng (2002). In both the testing procedure and the method used to determine the number of common factors, it is presumed that the mean remains constant.

When external shocks induce a change to the stochastic structure of panel data, it is unclear whether the change would affect the mean, the covariance structure, or both. Methods for detecting changes in the mean appear to be somewhat robust to small changes in the covariance structure of the panels; however, the methods proposed in Breitung and Eickmeier (2011) to test for changes in the common factor loadings are sensitive to both changes in the mean and large changes in the covariance, as evidenced by non-monotonic power. This was recently addressed in Yamamoto and Tanaka (2015), in which a correction is proposed, but it raises the question of whether alternatives to estimating principal components and the number of common factors might be effective in detecting instability in panel data.

The alternative that we explore here relies on analyzing the largest eigenvalues of the covariance matrix. Using the largest eigenvalues of a covariance matrix as a simplified summary of the covariance structure of multivariate time series has served an important role in finance and econometrics for quite some time. This idea is utilized in Markowitz portfolio optimization (cf. Markowitz (1952, 1956)), and to model co–movements of markets and stocks as a barometer for risk (cf. Keogh et al. (2004) and Zovko and Farmer (2007)), among other applications.

In this paper, we propose methodology for testing structural stability in linear panel data models that is based on a process derived from the largest eigenvalue of the sample covariance matrix computed from an increasing proportion of the total sample. The asymptotic distribution of the eigenvalue process is established assuming structural stability. Furthermore, we show that functionals of the eigenvalue process diverge when there is a common break in the mean or in the covariance as measured by the common factor loadings.

The rest of the paper is organized as follows. In Section 2, we present the linear panel data models and assumptions considered in the paper, as well as the main asymptotic results for the largest eigenvalue under the null hypothesis of stability of the model parameters. Section 3 contains the details of applying the results of Section 2 to the change point problem, including asymptotic consistency results under the mean break and factor loading break alternatives. In Section 4, we discuss the practical implementation of the test, and present the results of a Monte Carlo simulation study. Section 5 contains an application of the methodology developed in the paper to US treasury yield curve data. Analogous results for smaller eigenvalues are considered in Section 6. All proofs of the technical results are collected in Section 7.

2. Models, assumptions, and asymptotics under $H_{0}$

We consider the model

(2.1) $X_{i,t}=(\mu_{i}+\delta_{i}I\{t\geq t^{*}\})+(\gamma_{i}+\psi_{i}I\{t\geq t^{*}\})\eta_{t}+e_{i,t},\;\;1\leq i\leq N,\ 1\leq t\leq T,$

where $X_{i,t}$ denotes the $i^{\mathrm{th}}$ cross section of the panel at time $t$, $\mu_{i}$ denotes the initial mean of the $i^{\mathrm{th}}$ cross section that changes to $\mu_{i}+\delta_{i}$ at the unknown time $t^{*}$, $\eta_{t}$ denotes a real valued common factor with initial loadings $\gamma_{i}$ that may change to $\gamma_{i}+\psi_{i}$, and $e_{i,t}$ denote the idiosyncratic errors. It is presumed that both the common factor and the idiosyncratic errors may be serially correlated. As we develop asymptotics, we assume that the number of cross sections $N$ depends on the observation period $T$, and $N$ is allowed to tend to infinity with $T$. We make the assumption that $\eta_{t}\in\mathbb{R}$ for the sake of simplicity; these results could be extended to the more general case of a vector valued common factor and factor loading.

We are interested in testing the null hypothesis that the model parameters remain stable during the observation period $1\leq t\leq T$, i.e.

$H_{0}:\;t^{*}>T.$

When $H_{0}$ holds, the model of (2.1) reduces to

(2.2) $X_{i,t}=\mu_{i}+\gamma_{i}\eta_{t}+e_{i,t},\;\;1\leq i\leq N,\ 1\leq t\leq T.$

Let $\cdot^{\top}$ denote the matrix transpose, and define the vectors ${\bf X}_{t}=(X_{1,t},X_{2,t},\ldots,X_{N,t})^{\top}\in\mathbb{R}^{N}$. We define

(2.3) $\hat{\bf C}_{N,T}(u)=\frac{1}{\lfloor Tu\rfloor}\sum_{t=1}^{\lfloor Tu\rfloor}({\bf X}_{t}-\bar{\bf X}_{T})({\bf X}_{t}-\bar{\bf X}_{T})^{\top},\;\;1/T\leq u\leq 1,$

to be the sample covariance matrix based on the proportion $u$ of the sample, where

$\bar{\bf X}_{T}=\frac{1}{T}\sum_{t=1}^{T}{\bf X}_{t}.$

In order to test $H_{0}$, we utilize the processes derived from the $K$ largest eigenvalues $\hat{\lambda}_{1}(u)\geq\hat{\lambda}_{2}(u)\geq\ldots\geq\hat{\lambda}_{K}(u)$ of $\hat{\bf C}_{N,T}(u)$. We focus our attention at first on the process derived from the largest eigenvalue, and the primary objective of this section is to establish the weak convergence of $\hat{\lambda}_{1}(u)$ under $H_{0}$. Analogous results for processes derived from the smaller eigenvalues are provided in Section 6. We note that an alternative to using $\hat{\lambda}_{i}(u)$ is to use $\tilde{\lambda}_{i}(u)=(\lfloor Tu\rfloor/T)\hat{\lambda}_{i}(u)$, which coincide with the largest eigenvalues of

(2.4) $\tilde{\bf C}_{N,T}(u)=\frac{1}{T}\sum_{t=1}^{\lfloor Tu\rfloor}({\bf X}_{t}-\bar{\bf X}_{T})({\bf X}_{t}-\bar{\bf X}_{T})^{\top},\;\;0\leq u\leq 1.$
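As a practical illustration, the eigenvalue processes $\hat{\lambda}_{i}(u)$ and $\tilde{\lambda}_{i}(u)$ can be computed on the grid $u=t/T$ from a $T\times N$ data matrix. The following R sketch is our own illustration and not code from the paper; the function name eigenvalue_process and its arguments are hypothetical.

```r
## Minimal sketch (not the authors' code): the eigenvalue processes of Section 2.
## X is a T x N data matrix; the function returns lambda_hat_i(t/T), i = 1,...,K,
## together with lambda_tilde_i(t/T) = (t/T) * lambda_hat_i(t/T).
eigenvalue_process <- function(X, K = 1) {
  Tn <- nrow(X)
  Xc <- sweep(X, 2, colMeans(X))                      # center by the full-sample mean
  lam_hat <- matrix(NA_real_, Tn, K)
  for (t in seq_len(Tn)) {
    C_hat <- crossprod(Xc[1:t, , drop = FALSE]) / t   # \hat{C}_{N,T}(t/T)
    lam_hat[t, ] <- eigen(C_hat, symmetric = TRUE, only.values = TRUE)$values[1:K]
  }
  list(lambda_hat = lam_hat,
       lambda_tilde = (seq_len(Tn) / Tn) * lam_hat)   # eigenvalues of \tilde{C}_{N,T}(t/T)
}
```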

Assuming that $H_{0}$ holds, ${\bf C}=\mathrm{cov}({\bf X}_{t})$ does not depend on $t$, and in this case we define the eigenvalues and eigenvectors of ${\bf C}$ by

(2.5) $\lambda_{i}{\mathfrak{e}}_{i}={\bf C}{\mathfrak{e}}_{i},\;\;1\leq i\leq N,$

where $\|{\mathfrak{e}}_{i}\|=1$, $1\leq i\leq N$, and $\|\cdot\|$ denotes the Euclidean norm in $\mathbb{R}^{N}$. Since $N$ is allowed to depend on $T$, both the eigenvalues $\lambda_{i}$ and the eigenvectors ${\mathfrak{e}}_{i}$ may evolve as $T\to\infty$. Throughout this paper, we make use of the following assumptions:

Assumption 2.1.

The eigenvalues $\lambda_{1},\lambda_{2},\ldots,\lambda_{K}$ satisfy $\min_{1\leq i\leq K}(\lambda_{i}-\lambda_{i+1})\geq c_{0}$ for some constant $c_{0}>0$.

Assumption 2.2.

The common factor loadings satisfy $|\gamma_{i}|\leq c_{1}$ for all $1\leq i\leq N$ with some $c_{1}>0$.

Assuming that the eigenvalues of ${\bf C}$ are distinct is necessary to derive a normal approximation for their estimates, and is a common assumption in the literature. We assume that the common factors and idiosyncratic errors satisfy a fairly general weak dependence condition.

Definition 2.1.

We say that a stationary time series $\{\varepsilon_{t},\;-\infty<t<\infty\}$ is an $L^{p}$-$m$-approximable Bernoulli shift with rate function $\chi$ if $E\varepsilon_{t}=0$, $E\varepsilon_{t}^{p}<\infty$, and $\varepsilon_{t}=g(\nu_{t},\nu_{t-1},\ldots)$ for some measurable function $g:\mathbb{R}^{\infty}\to\mathbb{R}$, where $\{\nu_{s},-\infty<s<\infty\}$ are independent and identically distributed random variables, $(E(\varepsilon_{t}-\varepsilon_{t}^{(m)})^{p})^{1/p}=\chi(m)$ with $\varepsilon_{t}^{(m)}=g(\nu_{t},\nu_{t-1},\ldots,\nu_{t-m},\nu^{*}_{t-m-1,t,m},\nu^{*}_{t-m-2,t,m},\ldots)$, and the $\nu^{*}_{i,j,\ell}$ are independent and identically distributed copies of $\nu_{0}$.

The space of stationary processes that may be represented as Bernoulli shifts is enormous; we refer to Wu (2005) for a discussion. Examples include stationary ARMA, ARCH, and GARCH processes. The rate function describes the rate at which such processes can be approximated with sequences exhibiting a finite range of dependence. In many examples of interest, the rate function may be taken to decay exponentially in the lag parameter.
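For instance, a stationary AR(1) process $\eta_{t}=\rho\eta_{t-1}+\nu_{t}=\sum_{j\geq 0}\rho^{j}\nu_{t-j}$ is a Bernoulli shift, and its $m$-dependent coupling replaces innovations more than $m$ lags in the past by independent copies. The following R sketch is our own numerical illustration under this assumed AR(1) example (not part of the paper); it checks that the $L^{2}$ coupling error decays geometrically in $m$, so the rate function can indeed be taken to decay exponentially.

```r
## Illustration (assumed AR(1) example, not from the paper): an AR(1) process as a
## Bernoulli shift, and the decay of the coupling error (E(eta_t - eta_t^(m))^2)^{1/2}.
set.seed(1)
rho <- 0.5; m_max <- 12; n_rep <- 5000; lag_trunc <- 200
coupling_error <- sapply(1:m_max, function(m) {
  err <- replicate(n_rep, {
    nu      <- rnorm(lag_trunc)                  # innovations nu_t, nu_{t-1}, ...
    nu_star <- rnorm(lag_trunc)                  # independent copies nu*
    eta   <- sum(rho^(0:(lag_trunc - 1)) * nu)
    eta_m <- sum(rho^(0:m) * nu[1:(m + 1)]) +    # keep the m most recent innovations
             sum(rho^((m + 1):(lag_trunc - 1)) * nu_star[(m + 2):lag_trunc])
    (eta - eta_m)^2
  })
  sqrt(mean(err))
})
## coupling_error is approximately proportional to rho^m, an exponential rate function.
```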

Assumption 2.3.
  1. (a)

    $\{\eta_{t},\;-\infty<t<\infty\}$ is $L^{12}$-$m$-approximable with rate function $\chi_{\eta}(m)=c_{2}m^{-\alpha_{\eta}}$ for constants $c_{2}>0$ and $\alpha_{\eta}>1$, and $E\eta_{t}^{2}=1$.

  2. (b)

    The sequences $\{e_{i,t},\;-\infty<t<\infty\}$, $1\leq i\leq N$, are each $L^{12}$-$m$-approximable with rate functions $\chi_{e,i}(m)\leq c_{3}m^{-\alpha_{e}}$ for constants $c_{3}>0$ and $\alpha_{e}>1$. There exist constants $c_{4}$ and $c_{5}$ such that $0<c_{4}\leq Ee^{2}_{i,t}=\sigma_{i}^{2}\leq c_{5}<\infty$.

  3. (c)

    The sequences $\{\eta_{t},\;-\infty<t<\infty\}$ and $\{e_{i,t},\;-\infty<t<\infty\}$, $1\leq i\leq N$, are independent.

The least restrictive moment condition that could be assumed in order to obtain a normal approximation for the empirical eigenvalues is four moments. Our assumption of twelve moments comes from the fact that we apply a third order Taylor series expansion for the difference between the empirical eigenvalue process $\hat{\lambda}_{i}(u)$ and $\lambda_{i}$ (cf. Hall and Hosseini–Nasab (2009)), and twelve moments are needed to get an upper bound for the highest order term that is uniform with respect to $u$. The condition in Assumption 2.3 that $E\eta_{t}^{2}=1$ is nonrestrictive; it makes the model (2.2) identifiable. In order to state the main result, we define

$\xi_{i,t}={\mathfrak{e}}_{i}^{\top}({\bf X}_{t}-E{\bf X}_{0})({\bf X}_{t}-E{\bf X}_{0})^{\top}{\mathfrak{e}}_{i}.$
Theorem 2.1.

If $H_{0}$ and Assumptions 2.1, 2.2, and 2.3 hold, and

(2.6) $\frac{N(\log T)^{1/3}}{T^{1/2}}\to 0,\;\;\mbox{as}\;\;T\to\infty,$

then

$\frac{T^{1/2}}{\sigma_{1}}u(\hat{\lambda}_{1}(u)-\lambda_{1})\stackrel{\mathcal{D}[0,1]}{\longrightarrow}W(u),$

where $W(u)$ is a Wiener process, $\stackrel{\mathcal{D}[0,1]}{\longrightarrow}$ denotes weak convergence in the Skorokhod topology, and

$\sigma_{1}^{2}=\sigma_{1}^{2}(T)=\sum_{t=-\infty}^{\infty}\mathrm{cov}(\xi_{1,0},\xi_{1,t}).$

Theorem 2.1 shows that the distribution of the largest eigenvalue process may be approximated by that of a Brownian motion. We note that the norming sequence $\sigma_{1}^{2}$, which is essentially the long run variance of the quadratic forms $\xi_{1,t}$, may change with $N$. In fact, we show in Section 7 that if ${\mbox{\boldmath$\gamma$}}=(\gamma_{1},\gamma_{2},\ldots,\gamma_{N})^{\top}$, then under $H_{0}$, $\sigma_{1}^{2}\to\infty$ as $T\to\infty$ if $\|{\mbox{\boldmath$\gamma$}}\|\to\infty$. The necessity of including the logarithm term in the rate condition (2.6) comes from the fact that we establish weak convergence on the entire unit interval. This condition can be improved by considering convergence on an interval that is bounded away from zero.

Theorem 2.2.

If the conditions of Theorem 2.1 are satisfied and (2.6) is replaced with

(2.7) $\frac{N}{T^{1/2}}\to 0,\;\;\mbox{as}\;\;T\to\infty,$

then for all $c\in(0,1]$,

$\frac{T^{1/2}}{\sigma_{1}}u(\hat{\lambda}_{1}(u)-\lambda_{1})\stackrel{\mathcal{D}[c,1]}{\longrightarrow}W(u),$

where $\sigma_{1}^{2}$ is defined as in Theorem 2.1.

Conditions (2.6) and (2.7) require that the sample size $T$ is asymptotically larger than the squared dimension $N^{2}$. The case when $N$ is proportional to $T$ has received considerable attention in the probability and statistics literature. Assuming that $\hat{\bf C}_{N,T}(1)$ is based on independent and identically distributed entries, the distribution of $\hat{\lambda}_{1}(1)$ converges to a Tracy–Widom distribution (cf. Johnstone (2008)). For a survey of the theory of eigenvalues of large random matrices, we refer to Aue and Paul (2014).

3. Changepoint detection

3.1. Estimating the norming sequence

Consistent estimation of $\sigma_{1}^{2}$ is required in order to apply Theorems 2.1 and 2.2 to test $H_{0}$. As $\sigma_{1}^{2}$ is defined as the long run variance of the quadratic forms $\xi_{1,t}$, we propose a natural nonparametric estimator. We define $\hat{{\mathfrak{e}}}_{i}$ by

$\hat{\lambda}_{i}(1)\hat{{\mathfrak{e}}}_{i}=\hat{{\bf C}}_{N,T}(1)\hat{{\mathfrak{e}}}_{i},\;\;1\leq i\leq N.$

Let $\hat{\xi}_{i,t}=(\hat{{\mathfrak{e}}}_{i}^{\top}({\bf X}_{t}-\bar{{\bf X}}^{*}_{T,t}))^{2}$, where

$\bar{{\bf X}}^{*}_{T,t}=\begin{cases}\displaystyle\frac{1}{\hat{t}^{*}}\sum_{s=1}^{\hat{t}^{*}}{\bf X}_{s},&\mbox{if}\;\;1\leq t\leq\hat{t}^{*},\\ \displaystyle\frac{1}{T-\hat{t}^{*}}\sum_{s=\hat{t}^{*}+1}^{T}{\bf X}_{s},&\mbox{if}\;\;\hat{t}^{*}+1\leq t\leq T,\end{cases}$

and $\hat{t}^{*}$ is the least squares change point estimator for a change in the mean defined in Section 3 of Bai (2010). Estimating the mean under the alternative of a mean change is done to ensure monotonic power in that case. Let $J$ be a kernel/weight function that is continuous and symmetric about the origin in $\mathbb{R}$, with bounded support, and satisfying $J(0)=1$. Examples of such functions include the Bartlett and Parzen kernels; further examples and discussion may be found in Taniguchi and Kakizawa (2000). We define the estimator $\hat{v}^{2}_{1,T}$ for $\sigma_{1}^{2}$ by

(3.1) $\hat{v}^{2}_{1,T}=\sum_{s=-N+1}^{N-1}J\left(\frac{s}{h}\right)\hat{r}_{1,s},$

where $h$ denotes a smoothing bandwidth parameter, and

$\hat{r}_{1,s}=\begin{cases}\displaystyle\frac{1}{T-s}\sum_{t=1}^{T-s}(\hat{\xi}_{1,t}-\bar{\xi}_{1,T})(\hat{\xi}_{1,t+s}-\bar{\xi}_{1,T}),&\mbox{if}\;\;s\geq 0,\\ \displaystyle\frac{1}{T-|s|}\sum_{t=1-s}^{T}(\hat{\xi}_{1,t}-\bar{\xi}_{1,T})(\hat{\xi}_{1,t+s}-\bar{\xi}_{1,T}),&\mbox{if}\;\;s<0,\end{cases}$

with

$\bar{\xi}_{1,T}=\frac{1}{T}\sum_{t=1}^{T}\hat{\xi}_{1,t}.$
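For concreteness, the following R sketch (ours, not the authors' implementation) computes $\hat{v}^{2}_{1,T}$ directly from the quadratic forms $\hat{\xi}_{1,t}$ using the Bartlett kernel and a user supplied bandwidth $h$; the paper's simulations instead obtain this quantity with the “kernHAC” function of the “sandwich” package, with the Parzen kernel and the bandwidth of Andrews (1991). The function name lrv_estimate and its arguments are hypothetical.

```r
## Minimal sketch of the long run variance estimator (3.1), with the Bartlett kernel.
## xi_hat is the vector of \hat{xi}_{1,t}, t = 1,...,T; the kernel's bounded support
## truncates the sum over lags (in (3.1) the sum runs over |s| < N).
lrv_estimate <- function(xi_hat, h, max_lag = length(xi_hat) - 1) {
  Tn <- length(xi_hat)
  xi_c <- xi_hat - mean(xi_hat)                       # \hat{xi}_{1,t} - \bar{xi}_{1,T}
  bartlett <- function(x) pmax(0, 1 - abs(x))         # J with bounded support, J(0) = 1
  r_hat <- function(s) sum(xi_c[1:(Tn - s)] * xi_c[(1 + s):Tn]) / (Tn - s)
  v2 <- r_hat(0)
  for (s in seq_len(min(max_lag, Tn - 1))) {
    w <- bartlett(s / h)
    if (w > 0) v2 <- v2 + 2 * w * r_hat(s)            # lags s and -s contribute equally
  }
  v2
}
```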
Theorem 3.1.

If $H_{0}$ and the conditions of Theorem 2.1 are satisfied, and

(3.2) $h=h(T)\to\infty\;\;\mbox{and}\;\;hN^{3}/T^{1/2}\to 0,\;\;\mbox{as}\;\;T\to\infty,$

then

(3.3) $\frac{\hat{v}^{2}_{1,T}}{\sigma_{1}^{2}}\;\stackrel{P}{\to}\;1,\;\;\mbox{as}\;\;T\to\infty.$

The results in Theorems 2.1, 2.2, and 3.1 can be used to test for the stability of the largest eigenvalue, which, as we show below, suggests stability of the model parameters.

Corollary 3.1.

Let

$\hat{B}_{T,1}(u)=\frac{T^{1/2}}{\hat{v}_{1,T}}u(\hat{\lambda}_{1}(u)-\hat{\lambda}_{1}(1)),\;\;0\leq u\leq 1.$

Under the conditions of Theorems 2.1 and 3.1, $\hat{B}_{T,1}(u)\stackrel{\mathcal{D}[0,1]}{\longrightarrow}W^{0}(u)$, where $W^{0}$ is a standard Brownian bridge.

The continuous mapping theorem and Corollary 3.1 imply that

(3.4) $\sup_{0\leq u\leq 1}|\hat{B}_{T,1}(u)|\stackrel{\mathcal{D}}{\to}\sup_{0\leq t\leq 1}|W^{0}(t)|.$

The limiting distribution on the right hand side of (3.4) is commonly referred to as the Kolmogorov distribution. An approximate test of size $\alpha$ of $H_{0}$ is to reject if $\sup_{0\leq u\leq 1}|\hat{B}_{T,1}(u)|$ is larger than the $\alpha$ critical value of the Kolmogorov distribution. One could also consider alternative functionals of $\hat{B}_{T,1}$ to test $H_{0}$. The distributions of many functionals of $W^{0}$ are well known (cf. Shorack and Wellner (1986), pp. 142–149).
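To summarize the procedure, a hedged end-to-end sketch in R is given below; this is again our own illustration, not the authors' code. It reuses the hypothetical helpers eigenvalue_process() and lrv_estimate() from the earlier sketches, centers the quadratic forms by the overall mean rather than by Bai's (2010) least squares break estimator for simplicity, and evaluates the p-value from the series expansion of the Kolmogorov distribution.

```r
## Sketch of the test of H_0 based on sup_u |B_hat_{T,1}(u)| (illustrative only).
kolmogorov_pvalue <- function(x, n_terms = 100) {      # P( sup_t |W^0(t)| > x )
  k <- 1:n_terms
  min(1, 2 * sum((-1)^(k - 1) * exp(-2 * k^2 * x^2)))
}

eigen_cusum_test <- function(X, h) {
  Tn <- nrow(X)
  u  <- seq_len(Tn) / Tn
  lam1 <- eigenvalue_process(X, K = 1)$lambda_hat[, 1] # \hat{lambda}_1(t/T)
  ## \hat{xi}_{1,t}: quadratic forms in the estimated leading eigenvector; for
  ## simplicity the overall mean is used in place of the estimator \bar{X}*_{T,t}.
  Xc <- sweep(X, 2, colMeans(X))
  e1 <- eigen(crossprod(Xc) / Tn, symmetric = TRUE)$vectors[, 1]
  xi_hat <- as.numeric(Xc %*% e1)^2
  v1 <- sqrt(lrv_estimate(xi_hat, h))
  B_hat <- sqrt(Tn) / v1 * u * (lam1 - lam1[Tn])       # \hat{B}_{T,1}(t/T)
  stat <- max(abs(B_hat))
  list(statistic = stat, p.value = kolmogorov_pvalue(stat), B_hat = B_hat)
}
```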

3.2. Consistency under alternatives

We now turn our attention to studying the consistency of tests for $H_{0}$ based on $\sup_{0\leq u\leq 1}|\hat{B}_{T,1}(u)|$ under the mean break and factor loading break alternatives. Following the literature, we assume that the change does not occur too close to the end points of the sample:

(3.5) $t^{*}=\lfloor T\theta\rfloor\;\;\mbox{with some}\;\;0<\theta<1.$

First we consider the case of a break in the mean, i.e. the model

(3.6) $X_{i,t}=(\mu_{i}+\delta_{i}I\{t\geq t^{*}\})+\gamma_{i}\eta_{t}+e_{i,t},\;\;1\leq i\leq N,\ 1\leq t\leq T,$

holds. Let ${\mbox{\boldmath$\delta$}}={\mbox{\boldmath$\delta$}}_{T}=(\delta_{1},\delta_{2},\ldots,\delta_{N})^{\top}$ and assume

(3.7) $\lim_{T\to\infty}\frac{T^{1/2}\|{\mbox{\boldmath$\delta$}}\|}{\|{\mbox{\boldmath$\gamma$}}\|^{2}}=\infty.$
Theorem 3.2.

Under (3.6), Assumptions 2.1, 2.2, and 2.3, and assuming that (2.7), (3.5), and (3.7) are satisfied, we have that

(3.8) $\sup_{0\leq u\leq 1}|\hat{B}_{T,1}(u)|\;\stackrel{P}{\to}\;\infty.$

We note that assumptions (3.5) and (3.7) also appeared in Horváth and Hušková (2012), where the optimality of these conditions is discussed. It is clear that if $N$ is large, relatively small changes can be detected by $\hat{\lambda}_{1}(u)$. As a consequence of the proof of Theorem 3.2, it follows that

$\max_{2\leq i\leq K}\sup_{0\leq u\leq 1}T^{1/2}|\hat{\lambda}_{i}(u)-\hat{\lambda}_{i}(1)|=O_{P}(1),$

i.e. a change in the mean is asymptotically entirely captured by the largest eigenvalue of the partial covariance matrices.

The condition (3.7) suggests how a local change in the mean alternative may be considered. For example, if $\delta_{1}=\delta_{2}=\ldots=\delta_{N}=\delta(N,T)$ and $\gamma_{1}=\gamma_{2}=\ldots=\gamma_{N}=\gamma$ with $\gamma$ fixed, we need that $(T/N)^{1/2}|\delta(N,T)|\to\infty$ for (3.7) to hold, which describes at what rate $\delta(N,T)$ may tend to zero while maintaining consistency.

Next we consider the model

(3.9) $X_{i,t}=\mu_{i}+(\gamma_{i}+\psi_{i}I\{t\geq t^{*}\})\eta_{t}+e_{i,t},\;\;1\leq i\leq N,\ 1\leq t\leq T,$

i.e. the means of the panels remain the same but the loadings change at time $t^{*}$. Let ${\mbox{\boldmath$\psi$}}=(\psi_{1},\psi_{2},\ldots,\psi_{N})^{\top}$.

Theorem 3.3.

Under (3.9), Assumptions 2.1, 2.2, and 2.3, and assuming that (2.7), (3.5) and

(3.10) $\lim_{T\to\infty}\frac{(1-\theta)\left[\|{\mbox{\boldmath$\psi$}}\|^{2}+2|{\mbox{\boldmath$\psi$}}^{\top}{\mbox{\boldmath$\gamma$}}|\right]+({\mbox{\boldmath$\psi$}}^{\top}{\mbox{\boldmath$\gamma$}})^{2}/\|{\mbox{\boldmath$\psi$}}\|^{2}}{\|{\mbox{\boldmath$\gamma$}}\|^{2}+\max_{1\leq i\leq N}\sigma_{i}^{2}}>1$

hold, then $\sup_{0\leq u\leq 1}|\hat{B}_{T,1}(u)|\stackrel{P}{\to}\infty$, as $T\to\infty$.

Roughly speaking, it is possible that the covariance might change on a subspace that is orthogonal to the first eigenvector (or more generally the first $K$ eigenvectors), and if this change is not sufficiently large, a test based on the first eigenvalue cannot have power to detect it. Condition (3.10) is sufficient to imply that this does not occur.

4. Finite Sample Performance

In order to demonstrate how the result in (3.4) is manifested in finite samples, we present here the results of a Monte Carlo simulation study involving several different data generating processes (DGP's) that follow (2.1). All simulations were carried out in the R programming language (cf. R Development Core Team (2010)). In order to compute the long run variance estimate $\hat{v}^{2}_{1,T}$ defined in (3.1), we used the “sandwich” package (cf. Zeileis (2006)), in particular the “kernHAC” function. The Parzen kernel with the corresponding bandwidth defined in Andrews (1991) was employed.

4.1. Empirical Size

We begin by presenting the results on the empirical size of the test for stability based on the largest eigenvalue by considering two examples of synthetic data generated according to model (2.2). We use the notation $Y_{i}\sim Y$ to denote that the random variables $Y_{i}$ are independent and identically distributed with distribution $Y$. Let $N_{i,t}(0,1)$, $i\geq 0$ and $t\in\mathbb{Z}$, denote iid standard normal random variables, and let $AR_{i}(1,p)$, $i\geq 0$, denote independent autoregressive processes of order one with parameter $p$ based on standard normal errors. We generated observations $X_{i,t}$ according to (2.2) and the DGP's

(IID): $\eta_{t}=N_{0,t}(0,1)$, $e_{i,t}=s_{i}N_{i,t}(0,1)$, $s_{i}\sim Unif(.8,1.2)$, $\gamma_{i}\sim N(0,1)$,
and
(AR-1): $\eta_{t}=AR_{0}(1,.5)$, $e_{i,t}=s_{i}AR_{i}(1,.5)$, $s_{i}\sim Unif(.8,1.2)$, $\gamma_{i}\sim N(0,1)$.

The purpose of choosing random parameters $s_{i}$, which define the standard deviations of the idiosyncratic errors, and $\gamma_{i}$ is twofold. Firstly, this forces Assumption 2.1 to hold. Secondly, this choice highlights that the methodology is relatively robust to variations in the parameter values.
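A minimal R sketch of these two data generating processes is given below; it is our own reconstruction under model (2.2) with $\mu_{i}=0$, and the function simulate_panel and its arguments are hypothetical.

```r
## Sketch of the size DGPs (IID) and (AR-1) under model (2.2) with mu_i = 0.
simulate_panel <- function(Tn, Nn, dgp = c("IID", "AR1"), rho = 0.5) {
  dgp   <- match.arg(dgp)
  s_i   <- runif(Nn, 0.8, 1.2)                       # idiosyncratic standard deviations
  gamma <- rnorm(Nn)                                 # factor loadings gamma_i ~ N(0,1)
  if (dgp == "IID") {
    eta <- rnorm(Tn)
    e   <- matrix(rnorm(Tn * Nn), Tn, Nn)
  } else {
    ## AR(1) common factor and idiosyncratic errors; eta rescaled so E eta_t^2 = 1
    eta <- as.numeric(arima.sim(list(ar = rho), n = Tn)) * sqrt(1 - rho^2)
    e   <- replicate(Nn, as.numeric(arima.sim(list(ar = rho), n = Tn)))
  }
  sweep(e, 2, s_i, `*`) + outer(eta, gamma)          # X_{i,t} = gamma_i eta_t + e_{i,t}
}
```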

Five simulated paths of the process $\hat{B}_{T,1}(u)$ are shown in the left hand panel of Figure 4.1 when $T=100$ and $N=20$, under (IID). The most notable feature is that each process always starts with a spike near the origin, i.e. $\hat{\lambda}_{i}(u)$ is much larger than $\hat{\lambda}_{i}(1)$ when $u$ is small. The reason for this is that, when $u$ is small, $\hat{\lambda}_{i}(u)$ is computed from a matrix of low rank, and hence it will tend to be closer to the norm of the observation vectors, which is on the order of $N$, than to the eigenvalue that is being estimated. This problem is ameliorated when $N$ decreases or $T$ increases, but it significantly affects the results for many practical values of $N$ and $T$.

Figure 4.1. The left panel illustrates five simulated paths of $\hat{B}_{T,1}(u)$ when $N=20$ and $T=100$ under (IID), and the right panel illustrates five simulated paths of $\tilde{B}_{T,1}(u)$ under the same conditions with $\epsilon=.05$.

DGP:                    IID                                 AR-1
              ϵ=.05           ϵ=.1             ϵ=.05            ϵ=.1
 N    T    10%   5%   1%   10%   5%   1%    10%   5%   1%    10%   5%   1%
10   50   18.1  11.2  3.8   8.8  4.9  1.8   26.7  18.4 10.0   24.7  17.9  8.4
    100    8.3   3.5   .7   9.2  3.6   .7   17.1  10.3  3.4    9.2   3.6   .7
    200    8.7   4.1   .7   8.6  4.3  1.0   11.7   5.7  2.0   10.4   5.1  1.6
20   50   18.6  12.3  5.5   9.5  4.8   .7   23.7  16.5  8.0   25.8  17.8  8.7
    100    8.5   3.6   .6   9.1  4.5   .3   14.9   9.4  3.4   14.9   9.0  3.7
    200    8.4   4.2   .6   8.8  3.3   .5   11.8   6.5  2.0   12.4   6.8  1.5
50   50   23.3  13.7  5.3  10.2  3.9   .7   24.8  17.3  8.8   24.4  18.6   .9
    100    8.8   3.5   .6   9.0  4.2  1.0   17.8  11.6  4.0   15.3   8.5  3.5
    200   10.0   5.0  1.3   8.9  3.8   .5   13.0   7.1  2.1   12.2   6.4  1.7

Table 4.1. Empirical sizes with nominal levels of 10%, 5%, and 1% in both the independent (IID) and dependent (AR-1) cases based on the process $\tilde{B}_{T,1}$, for trimming parameters $\epsilon=.05$ and $\epsilon=.1$.

In order to correct for this, we define

$\tilde{B}_{T,1}(u)=\begin{cases}0,&u\in[0,\epsilon],\\ \hat{B}_{T,1}(u)-\dfrac{1-u}{1-\epsilon}\hat{B}_{T,1}(\epsilon),&u\in(\epsilon,1],\end{cases}$

for a trimming parameter $\epsilon>0$. Five corresponding paths of $\tilde{B}_{T,1}(u)$ are illustrated in the right panel of Figure 4.1, with $\epsilon=.05$.
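In code, the trimming correction can be applied directly to the vector of values $\hat{B}_{T,1}(t/T)$ produced by the earlier sketch; the following R fragment is again illustrative, with the hypothetical helper trim_process.

```r
## Sketch of the trimming correction: B_hat contains \hat{B}_{T,1}(t/T), t = 1,...,T.
trim_process <- function(B_hat, eps = 0.05) {
  Tn <- length(B_hat)
  u  <- seq_len(Tn) / Tn
  B_eps <- B_hat[max(1, floor(eps * Tn))]            # \hat{B}_{T,1}(eps) on the grid
  ifelse(u <= eps, 0, B_hat - (1 - u) / (1 - eps) * B_eps)
}
## The trimmed test statistic is then max(abs(trim_process(B_hat, eps = 0.05))).
```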

Table 4.1 contains the percentages of simulations in which the test statistic $\sup_{0\leq u\leq 1}|\tilde{B}_{T,1}(u)|$ was larger than the 10%, 5%, and 1% critical values of the Kolmogorov distribution. The results can be summarized as follows:

  1. (1)

    When $T$ is small ($T=50$), the size of the test may be inflated by two sources. One of them is the spiked effect, and this is particularly pronounced when $\epsilon$ is small and $N$ is large. If the temporal dependence in the data is low, then increasing $\epsilon$ can allow the test to achieve good size even for small $T$ and relatively large $N$. However, strong temporal dependence can cause size inflation for small $T$ that cannot be accounted for by increasing $\epsilon$.

  2. (2)

    Another source of size inflation, present for larger values of $T$, may be attributed to estimating the variance under the alternative of a break in the mean. This may be improved by considering alternative variance estimation approaches, such as those developed in Kejriwal (2009).

  3. (3)

    The differences in the results between the IID and AR-1 DGP's were small for larger values of $T$ ($T=100,200$), indicating that the variance estimation is performing well.

  4. (4)

    For $T=200$, the empirical sizes are close to nominal in all cases.

4.2. Empirical Power

In order to study the power of our test under both the mean break and loading break alternatives, we considered two processes that satisfy (2.1) with $t^{*}=T\theta$, $\theta\in(0,1)$. Throughout the simulations below, we set $t^{*}=T/2$, i.e. the break was in the middle of the sample. We also studied the situation in which breaks occurred towards the endpoints of the sample. The results in those cases tended to be worse, but not more so than expected. We define the DGP's

MB($\delta$): $X_{i,t}=\delta_{i}I\{t\geq T/2\}+\gamma_{i}\eta_{t}+e_{i,t},\;\;1\leq i\leq N,\ 1\leq t\leq T$, where $\delta_{i}\sim\mbox{Unif}(-\delta,\delta)$,
and
LB($\Delta$): $X_{i,t}=(\gamma_{i}+\psi_{i}I\{t\geq T/2\})\eta_{t}+e_{i,t},\;\;1\leq i\leq N,\ 1\leq t\leq T$, where $\psi_{i}\sim N(0,\Delta^{2})$.

In each case we take the other terms in (2.1), i.e. the idiosyncratic errors, common factor, and factor loadings, to satisfy (AR-1). We let the parameters $\delta$ and $\Delta$ vary between 0 and 4 at increments of .5, and let $N=10,20,50$ and $T=50,100,200$. The results are displayed in terms of power curves in Figures 4.2 and 4.3 in the case of a mean break alternative (MB($\delta$)) and in Figures 4.4 and 4.5 in the case of breaks in the factor loadings (LB($\Delta$)) when the significance level of the test was fixed at 5%. We summarize the results as follows:

Figure 4.2. Power curves generated from data following MB($\delta$) for fixed $N$ and varying $T$. The horizontal axis measures $\delta$, and the vertical axis measures the empirical power when the significance level is fixed at 5%.
Figure 4.3. Power curves generated from data following MB($\delta$) for fixed $T$ and varying $N$. The horizontal axis measures $\delta$, and the vertical axis measures the empirical power when the significance level is fixed at 5%.
Figure 4.4. Power curves generated from data following LB($\Delta$) for fixed $N$ and varying $T$. The horizontal axis measures $\Delta$, and the vertical axis measures the empirical power when the significance level is fixed at 5%.
Figure 4.5. Power curves generated from data following LB($\Delta$) for fixed $T$ and varying $N$. The horizontal axis measures $\Delta$, and the vertical axis measures the empirical power when the significance level is fixed at 5%.

Mean Break:

  1. (1)

    In the case of a mean break, for each value of $T$ and $N$ that we considered there is a substantial gain in power for $\delta$ exceeding 1.5. We note that data generated according to (AR-1) have cross-sectional standard deviations of, on average, 1.6, and, when $\delta=2$, the average squared size of the change in the mean of each cross section is 1.33. Thus the test based on the largest eigenvalue appears to be quite sensitive to changes in the mean.

  2. (2)

    Due to the estimation of the variance under a mean break, the test exhibited monotonic power.

  3. (3)

    Increasing $T$ with fixed $N$ improved the empirical power, as expected, and the same was observed when $T$ was fixed and $N$ increased. The latter occurrence is likely attributable to the fact that as $N$ increases, changes in the mean occur in more cross sections, and the size is inflated in these cases due to the spiked effect.

Loading Break:

  1. (1)

    In the case of a break in the factor loadings, even smaller changes relative to the size of the standard deviation of the idiosyncratic errors ($\Delta=1$) resulted in dramatic increases in power.

  2. (2)

    We noticed that for smaller values of $T$ ($T=50$) the power seemed to level off for larger breaks in the common factors, and never reached more than 90%.

  3. (3)

    For larger $T$ ($T=100,200$), the power approached 1 at a much faster rate for breaks in the factor loadings, and this occurrence seemed to be independent of the value of $N$.

  4. (4)

    Increasing $N$ resulted in reduced power in this case, although the effects of changing $N$ were not particularly pronounced.

5. Application to U.S. Yield Curve Data

Following Yamamoto and Tanaka (2015), we consider an application of our methodology to test for structural breaks in the U.S. Treasury yield curve data considered in Gürkaynak et al. (2007), which is available at http://www.federalreserve.gov/econresdata, and which the authors graciously maintain. The data consists of yields for fixed interest securities with maturities between one and thirty years in one year increments ($N=30$). We studied a portion of this data set spanning from January 1st, 1990 to August 28th, 2015, which we further reduced from daily to monthly observations by considering only the data from the last day of each month. Figure 5.1 illustrates the yield curves corresponding to 1, 5, 10, and 30 year maturities.

Figure 5.1. Yield curves at a 1-month resolution between January, 1990 and August, 2015 corresponding to 1 year, 5 year, 10 year, and 30 year maturities.

In order to remove the effects of stochastic trends, and to allow for a comparison of our results to Yamamoto and Tanaka (2015), we first differenced each series. We applied the hypothesis test for stability of the largest eigenvalue based on $\sup_{0\leq u\leq 1}|\tilde{B}_{T,1}(u)|$ with trimming parameter $\epsilon=.05$ to sequential blocks of the first differenced data of length 10 years, corresponding to 120 monthly observations in each sample ($T=120$). The first block contained data spanning from January, 1990 to December, 1999, and the last block contained data spanning from September, 2005 to August, 2015, which constituted a total of 172 tests. The P-value from each test is plotted against the end date of the corresponding 10 year block in Figure 5.2.

Figure 5.2. P-values corresponding to 10 year blocks of the first differenced yield curve data. The vertical axis measures the magnitude of the P-value, and the horizontal axis indicates the concluding month of the 10 year block. P-values below the horizontal line are below .05.
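A hedged sketch of this rolling-window analysis in R is given below; it is our reconstruction, reusing the hypothetical helpers from the earlier sketches. Here `yields` is assumed to be a matrix of month-end yields with maturities in the columns, and the bandwidth value is illustrative.

```r
## Sketch of the rolling-window application: trimmed test on consecutive 120-month
## blocks of the first differenced yield curves.
rolling_pvalues <- function(yields, window = 120, eps = 0.05, h = 4) {
  dY <- diff(as.matrix(yields))                       # first differences remove trends
  n  <- nrow(dY)
  sapply(seq_len(n - window + 1), function(start) {
    block <- dY[start:(start + window - 1), , drop = FALSE]
    B_hat <- eigen_cusum_test(block, h = h)$B_hat     # \hat{B}_{T,1}, sketch of Section 3
    stat  <- max(abs(trim_process(B_hat, eps)))       # trimmed statistic of Section 4
    kolmogorov_pvalue(stat)
  })
}
```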

The most notable result of this analysis is the persistent instability of the largest eigenvalue evident in the samples that end in late 2008 to early 2009. This seems to correspond with the subprime crisis, which sparked what has been termed the “Great Recession”. The stability of the largest eigenvalue seems to return near the end of 2013. This may be indicative of the economic recovery, and provides a way of dating the end of the recession. The findings of structural breaks in the correlation structure of the yield curves during the 2007-2009 recession are consistent with those of Yamamoto and Tanaka (2015).

Also notable is the lack of persistent instability in relation to the 2001 economic recession. This illuminates a difference between the two recessions: The 2001 recession may be better modeled as a first order structural break, which is not as evident in the first differenced yield curve series, whilst the 2009 recession, which generated numerous policy changes and endured for a longer period, is manifested as a change in the largest eigenvalue.

6. Results for smaller eigenvalues

In this section, we provide analogous results to Theorems 2.1 and 2.2 for the smaller eigenvalues. Namely, we aim to establish the weak convergence of the $K$-dimensional process

${\bf A}_{N,T}(u)=(A_{N,T,1}(u),A_{N,T,2}(u),\ldots,A_{N,T,K}(u))^{\top},$

where for $1\leq i\leq K$,

$A_{N,T,i}(u)=T^{1/2}u(\hat{\lambda}_{i}(u)-\lambda_{i}),\;\;1/T\leq u\leq 1,\;\;\mbox{and}\;\;A_{N,T,i}(u)=0,\;\;0\leq u<1/T.$

Let

(6.1) $V_{1}=\sum_{\ell=-\infty}^{\infty}\mathrm{cov}(\eta_{0}^{2},\eta^{2}_{\ell}),$
(6.2) ${\bf V}_{2}=\left\{\sum_{s=-\infty}^{\infty}\lim_{T\to\infty}\sum_{k=1}^{N}{\mathfrak{e}}_{i}(k){\mathfrak{e}}_{j}(k)\,\mathrm{cov}(\eta_{0},\eta_{s})\,\mathrm{cov}(e_{k,0},e_{k,s}),\ 1\leq i,j\leq K\right\},$
(6.3) ${\bf V}_{3}=\left\{\sum_{s=-\infty}^{\infty}\lim_{T\to\infty}\left[\sum_{k=1}^{N}{\mathfrak{e}}_{i}^{2}(k){\mathfrak{e}}^{2}_{j}(k)\mathrm{cov}(e^{2}_{k,0},e^{2}_{k,s})+2\left(\sum_{k=1}^{N}{\mathfrak{e}}_{i}(k){\mathfrak{e}}_{j}(k)\mathrm{cov}(e_{k,0},e_{k,s})\right)^{2}-2\sum_{k=1}^{N}{\mathfrak{e}}_{i}^{2}(k){\mathfrak{e}}_{j}^{2}(k)(\mathrm{cov}(e_{k,0},e_{k,s}))^{2}\right],\ 1\leq i,j\leq K\right\}.$

We use the notation ${\bf V}_{2}=\{V_{2}(i,j),1\leq i,j\leq K\}$ and ${\bf V}_{3}=\{V_{3}(i,j),1\leq i,j\leq K\}$.

Remark 6.1.

If, for example, we assume that $r(s)=\mathrm{cov}(e_{k,0},e_{k,s})$ for all $1\leq k\leq N$, then ${\bf V}_{2}$ is a diagonal matrix with

$V_{2}(i,i)=\sum_{s=-\infty}^{\infty}\mathrm{cov}(\eta_{0},\eta_{s})r(s).$

The expression for ${\bf V}_{3}$ also simplifies since by the orthonormality of the ${\mathfrak{e}}_{i}$'s we have

$\sum_{k=1}^{N}{\mathfrak{e}}_{i}(k){\mathfrak{e}}_{j}(k)\mathrm{cov}(e_{k,0},e_{k,s})=r(s)I\{i=j\}.$

If we further assume that each of the sequences $\{e_{k,s},-\infty<s<\infty\}$ is Gaussian, then $\mathrm{cov}(e^{2}_{k,0},e^{2}_{k,s})=2r^{2}(s)$, and ${\bf V}_{3}$ also reduces to a diagonal matrix with $V_{3}(i,i)=2\sum_{s=-\infty}^{\infty}r^{2}(s)$.

Let

(6.4) $a_{i}=\lim_{T\to\infty}{\mathfrak{e}}_{i}^{\top}{\mbox{\boldmath$\gamma$}},\;\;\;1\leq i\leq K,$

and define ${\bf G}=\{G(i,j),1\leq i,j\leq K\}$ with $G(i,j)=a_{i}^{2}a_{j}^{2}V_{1}+4a_{i}a_{j}V_{2}(i,j)+V_{3}(i,j)$. Lemma 7.4 demonstrates that the limit in (6.4) is finite.

Theorem 6.1.

If $H_{0}$ and the conditions of Theorem 2.1 hold, and

(6.5) $\|{\mbox{\boldmath$\gamma$}}\|=O(1),\;\;\mbox{as}\;\;T\to\infty,$

then we have that ${\bf A}_{N,T}(u)$ converges weakly in ${\mathcal{D}}^{K}[0,1]$ to ${\bf W}_{{\bf G}}(u)$, where ${\bf W}_{{\bf G}}(u)$ is a $K$-dimensional Wiener process, i.e. ${\bf W}_{{\bf G}}(u)$ is Gaussian with $E{\bf W}_{{\bf G}}(u)={\bf 0}$ and $E{\bf W}_{{\bf G}}(u){\bf W}_{{\bf G}}^{\top}(u^{\prime})=\min(u,u^{\prime}){\bf G}$.

Remark 6.2.

If $\|{\mbox{\boldmath$\gamma$}}\|\to 0$, as $T\to\infty$, then $a_{i}=0$ according to Lemma 7.4. In this case the weak limit of ${\bf A}_{N,T}(u)$ is the $K$-dimensional Wiener process ${\bf W}_{{\bf V}_{3}}(u)$, since ${\bf G}={\bf V}_{3}$.

To state the next result we introduce the covariance matrix ${\bf H}=\{H(i,j),1\leq i,j\leq K\}$: $H(1,1)=V_{1}$, $H(1,i)=H(i,1)=a_{i}^{2}V_{1}$, and $H(i,j)=a_{i}^{2}a_{j}^{2}V_{1}+4a_{i}a_{j}V_{2}(i,j)+V_{3}(i,j)$, $2\leq i,j\leq K$.

Theorem 6.2.

If $H_{0}$ and the conditions of Theorem 2.1 hold, and

(6.6) $\|{\mbox{\boldmath$\gamma$}}\|\to\infty,\;\;\mbox{as}\;\;T\to\infty,$

then we have that $\left\{\|{\mbox{\boldmath$\gamma$}}\|^{-2}A_{N,T;1}(u),A_{N,T;i}(u),2\leq i\leq K\right\}$ converges weakly in ${\mathcal{D}}^{K}[0,1]$ to ${\bf W}_{{\bf H}}(u)$, where ${\bf W}_{{\bf H}}(u)$ is a $K$-dimensional Wiener process, i.e. ${\bf W}_{{\bf H}}(u)$ is Gaussian with $E{\bf W}_{{\bf H}}(u)={\bf 0}$ and $E{\bf W}_{{\bf H}}(u){\bf W}_{{\bf H}}^{\top}(u^{\prime})=\min(u,u^{\prime}){\bf H}$.

Remark 6.3.

We note that $\sigma_{1}^{2}$ defined in Theorem 2.1 coincides with $G(1,1)$ and with $H(1,1)\|{\mbox{\boldmath$\gamma$}}\|^{2}$ in the cases when $\|{\mbox{\boldmath$\gamma$}}\|=O(1)$ and $\|{\mbox{\boldmath$\gamma$}}\|\to\infty$ as $T\to\infty$, respectively, and so Theorems 6.1 and 6.2 imply Theorem 2.1.

Remark 6.4.

We show in Lemma 7.4 that in the case of (6.6), $\lambda_{1}$, the largest eigenvalue of ${\bf C}$, satisfies

$\left|\frac{\lambda_{1}}{\|{\mbox{\boldmath$\gamma$}}\|^{2}}-1\right|=O(1).$

Thus Theorem 6.2 yields that $\hat{\lambda}_{1}(u)/\|{\mbox{\boldmath$\gamma$}}\|^{2}\to 1$ in probability for all $u>0$.

Remark 6.5.

Theorems 2.1, 6.1 and 6.2 provide the limits of the weighted differences $T^{1/2}u(\hat{\lambda}_{i}(u)-\lambda_{i})=T^{1/2}(\tilde{\lambda}_{i}(u)-u\lambda_{i})$, $1\leq i\leq K$. If the conditions of Theorem 2.1 are satisfied but (2.6) is replaced with (2.7) as in Theorem 2.2, then $T^{1/2}(\hat{\lambda}_{i}(u)-\lambda_{i})$, $1\leq i\leq K$, converges weakly in ${\mathcal{D}}^{K}[c,1]$ to ${\bf W}_{{\bf G}}(u)/u$ for any $0<c\leq 1$, where ${\bf W}_{{\bf G}}(u)$ is defined in Theorem 6.1.

7. Technical Results

7.1. Proof of Theorems 2.1, 2.2, 6.1 and 6.2

Throughout these proofs we use terms of the form $c_{i,j}$ to denote unimportant numerical constants. We can assume without loss of generality that $E{\bf X}_{t}={\bf 0}$, and so we define

${\bf C}_{N,T}(u)=\frac{1}{T}\sum_{t=1}^{\lfloor Tu\rfloor}{\bf X}_{t}{\bf X}_{t}^{\top}.$
Lemma 7.1.

If (2.2) and Assumptions 2.1, 2.2, and 2.3 hold, then we have, as $T\to\infty$,

$\sup_{0\leq u\leq 1}\left\|\tilde{\bf C}_{N,T}(u)-{\bf C}_{N,T}(u)\right\|=O_{P}\left(\frac{N}{T}\right).$
Proof.

It is easy to see that

$\tilde{\bf C}_{N,T}(u)={\bf C}_{N,T}(u)-\bar{\bf X}_{T}\left(\frac{1}{T}\sum_{t=1}^{\lfloor Tu\rfloor}{\bf X}_{t}\right)^{\top}-\left(\frac{1}{T}\sum_{t=1}^{\lfloor Tu\rfloor}{\bf X}_{t}\right)\bar{\bf X}_{T}^{\top}+\bar{\bf X}_{T}\bar{\bf X}_{T}^{\top},$

and therefore

$\sup_{0\leq u\leq 1}\left\|\tilde{\bf C}_{N,T}(u)-{\bf C}_{N,T}(u)\right\|\leq 2\sup_{0\leq u\leq 1}\left\|\bar{\bf X}_{T}\left(\frac{1}{T}\sum_{t=1}^{\lfloor Tu\rfloor}{\bf X}_{t}\right)^{\top}\right\|+\left\|\bar{\bf X}_{T}\bar{\bf X}_{T}^{\top}\right\|\leq 3\sup_{0\leq u\leq 1}\left\|\bar{\bf X}_{T}\left(\frac{1}{T}\sum_{t=1}^{\lfloor Tu\rfloor}{\bf X}_{t}\right)^{\top}\right\|.$

Using assumption (2.2) we obtain that

$T^{2}\left\|\sum_{t=1}^{\lfloor Tu\rfloor}{\bf X}_{t}\bar{\bf X}_{T}^{\top}\right\|^{2}=\sum_{\ell=1}^{N}\sum_{p=1}^{N}\left(\gamma_{\ell}\sum_{t=1}^{T}\eta_{t}+\sum_{t=1}^{T}e_{\ell,t}\right)^{2}\left(\gamma_{p}\sum_{t=1}^{\lfloor Tu\rfloor}\eta_{t}+\sum_{t=1}^{\lfloor Tu\rfloor}e_{p,t}\right)^{2}$
$=\left(\sum_{\ell=1}^{N}\gamma_{\ell}^{2}\right)^{2}\left(\sum_{t=1}^{\lfloor Tu\rfloor}\eta_{t}\right)^{2}\left(\sum_{t=1}^{T}\eta_{t}\right)^{2}+2\sum_{\ell=1}^{N}\gamma_{\ell}^{2}\left(\sum_{t=1}^{T}\eta_{t}\right)^{2}\left(\sum_{v=1}^{\lfloor Tu\rfloor}\eta_{v}\right)\left(\sum_{s=1}^{\lfloor Tu\rfloor}\sum_{p=1}^{N}\gamma_{p}e_{p,s}\right)$
$\quad+\sum_{\ell=1}^{N}\gamma_{\ell}^{2}\left(\sum_{t=1}^{T}\eta_{t}\right)^{2}\sum_{p=1}^{N}\left(\sum_{t=1}^{\lfloor Tu\rfloor}e_{p,t}\right)^{2}+\sum_{p=1}^{N}\gamma_{p}^{2}\left(\sum_{t=1}^{\lfloor Tu\rfloor}\eta_{t}\right)^{2}\sum_{\ell=1}^{N}\left(\sum_{s=1}^{T}e_{\ell,s}\right)^{2}$
$\quad+2\sum_{\ell=1}^{N}\left(\sum_{s=1}^{T}e_{\ell,s}\right)^{2}\left(\sum_{s=1}^{\lfloor Tu\rfloor}\eta_{s}\right)\left(\sum_{s=1}^{\lfloor Tu\rfloor}\sum_{p=1}^{N}\gamma_{p}e_{p,s}\right)+\sum_{\ell=1}^{N}\left(\sum_{s=1}^{T}e_{\ell,s}\right)^{2}\sum_{p=1}^{N}\left(\sum_{s=1}^{\lfloor Tu\rfloor}e_{p,s}\right)^{2}$
$\quad+2\sum_{t=1}^{T}\eta_{t}\sum_{\ell=1}^{N}\gamma_{\ell}\left(\sum_{s=1}^{T}e_{\ell,s}\right)\sum_{p=1}^{N}\gamma_{p}^{2}\left(\sum_{v=1}^{\lfloor Tu\rfloor}\eta_{v}\right)^{2}+2\sum_{t=1}^{T}\eta_{t}\sum_{\ell=1}^{N}\gamma_{\ell}\left(\sum_{s=1}^{T}e_{\ell,s}\right)\sum_{p=1}^{N}\left(\sum_{s=1}^{\lfloor Tu\rfloor}e_{p,s}\right)^{2}$
$\quad+4\sum_{t=1}^{T}\eta_{t}\sum_{\ell=1}^{N}\left(\sum_{s=1}^{T}\gamma_{\ell}e_{\ell,s}\right)\sum_{z=1}^{\lfloor Tu\rfloor}\eta_{z}\sum_{p=1}^{N}\left(\sum_{s=1}^{\lfloor Tu\rfloor}\gamma_{p}e_{p,s}\right)$
$=R_{T,1}(u)+R_{T,2}(u)+\ldots+R_{T,9}(u).$

First we prove that

(7.1) $\sup_{0\leq u\leq 1}\left|\sum_{t=1}^{\lfloor Tu\rfloor}\eta_{t}\right|=O_{P}(T^{1/2}).$

It follows from Proposition 4 of Berkes et al. (2011) that under Assumption 2.3(a) we have for any $2<\kappa\leq 12$ that

(7.2) $E\left(\sum_{t=\lfloor Tv\rfloor}^{\lfloor Tu\rfloor}\eta_{t}\right)^{\kappa}\leq c_{1,1}(\lfloor Tu\rfloor-\lfloor Tv\rfloor)^{\kappa/2}\;\;\mbox{for all}\;\;0\leq v\leq u\leq 1,$

and therefore the maximal inequality of Móricz et al. (1982) implies (7.1). Next we show that

(7.3) $\sup_{0\leq u\leq 1}\left|\sum_{s=1}^{\lfloor Tu\rfloor}\sum_{p=1}^{N}\gamma_{p}e_{p,s}\right|=O_{P}(1)T^{1/2}\|{\mbox{\boldmath$\gamma$}}\|.$

Following the arguments leading to (7.2) one can verify that for any $2<\kappa\leq 12$,

(7.4) $E\left|\sum_{s=\lfloor Tv\rfloor}^{\lfloor Tu\rfloor}e_{p,s}\right|^{\kappa}\leq c_{1,2}(\lfloor Tu\rfloor-\lfloor Tv\rfloor)^{\kappa/2},\;\;\;0\leq v\leq u\leq 1,$

with some constant $c_{1,2}$ for all $1\leq p\leq N$. Hence for any $0\leq v<u\leq 1$ we have via Rosenthal's inequality (cf. Petrov (1995), p. 59) and (7.4) that

$E\left|\sum_{s=\lfloor Tv\rfloor}^{\lfloor Tu\rfloor}\sum_{p=1}^{N}\gamma_{p}e_{p,s}\right|^{\kappa}=E\left|\sum_{p=1}^{N}\sum_{s=\lfloor Tv\rfloor}^{\lfloor Tu\rfloor}\gamma_{p}e_{p,s}\right|^{\kappa}\leq c_{1,3}\left\{\sum_{p=1}^{N}|\gamma_{p}|^{\kappa}E\left|\sum_{s=\lfloor Tv\rfloor}^{\lfloor Tu\rfloor}e_{p,s}\right|^{\kappa}+\left(\sum_{p=1}^{N}\gamma_{p}^{2}E\left(\sum_{s=\lfloor Tv\rfloor}^{\lfloor Tu\rfloor}e_{p,s}\right)^{2}\right)^{\kappa/2}\right\}\leq c_{1,4}(\lfloor Tu\rfloor-\lfloor Tv\rfloor)^{\kappa/2}\left\{\sum_{p=1}^{N}|\gamma_{p}|^{\kappa}+\left(\sum_{p=1}^{N}\gamma_{p}^{2}\right)^{\kappa/2}\right\}.$

Using again the maximal inequality of Móricz et al. (1982) we conclude

$E\sup_{0\leq u\leq 1}\left|\sum_{s=1}^{\lfloor Tu\rfloor}\sum_{p=1}^{N}\gamma_{p}e_{p,s}\right|^{\kappa}\leq c_{1,5}T^{\kappa/2}\left\{\sum_{p=1}^{N}|\gamma_{p}|^{\kappa}+\left(\sum_{p=1}^{N}\gamma_{p}^{2}\right)^{\kappa/2}\right\}\leq c_{1,6}T^{\kappa/2}\left\|{\mbox{\boldmath$\gamma$}}\right\|^{\kappa},$

by Assumption 2.2. This completes the proof of (7.3).
Similarly to (7.3) we show that

(7.5) $\sup_{0\leq u\leq 1}\sum_{\ell=1}^{N}\left(\sum_{s=1}^{\lfloor Tu\rfloor}e_{\ell,s}\right)^{2}=O_{P}(NT).$

First we note

$E\sup_{0\leq u\leq 1}\sum_{\ell=1}^{N}\left(\sum_{s=1}^{\lfloor Tu\rfloor}e_{\ell,s}\right)^{2}\leq\sum_{\ell=1}^{N}E\sup_{0\leq u\leq 1}\left(\sum_{s=1}^{\lfloor Tu\rfloor}e_{\ell,s}\right)^{2},$

and by Jensen's inequality we have

$E\sup_{0\leq u\leq 1}\left(\sum_{s=1}^{\lfloor Tu\rfloor}e_{\ell,s}\right)^{2}\leq\left(E\sup_{0\leq u\leq 1}\left|\sum_{s=1}^{\lfloor Tu\rfloor}e_{\ell,s}\right|^{\kappa}\right)^{2/\kappa}.$

Using again Proposition 4 of Berkes et al. (2011) we get for all $0\leq v\leq u\leq 1$ that

$E\left|\sum_{s=\lfloor Tv\rfloor}^{\lfloor Tu\rfloor}e_{\ell,s}\right|^{\kappa}\leq c_{1,7}(\lfloor Tu\rfloor-\lfloor Tv\rfloor)^{\kappa/2},$

and therefore the maximal inequality of Móricz et al. (1982) yields

$\left(E\sup_{0\leq u\leq 1}\left|\sum_{s=1}^{\lfloor Tu\rfloor}e_{\ell,s}\right|^{\kappa}\right)^{2/\kappa}\leq c_{1,8}T.$

This completes the proof of (7.5).
The upper bounds in (7.1)–(7.5) imply

$\sup_{0\leq u\leq 1}|R_{T,i}(u)|=O_{P}((\|{\mbox{\boldmath$\gamma$}}\|^{4}+\|{\mbox{\boldmath$\gamma$}}\|^{3})T^{2}),\quad\mbox{if}\;\;i=1,2,7,$
$\sup_{0\leq u\leq 1}|R_{T,i}(u)|=O_{P}((\|{\mbox{\boldmath$\gamma$}}\|^{2}+\|{\mbox{\boldmath$\gamma$}}\|)NT^{2}),\quad\mbox{if}\;\;i=3,4,5,8,9,$

and

$\sup_{0\leq u\leq 1}|R_{T,6}(u)|=O_{P}(N^{2}T^{2}).$

Since Assumption 2.2 implies that $\|{\mbox{\boldmath$\gamma$}}\|\leq c_{1,9}N$, the proof of Lemma 7.1 is complete. ∎

Let $\bar{\lambda}_{1}(u)\geq\bar{\lambda}_{2}(u)\geq\ldots\geq\bar{\lambda}_{K}(u)$ denote the $K$ largest eigenvalues of ${\bf C}_{N,T}(u)$.

Lemma 7.2.

If (2.2) and Assumptions 2.1, 2.2, and 2.3 hold, then we have, as $T\to\infty$,

$\max_{1\leq i\leq K}\sup_{0\leq u\leq 1}|\tilde{\lambda}_{i}(u)-\bar{\lambda}_{i}(u)|=O_{P}\left(\frac{N}{T}\right).$
Proof.

It is well–known (cf. Dunford and Schwartz (1988)) that

$\max_{1\leq i\leq K}\sup_{0\leq u\leq 1}|\tilde{\lambda}_{i}(u)-\bar{\lambda}_{i}(u)|\leq c_{2,1}\sup_{0\leq u\leq 1}\|\tilde{{\bf C}}_{N,T}(u)-{\bf C}_{N,T}(u)\|$

with some absolute constant $c_{2,1}$, and therefore the result follows from Lemma 7.1. ∎

Let

$Z_{N,T;i}(u)=\sum_{\ell\neq i}^{N}\frac{1}{u(\lambda_{i}-\lambda_{\ell})}\left({\mathfrak{e}}_{i}^{\top}(\tilde{{\bf C}}_{N,T}(u)-u{\bf C}){\mathfrak{e}}_{\ell}\right)^{2},\;\;1\leq i\leq K.$
Lemma 7.3.

If (2.2) and Assumptions 2.1, 2.2, and 2.3 hold, then we have, as $T\to\infty$,

$\sup_{0\leq u\leq 1}\left|\tilde{\lambda}_{i}(u)-\frac{\lfloor Tu\rfloor}{T}\lambda_{i}-{\mathfrak{e}}_{i}^{\top}(\tilde{{\bf C}}_{N,T}(u)-u{\bf C}){\mathfrak{e}}_{i}-Z_{N,T;i}(u)\right|=O_{P}(NT^{-3/2}).$
Proof.

According to formula (5.17) of Hall and Hosseini–Nasab (2009) we have for all $1/T\leq u\leq 1$ that

$\left|\hat{\lambda}_{i}(u)-\lambda_{i}-{\mathfrak{e}}_{i}^{\top}(\hat{{\bf C}}_{N,T}(u)-{\bf C}){\mathfrak{e}}_{i}-\sum_{\ell\neq i}^{N}\frac{1}{\lambda_{i}-\lambda_{\ell}}\left({\mathfrak{e}}_{i}^{\top}(\hat{{\bf C}}_{N,T}(u)-{\bf C}){\mathfrak{e}}_{\ell}\right)^{2}\right|\leq c_{3,1}\hat{\Delta}^{3}(u),$

where

$\hat{\Delta}(u)=\max_{1\leq\ell\leq N}\left(\sum_{j=1}^{N}(\hat{C}_{N,T;j,\ell}(u)-C_{j,\ell})^{2}\right)^{1/2},$

and $\hat{C}_{N,T;j,\ell}(u)$ and $C_{j,\ell}$ denote the $(j,\ell)^{\mathrm{th}}$ elements of $\hat{{\bf C}}_{N,T}(u)$ and ${\bf C}$, respectively. Hence

$\sup_{0\leq u\leq 1}\left|\tilde{\lambda}_{i}(u)-\frac{\lfloor Tu\rfloor}{T}\lambda_{i}-{\mathfrak{e}}_{i}^{\top}(\tilde{{\bf C}}_{N,T}(u)-u{\bf C}){\mathfrak{e}}_{i}-Z_{N,T;i}(u)\right|\leq c_{3,1}\sup_{0\leq u\leq 1}\Delta^{3}(u),$

where

$\Delta(u)=\max_{1\leq\ell\leq N}R_{N,T;\ell}(u)$

with

$R_{N,T;\ell}(u)=\left(\sum_{j=1}^{N}\left(\tilde{C}_{N,T;j,\ell}(u)-\frac{\lfloor Tu\rfloor}{T}C_{j,\ell}\right)^{2}\right)^{1/2},$

where $\tilde{C}_{N,T;j,\ell}(u)$ denotes the $(j,\ell)^{\mathrm{th}}$ element of the matrix $\tilde{{\bf C}}_{N,T}(u)$. By inequality (2.30) in Petrov (1995, p. 58) we conclude

$R_{N,T;\ell}^{6}(u)\leq N^{2}\sum_{j=1}^{N}\left(\tilde{C}_{N,T;j,\ell}(u)-\frac{\lfloor Tu\rfloor}{T}C_{j,\ell}\right)^{6}$

and hence

$E\sup_{0\leq u\leq 1}\left(R_{N,T;\ell}(u)\right)^{6}\leq N^{2}\sum_{j=1}^{N}E\sup_{0\leq u\leq 1}\left(\tilde{C}_{N,T;j,\ell}(u)-\frac{\lfloor Tu\rfloor}{T}C_{j,\ell}\right)^{6}.$

Using the definitions of $\tilde{C}_{N,T;j,\ell}(u)$ and $C_{j,\ell}$ we write

$\left(\tilde{C}_{N,T;j,\ell}(u)-\frac{\lfloor Tu\rfloor}{T}C_{j,\ell}\right)^{6}=T^{-6}\left(\sum_{s=1}^{\lfloor Tu\rfloor}\left[\gamma_{\ell}\gamma_{j}(\eta_{s}^{2}-1)+\gamma_{\ell}\eta_{s}e_{j,s}+\gamma_{j}\eta_{s}e_{\ell,s}+e_{\ell,s}e_{j,s}-Ee_{\ell,s}e_{j,s}\right]\right)^{6}$
$\leq 4^{6}T^{-6}\left[\gamma_{\ell}^{6}\gamma_{j}^{6}\left(\sum_{s=1}^{\lfloor Tu\rfloor}(\eta_{s}^{2}-1)\right)^{6}+\gamma_{\ell}^{6}\left(\sum_{s=1}^{\lfloor Tu\rfloor}\eta_{s}e_{j,s}\right)^{6}+\gamma_{j}^{6}\left(\sum_{s=1}^{\lfloor Tu\rfloor}\eta_{s}e_{\ell,s}\right)^{6}+\left(\sum_{s=1}^{\lfloor Tu\rfloor}(e_{\ell,s}e_{j,s}-Ee_{\ell,s}e_{j,s})\right)^{6}\right].$

Utilizing Assumption 2.3(a), we obtain along the lines of (7.2) that $E(\sum_{s=1}^{t}(\eta_{s}^{2}-1))^{6}\leq c_{3,2}t^{3}$, so by the stationarity of $\eta_{t}^{2}$, $-\infty<t<\infty$, and the maximal inequality of Móricz et al. (1982) we obtain that

$E\sup_{0\leq u\leq 1}\left(\sum_{s=1}^{\lfloor Tu\rfloor}(\eta_{s}^{2}-1)\right)^{6}\leq c_{3,3}T^{3}.$

Similarly, for all $1\leq j,\ell\leq N$,

$E\sup_{0\leq u\leq 1}\left(\sum_{s=1}^{\lfloor Tu\rfloor}\eta_{s}e_{\ell,s}\right)^{6}\leq c_{3,4}T^{3}\;\;\;\mbox{and}\;\;\;E\sup_{0\leq u\leq 1}\left(\sum_{s=1}^{\lfloor Tu\rfloor}(e_{\ell,s}e_{j,s}-Ee_{\ell,s}e_{j,s})\right)^{6}\leq c_{3,5}T^{3}.$

Hence for all $1\leq\ell\leq N$ we have by Assumption 2.2 that

(7.6) $E\left(\sup_{0\leq u\leq 1}R_{N,T;\ell}(u)\right)^{6}\leq c_{3,6}T^{-3}N^{3}.$

Using (7.6) we conclude that for all $x>0$,

$P\left\{\sup_{0\leq u\leq 1}\max_{1\leq\ell\leq N}R_{N,T;\ell}(u)>xN^{2/3}T^{-1/2}\right\}\leq\sum_{\ell=1}^{N}P\left\{\sup_{0\leq u\leq 1}R_{N,T;\ell}(u)>xN^{2/3}T^{-1/2}\right\}\leq\sum_{\ell=1}^{N}\frac{T^{3}}{x^{6}N^{4}}E\left(\sup_{0\leq u\leq 1}R_{N,T;\ell}(u)\right)^{6}\leq\sum_{\ell=1}^{N}\frac{T^{3}}{x^{6}N^{4}}c_{3,6}T^{-3}N^{3}=\frac{c_{3,6}}{x^{6}},$

which shows that

(7.7) $\sup_{0\leq u\leq 1}\Delta^{3}(u)=O_{P}(N^{2}T^{-3/2})\;\;\;\mbox{and}\;\;\;\sup_{0\leq u\leq 1}u\hat{\Delta}^{3}(u)=O_{P}(N^{2}T^{-3/2}).$ ∎

Since ${\mathfrak{e}}_{1}$ is defined via (2.5) only up to a sign, we can assume without loss of generality that ${\mbox{\boldmath$\gamma$}}^{\top}{\mathfrak{e}}_{1}\geq 0$.

Lemma 7.4.

If (2.2) and Assumptions 2.1, 2.2, and 2.3 hold and \|{\mbox{\boldmath$\gamma$}}\|\to\infty, then we have

(7.8) 𝔢1𝜸𝜸=O(1𝜸),\left\|{\mathfrak{e}}_{1}-\frac{{\mbox{\boldmath$\gamma$}}}{\|{\mbox{\boldmath$\gamma$}}\|}\right\|=O\left(\frac{1}{\|{\mbox{\boldmath$\gamma$}}\|}\right),
(7.9) λ1𝜸21,\frac{\lambda_{1}}{\|{\mbox{\boldmath$\gamma$}}\|^{2}}\to 1,
(7.10) max2iN|𝜸𝔢i|c4,1with some constantc4,1,\max_{2\leq i\leq N}|{\mbox{\boldmath$\gamma$}}^{\top}{\mathfrak{e}}_{i}|\leq c_{4,1}\;\;\mbox{with some constant}\;\;c_{4,1},

and

(7.11) max2iNλic4,2with some constantc4,2.\max_{2\leq i\leq N}\lambda_{i}\leq c_{4,2}\;\;\mbox{with some constant}\;\;c_{4,2}.
Proof.

By (2.2) we have

𝐂=𝜸𝜸T+𝚲,{\bf C}={\mbox{\boldmath$\gamma$}}{\mbox{\boldmath$\gamma$}}^{T}+{\mbox{\boldmath$\Lambda$}},

where 𝚲\Lambda is the N×NN\times N diagonal matrix with σ12,σ22,,σN2\sigma_{1}^{2},\sigma_{2}^{2},\ldots,\sigma_{N}^{2} in the diagonal. We can write

𝔢1=α¯1𝜸𝜸+β¯1𝐫1,with someα¯10,whereα¯12+β¯12=1,𝜸𝐫1=0and𝐫1=1.{\mathfrak{e}}_{1}=\bar{\alpha}_{1}\frac{{\mbox{\boldmath$\gamma$}}}{\|{\mbox{\boldmath$\gamma$}}\|}+\bar{\beta}_{1}{\bf r}_{1},\;\;\mbox{with some}\;\;\bar{\alpha}_{1}\geq 0,\;\;\mbox{where}\;\;\bar{\alpha}_{1}^{2}+\bar{\beta}_{1}^{2}=1,{\mbox{\boldmath$\gamma$}}^{\top}{\bf r}_{1}=0\;\;\mbox{and}\;\;\|{\bf r}_{1}\|=1.

It follows from the definition of λ1\lambda_{1} and 𝔢1{\mathfrak{e}}_{1} that

λ1=𝔢1𝐂𝔢1𝜸2\displaystyle\lambda_{1}={\mathfrak{e}}_{1}^{\top}{\bf C}{\mathfrak{e}}_{1}\geq\|{\mbox{\boldmath$\gamma$}}\|^{2}

and

(7.12) \displaystyle{\mathfrak{e}}_{1}^{\top}{\bf C}{\mathfrak{e}}_{1}=\bar{\alpha}_{1}^{2}\|{\mbox{\boldmath$\gamma$}}\|^{2}+{\mathfrak{e}}_{1}^{\top}{\mbox{\boldmath$\Lambda$}}{\mathfrak{e}}_{1},\;\;\;{\mathfrak{e}}_{1}^{\top}{\mbox{\boldmath$\Lambda$}}{\mathfrak{e}}_{1}=\sum_{\ell=1}^{N}{\mathfrak{e}}_{1}^{2}(\ell)\sigma_{\ell}^{2}\leq c_{5},

where c5c_{5} is defined in Assumption 2.3(b). Thus we conclude

𝜸2α¯12𝜸2+c5.\|{\mbox{\boldmath$\gamma$}}\|^{2}\leq\bar{\alpha}_{1}^{2}\|{\mbox{\boldmath$\gamma$}}\|^{2}+c_{5}.

By assumption 𝜸𝔢10{\mbox{\boldmath$\gamma$}}^{\top}{\mathfrak{e}}_{1}\geq 0 and therefore 0α¯110\leq\bar{\alpha}_{1}\leq 1. Hence (1α¯1)21α¯12c5/𝜸2(1-\bar{\alpha}_{1})^{2}\leq 1-\bar{\alpha}_{1}^{2}\leq c_{5}/\|{\mbox{\boldmath$\gamma$}}\|^{2} and β¯12c5/𝜸2\bar{\beta}_{1}^{2}\leq c_{5}/\|{\mbox{\boldmath$\gamma$}}\|^{2}. Thus we get

(7.13) 𝔢1𝜸𝜸2=(1α¯1)2+β¯122c5𝜸2,\left\|{\mathfrak{e}}_{1}-\frac{{\mbox{\boldmath$\gamma$}}}{\|{\mbox{\boldmath$\gamma$}}\|}\right\|^{2}=(1-\bar{\alpha}_{1})^{2}+\bar{\beta}_{1}^{2}\leq\frac{2c_{5}}{\|{\mbox{\boldmath$\gamma$}}\|^{2}},

completing the proof of (7.8). Since \bar{\alpha}_{1}^{2}\geq 1-c_{5}/\|{\mbox{\boldmath$\gamma$}}\|^{2}, (7.9) follows from (7.12). For all i\geq 2 we have

|{\mbox{\boldmath$\gamma$}}^{\top}{\mathfrak{e}}_{i}|=\|{\mbox{\boldmath$\gamma$}}\|\left|\left(\frac{{\mbox{\boldmath$\gamma$}}}{\|{\mbox{\boldmath$\gamma$}}\|}-{\mathfrak{e}}_{1}\right)^{\top}{\mathfrak{e}}_{i}\right|\leq\|{\mbox{\boldmath$\gamma$}}\|\left\|\frac{{\mbox{\boldmath$\gamma$}}}{\|{\mbox{\boldmath$\gamma$}}\|}-{\mathfrak{e}}_{1}\right\|\leq(2c_{5})^{1/2}

by (7.13), which gives (7.10) with c_{4,1}=(2c_{5})^{1/2}. Since \lambda_{i}={\mathfrak{e}}_{i}^{\top}{\bf C}{\mathfrak{e}}_{i}=({\mathfrak{e}}_{i}^{\top}{\mbox{\boldmath$\gamma$}})^{2}+{\mathfrak{e}}_{i}^{\top}{\mbox{\boldmath$\Lambda$}}{\mathfrak{e}}_{i} and {\mathfrak{e}}_{i}^{\top}{\mbox{\boldmath$\Lambda$}}{\mathfrak{e}}_{i}=\sum_{\ell=1}^{N}{\mathfrak{e}}_{i}^{2}(\ell)\sigma^{2}_{\ell}\leq c_{5} by Assumption 2.3(b), the last claim of this lemma follows from (7.10). ∎
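
The perturbation bounds (7.8)–(7.11) can also be checked numerically. The following is a minimal sketch, not part of the original argument, in which the dimension, the loadings and the idiosyncratic variances are illustrative assumptions: it builds {\bf C}={\mbox{\boldmath$\gamma$}}{\mbox{\boldmath$\gamma$}}^{\top}+{\mbox{\boldmath$\Lambda$}} for a strong factor and compares the leading eigenvector with {\mbox{\boldmath$\gamma$}}/\|{\mbox{\boldmath$\gamma$}}\|.

import numpy as np

rng = np.random.default_rng(0)
N = 200
gamma = rng.uniform(1.0, 2.0, size=N)     # strong loadings: ||gamma||^2 grows with N
sigma2 = rng.uniform(0.5, 1.5, size=N)    # bounded idiosyncratic variances

C = np.outer(gamma, gamma) + np.diag(sigma2)
eigval, eigvec = np.linalg.eigh(C)        # eigenvalues in ascending order
e1 = eigvec[:, -1]
e1 = e1 * np.sign(e1 @ gamma)             # fix the sign so that gamma^T e_1 >= 0

g = np.linalg.norm(gamma)
print("||e_1 - gamma/||gamma|| || * ||gamma||:", np.linalg.norm(e1 - gamma / g) * g)  # bounded, cf. (7.8)
print("lambda_1 / ||gamma||^2               :", eigval[-1] / g**2)                    # close to 1, cf. (7.9)
print("max_{i>=2} lambda_i                  :", eigval[-2])                           # bounded, cf. (7.11)

In this example the scaled error \|{\mathfrak{e}}_{1}-{\mbox{\boldmath$\gamma$}}/\|{\mbox{\boldmath$\gamma$}}\|\|\cdot\|{\mbox{\boldmath$\gamma$}}\| stays of order one, \lambda_{1}/\|{\mbox{\boldmath$\gamma$}}\|^{2} is close to one, and the remaining eigenvalues stay below a fixed constant, in line with (7.8), (7.9) and (7.11).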

Lemma 7.5.

If (2.2), and Assumptions 2.1, 2.2, and 2.3 hold, then we have

(7.14) max1iKsup0u1|ZN,T;i(u)|=OP(N(logT)1/3T).\max_{1\leq i\leq K}\sup_{0\leq u\leq 1}|Z_{N,T;i}(u)|=O_{P}\left(\frac{N(\log T)^{1/3}}{T}\right).
Proof.

It follows from (2.5) that {\mathfrak{e}}_{i}^{\top}{\bf C}{\mathfrak{e}}_{\ell}=0 if i\neq\ell. Hence we get

𝔢i(𝐂~N,T(u)TuT𝐂)𝔢\displaystyle{\mathfrak{e}}_{i}^{\top}\left(\tilde{{\bf C}}_{N,T}(u)-\frac{\lfloor Tu\rfloor}{T}{\bf C}\right){\mathfrak{e}}_{\ell} =1Ts=1Tu𝔢i𝐗s𝐗s𝔢.\displaystyle=\frac{1}{T}\sum_{s=1}^{\lfloor Tu\rfloor}{\mathfrak{e}}_{i}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{\ell}.

First we assume that 𝜸=O(1)\|{\mbox{\boldmath$\gamma$}}\|=O(1). It follows from the definition of ZN,T;iZ_{N,T;i} that

|ZN,T;i(u)|\displaystyle|Z_{N,T;i}(u)| =|iN1u(λiλ)(𝔢i(𝐂~N,T(u)u𝐂)𝔢)2|\displaystyle=\left|\sum_{\ell\neq i}^{N}\frac{1}{u(\lambda_{i}-\lambda_{\ell})}\left({\mathfrak{e}}_{i}^{\top}(\tilde{\bf C}_{N,T}(u)-u{\bf C}){\mathfrak{e}}_{\ell}\right)^{2}\right|
\displaystyle\leq\frac{1}{c_{0}}\frac{1}{T}\sum_{\ell\neq i}^{N}\left(\frac{1}{(Tu)^{1/2}}\sum_{s=1}^{\lfloor Tu\rfloor}{\mathfrak{e}}_{i}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{\ell}\right)^{2},

where c0c_{0} is defined in Assumption 2.1. Let ρ>1\rho>1 and write with c=1/logρ+1c=\lfloor 1/\log\rho\rfloor+1

max1vTv1/2|s=1v𝔢i𝐗s𝐗s𝔢|\displaystyle\max_{1\leq v\leq T}v^{-1/2}\left|\sum_{s=1}^{v}{\mathfrak{e}}_{i}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{\ell}\right| max1kclogTmaxρk1<vρkv1/2|s=1v𝔢i𝐗s𝐗s𝔢|\displaystyle\leq\max_{1\leq k\leq c\log T}\max_{\rho^{k-1}<v\leq\rho^{k}}v^{-1/2}\left|\sum_{s=1}^{v}{\mathfrak{e}}_{i}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{\ell}\right|
max1kclogTρ(k1)/2max1vρk|s=1v𝔢i𝐗s𝐗s𝔢|.\displaystyle\leq\max_{1\leq k\leq c\log T}\rho^{-(k-1)/2}\max_{1\leq v\leq\rho^{k}}\left|\sum_{s=1}^{v}{\mathfrak{e}}_{i}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{\ell}\right|.

Thus we get for any x>0x>0 via Markov’s inequality that

(7.15) P\displaystyle P {max1vTv1/2|s=1v𝔢i𝐗s𝐗s𝔢|>x}\displaystyle\left\{\max_{1\leq v\leq T}v^{-1/2}\left|\sum_{s=1}^{v}{\mathfrak{e}}_{i}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{\ell}\right|>x\right\}
k=1clogTP{max1vρk|s=1v𝔢i𝐗s𝐗s𝔢|>xρ(k1)/2}\displaystyle\hskip 28.45274pt\leq\sum_{k=1}^{c\log T}P\left\{\max_{1\leq v\leq\rho^{k}}\left|\sum_{s=1}^{v}{\mathfrak{e}}_{i}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{\ell}\right|>x\rho^{(k-1)/2}\right\}
k=1clogTx6ρ3(k1)E(max1vρk|s=1v𝔢i𝐗s𝐗s𝔢|)6.\displaystyle\hskip 28.45274pt\leq\sum_{k=1}^{c\log T}x^{-6}\rho^{-3(k-1)}E\left(\max_{1\leq v\leq\rho^{k}}\left|\sum_{s=1}^{v}{\mathfrak{e}}_{i}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{\ell}\right|\right)^{6}.

Using (2.2) we obtain with 𝔢i=(𝔢i(1),𝔢i(2),,𝔢i(N)){\mathfrak{e}}_{i}=({\mathfrak{e}}_{i}(1),{\mathfrak{e}}_{i}(2),\ldots,{\mathfrak{e}}_{i}(N))^{\top} that

𝔢i𝐗s𝐗s𝔢=\displaystyle{\mathfrak{e}}_{i}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{\ell}= k=1Nγk𝔢i(k)n=1Nγn𝔢(n)(ηs21)+k=1Nγk𝔢i(k)ηsn=1Nen,s𝔢(n)\displaystyle\sum_{k=1}^{N}\gamma_{k}{\mathfrak{e}}_{i}(k)\sum_{n=1}^{N}\gamma_{n}{\mathfrak{e}}_{\ell}(n)(\eta^{2}_{s}-1)+\sum_{k=1}^{N}\gamma_{k}{\mathfrak{e}}_{i}(k)\eta_{s}\sum_{n=1}^{N}e_{n,s}{\mathfrak{e}}_{\ell}(n)
+n=1Nγn𝔢(n)ηsk=1Nek,s𝔢i(k)+n=1Nk=1N(ek,sen,sEek,sen,s)𝔢i(k)𝔢(n),\displaystyle+\sum_{n=1}^{N}\gamma_{n}{\mathfrak{e}}_{\ell}(n)\eta_{s}\sum_{k=1}^{N}e_{k,s}{\mathfrak{e}}_{i}(k)+\sum_{n=1}^{N}\sum_{k=1}^{N}(e_{k,s}e_{n,s}-Ee_{k,s}e_{n,s}){\mathfrak{e}}_{i}(k){\mathfrak{e}}_{\ell}(n),

since for ii\neq\ell we have E𝔢i𝐗s𝐗s𝔢=𝔢i𝐂𝔢=0E{\mathfrak{e}}_{i}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{\ell}={\mathfrak{e}}_{i}^{\top}{\bf C}{\mathfrak{e}}_{\ell}=0. Clearly, on account of 𝔢i=1\|{\mathfrak{e}}_{i}\|=1, the Cauchy–Schwarz inequality implies

|k=1Nγk𝔢i(k)|𝜸.\displaystyle\left|\sum_{k=1}^{N}\gamma_{k}{\mathfrak{e}}_{i}(k)\right|\leq\|{\mbox{\boldmath$\gamma$}}\|.

Following the proof of (7.2), we get from Assumption 2.3(a) that

(7.16) \displaystyle E\left|\sum_{k=1}^{N}\gamma_{k}{\mathfrak{e}}_{i}(k)\sum_{m=1}^{N}\gamma_{m}{\mathfrak{e}}_{\ell}(m)\sum_{s=1}^{v}(\eta^{2}_{s}-1)\right|^{6}\leq c_{5,1}v^{3}\|{\mbox{\boldmath$\gamma$}}\|^{12}.

Let

τs=ηsn=1Nen,s𝔢(n)andτs(m)=ηs(m)n=1Nen,s(m)𝔢(n),\tau_{s}=\eta_{s}\sum_{n=1}^{N}e_{n,s}{\mathfrak{e}}_{\ell}(n)\;\;\;\mbox{and}\;\;\;\tau_{s}^{(m)}=\eta_{s}^{(m)}\sum_{n=1}^{N}e_{n,s}^{(m)}{\mathfrak{e}}_{\ell}(n),

where ηs(m)\eta_{s}^{(m)} and en,s(m)e_{n,s}^{(m)} are defined in Assumption 2.3(a) and Assumption 2.3(b), respectively. By independence we have

E\displaystyle E |τ0τ0(m)|6\displaystyle\left|\tau_{0}-\tau_{0}^{(m)}\right|^{6}
26E|η0η0(m)|6E|n=1Nen,0𝔢(n)|6+26E|η0(m)|6E|n=1N(en,0en,0(m))𝔢(n)|6.\displaystyle\leq 2^{6}E\left|\eta_{0}-\eta_{0}^{(m)}\right|^{6}E\left|\sum_{n=1}^{N}e_{n,0}{\mathfrak{e}}_{\ell}(n)\right|^{6}+2^{6}E\left|\eta_{0}^{(m)}\right|^{6}E\left|\sum_{n=1}^{N}(e_{n,0}-e_{n,0}^{(m)}){\mathfrak{e}}_{\ell}(n)\right|^{6}.

By the independence of the variables en,0,1nNe_{n,0},1\leq n\leq N and the Rosenthal inequality (cf. Petrov (1995)) we conclude

E|n=1Nen,0𝔢(n)|6\displaystyle E\left|\sum_{n=1}^{N}e_{n,0}{\mathfrak{e}}_{\ell}(n)\right|^{6} c5,2{n=1NE|en,0|6|𝔢(n)|6+(n=1NEen,02𝔢2(n))3}\displaystyle\leq c_{5,2}\left\{\sum_{n=1}^{N}E|e_{n,0}|^{6}|{\mathfrak{e}}_{\ell}(n)|^{6}+\left(\sum_{n=1}^{N}Ee_{n,0}^{2}{\mathfrak{e}}_{\ell}^{2}(n)\right)^{3}\right\}
c5,3sup1n<Een,06\displaystyle\leq c_{5,3}\sup_{1\leq n<\infty}Ee^{6}_{n,0}
c5,4,\displaystyle\leq c_{5,4},

where c5,4c_{5,4} is a constant, on account of Assumption 2.3(b) and 𝔢=1\|{\mathfrak{e}}_{\ell}\|=1. Due to the independence of en,0en,0(m)e_{n,0}-e_{n,0}^{(m)} and er,0er,0(m)e_{r,0}-e_{r,0}^{(m)}, if nrn\neq r, we can apply again the Rosenthal inequality to get

E\displaystyle E |n=1N(en,0en,0(m))𝔢(n)|6\displaystyle\left|\sum_{n=1}^{N}(e_{n,0}-e_{n,0}^{(m)}){\mathfrak{e}}_{\ell}(n)\right|^{6}
c5,5{n=1NE|en,0en,0(m)|6|𝔢(n)|6+(n=1NE(en,0en,0(m))2𝔢2(n))3}\displaystyle\leq c_{5,5}\left\{\sum_{n=1}^{N}E|e_{n,0}-e_{n,0}^{(m)}|^{6}|{\mathfrak{e}}_{\ell}(n)|^{6}+\left(\sum_{n=1}^{N}E(e_{n,0}-e_{n,0}^{(m)})^{2}{\mathfrak{e}}_{\ell}^{2}(n)\right)^{3}\right\}
c5,6m6α,\displaystyle\leq c_{5,6}m^{-6\alpha},

resulting in

(7.17) E|τ0τ0(m)|6c5,7m6α.\displaystyle E\left|\tau_{0}-\tau_{0}^{(m)}\right|^{6}\leq c_{5,7}m^{-6\alpha}.

Hence the moment inequality in Berkes et al.  (2011) yields

(7.18) E|s=1vτs|6c5,8v3.E\left|\sum_{s=1}^{v}\tau_{s}\right|^{6}\leq c_{5,8}v^{3}.

Similarly to (7.18) we have

(7.19) E|s=1vηsk=1Nek,s𝔢i(k)|6c5,9v3.E\left|\sum_{s=1}^{v}\eta_{s}\sum_{k=1}^{N}e_{k,s}{\mathfrak{e}}_{i}(k)\right|^{6}\leq c_{5,9}v^{3}.

Let

τ¯s=n=1Nk=1N(ek,sen,sEek,sen,s)𝔢i(k)𝔢(n)=n=1Nen,s𝔢(n)k=1Nek,s𝔢i(k)n=1NEen,s2𝔢i(n)𝔢(n)\bar{\tau}_{s}=\sum_{n=1}^{N}\sum_{k=1}^{N}(e_{k,s}e_{n,s}-Ee_{k,s}e_{n,s}){\mathfrak{e}}_{i}(k){\mathfrak{e}}_{\ell}(n)=\sum_{n=1}^{N}e_{n,s}{\mathfrak{e}}_{\ell}(n)\sum_{k=1}^{N}e_{k,s}{\mathfrak{e}}_{i}(k)-\sum_{n=1}^{N}Ee_{n,s}^{2}{\mathfrak{e}}_{i}(n){\mathfrak{e}}_{\ell}(n)

and

τ¯s(m)\displaystyle\bar{\tau}_{s}^{(m)} =n=1Nk=1N(ek,s(m)en,s(m)Eek,sen,s)𝔢i(k)𝔢(n)\displaystyle=\sum_{n=1}^{N}\sum_{k=1}^{N}(e_{k,s}^{(m)}e_{n,s}^{(m)}-Ee_{k,s}e_{n,s}){\mathfrak{e}}_{i}(k){\mathfrak{e}}_{\ell}(n)
=n=1Nen,s(m)𝔢(n)k=1Nek,s(m)𝔢i(k)n=1NEen,s2𝔢i(n)𝔢(n),\displaystyle=\sum_{n=1}^{N}e_{n,s}^{(m)}{\mathfrak{e}}_{\ell}(n)\sum_{k=1}^{N}e_{k,s}^{(m)}{\mathfrak{e}}_{i}(k)-\sum_{n=1}^{N}Ee_{n,s}^{2}{\mathfrak{e}}_{i}(n){\mathfrak{e}}_{\ell}(n),

where e_{n,s}^{(m)} is defined in Assumption 2.3(b). Clearly,

|n=1NEen,s2𝔢i(n)𝔢(n)|sup1n<Een,02,\left|\sum_{n=1}^{N}Ee_{n,s}^{2}{\mathfrak{e}}_{i}(n){\mathfrak{e}}_{\ell}(n)\right|\leq\sup_{1\leq n<\infty}Ee_{n,0}^{2},

and

\displaystyle\bar{\tau}_{s}-\bar{\tau}_{s}^{(m)}=\left(\sum_{n=1}^{N}(e_{n,s}-e_{n,s}^{(m)}){\mathfrak{e}}_{\ell}(n)\right)\sum_{k=1}^{N}e_{k,s}{\mathfrak{e}}_{i}(k)+\left(\sum_{k=1}^{N}(e_{k,s}-e_{k,s}^{(m)}){\mathfrak{e}}_{i}(k)\right)\sum_{n=1}^{N}e_{n,s}^{(m)}{\mathfrak{e}}_{\ell}(n).

Thus we get by the Cauchy–Schwarz inequality that

\displaystyle E|\bar{\tau}_{0}-\bar{\tau}_{0}^{(m)}|^{6}\leq 2^{6}\left\{\left(E\left|\sum_{n=1}^{N}(e_{n,0}-e_{n,0}^{(m)}){\mathfrak{e}}_{\ell}(n)\right|^{12}E\left|\sum_{k=1}^{N}e_{k,0}{\mathfrak{e}}_{i}(k)\right|^{12}\right)^{1/2}\right.
\displaystyle\hskip 28.45274pt\left.+\left(E\left|\sum_{k=1}^{N}(e_{k,0}-e_{k,0}^{(m)}){\mathfrak{e}}_{i}(k)\right|^{12}E\left|\sum_{n=1}^{N}e_{n,0}^{(m)}{\mathfrak{e}}_{\ell}(n)\right|^{12}\right)^{1/2}\right\}.

Using again Rosenthal’s and Jensen’s inequalities, we obtain that

\displaystyle E\left|\sum_{n=1}^{N}(e_{n,0}-e_{n,0}^{(m)}){\mathfrak{e}}_{\ell}(n)\right|^{12}\leq c_{5,10}\left\{\sum_{n=1}^{N}E|e_{n,0}-e_{n,0}^{(m)}|^{12}|{\mathfrak{e}}_{\ell}(n)|^{12}\right.
\displaystyle\hskip 28.45274pt\left.+\left(\sum_{n=1}^{N}E(e_{n,0}-e_{n,0}^{(m)})^{2}{\mathfrak{e}}_{\ell}^{2}(n)\right)^{6}\right\}
\displaystyle\leq c_{5,11}m^{-12\alpha},

and similarly

E|k=1Nek,0𝔢i(k)|12\displaystyle E\left|\sum_{k=1}^{N}e_{k,0}{\mathfrak{e}}_{i}(k)\right|^{12} c5,12{k=1NE|ek,0|12|𝔢i(k)|12+(k=1NEek,02𝔢i2(k))6}\displaystyle\leq c_{5,12}\left\{\sum_{k=1}^{N}E|e_{k,0}|^{12}|{\mathfrak{e}}_{i}(k)|^{12}+\left(\sum_{k=1}^{N}Ee_{k,0}^{2}{\mathfrak{e}}_{i}^{2}(k)\right)^{6}\right\}
2c5,12sup1k<E|ek,0|12.\displaystyle\leq 2c_{5,12}\sup_{1\leq k<\infty}E|e_{k,0}|^{12}.

Thus we have

(7.20) E|τ¯0τ¯0(m)|6c5,13m6α,E|\bar{\tau}_{0}-\bar{\tau}_{0}^{(m)}|^{6}\leq c_{5,13}m^{-6\alpha},

and therefore Proposition 4 of Berkes et al. (2011) implies

(7.21) E|s=1vτ¯s|6c5,14v3.E\left|\sum_{s=1}^{v}\bar{\tau}_{s}\right|^{6}\leq c_{5,14}v^{3}.

Putting together (7.16)–(7.21) we conclude

(7.22) E|s=1v𝔢i𝐗s𝐗s𝔢|6c5,15v3(1+𝜸6+𝜸12).E\left|\sum_{s=1}^{v}{\mathfrak{e}}_{i}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{\ell}\right|^{6}\leq c_{5,15}v^{3}(1+\|{\mbox{\boldmath$\gamma$}}\|^{6}+\|{\mbox{\boldmath$\gamma$}}\|^{12}).

Since 𝔢i𝐗s𝐗s𝔢,<s<{\mathfrak{e}}_{i}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{\ell},-\infty<s<\infty is a stationary sequence, (7.22) and the maximal inequality of Móricz et al. (1982) imply

(7.23) Emax1vz|s=1v𝔢i𝐗s𝐗s𝔢|6c5,16z3(1+𝜸6+𝜸12).\displaystyle E\max_{1\leq v\leq z}\left|\sum_{s=1}^{v}{\mathfrak{e}}_{i}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{\ell}\right|^{6}\leq c_{5,16}z^{3}(1+\|{\mbox{\boldmath$\gamma$}}\|^{6}+\|{\mbox{\boldmath$\gamma$}}\|^{12}).

Now we use (7.15) with x=u(logT)1/6x=u(\log T)^{1/6} resulting in

P{max1vTv1/2|s=1v𝔢i𝐗s𝐗s𝔢|>u(logT)1/6}c5,17u6,\displaystyle P\left\{\max_{1\leq v\leq T}v^{-1/2}\left|\sum_{s=1}^{v}{\mathfrak{e}}_{i}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{\ell}\right|>u(\log T)^{1/6}\right\}\leq c_{5,17}u^{-6},

implying

E(max1vTv1/2s=1v𝔢i𝐗s𝐗s𝔢)2c5,18(logT)1/3.E\left(\max_{1\leq v\leq T}v^{-1/2}\sum_{s=1}^{v}{\mathfrak{e}}_{i}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{\ell}\right)^{2}\leq c_{5,18}(\log T)^{1/3}.

This completes the proof of (7.14) in the case \|{\mbox{\boldmath$\gamma$}}\|=O(1).

Next we assume that \|{\mbox{\boldmath$\gamma$}}\|\to\infty. It is easy to see that for 2\leq i\leq K

\displaystyle|Z_{N,T;i}(u)|\leq\frac{1}{T}\left|\sum_{\ell\neq i,\ell\neq 1}^{N}\frac{1}{\lambda_{i}-\lambda_{\ell}}\left(\frac{1}{(Tu)^{1/2}}\sum_{s=1}^{\lfloor Tu\rfloor}{\mathfrak{e}}_{i}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{\ell}\right)^{2}\right|
\displaystyle\hskip 28.45274pt+\frac{1}{T}\frac{1}{\lambda_{1}-\lambda_{2}}\left(\frac{1}{(Tu)^{1/2}}\sum_{s=1}^{\lfloor Tu\rfloor}{\mathfrak{e}}_{i}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{1}\right)^{2}.

If 2iK2\leq i\leq K, then the proof of (7.22) shows that

i,1N(1(Tu)1/2s=1Tu𝔢i𝐗s𝐗s𝔢)2=OP(N(logT)1/3),\sum_{\ell\neq i,\ell\neq 1}^{N}\left(\frac{1}{(Tu)^{1/2}}\sum_{s=1}^{\lfloor Tu\rfloor}{\mathfrak{e}}_{i}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{\ell}\right)^{2}=O_{P}(N(\log T)^{1/3}),

and therefore by Assumption 2.1 for any 2iK2\leq i\leq K we have

|i,11λiλ(1(Tu)1/2s=1Tu𝔢i𝐗s𝐗s𝔢)2|=OP(N(logT)1/3).\displaystyle\left|\sum_{\ell\neq i,\ell\neq 1}\frac{1}{\lambda_{i}-\lambda_{\ell}}\left(\frac{1}{(Tu)^{1/2}}\sum_{s=1}^{\lfloor Tu\rfloor}{\mathfrak{e}}_{i}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{\ell}^{\top}\right)^{2}\right|=O_{P}(N(\log T)^{1/3}).

By (7.21) we have along the lines of the proof of (7.15)

(7.24) Esup1vT1v\displaystyle E\sup_{1\leq v\leq T}\frac{1}{v} (s=1v𝔢i𝐗s𝐗s𝔢𝜸𝔢i𝜸𝔢s=1v(ηs21)𝜸𝔢is=1vn=1Nen,s𝔢(n)\displaystyle\left(\sum_{s=1}^{v}{\mathfrak{e}}_{i}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{\ell}-{\mbox{\boldmath$\gamma$}}^{\top}{\mathfrak{e}}_{i}{\mbox{\boldmath$\gamma$}}^{\top}{\mathfrak{e}}_{\ell}\sum_{s=1}^{v}(\eta_{s}^{2}-1)-{\mbox{\boldmath$\gamma$}}^{\top}{\mathfrak{e}}_{i}\sum_{s=1}^{v}\sum_{n=1}^{N}e_{n,s}{\mathfrak{e}}_{\ell}(n)\right.
𝜸𝔢s=1vk=1Nek,s𝔢i(k))2c5,19(logT)1/3,\displaystyle\hskip 56.9055pt\left.-{\mbox{\boldmath$\gamma$}}^{\top}{\mathfrak{e}}_{\ell}\sum_{s=1}^{v}\sum_{k=1}^{N}e_{k,s}{\mathfrak{e}}_{i}(k)\right)^{2}\leq c_{5,19}(\log T)^{1/3},

where in the last step we used (7.10). Also, (7.18) and (7.19) imply via the maximal inequality in Móricz et al. (1982) that

(7.25) Esup1vT(1vs=1v(ηs21))2c5,20(logT)1/3,E\sup_{1\leq v\leq T}\left(\frac{1}{v}\sum_{s=1}^{v}(\eta_{s}^{2}-1)\right)^{2}\leq c_{5,20}(\log T)^{1/3},

and

(7.26) Esup1vT1v(s=1vk=1Nek,s𝔢i(k))2c5,21(logT)1/3.E\sup_{1\leq v\leq T}\frac{1}{v}\left(\sum_{s=1}^{v}\sum_{k=1}^{N}e_{k,s}{\mathfrak{e}}_{i}(k)\right)^{2}\leq c_{5,21}(\log T)^{1/3}.

Using now (7.25) and (7.26) we conclude that

1λ1λ2(1(Tu)1/2s=1Tu𝔢i𝐗s𝐗s𝔢1)2=(𝔢1𝜸)2λ1λ2OP((logT)1/3).\displaystyle\frac{1}{\lambda_{1}-\lambda_{2}}\left(\frac{1}{(Tu)^{1/2}}\sum_{s=1}^{\lfloor Tu\rfloor}{\mathfrak{e}}_{i}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{1}\right)^{2}=\frac{({\mathfrak{e}}_{1}^{\top}{\mbox{\boldmath$\gamma$}})^{2}}{\lambda_{1}-\lambda_{2}}O_{P}((\log T)^{1/3}).

Since by Lemma 7.4 we have that (𝔢1𝜸)2/(λ1λ2)=O(1)({\mathfrak{e}}_{1}^{\top}{\mbox{\boldmath$\gamma$}})^{2}/(\lambda_{1}-\lambda_{2})=O(1), the proof of (7.14) is complete when 2iK2\leq i\leq K. It is easy to see that by (7.24) and Lemma 7.4

sup0u1|ZN,T;1(u)|\displaystyle\sup_{0\leq u\leq 1}|Z_{N,T;1}(u)| 1T1λ1λ2sup0u1=2N(1(Tu)1/2s=1Tu𝔢1𝐗s𝐗s𝔢)2\displaystyle\leq\frac{1}{T}\frac{1}{\lambda_{1}-\lambda_{2}}\sup_{0\leq u\leq 1}\sum_{\ell=2}^{N}\left(\frac{1}{(Tu)^{1/2}}\sum_{s=1}^{\lfloor Tu\rfloor}{\mathfrak{e}}_{1}^{\top}{\bf X}_{s}{\bf X}_{s}^{\top}{\mathfrak{e}}_{\ell}\right)^{2}
\displaystyle=\frac{1}{T}\frac{N}{\lambda_{1}-\lambda_{2}}\left(O_{P}((\log T)^{1/3})+({\mathfrak{e}}_{1}^{\top}{\mbox{\boldmath$\gamma$}})^{2}E\max_{1\leq v\leq T}\left(v^{-1/2}\sum_{s=1}^{v}(\eta_{s}^{2}-1)\right)^{2}\right.
\displaystyle\hskip 28.45274pt\left.+E\max_{1\leq v\leq T}\max_{2\leq i\leq N}\left(v^{-1/2}\sum_{s=1}^{v}\sum_{k=1}^{N}e_{k,s}{\mathfrak{e}}_{i}(k)\right)^{2}\right)
\displaystyle=O_{P}\left(\frac{({\mathfrak{e}}_{1}^{\top}{\mbox{\boldmath$\gamma$}})^{2}}{\lambda_{1}-\lambda_{2}}\frac{N(\log T)^{1/3}}{T}\right)

on account of (7.25) and (7.26). According to Lemma 7.4 we have that ({\mathfrak{e}}_{1}^{\top}{\mbox{\boldmath$\gamma$}})^{2}/(\lambda_{1}-\lambda_{2})=O(1), completing the proof of Lemma 7.5. ∎

Using the definition of 𝐂~N,T(u)\tilde{{\bf C}}_{N,T}(u) and (2.2) we get for any 1iK1\leq i\leq K,

T𝔢i(𝐂~N,T(u)(Tu/T)𝐂)𝔢i=(𝔢i𝜸)2\displaystyle T{\mathfrak{e}}_{i}^{\top}(\tilde{{\bf C}}_{N,T}(u)-(\lfloor Tu\rfloor/T){\bf C}){\mathfrak{e}}_{i}=({\mathfrak{e}}_{i}^{\top}{\mbox{\boldmath$\gamma$}})^{2} t=1Tu(ηt21)+2𝔢iT𝜸t=1Tuηt=1N𝔢i()e,t\displaystyle\sum_{t=1}^{\lfloor Tu\rfloor}(\eta_{t}^{2}-1)+2{\mathfrak{e}}_{i}^{T}{\mbox{\boldmath$\gamma$}}\sum_{t=1}^{\lfloor Tu\rfloor}\eta_{t}\sum_{\ell=1}^{N}{\mathfrak{e}}_{i}(\ell)e_{\ell,t}
+t=1Tu(=1N𝔢i()e,t)2Tu=1N𝔢i2()σ2.\displaystyle+\sum_{t=1}^{\lfloor Tu\rfloor}\left(\sum_{\ell=1}^{N}{\mathfrak{e}}_{i}(\ell)e_{\ell,t}\right)^{2}-\lfloor Tu\rfloor\sum_{\ell=1}^{N}{\mathfrak{e}}_{i}^{2}(\ell)\sigma^{2}_{\ell}.

Let

DN,T(u)=1T1/2t=1Tu(ηt21),FN,T;i(u)=1T1/2t=1Tuηt=1N𝔢i()e,t, 1iK,D_{N,T}(u)=\frac{1}{T^{1/2}}\sum_{t=1}^{\lfloor Tu\rfloor}(\eta_{t}^{2}-1),\;\;\;F_{N,T;i}(u)=\frac{1}{T^{1/2}}\sum_{t=1}^{\lfloor Tu\rfloor}\eta_{t}\sum_{\ell=1}^{N}{\mathfrak{e}}_{i}(\ell)e_{\ell,t},\;1\leq i\leq K,

and

GN,T;i(u)=1T1/2{t=1Tu(=1N𝔢i()e,t)2Tu=1N𝔢i2()σ2}, 1iK.G_{N,T;i}(u)=\frac{1}{T^{1/2}}\left\{\sum_{t=1}^{\lfloor Tu\rfloor}\left(\sum_{\ell=1}^{N}{\mathfrak{e}}_{i}(\ell)e_{\ell,t}\right)^{2}-\lfloor Tu\rfloor\sum_{\ell=1}^{N}{\mathfrak{e}}_{i}^{2}(\ell)\sigma^{2}_{\ell}\right\},\;1\leq i\leq K.
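
To make the decomposition above concrete, the following minimal simulation sketch (not part of the original argument) generates data from an assumption-laden special case of (2.2) — independent standard normal \eta_{t} and independent normal e_{\ell,t} — and evaluates D_{N,T}, F_{N,T;i} and G_{N,T;i} along u=t/T; the population eigenvectors {\mathfrak{e}}_{i} are computed from {\bf C}={\mbox{\boldmath$\gamma$}}{\mbox{\boldmath$\gamma$}}^{\top}+{\mbox{\boldmath$\Lambda$}}, and all parameter values are illustrative only.

import numpy as np

rng = np.random.default_rng(1)
N, T, K = 50, 2000, 2
gamma = rng.uniform(0.5, 1.5, size=N)
sigma = rng.uniform(0.8, 1.2, size=N)

C = np.outer(gamma, gamma) + np.diag(sigma**2)
E = np.linalg.eigh(C)[1][:, ::-1][:, :K]        # population eigenvectors e_1,...,e_K (columns)

eta = rng.standard_normal(T)                    # common factor (temporal independence assumed)
e = rng.standard_normal((N, T)) * sigma[:, None]

D = np.cumsum(eta**2 - 1) / np.sqrt(T)                           # D_{N,T}(t/T)
proj = E.T @ e                                                    # row i: sum_l e_i(l) e_{l,t}
F = np.cumsum(eta * proj, axis=1) / np.sqrt(T)                    # F_{N,T;i}(t/T)
center = (E**2).T @ sigma**2                                      # sum_l e_i(l)^2 sigma_l^2
G = (np.cumsum(proj**2, axis=1)
     - np.arange(1, T + 1) * center[:, None]) / np.sqrt(T)        # G_{N,T;i}(t/T)

print("sup_u |D_{N,T}(u)|   =", np.max(np.abs(D)))
print("sup_u |G_{N,T;1}(u)| =", np.max(np.abs(G[0])))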
Lemma 7.6.

If (2.2) and Assumptions 2.1, 2.2, and 2.3 hold, then \{D_{N,T}(u),F_{N,T;i}(u),G_{N,T;i}(u),\,0\leq u\leq 1,\,1\leq i\leq K\} converges in {\mathcal{D}}^{2K+1}[0,1] to a Gaussian process {\mbox{\boldmath$\Gamma$}}(u)=(\Gamma_{1}(u),\Gamma_{2}(u),\ldots,\Gamma_{2K+1}(u))^{\top},\,0\leq u\leq 1, with E{\mbox{\boldmath$\Gamma$}}(u)={\bf 0} and

E{\mbox{\boldmath$\Gamma$}}(u){\mbox{\boldmath$\Gamma$}}^{\top}(u^{\prime})=\min(u,u^{\prime})\left(\begin{array}{ccc}V_{1}&{\bf 0}^{\top}&{\bf 0}^{\top}\\ {\bf 0}&{\bf V}_{2}&{\bf O}\\ {\bf 0}&{\bf O}&{\bf V}_{3}\end{array}\right).
Proof.

First we define the mm–dependent processes

DN,T(m)(u)=1T1/2t=1Tu((ηt(m))21),FN,T;i(m)(u)=1T1/2t=1Tuηt=1N𝔢i()e,t(m), 1iK,D_{N,T}^{(m)}(u)=\frac{1}{T^{1/2}}\sum_{t=1}^{\lfloor Tu\rfloor}((\eta_{t}^{(m)})^{2}-1),\;\;\;F_{N,T;i}^{(m)}(u)=\frac{1}{T^{1/2}}\sum_{t=1}^{\lfloor Tu\rfloor}\eta_{t}\sum_{\ell=1}^{N}{\mathfrak{e}}_{i}(\ell)e_{\ell,t}^{(m)},\;1\leq i\leq K,

and

GN,T;i(m)(u)=1T1/2{t=1Tu(=1N𝔢i()e,t(m))2Tu=1N𝔢i2()σ2}, 1iK,G_{N,T;i}^{(m)}(u)=\frac{1}{T^{1/2}}\left\{\sum_{t=1}^{\lfloor Tu\rfloor}\left(\sum_{\ell=1}^{N}{\mathfrak{e}}_{i}(\ell)e_{\ell,t}^{(m)}\right)^{2}-\lfloor Tu\rfloor\sum_{\ell=1}^{N}{\mathfrak{e}}_{i}^{2}(\ell)\sigma^{2}_{\ell}\right\},\;1\leq i\leq K,

where ηt(m)\eta_{t}^{(m)} and e,t(m)e_{\ell,t}^{(m)} are defined in Assumption 2.3(a) and Assumption 2.3(b), respectively. We show that for any x>0x>0

(7.27) limmlim supTP{|DN,T(u)DN,T(m)(u)|>x}=0,\displaystyle\lim_{m\to\infty}\limsup_{T\to\infty}P\left\{|D_{N,T}(u)-D_{N,T}^{(m)}(u)|>x\right\}=0,
(7.28) limmlim supTP{|FN,T;i(u)FN,T;i(m)(u)|>x}=0,\displaystyle\lim_{m\to\infty}\limsup_{T\to\infty}P\left\{|F_{N,T;i}(u)-F_{N,T;i}^{(m)}(u)|>x\right\}=0,

and

(7.29) limmlim supTP{|GN,T;i(u)GN,T;i(m)(u)|>x}=0,\displaystyle\lim_{m\to\infty}\limsup_{T\to\infty}P\left\{|G_{N,T;i}(u)-G_{N,T;i}^{(m)}(u)|>x\right\}=0,

for all 0<u10<u\leq 1 and 1iK1\leq i\leq K. It follows from Assumption 2.3(a) and the Cauchy–Schwarz inequality that

(7.30) E|η02(η0(m))2|6\displaystyle E\left|\eta_{0}^{2}-(\eta^{(m)}_{0})^{2}\right|^{6} =E{|η0+η0(m)||η0η0(m)|}6\displaystyle=E\left\{|\eta_{0}+\eta^{(m)}_{0}||\eta_{0}-\eta^{(m)}_{0}|\right\}^{6}
\displaystyle\leq 2^{4}(E\eta_{0}^{12})^{1/2}(E|\eta_{0}-\eta^{(m)}_{0}|^{12})^{1/2}
c6,1m6α.\displaystyle\leq c_{6,1}m^{-6\alpha}.

By stationarity, we get that

\displaystyle\mbox{var}\left(T^{-1/2}\sum_{s=1}^{\lfloor Tu\rfloor}(\eta_{s}^{2}-(\eta^{(m)}_{s})^{2})\right)
\displaystyle\leq\frac{1}{T}\sum_{s=1}^{T}E(\eta_{s}^{2}-(\eta^{(m)}_{s})^{2})^{2}+\frac{2}{T}\sum_{s=1}^{T}\sum_{s^{\prime}>s}|E(\eta_{s}^{2}-(\eta^{(m)}_{s})^{2})(\eta_{s^{\prime}}^{2}-(\eta^{(m)}_{s^{\prime}})^{2})|
\displaystyle\leq E(\eta_{0}^{2}-(\eta^{(m)}_{0})^{2})^{2}+2\sum_{s=1}^{T}|E(\eta_{0}^{2}-(\eta^{(m)}_{0})^{2})(\eta_{s}^{2}-(\eta^{(m)}_{s})^{2})|.

Since η02(η0(m))2\eta_{0}^{2}-(\eta^{(m)}_{0})^{2} is independent of ηs(m)\eta^{(m)}_{s}, if s>ms>m, we obtain that

s=m+1T\displaystyle\sum_{s=m+1}^{T} |E(η02(η0(m))2)(ηs2(ηs(m))2)|\displaystyle|E(\eta_{0}^{2}-(\eta^{(m)}_{0})^{2})(\eta_{s}^{2}-(\eta^{(m)}_{s})^{2})|
\displaystyle\leq\sum_{s=m+1}^{T}|E(\eta_{0}^{2}-1)\eta_{s}^{2}|+\sum_{s=m+1}^{T}|E((\eta^{(m)}_{0})^{2}-1)\eta_{s}^{2}|.

The independence of η0\eta_{0} and ηs(s)\eta_{s}^{(s)}, (7.30), and Hölder’s inequality yield

\displaystyle\sum_{s=m+1}^{T}|E(\eta_{0}^{2}-1)\eta_{s}^{2}|=\sum_{s=m+1}^{T}|E(\eta_{0}^{2}-1)(\eta_{s}^{2}-(\eta_{s}^{(s)})^{2})|
s=m+1(E|η021|6/5)5/6(E(η02(η0(s))2)6)1/6\displaystyle\leq\sum_{s=m+1}^{\infty}(E|\eta_{0}^{2}-1|^{6/5})^{5/6}(E(\eta_{0}^{2}-(\eta_{0}^{(s)})^{2})^{6})^{1/6}
c6,2m(α1)\displaystyle\leq c_{6,2}m^{-(\alpha-1)}

with c6,2=(c6,1/(α1))(E|η021|6/5)5/6c_{6,2}=(c_{6,1}/(\alpha-1))(E|\eta_{0}^{2}-1|^{6/5})^{5/6}. The same argument gives that

\sum_{s=m+1}^{T}|E((\eta^{(m)}_{0})^{2}-1)\eta_{s}^{2}|\leq c_{6,2}m^{-(\alpha-1)}.

On the other hand, applying again (7.30) and the Cauchy–Schwarz inequality we conclude

s=1m|E(η02(η0(m))2)(ηs2(ηs(m))2)|s=1mE(η02(η0(m))2)2c6,1m(α1).\displaystyle\sum_{s=1}^{m}|E(\eta_{0}^{2}-(\eta^{(m)}_{0})^{2})(\eta_{s}^{2}-(\eta^{(m)}_{s})^{2})|\leq\sum_{s=1}^{m}E(\eta_{0}^{2}-(\eta^{(m)}_{0})^{2})^{2}\leq c_{6,1}m^{-(\alpha-1)}.

Chebyshev’s inequality now implies (7.27). The proofs of (7.28) and (7.29) go along the lines of that of (7.27); we only need to replace (7.30) with (7.17) and (7.20), respectively. Next we show that, for each m, \{D_{N,T}^{(m)}(u),F_{N,T;i}^{(m)}(u),G_{N,T;i}^{(m)}(u),\,0\leq u\leq 1,\,1\leq i\leq K\} converges in {\mathcal{D}}^{2K+1}[0,1] to a Gaussian process {\mbox{\boldmath$\Gamma$}}^{(m)}(u)=(\Gamma_{1}^{(m)}(u),\Gamma_{2}^{(m)}(u),\ldots,\Gamma_{2K+1}^{(m)}(u))^{\top},\,0\leq u\leq 1, with E{\mbox{\boldmath$\Gamma$}}^{(m)}(u)={\bf 0} and

E{\mbox{\boldmath$\Gamma$}}^{(m)}(u)({\mbox{\boldmath$\Gamma$}}^{(m)})^{\top}(u^{\prime})=\min(u,u^{\prime})\left(\begin{array}{ccc}V_{1}^{(m)}&{\bf 0}^{\top}&{\bf 0}^{\top}\\ {\bf 0}&{\bf V}_{2}^{(m)}&{\bf O}\\ {\bf 0}&{\bf O}&{\bf V}_{3}^{(m)}\end{array}\right)

with

(7.31) V1(m)==mmcov((η0(m))2,(η(m))2),V_{1}^{(m)}=\sum_{\ell=-m}^{m}\mbox{cov}((\eta_{0}^{(m)})^{2},(\eta^{(m)}_{\ell})^{2}),
(7.32) 𝐕2(m)={s=mmlimNk=1N𝔢i(k)𝔢j(k)cov(η0(m),ηs(m))cov(ek,0(m),ek,s(m)),1i,jK},{\bf V}_{2}^{(m)}=\left\{\sum_{s=-m}^{m}\lim_{N\to\infty}\sum_{k=1}^{N}{\mathfrak{e}}_{i}(k){\mathfrak{e}}_{j}(k)\mbox{cov}(\eta_{0}^{(m)},\eta_{s}^{(m)})\mbox{cov}(e_{k,0}^{(m)},e_{k,s}^{(m)}),1\leq i,j\leq K\right\},

and

(7.33) 𝐕3(m)\displaystyle{\bf V}_{3}^{(m)} ={s=mmlimN(k=1N𝔢i2(k)𝔢j2(k)cov((ek,0(m))2,(ek,s(m))2)\displaystyle=\left\{\sum_{s=-m}^{m}\lim_{N\to\infty}\left(\sum_{k=1}^{N}{\mathfrak{e}}_{i}^{2}(k){\mathfrak{e}}^{2}_{j}(k)\mbox{cov}((e_{k,0}^{(m)})^{2},(e_{k,s}^{(m)})^{2})\right.\right.
+2(k=1N𝔢i(k)𝔢j(k)cov(ek,0(m),ek,s(m)))2\displaystyle\hskip 42.67912pt+2\left(\sum_{k=1}^{N}{\mathfrak{e}}_{i}(k){\mathfrak{e}}_{j}(k)\mbox{cov}(e_{k,0}^{(m)},e_{k,s}^{(m)})\right)^{2}
2k=1N𝔢i2(k)𝔢j2(k)(cov(ek,0(m),ek,s(m)))2),1i,jK}.\displaystyle\hskip 42.67912pt\left.\left.-2\sum_{k=1}^{N}{\mathfrak{e}}_{i}^{2}(k){\mathfrak{e}}_{j}^{2}(k)(\mbox{cov}(e_{k,0}^{(m)},e_{k,s}^{(m)}))^{2}\right),1\leq i,j\leq K\right\}.

Let 0\leq u_{1}<u_{2}<\ldots<u_{M}\leq 1 and let \mu_{k,j,i},\,1\leq k\leq M,\,1\leq j\leq 3,\,1\leq i\leq K, be arbitrary real numbers. We can write

k=1Mμk,1,1(DN,T(m)(uk)DN,T(m)(uk1))+k=1Mi=1Kμk,2,i(FN,T,i(m)(uk)FN,T,i(m)(uk1))\displaystyle\sum_{k=1}^{M}\mu_{k,1,1}(D_{N,T}^{(m)}(u_{k})-D_{N,T}^{(m)}(u_{k-1}))+\sum_{k=1}^{M}\sum_{i=1}^{K}\mu_{k,2,i}(F_{N,T,i}^{(m)}(u_{k})-F_{N,T,i}^{(m)}(u_{k-1}))
+k=1Mi=1Kμk,3,i(GN,T,i(m)(uk)GN,T,i(m)(uk1))\displaystyle\hskip 56.9055pt+\sum_{k=1}^{M}\sum_{i=1}^{K}\mu_{k,3,i}(G_{N,T,i}^{(m)}(u_{k})-G_{N,T,i}^{(m)}(u_{k-1}))
=𝒮1++𝒮M,\displaystyle={\mathcal{S}}_{1}+\ldots+{\mathcal{S}}_{M},

where

{\mathcal{S}}_{k}=\sum_{s=\lfloor Tu_{k-1}\rfloor+1}^{\lfloor Tu_{k}\rfloor}\xi_{N,T;s}(k),\;\;1\leq k\leq M,\;\;\mbox{with}\;\;u_{0}=0.

The variables ξN,T;s(k),Tuk1+1sTuk,1kM\xi_{N,T;s}(k),\lfloor Tu_{k-1}\rfloor+1\leq s\leq\lfloor Tu_{k}\rfloor,1\leq k\leq M are mm–dependent and therefore T1/2𝒮1T^{-1/2}{\mathcal{S}}_{1}, T1/2𝒮2,T^{-1/2}{\mathcal{S}}_{2},\ldots, T1/2𝒮MT^{-1/2}{\mathcal{S}}_{M} are asymptotically independent. Hence we need only show the asymptotic normality of T1/2𝒮kT^{-1/2}{\mathcal{S}}_{k} for all 1kM1\leq k\leq M. For every fixed kk the variables ξN,T;s(k),Tuk1+1sTuk\xi_{N,T;s}(k),\lfloor Tu_{k-1}\rfloor+1\leq s\leq\lfloor Tu_{k}\rfloor form an mm–dependent stationary sequence with zero mean,

limT\displaystyle\lim_{T\to\infty} var(T1/2𝒮k)\displaystyle\mbox{var}\left(T^{-1/2}{\mathcal{S}}_{k}\right)
\displaystyle=\mbox{var}\left(\mu_{k,1,1}(\Gamma_{1}^{(m)}(u_{k})-\Gamma_{1}^{(m)}(u_{k-1}))+\sum_{i=1}^{K}\mu_{k,2,i}(\Gamma_{i+1}^{(m)}(u_{k})-\Gamma_{i+1}^{(m)}(u_{k-1}))\right.
+i=1Kμk,3,i(Γi+K+1(m)(uk)Γi+K+1(m)(uk1)))\displaystyle\hskip 85.35826pt\left.+\sum_{i=1}^{K}\mu_{k,3,i}(\Gamma_{i+K+1}^{(m)}(u_{k})-\Gamma_{i+K+1}^{(m)}(u_{k-1}))\right)

and E|\xi_{N,T;s}(k)|^{3}\leq C_{1}, where C_{1} depends neither on N nor on T. Due to the m–dependence, these properties imply the asymptotic normality of T^{-1/2}{\mathcal{S}}_{k}. Applying the Cramér–Wold device (cf. Billingsley (1968)), we get that the finite dimensional distributions of \{D_{N,T}^{(m)}(u),F_{N,T;i}^{(m)}(u),G_{N,T;i}^{(m)}(u),\,0\leq u\leq 1,\,1\leq i\leq K\} converge to those of {\mbox{\boldmath$\Gamma$}}^{(m)}(u). Since \|{\bf V}^{(m)}-{\bf V}\|\to 0 as m\to\infty, and {\mbox{\boldmath$\Gamma$}}(u) and {\mbox{\boldmath$\Gamma$}}^{(m)}(u) are Gaussian processes, we conclude that {\mbox{\boldmath$\Gamma$}}^{(m)}(u) converges in {\mathcal{D}}^{2K+1}[0,1] to {\mbox{\boldmath$\Gamma$}}(u). On account of (7.27)–(7.29) we obtain that the finite dimensional distributions of \{D_{N,T}(u),F_{N,T;i}(u),G_{N,T;i}(u),\,0\leq u\leq 1,\,1\leq i\leq K\} converge to those of {\mbox{\boldmath$\Gamma$}}(u). It is shown in the proof of Lemma 7.1 that

E|t=1v(ηt21)|3c6,3v3/2,E|t=1vηt=1N𝔢i()e,t|3c6,4v3/2E\left|\sum_{t=1}^{v}(\eta_{t}^{2}-1)\right|^{3}\leq c_{6,3}v^{3/2},\;\;E\left|\sum_{t=1}^{v}\eta_{t}\sum_{\ell=1}^{N}{\mathfrak{e}}_{i}(\ell)e_{\ell,t}\right|^{3}\leq c_{6,4}v^{3/2}\;\;

and

E|t=1v(=1N𝔢i()e,t)2v=1N𝔢i2()σ2|3c6,5v3/2.E\left|\sum_{t=1}^{v}\left(\sum_{\ell=1}^{N}{\mathfrak{e}}_{i}(\ell)e_{\ell,t}\right)^{2}-v\sum_{\ell=1}^{N}{\mathfrak{e}}_{i}^{2}(\ell)\sigma^{2}_{\ell}\right|^{3}\leq c_{6,5}v^{3/2}.

Due to the stationarity of \eta_{t} and e_{i,t},\,1\leq i\leq N, the tightness follows from Theorem 8.4 of Billingsley (1968). ∎

Proof of Theorem 6.1. By Lemmas 7.2, 7.3 and 7.5 we have that

sup0u1|T1/2(λi~(u)TuTλi)T1/2𝔢i(𝐂~N,T(u)TuT𝐂)𝔢i|=oP(1).\sup_{0\leq u\leq 1}\left|T^{1/2}\left(\tilde{\lambda_{i}}(u)-\frac{\lfloor Tu\rfloor}{T}\lambda_{i}\right)-T^{1/2}{\mathfrak{e}}_{i}^{\top}\left(\tilde{{\bf C}}_{N,T}(u)-\frac{\lfloor Tu\rfloor}{T}{\bf C}\right){\mathfrak{e}}_{i}\right|=o_{P}(1).

Also,

sup0u1\displaystyle\sup_{0\leq u\leq 1} |T1/2𝔢i(𝐂~N,T(u)TuT𝐂)𝔢iGN,T;i(u)|\displaystyle\left|T^{1/2}{\mathfrak{e}}_{i}^{\top}\left(\tilde{{\bf C}}_{N,T}(u)-\frac{\lfloor Tu\rfloor}{T}{\bf C}\right){\mathfrak{e}}_{i}-G_{N,T;i}(u)\right|
(𝔢i𝜸)2sup0u1|DN,T(u)|+2|𝔢i𝜸|sup0u1|FN,T;i(u)|\displaystyle\leq({\mathfrak{e}}_{i}^{\top}{\mbox{\boldmath$\gamma$}})^{2}\sup_{0\leq u\leq 1}|D_{N,T}(u)|+2|{\mathfrak{e}}_{i}^{\top}{\mbox{\boldmath$\gamma$}}|\sup_{0\leq u\leq 1}|F_{N,T;i}(u)|
=OP(1)((𝔢i𝜸)2+|𝔢i𝜸|),\displaystyle=O_{P}(1)(({\mathfrak{e}}_{i}^{\top}{\mbox{\boldmath$\gamma$}})^{2}+|{\mathfrak{e}}_{i}^{\top}{\mbox{\boldmath$\gamma$}}|),

since by Lemma 7.6

sup0u1|DN,T(u)|=OP(1)andsup0u1|FN,T;i(u)|=OP(1).\sup_{0\leq u\leq 1}|D_{N,T}(u)|=O_{P}(1)\;\;\;\mbox{and}\;\;\;\sup_{0\leq u\leq 1}|F_{N,T;i}(u)|=O_{P}(1).

By the Cauchy–Schwarz inequality we have that |𝔢i𝜸|𝜸|{\mathfrak{e}}_{i}^{\top}{\mbox{\boldmath$\gamma$}}|\leq\|{\mbox{\boldmath$\gamma$}}\| and therefore

sup0u1|T1/2𝔢i(𝐂~N,T(u)TuT𝐂)𝔢iGN,T;i(u)|=oP(1).\sup_{0\leq u\leq 1}\left|T^{1/2}{\mathfrak{e}}_{i}^{\top}\left(\tilde{{\bf C}}_{N,T}(u)-\frac{\lfloor Tu\rfloor}{T}{\bf C}\right){\mathfrak{e}}_{i}-G_{N,T;i}(u)\right|=o_{P}(1).

The weak convergence of G_{N,T;i}(u),\,0\leq u\leq 1,\,1\leq i\leq K, is proven in Lemma 7.6, which completes the proof of Theorem 6.1. ∎

Proof of Theorem 6.2.

Lemmas 7.2 and 7.3 yield

sup0u1|T1/2𝜸2(λ~1(u)TuTλ1)T1/2𝜸2𝔢1(𝐂~N,T(u)TuT𝐂)𝔢1|=oP(1).\displaystyle\sup_{0\leq u\leq 1}\left|T^{1/2}\|{\mbox{\boldmath$\gamma$}}\|^{-2}\Bigl{(}\tilde{\lambda}_{1}(u)-\frac{\lfloor Tu\rfloor}{T}\lambda_{1}\Bigl{)}-T^{1/2}\|{\mbox{\boldmath$\gamma$}}\|^{-2}{\mathfrak{e}}_{1}^{\top}\Bigl{(}\tilde{{\bf C}}_{N,T}(u)-\frac{\lfloor Tu\rfloor}{T}{\bf C}\Bigl{)}{\mathfrak{e}}_{1}\right|=o_{P}(1).

Thus Lemma 7.6 yields

sup0u1|T1/2𝜸2(λ~1(u)TuTλ1)(𝔢1𝜸)2𝜸2DN,T(u)|=oP(1).\displaystyle\sup_{0\leq u\leq 1}\left|T^{1/2}\|{\mbox{\boldmath$\gamma$}}\|^{-2}\left(\tilde{\lambda}_{1}(u)-\frac{\lfloor Tu\rfloor}{T}\lambda_{1}\right)-\frac{({\mathfrak{e}}_{1}^{\top}{\mbox{\boldmath$\gamma$}})^{2}}{\|{\mbox{\boldmath$\gamma$}}\|^{2}}D_{N,T}(u)\right|=o_{P}(1).

According to Lemma 7.6 sup0u1|DN,T(u)|=OP(1)\sup_{0\leq u\leq 1}|D_{N,T}(u)|=O_{P}(1) and since (𝔢1𝜸)2/𝜸21({\mathfrak{e}}_{1}^{\top}{\mbox{\boldmath$\gamma$}})^{2}/\|{\mbox{\boldmath$\gamma$}}\|^{2}\to 1 by Lemma 7.4, we conclude

(7.34) sup0u1|T1/2𝜸2(λ~1(u)TuTλ1)DN,T(u)|=oP(1).\displaystyle\sup_{0\leq u\leq 1}\left|T^{1/2}\|{\mbox{\boldmath$\gamma$}}\|^{-2}\left(\tilde{\lambda}_{1}(u)-\frac{\lfloor Tu\rfloor}{T}\lambda_{1}\right)-D_{N,T}(u)\right|=o_{P}(1).

Lemmas 7.4 and 7.5 imply

(7.35) sup0u1\displaystyle\sup_{0\leq u\leq 1} |T1/2(λ~i(u)uλi)((𝔢i𝜸)2DN,T(u)+2𝔢i𝜸FN,T;i(u)+GN,T;i(u))|\displaystyle\left|T^{1/2}(\tilde{\lambda}_{i}(u)-u\lambda_{i})-(({\mathfrak{e}}_{i}^{\top}{\mbox{\boldmath$\gamma$}})^{2}D_{N,T}(u)+2{\mathfrak{e}}_{i}^{\top}{\mbox{\boldmath$\gamma$}}F_{N,T;i}(u)+G_{N,T;i}(u))\right|
=oP(1).\displaystyle=o_{P}(1).

Combining (7.34) and (7.35) with Lemma 7.6, we obtain that \{T^{1/2}\|{\mbox{\boldmath$\gamma$}}\|^{-2}(\tilde{\lambda}_{1}(u)-u\lambda_{1}),T^{1/2}(\tilde{\lambda}_{i}(u)-u\lambda_{i}),\,2\leq i\leq K\} converges weakly in {\mathcal{D}}^{K}[0,1] to {\mbox{\boldmath$\Gamma$}}^{0}(u)=(\Gamma^{0}_{1}(u),\Gamma^{0}_{2}(u),\ldots,\Gamma^{0}_{K}(u))^{\top}, where \Gamma_{1}^{0}(u)=\Gamma_{1}(u) and \Gamma_{i}^{0}(u)=a_{i}^{2}\Gamma_{1}(u)+2a_{i}\Gamma_{i+1}(u)+\Gamma_{i+K+1}(u),\,2\leq i\leq K. The computation of the covariance function of {\mbox{\boldmath$\Gamma$}}^{0}(u) finishes the proof of Theorem 6.2. ∎

Proof of Theorem 2.1. Theorem 2.1 is implied by Theorems 6.1 and 6.2 by Remark 6.3.

Proof of Theorem 2.2 and Remark 6.5. Theorem 2.2 follows from Remark 6.5. Remark 6.5 follows from Theorems 6.1 and 6.2 when the condition (2.6) is replaced with (2.7). This requires replacing Lemma 7.5 with the result that for all c>0c>0

max1iKsupcu1|ZN,T;i(u)|=OP(NT),\max_{1\leq i\leq K}\sup_{c\leq u\leq 1}|Z_{N,T;i}(u)|=O_{P}\left(\frac{N}{T}\right),

which follows from (7.23) and Markov’s inequality.

7.2. Proof of Theorems 3.1 and 3.2

We prove a more general result concerning consistent estimates for norming sequences for each eigenvalue process. Let

ξ^i,t=(𝔢^i(𝐗t𝐗¯T))2,  1tT   1iK,\hat{\xi}_{i,t}=(\hat{{\mathfrak{e}}}_{i}^{\top}({\bf X}_{t}-\bar{{\bf X}}_{T}))^{2},\;\;1\leq t\leq T\;\;\;1\leq i\leq K,

and define

v^i,T2=s=N+1N1J(sh)r^i,s,\displaystyle\hat{v}^{2}_{i,T}=\sum_{s=-N+1}^{N-1}J\left(\frac{s}{h}\right)\hat{r}_{i,s},

where

\hat{r}_{i,s}=\left\{\begin{array}[]{ll}\displaystyle\frac{1}{T-s}\sum_{t=1}^{T-s}(\hat{\xi}_{i,t}-\bar{\xi}_{i,T})(\hat{\xi}_{i,t+s}-\bar{\xi}_{i,T}),\;\;\mbox{if}\;\;s\geq 0\vskip 5.69046pt\\ \displaystyle\frac{1}{T-|s|}\sum_{t=1-s}^{T}(\hat{\xi}_{i,t}-\bar{\xi}_{i,T})(\hat{\xi}_{i,t+s}-\bar{\xi}_{i,T}),\;\;\mbox{if}\;\;s<0,\end{array}\right.

where

\bar{\xi}_{i,T}=\frac{1}{T}\sum_{t=1}^{T}\hat{\xi}_{i,t}.
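
As an illustration of how \hat{v}^{2}_{i,T} can be computed in practice, the following sketch implements the kernel estimator above for a single score sequence \hat{\xi}_{i,1},\ldots,\hat{\xi}_{i,T}; by (7.36)–(7.38) below, this quantity estimates the relevant norming sequence. The Bartlett kernel and the bandwidth h=T^{1/3} are illustrative assumptions only (the argument merely requires a kernel J with bounded support and a bandwidth h=h(T)), and the sum over all lags coincides with the definition above whenever J vanishes outside [-1,1] and h\leq N.

import numpy as np

def bartlett(x):
    # kernel J with bounded support: J(x) = (1 - |x|)_+  (illustrative choice)
    return np.maximum(1.0 - np.abs(x), 0.0)

def long_run_variance(xi, h, kernel=bartlett):
    # returns sum_{|s| < T} J(s/h) * r_s, where r_s is the lag-s sample autocovariance
    # of the centred scores, computed with the 1/(T - |s|) normalisation used above
    xi = np.asarray(xi, dtype=float)
    T = xi.size
    xc = xi - xi.mean()
    v2 = 0.0
    for s in range(-(T - 1), T):
        w = kernel(s / h)
        if w == 0.0:
            continue
        a = abs(s)
        r_s = np.dot(xc[:T - a], xc[a:]) / (T - a)
        v2 += w * r_s
    return v2

# hypothetical usage with scores xi_hat[t] = (e_hat_i^T (X_t - Xbar_T))^2, X of shape (N, T):
# xi_hat = (e_hat_i @ (X - X.mean(axis=1, keepdims=True)))**2
# v2_hat = long_run_variance(xi_hat, h=len(xi_hat) ** (1.0 / 3.0))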

We show that if 𝜸=O(1)\|{\mbox{\boldmath$\gamma$}}\|=O(1) as TT\to\infty, then

(7.36) v^i,T2G(i,i)P 1,asT.\frac{\hat{v}^{2}_{i,T}}{G(i,i)}\;\;\stackrel{{\scriptstyle P}}{{\to}}\;1,\;\;\;\mbox{as}\;\;T\to\infty.

Moreover, if 𝜸\|{\mbox{\boldmath$\gamma$}}\|\to\infty as TT\to\infty, then

(7.37) v^1,T2V1𝜸4P 1,asT,\frac{\hat{v}^{2}_{1,T}}{V_{1}\|{\mbox{\boldmath$\gamma$}}\|^{4}}\;\;\stackrel{{\scriptstyle P}}{{\to}}\;1,\;\;\;\mbox{as}\;\;T\to\infty,

and for 2iK2\leq i\leq K,

(7.38) v^i,T2H(i,i)P 1,asT.\frac{\hat{v}^{2}_{i,T}}{H(i,i)}\;\;\stackrel{{\scriptstyle P}}{{\to}}\;1,\;\;\;\mbox{as}\;\;T\to\infty.

We can assume without loss of generality that E𝐗t=𝟎E{\bf X}_{t}={\bf 0}. Elementary algebra gives that

\displaystyle({\bf X}_{t}-\bar{{\bf X}}_{T})({\bf X}_{t}-\bar{{\bf X}}_{T})^{\top}-\frac{1}{T}\sum_{u=1}^{T}({\bf X}_{u}-\bar{{\bf X}}_{T})({\bf X}_{u}-\bar{{\bf X}}_{T})^{\top}
\displaystyle={\bf X}_{t}{\bf X}_{t}^{\top}-E{\bf X}_{0}{\bf X}_{0}^{\top}-\frac{1}{T}\sum_{u=1}^{T}\left({\bf X}_{u}{\bf X}_{u}^{\top}-E{\bf X}_{0}{\bf X}_{0}^{\top}\right)-{\bf X}_{t}\bar{{\bf X}}_{T}^{\top}-\bar{{\bf X}}_{T}{\bf X}_{t}^{\top}+2\bar{{\bf X}}_{T}\bar{{\bf X}}_{T}^{\top}.

It is easy to see that

\displaystyle E\left|\hat{{\mathfrak{e}}}_{i}^{\top}\left[\frac{1}{T}\sum_{t=1}^{T}({\bf X}_{t}{\bf X}_{t}^{\top}-E{\bf X}_{0}{\bf X}_{0}^{\top})\right]\hat{{\mathfrak{e}}}_{i}\left[\hat{{\mathfrak{e}}}_{i}^{\top}\frac{1}{T}\sum_{u=1}^{T}\left({\bf X}_{u}{\bf X}_{u}^{\top}-E{\bf X}_{0}{\bf X}_{0}^{\top}\right)\hat{{\mathfrak{e}}}_{i}\right]\right|
E1Tt=1T(𝐗t𝐗tE𝐗0𝐗0)2\displaystyle\hskip 14.22636pt\leq E\left\|\frac{1}{T}\sum_{t=1}^{T}({\bf X}_{t}{\bf X}_{t}^{\top}-E{\bf X}_{0}{\bf X}_{0}^{\top})\right\|^{2}
=1T2=1N=1NE(u=1T(X,uX,uEX,uX,u))2\displaystyle\hskip 14.22636pt=\frac{1}{T^{2}}\sum_{\ell=1}^{N}\sum_{\ell^{\prime}=1}^{N}E\left(\sum_{u=1}^{T}(X_{\ell,u}X_{\ell^{\prime},u}-EX_{\ell,u}X_{\ell^{\prime},u})\right)^{2}
=O(N2T)\displaystyle=O\left(\frac{N^{2}}{T}\right)

and therefore by Markov’s inequality we have

(7.39) \displaystyle\left|\left[\frac{1}{T}\sum_{t=1}^{T}\hat{{\mathfrak{e}}}_{i}^{\top}({\bf X}_{t}{\bf X}_{t}^{\top}-E{\bf X}_{0}{\bf X}_{0}^{\top})\hat{{\mathfrak{e}}}_{i}\right]\left[\hat{{\mathfrak{e}}}_{i}^{\top}\frac{1}{T}\sum_{u=1}^{T}\left({\bf X}_{u}{\bf X}_{u}^{\top}-E{\bf X}_{0}{\bf X}_{0}^{\top}\right)\hat{{\mathfrak{e}}}_{i}\right]\right|=O_{P}\left(\frac{N^{2}}{T}\right).

Using the same arguments as above, for every c7,1c_{7,1} one can find c7,2c_{7,2} such that

(7.40) \displaystyle\lim_{T\to\infty}P\left\{\left|\frac{1}{T}\sum_{t=1}^{T}\hat{{\mathfrak{e}}}_{i}^{\top}({\bf X}_{t}{\bf X}_{t}^{\top}-E{\bf X}_{t}{\bf X}_{t}^{\top})\hat{{\mathfrak{e}}}_{i}\,\hat{{\mathfrak{e}}}_{i}^{\top}{\bf X}_{t+s}\bar{{\bf X}}_{T}^{\top}\hat{{\mathfrak{e}}}_{i}\right|\geq c_{7,2}N^{2}/T\right\}\leq c_{7,1}

and for every c_{7,3} there is c_{7,4} such that

(7.41) limTP{|𝔢^i1Tu=1T(𝐗u𝐗uE𝐗u𝐗u)𝔢^i1Tt=1T𝔢^i𝐗t+s𝐗¯T𝔢^i|c7,4N2/T}c7,3.\displaystyle\lim_{T\to\infty}P\left\{\left|\hat{{\mathfrak{e}}}_{i}^{\top}\frac{1}{T}\sum_{u=1}^{T}({\bf X}_{u}{\bf X}_{u}^{\top}-E{\bf X}_{u}{\bf X}_{u}^{\top})\hat{{\mathfrak{e}}}_{i}\frac{1}{T}\sum_{t=1}^{T}\hat{{\mathfrak{e}}}_{i}^{\top}{\bf X}_{t+s}\bar{{\bf X}}_{T}\hat{{\mathfrak{e}}}_{i}\right|\geq c_{7,4}N^{2}/T\right\}\leq c_{7,3}.

We note

|1Tt=1Ts𝔢^i𝐗t𝐗¯T𝔢^i𝔢^i𝐗t+s𝐗¯T𝔢^i|𝐗¯T21Tt=1T𝐗t𝐗t+s.\displaystyle\left|\frac{1}{T}\sum_{t=1}^{T-s}\hat{{\mathfrak{e}}}_{i}^{\top}{\bf X}_{t}\bar{{\bf X}}_{T}\hat{{\mathfrak{e}}}_{i}\hat{{\mathfrak{e}}}_{i}^{\top}{\bf X}_{t+s}\bar{{\bf X}}_{T}\hat{{\mathfrak{e}}}_{i}\right|\leq\|\bar{{\bf X}}_{T}\|^{2}\frac{1}{T}\sum_{t=1}^{T}\|{\bf X}_{t}\|\|{\bf X}_{t+s}\|.

By (2.2) and assumption μi=0\mu_{i}=0 we get that from Assumption 2.3(a)–Assumption 2.3(b) and Assumption 2.2

E𝐗¯T2=1T2u,v=1TE𝐗u𝐗v\displaystyle E\|\bar{{\bf X}}_{T}\|^{2}=\frac{1}{T^{2}}\sum_{u,v=1}^{T}E{\bf X}_{u}^{\top}{\bf X}_{v} =1T2u,v=1T=1NEX,uX,v\displaystyle=\frac{1}{T^{2}}\sum_{u,v=1}^{T}\sum_{\ell=1}^{N}EX_{\ell,u}X_{\ell,v}
=1T2u,v=1T=1N(γ2Eηuηv+Ee,ue,v)\displaystyle=\frac{1}{T^{2}}\sum_{u,v=1}^{T}\sum_{\ell=1}^{N}(\gamma_{\ell}^{2}E\eta_{u}\eta_{v}+Ee_{\ell,u}e_{\ell,v})
=O(NT)\displaystyle=O\left(\frac{N}{T}\right)

using the arguments in the proof of Lemma 7.1. Due to stationarity we have

E𝐗t𝐗t+s(E𝐗t2E𝐗t+s2)1/2=E𝐗02E\|{\bf X}_{t}\|\|{\bf X}_{t+s}\|\leq(E\|{\bf X}_{t}\|^{2}E\|{\bf X}_{t+s}\|^{2})^{1/2}=E\|{\bf X}_{0}\|^{2}

and

\displaystyle E\|{\bf X}_{0}\|^{2}=\sum_{\ell=1}^{N}(\gamma_{\ell}^{2}+Ee_{\ell,0}^{2})=O(N).

Hence for every c7,5c_{7,5} there is c7,6c_{7,6} such that

(7.42) limTP{|1Tt=1Ts𝔢^i𝐗t𝐗¯T𝔢^i𝔢^i𝐗t+s𝐗¯T𝔢^i|c7,6N2/T}c7,5.\displaystyle\lim_{T\to\infty}P\left\{\left|\frac{1}{T}\sum_{t=1}^{T-s}\hat{{\mathfrak{e}}}_{i}^{\top}{\bf X}_{t}\bar{{\bf X}}_{T}\hat{{\mathfrak{e}}}_{i}\hat{{\mathfrak{e}}}_{i}^{\top}{\bf X}_{t+s}\bar{{\bf X}}_{T}\hat{{\mathfrak{e}}}_{i}\right|\geq c_{7,6}N^{2}/T\right\}\leq c_{7,5}.

Putting together (7.39)–(7.42) we conclude

v^i,T2=v~i,T2+OP(hN2T),\displaystyle\hat{v}_{i,T}^{2}=\tilde{v}_{i,T}^{2}+O_{P}\left(\frac{hN^{2}}{T}\right),

where

v~i,T2=s=N+1N1J(sh)r~i,s,\tilde{v}^{2}_{i,T}=\sum_{s=-N+1}^{N-1}J\left(\frac{s}{h}\right)\tilde{r}_{i,s},

where

\tilde{r}_{i,s}=\left\{\begin{array}[]{ll}\displaystyle\frac{1}{T-s}\sum_{t=1}^{T-s}\tilde{\xi}_{i,t}\tilde{\xi}_{i,t+s},\;\;\mbox{if}\;\;s\geq 0\vskip 5.69046pt\\ \displaystyle\frac{1}{T-|s|}\sum_{t=1-s}^{T}\tilde{\xi}_{i,t}\tilde{\xi}_{i,t+s},\;\;\mbox{if}\;\;s<0\end{array}\right.

with ξ~i,t=𝔢^i(𝐗t𝐗tE(𝐗0𝐗0))𝔢^i\tilde{\xi}_{i,t}=\hat{{\mathfrak{e}}}_{i}^{\top}({\bf X}_{t}{\bf X}_{t}^{\top}-E({\bf X}_{0}{\bf X}_{0}^{\top}))\hat{{\mathfrak{e}}}_{i}.
It follows from Dunford and Schwartz (1988) and Assumption 2.1 that with some constant c7,6c_{7,6}

(7.43) max1iK𝔢^ic^i𝔢ic7,6𝐂^N,T(1)𝐂,\displaystyle\max_{1\leq i\leq K}\|\hat{{\mathfrak{e}}}_{i}-\hat{c}_{i}{\mathfrak{e}}_{i}\|\leq c_{7,6}\|\hat{{\bf C}}_{N,T}(1)-{\bf C}\|,

where c^i,1iK\hat{c}_{i},1\leq i\leq K are random signs. We write

\|\hat{{\bf C}}_{N,T}(1)-{\bf C}\|\leq\left\|\frac{1}{T}\sum_{t=1}^{T}({\bf X}_{t}{\bf X}_{t}^{\top}-{\bf C})\right\|+\left\|\bar{{\bf X}}_{T}\bar{{\bf X}}_{T}^{\top}\right\|,

and since we can assume without loss of generality that E𝐗t=𝟎E{\bf X}_{t}={\bf 0} we get from the proof of Lemma 7.1

𝐗¯T𝐗¯T=OP(NT).\left\|\bar{{\bf X}}_{T}\bar{{\bf X}}_{T}^{\top}\right\|=O_{P}\left(\frac{N}{T}\right).

Also,

Et=1T(𝐗t𝐗t𝐂)2\displaystyle E\left\|\sum_{t=1}^{T}({\bf X}_{t}{\bf X}_{t}^{\top}-{\bf C})\right\|^{2} =E,=1N(t=1T(X,tX,tEX,tX,t))2\displaystyle=E\sum_{\ell,\ell^{\prime}=1}^{N}\left(\sum_{t=1}^{T}(X_{\ell,t}X_{\ell^{\prime},t}-EX_{\ell,t}X_{\ell^{\prime},t})\right)^{2}
=,=1Nt,t=1T(EX,tX,tX,tX,tEX,tX,tEX,tX,t),\displaystyle=\sum_{\ell,\ell^{\prime}=1}^{N}\sum_{t,t^{\prime}=1}^{T}(EX_{\ell,t}X_{\ell^{\prime},t}X_{\ell,t^{\prime}}X_{\ell^{\prime},t^{\prime}}-EX_{\ell,t}X_{\ell^{\prime},t}EX_{\ell,t^{\prime}}X_{\ell^{\prime},t^{\prime}}),
\mbox{where}\quad EX_{\ell,t}X_{\ell^{\prime},t}=\left\{\begin{array}[]{ll}\gamma_{\ell}\gamma_{\ell^{\prime}},&\quad\mbox{if}\quad\ell\neq\ell^{\prime}\vskip 5.69046pt\\ \gamma_{\ell}^{2}+Ee^{2}_{\ell,0},&\quad\mbox{if}\quad\ell=\ell^{\prime},\end{array}\right.

and

EX,tX,tX,tX,t={γ2γ2Eηt2ηt2+γ2EηtηtEe,te,t+γ2EηtηtEe,te,t+Ee,te,tEe,te,t,ifγ4Eηt2ηt2+2γ2Eη02Ee,02+4γ2EηtηtEe,te,t+Ee,t2e,t2,if=.EX_{\ell,t}X_{\ell^{\prime},t}X_{\ell,t^{\prime}}X_{\ell^{\prime},t^{\prime}}=\left\{\begin{array}[]{ll}\gamma_{\ell}^{2}\gamma_{\ell^{\prime}}^{2}E\eta_{t}^{2}\eta_{t^{\prime}}^{2}+\gamma_{\ell}^{2}E\eta_{t}\eta_{t^{\prime}}Ee_{\ell^{\prime},t}e_{\ell^{\prime},t^{\prime}}+\gamma^{2}_{\ell^{\prime}}E\eta_{t}\eta_{t^{\prime}}Ee_{\ell,t}e_{\ell,t^{\prime}}\vskip 5.69046pt\\ \;\;\;+Ee_{\ell,t}e_{\ell,t^{\prime}}Ee_{\ell^{\prime},t}e_{\ell^{\prime},t^{\prime}},\quad\mbox{if}\quad\ell\neq\ell^{\prime}\vskip 5.69046pt\\ \gamma_{\ell}^{4}E\eta_{t}^{2}\eta_{t^{\prime}}^{2}+2\gamma_{\ell}^{2}E\eta_{0}^{2}Ee^{2}_{\ell,0}+4\gamma_{\ell}^{2}E\eta_{t}\eta_{t^{\prime}}Ee_{\ell,t}e_{\ell,t^{\prime}}+Ee^{2}_{\ell,t}e^{2}_{\ell,t^{\prime}},\vskip 5.69046pt\\ \quad\mbox{if}\quad\ell=\ell^{\prime}.\end{array}\right.

Thus we have

=1N\displaystyle\sum_{\ell=1}^{N} t,t=1T(EX,t2X,t2(EX,02)2)\displaystyle\sum_{t,t^{\prime}=1}^{T}(EX^{2}_{\ell,t}X^{2}_{\ell,t^{\prime}}-(EX^{2}_{\ell,0})^{2})
==1Nγ4t,t=1T(Eηt2ηt2(Eη02)2)+4=1Nγ2t,t=1TEηtηtEe,te,t\displaystyle=\sum_{\ell=1}^{N}\gamma_{\ell}^{4}\sum_{t,t^{\prime}=1}^{T}(E\eta_{t}^{2}\eta_{t^{\prime}}^{2}-(E\eta_{0}^{2})^{2})+4\sum_{\ell=1}^{N}\gamma_{\ell}^{2}\sum_{t,t^{\prime}=1}^{T}E\eta_{t}\eta_{t^{\prime}}Ee_{\ell,t}e_{\ell,t^{\prime}}
+=1Nt,t=1T(Ee,t2e,t2(Ee,02)2)\displaystyle\hskip 28.45274pt+\sum_{\ell=1}^{N}\sum_{t,t^{\prime}=1}^{T}(Ee^{2}_{\ell,t}e^{2}_{\ell,t^{\prime}}-(Ee^{2}_{\ell,0})^{2})
=O(NT).\displaystyle=O\left({N}{T}\right).

Similarly,

,=1,Nt,t=1T(EX,tX,tX,tX,tEX,tX,tEX,tX,t)\displaystyle\sum_{\ell,\ell^{\prime}=1,\ell\neq\ell^{\prime}}^{N}\sum_{t,t^{\prime}=1}^{T}(EX_{\ell,t}X_{\ell,t^{\prime}}X_{\ell^{\prime},t}X_{\ell^{\prime},t^{\prime}}-EX_{\ell,t}X_{\ell,t^{\prime}}EX_{\ell^{\prime},t}X_{\ell^{\prime},t^{\prime}})
=,=1,Nγ2γ2t,t=1T(Eηt2ηt21)+2,=1,Nγ2t,t=1TEηtηtEe,te,t\displaystyle\hskip 14.22636pt=\sum_{\ell,\ell^{\prime}=1,\ell\neq\ell^{\prime}}^{N}\gamma_{\ell}^{2}\gamma^{2}_{\ell^{\prime}}\sum_{t,t^{\prime}=1}^{T}(E\eta_{t}^{2}\eta^{2}_{t^{\prime}}-1)+2\sum_{\ell,\ell^{\prime}=1,\ell\neq\ell^{\prime}}^{N}\gamma_{\ell}^{2}\sum_{t,t^{\prime}=1}^{T}E\eta_{t}\eta_{t^{\prime}}Ee_{\ell^{\prime},t}e_{\ell^{\prime},t^{\prime}}
+,=1,Nt,t=1TEe,te,tEe,te,t\displaystyle\hskip 28.45274pt+\sum_{\ell,\ell^{\prime}=1,\ell\neq\ell^{\prime}}^{N}\sum_{t,t^{\prime}=1}^{T}Ee_{\ell,t}e_{\ell,t^{\prime}}Ee_{\ell^{\prime},t}e_{\ell^{\prime},t^{\prime}}
=O(N2T).\displaystyle=O(N^{2}T).

We conclude from (7.43) that

(7.44) max1iK𝔢^ic^i𝔢i=OP(NT1/2).\displaystyle\max_{1\leq i\leq K}\|\hat{{\mathfrak{e}}}_{i}-\hat{c}_{i}{\mathfrak{e}}_{i}\|=O_{P}\left({N}{T^{-1/2}}\right).

Next we define

vi,T2=s=N+1N1J(sh)ri,s,{v}^{2}_{i,T}=\sum_{s=-N+1}^{N-1}J\left(\frac{s}{h}\right){r}_{i,s},

where

ri,s={1Tst=1Tsξi,tξi,t+s,ifs01T|s|t=sTξi,tξi,t+s,ifs<0,{r}_{i,s}=\left\{\begin{array}[]{ll}\displaystyle\frac{1}{T-s}\sum_{t=1}^{T-s}{\xi}_{i,t}{\xi}_{i,t+s},\;\;\mbox{if}\;\;s\geq 0\vskip 5.69046pt\\ \displaystyle\frac{1}{T-|s|}\sum_{t=-s}^{T}{\xi}_{i,t}{\xi}_{i,t+s},\;\;\mbox{if}\;\;s<0,\end{array}\right.

where ξi,t=𝔢i(𝐗t𝐗tE(𝐗0𝐗0))𝔢i{\xi}_{i,t}={\mathfrak{e}}_{i}^{\top}({\bf X}_{t}{\bf X}_{t}^{\top}-E({\bf X}_{0}{\bf X}_{0}^{\top})){\mathfrak{e}}_{i}.

We write

\displaystyle\tilde{v}^{2}_{i,T}-v^{2}_{i,T}=\sum_{j=1-N}^{N-1}J\left(\frac{j}{h}\right)(\tilde{r}_{i,j}-r_{i,j}).

For j0j\geq 0,

\displaystyle\tilde{r}_{i,j}-r_{i,j}=\frac{1}{T-j}\sum_{t=1}^{T-j}(\tilde{\xi}_{i,t}\tilde{\xi}_{i,t+j}-\xi_{i,t}\xi_{i,t+j})
=1Tjt=1Tj(ξ~i,tξi,t)ξ~i,t+j+1Tjt=1Tj(ξ~i,t+jξi,t+j)ξi,t.\displaystyle=\frac{1}{T-j}\sum_{t=1}^{T-j}(\tilde{\xi}_{i,t}-\xi_{i,t})\tilde{\xi}_{i,t+j}+\frac{1}{T-j}\sum_{t=1}^{T-j}(\tilde{\xi}_{i,t+j}-\xi_{i,t+j})\xi_{i,t}.

According to the definitions of ξ~i,t\tilde{\xi}_{i,t} and ξi,t\xi_{i,t},

ξ~i,tξi,t=(𝔢^i𝔢i)Ut𝔢^i+𝔢iUt(𝔢^i𝔢i),\displaystyle\tilde{\xi}_{i,t}-\xi_{i,t}=(\hat{{\mathfrak{e}}}_{i}^{\top}-{\mathfrak{e}}_{i}^{\top})U_{t}\hat{{\mathfrak{e}}}_{i}+{\mathfrak{e}}_{i}^{\top}U_{t}(\hat{{\mathfrak{e}}}_{i}-{\mathfrak{e}}_{i}),

where Ut=𝐗t𝐗tE𝐗0𝐗0U_{t}={\bf X}_{t}{\bf X}_{t}^{\top}-E{\bf X}_{0}{\bf X}_{0}^{\top}, from which it follows that,

(7.45) (ξ~i,tξi,t)ξ~i,t+j=(𝔢^i𝔢i)Ut𝔢^i𝔢^iUt+j𝔢^i+𝔢iUt(𝔢^i𝔢i)𝔢^iUt+j𝔢^i.\displaystyle(\tilde{\xi}_{i,t}-\xi_{i,t})\tilde{\xi}_{i,t+j}=(\hat{{\mathfrak{e}}}_{i}^{\top}-{\mathfrak{e}}_{i}^{\top})U_{t}\hat{{\mathfrak{e}}}_{i}\hat{{\mathfrak{e}}}_{i}^{\top}U_{t+j}\hat{{\mathfrak{e}}}_{i}+{\mathfrak{e}}_{i}^{\top}U_{t}(\hat{{\mathfrak{e}}}_{i}-{\mathfrak{e}}_{i})\hat{{\mathfrak{e}}}_{i}^{\top}U_{t+j}\hat{{\mathfrak{e}}}_{i}.

According to the Cauchy-Schwarz and triangle inequalities,

|j=0N1J(jh)1Tjt=1Tj(𝔢^i𝔢i)Ut𝔢^i𝔢^iUt+j𝔢^i|𝔢^i𝔢i|j=0N1J(jh)1Tjt=1TjUtUt+j|.\displaystyle\left|\sum_{j=0}^{N-1}J\left(\frac{j}{h}\right)\frac{1}{T-j}\sum_{t=1}^{T-j}(\hat{{\mathfrak{e}}}_{i}^{\top}-{\mathfrak{e}}_{i}^{\top})U_{t}\hat{{\mathfrak{e}}}_{i}\hat{{\mathfrak{e}}}_{i}^{\top}U_{t+j}\hat{{\mathfrak{e}}}_{i}\right|\leq\|\hat{{\mathfrak{e}}}_{i}-{\mathfrak{e}}_{i}\|\left|\sum_{j=0}^{N-1}J\left(\frac{j}{h}\right)\frac{1}{T-j}\sum_{t=1}^{T-j}\|U_{t}\|\|U_{t+j}\|\right|.

According to (7.44), \|\hat{{\mathfrak{e}}}_{i}-{\mathfrak{e}}_{i}\|=O_{P}(NT^{-1/2}). Furthermore, since E\|U_{0}\|^{2}=O(N^{2}) and J has bounded support, the Cauchy–Schwarz and triangle inequalities imply that

(7.46) E|j=0N1J(jh)1Tjt=1TjUtUt+j|c1hEU02=O(hN2),\displaystyle E\left|\sum_{j=0}^{N-1}J\left(\frac{j}{h}\right)\frac{1}{T-j}\sum_{t=1}^{T-j}\|U_{t}\|\|U_{t+j}\|\right|\leq c_{1}hE\|U_{0}\|^{2}=O(hN^{2}),

for some constant c_{1}. Hence, according to (7.46) and Markov’s inequality, we obtain that

(7.47) |j=0N1J(jh)1Tjt=1Tj(𝔢^i𝔢i)Ut𝔢^i𝔢^iUt+j𝔢^i|=OP(hN3T1/2).\displaystyle\left|\sum_{j=0}^{N-1}J\left(\frac{j}{h}\right)\frac{1}{T-j}\sum_{t=1}^{T-j}(\hat{{\mathfrak{e}}}_{i}^{\top}-{\mathfrak{e}}_{i}^{\top})U_{t}\hat{{\mathfrak{e}}}_{i}\hat{{\mathfrak{e}}}_{i}^{\top}U_{t+j}\hat{{\mathfrak{e}}}_{i}\right|=O_{P}(hN^{3}T^{-1/2}).

Similar arguments applied to the remaining terms in v~i,T2vi,T2\tilde{v}^{2}_{i,T}-v^{2}_{i,T} show that

(7.48) |v~i,T2vi,T2|=OP(hN3T1/2).\displaystyle|\tilde{v}^{2}_{i,T}-v^{2}_{i,T}|=O_{P}(hN^{3}T^{-1/2}).

It follows from (2.2) and Assumption 2.3(b) that

limT1G(i,i)s=Eξi,tξi,t+s=1,\displaystyle\lim_{T\to\infty}\frac{1}{G(i,i)}\sum_{s=-\infty}^{\infty}E{\xi}_{i,t}{\xi}_{i,t+s}=1,

and

\displaystyle\lim_{T\to\infty}\frac{1}{G(i,i)}\sum_{s=-\infty}^{\infty}J\left(\frac{s}{h}\right)E{\xi}_{i,t}{\xi}_{i,t+s}=1.

Since

(7.49) E{v}^{2}_{i,T}=\sum_{s=-\infty}^{\infty}J\left(\frac{s}{h}\right)E{\xi}_{i,t}{\xi}_{i,t+s},

if we show that

(7.50) limTvar(vi,T2)=0,\lim_{T\to\infty}\mbox{var}({v}^{2}_{i,T})=0,

we get immediately that

vi,T2G(i,i)P   1,asT.\frac{{v}^{2}_{i,T}}{G(i,i)}\;\;\;\stackrel{{\scriptstyle P}}{{\to}}\;\;\;1,\quad{as}\;\;\;T\to\infty.

To this end, we have that

\displaystyle\mbox{var}({v}^{2}_{i,T})=\sum_{s,s^{\prime}=-\infty}^{\infty}J\left(\frac{s}{h}\right)J\left(\frac{s^{\prime}}{h}\right)E\left[({r}_{i,s}-E\xi_{i,0}\xi_{i,s})({r}_{i,s^{\prime}}-E\xi_{i,0}\xi_{i,s^{\prime}})\right]

and

|s,s=0J(sh)J(sh)(ri,sEξi,0ξi,s)(ri,sEξi,0ξi,s)|\displaystyle\left|\sum_{s,s^{\prime}=0}^{\infty}J\left(\frac{s}{h}\right)J\left(\frac{s^{\prime}}{h}\right)({r}_{i,s}-E\xi_{i,0}\xi_{i,s})({r}_{i,s^{\prime}}-E\xi_{i,0}\xi_{i,s^{\prime}})\right|
s,s=0|J(sh)J(sh)|1Ts1Tst=1Tst=1Ts|E𝔢i𝐗t𝐗t𝔢i𝔢i𝐗t+s𝐗t+s𝔢i𝔢i𝐗t𝐗t𝔢i\displaystyle\hskip 14.22636pt\leq\sum_{s,s^{\prime}=0}^{\infty}\Biggl{|}J\left(\frac{s}{h}\right)J\left(\frac{s^{\prime}}{h}\right)\Biggl{|}\frac{1}{T-s}\frac{1}{T-s^{\prime}}\sum_{t=1}^{T-s}\sum_{t^{\prime}=1}^{T-s^{\prime}}\biggl{|}E{\mathfrak{e}}_{i}^{\top}{\bf X}_{t}{\bf X}_{t}^{\top}{\mathfrak{e}}_{i}{\mathfrak{e}}_{i}^{\top}{\bf X}_{t+s}{\bf X}_{t+s}^{\top}{\mathfrak{e}}_{i}{\mathfrak{e}}_{i}^{\top}{\bf X}_{t^{\prime}}{\bf X}_{t^{\prime}}^{\top}{\mathfrak{e}}_{i}
×𝔢i𝐗t+s𝐗t+s𝔢iE𝔢i𝐗t𝐗t𝔢i𝔢i𝐗t+s𝐗t+s𝔢iE𝔢i𝐗t𝐗t𝔢i𝔢i𝐗t+s𝐗t+s𝔢i|\displaystyle\hskip 25.6073pt\times{\mathfrak{e}}_{i}^{\top}{\bf X}_{t^{\prime}+s^{\prime}}{\bf X}_{t^{\prime}+s^{\prime}}^{\top}{\mathfrak{e}}_{i}-E{\mathfrak{e}}_{i}^{\top}{\bf X}_{t}{\bf X}_{t}^{\top}{\mathfrak{e}}_{i}{\mathfrak{e}}_{i}^{\top}{\bf X}_{t+s}{\bf X}_{t+s}^{\top}{\mathfrak{e}}_{i}E{\mathfrak{e}}_{i}^{\top}{\bf X}_{t^{\prime}}{\bf X}_{t^{\prime}}^{\top}{\mathfrak{e}}_{i}{\mathfrak{e}}_{i}^{\top}{\bf X}_{t^{\prime}+s^{\prime}}{\bf X}_{t^{\prime}+s^{\prime}}^{\top}{\mathfrak{e}}_{i}\biggl{|}
c7,71T2s,s=0ht=1Tst=1Ts|E𝔢i𝐗t𝐗t𝔢i𝔢i𝐗t+s𝐗t+s𝔢i𝔢i𝐗t𝐗t𝔢i𝔢i𝐗t+s𝐗t+s𝔢i\displaystyle\hskip 14.22636pt\leq c_{7,7}\frac{1}{T^{2}}\sum_{s,s^{\prime}=0}^{h}\sum_{t=1}^{T-s}\sum_{t^{\prime}=1}^{T-s^{\prime}}\biggl{|}E{\mathfrak{e}}_{i}^{\top}{\bf X}_{t}{\bf X}_{t}^{\top}{\mathfrak{e}}_{i}{\mathfrak{e}}_{i}^{\top}{\bf X}_{t+s}{\bf X}_{t+s}^{\top}{\mathfrak{e}}_{i}{\mathfrak{e}}_{i}^{\top}{\bf X}_{t^{\prime}}{\bf X}_{t^{\prime}}^{\top}{\mathfrak{e}}_{i}{\mathfrak{e}}_{i}^{\top}{\bf X}_{t^{\prime}+s^{\prime}}{\bf X}_{t^{\prime}+s^{\prime}}^{\top}{\mathfrak{e}}_{i}
E𝔢i𝐗t𝐗t𝔢i𝔢i𝐗t+s𝐗t+s𝔢iE𝔢i𝐗t𝐗t𝔢i𝔢i𝐗t+s𝐗t+s𝔢i|\displaystyle\hskip 25.6073pt-E{\mathfrak{e}}_{i}^{\top}{\bf X}_{t}{\bf X}_{t}^{\top}{\mathfrak{e}}_{i}{\mathfrak{e}}_{i}^{\top}{\bf X}_{t+s}{\bf X}_{t+s}^{\top}{\mathfrak{e}}_{i}E{\mathfrak{e}}_{i}^{\top}{\bf X}_{t^{\prime}}{\bf X}_{t^{\prime}}^{\top}{\mathfrak{e}}_{i}{\mathfrak{e}}_{i}^{\top}{\bf X}_{t^{\prime}+s^{\prime}}{\bf X}_{t^{\prime}+s^{\prime}}^{\top}{\mathfrak{e}}_{i}\biggl{|}
=O(hT),\displaystyle\hskip 14.22636pt=O\left(\frac{h}{T}\right),

with some constant c7,7c_{7,7}, since we can assume without loss of generality that J(u)=0J(u)=0 if |u|1|u|\geq 1.

Now we assume that the conditions of Theorem 6.2 are satisfied. First we prove (7.37). It follows from (2.2) and (7.49) that

\lim_{T\to\infty}\frac{1}{\|{\mbox{\boldmath$\gamma$}}\|^{4}}Ev^{2}_{1,T}=V_{1}.

Following the proof of (7.50) one can verify that

\lim_{T\to\infty}\mbox{var}\left(\frac{1}{\|{\mbox{\boldmath$\gamma$}}\|^{4}}v^{2}_{1,T}\right)=0,

completing the proof of (7.37). The proof of (7.38) goes along the lines of that of (7.36) and therefore the details are omitted.∎

Proof of Theorem 3.2. We can assume without loss of generality that μi=0,1iN\mu_{i}=0,1\leq i\leq N. Using (2.1) we have

t=1s(𝐗t𝐗¯T)(𝐗t𝐗¯T)={s(TtT)2𝜹𝜹+u=1s𝐔u,T,if   0st(t(TtT)2+(st)(tT)2)𝜹𝜹+u=1s𝐔u,T,iftsT,\sum_{t=1}^{s}({\bf X}_{t}-\bar{{\bf X}}_{T})({\bf X}_{t}-\bar{{\bf X}}_{T})^{\top}=\left\{\begin{array}[]{ll}\displaystyle s\left(\frac{T-t^{*}}{T}\right)^{2}{\mbox{\boldmath$\delta$}}{\mbox{\boldmath$\delta$}}^{\top}+\sum_{u=1}^{s}{\bf U}_{u,T},\;\;\mbox{if}\;\;\;0\leq s\leq t^{*}\vskip 5.69046pt\\ \displaystyle\left(t^{*}\left(\frac{T-t^{*}}{T}\right)^{2}+(s-t^{*})\left(\frac{t^{*}}{T}\right)^{2}\right){\mbox{\boldmath$\delta$}}{\mbox{\boldmath$\delta$}}^{\top}+\sum_{u=1}^{s}{\bf U}_{u,T},\vskip 5.69046pt\\ \hskip 85.35826pt\;\;\mbox{if}\;\;\;t^{*}\leq s\leq T,\end{array}\right.

where

𝐔u,T=\displaystyle{\bf U}_{u,T}= (𝜸ηu+𝐞u)(𝜸ηu+𝐞u)TtT(𝜸ηu+𝐞u)𝜹(𝜸ηu+𝐞u)𝐙TTtT𝜹(𝜸ηu+𝐞u)\displaystyle({\mbox{\boldmath$\gamma$}}\eta_{u}+{\bf e}_{u})({\mbox{\boldmath$\gamma$}}\eta_{u}+{\bf e}_{u})^{\top}-\frac{T-t^{*}}{T}({\mbox{\boldmath$\gamma$}}\eta_{u}+{\bf e}_{u}){\mbox{\boldmath$\delta$}}^{\top}-({\mbox{\boldmath$\gamma$}}\eta_{u}+{\bf e}_{u}){\bf Z}_{T}^{\top}-\frac{T-t^{*}}{T}{\mbox{\boldmath$\delta$}}({\mbox{\boldmath$\gamma$}}\eta_{u}+{\bf e}_{u})^{\top}
+TtT𝜹𝐙T𝐙T(𝜸ηu+𝐞u)+TtT𝐙T𝜹+𝐙T𝐙T,if  1ut,\displaystyle+\frac{T-t^{*}}{T}{\mbox{\boldmath$\delta$}}{\bf Z}_{T}^{\top}-{\bf Z}_{T}({\mbox{\boldmath$\gamma$}}\eta_{u}+{\bf e}_{u})^{\top}+\frac{T-t^{*}}{T}{\bf Z}_{T}{\mbox{\boldmath$\delta$}}^{\top}+{\bf Z}_{T}{\bf Z}_{T}^{\top},\;\;\mbox{if}\;\;1\leq u\leq t^{*},

with

𝐙T=𝜸1Tv=1Tηv+1Tv=1T𝐞v,𝐞v=(e1,v,e2,v,,eN,v){\bf Z}_{T}={\mbox{\boldmath$\gamma$}}\frac{1}{T}\sum_{v=1}^{T}\eta_{v}+\frac{1}{T}\sum_{v=1}^{T}{\bf e}_{v},\quad{\bf e}_{v}=(e_{1,v},e_{2,v},\ldots,e_{N,v})^{\top}

and

\begin{align*}
\mathbf{U}_{u,T}={}&(\boldsymbol{\gamma}\eta_{u}+\mathbf{e}_{u})(\boldsymbol{\gamma}\eta_{u}+\mathbf{e}_{u})^{\top}+\frac{t^{*}}{T}(\boldsymbol{\gamma}\eta_{u}+\mathbf{e}_{u})\boldsymbol{\delta}^{\top}-(\boldsymbol{\gamma}\eta_{u}+\mathbf{e}_{u})\mathbf{Z}_{T}^{\top}+\frac{t^{*}}{T}\boldsymbol{\delta}(\boldsymbol{\gamma}\eta_{u}+\mathbf{e}_{u})^{\top}\\
&-\frac{t^{*}}{T}\boldsymbol{\delta}\mathbf{Z}_{T}^{\top}-\mathbf{Z}_{T}(\boldsymbol{\gamma}\eta_{u}+\mathbf{e}_{u})^{\top}-\frac{t^{*}}{T}\mathbf{Z}_{T}\boldsymbol{\delta}^{\top}+\mathbf{Z}_{T}\mathbf{Z}_{T}^{\top},\quad\text{if}\;\;t^{*}<u\leq T.
\end{align*}
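As a cross-check on the decomposition above, assume (consistently with the terms collected in $\mathbf{U}_{u,T}$) that under (2.1) with $\mu_{i}=0$ the observations take the form $\mathbf{X}_{t}=\boldsymbol{\gamma}\eta_{t}+\mathbf{e}_{t}$ for $t\leq t^{*}$ and $\mathbf{X}_{t}=\boldsymbol{\delta}+\boldsymbol{\gamma}\eta_{t}+\mathbf{e}_{t}$ for $t>t^{*}$. Then $\bar{\mathbf{X}}_{T}=\frac{T-t^{*}}{T}\boldsymbol{\delta}+\mathbf{Z}_{T}$, so that
\[
\mathbf{X}_{t}-\bar{\mathbf{X}}_{T}=(\boldsymbol{\gamma}\eta_{t}+\mathbf{e}_{t})-\frac{T-t^{*}}{T}\boldsymbol{\delta}-\mathbf{Z}_{T}\;\;(t\leq t^{*}),
\qquad
\mathbf{X}_{t}-\bar{\mathbf{X}}_{T}=(\boldsymbol{\gamma}\eta_{t}+\mathbf{e}_{t})+\frac{t^{*}}{T}\boldsymbol{\delta}-\mathbf{Z}_{T}\;\;(t>t^{*}),
\]
and expanding the outer products $(\mathbf{X}_{t}-\bar{\mathbf{X}}_{T})(\mathbf{X}_{t}-\bar{\mathbf{X}}_{T})^{\top}$ produces exactly the deterministic terms $\left(\frac{T-t^{*}}{T}\right)^{2}\boldsymbol{\delta}\boldsymbol{\delta}^{\top}$ and $\left(\frac{t^{*}}{T}\right)^{2}\boldsymbol{\delta}\boldsymbol{\delta}^{\top}$ separated out in the two cases, together with the cross terms listed in the corresponding forms of $\mathbf{U}_{u,T}$.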

It follows along the same lines as the proof of (7.23) that

\[
\sup_{0\leq s\leq T}\left|\mathfrak{e}^{\top}\sum_{u=1}^{s}\mathbf{U}_{u,T}\,\mathfrak{e}\right|=O_{P}(T^{1/2})
\]

for $\mathfrak{e}\in R^{N}$ with $\|\mathfrak{e}\|=1$. Hence

\[
\frac{\hat{\lambda}_{1,T}(t^{*}/T)}{\|\boldsymbol{\delta}\|}\;\stackrel{P}{\to}\;\theta(1-\theta)^{2}
\]

and

\[
\frac{\hat{\lambda}_{1,T}(1)}{\|\boldsymbol{\delta}\|}\;\stackrel{P}{\to}\;\theta(1-\theta),
\]

which completes the proof of Theorem 3.2.
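The constants in the last two limits come from elementary arithmetic on the deterministic coefficients of $\boldsymbol{\delta}\boldsymbol{\delta}^{\top}$ in the decomposition at the start of the proof, evaluated at $s=t^{*}$ and $s=T$ and using $t^{*}/T\to\theta$:
\begin{align*}
t^{*}\left(\frac{T-t^{*}}{T}\right)^{2}&=T\,\frac{t^{*}}{T}\left(1-\frac{t^{*}}{T}\right)^{2}=T\theta(1-\theta)^{2}\bigl(1+o(1)\bigr),\\
t^{*}\left(\frac{T-t^{*}}{T}\right)^{2}+(T-t^{*})\left(\frac{t^{*}}{T}\right)^{2}&=\frac{t^{*}(T-t^{*})}{T^{2}}\bigl[(T-t^{*})+t^{*}\bigr]=\frac{t^{*}(T-t^{*})}{T}=T\theta(1-\theta)\bigl(1+o(1)\bigr).
\end{align*}
After the time normalization these coefficients give the factors $\theta(1-\theta)^{2}$ and $\theta(1-\theta)$ appearing above.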

The proof of Theorem 3.3 is based on the following lemma.

Lemma 7.7.

We assume that model (3.9) holds and that Assumptions 2.1, 2.2, 2.3, (2.7) and (3.5) are satisfied. If for some $0<\epsilon<1$ there exists an $N_{0}$ such that for all $N\geq N_{0}$

(7.51)
\[
\left|\frac{\sup\left\{\mathbf{v}^{\top}[\theta\boldsymbol{\gamma}\boldsymbol{\gamma}^{\top}+(1-\theta)(\boldsymbol{\gamma}+\boldsymbol{\psi})(\boldsymbol{\gamma}+\boldsymbol{\psi})^{\top}+\boldsymbol{\Lambda}]\mathbf{v}:\;\mathbf{v}\in R^{N},\,\|\mathbf{v}\|=1\right\}}{\sup\left\{\mathbf{v}^{\top}[\boldsymbol{\gamma}\boldsymbol{\gamma}^{\top}+\boldsymbol{\Lambda}]\mathbf{v}:\;\mathbf{v}\in R^{N},\,\|\mathbf{v}\|=1\right\}}-1\right|>\epsilon,
\]
then (3.8) holds.
Proof.

Since the means of the panels do not change during the observation period in (3.9), we can assume without loss of generality that $\mu_{i}=0$, $1\leq i\leq N$. It follows from Theorems 6.1 and 6.2 that for any $u^{*}\in(0,\theta]$

\[
\left|\hat{\lambda}_{1}(u^{*})-\lambda_{1}\right|=O_{P}\left(NT^{-1/2}\right).
\]

One can show that Lemmas 7.1 and 7.2 hold with minor modifications under model (3.9), and thus

\[
\left|\hat{\lambda}_{1}(1)-\bar{\lambda}_{1}(1)\right|=O_{P}\left(NT^{-1}\right),
\]

where $\bar{\lambda}_{1}(1)$ is the largest eigenvalue of $\sum_{t=1}^{T}\mathbf{X}_{t}\mathbf{X}_{t}^{\top}/T$. Simple arithmetic shows that

\[
\frac{1}{T}\sum_{t=1}^{T}\mathbf{X}_{t}\mathbf{X}_{t}^{\top}=\mathbf{C}_{T}^{(1)}+\mathbf{G}_{1,T}+\mathbf{G}_{2,T},
\]

where

\[
\mathbf{C}_{T}^{(1)}=\boldsymbol{\gamma}\boldsymbol{\gamma}^{\top}\frac{1}{T}\sum_{t=1}^{T}\eta_{t}^{2}+\frac{1}{T}\sum_{t=1}^{T}\mathbf{e}_{t}\mathbf{e}_{t}^{\top}+(\boldsymbol{\psi}\boldsymbol{\psi}^{\top}+\boldsymbol{\gamma}\boldsymbol{\psi}^{\top}+\boldsymbol{\psi}\boldsymbol{\gamma}^{\top})\frac{1}{T}\sum_{t=t^{*}}^{T}\eta_{t}^{2},
\]
\[
\mathbf{G}_{1,T}=\frac{1}{T}\sum_{t=1}^{T}(\boldsymbol{\gamma}\mathbf{e}_{t}^{\top}+\mathbf{e}_{t}\boldsymbol{\gamma}^{\top})\eta_{t}
\]

and

\[
\mathbf{G}_{2,T}=\frac{1}{T}\sum_{t=t^{*}}^{T}(\mathbf{e}_{t}\boldsymbol{\psi}^{\top}+\boldsymbol{\psi}\mathbf{e}_{t}^{\top})\eta_{t}.
\]
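For concreteness, here is a brief check of the simple arithmetic referred to above; it assumes (consistently with the summation ranges in $\mathbf{C}_{T}^{(1)}$ and $\mathbf{G}_{2,T}$) that under model (3.9) with $\mu_{i}=0$ the factor loadings equal $\boldsymbol{\gamma}$ for $t<t^{*}$ and $\boldsymbol{\gamma}+\boldsymbol{\psi}$ for $t^{*}\leq t\leq T$:
\begin{align*}
\frac{1}{T}\sum_{t=1}^{T}\mathbf{X}_{t}\mathbf{X}_{t}^{\top}
&=\frac{1}{T}\sum_{t=1}^{t^{*}-1}(\boldsymbol{\gamma}\eta_{t}+\mathbf{e}_{t})(\boldsymbol{\gamma}\eta_{t}+\mathbf{e}_{t})^{\top}
+\frac{1}{T}\sum_{t=t^{*}}^{T}\bigl((\boldsymbol{\gamma}+\boldsymbol{\psi})\eta_{t}+\mathbf{e}_{t}\bigr)\bigl((\boldsymbol{\gamma}+\boldsymbol{\psi})\eta_{t}+\mathbf{e}_{t}\bigr)^{\top}\\
&=\boldsymbol{\gamma}\boldsymbol{\gamma}^{\top}\frac{1}{T}\sum_{t=1}^{T}\eta_{t}^{2}
+\frac{1}{T}\sum_{t=1}^{T}\mathbf{e}_{t}\mathbf{e}_{t}^{\top}
+(\boldsymbol{\psi}\boldsymbol{\psi}^{\top}+\boldsymbol{\gamma}\boldsymbol{\psi}^{\top}+\boldsymbol{\psi}\boldsymbol{\gamma}^{\top})\frac{1}{T}\sum_{t=t^{*}}^{T}\eta_{t}^{2}\\
&\qquad+\frac{1}{T}\sum_{t=1}^{T}(\boldsymbol{\gamma}\mathbf{e}_{t}^{\top}+\mathbf{e}_{t}\boldsymbol{\gamma}^{\top})\eta_{t}
+\frac{1}{T}\sum_{t=t^{*}}^{T}(\mathbf{e}_{t}\boldsymbol{\psi}^{\top}+\boldsymbol{\psi}\mathbf{e}_{t}^{\top})\eta_{t}
=\mathbf{C}_{T}^{(1)}+\mathbf{G}_{1,T}+\mathbf{G}_{2,T},
\end{align*}
so every cross term is accounted for and no additional remainder appears.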

It follows along the lines of the proof of (7.44) that

\[
\|\mathbf{G}_{i,T}\|=O_{P}(NT^{-1/2}),\quad i=1,2,
\]

and thus if $\lambda_{T}^{(1)}$ denotes the largest eigenvalue of $\mathbf{C}_{T}^{(1)}$, then we also have that

\[
\left|\bar{\lambda}_{1}(1)-\lambda_{T}^{(1)}\right|=O_{P}(NT^{-1/2}).
\]

Let $\phi_{T}$ be the largest eigenvalue of $(\boldsymbol{\psi}\boldsymbol{\psi}^{\top}+\boldsymbol{\psi}\boldsymbol{\gamma}^{\top}+\boldsymbol{\gamma}\boldsymbol{\psi}^{\top})(1-\theta)+\boldsymbol{\gamma}\boldsymbol{\gamma}^{\top}+\boldsymbol{\Lambda}$. Then one can show, using the arguments establishing Theorems 6.1 and 6.2, that

\[
\left|\lambda_{T}^{(1)}-\phi_{T}\right|=O_{P}(NT^{-1/2}).
\]

Assumption (7.51) implies that there is an $\epsilon>0$ such that, for all $T$ sufficiently large,

\[
\left|\frac{\lambda_{1}}{\phi_{T}}-1\right|>\epsilon,
\]

and therefore there is a constant $c_{8,1}$ such that

\[
\left|\lambda_{1}-\phi_{T}\right|>c_{8,1}.
\]
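To spell out the intermediate step being used (a sketch; the chain below only combines the displayed bounds by the triangle inequality):
\begin{align*}
\left|\hat{\lambda}_{1}(u^{*})-\hat{\lambda}_{1}(1)\right|
&\geq\left|\lambda_{1}-\phi_{T}\right|-\left|\hat{\lambda}_{1}(u^{*})-\lambda_{1}\right|-\left|\hat{\lambda}_{1}(1)-\bar{\lambda}_{1}(1)\right|-\left|\bar{\lambda}_{1}(1)-\lambda_{T}^{(1)}\right|-\left|\lambda_{T}^{(1)}-\phi_{T}\right|\\
&\geq c_{8,1}-O_{P}\left(NT^{-1/2}\right),
\end{align*}
so that, whenever $NT^{-1/2}$ is asymptotically negligible (a rate restriction not restated here), $\hat{\lambda}_{1}(u^{*})$ and $\hat{\lambda}_{1}(1)$ are separated by an amount bounded away from zero in probability.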

Observing that $\hat{v}_{1,T}=O_{P}(h^{1/2})$ and $\sup_{0\leq u\leq 1}|\hat{B}_{T,1}(u)|\geq|\hat{B}_{T,1}(u^{*})|$, the proof of (3.8) is complete. ∎

Proof of Theorem 3.3. It is clear that assumption (3.10) implies (7.51), and therefore Theorem 3.3 follows from Lemma 7.7.

References

  • [1] Ang, A. and Chen, J.: Asymmetric correlation of equity portfolios. Journal of Financial Economics 63(2002), 443–494.
  • [2] Aue, A., Hörmann, S., Horváth, L. and Reimherr, M.: Break detection in the covariance structure of multivariate time series models. The Annals of Statistics 37(2009), 4046–4087.
  • [3] Aue, A. and Horváth, L.: Structural breaks in time series. Journal of Time Series Analysis 34(2013), 1–16.
  • [4] Aue, A. and Paul, D.: Random matrix theory in statistics: a review. Journal of Statistical Planning and Inference 150(2014), 1–29.
  • [5] Bai, J.: Common breaks in means and variances for panel data. Journal of Econometrics 157(2010), 78–92.
  • [6] Bai, J. and Ng, S.: Determining the number of factors in approximate factor models. Econometrica 70(2002), 191–221.
  • [7] Berkes, I., Hörmann, S. and Schauer, J.: Split invariance principles for stationary processes. The Annals of Probability 39(2011), 2441–2473.
  • [8] Billingsley, P.: Convergence of Probability Measures. Wiley, New York, 1968.
  • [9] Breitung, J. and Eickmeier, S.: Testing for structural breaks in dynamic factor models. Journal of Econometrics 163(2011), 71–84.
  • [10] Dunford, N. and Schwartz, J.T.: Linear Operators, General Theory (Part 1). Wiley, New York, 1988.
  • [11] Galeano, P. and Peña, D.: Covariance changes detection in multivariate time series. Journal of Statistical Planning and Inference 137(2007), 194–211.
  • [12] Gürkaynak, R., Sack, B. and Wright, J.: The U.S. treasury yield curve: 1961 to the present. Journal of Monetary Economics 54(2007), 2291–2304.
  • [13] Horváth, L. and Hušková, M.: Change–point detection in panel data. Journal of Time Series Analysis 33(2012), 631–648.
  • [14] Hall, P. and Hosseini–Nasab, M.: Theory for high–order bounds in functional principal components analysis. Mathematical Proceedings of the Cambridge Philosophical Society 146(2009), 225–256.
  • [15] Johnstone, I.: Multivariate analysis and Jacobi ensembles: largest eigenvalue, Tracy–Widom limits and rates of convergence. Annals of Statistics 36(2008), 2638–2716.
  • [16] Kao, C., Trapani, L. and Urga, G.: Testing for breaks in cointegrated panels. Econometric Reviews (2015), to appear.
  • [17] Kejriwal, M.: Test of a mean shift with good size and monotonic power. Economics Letters 102(2015), 78–82.
  • [18] Keogh, G., Sharifi, S., Ruskin, H. and Crane, M.: Epochs in market sector index data–empirical or optimistic? In: The Application of Econophysics, pp. 83–89, Springer, New York, 2004.
  • [19] Kim, D.: Estimating a common deterministic time trend break in large panels with cross sectional dependence. Journal of Econometrics 164(2011), 310–330.
  • [20] Kim, D.: Common breaks in time trends for large panel data with a factor structure. Econometrics Journal (2014), in press.
  • [21] Li, D., Qian, J. and Su, L.: Panel data models with interactive fixed effects and multiple structural breaks. (2015), under revision.
  • [22] Markowitz, H.: Portfolio selection. Journal of Finance 7(1952), 77–91.
  • [23] Markowitz, H.: The optimization of a quadratic function subject to linear constraints. Naval Research Logistics Quarterly 3(1956), 111–133.
  • [24] Móricz, F., Serfling, R. and Stout, W.: Moment and probability bounds with quasi-superadditive structure for the maximal partial sum. Annals of Probability 10(1982), 1032–1040.
  • [25] Petrov, V.V.: Limit Theorems of Probability Theory. Clarendon Press, Oxford, 1995.
  • [26] Qian, J. and Su, L.: Shrinkage estimation of common breaks in panel data models via adaptive group fused Lasso. Working paper (2014), Singapore Management University.
  • [27] R Development Core Team: R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria, 2008.
  • [28] Rényi, A.: On the theory of order statistics. Acta Mathematica Academiae Scientiarum Hungaricae 4(1953), 191–227.
  • [29] Shorack, G.R. and Wellner, J.A.: Empirical Processes with Applications to Statistics. Wiley, New York, 1986.
  • [30] Taniguchi, M. and Kakizawa, Y.: Asymptotic Theory of Statistical Inference for Time Series. Springer, New York, 2000.
  • [31] Wu, W.: Nonlinear system theory: another look at dependence. Proceedings of the National Academy of Sciences of the United States of America 102(2005), 14150–14154.
  • [32] Wied, D., Krämer, W. and Dehling, H.: Testing for a change in correlation at an unknown point in time using an extended functional delta method. Econometric Theory 28(2012), 570–589.
  • [33] Yamamoto, Y. and Tanaka, S.: Testing for factor loading structural change under common breaks. Journal of Econometrics 189(2015), 187–206.
  • [34] Zeileis, A.: Object-oriented computation of sandwich estimators. Journal of Statistical Software 16(2006), 1–16.
  • [35] Zovko, I.I. and Farmer, J.D.: Correlations and clustering in the trading of members of the London Stock Exchange. In: Complexity, Metastability and Nonextensivity: An International Conference, AIP Conference Proceedings, Springer, New York, 2007.