
Limit Theorems for low dimensional generalized $T,T^{-1}$ transformations

D. Dolgopyat1, C. Dong2, A. Kanigowski1,4, P. Nándori3
1 University of Maryland, College Park, MD, USA
2 Chern Institute of Mathematics and LPMC, Nankai University, Tianjin, China
3 Yeshiva University, New York, NY, USA
4 Jagiellonian University, Krakow, Poland
Abstract.

We consider generalized $(T,T^{-1})$ transformations such that the base map satisfies a multiple mixing local limit theorem and anticoncentration large deviation bounds, and in the fiber we have $\mathbb{R}^{d}$ actions with $d=1$ or $2$ which are exponentially mixing of all orders. If the skewing cocycle has zero drift, we show that the ergodic sums satisfy the same limit theorems as the random walks in random scenery studied by Kesten and Spitzer (1979) and Bolthausen (1989). The proofs rely on the quenched CLT for the fiber action and the control of the quenched variance. This paper complements our previous work where the classical central limit theorem is obtained for a large class of generalized $(T,T^{-1})$ transformations.

Key words and phrases:
$(T,T^{-1})$ transformations; Kesten–Spitzer process; quenched and annealed limit theorems; local time
1991 Mathematics Subject Classification:
Primary: 60F05. Secondary: 37D30, 60J55, 60K35

1. Introduction

This work is a continuation of our study of generalized $T,T^{-1}$ transformations, following previous work [11, 12, 13]. Our main innovation in this paper is to provide several limit theorems in the low dimensional setting, complementing the higher dimensional case treated in [12].

1.1. Results

Let $f$ be a smooth map of a manifold $X$ preserving a measure $\mu$, and let $G_{t}$ be an $\mathbb{R}^{d}$ action on a manifold $Y$ preserving a measure $\nu$.

Definition 1.1.

$G_{t}$ enjoys multiple exponential mixing of all orders if there is $\alpha>0$ such that for each $r$ there are constants $C,c>0$ such that for all zero mean $C^{\alpha}$ functions $A_{1},\dots,A_{r}$ and all $t_{1},\dots,t_{r}\in\mathbb{R}^{d}$,

(1.1) \left|\nu\left(\prod_{j=1}^{r}A_{j}(G_{t_{j}}y)\right)\right|\leq C\left[\prod_{j=1}^{r}\|A_{j}\|_{C^{\alpha}}\right]e^{-c\mathfrak{l}},

where the gap $\mathfrak{l}$ is defined by $\mathfrak{l}=\max_{i\neq j}\|t_{i}-t_{j}\|$.

Remark 1.2.

A simple interpolation argument shows that if (1.1) holds for some $\alpha$ then it also holds for all $\alpha$ (see e.g. [14, Appendix A]).

In the case $d=1$, there are plenty of examples of multiply exponentially mixing systems; see e.g. the discussion in [11]. In the case $d=2$ our main example is the following: $Y=SL_{3}(\mathbb{R})/\Gamma$, where $\Gamma$ is a cocompact lattice, $G_{t}:Y\to Y$ is the Cartan action on $Y$, and $\nu$ is the Haar measure. More generally, one can consider subactions of Cartan actions on $\mathcal{G}/\Gamma$ where $\mathcal{G}$ is a semisimple Lie group with compact factors and $\Gamma$ is a cocompact lattice. In particular, we can take $Y=SL_{d}(\mathbb{R})/\Gamma$ and let $G_{t}$ be an action of a two dimensional subgroup of the group of diagonal matrices.

Let $\tau:X\to\mathbb{R}^{d}$ be a smooth map. We study the map $F:X\times Y\to X\times Y$ given by

(1.2) F(x,y)=(f(x),G_{\tau(x)}y).

Note that $F$ preserves the measure $\zeta=\mu\times\nu$ and that

F^{N}(x,y)=(f^{N}x,G_{\tau_{N}(x)}y)\quad\text{where}\quad\tau_{N}(x)=\sum_{n=0}^{N-1}\tau(f^{n}x).
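As a concrete, purely illustrative sketch of this skew product structure, the snippet below takes the doubling map as a toy base map $f$, a hypothetical skewing function $\tau(x)=\sin(2\pi x)$, and the translation flow on the circle as a toy $\mathbb{R}$ action $G_{t}$; these choices are not mixing in the sense required below and serve only to check the cocycle identity for $F^{N}$ numerically.

```python
import math

def f(x):
    # Toy base map: the doubling map on [0, 1).
    return (2.0 * x) % 1.0

def tau(x):
    # Hypothetical smooth skewing function.
    return math.sin(2.0 * math.pi * x)

def G(t, y):
    # Toy R-action on the circle: translation flow.
    return (y + t) % 1.0

def F(x, y):
    # The skew product (1.2): F(x, y) = (f(x), G_{tau(x)} y).
    return f(x), G(tau(x), y)

def tau_N(x, N):
    # Birkhoff sum tau_N(x) = sum_{n < N} tau(f^n x).
    s = 0.0
    for _ in range(N):
        s += tau(x)
        x = f(x)
    return s

# Check F^N(x, y) = (f^N x, G_{tau_N(x)} y) up to floating point error.
x0, y0, N = 0.123456, 0.654321, 20
x, y = x0, y0
for _ in range(N):
    x, y = F(x, y)
xN = x0
for _ in range(N):
    xN = f(xN)
print(abs(x - xN), abs(y - G(tau_N(x0, N), y0)))
```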

Let $H:X\times Y\to\mathbb{R}$ be a sufficiently smooth function with $\zeta(H)=0$ and let

S_{N}=\sum_{n=0}^{N-1}H(F^{n}(x,y)).

We want to study the distribution of $S_{N}$ when the initial condition $(x,y)$ is distributed according to $\zeta$.

We shall assume that $f$ is ergodic and satisfies the CLT for Hölder functions.

Definition 1.3.

We say that $\tau$ satisfies the mixing local limit theorem (MLLT) if for any sequence $(\delta_{n})_{n\in\mathbb{N}}\subset\mathbb{R}$ with $\lim_{n\to\infty}\delta_{n}=0$, any sequence $(z_{n})_{n\in\mathbb{N}}\subset\mathbb{R}^{d}$ such that $|\frac{z_{n}}{\sqrt{n}}-z|<\delta_{n}$, any cube $\mathcal{C}\subset\mathbb{R}^{d}$, and any continuous functions $A_{0},A_{1}:X\to\mathbb{R}$,

\lim_{n\to\infty}n^{d/2}\mu\Big(A_{0}(\cdot)A_{1}(f^{n}\cdot)\mathbbm{1}_{\mathcal{C}}(\tau_{n}-z_{n})\Big)=\mathfrak{g}(z)\mu(A_{0})\mu(A_{1})\mathrm{Vol}(\mathcal{C}),

where $\mathfrak{g}(z)$ is a non-degenerate Gaussian density and the convergence is uniform once $(\delta_{n})_{n\in\mathbb{N}}$ is fixed and $A_{0},A_{1},z$ range over compact subsets of $C(X)$, $C(X)$ and $\mathbb{R}^{d}$.

Definition 1.4.

$\tau$ satisfies the multiple mixing local limit theorem (MMLLT) if for each $m\in\mathbb{N}$, for any sequence $(\delta_{n})_{n\in\mathbb{N}}\subset\mathbb{R}$ with $\lim_{n\to\infty}\delta_{n}=0$, for any family of sequences $(z_{n}^{(1)},\dots,z_{n}^{(m)})_{n\in\mathbb{N}}$ with $|\frac{z_{n}^{(j)}}{\sqrt{n}}-z^{(j)}|<\delta_{n}$, for any cubes $\{\mathcal{C}_{j}\}_{j\leq m}\subset\mathbb{R}^{d}$, any continuous functions $A_{0},A_{1},\dots,A_{m}:X\to\mathbb{R}$, and any sequences $n_{1},\dots,n_{m}\in\mathbb{N}$ such that $n_{j+1}-n_{j}\to\infty$ (with $n_{0}=0$),

\lim_{\min|n_{j}-n_{j^{\prime}}|\to\infty}\left(\prod_{j=1}^{m}\left(n_{j}-n_{j-1}\right)^{d/2}\right)\mu\left(\prod_{j=0}^{m}A_{j}(f^{n_{j}}\cdot)\prod_{j=1}^{m}\mathbbm{1}_{\mathcal{C}_{j}}\left(\tau_{n_{j}}-z_{n_{j}}^{(j)}\right)\right)=\prod_{j=0}^{m}\mu(A_{j})\prod_{j=1}^{m}\mathfrak{g}\left(z^{(j)}-z^{(j-1)}\right)\prod_{j=1}^{m}\mathrm{Vol}(\mathcal{C}_{j}),

where $z^{(0)}=0$. Moreover, the convergence is uniform once $(\delta_{n})_{n\in\mathbb{N}}$ is fixed, $A_{0},A_{1},\dots,A_{m}$ range over compact subsets of $C(X)$, and each $z^{(j)}$, $j\leq m$, ranges over a compact subset of $\mathbb{R}^{d}$.

Definition 1.5.

$\tau$ satisfies the anticoncentration large deviation bound of order $s$ if there exist a constant $K$ and a decreasing function $\Theta$ such that $\int_{1}^{\infty}\Theta(r)r^{d}\,\mathrm{d}r<\infty$ and for any numbers $n_{1}<n_{2}<\dots<n_{s}$ and any unit cubes $C_{1},C_{2},\dots,C_{s}$ centered at $c_{1},c_{2},\dots,c_{s}$ (here $n_{0}=0$ and $c_{0}=0$),

(1.3) \mu\left(x:\tau_{n_{j}}\in C_{j}\text{ for }j=1,\dots,s\right)\leq K\left(\prod_{j=1}^{s}\left(n_{j}-n_{j-1}\right)^{-d/2}\right)\Theta\left(\max_{j}\frac{\|c_{j}-c_{j-1}\|}{\sqrt{n_{j}-n_{j-1}}}\right).

The class of maps satisfying the MMLLT and the large deviation anticoncentration bounds includes in particular the maps which admit Young towers with exponential tails, see [21, 19].

Theorem 1.6.

Suppose that $d=2$, $G_{t}$ enjoys multiple exponential mixing of all orders, $f$ satisfies the CLT for smooth observables, and $\tau$ satisfies the MMLLT and the anticoncentration large deviation bound. Then there is $\Sigma^{2}$ such that $\frac{S_{N}}{\sqrt{N\ln N}}$ converges as $N\to\infty$ to the normal distribution with zero mean and variance $\Sigma^{2}$.

Theorem 1.7.

Suppose that $d=1$, $G_{t}$ enjoys multiple exponential mixing of all orders, $f$ satisfies the CLT for smooth observables, and $\tau$ satisfies the MMLLT and the anticoncentration large deviation bound. Then there is a constant $\Sigma$ such that

(1.4) \frac{S_{N}}{N^{3/4}}\quad\text{converges as $N\to\infty$ to the product}\quad\Sigma\mathcal{L}\mathcal{N},

where $\mathcal{L}$ and $\mathcal{N}$ are independent, $\mathcal{N}$ has the standard normal distribution, and

(1.5) \mathcal{L}=\sqrt{\int_{-\infty}^{\infty}\ell_{x}^{2}\,\mathrm{d}x}\,,

where $\ell_{x}$ is the local time of the standard Brownian motion at time $1$ and spatial location $x$.
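For orientation, the second moment of $\mathcal{L}$ can be computed in closed form: writing $\int\ell_{x}^{2}\,\mathrm{d}x$ as the self-intersection local time $\int_{0}^{1}\int_{0}^{1}\delta(B_{t}-B_{s})\,\mathrm{d}s\,\mathrm{d}t$ of the Brownian path $B$ and using the Gaussian density $p_{u}(0)=(2\pi u)^{-1/2}$,

```latex
\mathbb{E}[\mathcal{L}^{2}]=\mathbb{E}\int_{-\infty}^{\infty}\ell_{x}^{2}\,\mathrm{d}x
=2\int_{0}^{1}\int_{0}^{t}p_{t-s}(0)\,\mathrm{d}s\,\mathrm{d}t
=\frac{2}{\sqrt{2\pi}}\int_{0}^{1}2\sqrt{t}\,\mathrm{d}t
=\frac{8}{3\sqrt{2\pi}}\approx 1.06,
```

so, at least formally, $\mathbb{E}[(S_{N}/N^{3/4})^{2}]\to\Sigma^{2}\,\mathbb{E}[\mathcal{L}^{2}]$ by the independence of $\mathcal{L}$ and $\mathcal{N}$.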

Remark 1.8.

If $d=1$ then the exponential mixing condition can be weakened to sufficiently fast polynomial mixing, see Theorem 5.1 in Section 5. It seems more difficult to weaken the mixing assumption in two dimensions. We do not pursue this topic since we do not know examples of smooth $\mathbb{R}^{2}$ actions which are mixing at a fast polynomial (rather than exponential) speed.

Remark 1.9.

The asymptotic variance $\Sigma^{2}$ in Theorems 1.6 and 1.7 has a similar form in both cases (see (3.10) and (4.5)). Namely, let $\tilde{H}(y)=\int_{X}H(x,y)\,d\mu(x)$. Then

\Sigma^{2}=c_{d}\int_{\mathbb{R}^{d}}\int_{Y}\tilde{H}(y)\tilde{H}(G_{t}y)\,d\nu(y)\,dt.

In dimension $1$, if $\Sigma^{2}=0$ then there is an $L^{2}$ function $J(y)$ such that

\int_{0}^{t}\tilde{H}(G_{s}y)\,ds=J(G_{t}y)-J(y).

This was shown in the discrete case by Rudolph [20] (Proposition 2 and Lemma 7), but the proof applies with no changes in the continuous case.

It is shown in [12, Theorem 6.4] that for $\mathbb{R}^{1}$ volume preserving exponentially mixing flows, the quadratic form $H\mapsto\Sigma^{2}(H)$ is not identically zero. Therefore the set of functions of vanishing asymptotic variance is a proper linear subspace. The proof of Theorem 6.4 in [12] also works for higher dimensional (not necessarily volume preserving) actions provided that

(i) there are constants $K_{1},\beta_{1},K_{2},\beta_{2}$ such that for all $y$ and $r$, $K_{1}r^{\beta_{1}}\leq\nu(B(y,r))\leq K_{2}r^{\beta_{2}}$; and

(ii) there exists a slowly recurrent point, that is, a point $y\in Y$ such that for all positive constants $K$ and $A$ there exists $r_{0}(K,A)$ so that for all $r<r_{0}$ and for all $t$ with $|t|<K$, $\nu(B(y,r)\cap G_{-t}B(y,r))\leq\frac{\nu(B(y,r))}{|\ln r|^{A}}$, where $B(y,r)$ is the ball of radius $r$ centered at $y$. (We note that for exponentially mixing volume preserving $\mathbb{R}$ actions, almost all points are slowly recurrent, see [14].)

We believe that in many cases the vanishing of the asymptotic variance entails that the ergodic sums of $H$ satisfy the classical CLT, but this will be the subject of future work.

Remark 1.10.

Results analogous to the above theorems can be proved in the case where $G$ is an action of $\mathbb{Z}^{d}$ ($d=1,2$) and $\tau:X\to\mathbb{Z}^{d}$ is a piecewise smooth map satisfying the appropriate assumptions such as the CLT, MMLLT, and anticoncentration large deviation bounds. Since the results as well as the proofs are virtually the same, we omit the case of discrete actions. One can also take $X$ to be a subshift of finite type, see the discussion below.

1.2. Discussion

Here, we discuss previous results related to Theorems 1.6, 1.7.

The first results about $T,T^{-1}$ transformations pertain to so called random walks in random scenery. In this model we are given a sequence $\{\xi_{z}\}_{z\in\mathbb{Z}^{d}}$ of i.i.d. random variables. Let $\tau_{n}$ be a simple random walk on $\mathbb{Z}^{d}$ independent of the $\xi$'s. We are interested in $S_{N}=\sum_{n=1}^{N}\xi_{\tau_{n}}$. This model can be put in the present framework as follows. Let $X$ be the set of sequences $\{v_{n}\}_{n\in\mathbb{Z}}$, where $v_{n}\in\{\pm e_{1},\pm e_{2},\dots,\pm e_{d}\}$ and the $e_{j}$ are basis vectors in $\mathbb{Z}^{d}$; let $\mu$ be the Bernoulli measure with $\mathbb{P}(v=\pm e_{j})=\frac{1}{2d}$ for all $j\in\{1,\dots,d\}$; let $Y$ be the space of sequences $\{\xi_{z}\}_{z\in\mathbb{Z}^{d}}$ and $\nu$ the product of the distributions of $\xi$; let $f$ and $G_{t}$ be shifts; and let $\tau(\{v\})=v_{0}$. For random walks in random scenery, Theorem 1.7 is due to [15], and Theorem 1.6 is due to [4]. The results of [15] and [4] are extended to more general actions in the fiber (still assuming a random walk in the base) in [7, 8].
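For illustration, the Kesten–Spitzer scaling in the $d=1$ model above is easy to probe by simulation; the following sketch (with arbitrary illustrative choices of a $\pm 1$ scenery and of the sample sizes) normalizes $S_{N}$ by $N^{3/4}$ rather than by $\sqrt{N}$.

```python
import numpy as np

rng = np.random.default_rng(0)

def rwrs_sum(N, rng):
    """One sample of S_N = sum_{n=1}^N xi_{tau_n}: a simple random walk
    tau on Z reading an i.i.d. +/-1 scenery xi (toy model)."""
    steps = rng.choice([-1, 1], size=N)
    walk = np.cumsum(steps)                    # tau_1, ..., tau_N
    sites, inverse = np.unique(walk, return_inverse=True)
    scenery = rng.choice([-1.0, 1.0], size=len(sites))  # one xi_z per visited site
    return scenery[inverse].sum()              # revisits reuse the same xi_z

N = 20000
samples = np.array([rwrs_sum(N, rng) for _ in range(200)])
# Under the N^{3/4} normalization the empirical spread stays of order one.
print(samples.std() / N ** 0.75)
```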

In the context of dynamical systems, Theorem 1.7 was proven in [17] under the assumption that one has a good rate of convergence in the Central Limit Theorem for $f$. In the present paper we follow the method of [4], which seems more flexible and allows a larger class of base systems.

We also note that if $d\geq 3$ or $\tau$ has a drift, one has a classical CLT (that is, $\frac{S_{N}-\mathbb{E}(S_{N})}{\sqrt{N}}$ converges to a Gaussian distribution). These results are proven in [11, 12]. The paper [11] also studies mixing properties of the $(T,T^{-1})$ transformation defined by (1.2). In particular, [11] shows that in many cases the mixing of the whole product $F$ is exponential. We observe that in the case where $F$ enjoys multiple exponential mixing, one can obtain the classical CLT by applying the results of [3]. The CLT of Björklund and Gorodnik [3] also plays a key role in our proof and we review it in Section 2.

In the case where the base map $f$ is uniformly hyperbolic, the skew product map $F$ is partially hyperbolic. It is possible that generic partially hyperbolic maps enjoy strong statistical properties including multiple exponential mixing and the Central Limit Theorem. However, such a result seems currently beyond reach (some special cases are considered in [1, 6]). In the present $(T,T^{-1})$ setting, the exotic limit theorems such as our Theorems 1.6 and 1.7 require the zero drift assumption, which is a codimension $d$ condition.

2. Björklund–Gorodnik CLT

In order to prove our results we use the strategy of [4], replacing the Lindeberg–Feller CLT for i.i.d. random variables by a CLT for exponentially mixing systems due to [3]. More precisely, we need the following fact.

Proposition 2.1.

Let $\mathfrak{m}_{N}$, $N\in\mathbb{N}$, be a sequence of measures on $\mathbb{R}^{d}$ and let $\{A_{t,N}\}_{t\in\mathbb{R}^{d},N\in\mathbb{N}}$ be a family of real valued functions on $Y$ such that $\|A_{t,N}\|_{C^{1}(Y)}$ is uniformly bounded and $\nu(A_{t,N})\equiv 0$. Set $\mathcal{S}_{N}:=\int_{\mathbb{R}^{d}}A_{t,N}(G_{t}y)\,\mathrm{d}\mathfrak{m}_{N}(t)$. Suppose that

(a) $\lim_{N\to\infty}\|\mathfrak{m}_{N}\|=\infty$, where $\|\mathfrak{m}\|=\mathfrak{m}(\mathbb{R}^{d})$.

(b) For each $r\in\mathbb{N}$ with $r\geq 3$ and each $K$,

\lim_{N\to\infty}\int\mathfrak{m}_{N}^{r-1}[B(t,K\ln\|\mathfrak{m}_{N}\|)]\,\mathrm{d}\mathfrak{m}_{N}(t)=0,

where $B(t,R)\subset\mathbb{R}^{d}$ is the ball of radius $R$ around $t\in\mathbb{R}^{d}$.

(c) $\lim_{N\to\infty}V_{N}=\sigma^{2}$, where

V_{N}:=\int\mathcal{S}_{N}^{2}(y)\,\mathrm{d}\nu(y)=\iiint A_{t_{1},N}(G_{t_{1}}y)A_{t_{2},N}(G_{t_{2}}y)\,\mathrm{d}\mathfrak{m}_{N}(t_{1})\,\mathrm{d}\mathfrak{m}_{N}(t_{2})\,\mathrm{d}\nu(y).

Then, as $N\to\infty$, $\mathcal{S}_{N}$ converges weakly to the normal distribution with zero mean and variance $\sigma^{2}$.

This proposition is proven in [3] in the case where $A_{t,N}$ does not depend on $t$ and $N$; however, the proof easily extends to the case of $(t,N)$-dependent $A$, see [13].

3. Dimension two

3.1. Reduction to quenched CLT

Here we prove Theorem 1.6.

Consider a function $\tilde{H}$ satisfying

(3.1) \int\tilde{H}(x,y)\,\mathrm{d}\nu(y)=0

for all $x\in X$.

Given $x\in X$, define the measures $\mathfrak{m}_{N}$ on $\mathbb{R}^{2}$ by

(3.2) \mathfrak{m}_{N}(x)=\frac{1}{\sqrt{N\ln N}}\sum_{n=0}^{N-1}\delta_{\tau_{n}(x)}

and the functions $A_{t,N,x}(y)$ by

(3.3) A_{t,N,x}(y)=\frac{1}{\mathrm{Card}(n\leq N:\tau_{n}(x)=t)}\sum_{n\leq N:\,\tau_{n}(x)=t}\tilde{H}(f^{n}x,y).
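The definitions (3.2) and (3.3) are arranged so that the quenched sum of Proposition 2.1 is exactly the normalized ergodic sum: each atom $t=\tau_{n}(x)$ of $\mathfrak{m}_{N}(x)$ carries total mass $(N\ln N)^{-1/2}\,\mathrm{Card}(m\leq N:\tau_{m}(x)=t)$, which cancels the averaging in (3.3), so that

```latex
\mathcal{S}_{N}=\int_{\mathbb{R}^{2}}A_{t,N,x}(G_{t}y)\,\mathrm{d}\mathfrak{m}_{N}(x)(t)
=\frac{1}{\sqrt{N\ln N}}\sum_{n=0}^{N-1}\tilde{H}(f^{n}x,G_{\tau_{n}(x)}y)
=\frac{1}{\sqrt{N\ln N}}\sum_{n=0}^{N-1}\tilde{H}(F^{n}(x,y)).
```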
Proposition 3.1.

Under the assumptions of Theorem 1.6, there exist $\sigma^{2}$ and subsets $X_{N}\subset X$ such that $\lim_{N\to\infty}\mu(X_{N})=1$ and for any sequence $x_{N}\in X_{N}$ the measures $\{\mathfrak{m}_{N}(x_{N})\}$ satisfy the conditions of Proposition 2.1.

The proposition will be proven later. Now we shall show how to obtain Theorem 1.6 from the proposition.

Proof of Theorem 1.6 assuming Proposition 3.1.

Split

(3.4) H(x,y)=\tilde{H}(x,y)+\bar{H}(x)\quad\text{where}\quad\bar{H}(x)=\int H(x,y)\,\mathrm{d}\nu(y).

Denote

\mathcal{S}_{N}(x,y)=\frac{1}{\sqrt{N\ln N}}\sum_{n=0}^{N-1}\tilde{H}(F^{n}(x,y)).

Note that $\tilde{H}$ satisfies (3.1) and hence, by Proposition 3.1, $\mathcal{S}_{N}(x,y)$ is asymptotically normal and asymptotically independent of $x$. Finally, the contribution of $\bar{H}$ is negligible. Indeed, the CLT for smooth observables gives $\frac{1}{\sqrt{N\ln N}}\sum_{n=0}^{N-1}\bar{H}(f^{n}(x))\Rightarrow 0$. This completes the proof of the theorem (modulo Proposition 3.1). $\square$

The remaining part of this section is devoted to the proof of Proposition 3.1. It suffices to verify the conditions of Proposition 2.1.

Property (a) is clear, since $\|\mathfrak{m}_{N}(x)\|=\sqrt{N/\ln N}$.

Verifying properties (b) and (c) requires longer computations that are presented in §3.2 and §3.3.

3.2. Property (b)

Let

(3.5) X_{K,N}=\left\{x:\mathrm{Card}\big(n:|n|<N\text{ and }\|\tau_{n}(x)\|\leq K\ln N\big)\geq N^{1/5}\right\}.
Lemma 3.2.

If for each $K$, $\lim_{N\to\infty}N\mu(X_{K,N})=0$, then there are sets $\hat{X}_{N}$ with $\mu(\hat{X}_{N})\to 1$ such that for all $x_{N}\in\hat{X}_{N}$ the measures $\mathfrak{m}_{N}(x_{N})$ satisfy property (b).

Proof.

Given $K$, let $\hat{X}_{N}=\{x:f^{n}x\not\in X_{K,N}\text{ for }n<N\}$. By the assumption of the lemma, $\mu(\hat{X}_{N})\to 1$. On the other hand, for $x\in\hat{X}_{N}$,

\int\mathfrak{m}_{N}^{r-1}(x)[B(t,K\ln N)]\,\mathrm{d}\mathfrak{m}_{N}(x)=\frac{1}{(N\ln N)^{r/2}}\sum_{n=0}^{N-1}\mathrm{Card}^{r-1}\big(j<N:\|\tau_{j}-\tau_{n}\|\leq K\ln N\big)\leq N^{-\frac{r}{2}+1+\frac{r-1}{5}}\leq N^{-1/10},

where the first inequality holds since $x\in\hat{X}_{N}$ and the second one holds because $r\geq 3$. Since $K$ is arbitrary, the result follows. $\square$
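For the reader's convenience, the exponent arithmetic behind the last two inequalities is as follows: on $\hat{X}_{N}$ each of the $N$ cardinalities is at most $N^{1/5}$, so

```latex
\frac{1}{(N\ln N)^{r/2}}\cdot N\cdot N^{\frac{r-1}{5}}
\leq N^{-\frac{r}{2}+1+\frac{r-1}{5}}
=N^{-\frac{3r-8}{10}}
\leq N^{-\frac{1}{10}}\qquad\text{for }r\geq 3.
```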

Let

(3.6) \ell(x,t,N)=\mathrm{Card}\big(n\leq N:|\tau_{n}(x)-t|\leq 1\big).
Lemma 3.3.

For each $p$ there is a constant $C_{p}$ such that for each $t\in\mathbb{R}^{d}$ and $N\in\mathbb{N}$,

(3.7) \mu\left(\ell^{p}(\cdot,t,N)\right)\leq C_{p}\ln^{p}N.
Proof.

Since $\ell(x,t,N)=\sum_{n<N}\mathbbm{1}_{B(t,1)}(\tau_{n}(x))$, we have

\mu\left(\ell^{p}(\cdot,t,N)\right)=\sum_{n_{1},\dots,n_{p}}\mu\left(x:|\tau_{n_{j}}(x)-t|\leq 1\text{ for }j=1,\dots,p\right)\leq\sum_{n_{1},\dots,n_{p}}K_{p}\frac{1}{n_{1}}\prod_{j=2}^{p}\left(\frac{1}{n_{j}-n_{j-1}+1}\right),

where the last step uses the anticoncentration large deviation bound. \square

Lemma 3.3 and Markov's inequality imply that for each $t$ and $p$ we have

\mu\left(x:\ell(x,t,N)\geq\frac{N^{1/5}}{(K\ln N)^{2}}\right)\leq\frac{C_{p}(K\ln N)^{2p}}{N^{p/5}}

for $N$ large enough. It follows that

\mu\left(x:\exists t,\ \|t\|\leq K\ln N\text{ and }\ell(x,t,N)\geq\frac{N^{1/5}}{(K\ln N)^{2}}\right)\leq\frac{C_{p}(K\ln N)^{2p+2}}{N^{p/5}}.

Taking $p=6$, we verify the conditions of Lemma 3.2.
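Indeed, covering the ball of radius $K\ln N$ by $O((K\ln N)^{2})$ unit cubes shows that (up to considering positive and negative $n$ separately) $X_{K,N}$ is contained in the event displayed above, so that with $p=6$,

```latex
N\,\mu(X_{K,N})\leq N\cdot\frac{C_{6}(K\ln N)^{14}}{N^{6/5}}
=C_{6}(K\ln N)^{14}\,N^{-1/5}\xrightarrow[N\to\infty]{}0,
```

which is the hypothesis of Lemma 3.2.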

3.3. Property (c)

Theorem 4.7 in [11] implies that

(3.8) \int\tilde{H}(x,y)\tilde{H}(F^{k}(x,y))\,\mathrm{d}\mu(x)\,\mathrm{d}\nu(y)=\frac{\mathfrak{g}(0)}{k}\varsigma_{2}^{2}+o\left(\frac{1}{k}\right),

where

(3.9) \varsigma_{2}^{2}=\int_{X}\int_{X}\int_{\mathbb{R}^{2}}\int_{Y}\tilde{H}(x_{1},y)\tilde{H}(x_{2},G_{t}y)\,\mathrm{d}\nu(y)\,\mathrm{d}t\,\mathrm{d}\mu(x_{2})\,\mathrm{d}\mu(x_{1}),

and $\mathfrak{g}$ is the limiting Gaussian density of $\tau$ (that is, $\tau_{N}/\sqrt{N}$ converges as $N\to\infty$ to the normal distribution with density $\mathfrak{g}$). We note that the integral in (3.9) converges by the exponential mixing of $G$ and (3.1). Furthermore, $\varsigma_{2}^{2}\geq 0$, which can be seen from the following formula (whose proof is standard and so is omitted):

\varsigma_{2}^{2}=\lim_{T\to\infty}\frac{1}{T^{2}}\int_{Y}\left[\int_{X}\int_{[0,T]^{2}}\tilde{H}(x,G_{t}y)\,\mathrm{d}t\,\mathrm{d}\mu(x)\right]^{2}\mathrm{d}\nu(y).

Defining

V_{N}=\frac{1}{N\ln N}\int\left[\sum_{n=0}^{N-1}\tilde{H}(F^{n}(x,y))\right]^{2}\mathrm{d}\nu(y),

we compute

\mu(V_{N})=\frac{1}{N\ln N}\sum_{n_{1}=0}^{N-1}\sum_{n_{2}=0}^{N-1}\iint\tilde{H}(F^{n_{1}}(x,y))\tilde{H}(F^{n_{2}}(x,y))\,\mathrm{d}\nu(y)\,\mathrm{d}\mu(x)
=\frac{1}{N\ln N}\sum_{k=0}^{N-1}(N-k)(1+\mathbbm{1}_{k\neq 0})\iint\tilde{H}(x,y)\tilde{H}(F^{k}(x,y))\,\mathrm{d}\nu(y)\,\mathrm{d}\mu(x),

whence from (3.8) we obtain

(3.10) \Sigma^{2}:=\lim_{N\to\infty}\mu\left(V_{N}(x)\right)=2\mathfrak{g}(0)\iiiint\tilde{H}(x,y)\tilde{H}(\bar{x},G_{t}y)\,\mathrm{d}\mu(x)\,\mathrm{d}\mu(\bar{x})\,\mathrm{d}\nu(y)\,\mathrm{d}t.
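The logarithm in the normalization comes from the harmonic sum: plugging the asymptotics (3.8) into the expression for $\mu(V_{N})$ gives

```latex
\mu(V_{N})=\frac{2}{N\ln N}\sum_{k=1}^{N-1}(N-k)\left(\frac{\mathfrak{g}(0)\varsigma_{2}^{2}}{k}+o\Big(\frac{1}{k}\Big)\right)+O\Big(\frac{1}{\ln N}\Big)\longrightarrow 2\mathfrak{g}(0)\varsigma_{2}^{2},
```

since $\sum_{k=1}^{N-1}(N-k)/k=N\ln N\,(1+o(1))$; by (3.9), this limit is exactly the right hand side of (3.10).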

By Chebyshev’s inequality, to establish property (c), it suffices to show that

(3.11) \lim_{N\to\infty}\mu(V_{N}^{2})=\Sigma^{4}.

Note that

\mu(V_{N}^{2})=\frac{1}{N^{2}\ln^{2}N}\sum_{n_{1},n_{2},n_{3},n_{4}}\mu(\sigma_{n_{1},n_{2}}\sigma_{n_{3},n_{4}}),

where

\sigma_{n_{1},n_{2}}(x)=\int\tilde{H}(f^{n_{1}}x,G_{\tau_{n_{1}}(x)}y)\tilde{H}(f^{n_{2}}x,G_{\tau_{n_{2}}(x)}y)\,\mathrm{d}\nu(y).

Fix a large $L$ and let $\mathbf{k}=\mathbf{k}(n_{1},n_{2},n_{3},n_{4})$ be the number of indices $j$ such that $|n_{j}-n_{i}|\geq L$ for all $i\neq j$. The number of terms with $\mathbf{k}\leq 1$ is $O(L^{2}N^{2})$. Using the trivial bound $|\mu(\sigma_{n_{1},n_{2}}\sigma_{n_{3},n_{4}})|\leq\|\tilde{H}\|_{\infty}^{4}$, we see that the contribution of the terms with $\mathbf{k}\leq 1$ is $O(N^{2}L^{2})$, which is negligible.

Next, we consider the contribution of the terms with $\mathbf{k}=4$. Without loss of generality, we will assume that

(3.12) n_{1}<n_{2},\quad n_{3}<n_{4},\quad n_{1}<n_{3}.

We distinguish three cases based on the relative position of $n_{2}$ with respect to $n_{3}$ and $n_{4}$.

Case 1. We claim that for each $\varepsilon>0$ there is $L$ such that if

(3.13) n_{1}<n_{1}+L\leq n_{2}<n_{2}+L\leq n_{3}<n_{3}+L\leq n_{4},

then

(3.14) \left|\mu\left(\sigma_{n_{1},n_{2}}\sigma_{n_{3},n_{4}}\right)-\frac{\Sigma^{4}}{4|n_{2}-n_{1}||n_{4}-n_{3}|}\right|\leq\frac{\varepsilon}{|n_{2}-n_{1}||n_{4}-n_{3}|}.

The proof of (3.14) is based on [11]. First, by [11, Lemma 3.3], we may assume that $\tilde{H}(x,y)=A(x)B(y)$, and without loss of generality $\nu(B)=0$. Then we have

\mu\left(\sigma_{n_{1},n_{2}}\sigma_{n_{3},n_{4}}\right)=\int\left[\prod_{i=1}^{4}A(f^{n_{i}}x)\right]\rho(\tau_{n_{2}-n_{1}}(f^{n_{1}}x))\,\rho(\tau_{n_{4}-n_{3}}(f^{n_{3}}x))\,\mathrm{d}\mu(x),

where $\rho(t)=\int B(y)B(G_{t}y)\,\mathrm{d}\nu(y)$. Note that for $\tilde{H}$ as above, (3.10) simplifies to

(3.15) \Sigma^{2}=2\mathfrak{g}(0)\mu(A)^{2}\int\rho(t)\,\mathrm{d}t.

The remaining part of the proof of (3.14) closely follows the lines of [11, Theorems 4.6, 4.7], so we only give a sketch. Decompose $\mathbb{R}^{2}$ into a countable disjoint family of small squares $\mathcal{C}_{i}$. Let $z_{i}$ be the center of the square $\mathcal{C}_{i}$. Then

\mu\left(\sigma_{n_{1},n_{2}}\sigma_{n_{3},n_{4}}\right)\approx\sum_{i,j}S_{i,j},

where

S_{i,j}=\rho(z_{i})\rho(z_{j})\int\prod_{k=1}^{4}A(f^{n_{k}}x)\,\mathbbm{1}_{\mathcal{C}_{i}}(\tau_{n_{2}-n_{1}}(f^{n_{1}}x))\,\mathbbm{1}_{\mathcal{C}_{j}}(\tau_{n_{4}-n_{3}}(f^{n_{3}}x))\,\mathrm{d}\mu(x).

Fixing $i,j$, letting $L\to\infty$ and using the MMLLT and (3.13), we find

S_{i,j}\approx\frac{1}{|n_{2}-n_{1}|}\frac{1}{|n_{4}-n_{3}|}\,\mathfrak{g}(0)^{2}[\mu(A)]^{4}\int_{\mathcal{C}_{i}}\rho(t)\,\mathrm{d}t\int_{\mathcal{C}_{j}}\rho(t)\,\mathrm{d}t.

Summing over $i,j$ and using (3.15), we obtain (3.14).

Case 2. We claim that

(3.16) \sum_{(n_{1},\dots,n_{4}):\,n_{1}<n_{3}<n_{4}<n_{2}}\mu\left(\sigma_{n_{1},n_{2}}\sigma_{n_{3},n_{4}}\right)\leq CN^{2}\ln N.

To prove (3.16), take $(n_{1},\dots,n_{4})$ as in (3.16) and write

m_{0}=0,\quad m_{1}=n_{3}-n_{1},\quad m_{2}=n_{4}-n_{1},\quad m_{3}=n_{2}-n_{1}.

Then we have

\mu\left(\sigma_{n_{1},n_{2}}\sigma_{n_{3},n_{4}}\right)=\int\left[\prod_{i=0}^{3}A(f^{m_{i}}x)\right]\rho(\tau_{m_{3}}(x))\,\rho(\tau_{m_{2}-m_{1}}(f^{m_{1}}x))\,\mathrm{d}\mu(x).

Decompose $\mathbb{R}^{2}$ into unit boxes $\mathcal{C}_{i}$ with centers $z_{i}$. Then we find

\left|\mu\left(\sigma_{n_{1},n_{2}}\sigma_{n_{3},n_{4}}\right)\right|\leq C\sum_{i_{1},i_{2},i_{3}}\tilde{\rho}(z_{i_{3}})\,\tilde{\rho}(z_{i_{2}}-z_{i_{1}})\int\prod_{j=1}^{3}\mathbbm{1}_{\tau_{m_{j}}\in\mathcal{C}_{i_{j}}}\,\mathrm{d}\mu(x),

where $\tilde{\rho}(t)=\sup_{t^{\prime}:\|t^{\prime}-t\|\leq 1}|\rho(t^{\prime})|$. Applying (1.3), we find

(3.17) \left|\mu\left(\sigma_{n_{1},n_{2}}\sigma_{n_{3},n_{4}}\right)\right|\leq C\sum_{i_{1},i_{2},i_{3}}\frac{\tilde{\rho}(z_{i_{3}})\tilde{\rho}(z_{i_{2}}-z_{i_{1}})}{m_{1}(m_{2}-m_{1})(m_{3}-m_{2})}\Theta\left(\max_{1\leq j\leq 3}\frac{\|z_{i_{j}}-z_{i_{j-1}}\|}{\sqrt{m_{j}-m_{j-1}}}\right)=:\sum_{i_{1},i_{2},i_{3}}\mathcal{S}_{i_{1},i_{2},i_{3}}(m_{1},m_{2},m_{3}).

Let us first consider the special case where $i_{2}=i_{1}$ and $z_{i_{3}}=0$ (without loss of generality we say that in this case $i_{3}=0$). Then we have

(3.18) \sum_{i_{1}}\mathcal{S}_{i_{1},i_{1},0}(m_{1},m_{2},m_{3})\leq C\sum_{i_{1}}\frac{1}{m_{1}}\frac{1}{m_{2}-m_{1}}\frac{1}{m_{3}-m_{2}}\Theta\left(\frac{\|z_{i_{1}}\|}{\min\{\sqrt{m_{1}},\sqrt{m_{3}-m_{2}}\}}\right).

Note that the last expression is symmetric in $m_{1}=n_{3}-n_{1}$ and $m_{3}-m_{2}=n_{2}-n_{4}$. Thus, without loss of generality, we can assume $n_{3}-n_{1}\leq n_{2}-n_{4}$ (the resulting factor of $2$ can be incorporated into the constant $C$). Denoting $a=n_{3}-n_{1}$, $b=n_{4}-n_{3}$, $c=n_{2}-n_{4}$, we obtain

\sum_{(n_{1},\dots,n_{4}):\,n_{1}<n_{3}<n_{4}<n_{2}<N}\sum_{i_{1}}\mathcal{S}_{i_{1},i_{1},0}(m_{1},m_{2},m_{3})\leq CN\sum_{i}\sum_{a=1}^{N}\sum_{b=1}^{N}\sum_{c=a}^{N}\frac{1}{abc}\Theta\left(\frac{\|z_{i}\|}{\sqrt{a}}\right).

Note that the multiplier $N$ accounts for all choices of $n_{1}$. Thus

\sum_{(n_{1},\dots,n_{4}):\,n_{1}<n_{3}<n_{4}<n_{2}<N}\sum_{i_{1}}\mathcal{S}_{i_{1},i_{1},0}(m_{1},m_{2},m_{3})\leq CN\ln N\sum_{i}\sum_{a=1}^{N}\sum_{c=a}^{N}\frac{1}{ac}\Theta\left(\frac{\|z_{i}\|}{\sqrt{a}}\right)
\leq CN\ln N\int_{\mathbb{R}^{2}}\int_{1}^{N}\int_{a}^{N}\frac{1}{ac}\Theta\left(\frac{\|z\|}{\sqrt{a}}\right)\mathrm{d}c\,\mathrm{d}a\,\mathrm{d}z\leq CN\ln N\int_{1}^{N}\int_{a}^{N}\frac{1}{c}\,\mathrm{d}c\,\mathrm{d}a\leq CN^{2}\ln N,

where in the third inequality we used that $\int_{1}^{\infty}\Theta(r)r\,\mathrm{d}r<\infty$, and the last inequality follows by direct integration. Thus we have verified that the terms corresponding to $i_{1}=i_{2}$ and $i_{3}=0$ contribute $O(N^{2}\ln N)$.

To complete the proof of (3.16), we need to estimate the contribution of general $i_{2}$ and $i_{3}$. To this end, we distinguish two cases based on whether the following condition holds:

(3.19) \|z_{i_{2}}-z_{i_{1}}\|<\frac{1}{4}\|z_{i_{1}}\|\quad\text{ and }\quad\|z_{i_{3}}-z_{i_{2}}\|<\frac{1}{4}\|z_{i_{1}}\|.

For any fixed $i_{1}$, let $\widehat{\sum}_{i_{2},i_{3}}$ denote the sum over the $i_{2},i_{3}$ that do not satisfy (3.19) and let $\widetilde{\sum}_{i_{2},i_{3}}$ denote the sum over the $i_{2},i_{3}$ satisfying (3.19).

First, we assume that (3.19) fails. Then, we have

\sum_{i_{1}}\widehat{\sum_{i_{2},i_{3}}}\mathcal{S}_{i_{1},i_{2},i_{3}}(m_{1},m_{2},m_{3})\leq C\sum_{i_{1}}\widehat{\sum_{i_{2},i_{3}}}\tilde{\rho}(z_{i_{2}}-z_{i_{1}})\tilde{\rho}(z_{i_{3}})\frac{1}{m_{1}}\frac{1}{m_{2}-m_{1}}\frac{1}{m_{3}-m_{2}}\times\left[\Theta\left(\frac{\|z_{i_{1}}\|}{4\min\{\sqrt{m_{1}},\sqrt{m_{2}-m_{1}}\}}\right)+\Theta\left(\frac{\|z_{i_{1}}\|}{4\min\{\sqrt{m_{1}},\sqrt{m_{3}-m_{2}}\}}\right)\right].

Now we can perform the summation with respect to $i_{2},i_{3}$ first. Denote $a=m_{1}$, $b=m_{2}-m_{1}$, $c=m_{3}-m_{2}$. By the exponential decay of $\tilde{\rho}$, we get

\sum_{i_{1}}\widehat{\sum_{i_{2},i_{3}}}\mathcal{S}_{i_{1},i_{2},i_{3}}(m_{1},m_{2},m_{3})\leq C\sum_{i_{1}}\frac{1}{abc}\left[\Theta\left(\frac{\|z_{i_{1}}\|}{4\min\{\sqrt{a},\sqrt{b}\}}\right)+\Theta\left(\frac{\|z_{i_{1}}\|}{4\min\{\sqrt{a},\sqrt{c}\}}\right)\right].

Summing the last displayed formula over $n_{1},n_{2},n_{3},n_{4}$, we obtain a term $O(N^{2}\ln N)$ exactly as in the estimate of (3.18).

Now assume that (3.19) holds. Then, observing that $\|z_{i_{3}}\|\geq\|z_{i_{1}}\|/2$ and using the exponential decay of $\tilde{\rho}$, we find that

\tilde{\rho}(z_{i_{2}}-z_{i_{1}})\,\tilde{\rho}(z_{i_{3}})\leq Ce^{-c\|z_{i_{1}}\|}

with a fixed constant $c$. Thus we conclude

\sum_{m_{1}\leq m_{2}\leq m_{3}\leq N}\sum_{i_{1}}\widetilde{\sum_{i_{2},i_{3}}}\mathcal{S}_{i_{1},i_{2},i_{3}}\leq C\sum_{m_{1}\leq m_{2}\leq m_{3}\leq N}\frac{1}{m_{1}}\frac{1}{m_{2}-m_{1}}\frac{1}{m_{3}-m_{2}}\sum_{i_{1}}\widetilde{\sum_{i_{2},i_{3}}}e^{-c\|z_{i_{1}}\|}\leq C\ln^{3}N.

Summing over all choices of $n_{1}$, we obtain a term $O(N\ln^{3}N)$. This completes the proof of (3.16).

Case 3. We claim that

(3.20) \sum_{(n_{1},\dots,n_{4}):\,n_{1}<n_{3}<n_{2}<n_{4}}\mu\left(\sigma_{n_{1},n_{2}}\sigma_{n_{3},n_{4}}\right)\leq CN^{2}\ln N.

The proof of (3.20) is similar to that of (3.16). Indeed, the right hand side of (3.17) is now replaced by

C\sum_{i_{1},i_{2},i_{3}}\frac{\tilde{\rho}(z_{i_{2}})\tilde{\rho}(z_{i_{3}}-z_{i_{1}})}{m_{1}(m_{2}-m_{1})(m_{3}-m_{2})}\Theta\left(\max_{j}\frac{\|z_{i_{j}}-z_{i_{j-1}}\|}{\sqrt{m_{j}-m_{j-1}}}\right).

We consider the special case i2=0,i3=i1i_{2}=0,i_{3}=i_{1} first. Then (3.18) is replaced by

(3.21) i1𝒮i1,0,i1(m1,m2,m3)Ci11m11m2m11m3m2Θ(zi1min{m1,m2m1,m3m2}).\sum_{i_{1}}\mathcal{S}_{i_{1},0,i_{1}}(m_{1},m_{2},m_{3})\\ \leq C\sum_{i_{1}}\frac{1}{m_{1}}\frac{1}{m_{2}-m_{1}}\frac{1}{m_{3}-m_{2}}\Theta\left(\frac{\|z_{i_{1}}\|}{\min\{\sqrt{m_{1}},\sqrt{m_{2}-m_{1}},\sqrt{m_{3}-m_{2}}\}}\right).

This expression is bounded by the right hand side of (3.18) (perhaps with a different constant CC) and hence is O(N2lnN)O(N^{2}\ln N). The contribution of general i2i_{2}, i3i_{3} can be bounded as before, and (3.20) follows.

The upshot of the above three cases is that the leading term only comes from indices n1,,n4n_{1},...,n_{4} satisfying (3.13). Summing μ(σn1,n2σn3,n4)\mu(\sigma_{n_{1},n_{2}}\sigma_{n_{3},n_{4}}) for all indices n1,,n4n_{1},...,n_{4} satisfying (3.13), we obtain N2ln2N[Σ4/8+oL(1)]N^{2}\ln^{2}N[\Sigma^{4}/8+o_{L\to\infty}(1)] where we have used (3.14) and the fact that n1<n3<Nln(n3n1)ln(Nn3)=N2ln2N2(1+oN(1)).\displaystyle\sum_{n_{1}<n_{3}<N}\ln(n_{3}-n_{1})\ln(N-n_{3})=\frac{N^{2}\ln^{2}N}{2}(1+o_{N\to\infty}(1)).

Next we claim that

(3.22) n1,,n4:𝐤=4μ(σn1,n2σn3,n4)=[Σ4+oL(1)]N2ln2N.\sum_{n_{1},...,n_{4}:\mathbf{k}=4}\mu(\sigma_{n_{1},n_{2}}\sigma_{n_{3},n_{4}})=[\Sigma^{4}+o_{L\to\infty}(1)]N^{2}\ln^{2}N.

Indeed, similarly to the case of (3.13) considered above, we see that the main contribution to our sums comes from the terms where the pairs (n1,n2)(n_{1},n_{2}) and (n3,n4)(n_{3},n_{4}) are not intertwined. Note that if we choose a random permutation of (n1,n2,n3,n4)(n_{1},n_{2},n_{3},n_{4}), then the probability of non-intertwining is 1/3 (since the second element should come from the same pair as the first one). Therefore there are 24/3=8 permutations 111These eight permutations correspond to all possible choices of the relations *\in\{\leq,\geq\} in the inequalities n1n2,n3n4,min(n1,n2)min(n3,n4)n_{1}*n_{2},\quad n_{3}*n_{4},\quad\min(n_{1},n_{2})*\min(n_{3},n_{4}) contributing to our sum, which explains the additional factor of 8 in (3.22).
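The counting argument above can be checked by brute force. The following sketch (our illustration, not part of the proof) enumerates all 24 orderings of the four indices and counts the non-intertwined ones:

```python
from itertools import permutations

# An ordering of (n1, n2, n3, n4) is non-intertwined when its two
# smallest entries come from the same pair, {n1, n2} or {n3, n4}.
def non_intertwined(order):
    return set(order[:2]) in ({"n1", "n2"}, {"n3", "n4"})

count = sum(non_intertwined(p) for p in permutations(("n1", "n2", "n3", "n4")))
# count equals 8 = 24/3, matching the probability 1/3 of non-intertwining
```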

To complete the proof of property (c), it remains to verify that the contribution of terms with 𝐤=2,3\mathbf{k}=2,3 is negligible. First note that 𝐤=3\mathbf{k}=3 is impossible by the definition of 𝐤\mathbf{k}. Finally, if 𝐤=2\mathbf{k}=2, then we can repeat a simplified version of the proof for 𝐤=4\mathbf{k}=4 using the MLLT instead of the MMLLT. For example, let us consider the case n1n2n3n4n_{1}\leq n_{2}\leq n_{3}\leq n_{4}. Since 𝐤=2\mathbf{k}=2, we must have |n2n1|L|n_{2}-n_{1}|\geq L or |n4n3|L|n_{4}-n_{3}|\geq L. Without loss of generality assume |n4n3|L|n_{4}-n_{3}|\geq L. Then, using the fact that, due to the exponential mixing of GG, σn3,n4Cec\sigma_{n_{3},n_{4}}\leq Ce^{-c\ell} on the set τn4τn3[,+1]\|\tau_{n_{4}}-\tau_{n_{3}}\|\in[\ell,\ell+1], we obtain from the MLLT that

μ(σn1,n2σn3,n4)=O(1|n4n3|).\mu\left(\sigma_{n_{1},n_{2}}\sigma_{n_{3},n_{4}}\right)=O\left(\frac{1}{|n_{4}-n_{3}|}\right).

Hence the total contribution of terms with 𝐤=2\mathbf{k}=2 is O(LN2lnN)O\left(LN^{2}\ln N\right). This completes the proof of property (c). We have thus finished the proof of Proposition 3.1 and hence of Theorem 1.6.

4. Dimension one

4.1. Reduction to the limit theorem for the variance.

Here we prove Theorem 1.7.

We use the decomposition (3.4). Since ff satisfies the CLT for smooth observables, 1Nn=1NH¯(fn(x))\displaystyle\frac{1}{\sqrt{N}}\sum_{n=1}^{N}\bar{H}(f^{n}(x)) converges weakly, and so this sum is negligible after rescaling by N3/4N^{3/4}. It remains to study the ergodic sum of H~{\tilde{H}}.

Following the ideas of the previous section, we let

(4.1) 𝔪N(x)=1VN(x)n=0N1δτn(x),\mathfrak{m}_{N}(x)=\frac{1}{\sqrt{V_{N}(x)}}\sum_{n=0}^{N-1}\delta_{\tau_{n}(x)},
At,N,x(y)=1Card(nN:τn(x)=t)nN:τn(x)=tH~(fn(x),y),A_{t,N,x}(y)=\frac{1}{\mbox{Card}({n\leq N:\tau_{n}(x)=t})}\sum_{n\leq N:\tau_{n}(x)=t}{\tilde{H}}(f^{n}(x),y),

where

VN(x)=SN2(x,y)dν(y),SN(x,y)=n=0N1H~Fn(x,y).V_{N}(x)=\int S_{N}^{2}(x,y)\mathrm{d}\nu(y),\quad S_{N}(x,y)=\sum_{n=0}^{N-1}{\tilde{H}}\circ F^{n}(x,y).

Thus in the notation of Proposition 2.1, we have 𝒮N=SNVN(x).\displaystyle\mathcal{S}_{N}=\frac{S_{N}}{\sqrt{V_{N}(x)}}.

In contrast with Theorem 1.6, VN(x)V_{N}(x) does not satisfy a weak law of large numbers. Instead we have

Proposition 4.1.

There is a constant Λ\Lambda so that VNΛN3/2\frac{V_{N}}{\Lambda N^{3/2}} converges in law as NN\to\infty to the random variable 2\mathcal{L}^{2} given by (1.5).
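For intuition (this is outside the proof), the absence of a law of large numbers for V_N can be observed numerically in the simplest example, the simple random walk in random scenery: for unit-variance i.i.d. scenery, V_N equals the self-intersection local time of the walk, and V_N/N^{3/2} keeps fluctuating as N grows. A rough simulation sketch, with all parameter values illustrative:

```python
import random
from collections import Counter

def vn_ratio(N, rng):
    # Quenched variance for simple RWRS with unit-variance i.i.d. scenery:
    # V_N = sum_x ell_N(x)^2, where ell_N(x) is the walk's local time at x.
    pos, occ = 0, Counter({0: 1})
    for _ in range(N - 1):
        pos += rng.choice((-1, 1))
        occ[pos] += 1
    return sum(c * c for c in occ.values()) / N ** 1.5

rng = random.Random(0)
ratios = [vn_ratio(5000, rng) for _ in range(200)]
mean = sum(ratios) / len(ratios)
spread = (sum((r - mean) ** 2 for r in ratios) / len(ratios)) ** 0.5
# the relative spread stays bounded away from zero: V_N / N^{3/2}
# converges in law to a nondegenerate random variable, not to a constant
```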

Before proving Proposition 4.1, let us use it to derive Theorem 1.7. We start with the following analogue of Proposition 3.1:

Lemma 4.2.

Under the assumptions of Theorem 1.7, there are subsets XNXX_{N}\subset X such that limNμ(XN)=1\displaystyle\lim_{N\to\infty}\mu(X_{N})=1 and for any sequence xNXNx_{N}\in X_{N} the measures {𝔪N(xN)}\{\mathfrak{m}_{N}(x_{N})\} satisfy the conditions of Proposition 2.1 with σ=1\sigma=1.

Proof of Lemma 4.2 assuming Proposition 4.1.

Clearly, 𝔪N\|\mathfrak{m}_{N}\|\to\infty for a large set of xx’s and so (a) holds. Recalling that 𝒮N(x,y)=SN(x,y)/VN(x)\mathcal{S}_{N}(x,y)=S_{N}(x,y)/\sqrt{V_{N}(x)}, property (c) holds trivially for all xXx\in X. It remains to show property (b).

Note that 2\mathcal{L}^{2} is non-negative and has no atom at zero, i.e. for any ε>0{\varepsilon}>0 there is ε>0{\varepsilon}^{\prime}>0 so that (2<ε)<ε\mathbb{P}(\mathcal{L}^{2}<{\varepsilon}^{\prime})<{\varepsilon}, and so by Proposition 4.1,

limNμ(x:N1.49<VN(x)<N1.51)=1.\lim_{N\to\infty}\mu(x:N^{1.49}<V_{N}(x)<N^{1.51})=1.

Thus we can assume that xx is such that N1.49<VN(x)<N1.51N^{1.49}<V_{N}(x)<N^{1.51}. Next, we define

~(x,t,N)=Card(nN:|τn(x)t|<1)N.\tilde{\ell}(x,t,N)=\frac{{\rm Card}(n\leq N:|\tau_{n}(x)-t|<1)}{\sqrt{N}}.

As in the proof of Lemma 3.3, we find that for every pp\in\mathbb{Z}_{+}, every tt\in\mathbb{R}, and every NN\in\mathbb{N} we have μ(~p(.,t,N))<Cp\mu(\tilde{\ell}^{p}(.,t,N))<C_{p}. Then choosing p=1000p=1000, the Markov inequality gives μ(x:~(x,t,N)>N1/100)<CN10\mu(x:\tilde{\ell}(x,t,N)>N^{1/100})<CN^{-10}. Thus we can assume that ~(x,t,N)<N1/100\tilde{\ell}(x,t,N)<N^{1/100} holds for all tt\in\mathbb{Z} with |t|<N0.6|t|<N^{0.6}. By the anticoncentration large deviation bounds, we can also assume that maxnN|τn(x)|N0.6\displaystyle\max_{n\leq N}|\tau_{n}(x)|\leq N^{0.6}, since the set of points where this condition fails has negligible measure. In summary, the measure of the set of xx’s satisfying

N1.49<VN(x)<N1.51and~(x,t,N){N0.01|t|N0.6,0|t|N0.6.N^{1.49}<V_{N}(x)<N^{1.51}\quad\text{and}\quad\tilde{\ell}(x,t,N)\leq\begin{cases}N^{0.01}&|t|\leq N^{0.6},\\ 0&|t|\geq N^{0.6}.\end{cases}

tends to 11 as N.N\to\infty. For any such xx, we have

𝔪Nr1(x)(t,KlnN)d𝔪N(x)=1(VN(x))r/2n=0N1Cardr1(j<N:|τjτn|Kln𝔪N)\int\mathfrak{m}_{N}^{r-1}(x)(t,K\ln N)\mathrm{d}\mathfrak{m}_{N}(x)=\frac{1}{(V_{N}(x))^{r/2}}\sum_{n=0}^{N-1}{\rm Card}^{r-1}(j<N:\;|\tau_{j}-\tau_{n}|\leq K\ln\|\mathfrak{m}_{N}\|)
1(N1.49)r/2t:|t|N0.6r(x,t,N)2N0.51r+0.60.745r=o(1).\leq\frac{1}{(N^{1.49})^{r/2}}\sum_{t\in\mathbb{Z}:|t|\leq N^{0.6}}\ell^{r}(x,t,N)\leq 2N^{0.51r+0.6-0.745r}=o(1).

provided that r3r\geq 3. Property (b) and the lemma follow. \square

Proof of Theorem 1.7 assuming Proposition 4.1.

Recall that 𝒮N(x,y)=SN(x,y)VN(x).\displaystyle\mathcal{S}_{N}(x,y)=\frac{S_{N}(x,y)}{\sqrt{V_{N}(x)}}. By Lemma 4.2 and Proposition 2.1, there is a sequence of subsets XNXX_{N}\subset X with μ(XN)1\mu(X_{N})\to 1 such that for every sequence xNXNx_{N}\in X_{N}, the distribution of 𝒮N(x,y)\mathcal{S}_{N}(x,y) w.r.t. ν(y)\nu(y) converges to the standard normal (note that this time properties (a) and (c) are immediate).

In fact, we have the stronger statement that

(4.2) (VNΛN3/2,𝒮N)(2,𝒩)asN,\left(\frac{V_{N}}{\Lambda N^{3/2}},\mathcal{S}_{N}\right)\Rightarrow(\mathcal{L}^{2},\mathcal{N})\quad\text{as}\quad N\to\infty,

where \mathcal{L} is given by (1.5), 𝒩\mathcal{N} is standard normal and 2\mathcal{L}^{2} and 𝒩\mathcal{N} are independent. This follows from Proposition 4.1, Lemma 4.2 and the asymptotic independence of 𝒮N\mathcal{S}_{N} and VNV_{N} which comes from the fact that VNV_{N} depends only on xx while 𝒮N\mathcal{S}_{N} is asymptotically independent of xx. More precisely, let ϕ\phi and ψ\psi be two continuous compactly supported test functions. Then

limNζ(ϕ(VNΛN3/2)ψ(𝒮N))=limNζ(ϕ(VNΛN3/2)ψ(𝒮N)𝟙XN)\lim_{N\to\infty}\zeta\left(\phi\left(\frac{V_{N}}{\Lambda N^{3/2}}\right)\psi(\mathcal{S}_{N})\right)=\lim_{N\to\infty}\zeta\left(\phi\left(\frac{V_{N}}{\Lambda N^{3/2}}\right)\psi(\mathcal{S}_{N}){\mathbbm{1}}_{X_{N}}\right)
=limNXN[ϕ(VNΛN3/2)Yψ(𝒮N)dν]dμ=𝔼(ψ(𝒩))limNXNϕ(VNΛN3/2)dμ=\lim_{N\to\infty}\int_{X_{N}}\left[\phi\left(\frac{V_{N}}{\Lambda N^{3/2}}\right)\int_{Y}\psi(\mathcal{S}_{N})\mathrm{d}\nu\right]\mathrm{d}\mu={\mathbb{E}}(\psi(\mathcal{N}))\lim_{N\to\infty}\int_{X_{N}}\phi\left(\frac{V_{N}}{\Lambda N^{3/2}}\right)\mathrm{d}\mu
=𝔼(ψ(𝒩))limNXϕ(VNΛN3/2)dμ=𝔼(ψ(𝒩))𝔼(ϕ(2)),={\mathbb{E}}(\psi(\mathcal{N}))\lim_{N\to\infty}\int_{X}\phi\left(\frac{V_{N}}{\Lambda N^{3/2}}\right)\mathrm{d}\mu={\mathbb{E}}(\psi(\mathcal{N})){\mathbb{E}}(\phi(\mathcal{L}^{2})),

where the first and the fourth equalities hold since μ(XN)1,\mu(X_{N})\to 1, the third equality holds by Proposition 2.1, and the last equality holds by Proposition 4.1. This proves (4.2).

Denote Σ=Λ\Sigma=\sqrt{\Lambda}. By (4.2) and the continuous mapping theorem,

SNΣN3/4=VNΛN3/2×𝒮NVN𝒩\frac{S_{N}}{\Sigma N^{3/4}}=\frac{\sqrt{V_{N}}}{\sqrt{\Lambda N^{3/2}}}\times\frac{\mathcal{S}_{N}}{\sqrt{V_{N}}}\Rightarrow\mathcal{L}\mathcal{N}

completing the proof of Theorem 1.7. \square

4.2. Proof of Proposition 4.1

We are going to use the following well known fact (see e.g. [2] Chapter 1.7, Problem 4). If 𝒳n\mathcal{X}_{n} is a sequence of random variables so that for every k+k\in\mathbb{Z}_{+}

(4.3) Jk=limn𝔼(𝒳nk)J_{k}=\lim_{n\to\infty}\mathbb{E}(\mathcal{X}_{n}^{k})

exists and

(4.4) lim supk(Jkk!)1/k<,\limsup_{k\to\infty}\left(\frac{J_{k}}{k!}\right)^{1/k}<\infty,

then 𝒳n\mathcal{X}_{n} converges weakly to a random variable 𝒳\mathcal{X}. Furthermore, 𝔼(𝒳k)=Jk\mathbb{E}(\mathcal{X}^{k})=J_{k} for every k+k\in\mathbb{Z}_{+} and 𝒳\mathcal{X} is uniquely characterized by its moments.
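For orientation (our example, not from the text): the moments J_k = (2k-1)!! of the square of a standard normal variable satisfy (4.4), with (J_k/k!)^{1/k} increasing to the finite limit 2, so the chi-square law with one degree of freedom is determined by its moments. A minimal numerical check:

```python
import math

def chi2_moment(k):
    # E[(N(0,1)^2)^k] = (2k - 1)!! = 1 * 3 * 5 * ... * (2k - 1)
    m = 1
    for i in range(1, k + 1):
        m *= 2 * i - 1
    return m

def carleman_ratio(k):
    # the quantity (J_k / k!)^(1/k) appearing in condition (4.4)
    return (chi2_moment(k) / math.factorial(k)) ** (1.0 / k)
```

Here J_k/k! equals binom(2k,k)/2^k, so the ratio tends to 4/2 = 2, and the limsup in (4.4) is finite.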

Now we explain our strategy for proving Proposition 4.1 (a similar strategy was used in [18]). We prove that there is a constant Λ\Lambda, depending on the (T,T1)(T,T^{-1}) transformation FF and the observable HH, so that 𝒳N=VN/(ΛN3/2)\mathcal{X}_{N}=V_{N}/(\Lambda N^{3/2}) satisfies (4.3) (with 𝔼\mathbb{E} meaning the integral w.r.t. μ\mu) and (4.4) with constants JkJ_{k} that do not depend on FF and HH. Consequently, there is a random variable 𝒳\mathcal{X} so that 𝒳N\mathcal{X}_{N} converges weakly to 𝒳\mathcal{X} for any (T,T1)(T,T^{-1}) transformation satisfying the assumptions of Theorem 1.7. Recall from §1.2 that one such example is the one dimensional random walk in random scenery, in which case, by the result of [15], 𝒳N\mathcal{X}_{N} converges weakly to 2\mathcal{L}^{2}. Thus 𝒳=2\mathcal{X}=\mathcal{L}^{2}, and this has to be the limit for all (T,T1)(T,T^{-1}) transformations satisfying the assumptions of Theorem 1.7. It remains to check (4.3) and (4.4).

Recall that τ\tau satisfies the MMLLT with a Gaussian density 𝔤\mathfrak{g}. Let ς1\varsigma_{1} be the standard deviation of this Gaussian random variable, that is 𝔤(z)=φ(z/ς1)/ς1\mathfrak{g}(z)=\varphi(z/\varsigma_{1})/\varsigma_{1}, where φ\varphi is the standard Gaussian density.

Define

(4.5) Λ=πς22ς1=2πς22𝔤(0)\Lambda=\frac{\sqrt{\pi}\varsigma_{2}^{2}}{\varsigma_{1}}=\sqrt{2}\pi\varsigma_{2}^{2}\mathfrak{g}(0)

where ς22\varsigma_{2}^{2} is given by (3.9).
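The two expressions for \Lambda in (4.5) agree because \mathfrak{g}(0)=\varphi(0)/\varsigma_{1}=1/(\sqrt{2\pi}\,\varsigma_{1}):

```latex
\sqrt{2}\,\pi\,\varsigma_{2}^{2}\,\mathfrak{g}(0)
  = \frac{\sqrt{2}\,\pi\,\varsigma_{2}^{2}}{\sqrt{2\pi}\,\varsigma_{1}}
  = \frac{\sqrt{\pi}\,\varsigma_{2}^{2}}{\varsigma_{1}}.
```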

Lemma 4.3.

For k=1k=1, (4.3) holds with

(4.6) J1=2πw0<t1<t2<11t11t2t1φ(wt1)φ(0)dt1dt2dw.J_{1}=\frac{2}{\sqrt{\pi}}\int_{w\in\mathbb{R}}\int_{0<t_{1}<t_{2}<1}\frac{1}{\sqrt{t_{1}}}\frac{1}{\sqrt{t_{2}-t_{1}}}\varphi\left(\frac{w}{\sqrt{t_{1}}}\right)\varphi(0)\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}w.

We note that the above integral can be evaluated explicitly: J1=423π\displaystyle J_{1}=\frac{4\sqrt{2}}{3\sqrt{\pi}}. This shows that Λ\Lambda was chosen correctly. Indeed, in the case of the simple random walk in random scenery, formula (1.2) in [15] shows that J1=423πJ_{1}=\frac{4\sqrt{2}}{3\sqrt{\pi}}, while by the main result of [15], the conclusion of Proposition 4.1 holds for the simple random walk in random scenery.
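As a further consistency check (ours, outside the proof), for the simple symmetric walk on the integers the first moment of V_N can be computed exactly: E[sum_x ell_N(x)^2] = N + 2*sum_{j<N}(N-j)P(X_j=0), which grows like (4/3)*sqrt(2/pi)*N^{3/2}, and (4/3)*sqrt(2/pi) = 4*sqrt(2)/(3*sqrt(pi)) is exactly the value of J_1 above. A sketch:

```python
from itertools import product

def self_intersections_mean(N):
    # E[sum_x ell_N(x)^2] over the simple symmetric walk on Z with
    # positions X_0, ..., X_{N-1}; equals N + 2*sum_j (N - j)*P(X_j = 0).
    total, p, j = float(N), 0.5, 2   # p = P(X_2 = 0)
    while j < N:
        total += 2 * (N - j) * p
        p *= (j + 1) / (j + 2)       # P(X_{j+2}=0) = P(X_j=0)*(j+1)/(j+2)
        j += 2
    return total

def brute_force_mean(N):
    # direct average of sum_x ell_N(x)^2 over all 2^(N-1) paths
    total = 0
    for steps in product((-1, 1), repeat=N - 1):
        pos, occ = 0, {0: 1}
        for s in steps:
            pos += s
            occ[pos] = occ.get(pos, 0) + 1
        total += sum(c * c for c in occ.values())
    return total / 2 ** (N - 1)

# self_intersections_mean(N) / N**1.5 approaches
# 4*sqrt(2)/(3*sqrt(pi)) ~ 1.0638 as N grows (convergence is slow)
```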

Proof of Lemma 4.3.

Fix some ε>0{\varepsilon}>0. We need to show that for NN large enough,

|N3/2Λ1[n=1NH~Fn(x,y)]2dμdνJ1|<ε.\left|N^{-3/2}\Lambda^{-1}\iint\left[\sum_{n=1}^{N}{\tilde{H}}\circ F^{n}(x,y)\right]^{2}\mathrm{d}\mu\mathrm{d}\nu-J_{1}\right|<{\varepsilon}.

In the following proof, we will choose a small η=η(ε)\eta=\eta({\varepsilon}), a large L=L(ε)L=L({\varepsilon}), a small η=η(ε,η,L)<η\eta^{\prime}\!\!=\!\!\eta^{\prime}({\varepsilon},\eta,L)\!\!<\!\!\eta, and finally N=N(ε,η,L,η)N=N({\varepsilon},\eta,L,\eta^{\prime}). There will be finitely many restrictions on these parameters; we choose the parameters so that all of these restrictions hold.

Choose a partition of XX into sets Xi,iIX_{i},i\in I of diameter <η<\eta and fix some elements xiXix_{i}\in X_{i}. We have

[n=1NH~Fn(x,y)]2dμdν=\iint\left[\sum_{n=1}^{N}{\tilde{H}}\circ F^{n}(x,y)\right]^{2}\mathrm{d}\mu\mathrm{d}\nu=
(4.7) n1,n2=1Ni1,i2Iz1,z2p(x)H~(fn1(x),Gτn1(x)(y))H~(fn2(x),Gτn2(x)(y))dνdμ,\sum_{n_{1},n_{2}=1}^{N}\sum_{i_{1},i_{2}\in I}\sum_{z_{1},z_{2}\in\mathbb{Z}}\int p(x)\int{\tilde{H}}(f^{n_{1}}(x),G_{\tau_{n_{1}}(x)}(y)){\tilde{H}}(f^{n_{2}}(x),G_{\tau_{n_{2}}(x)}(y))\mathrm{d}\nu\mathrm{d}\mu,

where

p(x)=pη,n1,n2,i1,i2,z1,z2(x)=𝟙fn1xXi1𝟙fn2xXi2𝟙τn1(x)[z1,z1+1]η𝟙τn2(x)[z2,z2+1]η.p(x)=p_{\eta,n_{1},n_{2},i_{1},i_{2},z_{1},z_{2}}(x)={\mathbbm{1}}_{f^{n_{1}}x\in X_{i_{1}}}{\mathbbm{1}}_{f^{n_{2}}x\in X_{i_{2}}}{\mathbbm{1}}_{\tau_{n_{1}}(x)\in[z_{1},z_{1}+1]\eta}{\mathbbm{1}}_{\tau_{n_{2}}(x)\in[z_{2},z_{2}+1]\eta}.

Split the sum in (4.7) as T1+T2T_{1}+T_{2}, where T1T_{1} corresponds to the terms satisfying

  • (A1)

    ηN<min{n1,n2}<min{n1,n2}+ηN<max{n1,n2}\displaystyle\eta^{\prime}N<\min\{n_{1},n_{2}\}<\min\{n_{1},n_{2}\}+\eta^{\prime}N<\max\{n_{1},n_{2}\}

  • (A2)

    |z1|<LN/η|z_{1}|<L\sqrt{N}/\eta

  • (A3)

    |z1z2|<L/η|z_{1}-z_{2}|<L/\eta.

and T2T_{2} stands for the terms where at least one of the conditions (A1)–(A3) is violated. Write T1=~()T_{1}=\widetilde{\sum}(...).

We start by estimating T1T_{1}. Let us write aNbNa_{N}\approx b_{N} if there are constants η\eta, LL and η\eta^{\prime} so that for NN large enough, |aNbN|<εΛN3/2/10|a_{N}-b_{N}|<{\varepsilon}\Lambda N^{3/2}/10. We claim that

(4.8) T1~p(x)dμH~(xi1,Gz1η(y))H~(xi2,Gz2η(y))dν.T_{1}\approx\widetilde{\sum}\int p(x)\mathrm{d}\mu\int{\tilde{H}}(x_{i_{1}},G_{z_{1}\eta}(y)){\tilde{H}}(x_{i_{2}},G_{z_{2}\eta}(y))\mathrm{d}\nu.

Indeed, by the continuity of HH we can choose η\eta so small that the difference between the LHS and the RHS of (4.8) does not exceed

εΛ1000~pη,n1,n2,i1,i2,z1,z2(x)dμ(x),\frac{{\varepsilon}\Lambda}{1000}\widetilde{\sum}\int p_{\eta,n_{1},n_{2},i_{1},i_{2},z_{1},z_{2}}(x)\mathrm{d}\mu(x),

so (4.8) follows from the anticoncentration large deviation bounds.

Next, writing m1=min{n1,n2}m_{1}=\min\{n_{1},n_{2}\}, m2=max{n1,n2}m_{2}=\max\{n_{1},n_{2}\} and using the MMLLT, we find

(4.9) T11ς12~η21m1φ(z1ης1m1)1m2m1φ((z2z1)ης1m2m1)μ(Xi1)μ(Xi2)×T_{1}\approx\frac{1}{\varsigma_{1}^{2}}\widetilde{\sum}\eta^{2}\frac{1}{\sqrt{m_{1}}}\varphi\left(\frac{z_{1}\eta}{\varsigma_{1}\sqrt{m_{1}}}\right)\frac{1}{\sqrt{m_{2}-m_{1}}}\varphi\left(\frac{(z_{2}-z_{1})\eta}{\varsigma_{1}\sqrt{m_{2}-m_{1}}}\right)\mu(X_{i_{1}})\mu(X_{i_{2}})\times
H~(xi1,Gz1η(y))H~(xi2,Gz2η(y))dν.\int{\tilde{H}}(x_{i_{1}},G_{z_{1}\eta}(y)){\tilde{H}}(x_{i_{2}},G_{z_{2}\eta}(y))\mathrm{d}\nu.

Noting that by (A1) and (A3), (z2z1)η/[ς1m2m1]=O(N1/2)(z_{2}-z_{1})\eta/[\varsigma_{1}\sqrt{m_{2}-m_{1}}]=O(N^{-1/2}), we see that φ((z2z1)ης1m2m1)\displaystyle\varphi\left(\frac{(z_{2}-z_{1})\eta}{\varsigma_{1}\sqrt{m_{2}-m_{1}}}\right) can be replaced by φ(0)\varphi(0) in (4.9) without invalidating \approx. Then the only term depending on z2z_{2} is under the integral with respect to ν\nu. We can approximate the sum over z2z_{2} by a Riemann integral:

(4.10) ηz2=z1L/ηz1+L/ηH~(xi1,Gz1η(y))H~(xi2,Gz2η(y))dν=i1,i2+oη0,L,N(1)\eta\sum_{z_{2}=z_{1}-L/\eta}^{z_{1}+L/\eta}\int{\tilde{H}}(x_{i_{1}},G_{z_{1}\eta}(y)){\tilde{H}}(x_{i_{2}},G_{z_{2}\eta}(y))\mathrm{d}\nu=\mathcal{I}_{i_{1},i_{2}}+o_{\eta\to 0,L\to\infty,N\to\infty}(1)

where

i1,i2=H~(xi1,y)H~(xi2,Gt(y))dνdt.\mathcal{I}_{i_{1},i_{2}}=\int_{\mathbb{R}}\int{\tilde{H}}(x_{i_{1}},y){\tilde{H}}(x_{i_{2}},G_{t}(y))\mathrm{d}\nu\mathrm{d}t.

Combining (4.9) and (4.10), we conclude

T1\displaystyle T_{1} \displaystyle\approx 2ς12[i1,i2μ(Xi1)μ(Xi2)i1,i2]×\displaystyle\frac{2}{\varsigma_{1}^{2}}\left[\sum_{i_{1},i_{2}}\mu(X_{i_{1}})\mu(X_{i_{2}})\mathcal{I}_{i_{1},i_{2}}\right]\times
×[ηN<m1<m1+ηN<m2<N|z1|<LN/ηη1m1φ(z1ης1m1)1m2m1φ(0)],\displaystyle\times\left[\sum_{\eta^{\prime}N<m_{1}<m_{1}+\eta^{\prime}N<m_{2}<N}\sum_{|z_{1}|<L\sqrt{N}/\eta}\eta\frac{1}{\sqrt{m_{1}}}\varphi\left(\frac{z_{1}\eta}{\varsigma_{1}\sqrt{m_{1}}}\right)\frac{1}{\sqrt{m_{2}-m_{1}}}\varphi(0)\right],

where the factor 22 comes from taking into account both n1<n2n_{1}<n_{2} and n2<n1n_{2}<n_{1}. Further decreasing η\eta and increasing LL if necessary and recalling (3.9), we can replace the first sum by ς22\varsigma_{2}^{2}.

Now replacing two Riemann sums with the corresponding Riemann integrals, we obtain

T12ς22ς12N3/2|w|<Lη<t1<t1+η<t2<11t11t2t1φ(wς1t1)φ(0)dtdwς22ς1πN3/2J1.T_{1}\approx\frac{2\varsigma_{2}^{2}}{\varsigma_{1}^{2}}N^{3/2}\int_{|w|<L}\int_{\eta^{\prime}<t_{1}<t_{1}+\eta^{\prime}<t_{2}<1}\frac{1}{\sqrt{t_{1}}}\frac{1}{\sqrt{t_{2}-t_{1}}}\varphi\left(\frac{w}{\varsigma_{1}\sqrt{t_{1}}}\right)\varphi(0)\mathrm{d}t\mathrm{d}w\approx\frac{\varsigma_{2}^{2}}{\varsigma_{1}}\sqrt{\pi}N^{3/2}J_{1}.

In order to complete the proof of (4.6), it suffices to show that for LL large and for η,η\eta,\eta^{\prime} small

(4.11) |T2|<εN3/2/10.|T_{2}|<{\varepsilon}N^{3/2}/10.

We will estimate the contribution of the terms where exactly one of the assumptions (A1)–(A3) fails (the cases when more than one fail are similar and easier).

Suppose that (A1) fails and, moreover, m1<ηNm_{1}<\eta^{\prime}N (the other cases are similar). Using the anticoncentration large deviation bounds instead of the MMLLT, we obtain that the corresponding sum is bounded by

m1=1ηNm2=0N1m11m2+1|z1|<LN/ηC(η,L)ηNC(η,L)N<εN3/2/10\sum_{m_{1}=1}^{\eta^{\prime}N}\sum_{m_{2}=0}^{N}\frac{1}{\sqrt{m_{1}}}\frac{1}{\sqrt{m_{2}+1}}\sum_{|z_{1}|<L\sqrt{N}/\eta}C(\eta,L)\leq\sqrt{\eta^{\prime}}NC^{\prime}(\eta,L)\sqrt{N}<{\varepsilon}N^{3/2}/10

if η\eta^{\prime} is small enough.

Let us assume that (A2) fails. Then again using the anticoncentration large deviation bounds instead of the MMLLT, we obtain that the corresponding sum is bounded by

m1=ηNNm2=m1+ηNN1m11m2m1|z1|LN/ηCηΘ(|z1|η/m1)\sum_{m_{1}=\eta^{\prime}N}^{N}\sum_{m_{2}=m_{1}+\eta^{\prime}N}^{N}\frac{1}{\sqrt{m_{1}}}\frac{1}{\sqrt{m_{2}-m_{1}}}\sum_{|z_{1}|\geq L\sqrt{N}/\eta}C\eta\Theta(|z_{1}|\eta/\sqrt{m_{1}})
CN|z1|LN/ηηΘ(|z1|η/N)CN3/2L1Θ(r)dr<εN3/2/10\leq CN\sum_{|z_{1}|\geq L\sqrt{N}/\eta}\eta\Theta(|z_{1}|\eta/\sqrt{N})\leq CN^{3/2}\int_{L-1}^{\infty}\Theta(r)\mathrm{d}r<{\varepsilon}N^{3/2}/10

for LL large.

Finally assume that (A3) fails. Then we use the exponential mixing of GG to derive

ηz2:|z2z1|>L/ηH~(xi1,Gz1η(y))H~(xi2,Gz2η(y))dν\eta\sum_{z_{2}:|z_{2}-z_{1}|>L/\eta}\int{\tilde{H}}(x_{i_{1}},G_{z_{1}\eta}(y)){\tilde{H}}(x_{i_{2}},G_{z_{2}\eta}(y))\mathrm{d}\nu
2LH~(xi1,y)H~(xi2,Gt(y))dνdtCecL.\leq 2\int_{L}^{\infty}\int{\tilde{H}}(x_{i_{1}},y){\tilde{H}}(x_{i_{2}},G_{t}(y))\mathrm{d}\nu\mathrm{d}t\leq Ce^{-cL}.

Proceeding as in the case of T1T_{1} we obtain that the sum corresponding to indices when (A3) is violated is bounded by CN3/2ecL<εN3/2/10CN^{3/2}e^{-cL}<{\varepsilon}N^{3/2}/10. This completes the proof of the lemma. \square

Lemma 4.4.

For any k2k\geq 2, (4.3) holds with

(4.12) Jk=[2π]kw1,,wk0<t1<<t2k<1j=12k1tjtj1vVj=12kφ(wv(j)wv(j1)tjtj1)dtdw,J_{k}=\left[\frac{2}{\sqrt{\pi}}\right]^{k}\int_{w_{1},...,w_{k}\in\mathbb{R}}\int_{0<t_{1}<...<t_{2k}<1}\prod_{j=1}^{2k}\frac{1}{\sqrt{t_{j}-t_{j-1}}}\sum_{v\in V}\prod_{j=1}^{2k}\varphi\left(\frac{w_{v(j)}-w_{v(j-1)}}{\sqrt{t_{j}-t_{j-1}}}\right)\mathrm{d}t\mathrm{d}w,

where VV is the set of all two-to-one mappings from {1,2,,2k}\{1,2,...,2k\} to {1,2,,k}\{1,2,...,k\} (that is, |v1(l)|=2|v^{-1}(l)|=2 for all l=1,,kl=1,...,k), with the conventions t0=0t_{0}=0 and wv(0)=0w_{v(0)}=0.
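The size of V, used below in the verification of (4.4), is |V| = (2k)!/2^k; this can be confirmed by brute-force enumeration for small k (an illustrative check, not part of the proof):

```python
from itertools import product
from math import factorial

def count_two_to_one(k):
    # Brute-force count of maps v: {1,...,2k} -> {1,...,k} with
    # |v^{-1}(l)| = 2 for every l, i.e. the set V of two-to-one maps.
    return sum(
        all(v.count(l) == 2 for l in range(k))
        for v in product(range(k), repeat=2 * k)
    )
```

For k = 1, 2, 3 this gives 1, 6, 90, matching (2k)!/2^k.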

Proof of Lemma 4.4.

We follow the strategy of the proof of Lemma 4.3.

Fix some ε>0{\varepsilon}>0. We need to show that for NN large enough,

|N3k/2Λk([n=1NH~Fn(x,y)]2dν)kdμJk|<ε.\left|N^{-3k/2}\Lambda^{-k}\int\left(\int\left[\sum_{n=1}^{N}{\tilde{H}}\circ F^{n}(x,y)\right]^{2}\mathrm{d}\nu\right)^{k}\mathrm{d}\mu-J_{k}\right|<{\varepsilon}.

Now we have

([n=1NH~Fn(x,y)]2dν)kdμ\displaystyle\int\left(\int\left[\sum_{n=1}^{N}{\tilde{H}}\circ F^{n}(x,y)\right]^{2}\mathrm{d}\nu\right)^{k}\mathrm{d}\mu
=\displaystyle= n1,n2,,n2k=1Ni1,i2,i2kIz1,z2,,z2k\displaystyle\sum_{n_{1},n_{2},...,n_{2k}=1}^{N}\sum_{i_{1},i_{2}...,i_{2k}\in I}\sum_{z_{1},z_{2},...,z_{2k}\in\mathbb{Z}}
p(x)[j=1kH~(fn2j1(x),Gτn2j1(x)(y))H~(fn2j(x),Gτn2j(x)(y))dν]dμ\displaystyle\int p(x)\left[\prod_{j=1}^{k}\int{\tilde{H}}(f^{n_{2j-1}}(x),G_{\tau_{n_{2j-1}}(x)}(y)){\tilde{H}}(f^{n_{2j}}(x),G_{\tau_{n_{2j}}(x)}(y))\mathrm{d}\nu\right]\mathrm{d}\mu

where

p(x)=pη,n¯,i¯,z¯(x)=[j=12k𝟙fnjxXij][j=12k𝟙τnj(x)[zj,zj+1]η].p(x)=p_{\eta,\underline{n},\underline{i},\underline{z}}(x)=\left[\prod_{j=1}^{2k}{\mathbbm{1}}_{f^{n_{j}}x\in X_{i_{j}}}\right]\left[\prod_{j=1}^{2k}{\mathbbm{1}}_{\tau_{n_{j}}(x)\in[z_{j},z_{j}+1]\eta}\right].

Let us order the numbers n1,,n2kn_{1},...,n_{2k} as m1m2m2km_{1}\leq m_{2}\leq...\leq m_{2k} and denote m0=0m_{0}=0. As before, we write the above sum as T1+T2T_{1}+T_{2}, where T1T_{1} corresponds to the terms satisfying

  • (A1’)

    for every j=1,,2kj=1,...,2k, mjmj1>ηNm_{j}-m_{j-1}>\eta^{\prime}N

  • (A2’)

    |z2j1|<LN/η|z_{2j-1}|<L\sqrt{N}/\eta for all j=1,,kj=1,...,k

  • (A3’)

    |z2j1z2j|<L/η|z_{2j-1}-z_{2j}|<L/\eta for all j=1,,kj=1,...,k

and write T1=~()T_{1}=\widetilde{\sum}(...). As in (4.8),

(4.14) T1~p(x)dμj=1kH~(xi2j1,Gz2j1η(y))H~(xi2j,Gz2jη(y))dν,T_{1}\approx\widetilde{\sum}\int p(x)\mathrm{d}\mu\prod_{j=1}^{k}\int{\tilde{H}}(x_{i_{2j-1}},G_{z_{2j-1}\eta}(y)){\tilde{H}}(x_{i_{2j}},G_{z_{2j}\eta}(y))\mathrm{d}\nu,

where \approx now means that the difference between the LHS and the RHS can be made smaller than εΛkN3k/2/10{{\varepsilon}\Lambda^{k}N^{3k/2}}/{10} provided that η\eta and η\eta^{\prime} are small enough and LL and NN are large enough. To compute p(x)𝑑μ\int p(x)d\mu, we use the MMLLT for the times m1<m2<<m2km_{1}<m_{2}<...<m_{2k}. Note however that the ranges of summation for the different zjz_{j}’s are different, and so it is important to keep track of the index ii such that ni=mjn_{i}=m_{j}. To this end, define the permutation ρ\rho (uniquely determined by the tuple (n1,,n2k)(n_{1},...,n_{2k}) if (A1’) holds) so that mj=nρ(j)m_{j}=n_{\rho(j)}. Writing ρ(0)=0\rho(0)=0 and z0=0z_{0}=0 and using the MMLLT we obtain

(4.16) T1\displaystyle T_{1} \displaystyle\approx 1ς12k~η2k[j=12k1mjmj1φ((zρ(j)zρ(j1))ης1mjmj1)]×\displaystyle\frac{1}{\varsigma_{1}^{2k}}\widetilde{\sum}\eta^{2k}\left[\prod_{j=1}^{2k}\frac{1}{\sqrt{m_{j}-m_{j-1}}}\varphi\left(\frac{(z_{\rho(j)}-z_{\rho(j-1)})\eta}{\varsigma_{1}\sqrt{m_{j}-m_{j-1}}}\right)\right]\times
[j=12kμ(Xij)][j=1kH~(xi2j1,Gz2j1(y))H~(xi2j,Gz2j(y))dν].\displaystyle\left[\prod_{j=1}^{2k}\mu(X_{i_{j}})\right]\left[\prod_{j=1}^{k}\int{\tilde{H}}(x_{i_{2j-1}},G_{z_{{2j-1}}}(y)){\tilde{H}}(x_{i_{2j}},G_{z_{{2j}}}(y))\mathrm{d}\nu\right].

By (A1’) and (A3’), ρ(l)\rho(l) can be replaced by 2ρ(l)/212\lceil\rho(l)/2\rceil-1 (the largest odd integer not exceeding ρ(l)\rho(l)) for l=j1,jl=j-1,j in the subscripts of zz in (4.16). Consequently, the even-indexed z2jz_{2j} appear in (4.16) only through the integrals with respect to ν\nu. As before, we compute

i1,,i2k[j=12kμ(Xij)]ηkz2,z4,,z2k:|z2jz2j1|<L/η[j=1kH~(xi2j1,Gz2j1(y))H~(xi2j,Gz2j(y))dν]\sum_{i_{1},...,i_{2k}}\left[\prod_{j=1}^{2k}\mu(X_{i_{j}})\right]\eta^{k}\sum_{\begin{subarray}{c}z_{2},z_{4},...,z_{2k}:\\ |z_{2j}-z_{2j-1}|<L/\eta\end{subarray}}\left[\prod_{j=1}^{k}\int{\tilde{H}}(x_{i_{2j-1}},G_{z_{{2j-1}}}(y)){\tilde{H}}(x_{i_{2j}},G_{z_{{2j}}}(y))\mathrm{d}\nu\right]
=ς22k+o(1).=\varsigma_{2}^{2k}+o(1).

Thus we arrive at

T1ς22kς12k^ηk[j=12k1mjmj1φ((z2ρ(j)/21z2ρ(j1)/21)ης1mjmj1)],T_{1}\approx\frac{\varsigma_{2}^{2k}}{\varsigma_{1}^{2k}}\widehat{\sum}\eta^{k}\left[\prod_{j=1}^{2k}\frac{1}{\sqrt{m_{j}-m_{j-1}}}\varphi\left(\frac{(z_{2\lceil\rho(j)/2\rceil-1}-z_{2\lceil\rho(j-1)/2\rceil-1})\eta}{\varsigma_{1}\sqrt{m_{j}-m_{j-1}}}\right)\right],

where ^\widehat{\sum} refers to the sum over n1,,n2kn_{1},...,n_{2k} satisfying (A1’) and over z1,z3,,z2k1z_{1},z_{3},...,z_{2k-1} satisfying (A2’). Next, observe that u:{1,,2k}{1,3,,2k1}u:\{1,...,2k\}\to\{1,3,...,2k-1\} defined by u(j)=2ρ(j)/21u(j)=2\lceil\rho(j)/2\rceil-1 is a two-to-one mapping. Let UU be the set of all such mappings. The summand in the last displayed formula depends on (n1,,n2k)(n_{1},...,n_{2k}) only through (m1,,m2k)(m_{1},...,m_{2k}) and uu. Furthermore, for any given (m1,,m2k)(m_{1},...,m_{2k}) and uu, there are exactly 2k2^{k} corresponding tuples (n1,,n2k)(n_{1},...,n_{2k}). Thus

T12kς22kς12kuUm1,m2ksatisfying (A1’)z1,..,z2k1satisfying (A2’)ηk[j=12k1mjmj1φ((zu(j)zu(j1))ης1mjmj1)].T_{1}\approx 2^{k}\frac{\varsigma_{2}^{2k}}{\varsigma_{1}^{2k}}\sum_{u\in U}\sum_{\begin{subarray}{c}m_{1},...m_{2k}\\ \text{satisfying (A1')}\end{subarray}}\sum_{\begin{subarray}{c}z_{1},..,z_{2k-1}\\ \text{satisfying (A2')}\end{subarray}}\eta^{k}\left[\prod_{j=1}^{2k}\frac{1}{\sqrt{m_{j}-m_{j-1}}}\varphi\left(\frac{(z_{u(j)}-z_{u(j-1)})\eta}{\varsigma_{1}\sqrt{m_{j}-m_{j-1}}}\right)\right].

Now we are ready to replace the last two sums (Riemann sums) by the corresponding Riemann integrals with tjmj/Nt_{j}\sim m_{j}/N and wlz2l1η/Nw_{l}\sim z_{2l-1}\eta/\sqrt{N}. To simplify the notation a little, we denote v(j)=(u(j)+1)/2v(j)=(u(j)+1)/2; thus vv is a two-to-one mapping from {1,2,,2k}\{1,2,...,2k\} to {1,2,,k}\{1,2,...,k\} (the set of all such mappings is denoted by VV). We obtain

T1\displaystyle T_{1} \displaystyle\approx 2kς22kς12kN3k/2vV\displaystyle 2^{k}\frac{\varsigma_{2}^{2k}}{\varsigma_{1}^{2k}}N^{3k/2}\sum_{v\in V}
|w1|,,|wk|<L0=t0<t1<<t2k<1:tjtj1<ηj=12k1tjtj1j=12kφ(wv(j)wv(j1)ς1tjtj1)dtdw.\displaystyle\int_{|w_{1}|,...,|w_{k}|<L}\int_{\begin{subarray}{c}0=t_{0}<t_{1}<...<t_{2k}<1:\\ t_{j}-t_{j-1}<\eta^{\prime}\end{subarray}}\prod_{j=1}^{2k}\frac{1}{\sqrt{t_{j}-t_{j-1}}}\prod_{j=1}^{2k}\varphi\left(\frac{w_{v(j)}-w_{v(j-1)}}{\varsigma_{1}\sqrt{t_{j}-t_{j-1}}}\right)\mathrm{d}t\mathrm{d}w.

Changing variables wς1ww\mapsto\varsigma_{1}w in the above integrals, we obtain

T1ς22kς1kπk/2N3k/2Jk.T_{1}\approx\frac{\varsigma_{2}^{2k}}{\varsigma_{1}^{k}}\pi^{k/2}N^{3k/2}J_{k}.

As in Lemma 4.3, T2T_{2} is negligible. This completes the proof of Lemma 4.4. \square

To finish the proof of Proposition 4.1, it remains to verify (4.4). To this end, first note that since φ\varphi decays quickly, there is a constant MM so that for every vVv\in V and every t1,,t2k(0,1]t_{1},...,t_{2k}\in(0,1],

kj=12kφ(wv(j)wv(j1)ς1tjtj1)dw<Mk.\int_{\mathbb{R}^{k}}\prod_{j=1}^{2k}\varphi\left(\frac{w_{v(j)}-w_{v(j-1)}}{\varsigma_{1}\sqrt{t_{j}-t_{j-1}}}\right)\mathrm{d}w<M^{k}.

Noting that |V|=(2k)!/2k|V|=(2k)!/2^{k}, we have

|Jk|Ck(2k)!0<t1<<t2k<1j=12k1tjtj1dt.|J_{k}|\leq C^{k}(2k)!\int_{0<t_{1}<...<t_{2k}<1}\prod_{j=1}^{2k}\frac{1}{\sqrt{t_{j}-t_{j-1}}}\mathrm{d}t.

The above integral can be computed explicitly and is equal to Γ(1/2)2kΓ(k+1)=πkk!.\displaystyle\frac{\Gamma(1/2)^{2k}}{\Gamma(k+1)}=\frac{\pi^{k}}{k!}. Noting that (2k)!/k!<Ckk!(2k)!/k!<C^{k}k!, we find that |Jk|Ckk!\displaystyle|J_{k}|\leq C^{k}k!, whence (4.4) follows. This completes the proof of Proposition 4.1.
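The closed form for the simplex integral can be double-checked by folding one variable at a time with the Beta-function identity int_0^s u^p (s-u)^{-1/2} du = B(p+1, 1/2) s^{p+1/2}. A short numerical sketch (ours, purely illustrative):

```python
import math

def beta(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def simplex_integral(k):
    # Integral of prod_{j=1}^{2k} (t_j - t_{j-1})^{-1/2} (with t_0 = 0)
    # over 0 < t_1 < ... < t_{2k} < 1, folding the innermost variable
    # repeatedly: after each fold the integrand is coef * t^power.
    coef, power = 1.0, -0.5
    for _ in range(2 * k - 1):
        coef *= beta(power + 1, 0.5)
        power += 0.5
    return coef / (power + 1)  # plain integration of the last variable
```

For each k this agrees with Gamma(1/2)^{2k}/Gamma(k+1) = pi^k/k! up to rounding.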

5. Polynomially mixing flows in dimension one.

In this section, we extend the result of Theorem 1.7 to some flows with polynomial mixing rates. We use the setting of [10]. Recall that a flow GtG_{t} is called partially hyperbolic if there is a GtG_{t} invariant splitting TY=EuEcsTY=E_{u}\oplus E_{cs} and positive constants C1,C2,λ1,λ2C_{1},C_{2},\lambda_{1},\lambda_{2} such that for each t0t\geq 0

(i) dGt|EuC1eλ1t;\|dG_{-t}|E_{u}\|\leq C_{1}e^{-\lambda_{1}t};

(ii) For any unit vectors vuEu,v_{u}\in E_{u}, vcsEcsv_{cs}\in E_{cs} we have dGtvcsC2eλ2tdGtvu.\|dG_{t}v_{cs}\|\leq C_{2}e^{-\lambda_{2}t}\|dG_{t}v_{u}\|.

For partially hyperbolic flows, the distribution EuE_{u} is tangent to the leaves of an absolutely continuous foliation Wu.W^{u}.

Fix constants R,v¯,C¯1,α1R,{\bar{v}},{\bar{C}}_{1},\alpha_{1}. Let 𝔄(R,v¯,C¯1,α1)\mathfrak{A}(R,{\bar{v}},{\bar{C}}_{1},\alpha_{1}) denote the collection of sets DD which belong to one leaf of WuW^{u} and satisfy

diam(D)R,mes(D)v¯,mes(εD)C¯1εα1{\rm diam}(D)\leq R,\quad{\rm mes}(D)\geq{\bar{v}},\quad{\rm mes}(\partial_{\varepsilon}D)\leq{\bar{C}}_{1}{\varepsilon}^{\alpha_{1}}

for all ε>0{\varepsilon}>0, where εD\partial_{\varepsilon}D is the ε{\varepsilon} neighborhood of the boundary of D.D. We say that the sets from 𝔄(R,v¯,C¯1,α1)\mathfrak{A}(R,{\bar{v}},{\bar{C}}_{1},\alpha_{1}) have bounded geometry.

Fix C¯2,α2>0.{\bar{C}}_{2},\alpha_{2}>0. Let 𝔐(R,v¯,C¯1,α1,C¯2,α2)\mathfrak{M}(R,{\bar{v}},{\bar{C}}_{1},\alpha_{1},{\bar{C}}_{2},\alpha_{2}) denote the set of linear functionals of the form

𝔼D,ρ(A)=DA(y)ρ(y)dy,{\mathbb{E}}_{\ell_{D,\rho}}(A)=\int_{D}A(y)\rho(y)\mathrm{d}y,

where D𝔄(R,v¯,C¯1,α1),D\in\mathfrak{A}(R,{\bar{v}},{\bar{C}}_{1},\alpha_{1}), ρ\rho is a probability density on DD with lnρCα2(D)\ln\rho\in C^{\alpha_{2}}(D) and lnρCα2C¯2.\|\ln\rho\|_{C^{\alpha_{2}}}\leq{\bar{C}}_{2}. We shall often use the existence of an almost Markov decomposition established in [10, Section 2]: if RR, C¯1,{\bar{C}}_{1}, and C¯2{\bar{C}}_{2} are large enough and α1,α2\alpha_{1},\alpha_{2} and v¯{\bar{v}} are small enough, then given 𝔐(R,v¯,C¯1,α1,C¯2,α2)\ell\in\mathfrak{M}(R,{\bar{v}},{\bar{C}}_{1},\alpha_{1},{\bar{C}}_{2},\alpha_{2}) and t>0t>0 we can decompose

(5.1) 𝔼(AGt)=scs𝔼s(A)+O(AC0θt){\mathbb{E}}_{\ell}(A\circ G_{t})=\sum_{s}c_{s}{\mathbb{E}}_{\ell_{s}}(A)+O\left(\|A\|_{C^{0}}\theta^{t}\right)

with s𝔐(R,v¯,C¯1,α1,C¯2,α2)\ell_{s}\in\mathfrak{M}(R,{\bar{v}},{\bar{C}}_{1},\alpha_{1},{\bar{C}}_{2},\alpha_{2}) and scs=1+O(θt)\displaystyle\sum_{s}c_{s}=1+O(\theta^{t}) for some θ<1.\theta<1.

We say that a measure is u-Gibbs if its conditional measures on unstable leaves have smooth densities. The existence of an almost Markov decomposition implies that u-Gibbs measures belong to the convex hull of 𝔐(R,v¯,C¯1,α1,C¯2,α2)\mathfrak{M}(R,{\bar{v}},{\bar{C}}_{1},\alpha_{1},{\bar{C}}_{2},\alpha_{2}) for an appropriate choice of R,v¯,C¯1,α1,C¯2,α2R,{\bar{v}},{\bar{C}}_{1},\alpha_{1},{\bar{C}}_{2},\alpha_{2} (see [5, §11.2]).

From now on we fix the constants involved in the definition of 𝔐(R,v¯,C¯1,α1,C¯2,α2)\mathfrak{M}(R,{\bar{v}},{\bar{C}}_{1},\alpha_{1},{\bar{C}}_{2},\alpha_{2}) so that (5.1) holds and the convex hull of this family contains the u-Gibbs measures. For the sake of simplicity, we will write 𝔐\mathfrak{M} instead of 𝔐(R,v¯,C¯1,α1,C¯2,α2).\mathfrak{M}(R,{\bar{v}},{\bar{C}}_{1},\alpha_{1},{\bar{C}}_{2},\alpha_{2}).

We say that the unstable leaves become equidistributed at rate a(t)a(t) if there is r>0r>0 such that for all 𝔐\ell\in\mathfrak{M} we have

(5.2) |𝔼(AGt)ν(A)|a(t)ACr\left|{\mathbb{E}}_{\ell}(A\circ G_{t})-\nu(A)\right|\leq a(t)\|A\|_{C^{r}}

where ν\nu is the reference invariant measure for Gt.G_{t}. We note that if (5.2) holds with a(t)0a(t)\to 0, then ν\nu belongs to the convex hull of 𝔐\mathfrak{M}, whence it is a u-Gibbs measure (in fact, it is the unique SRB measure for GtG_{t}, see [10, Corollary 2]).

Theorem 5.1.

Theorem 1.7 remains valid if the assumption that G enjoys exponential mixing of all orders is replaced by the assumption that G is partially hyperbolic and its unstable leaves become equidistributed at rate Kt^{-\gamma} for some \gamma>1 and K>0.

Corollary 5.2.

Suppose that \tau satisfies the assumptions of Theorem 1.7, G_{t} is a topologically transitive Anosov flow whose stable and unstable foliations are not jointly integrable, and \nu is the SRB measure. Then (1.4) holds for sufficiently smooth functions.

Indeed, according to [9, Theorem 3], for Anosov flows the unstable leaves are equidistributed on C^{r} at rate Kt^{-\gamma(r)} where \gamma(r)\to\infty as r\to\infty.

The proof of Theorem 5.1 requires a modification of Proposition 2.1. We shall use the same notation as in that proposition.

Proposition 5.3.

Suppose that G_{t} is partially hyperbolic with unstable leaves equidistributed at rate Kt^{-\gamma} for some \gamma>1. Suppose that for some 0<\delta<1/20 satisfying

(5.3) \gamma-1>\frac{16\delta}{1-20\delta}

and some \kappa>0 we have

(a) \mathfrak{m}_{N}(t\in\mathbb{R}:|t|\leq N^{(1/2)+\delta})\leq N^{1/4+\delta} and \mathfrak{m}_{N}(t:|t|>N^{(1/2)+\delta})=0;

(b) for every t\in\mathbb{R}, \mathfrak{m}_{N}([t,t+1])\leq N^{\delta-(1/4)};

(c) \iint_{|t_{1}-t_{2}|>\sqrt{\ln N}}\frac{\mathrm{d}\mathfrak{m}_{N}(t_{1})\,\mathrm{d}\mathfrak{m}_{N}(t_{2})}{|t_{1}-t_{2}|^{\gamma}}\leq\ln^{-\kappa}N;

(d) \lim_{N\to\infty}\nu(\mathcal{S}_{N}^{2})=1.

Then \mathcal{S}_{N}\Rightarrow\mathcal{N}(0,1) as N\to\infty.

Lemma 5.4.

Under the assumptions of Theorem 5.1, there are subsets X_{N}\subset X such that \lim_{N\to\infty}\mu(X_{N})=1 and for any sequence x_{N}\in X_{N} the measures \{\mathfrak{m}_{N}(x_{N})\}, defined by (4.1), satisfy the conditions of Proposition 5.3.

Proof.

The fact that conditions (a) and (b) hold for arbitrary \delta>0 is verified as before. Condition (d) is immediate from the definition of \mathfrak{m}_{N}. In order to verify condition (c), we note that by Proposition 4.1, for any \kappa>0, \mu(x:V_{N}(x)<N^{3/2}\ln^{-\kappa}N)\to 0 as N\to\infty. Therefore it is enough to check that if \delta and \kappa are sufficiently small, then

\mu(x:\mathcal{R}_{N}(x)<N^{3/2}\ln^{-2\kappa}N)

is close to 1, where

\mathcal{R}_{N}(x):=\sum_{n_{1},n_{2}\in\mathbb{Z}:|n_{1}-n_{2}|>\sqrt{\ln N}}\frac{\ell(x,n_{1},N)\ell(x,n_{2},N)}{|n_{1}-n_{2}|^{\gamma}}

and \ell(x,t,N) is the local time defined by (3.6). Note that

\mu\left(\ell(n_{1},N)\ell(n_{2},N)\right)=\sum_{1\leq j_{1},j_{2}\leq N}\mu(x:|\tau_{j_{1}}(x)-n_{1}|\leq 1,|\tau_{j_{2}}(x)-n_{2}|\leq 1).

Therefore, using the anticoncentration large deviation bound, we conclude that

\mu(\mathcal{R}_{N})\leq C\sum_{1\leq j_{1}\leq j_{2}\leq N}\sum_{|n_{1}-n_{2}|\geq\sqrt{\ln N}}\frac{1}{\sqrt{j_{1}}\sqrt{j_{2}-j_{1}+1}}\Theta\left(\frac{n_{1}}{\sqrt{j_{1}}}\right)\frac{1}{|n_{2}-n_{1}|^{\gamma}}
\leq C\frac{\sqrt{N}}{\ln^{(\gamma-1)/2}N}\sum_{j_{1}}\sum_{n_{1}}\frac{1}{\sqrt{j_{1}}}\,\Theta\left(\frac{n_{1}}{\sqrt{j_{1}}}\right)\leq C\frac{\sqrt{N}}{\ln^{(\gamma-1)/2}N}\sum_{j_{1}}\int_{0}^{\infty}\Theta(r)\,\mathrm{d}r\leq C\frac{N^{3/2}}{\ln^{(\gamma-1)/2}N}.

Now the result follows by the Markov inequality provided that \gamma-1>4\kappa. \square
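For the reader's convenience, we record the elementary Markov inequality step: combining the threshold N^{3/2}\ln^{-2\kappa}N with the bound on \mu(\mathcal{R}_{N}) gives

```latex
\mu\left(x:\mathcal{R}_{N}(x)\geq N^{3/2}\ln^{-2\kappa}N\right)
\leq \frac{\mu(\mathcal{R}_{N})}{N^{3/2}\ln^{-2\kappa}N}
\leq \frac{C\,N^{3/2}\ln^{-(\gamma-1)/2}N}{N^{3/2}\ln^{-2\kappa}N}
= C\ln^{2\kappa-(\gamma-1)/2}N,
```

which tends to zero as N\to\infty precisely when 2\kappa<(\gamma-1)/2, that is, when \gamma-1>4\kappa.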

Thus in order to prove Theorem 5.1 it suffices to establish Proposition 5.3.

Proof of Proposition 5.3.

We divide the interval [-N^{1/2+\delta},N^{1/2+\delta}] into big blocks of size N^{\beta_{1}} separated by small blocks of size N^{\beta_{2}}, where the parameters 0<\beta_{2}<\beta_{1} will be chosen later. Let J_{j} denote the j-th big block and L be the union of the small blocks. Let \mathcal{S}_{N}^{\prime}=\int_{t\in L}A_{t}(G_{t}y)\,\mathrm{d}\mathfrak{m}_{N}(t). Note that \nu(\mathcal{S}^{\prime}_{N})=0. We claim that \nu((\mathcal{S}_{N}^{\prime})^{2})\to 0 as N\to\infty provided that

(5.4) \beta_{1}>\beta_{2}+3\delta.

Indeed

(5.5) \nu\left(\left(\mathcal{S}_{N}^{\prime}\right)^{2}\right)\leq\iint_{L\times L}\nu(A_{t_{1}}(A_{t_{2}}\circ G_{t_{2}-t_{1}}))\,\mathrm{d}\mathfrak{m}_{N}(t_{1})\,\mathrm{d}\mathfrak{m}_{N}(t_{2}).

According to [10, Theorem 2], \nu(A_{t_{1}}(A_{t_{2}}\circ G_{t_{2}-t_{1}}))=O\left(|t_{2}-t_{1}|^{-\gamma}\right). Therefore property (b) shows that the integral of the RHS of (5.5) with respect to t_{2} is O\left(N^{-(1/4)+\delta}\right). To integrate with respect to t_{1} we divide L into unit intervals. Noting that the mass of each interval is O\left(N^{\delta-(1/4)}\right) and the number of intervals is O\left(N^{(1/2)+\delta-\beta_{1}+\beta_{2}}\right), we conclude that \nu\left(\left(\mathcal{S}_{N}^{\prime}\right)^{2}\right)=O\left(N^{\beta_{2}+3\delta-\beta_{1}}\right), which tends to zero under (5.4).
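The exponent \beta_{2}+3\delta-\beta_{1} is obtained by multiplying the three factors just described:

```latex
\nu\left(\left(\mathcal{S}_{N}^{\prime}\right)^{2}\right)
= O\Big(\underbrace{N^{(1/2)+\delta-\beta_{1}+\beta_{2}}}_{\#\text{ of unit intervals in }L}
\cdot\underbrace{N^{\delta-(1/4)}}_{\text{mass per interval}}
\cdot\underbrace{N^{-(1/4)+\delta}}_{t_{2}\text{-integral}}\Big)
= O\left(N^{\beta_{2}+3\delta-\beta_{1}}\right).
```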

Thus the main contribution to the variance comes from the big blocks. Let

\tau=N^{1/2+\delta},\quad T_{j}=T_{j}(y)=\int_{J_{j}}A_{t}(G_{\tau+t}y)\,\mathrm{d}\mathfrak{m}_{N}(t),\quad S_{N,j}=\sum_{k=1}^{j}T_{k}.

Fix any \xi\in\mathbb{R}. We will show that there is a sequence {\varepsilon}_{N}\to 0, depending only on |\xi|, such that for each \ell\in\mathfrak{M} we have

\left|{\mathbb{E}}_{\ell}\left(e^{i\xi S_{N,{\bar{N}}}}\right)-e^{-\xi^{2}/2}\right|\leq{\varepsilon}_{N}

where {\bar{N}} is the number of big blocks. Let \Phi_{j}(\xi)={\mathbb{E}}_{\ell}\left(e^{i\xi S_{N,j}}\right) with \Phi_{0}(\xi)=1.

Lemma 5.5.

If

(5.6) \beta_{1}<1/4-2\delta,

then

\Phi_{j}(\xi)=\Phi_{j-1}(\xi)\left[1-\frac{\xi^{2}}{2}v_{j}\right]+O\left(v_{j}N^{-\delta}+w_{j}+{\hat{{\varepsilon}}}_{N}\right),

where

v_{j}=\nu(T_{j}^{2}),\quad w_{j}=\iint_{(t_{1},t_{2})\in J_{j}\atop|t_{1}-t_{2}|\geq\sqrt{\ln N}}\frac{\mathrm{d}\mathfrak{m}_{N}(t_{1})\,\mathrm{d}\mathfrak{m}_{N}(t_{2})}{|t_{1}-t_{2}|^{\gamma}},\quad{\hat{{\varepsilon}}}_{N}=N^{\delta-1/4-\beta_{2}(\gamma-1)}.

Note that by property (b) and (5.6), we have T_{j}=O(N^{-\delta}). Hence v_{j}=O(N^{-2\delta}) and so the estimate of Lemma 5.5 can be rewritten as

\Phi_{j}(\xi)=\Phi_{j-1}(\xi)e^{-\xi^{2}v_{j}/2}+O\left(v_{j}N^{-\delta}+w_{j}+{\hat{{\varepsilon}}}_{N}\right).

Repeating this process, we obtain

(5.7) {\mathbb{E}}_{\ell}\left(e^{i\xi S_{N,{\bar{N}}}}\right)=\Phi_{\bar{N}}(\xi)=\exp\left[-\frac{\xi^{2}}{2}\sum_{j=1}^{\bar{N}}v_{j}\right]+O\left(\sum_{j}\left[v_{j}N^{-\delta}+w_{j}\right]+{\bar{N}}{\hat{{\varepsilon}}}_{N}\right).

Next we claim that

(5.8) \sum_{j=1}^{\bar{N}}v_{j}=1+o_{N\to\infty}(1)

provided that

(5.9) \beta_{2}(\gamma-1)>2\delta.

Indeed, 1+o(1)=\nu(S_{N,{\bar{N}}}^{2})=\sum_{j=1}^{\bar{N}}v_{j}+\sum_{j_{1}\neq j_{2}}\nu\left(T_{j_{1}}T_{j_{2}}\right). The second term here is at most

C\sum_{n_{1},n_{2}:|n_{1}-n_{2}|\geq N^{\beta_{2}}}\frac{\mathfrak{m}_{N}([n_{1},n_{1}+1])\,\mathfrak{m}_{N}([n_{2},n_{2}+1])}{|n_{1}-n_{2}|^{\gamma}}.

Summing over n2n_{2} using assumption (b) we are left with

C\sum_{n_{1}}\mathfrak{m}_{N}([n_{1},n_{1}+1])N^{\delta-(1/4)-\beta_{2}(\gamma-1)}
=C\mathfrak{m}_{N}([-N^{1/2+\delta},N^{1/2+\delta}])N^{\delta-(1/4)-\beta_{2}(\gamma-1)}\leq{\bar{C}}N^{2\delta-\beta_{2}(\gamma-1)}=o_{N\to\infty}(1),

where in the last inequality we also used assumption (a).

This proves that (5.9) implies (5.8). In particular, (5.8) shows that \sum_{j}v_{j}N^{-\delta}=O(N^{-\delta}). Also \sum_{j}w_{j}=o(1) due to assumption (c) of Proposition 5.3, while

{\bar{N}}{\hat{{\varepsilon}}}_{N}\leq N^{1/4+2\delta-\beta_{1}-\beta_{2}(\gamma-1)}=o(1)

provided that

(5.10) \beta_{1}+\beta_{2}(\gamma-1)>\frac{1}{4}+2\delta.
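The preceding display uses that the big blocks tile an interval of length 2N^{1/2+\delta}, so the number of big blocks satisfies {\bar{N}}=O(N^{1/2+\delta-\beta_{1}}); hence

```latex
{\bar{N}}\,{\hat{\varepsilon}}_{N}
= O\left(N^{1/2+\delta-\beta_{1}}\right)\cdot N^{\delta-1/4-\beta_{2}(\gamma-1)}
= O\left(N^{1/4+2\delta-\beta_{1}-\beta_{2}(\gamma-1)}\right).
```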

Plugging these estimates into (5.7) we conclude that for all \ell\in\mathfrak{M} we have

(5.11) {\mathbb{E}}_{\ell}\left(e^{i\xi S_{N,{\bar{N}}}}\right)=e^{-\xi^{2}/2}+o_{N\to\infty}(1)

if \beta_{1} and \beta_{2} satisfy (5.4), (5.6), (5.9), and (5.10). Thus we need \beta_{1} and \beta_{2} to satisfy

\beta_{1}<\frac{1}{4}-2\delta,\quad\beta_{2}<\beta_{1}-3\delta,\quad\beta_{1}+\beta_{2}(\gamma-1)>\frac{1}{4}+2\delta,\quad\beta_{2}(\gamma-1)>2\delta.

Since \beta_{1} can be chosen arbitrarily close to \frac{1}{4}-2\delta and \beta_{2} can be chosen arbitrarily close to \beta_{1}-3\delta, the above inequalities are compatible if (5.3) holds. It then follows that (5.11) holds on the convex hull of \mathfrak{M}, which includes \nu. This completes the proof of Proposition 5.3 modulo Lemma 5.5. \square
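For concreteness, we spell out the compatibility check in the last step of the proof. Take \beta_{1}=\frac14-2\delta-{\varepsilon} and \beta_{2}=\beta_{1}-3\delta-{\varepsilon} with {\varepsilon}>0 small (both are positive since \delta<1/20). The binding constraint is (5.10), which becomes

```latex
\left(\tfrac14-2\delta-{\varepsilon}\right)+\left(\tfrac14-5\delta-2{\varepsilon}\right)(\gamma-1)>\tfrac14+2\delta
\quad\Longleftrightarrow\quad
\gamma-1>\frac{4\delta+{\varepsilon}}{\tfrac14-5\delta-2{\varepsilon}}
\;\xrightarrow[{\varepsilon}\to 0]{}\;\frac{16\delta}{1-20\delta},
```

so (5.3) ensures that all four inequalities can be satisfied simultaneously. The remaining constraint (5.9) requires only \gamma-1>\frac{8\delta}{1-20\delta}, which is weaker.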

Proof of Lemma 5.5.

Let J_{j}=[n_{j}^{-},n_{j}^{+}]. Denote m_{j}=\tau+\frac{n_{j-1}^{+}+n_{j}^{-}}{2}. We use the almost Markov decomposition (5.1):

{\mathbb{E}}_{\ell}(A\circ G_{m_{j}})=\sum_{s}c_{s}{\mathbb{E}}_{\ell_{s}}(A)+O({\tilde{\varepsilon}}_{N})

where {\tilde{\varepsilon}}_{N}=\theta^{N^{\beta_{2}}}, \ell_{s}=(D_{s},\rho_{s})\in\mathfrak{M} and \sum_{s}c_{s}=1-O({\tilde{\varepsilon}}_{N}).

Fix arbitrary y_{s}\in G_{-m_{j}}D_{s}. Then

{\mathbb{E}}_{\ell}\left(e^{i\xi S_{N,j}}\right)=\sum_{s}c_{s}{\mathbb{E}}_{\ell_{s}}\left(e^{i\xi[S_{N,j-1}(y_{s})+{\tilde{T}}_{j}(y)]}\right)+O({\tilde{\varepsilon}}_{N})

where

{\tilde{T}}_{j}(y)=T_{j}(G_{-m_{j}}y)=\int_{J_{j}}A_{t}(G_{t-m_{j}}y)\,\mathrm{d}\mathfrak{m}_{N}(t).

Next

{\mathbb{E}}_{\ell_{s}}\left(e^{i\xi{\tilde{T}}_{j}}\right)=1+i\xi{\mathbb{E}}_{\ell_{s}}({\tilde{T}}_{j})-\frac{\xi^{2}}{2}{\mathbb{E}}_{\ell_{s}}({\tilde{T}}_{j}^{2})+O\left({\mathbb{E}}_{\ell_{s}}\left(\left|{\tilde{T}}_{j}\right|^{3}\right)\right).

Note that {\mathbb{E}}_{\ell_{s}}({\tilde{T}}_{j})=O\left(N^{-\beta_{2}(\gamma-1)-(1/4)+\delta}\right) due to the equidistribution of unstable leaves.

To estimate {\mathbb{E}}_{\ell_{s}}({\tilde{T}}_{j}^{2}) we split

{\mathbb{E}}_{\ell_{s}}({\tilde{T}}_{j}^{2})=\iint_{J_{j}\times J_{j}}{\mathbb{E}}_{\ell_{s}}[A_{t_{1}}(G_{t_{1}-m_{j}}y)A_{t_{2}}(G_{t_{2}-m_{j}}y)]\,\mathrm{d}\mathfrak{m}_{N}(t_{1})\,\mathrm{d}\mathfrak{m}_{N}(t_{2})=I+{I\!\!I}

where I includes the terms where |t_{1}-t_{2}|\leq\sqrt{\ln N} and {I\!\!I} includes the other terms.

According to [10, Theorem 2],

{\mathbb{E}}_{\ell_{s}}\left((A_{t_{1}}\circ G_{t_{1}-m_{j}})(A_{t_{2}}\circ G_{t_{2}-m_{j}})\right)=O\left(|t_{2}-t_{1}|^{-\gamma}\right).

Therefore {I\!\!I}=O(w_{j}).

To estimate I, we note that

{\mathbb{E}}_{\ell_{s}}\left((A_{t_{1}}\circ G_{t_{1}-m_{j}})(A_{t_{2}}\circ G_{t_{2}-m_{j}})\right)={\mathbb{E}}_{\ell_{s}}\left(D_{t_{1},t_{2}}\circ G_{t_{1}-m_{j}}\right)

where D_{t_{1},t_{2}}=A_{t_{1}}(A_{t_{2}}\circ G_{t_{2}-t_{1}}) with \|D_{t_{1},t_{2}}\|_{C^{r}}\leq K^{t_{2}-t_{1}}\leq N^{\delta/2}.
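The second inequality here is a crude but sufficient estimate: since the terms in I satisfy |t_{2}-t_{1}|\leq\sqrt{\ln N}, for any fixed constant K>1 we have

```latex
K^{t_{2}-t_{1}}\leq K^{\sqrt{\ln N}}
=\exp\left(\ln K\cdot\sqrt{\ln N}\right)
\leq\exp\left(\tfrac{\delta}{2}\ln N\right)=N^{\delta/2}
```

once N is large enough that \ln K\leq\frac{\delta}{2}\sqrt{\ln N}.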

Hence using the equidistribution of unstable leaves, we obtain

{\mathbb{E}}_{\ell_{s}}\left((A_{t_{1}}\circ G_{t_{1}-m_{j}})(A_{t_{2}}\circ G_{t_{2}-m_{j}})\right)=\nu(D_{t_{1},t_{2}})+O\left(N^{\delta/2}(t_{1}-m_{j})^{-\gamma}\right)
=\nu(A_{t_{1}}(A_{t_{2}}\circ G_{t_{2}-t_{1}}))+O\left(N^{\delta/2}(t_{1}-m_{j})^{-\gamma}\right).

It follows that

I=\iint_{t_{1},t_{2}\in J_{j}\atop|t_{1}-t_{2}|\leq\sqrt{\ln N}}\nu(A_{t_{1}}(A_{t_{2}}\circ G_{t_{2}-t_{1}}))\,\mathrm{d}\mathfrak{m}_{N}(t_{1})\,\mathrm{d}\mathfrak{m}_{N}(t_{2})+O\left(N^{(5\delta/2)-(1/2)-\beta_{2}(\gamma-1)}\sqrt{\ln N}\right)
=v_{j}+O(w_{j})+O\left(N^{3\delta-(1/2)-\beta_{2}(\gamma-1)}\right).

Finally

{\mathbb{E}}_{\ell_{s}}(|{\tilde{T}}_{j}|^{3})\leq N^{-\delta}{\mathbb{E}}_{\ell_{s}}(|{\tilde{T}}_{j}|^{2})=O\left(\left[v_{j}+w_{j}\right]N^{-\delta}+N^{2\delta-(1/2)-\beta_{2}(\gamma-1)}\right).

Summing over s and using the fact that \sum_{s}c_{s}{\mathbb{E}}_{\ell_{s}}\left(e^{i\xi S_{N,j-1}(y_{s})}\right)={\mathbb{E}}_{\ell}\left(e^{i\xi S_{N,j-1}}\right)+O({\tilde{\varepsilon}}_{N}), we obtain the result. \square

Acknowledgement

C.D. was partially supported by the Nankai Zhide Foundation and an AMS-Simons travel grant. D.D. was partially supported by NSF DMS 1956049. A.K. was partially supported by NSF DMS 1956310. P.N. was partially supported by NSF DMS 2154725.

References

  • [1] Avila A., Gouëzel S., Tsujii M. Smoothness of solenoidal attractors, Discrete Contin. Dyn. Syst. 15 (2006) 21–35.
  • [2] Billingsley P. Convergence of Probability Measures, Wiley (1968).
  • [3] Björklund M., Gorodnik A. Central limit theorems for group actions which are exponentially mixing of all orders, J. Anal. Math. 141 (2020) 457–482.
  • [4] Bolthausen E. A central limit theorem for two-dimensional random walks in random sceneries, Ann. Probab. 17 (1989) 108–115.
  • [5] Bonatti C., Diaz L. J., Viana M. Dynamics beyond uniform hyperbolicity. A global geometric and probabilistic perspective, Encyclopaedia Math Sciences 102 (2005) Springer, Berlin, xviii+384 pp.
  • [6] Castorrini R., Liverani C. Quantitative statistical properties of two-dimensional partially hyperbolic systems, Adv. Math. 409 (2022), part A, paper 108625, 122 pp.
  • [7] Cohen G., Conze J.–P. CLT for random walks of commuting endomorphisms on compact abelian groups, J. Theoret. Prob. 30 (2017) 143–195.
  • [8] Cohen G., Conze J.–P. On the quenched functional CLT in random sceneries, Studia Math. 269 (2023) 261–303.
  • [9] Dolgopyat D. On decay of correlations in Anosov flows, Ann. of Math. 147 (1998) 357–390.
  • [10] Dolgopyat, D. Limit theorems for partially hyperbolic systems, Trans. AMS 356 (2004) 1637–1689.
  • [11] Dolgopyat D., Dong C., Kanigowski A., Nándori P. Mixing properties of generalized (T,T^{-1}) transformations, Israel J. Math. 247 (2022) 21–73.
  • [12] Dolgopyat D., Dong C., Kanigowski A., Nándori P. Flexibility of statistical properties for smooth systems satisfying the central limit theorem, Invent. Math. 230 (2022) 31–120.
  • [13] Dolgopyat D., Dong C., Kanigowski A., Nándori P. Notes on the Björklund–Gorodnik Central Limit Theorem, preprint.
  • [14] Dolgopyat D., Fayad B., Liu S. Multiple Borel-Cantelli lemma in dynamics and multilog law for recurrence, J. Mod. Dyn. 18 (2022) 209–289.
  • [15] Kesten H., Spitzer F. A limit theorem related to a new class of self-similar processes, Z. Wahrsch. Verw. Gebiete 50 (1979) 5–25.
  • [16] Kleinbock D. Y., Margulis G. A. Bounded orbits of nonquasiunipotent flows on homogeneous spaces, AMS Transl. 171 (1996) 141–172.
  • [17] Le Borgne S. Exemples de systèmes dynamiques quasi-hyperboliques à décorrélations lentes, preprint; research announcement: CRAS 343 (2006) 125–128.
  • [18] Nándori P., Szász D. Lorentz process with shrinking holes in a wall, Chaos 22 (2012) paper 026115.
  • [19] Pène F. Planar Lorentz process in a random scenery, Ann. Inst. Henri Poincaré Probab. Stat. 45 (2009) 818–839.
  • [20] Rudolph D. Asymptotically Brownian skew products give non-loosely Bernoulli K-automorphisms, Invent. Math. 91 (1988) 105–128.
  • [21] Szász D., Varjú T. Local limit theorem for the Lorentz process and its recurrence in the plane, Erg. Th. Dynam. Sys. 24 (2004) 257–278.