
Central limit theorem for the excited random walk in dimension $d\geq 2$

Jean Bérard$^{1}$ and Alejandro Ramírez$^{1,2}$
Institut Camille Jordan, UMR CNRS 5208, 43, boulevard du 11 novembre 1918, Villeurbanne, F-69622, France; université de Lyon, Lyon, F-69003, France; université Lyon 1, Lyon, F-69003, France
e-mail: jean.berard@univ-lyon1.fr
Facultad de Matemáticas
Pontificia Universidad Católica de Chile
Vicuña Mackenna 4860, Macul
Santiago, Chile
e-mail: aramirez@mat.puc.cl
Abstract.

We prove that a law of large numbers and a central limit theorem hold for the excited random walk model in every dimension $d\geq 2$.

Key words and phrases:
Excited random walk, Regeneration techniques
1991 Mathematics Subject Classification:
60K35, 60J10
$^{1}$Partially supported by ECOS-Conicyt grant CO5EO2
$^{2}$Partially supported by Fondo Nacional de Desarrollo Científico y Tecnológico grant 1060738 and by Iniciativa Científica Milenio P-04-069-F

1. Introduction

An excited random walk with bias parameter $p\in(1/2,1]$ is a discrete-time nearest neighbor random walk $(X_{n})_{n\geq 0}$ on the lattice $\mathbb{Z}^{d}$ obeying the following rule: when at time $n$ the walk is at a site it has already visited before time $n$, it jumps uniformly at random to one of the $2d$ neighboring sites. On the other hand, when the walk is at a site it has not visited before time $n$, it jumps with probability $p/d$ to the right (in the direction $+e_{1}$), with probability $(1-p)/d$ to the left (direction $-e_{1}$), and with probability $1/(2d)$ to each of the other nearest neighbor sites.
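To make this transition rule concrete, here is a minimal simulation sketch, written in Python for readability (the simulations reported in Section 5 below were done in C); the function names excited_walk_step and excited_walk, and the bookkeeping with a set of previously visited sites, are our own illustrative choices and not part of the formal definition given in Section 2.

```python
import random

def excited_walk_step(x, fresh, p, d, rng):
    """One step of the walk: biased along e_1 if the current site is fresh
    (never visited before), uniform over the 2d neighbors otherwise."""
    if fresh:
        u = rng.random()
        if u < p / d:                          # +e_1 with probability p/d
            axis, sign = 0, +1
        elif u < 1.0 / d:                      # -e_1 with probability (1-p)/d
            axis, sign = 0, -1
        else:                                  # each other neighbor with probability 1/(2d)
            axis = rng.randrange(1, d)
            sign = +1 if rng.random() < 0.5 else -1
    else:
        axis = rng.randrange(d)
        sign = +1 if rng.random() < 0.5 else -1
    y = list(x)
    y[axis] += sign
    return tuple(y)

def excited_walk(p, d, n_steps, seed=0):
    """Simulate n_steps of an excited random walk on Z^d (d >= 2) started at the origin."""
    rng = random.Random(seed)
    x = (0,) * d
    visited_before = set()                     # sites visited strictly before the current time
    path = [x]
    for _ in range(n_steps):
        fresh = x not in visited_before
        visited_before.add(x)
        x = excited_walk_step(x, fresh, p, d, rng)
        path.append(x)
    return path
```

For instance, excited_walk(0.75, 2, 10000) returns the list of the 10001 sites visited by one trajectory in dimension 2.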

The excited random walk was introduced in 2003 by Benjamini and Wilson [1], motivated by previous works [5, 4] and [10] on self-interacting Brownian motions. Variations on this model have also been introduced. The excited random walk on a tree was studied by Volkov [15]. The so-called multi-excited random walk, where the walk gets pushed towards a specific direction upon its first $M_{x}$ visits to a site $x$, with $M_{x}$ possibly being random, was introduced by Zerner in [16] (see also [17] and [9]).

In [1], Benjamini and Wilson proved that for every value of $p\in(1/2,1]$ and $d\geq 2$, excited random walks are transient. Furthermore, they proved that for $d\geq 4$,

(1) \liminf_{n\to\infty}n^{-1}X_{n}\cdot e_{1}>0\quad a.s.,

where $(e_{i}:1\leq i\leq d)$ denote the canonical generators of the group $\mathbb{Z}^{d}$. Subsequently, Kozma extended (1) in [7] and [8] to dimensions $d=3$ and $d=2$. Then, in [14], relying on the lace expansion technique, van der Hofstad and Holmes proved that a weak law of large numbers holds when $d>5$ and $p$ is close enough (depending on $d$) to $1/2$, and that a central limit theorem holds when $d>8$ and $p$ is close enough (depending on $d$) to $1/2$.

In this paper, we prove that the biased coordinate of the excited random walk satisfies a law of large numbers and a central limit theorem for every $d\geq 2$ and $p\in(1/2,1]$.

Theorem 1.

Let $p\in(1/2,1]$ and $d\geq 2$.

  • (i)

    (Law of large numbers). There exists $v=v(p,d)$, $0<v<+\infty$, such that a.s.

    \lim_{n\to\infty}n^{-1}X_{n}\cdot e_{1}=v.
  • (ii)

    (Central limit theorem). There exists $\sigma=\sigma(p,d)$, $0<\sigma<+\infty$, such that

    t\mapsto n^{-1/2}(X_{\lfloor nt\rfloor}\cdot e_{1}-v\lfloor nt\rfloor),

    converges in law as $n\to+\infty$ to a Brownian motion with variance $\sigma^{2}$.

Our proof is based on the well-known construction of regeneration times for the random walk, the key issue being to obtain good tail estimates for these regeneration times. Indeed, using estimates for the so-called tan points of the simple random walk, introduced in [1] and subsequently used in [7, 8], it is possible to prove that, when $d\geq 2$, the number of distinct points visited by the excited random walk after $n$ steps is, with large probability, at least of order $n^{3/4}$. Since the excited random walk performs a biased random step each time it visits a site it has not previously visited, the $e_{1}$-coordinate of the walk should typically be at least of order $n^{3/4}$ after $n$ steps. Since this number is $o(n)$, this estimate is not good enough to provide a direct proof that the walk has linear speed. However, such an estimate is sufficient to prove that, while performing $n$ steps, the walk must have many independent opportunities to perform a regeneration. A tail estimate on the regeneration times follows, and this in turn yields the law of large numbers and the central limit theorem, allowing for a full use of the spatial homogeneity properties of the model. When $d\geq 3$, it is possible to replace, in our argument, estimates on the number of tan points by estimates on the number of distinct points visited by the projection of the random walk on the $(e_{2},\ldots,e_{d})$ coordinates, which is essentially a simple random walk on $\mathbb{Z}^{d-1}$. Such an observation was used in [1] to prove that (1) holds when $d\geq 4$. Plugging the estimates of [6] into our argument, we can rederive the law of large numbers and the central limit theorem when $d\geq 4$ without considering tan points. Furthermore, a translation of the results in [2] and [11] about the volume of the Wiener sausage to the random walk situation considered here would allow us to rederive our results when $d=3$, and to improve the tail estimates for any $d\geq 3$.

The regeneration time methods used to prove Theorem 1 could also be used to describe the asymptotic behavior of the configuration of the vertices as seen from the excited random walk. Let $\Xi:=\{0,1\}^{\mathbb{Z}^{d}\setminus\{0\}}$, equipped with the product topology and $\sigma$-algebra. For each time $n$ and site $x\neq X_{n}$, define $\beta(x,n):=1$ if the site $x$ was visited before time $n$ by the random walk, while $\beta(x,n):=0$ otherwise. Let $\zeta(x,n):=\beta(x-X_{n},n)$ and define

\zeta(n):=(\zeta(x,n);\,x\in\mathbb{Z}^{d}\setminus\{0\})\in\Xi.

We call the process $(\zeta(n))_{n\in\mathbb{N}}$ the environment seen from the excited random walk. It is then possible to show that if $\rho(n)$ is the law of $\zeta(n)$, there exists a probability measure $\rho$ defined on $\Xi$ such that

\lim_{n\to\infty}\rho(n)=\rho,

weakly.

In the following section of the paper we introduce the basic notation that will be used throughout. In Section 3, we define the regeneration times and formulate the key facts satisfied by them. In Section 4, we obtain the tail estimates for the regeneration times via a good control on the number of tan points. Finally, in Section 5, we present the results of numerical simulations in dimension $d=2$, which suggest that, as a function of the bias parameter $p$, the speed $v(p,2)$ is an increasing convex function of $p$, whereas the standard deviation $\sigma(p,2)$ is a concave function which attains its maximum at some point strictly between $1/2$ and $1$.

2. Notations

Let $\mathbf{b}:=\{e_{1},\ldots,e_{d},-e_{1},\ldots,-e_{d}\}$. Let $\mu$ be the distribution on $\mathbf{b}$ defined by $\mu(+e_{1})=p/d$, $\mu(-e_{1})=(1-p)/d$, $\mu(\pm e_{j})=1/(2d)$ for $j\neq 1$. Let $\nu$ be the uniform distribution on $\mathbf{b}$. Let $\mathcal{S}_{0}$ denote the sample space of the trajectories of the excited random walk starting at the origin:

\mathcal{S}_{0}:=\left\{(z_{i})_{i\geq 0}\in(\mathbb{Z}^{d})^{\mathbb{N}};\,z_{0}=0,\,z_{i+1}-z_{i}\in\mathbf{b}\mbox{ for all $i\geq 0$}\right\}.

For all $k\geq 0$, let $X_{k}$ denote the coordinate map defined on $\mathcal{S}_{0}$ by $X_{k}((z_{i})_{i\geq 0}):=z_{k}$. We will sometimes use the notation $X$ to denote the sequence $(X_{k})_{k\geq 0}$. We let $\mathcal{F}$ be the $\sigma$-algebra on $\mathcal{S}_{0}$ generated by the maps $(X_{k})_{k\geq 0}$. For $k\in\mathbb{N}$, the sub-$\sigma$-algebra of $\mathcal{F}$ generated by $X_{0},\ldots,X_{k}$ is denoted by $\mathcal{F}_{k}$. We also let $\theta_{k}$ denote the transformation on $\mathcal{S}_{0}$ defined by $(z_{i})_{i\geq 0}\mapsto(z_{k+i}-z_{k})_{i\geq 0}$. For the sake of definiteness, we let $\theta_{+\infty}((z_{i})_{i\geq 0}):=(z_{i})_{i\geq 0}$. For all $n\geq 0$, define the following two random variables on $(\mathcal{S}_{0},\mathcal{F})$:

r_{n}:=\max\{X_{i}\cdot e_{1};\,0\leq i\leq n\},
J_{n}=J_{n}(X):=\mbox{number of indices $0\leq k\leq n$ such that }X_{k}\notin\{X_{i};\,0\leq i\leq k-1\}.

(Note that, with this definition, $J_{0}=1$.)

We now call $\mathbb{P}_{0}$ the law of the excited random walk, which is formally defined as the unique probability measure on $(\mathcal{S}_{0},\mathcal{F})$ satisfying the following conditions: for every $k\geq 0$,

  • on $X_{k}\notin\{X_{i};\,0\leq i\leq k-1\}$, the conditional distribution of $X_{k+1}-X_{k}$ with respect to $\mathcal{F}_{k}$ is $\mu$;

  • on $X_{k}\in\{X_{i};\,0\leq i\leq k-1\}$, the conditional distribution of $X_{k+1}-X_{k}$ with respect to $\mathcal{F}_{k}$ is $\nu$.

3. The renewal structure

We now define the regeneration times for the excited random walk (see [13] for the same definition in the context of random walks in random environment). Define on $(\mathcal{S}_{0},\mathcal{F})$ the following $(\mathcal{F}_{k})_{k\geq 0}$-stopping times: $T(h):=\inf\{k\geq 1;\,X_{k}\cdot e_{1}>h\}$, and $D:=\inf\{k\geq 1;\,X_{k}\cdot e_{1}=0\}$. Then define recursively the sequences $(S_{i})_{i\geq 0}$ and $(D_{i})_{i\geq 0}$ as follows: $S_{0}:=T(0)$, $D_{0}:=S_{0}+D\circ\theta_{S_{0}}$, and $S_{i+1}:=T(r_{D_{i}})$, $D_{i+1}:=S_{i+1}+D\circ\theta_{S_{i+1}}$ for $i\geq 0$, with the convention that $S_{i+1}=+\infty$ if $D_{i}=+\infty$, and, similarly, $D_{i+1}=+\infty$ if $S_{i+1}=+\infty$. Then define $K:=\inf\{i\geq 0;\,D_{i}=+\infty\}$ and $\kappa:=S_{K}$ (with the convention that $\kappa=+\infty$ when $K=+\infty$).
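To see how these stopping times fit together, the sketch below (same illustrative Python style as in the Introduction; the helper names T, D_after and regeneration_times are ours) extracts the pairs $(S_{i},D_{i})$ and the resulting candidate for $\kappa$ from a finite simulated trajectory. On a finite path one can only observe that some $D_{i}$ exceeds the available horizon, not that it is truly infinite, so the returned value is a candidate regeneration time rather than $\kappa$ itself.

```python
def regeneration_times(path):
    """Compute the pairs (S_i, D_i) and the candidate regeneration time kappa = S_K
    along a finite nearest-neighbor path started at the origin."""
    n = len(path) - 1
    e1 = [x[0] for x in path]                       # X_k . e_1
    r = [max(e1[:k + 1]) for k in range(n + 1)]     # r_k = max_{0<=i<=k} X_i . e_1

    def T(h):
        # T(h) = inf{k >= 1 : X_k . e_1 > h}; None if never reached on this path
        return next((k for k in range(1, n + 1) if e1[k] > h), None)

    def D_after(s):
        # first k > s with (X_k - X_s) . e_1 = 0, i.e. D composed with the shift at s
        return next((k for k in range(s + 1, n + 1) if e1[k] == e1[s]), None)

    pairs = []
    S = T(0)                                        # S_0 = T(0)
    while S is not None:
        D = D_after(S)
        pairs.append((S, D))
        if D is None:                               # D_i "infinite" as far as the path shows
            return pairs, S                         # candidate kappa = S_K
        S = T(r[D])                                 # S_{i+1} = T(r_{D_i})
    return pairs, None
```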

The key estimate for proving our results is stated in the following proposition.

Proposition 1.

As $n$ goes to infinity,

\mathbb{P}_{0}(\kappa\geq n)\leq\exp\left(-n^{\frac{1}{19}+o(1)}\right).

A consequence of the above proposition is that, under $\mathbb{P}_{0}$, $\kappa$ has finite moments of all orders, and so does $X_{\kappa}$, since the walk performs nearest-neighbor steps. We postpone the proof of Proposition 1 to Section 4.

Lemma 1.

There exists a $\delta>0$ such that $\mathbb{P}_{0}(D=+\infty)>\delta$.

Proof.

This is a simple consequence of two facts. First, it is established in [1] that, $\mathbb{P}_{0}$-a.s., $\lim_{k\to+\infty}X_{k}\cdot e_{1}=+\infty$. Second, a general lemma (Lemma 9 of [17]) shows that, given the first fact, an excited random walk satisfies $\mathbb{P}_{0}(D=+\infty)>0$.

Lemma 2.

For all $h\geq 0$, $\mathbb{P}_{0}(T(h)<+\infty)=1$.

Proof.

This is immediate from the fact that $\mathbb{P}_{0}$-a.s., $\lim_{k\to+\infty}X_{k}\cdot e_{1}=+\infty$. ∎

Now define the sequence of regeneration times $(\kappa_{n})_{n\geq 1}$ by $\kappa_{1}:=\kappa$ and $\kappa_{n+1}:=\kappa_{n}+\kappa\circ\theta_{\kappa_{n}}$, with the convention that $\kappa_{n+1}=+\infty$ if $\kappa_{n}=+\infty$. For all $n\geq 0$, we denote by $\mathcal{F}_{\kappa_{n}}$ the completion with respect to $\mathbb{P}_{0}$-negligible sets of the $\sigma$-algebra generated by the events of the form $\{\kappa_{n}=t\}\cap A$, for all $t\in\mathbb{N}$ and $A\in\mathcal{F}_{t}$.

The following two propositions are analogous respectively to Theorem 1.4 and Corollary 1.5 of [13]. Given Lemma 1 and Lemma 2, the proofs are completely similar to those presented in [13], noting that the process $(\beta(n),X_{n})_{n\in\mathbb{N}}$ is strongly Markov, so we omit them and refer the reader to [13].

Proposition 2.

For every $n\geq 1$, $\mathbb{P}_{0}(\kappa_{n}<+\infty)=1$. Moreover, for every $A\in\mathcal{F}$, the following equality holds $\mathbb{P}_{0}$-a.s.

(2) \mathbb{P}_{0}\left(X\circ\theta_{\kappa_{n}}\in A|\mathcal{F}_{\kappa_{n}}\right)=\mathbb{P}_{0}\left(X\in A|D=+\infty\right).
Proposition 3.

With respect to $\mathbb{P}_{0}$, the random variables $\kappa_{1}$, $\kappa_{2}-\kappa_{1}$, $\kappa_{3}-\kappa_{2},\ldots$ are independent, and, for all $k\geq 1$, the distribution of $\kappa_{k+1}-\kappa_{k}$ with respect to $\mathbb{P}_{0}$ is that of $\kappa$ with respect to $\mathbb{P}_{0}$ conditional upon $D=+\infty$. Similarly, the random variables $X_{\kappa_{1}}$, $X_{\kappa_{2}}-X_{\kappa_{1}}$, $X_{\kappa_{3}}-X_{\kappa_{2}},\ldots$ are independent, and, for all $k\geq 1$, the distribution of $X_{\kappa_{k+1}}-X_{\kappa_{k}}$ with respect to $\mathbb{P}_{0}$ is that of $X_{\kappa}$ with respect to $\mathbb{P}_{0}$ conditional upon $D=+\infty$.

For future reference, we state the following result.

Lemma 3.

On $S_{k}<+\infty$, the conditional distribution of the sequence $(X_{i}-X_{S_{k}})_{S_{k}\leq i<D_{k}}$ with respect to $\mathcal{F}_{S_{k}}$ is the same as the distribution of $(X_{i})_{0\leq i<D}$ with respect to $\mathbb{P}_{0}$.

Proof.

Observe that, between times $S_{k}$ and $D_{k}$, the walk never visits any site that it has visited before time $S_{k}$. Therefore, applying the strong Markov property to the process $(\beta(n),X_{n})_{n\in\mathbb{N}}$ together with spatial translation invariance, we conclude the proof. ∎

A consequence of Proposition 1 is that $\mathbb{E}_{0}(\kappa|D=+\infty)<+\infty$ and $\mathbb{E}_{0}(|X_{\kappa}|\,|D=+\infty)<+\infty$. Since $\mathbb{P}_{0}(\kappa\geq 1)=1$ and $\mathbb{P}_{0}(X_{\kappa}\cdot e_{1}\geq 1)=1$, $\mathbb{E}_{0}(\kappa|D=+\infty)>0$ and $\mathbb{E}_{0}(X_{\kappa}\cdot e_{1}|D=+\infty)>0$. Letting $v(p,d):=\frac{\mathbb{E}_{0}(X_{\kappa}\cdot e_{1}|D=+\infty)}{\mathbb{E}_{0}(\kappa|D=+\infty)}$, we see that $0<v(p,d)<+\infty$.

The following law of large numbers can then be proved, using Proposition 3, exactly as Proposition 2.1 in [13], to which we refer for the proof.

Theorem 2.

Under $\mathbb{P}_{0}$, the following limit holds almost surely:

\lim_{n\to+\infty}n^{-1}X_{n}\cdot e_{1}=v(p,d).

Another consequence of Proposition 1 is that $\mathbb{E}_{0}(\kappa^{2}|D=+\infty)<+\infty$ and $\mathbb{E}_{0}(|X_{\kappa}|^{2}|D=+\infty)<+\infty$. Letting $\sigma^{2}(p,d):=\frac{\mathbb{E}_{0}(\left[X_{\kappa}\cdot e_{1}-v(p,d)\kappa\right]^{2}|D=+\infty)}{\mathbb{E}_{0}(\kappa|D=+\infty)}$, we see that $\sigma(p,d)<+\infty$. That $\sigma(p,d)>0$ is explained in Remark 1 below.

The following functional central limit theorem can then be proved, using Proposition 3, exactly as Theorem 4.1 in [12], to which we refer for the proof.

Theorem 3.

Under $\mathbb{P}_{0}$, the following convergence in distribution holds: as $n$ goes to infinity,

t\mapsto n^{-1/2}(X_{\lfloor nt\rfloor}\cdot e_{1}-v\lfloor nt\rfloor),

converges to a Brownian motion with variance $\sigma^{2}(p,d)$.

Remark 1.

The fact that $\sigma(p,d)>0$ is easy to check. Indeed, we will prove that the probability of the event $X_{\kappa}\cdot e_{1}\neq v\kappa$ is positive. There is a positive probability that the first step of the walk is $+e_{1}$, and that $X_{n}\cdot e_{1}>1$ for all $n$ afterwards. In this situation, $\kappa=1$ and $X_{\kappa}\cdot e_{1}=1$. Now, there is a positive probability that the walk first performs the following sequence of steps: $+e_{2},-e_{2},+e_{1}$, and that then $X_{n}\cdot e_{1}>1$ for all $n$ afterwards. In this situation, $\kappa=3$ and $X_{\kappa}\cdot e_{1}=1$.

4. Estimate on the tail of $\kappa$

4.1. Coupling with a simple random walk and tan points

We use the coupling of the excited random walk with a simple random walk that was introduced in [1], and subsequently used in [7, 8].

To define this coupling, let $(\alpha_{i})_{i\geq 1}$ be a sequence of i.i.d. random variables with uniform distribution on the set $\{1,\ldots,d\}$. Let also $(U_{i})_{i\geq 1}$ be an i.i.d. family of random variables with uniform distribution on $[0,1]$, independent from $(\alpha_{i})_{i\geq 1}$. Call $(\Omega,\mathcal{G},P)$ the probability space on which these variables are defined. Define the sequences of random variables $Y=(Y_{i})_{i\geq 0}$ and $Z=(Z_{i})_{i\geq 0}$, taking values in $\mathbb{Z}^{d}$, as follows. First, $Y_{0}:=0$ and $Z_{0}:=0$. Then consider $n\geq 0$, and assume that $Y_{0},\ldots,Y_{n}$ and $Z_{0},\ldots,Z_{n}$ have already been defined. Let $Z_{n+1}:=Z_{n}+(\mathbf{1}(U_{n+1}\leq 1/2)-\mathbf{1}(U_{n+1}>1/2))e_{\alpha_{n+1}}$. Then, if $Y_{n}\in\{Y_{i};\,0\leq i\leq n-1\}$ or $\alpha_{n+1}\neq 1$, let $Y_{n+1}:=Y_{n}+(\mathbf{1}(U_{n+1}\leq 1/2)-\mathbf{1}(U_{n+1}>1/2))e_{\alpha_{n+1}}$. Otherwise, let $Y_{n+1}:=Y_{n}+(\mathbf{1}(U_{n+1}\leq p)-\mathbf{1}(U_{n+1}>p))e_{1}$.

The following properties are then immediate:

  • $(Z_{i})_{i\geq 0}$ is a simple random walk on $\mathbb{Z}^{d}$;

  • $(Y_{i})_{i\geq 0}$ is an excited random walk on $\mathbb{Z}^{d}$ with bias parameter $p$;

  • for all $2\leq j\leq d$ and $i\geq 0$, $Y_{i}\cdot e_{j}=Z_{i}\cdot e_{j}$;

  • the sequence $(Y_{i}\cdot e_{1}-Z_{i}\cdot e_{1})_{i\geq 0}$ is non-decreasing.
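The construction translates directly into code; the following sketch (hypothetical function name coupled_walks, with coordinate 0 playing the role of $e_{1}$, in the same Python style as the earlier sketches) builds the pair $(Y,Z)$ from simulated $(\alpha_{i},U_{i})$, so the four properties listed above can be checked empirically on sample paths.

```python
import random

def coupled_walks(p, d, n_steps, seed=0):
    """Construct the coupled pair (Y, Z) from (alpha_i, U_i): Z is a simple random
    walk on Z^d and Y is an excited random walk with bias parameter p."""
    rng = random.Random(seed)
    Y, Z = [(0,) * d], [(0,) * d]
    visited_Y = set()                       # sites visited by Y strictly before the current time
    for _ in range(n_steps):
        alpha = rng.randrange(d)            # coordinate index; 0 plays the role of e_1
        U = rng.random()
        z_sign = +1 if U <= 0.5 else -1
        z = list(Z[-1])
        z[alpha] += z_sign
        Z.append(tuple(z))
        y_cur = Y[-1]
        fresh = y_cur not in visited_Y
        visited_Y.add(y_cur)
        if fresh and alpha == 0:
            y_sign = +1 if U <= p else -1   # biased e_1 step, driven by the same U
        else:
            y_sign = z_sign                 # otherwise Y copies Z's increment
        y = list(y_cur)
        y[alpha] += y_sign
        Y.append(tuple(y))
    return Y, Z
```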

Definition 1.

If $(z_{i})_{i\geq 0}\in\mathcal{S}_{0}$, we call an integer $n\geq 0$ an $(e_{1},e_{2})$–tan point index for the sequence $(z_{i})_{i\geq 0}$ if $z_{n}\cdot e_{1}>z_{k}\cdot e_{1}$ for all $0\leq k\leq n-1$ such that $z_{n}\cdot e_{2}=z_{k}\cdot e_{2}$.
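In code, the condition of Definition 1 can be checked in a single pass by recording, for each $e_{2}$-level, the largest $e_{1}$-coordinate seen so far (the function name tan_point_indices is our own):

```python
def tan_point_indices(path):
    """Return the (e_1, e_2)-tan point indices of a path: indices n whose e_1-coordinate
    strictly exceeds that of every earlier visit to the same e_2-level."""
    best = {}                          # e_2-level -> largest e_1-coordinate seen so far
    indices = []
    for n, z in enumerate(path):
        x1, x2 = z[0], z[1]
        if x2 not in best or x1 > best[x2]:
            indices.append(n)          # vacuously a tan point index if the level is new
        best[x2] = max(best.get(x2, x1), x1)
    return indices
```

Combined with the coupling sketch above, this gives an empirical check of Lemma 4 below: every tan point index of $Z$ should be an index at which $Y$ visits a site for the first time.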

The key observation made in [1] is the following.

Lemma 4.

If $n$ is an $(e_{1},e_{2})$–tan point index for $(Z_{i})_{i\geq 0}$, then $Y_{n}\notin\{Y_{i};\,0\leq i\leq n-1\}$.

Proof.

If $n$ is an $(e_{1},e_{2})$–tan point index and if there exists an $\ell\in\{0,\ldots,n-1\}$ such that $Y_{n}=Y_{\ell}$, then observe that, using the fact that $Z_{\ell}\cdot e_{2}=Y_{\ell}\cdot e_{2}$ and $Z_{n}\cdot e_{2}=Y_{n}\cdot e_{2}$, we have that $Z_{\ell}\cdot e_{2}=Z_{n}\cdot e_{2}$. Hence, by the definition of a tan point, we must have that $Z_{\ell}\cdot e_{1}<Z_{n}\cdot e_{1}$, whence $Y_{n}\cdot e_{1}-Z_{n}\cdot e_{1}<Y_{\ell}\cdot e_{1}-Z_{\ell}\cdot e_{1}$. But this contradicts the fact that the coupling has the property that $Y_{n}\cdot e_{1}-Z_{n}\cdot e_{1}\geq Y_{\ell}\cdot e_{1}-Z_{\ell}\cdot e_{1}$.

Let $H:=\{i\geq 1;\,\alpha_{i}\in\{1,2\}\}$, and define the sequence of indices $(I_{i})_{i\geq 0}$ by $I_{0}:=0$, $I_{0}<I_{1}<I_{2}<\cdots$, and $\{I_{1},I_{2},\ldots\}=H$. Then the sequence of random variables $(W_{i})_{i\geq 0}$ defined by $W_{i}:=(Z_{I_{i}}\cdot e_{1},Z_{I_{i}}\cdot e_{2})$ forms a simple random walk on $\mathbb{Z}^{2}$.

If $i$ and $n$ are such that $I_{i}=n$, it is immediate to check that $n$ is an $(e_{1},e_{2})$–tan point index for $(Z_{k})_{k\geq 0}$ if and only if $i$ is an $(e_{1},e_{2})$–tan point index for the random walk $(W_{k})_{k\geq 0}$.

For all $n\geq 1$, let $N_{n}$ denote the number of $(e_{1},e_{2})$–tan point indices of $(W_{k})_{k\geq 0}$ that are $\leq n$. The arguments used to prove the following lemma are quite similar to the ones used in the proofs of Theorem 4 in [1] and Lemma 1 in [8], which are themselves partly based on estimates in [3].

Lemma 5.

For all $0<a<3/4$, as $n$ goes to infinity,

P(N_{n}\leq n^{a})\leq\exp\left(-n^{\frac{1}{3}-\frac{4a}{9}+o(1)}\right).
Proof.

For all $k\in\mathbb{Z}\setminus\{0\}$ and $m\geq 1$, consider the three sets

\Gamma(m)_{k}:=\mathbb{Z}\times\{2k\lfloor m^{1/2}\rfloor\},
\Delta(m)_{k}:=\mathbb{Z}\times((2k-1)\lfloor m^{1/2}\rfloor,(2k+1)\lfloor m^{1/2}\rfloor),
\Theta(m)_{k}:=\{v\in\Delta(m)_{k};\,|v\cdot e_{2}|\geq 2k\lfloor m^{1/2}\rfloor\}.

Let $\chi(m)_{k}$ be the first time when $(W_{i})_{i\geq 0}$ hits $\Gamma(m)_{k}$ (a.s. finite since the simple random walk on $\mathbb{Z}^{2}$ is recurrent). Let $\phi(m)_{k}$ be the first time after $\chi(m)_{k}$ when $(W_{i})_{i\geq 0}$ leaves $\Delta(m)_{k}$ (again a.s. finite, for the same reason). Let $M_{k}(m)$ denote the number of time indices $n$ that are $(e_{1},e_{2})$–tan point indices for $(W_{i})_{i\geq 0}$, and satisfy $\chi(m)_{k}\leq n\leq\phi(m)_{k}-1$ and $W_{n}\in\Theta(m)_{k}$.

Two key observations in [1] (see Lemma 2 in [1] and the discussion before its statement) are that:

  • the sequence $(M_{k}(m))_{k\in\mathbb{Z}\setminus\{0\}}$ is i.i.d.;

  • there exist $c_{1},c_{2}>0$ such that $P(M_{1}(m)\geq c_{1}m^{3/4})\geq c_{2}$.

Now, consider an $\epsilon>0$ such that $b:=1/3-4a/9-\epsilon>0$. Let $m_{n}:=\lceil\left(n^{a}/c_{1}\right)^{4/3}\rceil+1$, and let $h_{n}:=2(\lceil n^{b}\rceil+1)\lfloor m_{n}^{1/2}\rfloor$. Note that, as $n\to+\infty$, $(h_{n})^{2}\sim(4c_{1}^{-4/3})n^{\frac{2}{3}+\frac{4}{9}a-2\epsilon}$. Let $R_{n,+}$ and $R_{n,-}$ denote the following events

R_{n,+}:=\{\mbox{for all }k\in\{1,\ldots,+\lceil n^{b}\rceil\},\,M_{k}(m_{n})\leq c_{1}m_{n}^{3/4}\},

and

R_{n,-}:=\{\mbox{for all }k\in\{-\lceil n^{b}\rceil,\ldots,-1\},\,M_{k}(m_{n})\leq c_{1}m_{n}^{3/4}\}.

From the above observations, $P(R_{n,+}\cup R_{n,-})\leq 2(1-c_{2})^{\lceil n^{b}\rceil}$.

Let $q_{n}:=\lfloor n(h_{n})^{-2}\rfloor$, and let $V_{n}$ be the event

V_{n}:=\{\mbox{for all }i\in\{0,\ldots,n\},\,-h_{n}\leq W_{i}\cdot e_{2}\leq+h_{n}\}.

By Lemma 6 below, there exists a constant $c_{3}>0$ such that, for all large enough $n$, all $-h_{n}\leq y\leq+h_{n}$ and $x\in\mathbb{Z}$, the probability that a simple random walk on $\mathbb{Z}^{2}$ started at $(x,y)$ at time zero leaves $\mathbb{Z}\times\{-h_{n},\ldots,+h_{n}\}$ before time $h_{n}^{2}$ is larger than $c_{3}$. A consequence is that, for all $q\geq 0$, the probability that the same walk fails to leave $\mathbb{Z}\times\{-h_{n},\ldots,+h_{n}\}$ before time $qh_{n}^{2}$ is less than $(1-c_{3})^{q}$. Therefore $P(V_{n})\leq(1-c_{3})^{q_{n}}$.

Observe now that, on $V_{n}^{c}$,

n\geq\max(\phi(m_{n})_{k};\,1\leq k\leq\lceil n^{b}\rceil)\mbox{ or }n\geq\max(\phi(m_{n})_{k};\,-\lceil n^{b}\rceil\leq k\leq-1).

Hence, on $V_{n}^{c}$,

N_{n}\geq\sum_{k=1}^{\lceil n^{b}\rceil}M_{k}(m_{n})\mbox{ or }N_{n}\geq\sum_{k=-1}^{-\lceil n^{b}\rceil}M_{k}(m_{n}).

We deduce that, on $R_{n,+}^{c}\cap R_{n,-}^{c}\cap V_{n}^{c}$, $N_{n}\geq c_{1}m_{n}^{3/4}>n^{a}$.

As a consequence, $P(N_{n}\leq n^{a})\leq P(R_{n,+}\cup R_{n,-})+P(V_{n})$, so that $P(N_{n}\leq n^{a})\leq 2(1-c_{2})^{\lceil n^{b}\rceil}+(1-c_{3})^{q_{n}}$.

Noting that, as $n$ goes to infinity, $q_{n}\sim nh_{n}^{-2}\sim(4c_{1}^{-4/3})^{-1}n^{1/3-4a/9+2\epsilon}$, the conclusion follows.

Lemma 6.

There exists a constant $c_{3}>0$ such that, for all large enough $h$, all $-h\leq y\leq+h$ and $x\in\mathbb{Z}$, the probability that a simple random walk on $\mathbb{Z}^{2}$ started at $(x,y)$ at time zero leaves $\mathbb{Z}\times\{-h,\ldots,+h\}$ before time $h^{2}$ is larger than $c_{3}$.

Proof.

Consider the probability that the $e_{2}$-coordinate of the walk is larger than $h$ at time $h^{2}$. By a standard coupling argument, this probability is minimal when $y=-h$, so the central limit theorem applied to the walk started with $y=-h$ yields the existence of $c_{3}$.

Lemma 7.

For all $0<a<3/4$, as $n$ goes to infinity,

\mathbb{P}_{0}(J_{n}\leq n^{a})\leq\exp\left(-n^{\frac{1}{3}-\frac{4a}{9}+o(1)}\right).
Proof.

Observe that, by definition, $I_{k}$ is the sum of $k$ i.i.d. random variables whose distribution is geometric with parameter $2/d$. By a standard large deviations bound, there is a constant $c_{6}$ such that, for all large enough $n$, $P(I_{\lfloor nd^{-1}\rfloor}\geq n)\leq\exp(-c_{6}n)$. Then observe that, if $I_{\lfloor nd^{-1}\rfloor}\leq n$, we have $J_{n}(Y)\geq N_{\lfloor nd^{-1}\rfloor}$ according to Lemma 4 above. (Remember that, by definition, $J_{n}(Y)$ is the number of indices $0\leq k\leq n$ such that $Y_{k}\notin\{Y_{i};\,0\leq i\leq k-1\}$.) Now, according to Lemma 5 above, we have that, for all $0<a<3/4$, as $n$ goes to infinity,

P(N_{\lfloor nd^{-1}\rfloor}\leq\lfloor nd^{-1}\rfloor^{a})\leq\exp\left(-\lfloor nd^{-1}\rfloor^{\frac{1}{3}-\frac{4a}{9}+o(1)}\right),

from which it is easy to deduce that, for all $0<a<3/4$, as $n$ goes to infinity, $P(N_{\lfloor nd^{-1}\rfloor}\leq n^{a})\leq\exp\left(-n^{\frac{1}{3}-\frac{4a}{9}+o(1)}\right)$. Now we deduce from the union bound that $P(J_{n}(Y)\leq n^{a})\leq P(I_{\lfloor nd^{-1}\rfloor}\geq n)+P(N_{\lfloor nd^{-1}\rfloor}\leq n^{a})$. The conclusion follows.

4.2. Estimates on the displacement of the walk

Lemma 8.

For all $1/2<a<3/4$, as $n$ goes to infinity,

\mathbb{P}_{0}(X_{n}\cdot e_{1}\leq n^{a})\leq\exp\left(-n^{\psi(a)+o(1)}\right),

where $\psi(a):=\min\left(\frac{1}{3}-\frac{4a}{9},2a-1\right)$.

Proof.

Let $\gamma:=\frac{2p-1}{2d}$. Let $(\varepsilon_{i})_{i\geq 1}$ be an i.i.d. family of random variables with common distribution $\mu$ on $\mathbf{b}$, and let $(\eta_{i})_{i\geq 1}$ be an i.i.d. family of random variables with common distribution $\nu$ on $\mathbf{b}$, independent from $(\varepsilon_{i})_{i\geq 1}$. Let us call $(\Omega_{2},\mathcal{G}_{2},Q)$ the probability space on which these variables are defined.

Define the sequence of random variables $(\xi_{i})_{i\geq 0}$ taking values in $\mathbb{Z}^{d}$, as follows. First, set $\xi_{0}:=0$. Consider then $n\geq 0$, assume that $\xi_{0},\ldots,\xi_{n}$ have already been defined, and consider the number $J_{n}(\xi)$ of indices $0\leq k\leq n$ such that $\xi_{k}\notin\{\xi_{i};\,0\leq i\leq k-1\}$. If $\xi_{n}\notin\{\xi_{i};\,0\leq i\leq n-1\}$, set $\xi_{n+1}:=\xi_{n}+\varepsilon_{J_{n}(\xi)}$. Otherwise, let $\xi_{n+1}:=\xi_{n}+\eta_{n-J_{n}(\xi)+1}$. It is easy to check that the sequence $(\xi_{n})_{n\geq 0}$ is an excited random walk on $\mathbb{Z}^{d}$ with bias parameter $p$.
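For concreteness, this construction can be sketched as follows (hypothetical function name walk_from_steps; eps and eta stand for finite samples of $(\varepsilon_{i})_{i\geq 1}$ and $(\eta_{i})_{i\geq 1}$, given as nonempty lists of step vectors):

```python
def walk_from_steps(eps, eta):
    """Build the walk (xi_n): fresh sites consume successive steps from eps (law mu),
    previously visited sites consume successive steps from eta (law nu)."""
    n_steps = min(len(eps), len(eta))       # enough steps of each kind are then available
    xi = [(0,) * len(eps[0])]
    visited_before = set()
    J = 0                                   # J_n(xi) = number of fresh indices among 0..n
    for n in range(n_steps):
        cur = xi[-1]
        if cur not in visited_before:       # xi_n is a fresh site
            visited_before.add(cur)
            J += 1
            step = eps[J - 1]               # epsilon_{J_n(xi)} (1-indexed in the text)
        else:
            step = eta[n - J]               # eta_{n - J_n(xi) + 1} (1-indexed in the text)
        xi.append(tuple(c + s for c, s in zip(cur, step)))
    return xi
```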

Now, according to Lemma 7, for all $1/2<a<3/4$, $Q(J_{n}\leq n^{a})\leq\exp\left(-n^{\frac{1}{3}-\frac{4a}{9}+o(1)}\right)$. It is an easy consequence that, for all $1/2<a<3/4$, $Q(J_{n-1}\leq 2\gamma^{-1}n^{a})\leq\exp\left(-n^{\frac{1}{3}-\frac{4a}{9}+o(1)}\right)$. Observe next that, by definition, for $n\geq 1$, $\xi_{n}=\sum_{i=1}^{J_{n-1}}\varepsilon_{i}+\sum_{i=1}^{n-J_{n-1}}\eta_{i}$. Moreover, there exists a constant $c_{4}$ such that, for all large enough $n$, and every $2\gamma^{-1}n^{a}\leq k\leq n$,

Q\left(\sum_{i=1}^{k}\varepsilon_{i}\cdot e_{1}\leq(3/2)n^{a}\right)\leq Q\left(\sum_{i=1}^{k}\varepsilon_{i}\cdot e_{1}\leq\frac{3}{4}\gamma k\right)\leq\exp\left(-c_{4}n^{a}\right),

by a standard large deviations bound for the sum $\sum_{i=1}^{k}\varepsilon_{i}\cdot e_{1}$, whose terms are i.i.d. bounded random variables with expectation $(2p-1)/d=2\gamma>0$. By the union bound, we see that

Q\left(\sum_{i=1}^{J_{n-1}}\varepsilon_{i}\cdot e_{1}\leq(3/2)n^{a}\right)\leq n\exp\left(-c_{4}n^{a}\right)+\exp\left(-n^{\frac{1}{3}-\frac{4a}{9}+o(1)}\right).

Now, there exists a constant $c_{5}$ such that, for all large enough $n$ and for every $1\leq k\leq n$, $Q\left(\sum_{i=1}^{k}\eta_{i}\cdot e_{1}\leq-(1/2)n^{a}\right)\leq\exp\left(-c_{5}n^{2a-1}\right)$, by a standard Gaussian upper bound for the simple symmetric random walk on $\mathbb{Z}$.

By the union bound again, we see that $Q\left(\sum_{i=1}^{n-J_{n-1}}\eta_{i}\cdot e_{1}\leq-(1/2)n^{a}\right)\leq n\exp\left(-c_{5}n^{2a-1}\right)$. The conclusion follows.

Lemma 9.

As $n$ goes to infinity,

\mathbb{P}_{0}(n\leq D<+\infty)\leq\exp\left(-n^{1/11+o(1)}\right).
Proof.

Consider $1/2<a<3/4$, and write $\mathbb{P}_{0}(n\leq D<+\infty)=\sum_{k=n}^{+\infty}\mathbb{P}_{0}(D=k)\leq\sum_{k=n}^{+\infty}\mathbb{P}_{0}(X_{k}\cdot e_{1}=0)\leq\sum_{k=n}^{+\infty}\mathbb{P}_{0}(X_{k}\cdot e_{1}\leq k^{a})$. Now, according to Lemma 8, $\mathbb{P}_{0}(X_{k}\cdot e_{1}\leq k^{a})\leq\exp\left(-k^{\psi(a)+o(1)}\right)$. It is then easily checked that $\sum_{k=n}^{+\infty}\exp\left(-k^{\psi(a)+o(1)}\right)\leq\exp\left(-n^{\psi(a)+o(1)}\right)$. As a consequence, $\mathbb{P}_{0}(n\leq D<+\infty)\leq\exp\left(-n^{\psi(a)+o(1)}\right)$. Choosing $a$ so as to maximize $\psi(a)$, that is, $a=6/11$, for which $\psi(a)=1/11$, the result follows.

4.3. Proof of Proposition 1

Let $a_{1},a_{2},a_{3}$ be positive real numbers such that $a_{1}<3/4$ and $a_{2}+a_{3}<a_{1}$. For every $n>0$, let $u_{n}:=\lfloor n^{a_{1}}\rfloor$, $v_{n}:=\lfloor n^{a_{2}}\rfloor$, $w_{n}:=\lfloor n^{a_{3}}\rfloor$. In the sequel, we assume that $n$ is large enough so that $v_{n}(w_{n}+1)+2\leq u_{n}$. Let

A_{n}:=\{X_{n}\cdot e_{1}\leq u_{n}\};\,B_{n}:=\bigcap_{k=0}^{v_{n}}\{D_{k}<+\infty\};\,C_{n}:=\bigcup_{k=0}^{v_{n}}\{w_{n}\leq D_{k}-S_{k}<+\infty\}.

(With the convention that, in the definition of $C_{n}$, $D_{k}-S_{k}=+\infty$ whenever $D_{k}=+\infty$.) We shall prove that $\{\kappa\geq n\}\subset A_{n}\cup B_{n}\cup C_{n}$, then apply the union bound to $\mathbb{P}_{0}(A_{n}\cup B_{n}\cup C_{n})$, and then separately bound the three probabilities $\mathbb{P}_{0}(A_{n})$, $\mathbb{P}_{0}(B_{n})$, $\mathbb{P}_{0}(C_{n})$.

Assume that $A_{n}^{c}\cap B_{n}^{c}\cap C_{n}^{c}$ occurs. Our goal is to prove that this assumption implies that $\kappa<n$.

Call $M$ the smallest index $k$ between $0$ and $v_{n}$ such that $D_{k}=+\infty$, whose existence is ensured by $B_{n}^{c}$. By definition, $\kappa=S_{M}$, so we have to prove that $S_{M}<n$. For notational convenience, let $D_{-1}=0$. By definition of $M$, we know that $D_{M-1}<+\infty$. Now write $r_{D_{M-1}}=\sum_{k=0}^{M-1}\left[(r_{D_{k}}-r_{S_{k}})+(r_{S_{k}}-r_{D_{k-1}})\right]$, with the convention that $\sum_{k=0}^{-1}=0$. Since the walk performs nearest-neighbor steps, we see that for all $0\leq k\leq M-1$, $r_{D_{k}}-r_{S_{k}}\leq D_{k}-S_{k}$. On the other hand, by definition, for all $0\leq k\leq M-1$, $r_{S_{k}}-r_{D_{k-1}}=1$. Now, for all $0\leq k\leq M-1$, $D_{k}-S_{k}\leq w_{n}$, due to the fact that $C_{n}^{c}$ holds and that $D_{k}<+\infty$. As a consequence, we obtain that $r_{D_{M-1}}\leq M(w_{n}+1)\leq v_{n}(w_{n}+1)$. Remember now that $v_{n}(w_{n}+1)+2\leq u_{n}$, so we have proved that $r_{D_{M-1}}+2\leq u_{n}$. Now observe that, on $A_{n}^{c}$, $X_{n}\cdot e_{1}>u_{n}$. As a consequence, the smallest $i$ such that $X_{i}\cdot e_{1}=r_{D_{M-1}}+1$ must be $<n$. But $S_{M}$ is precisely the smallest $i$ such that $X_{i}\cdot e_{1}=r_{D_{M-1}}+1$, so we have proved that $S_{M}<n$ on $A_{n}^{c}\cap B_{n}^{c}\cap C_{n}^{c}$.

The union bound then yields the fact that, for large enough $n$, $\mathbb{P}_{0}(\kappa\geq n)\leq\mathbb{P}_{0}(A_{n})+\mathbb{P}_{0}(B_{n})+\mathbb{P}_{0}(C_{n})$.

Now, from Lemma 8, we see that $\mathbb{P}_{0}(A_{n})\leq\exp(-n^{\psi(a_{1})+o(1)})$. By repeatedly applying Lemma 2 and the strong Markov property at the stopping times $S_{k}$ for $k=0,\ldots,v_{n}$ to the process $(\beta(n),X_{n})_{n\in\mathbb{N}}$, we see that $\mathbb{P}_{0}(B_{n})\leq\mathbb{P}_{0}(D<+\infty)^{v_{n}}$. Hence, from Lemma 1, we know that $\mathbb{P}_{0}(B_{n})\leq(1-\delta)^{v_{n}}$.

From the union bound and Lemma 3, we see that $\mathbb{P}_{0}(C_{n})\leq(v_{n}+1)\mathbb{P}_{0}(w_{n}\leq D<+\infty)$, so, by Lemma 9, $\mathbb{P}_{0}(C_{n})\leq(v_{n}+1)\exp(-n^{a_{3}/11+o(1)})$.

Combining these three bounds, we finally obtain the following estimate:

\mathbb{P}_{0}(\kappa\geq n)\leq(1-\delta)^{\lfloor n^{a_{2}}\rfloor}+(\lfloor n^{a_{2}}\rfloor+1)\exp\left(-n^{a_{3}/11+o(1)}\right)+\exp\left(-n^{\psi(a_{1})+o(1)}\right).

Now, for all $\epsilon$ small enough, choose $a_{1}=12/19$, $a_{2}=1/19$, $a_{3}=11/19-\epsilon$. With these choices, $\psi(a_{1})=a_{2}=1/19$ and $a_{3}/11=1/19-\epsilon/11$, so that, $\epsilon$ being arbitrary, the right-hand side is bounded by $\exp(-n^{1/19+o(1)})$. This ends the proof of Proposition 1.

5. Simulation results

We have performed simulations of the model in dimension $d=2$, using C code and the GNU Scientific Library facilities for random number generation.

The following graph is a plot of an estimate of $v(p,2)$ as a function of $p$. Each point is the average over 1000 independent simulations of $(X_{10000}\cdot e_{1})/10000$.

[Figure: estimated speed $v(p,2)$ as a function of the bias parameter $p$]

The following graph is a plot of an estimate of $\sigma(p,2)$ as a function of $p$. Each point is the standard deviation over 1000000 independent simulations of $(X_{10000}\cdot e_{1})/(10000)^{1/2}$ (obtaining a reasonably smooth curve required many more simulations for $\sigma$ than for $v$).

[Figure: estimated standard deviation $\sigma(p,2)$ as a function of the bias parameter $p$]
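For completeness, here is a compact sketch of the estimation procedure just described, written in Python rather than the C/GSL code actually used for the plots; excited_walk refers to the simulation sketch given in the Introduction, and the default sample sizes below are placeholders, much smaller than the 1000 and 1000000 runs mentioned above.

```python
import statistics

def estimate_speed_and_sigma(p, d=2, n_steps=10000, n_runs=1000, seed=0):
    """Monte Carlo estimates of v(p,d) and sigma(p,d) from the e_1-coordinate of
    independent walks at time n_steps (cf. Theorem 1)."""
    endpoints = []
    for run in range(n_runs):
        # excited_walk: simulation sketch from the Introduction
        path = excited_walk(p, d, n_steps, seed=seed + run)
        endpoints.append(path[-1][0])                        # X_{n_steps} . e_1
    v_hat = statistics.fmean(x / n_steps for x in endpoints)
    sigma_hat = statistics.stdev(x / n_steps ** 0.5 for x in endpoints)
    return v_hat, sigma_hat
```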

References

  • [1] Itai Benjamini and David B. Wilson. Excited random walk. Electron. Comm. Probab., 8:86–92 (electronic), 2003.
  • [2] E. Bolthausen. On the volume of the Wiener sausage. Ann. Probab., 18(4):1576–1582, 1990.
  • [3] Mireille Bousquet-Mélou and Gilles Schaeffer. Walks on the slit plane. Probab. Theory Related Fields, 124(3):305–344, 2002.
  • [4] Burgess Davis. Weak limits of perturbed random walks and the equation $Y_{t}=B_{t}+\alpha\sup\{Y_{s}\colon s\leq t\}+\beta\inf\{Y_{s}\colon s\leq t\}$. Ann. Probab., 24(4):2007–2023, 1996.
  • [5] Burgess Davis. Brownian motion and random walk perturbed at extrema. Probab. Theory Related Fields, 113(4):501–518, 1999.
  • [6] M. D. Donsker and S. R. S. Varadhan. On the number of distinct sites visited by a random walk. Comm. Pure Appl. Math., 32(6):721–747, 1979.
  • [7] Gady Kozma. Excited random walk in three dimensions has positive speed. arXiv: math/0310305, 2003.
  • [8] Gady Kozma. Excited random walk in two dimensions has linear speed. arXiv: math/0512535, 2005.
  • [9] Thomas Mountford, Leandro P. R. Pimentel, and Glauco Valle. On the speed of the one-dimensional excited random walk in the transient regime. ALEA Lat. Am. J. Probab. Math. Stat., 2:279–296 (electronic), 2006.
  • [10] Mihael Perman and Wendelin Werner. Perturbed Brownian motions. Probab. Theory Related Fields, 108(3):357–383, 1997.
  • [11] Alain-Sol Sznitman. Long time asymptotics for the shrinking Wiener sausage. Comm. Pure Appl. Math., 43(6):809–820, 1990.
  • [12] Alain-Sol Sznitman. Slowdown estimates and central limit theorem for random walks in random environment. J. Eur. Math. Soc. (JEMS), 2(2):93–143, 2000.
  • [13] Alain-Sol Sznitman and Martin Zerner. A law of large numbers for random walks in random environment. Ann. Probab., 27(4):1851–1869, 1999.
  • [14] Remco van der Hofstad and Mark Holmes. An expansion for self-interacting random walks. arXiv:0706.0614, 2006.
  • [15] Stanislav Volkov. Excited random walk on trees. Electron. J. Probab., 8:no. 23, 15 pp. (electronic), 2003.
  • [16] Martin P. W. Zerner. Multi-excited random walks on integers. Probab. Theory Related Fields, 133(1):98–122, 2005.
  • [17] Martin P. W. Zerner. Recurrence and transience of excited random walks on $\mathbb{Z}^{d}$ and strips. Electron. Comm. Probab., 11:118–128 (electronic), 2006.