
Speed of random walk on dynamical percolation
in nonamenable transitive graphs

Chenlin Gu Yau Mathematical Sciences Center, Tsinghua University, Beijing, China. gclmath@tsinghua.edu.cn Jianping Jiang Yau Mathematical Sciences Center, Tsinghua University, Beijing, China. jianpingjiang@tsinghua.edu.cn Yuval Peres Beijing Institute of Mathematical Sciences and Applications, Beijing, China. yperes@gmail.com Zhan Shi Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China. shizhan@amss.ac.cn Hao Wu Yau Mathematical Sciences Center, Tsinghua University, and Beijing Institute of Mathematical Sciences and Applications, Beijing, China. hao.wu.proba@gmail.com  and  Fan Yang Yau Mathematical Sciences Center, Tsinghua University, and Beijing Institute of Mathematical Sciences and Applications, Beijing, China. fyangmath@tsinghua.edu.cn
Abstract.

Let $G$ be a nonamenable transitive unimodular graph. In dynamical percolation, every edge in $G$ refreshes its status at rate $\mu>0$, and following the refresh, each edge is open independently with probability $p$. The random walk traverses $G$ only along open edges, moving at rate $1$. In the critical regime $p=p_c$, we prove that the speed of the random walk is at most $O(\sqrt{\mu\log(1/\mu)})$, provided that $\mu\leq e^{-1}$. In the supercritical regime $p>p_c$, we prove that the speed on $G$ is of order 1 (uniformly in $\mu$), while in the subcritical regime $p<p_c$, the speed is of order $\mu\wedge 1$.

1. Introduction

Let $G=(V,E)$ be an infinite graph, where $V$ and $E$ denote the sets of vertices and edges, respectively. The dynamical percolation process on $G$ is defined as follows: the edges in $E$ are either open or closed and independently refresh their status at an exponential rate $\mu>0$. Upon refreshing, each edge becomes open with probability $p\in(0,1)$ or closed with probability $1-p$. (It is important to note the distinction between "refreshing" and "flipping": when an edge is refreshed, it has probability $p$ of becoming open, regardless of its previous state.) Fixing a starting point $X_0=o\in V$, we initiate a continuous-time random walk denoted by $(X_t)_{t\geq 0}$ on $G$. The particle jumps at rate $1$ and randomly selects one of its neighboring vertices to jump to, provided that the connecting edge is open at that specific time.
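To make the dynamics concrete, the following is a minimal simulation sketch of the model on a finite toy graph (our own illustration, not part of the paper); the function name, the adjacency structure, and the parameter values are placeholder choices.

import random

def simulate(adj, o, p, mu, t_max, rng=random.Random(0)):
    """Random walk on dynamical percolation (illustrative sketch).

    adj   : dict mapping each vertex to the list of its neighbours
    o     : starting vertex
    p     : probability that a refreshed edge becomes open
    mu    : refresh rate per edge
    t_max : time horizon
    Returns the position of the walker at time t_max.
    """
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    edge_list = list(edges)
    state = {e: rng.random() < p for e in edges}      # stationary initial law pi_p
    x, t = o, 0.0
    total_rate = mu * len(edges) + 1.0                # all refresh clocks + the walk clock
    while True:
        t += rng.expovariate(total_rate)              # time to the next event
        if t > t_max:
            return x
        if rng.random() < mu * len(edges) / total_rate:
            # a uniformly chosen edge refreshes: open w.p. p, closed otherwise
            e = rng.choice(edge_list)
            state[e] = rng.random() < p
        else:
            # the walker attempts to cross a uniformly chosen incident edge
            y = rng.choice(adj[x])
            if state[frozenset((x, y))]:
                x = y

# Toy usage on the complete graph K4 (a placeholder finite graph):
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
print(simulate(adj, o=0, p=0.5, mu=0.1, t_max=100.0))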

We study the random walk on dynamical percolation in nonamenable graphs. Specifically, we are interested in the interplay between the refresh rate $\mu$ and the speed of the random walk. Recall that the Cheeger constant of a graph $G$ is defined by

\Phi(G):=\inf\left\{\frac{|\partial_{E}W|}{\sum_{v\in W}\deg(v)}:W\text{ finite subset of }V\right\}, \qquad (1.1)

where $\partial_E W$ denotes the edge boundary of $W$, i.e., the set of edges in $E$ with one endpoint in $W$ and the other endpoint in $V\setminus W$. The graph $G$ is said to be nonamenable if $\Phi(G)>0$; otherwise it is amenable, i.e., $\Phi(G)=0$.
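For instance (a standard computation we add for illustration; it is not spelled out in the paper), any finite set $W$ of $n$ vertices in the $d$-regular tree $\mathbb{T}_d$ spans at most $n-1$ edges, so $|\partial_E W|\geq dn-2(n-1)=(d-2)n+2$ and

\Phi(\mathbb{T}_{d})=\inf_{n\geq 1}\frac{(d-2)n+2}{dn}=\frac{d-2}{d}>0\qquad(d\geq 3),

so $\mathbb{T}_d$ is nonamenable.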

When $G$ is transitive, the random walk $(X_t)_{t\geq 0}$ exhibits a well-defined speed, i.e., given any $\mu>0$ and $p\in(0,1)$, there exists a constant $v_p(\mu)\in[0,1]$ such that

\lim_{t\to\infty}\frac{\mathrm{dist}(X_{0},\,X_{t})}{t}=v_{p}(\mu)\quad\text{a.s. and in }L^{1}. \qquad (1.2)

Here, $\mathrm{dist}(X_0,X_t)$ represents the graph distance on $G$ between $X_0$ and $X_t$. (For the proof of this fact, see Lemma 2.3 below.) We are particularly interested in the dependence of $v_p(\mu)$ on $\mu$ when $\mu$ is much smaller than 1. This corresponds to a scenario where the status of individual edges changes more slowly than the random walk moves. Note that if $\mu\to\infty$, then $(X_t)_{t\geq 0}$ converges weakly to the simple random walk on $G$ with time scaled by $p$. Similarly, if $\mu$ is of order 1, we may expect the system to behave in various ways like the ordinary random walk. This is also justified by our theoretical results in Theorems 1.1 and 1.2 below.

When $\mu$ is small, the random walk on dynamical percolation can also be viewed as an approximation of the random walk on static percolation. In particular, similar to the random walk on static percolation, the behavior of the random walk on dynamical percolation also exhibits a phase transition as $p$ crosses the critical point. More precisely, let ${\bf P}_p$ denote the probability measure of the (static) Bernoulli-$p$ bond percolation on $G$, where every edge is independently open with probability $p\in[0,1]$. The connected components of vertices with respect to the open edges in the percolation are called clusters, and recall that the critical point of connectivity $p_c=p_c(G)$ is defined by

p_{c}(G):=\sup\left\{p\in[0,1]:\text{ every cluster has finite size }{\bf P}_{p}\text{-a.s.}\right\}. \qquad (1.3)

For infinite transitive graphs with bounded degree and superlinear volume growth, we know that $0<p_c<1$; see [19, Theorem 1.3]. Heuristically, in the subcritical phase $p<p_c$, the moving particle is confined within a finite cluster and must wait for a duration of order $1/\mu$ before the cluster undergoes any notable change. This suggests that the speed in the subcritical phase should be of order $O(\mu)$. On the other hand, in the supercritical phase $p>p_c$, there is a positive probability for the moving particle to be inside an infinite cluster. In that case, the speed of the random walk should be of order $O(1)$ and should not be affected much by the evolution of the graph. We now state the first two results on the speed of the random walk, in the subcritical and supercritical phases respectively; note the qualitative difference between them.

Theorem 1.1 (Subcritical phase).

Let $G$ be a connected, locally finite, nonamenable transitive unimodular graph where each vertex has degree $d\geq 3$. For any $p\in(0,p_c)$, there exist two constants $0<c_{1.1}<C_{1.1}<\infty$ independent of $\mu$ such that the speed of the random walk satisfies the following estimate:

c_{1.1}(\mu\wedge 1)\leq v_{p}(\mu)\leq\left(C_{1.1}\,\mu\right)\wedge 1. \qquad (1.4)
Theorem 1.2 (Supercritical phase).

Let $G$ be a connected, locally finite, nonamenable transitive unimodular graph where each vertex has degree $d\geq 3$. For any $p\in(p_c,1]$, there exists a constant $c_{1.2}\in(0,1)$ independent of $\mu$ such that the speed of the random walk satisfies the following estimate:

c_{1.2}\leq v_{p}(\mu)\leq 1. \qquad (1.5)

The proofs of Theorems 1.1 and 1.2 will be presented in Sections 5 and 4, respectively. By the definition of the model, the upper bound $v_p(\mu)\leq 1$ is trivial for all $\mu>0$ and $p\in[0,1]$. The upper bound $C_{1.1}\,\mu$ in (1.4) is proved following our above heuristic reasoning. The main challenges lie in establishing the lower bounds in (1.4) and (1.5) when $\mu\ll 1$.

Next, we consider the random walk on dynamical percolation at criticality $p=p_c$. In Theorem 1.4, we will show that the (order of the) speed is strictly smaller than that in the supercritical regime. The proof depends crucially on the one-arm estimate for static percolation on $G$. Denote by $\mathcal{C}_v$ the open cluster containing the vertex $v\in V$, and let $\mathsf{rad}_{\operatorname{int}}(\mathcal{C}_v)$ be its intrinsic radius, i.e., the maximum, over vertices $w\in\mathcal{C}_v$, of the intrinsic graph distance (using open edges) between $v$ and $w$.

Definition 1.3 (One-arm estimate).

An infinite graph $G$ is said to satisfy the one-arm estimate with exponent 1 if there exists a positive constant $C>0$ such that for every $v\in V$ and $r>0$,

{\bf P}_{p_{c}}\left(\mathsf{rad}_{\operatorname{int}}(\mathcal{C}_{v})\geq r\right)\leq\frac{C}{r}. \qquad (1.6)
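For orientation, we add a standard example that is not stated in the paper: on the $d$-regular tree $\mathbb{T}_d$ one has $p_c=1/(d-1)$, and the cluster of $v$, explored away from $v$, is a critical Galton-Watson tree with $\mathrm{Binomial}(d-1,1/(d-1))$ offspring, so Kolmogorov's estimate for critical branching processes gives the exponent-1 behavior

{\bf P}_{p_{c}}\left(\mathsf{rad}_{\operatorname{int}}(\mathcal{C}_{v})\geq r\right)\asymp\frac{1}{r},\qquad r\to\infty.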
Theorem 1.4 (Critical phase).

Let $G$ be a connected, locally finite, transitive unimodular graph where each vertex has degree $d\geq 3$. Suppose it satisfies the one-arm estimate (1.6). Then, there exists a constant $C_{1.4}\in(0,\infty)$ (independent of $\mu$) such that

v_{p_{c}}(\mu)\leq C_{1.4}\sqrt{\mu\log(1/\mu)},\qquad\forall\mu\in(0,1/e]. \qquad (1.7)

While the statement above does not explicitly mention nonamenability, it is implicit in the one-arm estimate (1.6). The one-arm estimate is implied by the mean-field behavior of the critical percolation model. A classical condition used to verify the mean-field behavior of critical percolation is the well-known triangle condition introduced by Aizenman and Newman [2]: for a transitive graph $G=(V,E)$ with an origin $o\in V$, the triangle function is defined as

\nabla_{p}:=\sum_{x,y\in V}{\bf P}_{p}(o\leftrightarrow x)\,{\bf P}_{p}(x\leftrightarrow y)\,{\bf P}_{p}(y\leftrightarrow o). \qquad (1.8)

Then, the triangle condition is $\nabla_{p}<\infty$, which implies the one-arm estimate (1.6) following the argument by Kozma and Nachmias in [37]; see also the discussion by Hutchcroft in [31, Section 7]. Schonmann conjectured in [50, Conjectures 1.1 and 1.2] that critical percolation on all nonamenable transitive graphs exhibits mean-field behavior. Recently, Hutchcroft proved this conjecture for all nonamenable nonunimodular graphs in [31] and for a large family of nonamenable unimodular graphs in [29, 30, 32], which includes the Gromov hyperbolic graphs. In addition, another $L^2$-boundedness condition was proposed in [30] as a sufficient criterion for the triangle condition.

Finally, we also obtain a lower bound for the speed at criticality. The following estimate (1.9) holds for all $p\in(0,1]$ and provides an optimal lower bound in the subcritical phase. However, it fails to yield a sharp lower bound in both the critical and supercritical phases.

Proposition 1.5.

Let $G$ be a connected, locally finite, nonamenable transitive graph where each vertex has degree $d\geq 3$. For any $p\in(0,1]$, there exists a constant $c_{1.5}\in(0,\infty)$ independent of $\mu$ such that the random walk satisfies the following lower bound:

v_{p}(\mu)\geq c_{1.5}(\mu\wedge 1). \qquad (1.9)

It is natural to ask similar questions about the mean squared displacement ${\mathbb{E}}[|X_t-X_0|^2]$ (corresponding to our ${\mathbb{E}}[\mathrm{dist}(X_0,X_t)]$) and the diffusion constant $D_p(\mu)$ (corresponding to our speed $v_p(\mu)$) for the random walk on dynamical percolation in ${\mathbb{Z}}^d$ lattices. Sharp estimates on the mean squared displacement and diffusion constant in the subcritical regime have been derived by Peres, Stauffer, and Steif in [48]. Some estimates on the mixing time and mean squared displacement in the supercritical case were also proved by Peres, Sousi, and Steif in [47], although the results there do not provide a precise lower bound for the diffusion constant. A general lower bound for the diffusion constant, analogous to Proposition 1.5, for all $p\in(0,1]$ was established by Peres, Sousi, and Steif in [46]. The critical regime for the random walk on dynamical percolation in ${\mathbb{Z}}^d$ lattices, however, has not been addressed in the literature so far. In a companion paper [24], we investigate the random walk on critical dynamical percolation in ${\mathbb{Z}}^d$ lattices and establish a sharp lower bound for the diffusion constant in the supercritical case. We obtain results similar to those in Theorem 1.4 and Theorem 1.2 for the diffusion constant, which show that the critical and supercritical regimes are qualitatively different.

An important special case of nonamenable transitive unimodular graphs is the regular tree $\mathbb{T}_d$, where each vertex has degree $d$. It is a widely accepted belief that the random walk on trees is closely connected to the random walk on critical percolation in high-dimensional ${\mathbb{Z}}^d$ lattices. This belief stems from the observation that critical percolation clusters in high-dimensional lattices exhibit only local cycles and resemble critical percolation on a tree. This heuristic has been rigorously justified in various contexts. For instance, Kozma and Nachmias proved the Alexander-Orbach conjecture in high dimensions [37], which was previously proved for trees by Kesten [36] and Barlow and Kumagai [9]. See also the survey by Heydenreich and van der Hofstad [28]. Our results in this paper, along with those presented in the companion paper [24], provide further evidence regarding the connection between the diffusion constant on high-dimensional lattices and the speed on trees. In fact, the proofs in these two papers mutually inspire each other, particularly in the critical case.

Related works

The study of percolation on general graphs beyond $\mathbb{Z}^d$ has received significant attention in recent decades. In their seminal paper [14], Benjamini and Schramm proposed a systematic investigation of percolation on general quasi-transitive graphs. In particular, they posed many conjectures regarding the existence of phase transitions and the extinction of infinite clusters at criticality. (Recall that a graph $G$ is called quasi-transitive if the action of the automorphism group $\operatorname{Aut}(G)$ on $V$ has only a finite number of orbits.) Benjamini, Lyons, Peres, and Schramm [10, 11] proved that for nonamenable quasi-transitive unimodular graphs, $p_c<1$ and no infinite cluster exists at criticality. Further details can be found in Chapter 8 of the book by Lyons and Peres [42]. The developments in percolation on nonunimodular graphs usually rely on the study of geometric conditions. Lyons [40] proved that every Cayley graph with exponential growth exhibits a non-trivial phase transition. Teixeira [52] proved that $p_c<1$ for graphs with polynomial growth satisfying the local isoperimetric inequality of dimension greater than 1. In a recent groundbreaking result, Duminil-Copin, Goswami, Raoufi, Severo, and Yadin [19] established the existence of phase transitions in all quasi-transitive graphs with super-linear volume growth. Hermon and Hutchcroft [26] established the exponential decay of the cluster size distribution and the anchored expansion for infinite clusters in supercritical percolation on transitive nonamenable graphs.

There is a vast literature on random walks in evolving random environments. We begin by highlighting some related works that share the same context as our paper, specifically focusing on random walks on dynamical percolation. The concept of dynamical percolation on arbitrary graphs was introduced by Häggström, Peres, and Steif [25]. Subsequently, the model of random walk on dynamical percolation in $\mathbb{Z}^d$ was introduced by Peres, Stauffer, and Steif [48]. As previously mentioned, in the context of subcritical/supercritical dynamical percolation in $\mathbb{Z}^d$, several results concerning mean squared displacement, mixing times, and hitting times for the random walk have been proved in [48, 46, 47]. Later, Hermon and Sousi [27] extended the setting to general underlying graphs, establishing a comparison principle between the random walk on dynamical percolation and the simple random walk on the graph regarding hitting and mixing times, the spectral gap, and the log-Sobolev constant. Sousi and Thomas [51] investigated the random walk on dynamical Erdős-Rényi graphs (i.e., dynamical percolation on complete graphs) and showed that the mixing time exhibits a cutoff phenomenon. More recently, Andres, Gantert, Schmid, and Sousi [3] studied the biased random walk on dynamical percolation in $\mathbb{Z}^d$ and established results such as the law of large numbers, the Einstein relation, and monotonicity with respect to the bias. Lelli and Stauffer [38] studied the mixing time of the random walk on a dynamical random cluster model in $\mathbb{Z}^d$, where edge states evolve according to continuous-time Glauber dynamics.

Considerable research interest has also been devoted to studying random walks on dynamical models that differ from dynamical percolation in terms of the evolving mechanism of the underlying graph. For example, Avena, Güldaş, van der Hofstad, den Hollander, and Nagy [4, 5, 6] studied random walks on dynamical configuration models, where a small fraction of edges is sampled and rewired uniformly at random at each unit of time. Caputo and Quattropani [17] investigated the mixing of random walks on dynamic random digraphs, which undergo full regeneration at independent geometrically distributed random time intervals. Figueiredo, Iacobelli, Oliveira, Reed, and Ribeiro [22] and Iacobelli, Ribeiro, Valle, and Zuaznábar [34] considered random walks that build their own trees, wherein the trees evolve with time, randomly, and depending upon the walker. Cai, Sauerwald, and Zanetti [16] and Figueiredo, Nain, Ribeiro, de Souza e Silva, and Towsley [23] studied random walks on evolving graph models generated as general edge-Markovian processes. Iacobelli and Figueiredo [33] considered random walks moving over dynamic networks in the presence of mutual dependencies: the network influences the walker's steps and vice versa. Avin, Koucký, and Lotker [8] and Sauerwald and Zanetti [49] derived bounds for cover, mixing, and hitting times of random walks on dynamically changing graphs, where the set of edges changes in each round under some mild assumptions regarding the evolving mechanism.

In the preceding discussion, we have focused on highlighting a few representative studies that are particularly relevant to random walks in evolving random environments on nonamenable (or tree-like) graphs. There are also numerous references concerning random walks in evolving random environments on $\mathbb{Z}^d$ lattices, which will be reviewed in the companion paper [24].

Organization

The rest of the paper is organized as follows. In Section 2, we provide several preliminary results that will be used in the proofs of the main results, including the choice of the initial bond configuration, the existence of speed, and the stationarity of the environment seen by the moving particle. In Section 3, we consider the critical case and present the proof of Theorem 1.4. The proofs for the supercritical case (Theorem 1.2) and the subcritical case (Theorem 1.1) are presented in Section 4 and Section 5, respectively. Additionally, in Section 5, we establish the general lower bound stated in Proposition 1.5. Finally, some concluding remarks and open questions are stated in Section 6.

Notations. We will use the sets of natural numbers ${\mathbb{N}}=\{1,2,3,\ldots\}$ and positive real numbers $\mathbb{R}_{+}=\mathbb{R}\cap(0,\infty)$. For any (real or complex) numbers $a$ and $b$, we will use the notations $a\lesssim b$, $a=O(b)$, or $b=\Omega(a)$ to mean that $|a|\leq C|b|$ for a constant $C>0$ that does not depend on $\mu$.

2. Preliminaries

This section aims to establish several preliminary results regarding the random walk on dynamical percolation. In some results, the underlying graph can be more general than our setting in the main results. These results include the impact of the initial bond configuration of the dynamical percolation, the estimate of reset times, the existence of speed on transitive graphs, and the stationarity of the environment seen by the moving particle (which is the only result in this section that requires unimodularity of the underlying graph).

Given a graph $G=(V,E)$, we denote by $\mathrm{dist}(u,v)$ the graph distance between two vertices $u,v\in V$. Let $B(v,r):=\{x\in V:\mathrm{dist}(x,v)\leq r\}$ denote the $r$-neighborhood of $v$ in $G$. In the presence of percolation, we use $\mathrm{dist}_{\mathcal{C}}(u,v)$ to denote the chemical distance, i.e., the length of the shortest path connecting $u$ and $v$ that consists only of open edges. If $u,v$ belong to different clusters, then $\mathrm{dist}_{\mathcal{C}}(u,v)=\infty$. We call $\mathrm{dist}(\cdot,\cdot)$ and $\mathrm{dist}_{\mathcal{C}}(\cdot,\cdot)$ the extrinsic and intrinsic distances on the percolation, respectively. Let $\mathcal{C}_v$ represent the cluster containing the vertex $v$. We define the extrinsic radius of $\mathcal{C}_v$ as

\mathsf{rad}_{\operatorname{ext}}(\mathcal{C}_{v}):=\sup\{\mathrm{dist}(v,u):u\in\mathcal{C}_{v}\}. \qquad (2.1)

Similarly, we define the intrinsic radius of $\mathcal{C}_v$ as

\mathsf{rad}_{\operatorname{int}}(\mathcal{C}_{v}):=\sup\{\mathrm{dist}_{\mathcal{C}}(v,u):u\in\mathcal{C}_{v}\}. \qquad (2.2)

We now introduce the following notations for the random walk on dynamical percolation in an underlying graph $G=(V,E)$:

  • Let $\eta:E\times{\mathbb{R}}_{+}\to\{0,1\}$ denote the state of the edges, where $\eta_t(e)=1$ (resp. $0$) indicates that the edge $e$ is open (resp. closed) at time $t$. Each edge refreshes independently at rate $\mu\in(0,\infty)$, and when the refresh happens, the edge is open with probability $p\in(0,1)$ and closed with probability $1-p$. For each $e\in E$, we use $0\leq\chi^{e}_{1}<\chi^{e}_{2}<\cdots$ to denote the sequence of refresh times. Note that $\{\chi^{e}_{j}\}_{j\in{\mathbb{N}}}$ forms a Poisson point process of intensity $\mu$ on ${\mathbb{R}}_{+}$.

  • Let $X:{\mathbb{R}}_{+}\to V$ denote the position of the moving particle, which attempts to jump to one of its neighbors chosen uniformly at random and independently of the dynamics $(\eta_t)_{t\geq 0}$, at rate $1$. A jump is successful if and only if the edge the particle attempts to cross is open. Denote by $\xi_k$ the moment of the $k$-th attempt to jump, and let ${\mathbf{e}}_k$ denote the corresponding edge the particle attempts to cross. Note that $\{\xi_k\}_{k\in{\mathbb{N}}}$ forms a Poisson point process of intensity $1$ on ${\mathbb{R}}_{+}$.

Suppose the particle starts from a vertex $x\in V$. Given an initial bond configuration $\omega\in\{0,1\}^E$, we denote by ${\mathbb{P}}_{\omega,x}$ the probability measure generated by

\left(\{\xi_{k},{\mathbf{e}}_{k}\}_{k\in{\mathbb{N}}},\ \{\chi^{e}_{j}\}_{j\in{\mathbb{N}},e\in E},\ X_{0}=x,\ \eta_{0}=\omega\right). \qquad (2.3)

The process $(X_t,\eta_t)_{t\geq 0}$ is Markovian under ${\mathbb{P}}_{\omega,x}$. By default, we set $o\in V$ as the initial position of the particle, and write ${\mathbb{P}}_{\omega}={\mathbb{P}}_{\omega,o}$ for short. We use ${\mathbb{P}}^{\bm{\eta}}$ to denote the probability measure conditioned on the whole evolution of the dynamical percolation process $(\eta_t)_{t\geq 0}$. Notice that the product Bernoulli measure $\pi_p:=\mathrm{Ber}(p)^{E}$ is the invariant measure for the process $(\eta_t)_{t\geq 0}$. Then, we define the probability measure

{\mathbb{P}}:={\mathbb{E}}_{\pi_{p}}{\mathbb{P}}_{\omega}, \qquad (2.4)

which is the annealed probability measure with respect to $\bm{\eta}$ when the initial bond configuration is distributed according to $\pi_p$.

2.1. Choosing the initial environment

The following proposition shows that to prove the main results, it suffices to choose the initial distribution to be stationary.

Proposition 2.1.

Consider the random walk $(X_t)_{t\geq 0}$ on dynamical percolation in a connected and locally finite graph $G=(V,E)$, started at $o\in V$. If an event $D$ in the $\sigma$-field generated by $(X_t)_{t\geq 0}$ satisfies ${\mathbb{P}}_{\eta}(D)=1$ for a.e. initial environment $\eta\in\{0,1\}^E$ sampled according to the product Bernoulli measure $\pi_p=\mathrm{Ber}(p)^{E}$, then ${\mathbb{P}}_{\omega}(D)=1$ for every initial environment $\omega\in\{0,1\}^E$.

Proof.

Given $\omega\in\{0,1\}^E$ and an integer $r>1$, let $\omega^r$ denote a random initial environment that agrees with $\omega$ on the ball $B(o,r)$ and is i.i.d. $\mathrm{Ber}(p)$ outside this ball. Since $\omega^r$ is obtained by conditioning the random variables with law $\mathrm{Ber}(p)^{E}$ on an event of positive probability, we have that ${\mathbb{P}}_{\omega^r}(D)=1$.

We couple the processes started from $\omega$ and $\omega^r$, respectively, so that every edge $e$ is refreshed at the same times $\chi^e_1,\chi^e_2,\ldots$ in both environments and has the same status after updating, i.e., $\omega^r_t(e)=\omega_t(e)$ for all $t\geq\chi^e_1$. Furthermore, the sequence of times $\xi_j$ at which the particle attempts to jump is the same under both measures. If $\{{\mathbf{e}}_j\}$ is the sequence of edges selected by the moving particle under ${\mathbb{P}}_{\omega}$, we may select the same edges ${\mathbf{e}}_j$ under ${\mathbb{P}}_{\omega^r}$, for every index $j$ that satisfies

\forall i<j,\quad\{\chi_{1}^{{\mathbf{e}}_{i}}<\xi_{i}\quad\text{or}\quad{\mathbf{e}}_{i}\in B(o,r)\}.

Next, consider the events

Q_{n}:=\{{\mathbf{e}}_{n}\neq{\mathbf{e}}_{j}\ \ \forall j<n\}.

For each $n$, the probability that ${\mathbf{e}}_n$ is selected for the first time before it is refreshed is

{\mathbb{P}}_{\omega}\bigl(Q_{n}\cap\{\xi_{n}<\chi_{1}^{\mathbf{e}_{n}}\}\bigr)={\mathbb{E}}_{\omega}\bigl(e^{-\mu\xi_{n}}{\bf 1}_{Q_{n}}\bigr)\leq{\mathbb{E}}_{\omega}\bigl(e^{-\mu\xi_{n}}\bigr)=(1+\mu)^{-n},

where the first equality is obtained by conditioning on $\xi_n$, $Q_n$, and ${\mathbf{e}}_n$. Define

Q:=\bigcup_{n>r}\bigl(Q_{n}\cap\{\xi_{n}<\chi_{1}^{{\mathbf{e}}_{n}}\}\bigr).

Given $\varepsilon>0$, we can choose $r=r(\mu,\varepsilon)$ so that

{\mathbb{P}}_{\omega}(Q)\leq\sum_{n>r}{\mathbb{P}}_{\omega}\bigl(Q_{n}\cap\{\xi_{n}<\chi_{1}^{{\mathbf{e}}_{n}}\}\bigr)\leq\sum_{n>r}(1+\mu)^{-n}<\varepsilon.

On the event $Q^c$, the edges $\{{\mathbf{e}}_j\}_{j\geq 1}$ selected under ${\mathbb{P}}_{\omega^r}$ are the same as those selected under ${\mathbb{P}}_{\omega}$, and $\omega_{\xi_j}({\mathbf{e}}_j)=\omega^r_{\xi_j}({\mathbf{e}}_j)$ for all $j\geq 1$. The locations of the particle $(X_t)_{t\geq 0}$ are determined by the variables $\{\xi_j,\,{\mathbf{e}}_j,\,\eta_{\xi_j}({\mathbf{e}}_j)\}_{j\geq 1}$, so

{\mathbb{P}}_{\omega}(D)\geq{\mathbb{P}}_{\omega^{r}}(D\cap Q^{c})\geq 1-\varepsilon.

Since $\varepsilon$ can be arbitrarily small, the result follows. ∎

2.2. Uniform upper bound for the reset time

We denote by $A_t$ the memory set at time $t$: the set of edges that the particle has attempted to cross during $[0,t]$ and that have not been refreshed since the last such attempt, i.e.,

A_{t}:=\left\{e\in E:\max_{k\in{\mathbb{N}}}\{\xi_{k}:\xi_{k}\leq t,\,{\mathbf{e}}_{k}=e\}>\max_{j\in{\mathbb{N}}}\{\chi^{e}_{j}:\chi^{e}_{j}\leq t\}\right\}. \qquad (2.5)

Here, we adopt the convention that $\max\emptyset=0$. Therefore, an edge $e$ that the particle has never attempted to cross before time $t$ is not included in the memory set $A_t$. We define the reset times as the sequence of moments $(T_k)_{k\in{\mathbb{N}}}$ at which the memory set returns to the empty state from a non-empty state: $T_0=0$ and

T_{k}:=\inf\left\{t>T_{k-1}:|A_{t}|=0\ \text{ and }\sup_{s\in(T_{k-1},t)}|A_{s}|\geq 1\right\},\qquad\forall k\in{\mathbb{N}}. \qquad (2.6)

The next result gives a rough uniform bound on this reset time.

Lemma 2.2.

For any infinite graph $G$ and any initial bond configuration $\omega\in\{0,1\}^E$, the following upper bound holds for the reset times:

{\mathbb{E}}_{\omega}[T_{k}-T_{k-1}]\leq e^{\frac{1}{\mu}},\qquad\forall k\in{\mathbb{N}}. \qquad (2.7)
Proof.

It suffices to estimate ${\mathbb{E}}_{\omega}[T_1]$, as the same proof applies to ${\mathbb{E}}_{\omega}[T_k-T_{k-1}]$. We define the hitting time of $1$ for the process $(|A_t|)_{t\geq 0}$:

\tau_{1}:=\inf\left\{t>0:|A_{t}|=1\right\}.

The proof is based on the observation that the size of the memory set $(|A_t|)_{t\geq 0}$ is stochastically dominated by a birth-death process $(S_t)_{t\geq 0}$ with birth rate $1$ and death rate $\mu|S_t|$:

|A_{t}|=S_{t},\quad\forall t\in[0,\tau_{1}], \qquad (2.8)

|A_{t}|\leq S_{t},\quad\forall t\in(\tau_{1},\infty). \qquad (2.9)

Regarding (2.8), since the memory set $A_t$ is empty before $\tau_1$, its rate of increase is the same as that of $S_t$, and we can couple the two processes using the same random variable $\xi_1$. In the subsequent evolution, the rate of increase of the memory set is at most $1$ because an attempt to jump may choose an edge already in $A_t$. Meanwhile, the decay rate of the memory set is $\mu|A_t|$ since every edge refreshes independently at rate $\mu$. This leads to (2.9).

We define another sequence of reset times $(\widetilde{T}_k)_{k\in{\mathbb{N}}}$ for the process $(S_t)_{t\geq 0}$, analogous to (2.6). In view of (2.8) and (2.9), we have that

{\mathbb{E}}_{\omega}[T_{1}]\leq{\mathbb{E}}[\widetilde{T}_{1}].

To estimate ${\mathbb{E}}[\widetilde{T}_1]$, we can compute the stationary distribution $(\tilde{q}_n)_{n\in{\mathbb{N}}}$ of the process $(S_t)_{t\geq 0}$ by solving the detailed balance equation

\tilde{q}_{n}=\tilde{q}_{n+1}\cdot\mu(n+1),\qquad\forall n\in{\mathbb{N}}. \qquad (2.10)
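Iterating this relation and normalizing (a routine computation that we spell out here; it is not displayed in the paper) gives

\tilde{q}_{n}=\frac{\tilde{q}_{0}}{\mu^{n}\,n!},\qquad\sum_{n\geq 0}\tilde{q}_{n}=1\ \Longrightarrow\ \tilde{q}_{n}=e^{-1/\mu}\,\frac{(1/\mu)^{n}}{n!}.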

We find that $(\tilde{q}_n)_{n\in{\mathbb{N}}}$ follows a $\operatorname{Poisson}(\mu^{-1})$ distribution. Therefore, $(S_t)_{t\geq 0}$ is a positive recurrent Markov process (see e.g., Theorem 21.13 in the book [39] by Levin and Peres) and the expectation of $\widetilde{T}_1$ is given by (see e.g., [39, Proposition 1.14 (ii)])

{\mathbb{E}}[\widetilde{T}_{1}]=\tilde{q}_{0}^{-1}=e^{\frac{1}{\mu}}. \qquad (2.11)

This concludes (2.7). ∎

2.3. Existence of speed on transitive graphs

With Lemma 2.2, we can show that the speed of the random walk on dynamical percolation exists for any transitive graph.

Lemma 2.3.

For any infinite transitive graph $G$ and any initial bond configuration $\omega\in\{0,1\}^E$, the speed $v_p(\mu)$ of $(X_t,\eta_t)_{t\geq 0}$ (recall (1.2)) exists ${\mathbb{P}}_{\omega}$-a.s. and in $L^1({\mathbb{P}}_{\omega})$. Furthermore, $v_p(\mu)$ does not depend on the choice of $\omega$.

Proof.

For simplicity, we first assume that the initial configuration $\omega$ is sampled from the product measure $\pi_p$. We will address general initial configurations using Proposition 2.1.

Step 1: limit along the reset times. Recall the reset times $(T_k)_{k\in{\mathbb{N}}}$ defined in (2.6). By the strong Markov property, the stationarity of the initial configuration, and the transitivity of $G$, we know that $(T_k-T_{k-1})_{k\in{\mathbb{N}}}$ are i.i.d. random variables and $(X_{T_k})_{k\in{\mathbb{N}}}$ is a discrete random walk on $G$. Since $\mathrm{dist}(X_0,X_t)$ increases at rate at most $1$, $(\mathrm{dist}(X_0,X_t)-t)_{t\geq 0}$ is a supermartingale. Then, the optional stopping theorem yields that

{\mathbb{E}}[\mathrm{dist}(X_{0},X_{T_{1}\wedge t})-T_{1}\wedge t]\leq 0.

This implies that ${\mathbb{E}}[\mathrm{dist}(X_0,X_{T_1\wedge t})]\leq{\mathbb{E}}[T_1\wedge t]$. Taking $t\to\infty$, we obtain that

{\mathbb{E}}[\mathrm{dist}(X_{0},X_{T_{1}})]\leq\liminf_{t\to\infty}{\mathbb{E}}[\mathrm{dist}(X_{0},X_{T_{1}\wedge t})]\leq\lim_{t\to\infty}{\mathbb{E}}[T_{1}\wedge t]={\mathbb{E}}[T_{1}]<\infty,

where we applied Fatou's lemma in the first step, the monotone convergence theorem in the third step, and (2.7) in the last step. Once we have verified that ${\mathbb{E}}[\mathrm{dist}(X_0,X_{T_1})]<\infty$, [39, Theorem 14.10], which is based on Kingman's subadditive ergodic theorem, shows that the limit

v^{\prime}:=\lim_{n\to\infty}\frac{\mathrm{dist}(X_{0},\,X_{T_{n}})}{n} \qquad (2.12)

exists ${\mathbb{P}}$-a.s. and is a constant.

Step 2: limit along ${\mathbb{R}}_{+}$. We define the total number of attempted jumps during $[s,t]$ as

J[s,t]:=\#\{i\in{\mathbb{N}}_{+}:\xi_{i}\in[s,t]\}. \qquad (2.13)

Note that

\frac{\mathrm{dist}(X_{0},\,X_{T_{n}})-J[T_{n-1},T_{n}]}{T_{n}}\leq\inf_{t\in(T_{n-1},T_{n}]}\frac{\mathrm{dist}(X_{0},\,X_{t})}{t}\leq\sup_{t\in(T_{n-1},T_{n}]}\frac{\mathrm{dist}(X_{0},\,X_{t})}{t}\leq\frac{\mathrm{dist}(X_{0},\,X_{T_{n-1}})+J[T_{n-1},T_{n}]}{T_{n-1}}. \qquad (2.14)

Hence, it suffices to show that the two sides converge to the same limit as $n\to\infty$. First, we have that

\lim_{n\to\infty}\frac{\mathrm{dist}(X_{0},\,X_{T_{n}})}{T_{n}}=\lim_{n\to\infty}\frac{\mathrm{dist}(X_{0},\,X_{T_{n}})}{n}\cdot\frac{n}{T_{n}}=\frac{v^{\prime}}{{\mathbb{E}}[T_{1}]},\qquad{\mathbb{P}}\text{-a.s.} \qquad (2.15)

by (2.12) and the strong law of large numbers (LLN). Regarding $J[T_{n-1},T_n]$, notice that $t\mapsto J[0,t]$ is a Poisson point process of intensity $1$, so it admits a strong LLN (see e.g., Theorem 2.4.7 of Durrett [21]):

\lim_{t\to\infty}\frac{J[0,t]}{t}=1,\qquad{\mathbb{P}}\text{-a.s.}

On the other hand, using the LLN again, we have that $T_n/T_{n-1}\to 1$ ${\mathbb{P}}$-a.s. Hence, we conclude that

\lim_{n\to\infty}\frac{J[T_{n-1},T_{n}]}{T_{n-1}}=\lim_{n\to\infty}\left[\frac{J[0,T_{n}]}{T_{n}}\cdot\frac{T_{n}}{T_{n-1}}-\frac{J[0,T_{n-1}]}{T_{n-1}}\right]=0,\qquad{\mathbb{P}}\text{-a.s.} \qquad (2.16)

Applying (2.15) and (2.16) to (2.14) yields that

v_{p}(\mu)=\lim_{t\to\infty}\frac{\mathrm{dist}(X_{0},\,X_{t})}{t}=\frac{v^{\prime}}{{\mathbb{E}}[T_{1}]},\qquad{\mathbb{P}}\text{-a.s.}

For a general initial configuration $\omega$, consider the event $A:=\{\lim_{t\to\infty}\mathrm{dist}(X_0,X_t)/t=v_p(\mu)\}$. By Proposition 2.1, we have that

{\mathbb{P}}_{\omega}(A)={\mathbb{P}}(A)=1,

which implies the ${\mathbb{P}}_{\omega}$-a.s. convergence. Since $v_p(\mu)$ is defined under ${\mathbb{P}}$, it does not depend on the choice of $\omega$.

Step 3: $L^1$-convergence. Since $\mathrm{dist}(X_0,X_t)\leq J[0,t]$ and $J[0,t]$ follows a $\operatorname{Poisson}(t)$ distribution, for every initial configuration $\omega$, we have that

{\mathbb{E}}_{\omega}\left[\left(\frac{\mathrm{dist}(X_{0},\,X_{t})}{t}\right)^{2}\right]\leq{\mathbb{E}}_{\omega}\left[\left(\frac{J[0,t]}{t}\right)^{2}\right]=\frac{t^{2}+t}{t^{2}}\leq 2,\qquad\forall t\geq 1. \qquad (2.17)

Thus, the ratios $(\mathrm{dist}(X_0,X_t)/t)_{t\geq 1}$ are uniformly integrable under ${\mathbb{P}}_{\omega}$. Together with the ${\mathbb{P}}_{\omega}$-a.s. convergence, this justifies the $L^1({\mathbb{P}}_{\omega})$ convergence. ∎

Remark 2.4.

We can establish the positivity of the speed $v_p(\mu)$ for discrete nonamenable groups: Kaimanovich and Vershik proved in [35, Theorem 5] that the spectral radius of the random walk $(X_{T_k})_{k\in{\mathbb{N}}}$ is strictly less than $1$. Proposition 14.6 in [42] (which is due to Avez's work in 1976 [7]) converts this bound on the spectral radius into a lower bound on the entropy, which in turn gives a lower bound on the speed of the random walk by [42, Theorem 14.9].

2.4. Environment seen by the particle on transitive unimodular graphs

In this subsection, we study the process $(X_t,\eta_t)_{t\geq 0}$ from the viewpoint of the particle. Using the reversibility of the process $(X_t,\eta_t)_{t\geq 0}$, we extend it to a two-sided process $(X_t,\eta_t)_{t\in\mathbb{R}}$ by first running an independent copy $(X^{\prime}_t,\eta^{\prime}_t)_{t\geq 0}$ and then defining

(X_{t},\eta_{t}):=(X^{\prime}_{-t},\eta^{\prime}_{-t}),\qquad\forall t<0. \qquad (2.18)

The random variables in (2.3) can also be extended to $(\{\xi_k,{\mathbf{e}}_k\}_{k\in{\mathbb{Z}}},\{\chi^e_j\}_{j\in{\mathbb{Z}},e\in E})$ for the two-sided process $(X_t,\eta_t)_{t\in{\mathbb{R}}}$. Specifically, for $k<0$, $\xi_k$ and ${\mathbf{e}}_k$ record the moments and the corresponding edges of the attempted crossings before time $0$, while $\chi^e_k$ records the moments at which the edge $e$ refreshes before time $0$.

To ensure the stationarity of the environment seen by the particle, besides transitivity, we also need to assume that the graph $G$ is unimodular. We now introduce some necessary definitions and notations (for more details, refer to [42, Chapter 8.2]). Let $G=(V,E)$ be a transitive graph and $\operatorname{Aut}(G)$ be its group of automorphisms. Then, $G$ is unimodular if the left Haar measure on $\operatorname{Aut}(G)$ coincides with the right Haar measure. Denote this Haar measure by $H$. Let $\Gamma_{x,y}$ be the set of automorphisms that map $x$ to $y$:

\Gamma_{x,y}:=\{\gamma\in\operatorname{Aut}(G):\gamma x=y\}. \qquad (2.19)

Let $H_{o,x}$ represent the normalized probability measure obtained by restricting $H$ to $\Gamma_{o,x}$.

In this subsection, we always assume that $G=(V,E)$ is a locally finite, transitive, and unimodular graph. We define the environment seen from the particle $(\eta^{*}_t)_{t\in{\mathbb{R}}}$ as follows: the process $(\eta^{*}_t)_{t\in{\mathbb{R}}}$ takes values in $\{0,1\}^E$ and the particle is always located at the origin $o$. Each edge refreshes independently at rate $\mu$. At an exponential clock of rate $1$, the particle attempts to jump to a neighbor $x$ of the origin $o$, selected uniformly at random, and the environment is shifted accordingly. More specifically, for the $k$-th attempt to jump at time $\xi_k$, we sample an automorphism $\gamma$ according to the normalized Haar measure $H_{o,x}$. We then define the shift of the environment at $\xi_k$ as follows:

\eta^{*}_{\xi_{k}}=\begin{cases}\eta^{*}_{\xi_{k}-}\circ\gamma,&\text{ if }\eta^{*}_{\xi_{k}-}(\{o,x\})=1,\\ \eta^{*}_{\xi_{k}-},&\text{ if }\eta^{*}_{\xi_{k}-}(\{o,x\})=0,\end{cases} \qquad (2.22)

where $(\eta^{*}_{\xi_k-}\circ\gamma)(e):=\eta^{*}_{\xi_k-}(\gamma e)$ for every $e\in E$. Here, in the first case, when the edge $\{o,x\}$ is open, the particle jumps and the corresponding automorphism in $\operatorname{Aut}(G)$ is applied. The automorphism $\gamma$ is chosen at random according to $H_{o,x}$ since there can be multiple automorphisms in $\Gamma_{o,x}$. In the second case, when the edge $\{o,x\}$ is closed, the environment remains unchanged and the particle does not move.

We continue to use ${\mathbb{P}}$ to denote the probability measure for the extended two-sided process when the initial bond configuration $\eta_0$ is sampled according to $\pi_p$. The next result shows that the process $(\eta^{*}_t)_{t\in\mathbb{R}}$ seen from the particle forms a stationary and ergodic environment.

Lemma 2.5.

Given a transitive unimodular graph $G=(V,E)$ where each vertex has degree $d\geq 3$, the environment $(\eta^{*}_t)_{t\in\mathbb{R}}$ seen from the particle is stationary and ergodic under ${\mathbb{P}}$.

Proof.

First, we prove the stationarity. Let $\mathcal{L}$ denote the generator of the dynamics $(\eta^{*}_t)_{t\in\mathbb{R}}$: for any bounded measurable function $f:\{0,1\}^E\to\mathbb{R}$,

\mathcal{L}f(\eta^{*}):=\mathcal{L}_{R}f(\eta^{*})+\mathcal{L}_{M}f(\eta^{*}),

where $\mathcal{L}_R$ and $\mathcal{L}_M$ are defined as

\mathcal{L}_{R}f(\eta^{*}):=\sum_{e\in E}\left[\mu p\left(f(\eta^{*,e,1})-f(\eta^{*})\right)+\mu(1-p)\left(f(\eta^{*,e,0})-f(\eta^{*})\right)\right],

\mathcal{L}_{M}f(\eta^{*}):=\sum_{x\in V_{o}}\frac{1}{|V_{o}|}\int_{\Gamma_{o,x}}\left[f(\eta^{*}\circ\gamma)-f(\eta^{*})\right]\mathbf{1}_{\{\eta^{*}(\{o,x\})=1\}}\,\mathrm{d}H_{o,x}(\gamma).

Here, $\eta^{*,e,1}$ (resp. $\eta^{*,e,0}$) denotes the bond configuration obtained by opening (resp. closing) the edge $e$, and $V_o$ represents the set of neighboring vertices of $o$. We have decomposed the generator into two parts $\mathcal{L}_R$ and $\mathcal{L}_M$, which correspond to the edge refreshing and the movement of the particle, respectively. To establish stationarity, it suffices to prove that

{\mathbb{E}}[\mathcal{L}f]={\mathbb{E}}[\mathcal{L}_{R}f]+{\mathbb{E}}[\mathcal{L}_{M}f]=0.

The identity ${\mathbb{E}}[\mathcal{L}_R f]=0$ follows from the fact that $\pi_p$ is an invariant measure for the refreshing generator $\mathcal{L}_R$. On the other hand, we have ${\mathbb{E}}[\mathcal{L}_M f]=0$ by [43, Lemma 3.13], where Lyons and Schramm proved that the environment seen by the particle on static percolation is stationary when the graph is transitive and unimodular.

Next, we prove the ergodicity through the following strong mixing condition: for any $T,R\in(0,\infty)$ and cylinder events $A,B\in\sigma(\{0,1\}^{B(o,R)}\times[0,T])$,

\lim_{s\to\infty}{\mathbb{P}}\left(\{\eta^{*}\in A\}\cap\{T_{s}\eta^{*}\in B\}\right)={\mathbb{P}}\left(\eta^{*}\in A\right){\mathbb{P}}\left(\eta^{*}\in B\right). \qquad (2.23)

Here, the notation $\eta^{*}$ represents the whole process $(\eta^{*}_t)_{t\in\mathbb{R}}$ and $T_s$ is the time-shift operator defined by $(T_s\eta^{*})_t:=\eta^{*}_{s+t}$. To prove (2.23), let $\epsilon>0$ be a constant that will be chosen later and choose an arbitrary $s>T$. Then, we introduce the events

E_{1}:=\{\text{the number of attempted jumps during }[0,T]\text{ is less than }T+s^{\epsilon}\},

E_{2}:=\{\text{all the edges in }B(o,R+T+s^{\epsilon})\text{ refresh at least once during }[T,s]\}.

Clearly, we have that

\lim_{s\to\infty}{\mathbb{P}}(E_{1}^{c})=0, \qquad (2.24)

since the number of attempted jumps during $[0,T]$ is finite almost surely. On the other hand, using a simple union bound, we obtain that

\limsup_{s\to\infty}{\mathbb{P}}(E_{2}^{c})\leq\lim_{s\to\infty}d^{R+T+s^{\epsilon}}e^{-\mu(s-T)}=0, \qquad (2.25)

as long as we choose $\epsilon\in(0,1)$ sufficiently small depending on $d$ and $\mu$.

Denoting $E_0:=\{\eta^{*}\in A\}\cap\{T_s\eta^{*}\in B\}$, we can rewrite the left-hand side (LHS) of (2.23) using (2.24) and (2.25) as

\lim_{s\to\infty}{\mathbb{P}}\left(E_{0}\right)=\lim_{s\to\infty}{\mathbb{P}}\left(E_{0}\cap E_{1}\cap E_{2}\right)+\lim_{s\to\infty}{\mathbb{P}}\left(E_{0}\cap E_{1}\cap E_{2}^{c}\right)+\lim_{s\to\infty}{\mathbb{P}}\left(E_{0}\cap E_{1}^{c}\right)=\lim_{s\to\infty}{\mathbb{P}}\left(E_{0}\cap E_{1}\cap E_{2}\right). \qquad (2.26)

We decompose the event $E_0\cap E_1\cap E_2$ as

{\mathbb{P}}\left(E_{0}\cap E_{1}\cap E_{2}\right)={\mathbb{P}}\left(\{T_{s}\eta^{*}\in B\}\,\big|\,\{\eta^{*}\in A\}\cap E_{1}\cap E_{2}\right)\,{\mathbb{P}}\left(\{\eta^{*}\in A\}\cap E_{1}\cap E_{2}\right). \qquad (2.27)

Using (2.24) and (2.25) again, the second factor on the right-hand side (RHS) satisfies

\lim_{s\to\infty}{\mathbb{P}}\left(\{\eta^{*}\in A\}\cap E_{1}\cap E_{2}\right)={\mathbb{P}}\left(\eta^{*}\in A\right). \qquad (2.28)

For the first factor on the RHS of (2.27), the Markov property implies that

{\mathbb{P}}\left(\{T_{s}\eta^{*}\in B\}\,\big|\,\{\eta^{*}\in A\}\cap E_{1}\cap E_{2}\right)={\mathbb{P}}\left(T_{s}\eta^{*}\in B\right)={\mathbb{P}}\left(\eta^{*}\in B\right). \qquad (2.29)

Here, the first equality holds because every visited edge is refreshed under $E_1\cap E_2$, and the second equality is due to stationarity. Combining (2.26)–(2.29) concludes (2.23). ∎

Remark 2.6.

The ergodicity of the environment seen from the particle will not be used in the rest of the paper. However, it may be of interest for future studies related to random walks on dynamical percolation.

3. Speed for the critical case

In this section, we prove Theorem 1.4. A key ingredient is the following mean-field behavior exhibited by supercritical percolation on $G$ with $p$ slightly greater than $p_c$.

Lemma 3.1.

Given an infinite graph $G$ satisfying the one-arm estimate (1.6), there exist constants $\delta=\delta(G)>0$ and $C>0$ such that for every vertex $v\in V$ and $p\in[p_c,p_c+\delta/r]$,

{\bf P}_{p}\left(\mathsf{rad}_{\operatorname{ext}}(\mathcal{C}_{v})\geq r\right)\leq{\bf P}_{p}\left(\mathsf{rad}_{\operatorname{int}}(\mathcal{C}_{v})\geq r\right)\leq\frac{C}{r},\qquad\forall r>0. \qquad (3.1)
Proof.

The first inequality follows immediately from the fact that $\mathsf{rad}_{\operatorname{ext}}(\mathcal{C}_v)\leq\mathsf{rad}_{\operatorname{int}}(\mathcal{C}_v)$. For the second inequality, we introduce the shorthand notation $D_r:=\{\mathsf{rad}_{\operatorname{int}}(\mathcal{C}_v)\geq r\}$. Notice that the percolation cluster containing $v$ at $p_c$ can be obtained in two steps: we first construct a Bernoulli-$p$ percolation, and then perform a Bernoulli-$(p_c/p)$ percolation on its open edges. This gives us the inequality

{\bf P}_{p_{c}}(D_{r})\geq\Big(\frac{p_{c}}{p}\Big)^{\!r}\,{\bf P}_{p}(D_{r})\geq\left(1+\frac{\delta}{p_{c}r}\right)^{-r}{\bf P}_{p}(D_{r})\geq e^{-\delta/p_{c}}\,{\bf P}_{p}(D_{r}),

where the factor $(p_c/p)^r$ represents the probability that all the $r$ edges in a path realizing $\{\mathsf{rad}_{\operatorname{int}}(\mathcal{C}_v)\geq r\}$ remain open after the second step, and the second inequality is due to the condition $p\in[p_c,p_c+\delta/r]$. Now, applying the one-arm estimate (1.6) to ${\bf P}_{p_c}(D_r)$, we obtain that

{\bf P}_{p}(D_{r})\leq e^{\delta/p_{c}}\,{\bf P}_{p_{c}}(D_{r})\leq\frac{C}{r},

which completes the proof of the second inequality in (3.1). ∎

Proof of Theorem 1.4.

We study the displacement ${\mathbb{E}}[\mathrm{dist}(X_0,X_t)]$, where $t=t(\mu)$ will be chosen later. Denote by $\widetilde{\mathcal{C}}$ the subgraph composed of all the bonds that are open at least once during $[0,t]$. Under the stationary measure ${\mathbb{P}}$, the subgraph $\widetilde{\mathcal{C}}$ is a percolation on $G$ in which each bond is open with probability

p:=p_{c}+(1-p_{c})(1-e^{-\mu tp_{c}})\leq p_{c}(1+\mu t).

(Indeed, a bond closed at time $0$ becomes open during $[0,t]$ exactly when some refresh lands on the open state, which happens at rate $\mu p_c$ and hence with probability $1-e^{-\mu tp_c}$.)

Let $\widetilde{\mathcal{C}}_o$ denote the cluster containing the root $o$ in $\widetilde{\mathcal{C}}$, and let $\delta$ be the constant in Lemma 3.1. Under the condition

0\leq p_{c}\mu t\leq\delta/r, \qquad (3.2)

we apply Lemma 3.1 to obtain that

{\mathbb{P}}\left(\mathsf{rad}_{\operatorname{ext}}(\widetilde{\mathcal{C}}_{o})\geq r\right)\leq\frac{C}{r}. \qquad (3.3)

Next, we analyze the displacement using $\widetilde{\mathcal{C}}$ and decompose it as follows:

{\mathbb{E}}[\mathrm{dist}(X_{0},X_{t})]\leq\sum_{k=1}^{K}{\mathbb{E}}\left[\mathrm{dist}(X_{0},X_{t})\,\mathbf{1}_{\{2^{k-1}\leq\mathsf{rad}_{\operatorname{ext}}(\widetilde{\mathcal{C}}_{o})\leq 2^{k}\}}\right]+{\mathbb{E}}\left[\mathrm{dist}(X_{0},X_{t})\,\mathbf{1}_{\{\mathsf{rad}_{\operatorname{ext}}(\widetilde{\mathcal{C}}_{o})\geq 2^{K}\}}\right], \qquad (3.4)

where $K$ is a threshold to be determined later. For the scale $2^k$, since the random walk during $[0,t]$ must stay within $\widetilde{\mathcal{C}}_o$, we have that

{\mathbb{E}}\left[\mathrm{dist}(X_{0},X_{t})\,\mathbf{1}_{\{2^{k-1}\leq\mathsf{rad}_{\operatorname{ext}}(\widetilde{\mathcal{C}}_{o})\leq 2^{k}\}}\right]\leq 2^{k}\,{\mathbb{P}}\left(\mathsf{rad}_{\operatorname{ext}}(\widetilde{\mathcal{C}}_{o})\geq 2^{k-1}\right)\leq 2^{k}\cdot\frac{C}{2^{k-1}}=2C,

where we used the estimate (3.3) in the second step, assuming that (3.2) holds. For the scales beyond the threshold $2^K$, we use the trivial bound ${\mathbb{E}}[\mathrm{dist}(X_0,X_t)\,|\,\widetilde{\mathcal{C}}_o]\leq t$ to obtain that

{\mathbb{E}}\left[\mathrm{dist}(X_{0},X_{t})\,\mathbf{1}_{\{\mathsf{rad}_{\operatorname{ext}}(\widetilde{\mathcal{C}}_{o})\geq 2^{K}\}}\right]={\mathbb{E}}\left[\mathrm{dist}(X_{0},X_{t})\,\big|\,\mathsf{rad}_{\operatorname{ext}}(\widetilde{\mathcal{C}}_{o})\geq 2^{K}\right]{\mathbb{P}}\left(\mathsf{rad}_{\operatorname{ext}}(\widetilde{\mathcal{C}}_{o})\geq 2^{K}\right)\leq\frac{Ct}{2^{K}}.

Once again, we applied the estimate (3.3), provided that the condition (3.2) holds for $r=2^K$. We choose $K$ such that

\frac{\delta}{2^{K+1}}<p_{c}\mu t\leq\frac{\delta}{2^{K}}.

By choosing this $K$, we ensure that the condition $p_c\mu t\leq\delta/2^{k}$ is satisfied in all the previous steps for $1\leq k\leq K$. Plugging the above estimates into (3.4), we obtain that

\frac{1}{t}{\mathbb{E}}[\mathrm{dist}(X_{0},X_{t})]\leq\frac{2C}{t}\left(K+\frac{t}{2^{K+1}}\right)\leq 2C\left(\frac{-\log_{2}(p_{c}\mu t/\delta)}{t}+\frac{p_{c}\mu t}{\delta}\right).

To minimize the RHS, we set $t=t(\mu):=\sqrt{(1/\mu)\log(1/\mu)}$ and obtain that

\frac{1}{t}{\mathbb{E}}[\mathrm{dist}(X_{0},X_{t})]\leq C\sqrt{\mu\log(1/\mu)}. \qquad (3.5)
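To see how this choice of $t$ balances the two terms in the previous display (a routine verification that we add here; it is not written out in the paper), note that with $t=\sqrt{(1/\mu)\log(1/\mu)}$ and $\mu\leq 1/e$,

\frac{p_{c}\mu t}{\delta}=\frac{p_{c}}{\delta}\sqrt{\mu\log(1/\mu)},\qquad\frac{-\log_{2}(p_{c}\mu t/\delta)}{t}\lesssim\frac{\log(1/\mu)}{\sqrt{(1/\mu)\log(1/\mu)}}=\sqrt{\mu\log(1/\mu)},

so both terms are $O(\sqrt{\mu\log(1/\mu)})$.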

Finally, we use the definition (1.2) and the triangle inequality to obtain that

v_{p_{c}}(\mu)=\lim_{T\to\infty}\frac{{\mathbb{E}}[\mathrm{dist}(X_{0},\,X_{T})]}{T}\leq\lim_{T\to\infty}\frac{t}{T}\sum_{n=0}^{\lfloor T/t\rfloor}\frac{1}{t}{\mathbb{E}}[\mathrm{dist}(X_{nt},X_{(n+1)t})]\leq C\sqrt{\mu\log(1/\mu)}.

Here, in the second step, we applied the estimate (3.5) to every segment $[nt,(n+1)t]$, using stationarity. This concludes (1.7). ∎

4. Speed for the supercritical case

In this section, we provide the proof of Theorem 1.2. The upper bound in (1.5) is trivial. When $\mu\geq 1$, the lower bound follows from Proposition 1.5. When $\mu$ is much smaller than $1$, we get a substantial improvement over (1.9). The proof of the lower bound in this case relies on the Diaconis-Fill coupling [18] between the random walk and the evolving set process (which we review in Section 4.1). Assuming a key estimate in Lemma 4.2, we complete the proof of Theorem 1.2 in Section 4.1. For clarity, we first present the proof of Lemma 4.2 in Section 4.2 for the special case where $G$ is an infinite regular tree. This proof is simpler and already contains all the key ideas. Subsequently, in Section 4.3, we explain the necessary modifications to extend the proof to general nonamenable transitive unimodular graphs.

4.1. The evolving set process and Diaconis-Fill coupling

The whole evolution of the environment is denoted by $\bm{\eta}=(\eta_t:t\geq 0)$. We discretize time by observing the random walk at nonnegative integer times. We consider the time-inhomogeneous Markov chain with transition probabilities given by

P_{n+1}^{\bm{\eta}}(x,y)=\mathbb{P}^{\bm{\eta}}\left(X_{n+1}=y\;|\;X_{n}=x\right),\qquad\forall x,y\in V,\ n\in\mathbb{N}\cup\{0\}. \qquad (4.1)

Note that $\pi(x)\equiv 1$, $x\in V$, is a stationary measure for each $P_n^{\bm{\eta}}$. Moreover, since the random walk moves at rate 1, we have

P_{n}^{\bm{\eta}}(x,x)\geq e^{-1},\qquad\forall x\in V,\ n\in\mathbb{N}.

The evolving set process is a Markov chain taking values in the collection of subsets of $V$. Its transition is defined as follows: given the current state $S_n=S\subset V$, we pick a random variable $U$ uniformly distributed in $[0,1]$, and the next state of the chain is the set

S_{n+1}:=\left\{y\in V:\sum_{x\in S}P_{n+1}^{\bm{\eta}}(x,y)\geq U\right\}.

Note that the evolving set process has two absorbing states: $\emptyset$ and $V$. Denote by $K_P$ the transition probability for the evolving set process $(S_n:n\in\mathbb{N}\cup\{0\})$ when the transition matrix for the Markov chain is $P\in\{P_n^{\bm{\eta}}:n\in\mathbb{N}\}$. Doob's transform of the evolving set process conditioned to stay nonempty is defined by the transition kernel

\widehat{K}_{P}(A,B)=\frac{\pi(B)}{\pi(A)}K_{P}(A,B).

For more discussion about evolving sets, we refer to [45] by Morris and Peres, [42, Section 6.7], or [39, Section 17.4].
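As an illustration of the evolving set update described above (our own sketch, not part of the paper; the finite state space, the function name, and the toy kernel are placeholder choices), one step with a given transition matrix can be written as follows. Since $\pi\equiv 1$ here, the update reduces to thresholding the column sums over the current set.

import random

def evolving_set_step(P, S, rng=random.Random(0)):
    """One transition of the evolving set process for a transition matrix P.

    P : dict of dicts, P[x][y] = transition probability from x to y
    S : current set of states (a Python set)
    Returns the next set {y : sum_{x in S} P[x][y] >= U} for a uniform U.
    """
    U = rng.random()                    # one uniform threshold shared by all y
    mass = {}
    for x in S:
        for y, pxy in P[x].items():
            mass[y] = mass.get(y, 0.0) + pxy
    return {y for y, m in mass.items() if m >= U}

# Toy usage: a lazy symmetric walk on a triangle (placeholder kernel).
P = {
    0: {0: 0.5, 1: 0.25, 2: 0.25},
    1: {0: 0.25, 1: 0.5, 2: 0.25},
    2: {0: 0.25, 1: 0.25, 2: 0.5},
}
S = {1}
for _ in range(5):
    S = evolving_set_step(P, S)
    print(sorted(S))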

Now, the Diaconis-Fill coupling $\widehat{\mathbb{P}}^{\bm{\eta}}$ is a coupling between the Markov chain $X_n$ and Doob's transform of the evolving set process, defined as follows. Let $\mathrm{DF}=\{(x,A):x\in A,\,A\subset V\}$. We define the Diaconis-Fill transition kernel on $\mathrm{DF}$ as

\widehat{P}^{\bm{\eta}}_{n+1}((x,A),(y,B)):=\frac{P_{n+1}^{\bm{\eta}}(x,y)\,K_{P_{n+1}^{\bm{\eta}}}(A,B)}{\sum_{z\in A}P_{n+1}^{\bm{\eta}}(z,y)},\qquad\forall(x,A),(y,B)\in\mathrm{DF}.

Let $((X_n,S_n):n\in\mathbb{N}\cup\{0\})$ be the Markov chain with initial state $(x,\{x\})\in\mathrm{DF}$ and transition kernel $\widehat{P}^{\bm{\eta}}_{n+1}$ from time $n$ to $n+1$. Then, the following properties hold (see Theorem 17.23 of [39] for a proof):

  • The chain $(X_n:n\in\mathbb{N}\cup\{0\})$ has transition kernels $\{P_{n+1}:n\in\mathbb{N}\cup\{0\}\}$.

  • The chain $(S_n:n\in\mathbb{N}\cup\{0\})$ has transition kernels $\{\widehat{K}_{P_{n+1}}:n\in\mathbb{N}\cup\{0\}\}$.

  • For any $y\in S_n$, we have

    \widehat{\mathbb{P}}^{\bm{\eta}}(X_{n}=y\;|\;S_{0},S_{1},\ldots,S_{n})=|S_{n}|^{-1}. \qquad (4.2)

Throughout the proof, we will write $\widehat{\mathbb{P}}^{\bm{\eta}}$ for the probability measure arising from the Diaconis-Fill coupling with initial state $(o,\{o\})$ when the entire evolution of the environment is given by $\bm{\eta}$. Then, we will use $\widehat{\mathbb{P}}$ to denote the annealed probability measure with respect to $\bm{\eta}$ when the initial bond configuration is distributed according to $\pi_p$. We will use $\widehat{\mathbb{E}}^{\bm{\eta}}$ and $\widehat{\mathbb{E}}$ to denote the corresponding expectations.

For every subgraph GG^{\prime} of GG and SVS\subset V, we denote GS\partial_{G^{\prime}}S as the edge boundary of SS in GG^{\prime}, i.e., the set of edges in E(G)E(G^{\prime}) that have one endpoint in SS and the other endpoint in VSV\setminus S. We will also view ηt\eta_{t} as a subgraph of GG with vertex set VV. We now prove a crucial property related to evolving sets. This property holds for random walks on dynamical percolation of general graphs.

Lemma 4.1.

Let (Xt,ηt)t0(X_{t},\eta_{t})_{t\geq 0} be a random walk on the dynamical percolation of an arbitrary infinite graph GG, where every vertex has a degree bounded by dd. Then, the following estimate holds:

𝔼^𝜼[|Sn+1|1/2|Sn]exp(ΦSn2/6)|Sn|1/2,n{0}.\widehat{\mathbb{E}}^{\bm{\eta}}\left[\left.|S_{n+1}|^{-1/2}\;\right|\;S_{n}\right]\leq\exp\left(-{\Phi_{S_{n}}^{2}}/{6}\right)|S_{n}|^{-1/2},\quad\forall n\in\mathbb{N}\cup\{0\}. (4.3)

Here, ΦSnΦSn𝛈\Phi_{S_{n}}\equiv\Phi^{\bm{\eta}}_{S_{n}} is defined as

ΦSn\displaystyle\Phi_{S_{n}} :=1|Sn|xSnySnc^𝜼(Xn+1=y|Xn=x),\displaystyle:=\frac{1}{\left|S_{n}\right|}\sum_{x\in S_{n}}\sum_{y\in S_{n}^{c}}\widehat{\mathbb{P}}^{\bm{\eta}}\left(X_{n+1}=y|X_{n}=x\right), (4.4)

and it satisfies

ΦSn1de|Sn|nn+1|ηtSn|dt.\displaystyle\Phi_{S_{n}}\geq\frac{1}{de\left|S_{n}\right|}\int_{n}^{n+1}\left|\partial_{\eta_{t}}S_{n}\right|\,\mathrm{d}t. (4.5)
Proof.

A similar result for random walk on finite graphs has been established in Lemma 2.3 of [47]. Our proof follows a similar approach. By equation (29) of [45], we have

𝔼^𝜼[|Sn+1|1/2|Sn|1/2|Sn]\displaystyle\widehat{\mathbb{E}}^{\bm{\eta}}\left[\frac{\left|S_{n+1}\right|^{-1/2}}{\left|S_{n}\right|^{-1/2}}\bigg{|}S_{n}\right] =𝔼^Pn+1𝜼[|Sn+1|1/2|Sn|1/2|Sn]\displaystyle=\widehat{\mathbb{E}}^{\bm{\eta}}_{P_{n+1}}\left[\frac{\left|S_{n+1}\right|^{-1/2}}{\left|S_{n}\right|^{-1/2}}\bigg{|}S_{n}\right]
=𝔼Pn+1𝜼[|Sn+1||Sn||Sn+1|1/2|Sn|1/2|Sn]\displaystyle=\mathbb{E}^{\bm{\eta}}_{P_{n+1}}\left[\frac{\left|S_{n+1}\right|}{\left|S_{n}\right|}\frac{\left|S_{n+1}\right|^{-1/2}}{\left|S_{n}\right|^{-1/2}}\bigg{|}S_{n}\right]
=𝔼Pn+1𝜼[|Sn+1|1/2|Sn|1/2|Sn],\displaystyle=\mathbb{E}^{\bm{\eta}}_{P_{n+1}}\left[\frac{\left|S_{n+1}\right|^{1/2}}{\left|S_{n}\right|^{1/2}}\bigg{|}S_{n}\right],

where 𝔼^P𝜼\widehat{\mathbb{E}}^{\bm{\eta}}_{P} (resp. 𝔼P𝜼\mathbb{E}^{\bm{\eta}}_{P}) denotes the expectation when {Sn}\{S_{n}\} has transition kernel K^P\widehat{K}_{P} (resp. KPK_{P}), and the first equality is due to the definition of 𝔼^𝜼\widehat{\mathbb{E}}^{\bm{\eta}}. By Lemma 3 of [45] (with γ=e1\gamma=e^{-1}) and using the definition (4.4), we obtain that

𝔼Pn+1𝜼[|Sn+1|1/2|Sn|1/2|Sn]1e22(1e1)2ΦSn21ΦSn26exp[ΦSn26].\mathbb{E}^{\bm{\eta}}_{P_{n+1}}\left[\frac{\left|S_{n+1}\right|^{1/2}}{\left|S_{n}\right|^{1/2}}\bigg{|}S_{n}\right]\leq 1-\frac{e^{-2}}{2\left(1-e^{-1}\right)^{2}}\Phi_{S_{n}}^{2}\leq 1-\frac{\Phi_{S_{n}}^{2}}{6}\leq\exp\left[-\frac{\Phi_{S_{n}}^{2}}{6}\right].

This concludes (4.3).

For the bound (4.5), recall that

ηt(x,y)={1, if {x,y} is open in ηt,0, if {x,y} is closed in ηt.\eta_{t}(x,y)=\begin{cases}1,&\text{ if }\{x,y\}\text{ is open in }\eta_{t},\\ 0,&\text{ if }\{x,y\}\text{ is closed in }\eta_{t}.\end{cases}

For neighboring vertices xx and yy, by considering the event that the random walk clock rings exactly once during [n,n+1][n,n+1], that this unique ring falls in {t[n,n+1]:ηt(x,y)=1}\{t\in[n,n+1]:\eta_{t}(x,y)=1\}, and that the proposed jump is to yy, we have that

^𝜼(Xn+1=y|Xn=x)1denn+1ηt(x,y)dt.\widehat{\mathbb{P}}^{\bm{\eta}}\left(X_{n+1}=y\;|\;X_{n}=x\right)\geq\frac{1}{de}\int_{n}^{n+1}\eta_{t}(x,y)\mathrm{d}t.

Therefore, we obtain

xSnySnc^𝜼(Xn+1=y|Xn=x)1dexSnySncnn+1ηt(x,y)dt.\sum_{x\in S_{n}}\sum_{y\in S_{n}^{c}}\widehat{\mathbb{P}}^{\bm{\eta}}\left(X_{n+1}=y\;|\;X_{n}=x\right)\geq\frac{1}{de}\sum_{x\in S_{n}}\sum_{y\in S_{n}^{c}}\int_{n}^{n+1}\eta_{t}(x,y)\mathrm{d}t.

Then, using Fubini’s theorem and the fact that

|ηtSn|=xSnySncηt(x,y),\left|\partial_{\eta_{t}}S_{n}\right|=\sum_{x\in S_{n}}\sum_{y\in S_{n}^{c}}\eta_{t}(x,y),

we conclude (4.5). ∎
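To illustrate the quantity nn+1ηt(x,y)dt\int_{n}^{n+1}\eta_{t}(x,y)\,\mathrm{d}t appearing in (4.5), here is a small Python sketch (ours, purely illustrative) that simulates the status of a single edge of dynamical percolation over a unit time interval, started from stationarity, and returns its total open time.

```python
import random

def open_time_one_edge(p, mu, rng=random):
    """Simulate one edge over the time interval [0, 1]: the edge starts open
    with probability p, refreshes at the arrival times of a rate-mu Poisson
    process, and each refresh makes it open with probability p. Returns the
    Lebesgue measure of {t in [0, 1] : the edge is open}."""
    t, is_open, open_time = 0.0, rng.random() < p, 0.0
    while t < 1.0:
        gap = rng.expovariate(mu)   # waiting time until the next refresh
        nxt = min(t + gap, 1.0)
        if is_open:
            open_time += nxt - t
        t = nxt
        if t < 1.0:                 # a refresh happened before time 1
            is_open = rng.random() < p
    return open_time

print(open_time_one_edge(p=0.6, mu=0.1))
```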

We define a sequence of random variables

M0:=1,andMn:=|Sn|1/2exp(k=0n1ΦSk26)n.M_{0}:=1,\quad\text{and}\quad M_{n}:=|S_{n}|^{-1/2}\exp\left(\sum_{k=0}^{n-1}\frac{\Phi_{S_{k}}^{2}}{6}\right)\quad\forall n\in\mathbb{N}. (4.6)

Let n\mathscr{F}_{n} be the σ\sigma-algebra generated by the evolving sets up to time nn. By Lemma 4.1, we have

𝔼^𝜼[Mn+1|n]Mn,\widehat{\mathbb{E}}^{\bm{\eta}}[M_{n+1}|\mathscr{F}_{n}]\leq M_{n},

which implies that {Mn:n{0}}\{M_{n}:n\in\mathbb{N}\cup\{0\}\} is a supermartingale with respect to {n}\{\mathscr{F}_{n}\}. Thus, we have 𝔼^𝜼[Mn]𝔼^𝜼[M0]=1\widehat{\mathbb{E}}^{\bm{\eta}}[M_{n}]\leq\widehat{\mathbb{E}}^{\bm{\eta}}[M_{0}]=1. Applying Markov’s inequality yields that for any ε>0\varepsilon>0,

^𝜼(Mn2/ε)ε2𝔼^𝜼[Mn]ε2.\widehat{\mathbb{P}}^{\bm{\eta}}(M_{n}\geq 2/\varepsilon)\leq\frac{\varepsilon}{2}\widehat{\mathbb{E}}^{\bm{\eta}}[M_{n}]\leq\frac{\varepsilon}{2}.

In the special case where the initial configuration is πp\pi_{p}, it also gives that

^(Mn2/ε)ε/2.\widehat{\mathbb{P}}(M_{n}\geq 2/\varepsilon)\leq\varepsilon/2. (4.7)
Lemma 4.2.

Under the setting of Theorem 1.2, suppose η0\eta_{0} has the stationary distribution πp\pi_{p}. Then, there exist constants c0,c1>0c_{0},c_{1}>0 such that for any n{0}n\in\mathbb{N}\cup\{0\},

^(nn+1|ηtSn|dtc1|Sn|)c0.\widehat{\mathbb{P}}\left(\int_{n}^{n+1}\left|\partial_{\eta_{t}}S_{n}\right|\mathrm{d}t\geq c_{1}|S_{n}|\right)\geq c_{0}. (4.8)

This lemma will be proved as Lemma 4.5 in the tree case, and at the end of Section 4.3 in the general case. Assuming Lemma 4.2, we first show in Lemma 4.3 that the volume of SnS_{n} grows exponentially fast, which is then used to complete the proof of Theorem 1.2.

Lemma 4.3.

Using the same notations as in Lemma 4.2, we have

^(|Sn|>(c04)2exp[c0c126e2d2n])c04,n.\widehat{\mathbb{P}}\left(|S_{n}|>\left(\frac{c_{0}}{4}\right)^{2}\exp\left[\frac{c_{0}c_{1}^{2}}{6e^{2}d^{2}}n\right]\right)\geq\frac{c_{0}}{4},\quad\forall n\in\mathbb{N}. (4.9)
Proof.

We define the indicator functions

Ik:=𝟏(ΦSkc1/(ed)).I_{k}:=\mathbf{1}(\Phi_{S_{k}}\geq c_{1}/(ed)).

Then, by (4.5) and (4.8), we have that

𝔼^[Ik]\displaystyle\widehat{\mathbb{E}}[I_{k}] =^(ΦSkc1/(ed))^(kk+1|ηtSk|dtc1|Sk|)c0,k{0}.\displaystyle=\widehat{\mathbb{P}}\left(\Phi_{S_{k}}\geq c_{1}/(ed)\right)\geq\widehat{\mathbb{P}}\left(\int_{k}^{k+1}|\partial_{\eta_{t}}S_{k}|\mathrm{d}t\geq c_{1}|S_{k}|\right)\geq c_{0},\quad\forall k\in\mathbb{N}\cup\{0\}. (4.10)

Therefore, we have that

c0n𝔼^[k=0n1Ik]\displaystyle c_{0}n\leq\widehat{\mathbb{E}}\left[\sum_{k=0}^{n-1}I_{k}\right] =𝔼^[k=0n1Ik𝟏{k=0n1Ikc0n2}]+𝔼^[k=0n1Ik𝟏{k=0n1Ik<c0n2}]\displaystyle=\widehat{\mathbb{E}}\left[\sum_{k=0}^{n-1}I_{k}\cdot\mathbf{1}_{\left\{\sum_{k=0}^{n-1}I_{k}\geq\frac{c_{0}n}{2}\right\}}\right]+\widehat{\mathbb{E}}\left[\sum_{k=0}^{n-1}I_{k}\cdot\mathbf{1}_{\left\{\sum_{k=0}^{n-1}I_{k}<\frac{c_{0}n}{2}\right\}}\right]
n^(k=0n1Ikc0n2)+c0n2,\displaystyle\leq n\widehat{\mathbb{P}}\left(\sum_{k=0}^{n-1}I_{k}\geq\frac{c_{0}n}{2}\right)+\frac{c_{0}n}{2},

which implies that

^(k=0n1Ikc0n2)c02,n.\widehat{\mathbb{P}}\left(\sum_{k=0}^{n-1}I_{k}\geq\frac{c_{0}n}{2}\right)\geq\frac{c_{0}}{2},\quad\forall n\in\mathbb{N}. (4.11)

Taking ε=c0/2\varepsilon={c_{0}}/{2} in (4.7), we get that

^(|Sn|>(c04)2exp[k=0n1ΦSk23])1c04,n.\widehat{\mathbb{P}}\left(|S_{n}|>\left(\frac{c_{0}}{4}\right)^{2}\exp\left[\sum_{k=0}^{n-1}\frac{\Phi_{S_{k}}^{2}}{3}\right]\right)\geq 1-\frac{c_{0}}{4},\quad\forall n\in\mathbb{N}.

Combined with (4.11), this implies that

^(|Sn|>(c04)2exp[k=0n1ΦSk23] and k=0n1Ikc0n2)c04,\widehat{\mathbb{P}}\left(|S_{n}|>\left(\frac{c_{0}}{4}\right)^{2}\exp\left[\sum_{k=0}^{n-1}\frac{\Phi_{S_{k}}^{2}}{3}\right]\text{ and }\sum_{k=0}^{n-1}I_{k}\geq\frac{c_{0}n}{2}\right)\geq\frac{c_{0}}{4},

which concludes (4.9) by the definition of IkI_{k}: on the intersection of these two events, k=0n1ΦSk23c0n213(c1ed)2=c0c126e2d2n\sum_{k=0}^{n-1}\frac{\Phi_{S_{k}}^{2}}{3}\geq\frac{c_{0}n}{2}\cdot\frac{1}{3}\left(\frac{c_{1}}{ed}\right)^{2}=\frac{c_{0}c_{1}^{2}}{6e^{2}d^{2}}n. ∎

Proof of Theorem 1.2.

Recall that B(o,k)B(o,k) denotes the kk-neighborhood of oo on GG. Let ((Xn,Sn):n0)((X_{n},S_{n}):n\geq 0) be the Diaconis-Fill coupling defined above. By (4.2), we have

^𝜼(XnB(o,k)||Sn|>(8/c0)dk)|B(o,k)|(8/c0)dkc08,\widehat{\mathbb{P}}^{\bm{\eta}}\left(X_{n}\in B(o,k)\;\Big{|}\;|S_{n}|>(8/c_{0})d^{k}\right)\leq\frac{|B(o,k)|}{(8/c_{0})d^{k}}\leq\frac{c_{0}}{8},

which implies that

^𝜼(XnB(o,k))\displaystyle\widehat{\mathbb{P}}^{\bm{\eta}}\left(X_{n}\in B(o,k)\right) ^𝜼(XnB(o,k)||Sn|>(8/c0)dk)+^𝜼(|Sn|(8/c0)dk)\displaystyle\leq\widehat{\mathbb{P}}^{\bm{\eta}}\left(X_{n}\in B(o,k)\;\Big{|}\;|S_{n}|>(8/c_{0})d^{k}\right)+\widehat{\mathbb{P}}^{\bm{\eta}}\left(|S_{n}|\leq(8/c_{0})d^{k}\right)
^𝜼(|Sn|(8/c0)dk)+c08.\displaystyle\leq\widehat{\mathbb{P}}^{\bm{\eta}}\left(|S_{n}|\leq(8/c_{0})d^{k}\right)+\frac{c_{0}}{8}.

In the special case where the initial configuration is πp\pi_{p}, we obtain that

^(XnB(o,k))^(|Sn|(8/c0)dk)+c08.\widehat{\mathbb{P}}\left(X_{n}\in B(o,k)\right)\leq\widehat{\mathbb{P}}\left(|S_{n}|\leq(8/c_{0})d^{k}\right)+\frac{c_{0}}{8}. (4.12)

By setting

k=k(n)=1logd(logc03128+c0c126e2d2n)k=k(n)=\left\lfloor\frac{1}{\log d}\left(\log\frac{c_{0}^{3}}{128}+\frac{c_{0}c_{1}^{2}}{6e^{2}d^{2}}n\right)\right\rfloor

in (4.9), we get that

^(|Sn|(8/c0)dk(n))1c04n.\widehat{\mathbb{P}}\left(|S_{n}|\leq(8/c_{0})d^{k(n)}\right)\leq 1-\frac{c_{0}}{4}\quad\forall n\in\mathbb{N}.
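Indeed, by the choice of k(n)k(n), we have dk(n)(c03/128)exp[c0c126e2d2n]d^{k(n)}\leq(c_{0}^{3}/128)\exp[\frac{c_{0}c_{1}^{2}}{6e^{2}d^{2}}n], so that (8/c0)dk(n)(c0/4)2exp[c0c126e2d2n](8/c_{0})d^{k(n)}\leq(c_{0}/4)^{2}\exp[\frac{c_{0}c_{1}^{2}}{6e^{2}d^{2}}n]; hence the event {|Sn|(8/c0)dk(n)}\{|S_{n}|\leq(8/c_{0})d^{k(n)}\} is contained in the complement of the event in (4.9), which gives the bound displayed above.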

Combined with (4.12), this gives us

^(XnB(o,k(n)))1c08.\displaystyle\widehat{\mathbb{P}}\left(X_{n}\in B(o,k(n))\right)\leq 1-\frac{c_{0}}{8}. (4.13)

Applying Lemma 2.3 and (4.13) in the case where η0\eta_{0} has distribution πp\pi_{p}, we obtain

vp(μ)c0c126e2d2logd.v_{p}(\mu)\geq\frac{c_{0}c_{1}^{2}}{6e^{2}d^{2}\log d}. (4.14)

Finally, by Proposition 2.1, this bound extends to a general initial bond configuration. ∎

4.2. The tree case

Let G=𝕋d=(V(𝕋d),E(𝕋d))G=\mathbb{T}_{d}=(V\left(\mathbb{T}_{d}\right),E\left(\mathbb{T}_{d}\right)) be an infinite regular tree where every vertex has degree d3d\geq 3. It is known that pc(𝕋d)=b1p_{c}(\mathbb{T}_{d})=b^{-1}, where b:=d1b:=d-1. Let oo be an arbitrary vertex in VV. Define

θp:=( infinite open path containing o in the Bernoulli-p bond percolation of 𝕋d).\theta_{p}:=\mathbb{P}(\exists\text{ infinite open path containing }o\text{ in the Bernoulli-$p$ bond percolation of }\mathbb{T}_{d}).

Let 𝕋~b\tilde{\mathbb{T}}_{b} be an infinite bb-ary tree, i.e., an infinite tree with a root oo having degree bb and all other vertices having degree dd. Then, define

θ~p:=( infinite open path containing o in the Bernoulli-p bond percolation of 𝕋~b).\tilde{\theta}_{p}:=\mathbb{P}(\exists\text{ infinite open path containing }o\text{ in the Bernoulli-$p$ bond percolation of }\tilde{\mathbb{T}}_{b}).

Note that θp\theta_{p} and θ~p\tilde{\theta}_{p} are related by the equation

1θp=(1pθ~p)d.1-\theta_{p}=(1-p\tilde{\theta}_{p})^{d}.

In particular, we have θpdpθ~p as ppc\theta_{p}\sim dp\tilde{\theta}_{p}\text{ as }p\downarrow p_{c}. This subsection focuses on proving the following theorem, which implies the desired lower bound in (1.5).

Theorem 4.4.

For any p>pc=1/bp>p_{c}=1/b, the speed for the random walk on dynamical percolation in 𝕋d\mathbb{T}_{d} with any initial bond configuration η0\eta_{0} satisfies

vp(μ)p3(pθ~p)948e2d3logd.v_{p}(\mu)\geq\frac{p^{3}(p\tilde{\theta}_{p})^{9}}{48e^{2}d^{3}\log d}. (4.15)

In particular, there exists a constant c4.4=c4.4(d)>0c_{\ref{thm::supercritical_quantitative}}=c_{\ref{thm::supercritical_quantitative}}(d)>0 depending only on dd such that

vp(μ)c4.4(ppc)9 as ppc.v_{p}(\mu)\geq c_{\ref{thm::supercritical_quantitative}}(p-p_{c})^{9}\ \ \text{ as }\ \ p\downarrow p_{c}. (4.16)

Recall that in the context of bond percolation η\eta, a vertex xVx\in V is called a trifurcation point of η\eta if closing all edges incident to xx would split the component of xx in η\eta into at least 3 disjoint infinite connected components; see Figure 1 for an illustration of trifurcation points on 𝕋3\mathbb{T}_{3}.

Figure 1. Dots stand for an infinite open path emanating from the corresponding vertex; three trifurcation points are marked in red.

Assuming that η0\eta_{0} is distributed according to πp\pi_{p}, by stationarity, ηt\eta_{t} also follows the distribution πp\pi_{p} for any t0t\geq 0. Let SV(𝕋d)S\subset V\left(\mathbb{T}_{d}\right) be an arbitrary finite subset. For each xSx\in S, by requiring 33 fixed edges incident to xx to be open, with each leading to an infinite open path inside its own branch of the tree (the three branches being independent), we have

^(x is a trifurcation point of ηt)(pθ~p)3.\widehat{\mathbb{P}}(x\text{ is a trifurcation point of }\eta_{t})\geq(p\tilde{\theta}_{p})^{3}. (4.17)

This implies that

𝔼^[number of trifurcation points of ηt in S]|S|(pθ~p)3.\widehat{\mathbb{E}}[\text{number of trifurcation points of }\eta_{t}\text{ in }S]\geq|S|(p\tilde{\theta}_{p})^{3}. (4.18)

For any finite SVS\subset V, Burton and Keane [15] proved (and this can also be verified by induction on |S||S|) that

|ηtS|(number of trifurcation points of ηt in S)+2.|\partial_{\eta_{t}}S|\geq(\text{number of trifurcation points of $\eta_{t}$ in }S)+2. (4.19)

Combining (4.18) and (4.19), we get that

𝔼^[|ηtS|]|S|(pθ~p)3,t0.\widehat{\mathbb{E}}\left[|\partial_{\eta_{t}}S|\right]\geq|S|(p\tilde{\theta}_{p})^{3},\quad\forall t\geq 0. (4.20)

With this estimate, we can derive the following lemma.

Lemma 4.5.

Suppose η0\eta_{0} follows the stationary distribution πp\pi_{p}. For any fixed finite nonempty subset SV(𝕋d)S\subset V\left(\mathbb{T}_{d}\right) and n{0}n\in\mathbb{N}\cup\{0\}, we have

^(1|S|nn+1|ηtS|dt12(pθ~p)3)(pθ~p)32d.\widehat{\mathbb{P}}\left(\frac{1}{|S|}\int_{n}^{n+1}\left|\partial_{\eta_{t}}S\right|\mathrm{d}t\geq\frac{1}{2}(p\tilde{\theta}_{p})^{3}\right)\geq\frac{(p\tilde{\theta}_{p})^{3}}{2d}. (4.21)

A similar estimate also holds when S=SnS=S_{n} (which is random and depends on the environment up to time nn):

^(1|Sn|nn+1|ηtSn|dtp2(pθ~p)3)p2d(pθ~p)3.\widehat{\mathbb{P}}\left(\frac{1}{|S_{n}|}\int_{n}^{n+1}\left|\partial_{\eta_{t}}S_{n}\right|\mathrm{d}t\geq\frac{p}{2}(p\tilde{\theta}_{p})^{3}\right)\geq\frac{p}{2d}(p\tilde{\theta}_{p})^{3}. (4.22)
Proof.

First, we trivially have |ηtS|d|S|\left|\partial_{\eta_{t}}S\right|\leq d|S|, which implies the rough bound

Zn:=1|S|nn+1|ηtS|dtd.Z_{n}:=\frac{1}{|S|}\int_{n}^{n+1}\left|\partial_{\eta_{t}}S\right|\,\mathrm{d}t\leq d.

On the other hand, by Fubini’s theorem and (4.20), we have

𝔼^[Zn]=1|S|nn+1𝔼^[|ηtS|]dt(pθ~p)3.\widehat{\mathbb{E}}[Z_{n}]=\frac{1}{|S|}\int_{n}^{n+1}\widehat{\mathbb{E}}\left[\left|\partial_{\eta_{t}}S\right|\right]\,\mathrm{d}t\geq(p\tilde{\theta}_{p})^{3}.

Thus, we get that

(pθ~p)3\displaystyle(p\tilde{\theta}_{p})^{3}\leq 𝔼^[Zn]=𝔼^[Zn𝟏(Zn12(pθ~p)3)]+𝔼^[Zn𝟏(Zn<12(pθ~p)3)]\displaystyle\widehat{\mathbb{E}}[Z_{n}]=\widehat{\mathbb{E}}\left[Z_{n}\mathbf{1}\left(Z_{n}\geq\frac{1}{2}{(p\tilde{\theta}_{p})^{3}}\right)\right]+\widehat{\mathbb{E}}\left[Z_{n}\mathbf{1}\left(Z_{n}<\frac{1}{2}{(p\tilde{\theta}_{p})^{3}}\right)\right]
\displaystyle\leq d^(Zn12(pθ~p)3)+12(pθ~p)3,\displaystyle d\widehat{\mathbb{P}}\left(Z_{n}\geq\frac{1}{2}{(p\tilde{\theta}_{p})^{3}}\right)+\frac{1}{2}(p\tilde{\theta}_{p})^{3}, (4.23)

which gives the desired inequality (4.21).

To show the estimate for SnS_{n}, we will establish the lower bound

𝔼^[|ηtSn||Sn|]p(pθ~p)3,t[n,n+1].\widehat{\mathbb{E}}\left[\frac{|\partial_{\eta_{t}}S_{n}|}{|S_{n}|}\right]\geq p(p\tilde{\theta}_{p})^{3},\quad\forall t\in[n,n+1]. (4.24)

Then, using a similar argument as above, we can conclude (4.22). Since for t[n,n+1]t\in[n,n+1],

𝔼^[|ηtSn||Sn][eμ(tn)+p(1eμ(tn))]𝔼^[|ηnSn||Sn]p𝔼^[|ηnSn||Sn],\widehat{\mathbb{E}}\left[\left.|\partial_{\eta_{t}}S_{n}|\right|S_{n}\right]\geq[e^{-\mu(t-n)}+p(1-e^{-\mu(t-n)})]\widehat{\mathbb{E}}\left[\left.|\partial_{\eta_{n}}S_{n}|\right|S_{n}\right]\geq p\widehat{\mathbb{E}}\left[\left.|\partial_{\eta_{n}}S_{n}|\right|S_{n}\right],

it suffices to prove that

𝔼^[|ηnSn||Sn|](pθ~p)3.\widehat{\mathbb{E}}\left[\frac{|\partial_{\eta_{n}}S_{n}|}{|S_{n}|}\right]\geq(p\tilde{\theta}_{p})^{3}. (4.25)

By (4.19), the bound (4.25) follows from the following estimate on the proportion of trifurcation points:

𝔼^[1|Sn|xSn𝟏(x is a trifurcation point of ηn)](pθ~p)3.\widehat{\mathbb{E}}\left[\frac{1}{|S_{n}|}\sum_{x\in S_{n}}\mathbf{1}(x\text{ is a trifurcation point of }\eta_{n})\right]\geq(p\tilde{\theta}_{p})^{3}. (4.26)

Using (4.2), we obtain that

1|Sn|𝔼^𝜼[xSn𝟏(x is a trifurcation point of ηn)|Sn]\displaystyle~{}\frac{1}{|S_{n}|}\widehat{\mathbb{E}}^{\bm{\eta}}\left[\sum_{x\in S_{n}}\mathbf{1}{(x\text{ is a trifurcation point of }\eta_{n})}\Big{|}S_{n}\right]
=\displaystyle= ^𝜼(Xn is a trifurcation point of ηn|Sn).\displaystyle~{}\widehat{\mathbb{P}}^{\bm{\eta}}\left(\left.X_{n}\text{ is a trifurcation point of }\eta_{n}\right|S_{n}\right).

Taking the expectation of both sides, we see that to prove (4.26), it suffices to show that

^(Xn is a trifurcation point of ηn)(pθ~p)3.\widehat{\mathbb{P}}\left(X_{n}\text{ is a trifurcation point of }\eta_{n}\right)\geq(p\tilde{\theta}_{p})^{3}. (4.27)

By Lemma 2.5, the environment seen by the moving particle is stationary when η0\eta_{0} has distribution πp\pi_{p}. Thus, we have that

^(Xn is a trifurcation point of ηn)=^(X0 is a trifurcation point of η0)(pθ~p)3.\widehat{\mathbb{P}}\left(X_{n}\text{ is a trifurcation point of }\eta_{n}\right)=\widehat{\mathbb{P}}\left(X_{0}\text{ is a trifurcation point of }\eta_{0}\right)\geq(p\tilde{\theta}_{p})^{3}. (4.28)

This implies (4.27) and concludes the proof of (4.22). ∎

Proof of Theorem 4.4.

By Lemma 4.5, we can take

c0=p(pθ~p)32d,c1=p(pθ~p)32,c_{0}=\frac{p(p\tilde{\theta}_{p})^{3}}{2d},\quad c_{1}=\frac{p(p\tilde{\theta}_{p})^{3}}{2},

in (4.8). Then, applying (4.14) concludes (4.15). It is well-known that (see e.g., equation (2.1.17) of [28] by Heydenreich and van der Hofstad)

θ~p2b2b1(ppc) as ppc.\tilde{\theta}_{p}\sim\frac{2b^{2}}{b-1}(p-p_{c})\ \ \text{ as }\ \ p\downarrow p_{c}. (4.29)

Combined with (4.15), it implies (4.16). ∎
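As a purely numerical illustration of the quantities entering (4.15), (4.16), and (4.29), the following Python sketch (ours; the parameter values are arbitrary) computes θ~p\tilde{\theta}_{p} by iterating its fixed-point equation 1θ~p=(1pθ~p)b1-\tilde{\theta}_{p}=(1-p\tilde{\theta}_{p})^{b}, recovers θp\theta_{p}, and compares θ~p\tilde{\theta}_{p} with the right-hand side of the near-critical asymptotic (4.29).

```python
def survival_probabilities(p, d, iters=100_000):
    """Iterate x -> 1 - (1 - p*x)^(d-1) starting from 1; the iterates decrease
    to theta_tilde_p, the survival probability on the (d-1)-ary tree. Then
    recover theta_p on T_d from 1 - theta_p = (1 - p*theta_tilde_p)^d."""
    b = d - 1
    theta_tilde = 1.0
    for _ in range(iters):
        theta_tilde = 1.0 - (1.0 - p * theta_tilde) ** b
    theta = 1.0 - (1.0 - p * theta_tilde) ** d
    return theta_tilde, theta

d, b, p_c = 3, 2, 0.5
for p in (0.6, 0.55, 0.51):
    theta_tilde, theta = survival_probabilities(p, d)
    asymptotic = 2 * b**2 / (b - 1) * (p - p_c)   # right-hand side of (4.29)
    print(p, theta_tilde, theta, asymptotic)
```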

4.3. The general case

In this subsection, we complete the proof of Lemma 4.2 for general nonamenable transitive unimodular graphs. We still use the evolving set process and the Diaconis-Fill coupling defined for the general graph GG. The proof of Lemma 4.2 depends on the following lemma established by Benjamini, Lyons, and Schramm [13]. Hermon and Hutchcroft [26] also utilized this lemma (stated as Lemma 2.5 there) to prove the anchored expansion of percolation clusters.

Lemma 4.6.

Let GG be a connected, locally finite, nonamenable transitive unimodular graph. We select an arbitrary root oo in GG. For any p(pc,1]p\in(p_{c},1], there exists an automorphism-invariant percolation process ζ\zeta on GG and a coupling between ζ\zeta and the Bernoulli-pp bond percolation η\eta (with law πp\pi_{p}) on GG, such that the following holds:

  • (i)

    There exists a constant cpcp(G)>0c_{p}\equiv c_{p}(G)>0 such that the root oo is a trifurcation point of ζ\zeta with probability at least cpc_{p}.

  • (ii)

    The process ζ\zeta is dominated by η\eta, i.e., ζη\zeta\leq\eta.

  • (iii)

    The coupling (η,ζ)(\eta,\zeta) between η\eta and ζ\zeta is automorphism-invariant. Specifically, ζ\zeta can be obtained as ζ=F(η,ξ)\zeta=F(\eta,\xi), where ξ\xi is a collection of i.i.d. random variables at the vertices and edges (independent of η\eta), and FF is an equivariant function under automorphisms of GG.

Proof.

Under the assumption of unimodularity, this lemma is essentially contained in Lemma 3.8 and Theorem 3.10 of [13]. (It also holds for nonunimodular graphs as explained in Lemma 2.5 of [26].) We now briefly explain how the equivariant function FF is constructed. Let

pu=pu(G):=inf{p[0,1]:η has a unique infinite cluster 𝐏p-a.s.}.p_{u}=p_{u}(G):=\inf\{p\in[0,1]:\eta\text{ has a unique infinite cluster }{\bf P}_{p}\text{-a.s.}\}\,.

If pc<pup_{c}<p_{u}, we can choose p(pc,pup)p_{*}\in(p_{c},p_{u}\wedge p) and let ξ\xi be a Bernoulli-(p/p)(p_{*}/p) percolation on GG, independent of η\eta. Then, we can define the function FF as

ζ(e)=F(η,ξ)(e):=ξ(e)η(e).\zeta(e)=F(\eta,\xi)(e):=\xi(e)\eta(e).

In the more challenging case pc=pup_{c}=p_{u}, the construction of FF was explained in the proof of Lemma 3.8 and Theorem 3.10 in [13]. It involves Bernoulli percolation (independent of η\eta), minimal spanning forest, and wired uniform spanning forest. (To construct the wired uniform spanning forest, one can employ the classical Wilson’s method, initially developed by Wilson [53] and subsequently extended by Benjamini, Lyons, Peres, and Schramm [12] to infinite graphs.) The proof in [13, Theorem 3.10] is based on the construction of a subprocess ηη\eta^{\prime}\subset\eta in [13, Theorem 3.1]. There is one minor point in the construction of η\eta^{\prime} that we want to clarify: when a shortest path between two points must be selected in an equivariant way, we assign to the edges an independent collection of i.i.d. uniform [0,1][0,1] labels and choose the path with the minimal sum. This is slightly different from the construction in the proof of [13, Theorem 3.1]. ∎

Proof of Lemma 4.2.

We prove that (4.8) holds for

c0=pcp2d,c1=pcp2,c_{0}=\frac{pc_{p}}{2d},\quad c_{1}=\frac{pc_{p}}{2},

where cpc_{p} is the constant in Lemma 4.6 (i). Similar to the proof of (4.22), we only need to establish the following estimate on |ηnSn|/|Sn||\partial_{\eta_{n}}S_{n}|/|S_{n}|, which is analogous to (4.25):

𝔼^[|ηnSn|/|Sn|]cp.\widehat{\mathbb{E}}\left[{|\partial_{\eta_{n}}S_{n}|}/{|S_{n}|}\right]\geq c_{p}.

Using the function FF from Lemma 4.6, we define ζt=F(ηt,ξ)\zeta_{t}=F(\eta_{t},\xi) for t0t\geq 0. By Lemma 4.6 (ii), it suffices to prove that

𝔼^[|ζnSn|/|Sn|]cp.\widehat{\mathbb{E}}\left[{|\partial_{\zeta_{n}}S_{n}|}/{|S_{n}|}\right]\geq c_{p}. (4.30)

Following the argument presented below (4.25), we can derive that

𝔼^[|ζnSn||Sn|]^(Xn is a trifurcation point of ζn).\displaystyle\widehat{\mathbb{E}}\left[\frac{|\partial_{\zeta_{n}}S_{n}|}{|S_{n}|}\right]\geq\widehat{\mathbb{P}}\left(X_{n}\text{ is a trifurcation point of }\zeta_{n}\right).

By Lemma 2.5, the pair (ηt,ζt)=(ηt,F(ηt,ξ))(\eta_{t},\zeta_{t})=(\eta_{t},F(\eta_{t},\xi)) seen by the moving particle at XtX_{t} is stationary. Hence, we have

^(Xn is a trifurcation point of ζn)=^(X0 is a trifurcation point of ζ0)cp\widehat{\mathbb{P}}\left(X_{n}\text{ is a trifurcation point of }\zeta_{n}\right)=\widehat{\mathbb{P}}\left(X_{0}\text{ is a trifurcation point of }\zeta_{0}\right)\geq c_{p}

according to Lemma 4.6 (i). This concludes (4.30), and hence completes the proof of Lemma 4.2. ∎

5. Speed for the subcritical case

In this section, we provide the proofs of Theorem 1.1 and Proposition 1.5. Note that when p<pcp<p_{c}, Proposition 1.5 already gives the desired lower bound in (1.4), so it remains to prove the upper bound in Theorem 1.1 and to prove Proposition 1.5 itself.

5.1. Proof of Theorem 1.1

The upper bound in (1.4) is trivially satisfied when μ1/2\mu\geq 1/2. Hence, in the following proof, we will assume that μ1/2\mu\leq 1/2. It suffices to establish the following result.

Theorem 5.1.

For any p(0,pc)p\in(0,p_{c}) and μ(0,1/2]\mu\in(0,1/2], there exists a constant C5.1>0C_{\ref{thm:Subcritical_Upper}}>0 independent of μ\mu such that the following estimate holds for all t0t\geq 0:

𝔼[dist(X0,Xt/μ)]C5.1(t1).\displaystyle{\mathbb{E}}[\mathrm{dist}(X_{0},X_{t/\mu})]\leq C_{\ref{thm:Subcritical_Upper}}(t\vee 1). (5.1)

This theorem is a consequence of the following classical exponential tail estimate (5.2) concerning the diameter of connected components in subcritical percolation. This estimate follows directly from [20, Theorem 1.1] by Duminil-Copin and Tassion for subcritical percolation on arbitrary locally finite transitive infinite graphs. In fact, it already follows from the arguments in the breakthrough works of Menshikov [44] and Aizenman and Barsky [1].

Proposition 5.2.

For any p(0,pc)p\in(0,p_{\mathrm{c}}), there exists a constant C5.2=C5.2(p)>0C_{\ref{thm:exp.volume}}=C_{\ref{thm:exp.volume}}(p)>0 such that for any vertex oVo\in V and all r>0r>0, the following bound holds:

𝐏p(𝗋𝖺𝖽ext(𝒞o)r)eC5.2r.{\bf P}_{p}(\mathsf{rad}_{\operatorname{ext}}(\mathcal{C}_{o})\geq r)\leq e^{-C_{\ref{thm:exp.volume}}r}. (5.2)

Here, recall that 𝒞o\mathcal{C}_{o} is the connected component containing oo, and 𝗋𝖺𝖽ext()\mathsf{rad}_{\operatorname{ext}}(\cdot) refers to its extrinsic radius as defined in (2.1).

Proof of Theorem 5.1.

We divide the time interval [0,t/μ][0,t/\mu] into smaller time intervals of length β/μ\beta/\mu, where β>0\beta>0 is a constant that will be chosen later. Applying the triangle inequality, we get that

dist(X0,Xt/μ)k=0t/β1dist(Xkβ/μ,X(k+1)β/μ)+dist(Xt/ββ/μ,Xt/μ).\displaystyle\mathrm{dist}(X_{0},X_{t/\mu})\leq\sum_{k=0}^{\lfloor t/\beta\rfloor-1}\mathrm{dist}(X_{k\beta/\mu},X_{(k+1)\beta/\mu})+\mathrm{dist}(X_{\lfloor t/\beta\rfloor\cdot\beta/\mu},X_{t/\mu}). (5.3)

For 0kt/β0\leq k\leq\lfloor t/\beta\rfloor, let η¯k\bar{\eta}_{k} be the set of edges that are open at some point during [kβ/μ,(k+1)β/μ][k\beta/\mu,(k+1)\beta/\mu]. Let 𝒞kV\mathcal{C}_{k}\subset V denote the open cluster of Xkβ/μX_{k\beta/\mu} with respect to the bond configuration η¯k\bar{\eta}_{k}. Then, the particle must stay within 𝒞k\mathcal{C}_{k} during [kβ/μ,(k+1)β/μ][k\beta/\mu,(k+1)\beta/\mu], giving that

dist(Xkβ/μ,X(k+1)β/μ)2𝗋𝖺𝖽ext(𝒞k).\displaystyle\mathrm{dist}(X_{k\beta/\mu},X_{(k+1)\beta/\mu})\leq 2\mathsf{rad}_{\operatorname{ext}}(\mathcal{C}_{k}). (5.4)

By combining (5.3) and (5.4), we conclude that

𝔼[dist(X0,Xt/μ)]2k=0t/β𝔼[𝗋𝖺𝖽ext(𝒞k)].\displaystyle{\mathbb{E}}[\mathrm{dist}(X_{0},X_{t/\mu})]\leq 2\sum_{k=0}^{\lfloor t/\beta\rfloor}{\mathbb{E}}[\mathsf{rad}_{\operatorname{ext}}(\mathcal{C}_{k})]. (5.5)

Note that for any edge ee, we have

p~:=(e is open some time during [kβ/μ,(k+1)β/μ])1(1p)eβ.\displaystyle\tilde{p}:={\mathbb{P}}\left(e\text{ is open some time during }[k\beta/\mu,(k+1)\beta/\mu]\right)\leq 1-(1-p)e^{-\beta}.

Since p<pcp<p_{c}, we can choose a small enough constant β>0\beta>0 such that p~<pc\tilde{p}<p_{c}. Hence, 𝒞k\mathcal{C}_{k} is a connected component in a subcritical percolation of GG with parameter p~\tilde{p}. By Proposition 5.2, 𝗋𝖺𝖽ext(𝒞k)\mathsf{rad}_{\operatorname{ext}}(\mathcal{C}_{k}) has an exponential tail, which implies that 𝔼[𝗋𝖺𝖽ext(𝒞k)]C{\mathbb{E}}[\mathsf{rad}_{\operatorname{ext}}(\mathcal{C}_{k})]\leq C for a large constant C>0C>0. Together with (5.5), this concludes the proof. ∎
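The choice of β\beta in the proof is explicit: any β\beta with 1(1p)eβ<pc1-(1-p)e^{-\beta}<p_{c} works, i.e., any β<log((1p)/(1pc))\beta<\log\left((1-p)/(1-p_{c})\right). A small Python helper (ours, purely illustrative; the value of pcp_{c} must be supplied for the graph at hand) is:

```python
import math

def choose_beta(p, p_c, margin=0.5):
    """Return a block length beta > 0 with 1 - (1 - p) * exp(-beta) < p_c,
    assuming 0 < p < p_c < 1. The constraint is equivalent to
    beta < log((1 - p) / (1 - p_c)); we take a fixed fraction of that value."""
    assert 0 < p < p_c < 1
    return margin * math.log((1.0 - p) / (1.0 - p_c))

# example: p = 0.3 on a graph with p_c = 0.5
beta = choose_beta(0.3, 0.5)
print(beta, 1 - (1 - 0.3) * math.exp(-beta))   # the second value is < 0.5
```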

5.2. A general lower bound: Proof of Proposition 1.5

In this subsection, we prove Proposition 1.5. As a special case, when p<pcp<p_{c}, it gives the lower bound in (1.4).

5.2.1. Case μ>1/2\mu>1/2.

First, we consider the simpler case where μ>1/2\mu>1/2. Similarly to the setting in Section 4.1, we utilize the Diaconis-Fill coupling between the random walk and the evolving set process. Following the proof of Theorem 1.2, our goal is to establish a similar estimate as in Lemma 4.2. Let η0{\mathbb{P}}_{\eta_{0}} denote the probability measure of the environment process 𝜼\bm{\eta} when the initial bond configuration is η0{\eta}_{0}.

Lemma 5.3.

Assume the same setup as in Proposition 1.5. For any p(0,1]p\in(0,1], μ>1/2\mu>1/2, initial environment configuration η0{\eta}_{0}, and nonempty finite subset SVS\subset V, we have that

η0(1|S|01|ηtS|dtcpd2Φ(G))cp2,{\mathbb{P}}_{{\eta}_{0}}\left(\frac{1}{|S|}\int_{0}^{1}\left|\partial_{\eta_{t}}S\right|\mathrm{d}t\geq\frac{cpd}{2}\Phi(G)\right)\geq\frac{cp}{2}, (5.6)

where c=01(1et/2)dt>0c=\int_{0}^{1}(1-e^{-t/2})\mathrm{d}t>0.

Proof.

Recall that

|ηtS|=xSyScηt(x,y)|ES|.\left|\partial_{\eta_{t}}S\right|=\sum_{x\in S}\sum_{y\in S^{c}}\eta_{t}(x,y)\leq\left|\partial_{E}S\right|.

For each edge eESe\in\partial_{E}S, we have

η0(ηt(e)=1)(1eμt)p,{\mathbb{P}}_{{\eta}_{0}}(\eta_{t}(e)=1)\geq(1-e^{-\mu t})p, (5.7)

where the equality holds when η0(e)=0\eta_{0}(e)=0. Let 𝔼η0{\mathbb{E}}_{{\eta}_{0}} denote the expectation with respect to η0{\mathbb{P}}_{{\eta}_{0}}. By applying Fubini’s theorem and (5.7), we obtain that

𝔼η0[01|ηtS|dt]=01𝔼η0[|ηtS|]dt|ES|01(1eμt)pdtcp|ES|.{\mathbb{E}}_{{\eta}_{0}}\left[\int_{0}^{1}\left|\partial_{\eta_{t}}S\right|\mathrm{d}t\right]=\int_{0}^{1}{\mathbb{E}}_{{\eta}_{0}}\left[\left|\partial_{\eta_{t}}S\right|\right]\mathrm{d}t\geq|\partial_{E}S|\cdot\int_{0}^{1}(1-e^{-\mu t})p\mathrm{d}t\geq cp|\partial_{E}S|.

Then, from this inequality, we get that

cp|ES|\displaystyle cp|\partial_{E}S|\leq 𝔼η0[01|ηtS|dt]|ES|η0(01|ηtS|dtcp2|ES|)+cp2|ES|,\displaystyle{\mathbb{E}}_{{\eta}_{0}}\left[\int_{0}^{1}\left|\partial_{\eta_{t}}S\right|\mathrm{d}t\right]\leq|\partial_{E}S|{\mathbb{P}}_{\eta_{0}}\left(\int_{0}^{1}\left|\partial_{\eta_{t}}S\right|\mathrm{d}t\geq\frac{cp}{2}|\partial_{E}S|\right)+\frac{cp}{2}|\partial_{E}S|,

which implies that

η0(1|S|01|ηtS|dtcp2|ES||S|)cp2.\displaystyle{\mathbb{P}}_{\eta_{0}}\left(\frac{1}{|S|}\int_{0}^{1}\left|\partial_{\eta_{t}}S\right|\mathrm{d}t\geq\frac{cp}{2}\frac{|\partial_{E}S|}{|S|}\right)\geq\frac{cp}{2}.

This estimate, together with the fact that |ES|dΦ(G)|S||\partial_{E}S|\geq d\Phi(G)|S|, gives the desired inequality (5.6). ∎

Corollary 5.4.

Under the setting of Lemma 5.3, we have that

vp(μ)c3p3Φ(G)248e2logd.v_{p}(\mu)\geq\frac{c^{3}p^{3}\Phi(G)^{2}}{48e^{2}\log d}. (5.8)
Proof.

With (5.6), we obtain (4.8) with the following values:

c0=cp2,c1=cpd2Φ(G).c_{0}=\frac{cp}{2},\quad c_{1}=\frac{cpd}{2}\Phi(G).

Using exactly the same argument as in Section 4.1 and setting c0,c1c_{0},c_{1} in  (4.14) as above, we can conclude (5.8). ∎

5.2.2. Case μ(0,1/2]\mu\in(0,1/2].

It remains to deal with the more challenging case where μ(0,1/2]\mu\in(0,1/2]. In this case, we discretize time by observing the random walk at times n/μn/\mu for nn\in{\mathbb{N}}. Similarly to (4.1), we consider another time-inhomogeneous Markov chain Yn:=Xn/μY_{n}:=X_{n/\mu} for n{0}n\in\mathbb{N}\cup\{0\}. With a slight abuse of notation, we continue to denote the transition probability by

Pn+1𝜼(x,y)=𝜼(Yn+1=y|Yn=x),x,yV,n{0}.P_{n+1}^{\bm{\eta}}(x,y)=\mathbb{P}^{\bm{\eta}}\left(Y_{n+1}=y\;|\;Y_{n}=x\right),\quad\forall x,y\in V,\ n\in\mathbb{N}\cup\{0\}. (5.9)

Then, we define the Diaconis-Fill coupling and the evolving set process using Pn+1𝜼P_{n+1}^{\bm{\eta}} as in Section 4.1. Moreover, we adopt exactly the same notations as those below (4.1), replacing the XnX_{n}’s with YnY_{n}. Similarly to the proof of Theorem 1.2, it suffices to establish the following counterpart of the estimate (4.10): there exist constants c0,c1>0c_{0},c_{1}>0 such that

^(ΦSnc1/(ed))c0,n{0}.\widehat{\mathbb{P}}\left(\Phi_{S_{n}}\geq c_{1}/(ed)\right)\geq c_{0},\quad\forall n\in\mathbb{N}\cup\{0\}. (5.10)

To this end, we need the following two auxiliary lemmas whose proofs are similar to those of Lemmas 3.1 and 3.2 in [46].

Lemma 5.5.

The following estimate holds for all nonempty finite subsets SVS\subset V, p(0,1]p\in(0,1], μ(0,1/2]\mu\in(0,1/2], and any initial environment configuration η0{\eta}_{0}:

η0(|{eES:ηt(e)=1for allt[μ11,μ1]}|β|ES|)β2,{\mathbb{P}}_{\eta_{0}}\left(\left|\{e\in\partial_{E}S:\eta_{t}(e)=1\ \text{for all}\ t\in[\mu^{-1}-1,\mu^{-1}]\}\right|\geq\beta|\partial_{E}S|\right)\geq\frac{\beta}{2}, (5.11)

where β=p2(1e1/2)e(1p)/2\beta=\frac{p}{2}(1-e^{-1/2})e^{-(1-p)/2}.

Proof.

Note that the LHS is minimized when the initial configuration is η0(e)0\eta_{0}(e)\equiv 0 for eEe\in E. Thus, it is bounded from below by the probability that Bin(|ES|,(1e1+μ)peμ(1p))β|ES|\text{Bin}(|\partial_{E}S|,(1-e^{-1+\mu})pe^{-\mu(1-p)})\geq\beta|\partial_{E}S|. Here, Bin(n,q)\text{Bin}(n,q) denotes a binomial random variable with parameters nn and qq. Applying the Paley–Zygmund inequality and noticing that (1e1+μ)peμ(1p)2β(1-e^{-1+\mu})pe^{-\mu(1-p)}\geq 2\beta for all μ(0,1/2]\mu\in(0,1/2], we get (5.11). ∎
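The last step uses the elementary fact that a Bin(n,q)\text{Bin}(n,q) random variable with q2βq\geq 2\beta exceeds βn\beta n with probability at least β/2\beta/2. The following Python sketch (ours) checks this numerically with an exact binomial tail; the tested values are arbitrary.

```python
import math

def binom_tail(n, q, threshold):
    """P(Bin(n, q) >= threshold), computed exactly from the binomial pmf."""
    k0 = math.ceil(threshold)
    return sum(math.comb(n, k) * q**k * (1 - q)**(n - k) for k in range(k0, n + 1))

for n in (5, 20, 100):
    for beta in (0.05, 0.1, 0.2):
        q = 2 * beta
        assert binom_tail(n, q, beta * n) >= beta / 2
print("P(Bin(n, q) >= beta * n) >= beta / 2 holds on all tested values")
```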

Lemma 5.6.

For all nonempty finite subsets SVS\subset V, p(0,1]p\in(0,1], and μ(0,1/2]\mu\in(0,1/2], if 𝛈\bm{\eta} satisfies that

|{eES:ηt(e)=1for allt[μ11,μ1]}|β|ES|\left|\{e\in\partial_{E}S:\eta_{t}(e)=1\ \text{for all}\ t\in[\mu^{-1}-1,\mu^{-1}]\}\right|\geq\beta|\partial_{E}S| (5.12)

for some β>0\beta>0, then we have that

ΦS𝜼[0,μ1]βdeΦ(G),n{0},μ(0,1/2].\displaystyle\Phi^{\bm{\eta}_{[0,\mu^{-1}]}}_{S}\geq\frac{\beta}{de}\Phi(G),\quad\forall n\in{\mathbb{N}}\cup\{0\},\ \mu\in(0,1/2]. (5.13)

Here, 𝛈[0,μ1]\bm{\eta}_{[0,\mu^{-1}]} refers to the entire environment process between times 0 and μ1\mu^{-1}, and ΦS𝛈[0,μ1]\Phi^{\bm{\eta}_{[0,\mu^{-1}]}}_{S} is defined in a similar way as in (4.4):

ΦS𝜼[0,μ1]=1|S|xSySc^𝜼[0,μ1](Y1=y|Y0=x).\Phi_{S}^{\bm{\eta}_{[0,\mu^{-1}]}}=\frac{1}{\left|S\right|}\sum_{x\in S}\sum_{y\in S^{c}}\widehat{\mathbb{P}}^{\bm{\eta}_{[0,\mu^{-1}]}}\left(Y_{1}=y|Y_{0}=x\right). (5.14)
Proof.

We denote

Sgood:={xS:there exists an edge e from x to Sc that is open throughout [μ11,μ1]},S_{\text{good}}:=\left\{x\in S:\text{there exists an edge $e$ from $x$ to $S^{c}$ that is open throughout $[\mu^{-1}-1,\mu^{-1}]$}\right\},

and let Sbad=SSgoodS_{\text{bad}}=S\setminus S_{\text{good}}. By (5.12), we have that

|Sgood|1d|{eES:ηt(e)=1for allt[μ11,μ1]}|βd|ES|βΦ(G)|S|.|S_{\text{good}}|\geq\frac{1}{d}\left|\{e\in\partial_{E}S:\eta_{t}(e)=1\ \text{for all}\ t\in[\mu^{-1}-1,\mu^{-1}]\}\right|\geq\frac{\beta}{d}|\partial_{E}S|\geq\beta\Phi(G)|S|. (5.15)

Consider ΦS=ΦS𝜼[0,μ1]\Phi_{S}=\Phi^{\bm{\eta}_{[0,\mu^{-1}]}}_{S} as defined in (5.14) and abbreviate 𝜼=𝜼[0,μ1]\bm{\eta}=\bm{\eta}_{[0,\mu^{-1}]}. Since π(x)1\pi(x)\equiv 1 is a stationary measure for all realizations of the environment according to the definition of the random walk, we have

maxyVxS^𝜼(Xμ11=y|X0=x)maxyVxV^𝜼(Xμ11=y|X0=x)1.\displaystyle\max_{y\in V}\sum_{x\in S}\widehat{\mathbb{P}}^{\bm{\eta}}\left(X_{\mu^{-1}-1}=y|X_{0}=x\right)\leq\max_{y\in V}\sum_{x\in V}\widehat{\mathbb{P}}^{\bm{\eta}}\left(X_{\mu^{-1}-1}=y|X_{0}=x\right)\leq 1. (5.16)

We observe that

ΦS\displaystyle\Phi_{S} =^𝜼(Y1Sc|Y0S)\displaystyle=\widehat{\mathbb{P}}^{\bm{\eta}}\left(Y_{1}\in S^{c}|Y_{0}\in S\right) (5.17)
^𝜼(Xμ1Sc|X0S,Xμ11SgoodSc)^𝜼(Xμ11SgoodSc|X0S),\displaystyle\geq\widehat{\mathbb{P}}^{\bm{\eta}}\left(X_{\mu^{-1}}\in S^{c}|X_{0}\in S,X_{\mu^{-1}-1}\in S_{\text{good}}\cup S^{c}\right)\widehat{\mathbb{P}}^{\bm{\eta}}\left(X_{\mu^{-1}-1}\in S_{\text{good}}\cup S^{c}|X_{0}\in S\right),

where the conditioning Y0SY_{0}\in S assigns a probability of |S|1|S|^{-1} to each point in SS. Using (5.16), we can bound the second factor by

^𝜼(Xμ11SgoodSc|X0S)\displaystyle\widehat{\mathbb{P}}^{\bm{\eta}}\left(X_{\mu^{-1}-1}\in S_{\text{good}}\cup S^{c}|X_{0}\in S\right) =1^𝜼(Xμ11Sbad|X0S)\displaystyle=1-\widehat{\mathbb{P}}^{\bm{\eta}}\left(X_{\mu^{-1}-1}\in S_{\text{bad}}|X_{0}\in S\right)
1|Sbad||S|=|Sgood||S|βΦ(G),\displaystyle\geq 1-\frac{|S_{\text{bad}}|}{|S|}=\frac{|S_{\text{good}}|}{|S|}\geq\beta\Phi(G), (5.18)

where we used (5.15) in the last step. For the first factor, if Xμ11SgoodX_{\mu^{-1}-1}\in S_{\text{good}}, we fix an arbitrary edge ee from Xμ11X_{\mu^{-1}-1} to ScS^{c} that is open during [μ11,μ1][\mu^{-1}-1,\mu^{-1}]. The probability that the random walk clock rings exactly once during [μ11,μ1][\mu^{-1}-1,\mu^{-1}] and the attempted jump is along ee is at least (de)1(de)^{-1}. On the other hand, if Xμ11ScX_{\mu^{-1}-1}\in S^{c}, then the probability that the random walk clock does not ring during [μ11,μ1][\mu^{-1}-1,\mu^{-1}] is at least 1e11-e^{-1}. Thus, we can bound (5.17) by

ΦS(1de(1e1))βΦ(G)=βΦ(G)de.\Phi_{S}\geq\left(\frac{1}{de}\wedge(1-e^{-1})\right)\beta\Phi(G)=\frac{\beta\Phi(G)}{de}.

This leads to (5.13). ∎

Corollary 5.7.

Under the setting of Proposition 1.5, for any p(0,1)p\in(0,1) and μ(0,1/2]\mu\in(0,1/2], we have

vp(μ)μβ3Φ(G)212e2d2logd,v_{p}(\mu)\geq\mu\frac{\beta^{3}\Phi(G)^{2}}{12e^{2}d^{2}\log d}, (5.19)

where β=p2(1e1/2)e(1p)/2\beta=\frac{p}{2}(1-e^{-1/2})e^{-(1-p)/2}.

Proof.

Lemmas 5.5 and 5.6 together imply that (5.10) holds with

c0=β2,c1=βΦ(G).c_{0}=\frac{\beta}{2},\quad c_{1}=\beta\Phi(G).

Following the same proof as in Section 4.1, we obtain an estimate that is similar to (4.14) but with the speed scaled by μ1\mu^{-1}:

1μvp(μ)c0c126e2d2logd=β3Φ(G)212e2d2logd.\frac{1}{\mu}v_{p}(\mu)\geq\frac{c_{0}c_{1}^{2}}{6e^{2}d^{2}\log d}=\frac{\beta^{3}\Phi(G)^{2}}{12e^{2}d^{2}\log d}.

This gives (5.19). ∎

Proof of Proposition 1.5.

Combining Corollary 5.4 for the μ>1/2\mu>1/2 case and Corollary 5.7 for the μ1/2\mu\leq 1/2 case, we conclude (1.9). ∎

6. Concluding remarks and questions

We believe that in the critical case, the speed of the random walk should be of order μα\mu^{\alpha} for some fixed exponent α=α(G)>0\alpha=\alpha(G)>0 (with potential log(1/μ)\log(1/\mu) factors). Our results suggest that 1/2α11/2\leq\alpha\leq 1. Based on further heuristics, we conjecture that the correct exponent is α=3/4\alpha=3/4 for all nonamenable transitive unimodular graphs. Furthermore, we expect that the diffusion constant Dpc(μ)D_{p_{c}}(\mu) for the random walk on critical dynamical percolation in d{\mathbb{Z}}^{d} would exhibit a similar behavior Dpc(μ)μαD_{p_{c}}(\mu)\sim\mu^{\alpha} for a fixed exponent α=α(d)>0\alpha=\alpha(\mathbb{Z}^{d})>0. In the companion paper [24], we study the random walk on critical dynamical percolation in d{\mathbb{Z}}^{d} and establish a similar bound 1/2α(d)11/2\leq\alpha(\mathbb{Z}^{d})\leq 1 when d11d\geq 11. As mentioned in the introduction, it is commonly believed that the random walk on critical percolation in high-dimensional d{\mathbb{Z}}^{d} lattices is closely connected to that in trees. Consequently, we conjecture that α(d)\alpha(\mathbb{Z}^{d}) should match the exponent observed in trees, e.g., α(d)=α(𝕋b+1)\alpha(\mathbb{Z}^{d})=\alpha(\mathbb{T}_{b+1}) for large dd (e.g., d11d\geq 11) and any b2b\geq 2.

The lower bound for the speed of the random walk on supercritical dynamical percolation in trees is quantitative in terms of dd and pp, as shown in Theorem 4.4. In particular, for the near-critical case when ppcp\downarrow p_{c}, the dependence of the lower bound on ppcp-p_{c} is shown in (4.16). We now consider the simple random walk on a Galton-Watson (GW) tree with

pk=(bk)pk(1p)bk,k{0,1,,b}.p_{k}={b\choose k}p^{k}(1-p)^{b-k},\quad k\in\{0,1,\ldots,b\}.

This walk is closely related to the random walk on Bernoulli-pp percolation in 𝕋d\mathbb{T}_{d}. By Theorem 17.13 and Exercise 17.7 of [42] (or Theorem 3.2 of [41] by Lyons, Pemantle, and Peres), we know that the speed on the GW tree is

v~(p)=k=0bk1k+1pk1qk+11q2.\widetilde{v}(p)=\sum_{k=0}^{b}\frac{k-1}{k+1}p_{k}\frac{1-q^{k+1}}{1-q^{2}}. (6.1)

Here, q=1θ~pq=1-\widetilde{\theta}_{p} is the extinction probability, satisfying the equation (1pθ~p)b=1θ~p.(1-p\widetilde{\theta}_{p})^{b}=1-\widetilde{\theta}_{p}. From this equation, it follows that as ppc,p\downarrow p_{c},

ppc=b12p2θ~p(b1)(b2)6p3θ~p2+O(θ~p3).p-p_{c}=\frac{b-1}{2}p^{2}\widetilde{\theta}_{p}-\frac{(b-1)(b-2)}{6}p^{3}\widetilde{\theta}_{p}^{2}+O(\widetilde{\theta}_{p}^{3}). (6.2)

By performing a Taylor expansion of (6.1) around θ~p=0\widetilde{\theta}_{p}=0, we obtain that

v~(p)\displaystyle\widetilde{v}(p) =12θ~pk=0b(k1)pk(1k2θ~p+k(k1)6θ~p2)+O(θ~p3)\displaystyle=\frac{1}{2-\widetilde{\theta}_{p}}\sum_{k=0}^{b}(k-1)p_{k}\left(1-\frac{k}{2}\widetilde{\theta}_{p}+\frac{k(k-1)}{6}\widetilde{\theta}_{p}^{2}\right)+O(\widetilde{\theta}_{p}^{3})
=12θ~p𝔼[(Z1)θ~p2Z(Z1)+θ~p26Z(Z1)2]+O(θ~p3),\displaystyle=\frac{1}{2-\widetilde{\theta}_{p}}\mathbb{E}\left[(Z-1)-\frac{\widetilde{\theta}_{p}}{2}Z(Z-1)+\frac{\widetilde{\theta}_{p}^{2}}{6}Z(Z-1)^{2}\right]+O(\widetilde{\theta}_{p}^{3}), (6.3)

where ZBin(b,p)Z\sim\text{Bin}(b,p) is a binomial random variable with parameters bb and pp. Using (6.2) and that

𝔼Z=bp,𝔼[Z(Z1)]=b(b1)p2,𝔼[Z(Z1)2]=b(b1)(b2)p3+b(b1)p2,\mathbb{E}Z=bp,\quad{\mathbb{E}}[Z(Z-1)]=b(b-1)p^{2},\quad{\mathbb{E}}[Z(Z-1)^{2}]=b(b-1)(b-2)p^{3}+b(b-1)p^{2},

we can simplify (6.3) as

v~(p)\displaystyle\widetilde{v}(p) =b(b1)p26(2θ~p)θ~p2+O(θ~p3)(ppc)2 as ppc,\displaystyle=\frac{b(b-1)p^{2}}{6(2-\widetilde{\theta}_{p})}\widetilde{\theta}_{p}^{2}+O(\widetilde{\theta}_{p}^{3})\gtrsim(p-p_{c})^{2}\ \ \text{ as }\ \ p\downarrow p_{c}, (6.4)

where we used the asymptotic (4.29) in the second step. Taking into account the fact that X0=oX_{0}=o has a probability θpppc\theta_{p}\sim p-p_{c} of lying inside an infinite component, we heuristically expect that the random walk on near-critical dynamical percolation in 𝕋d\mathbb{T}_{d} should have a speed

vp(μ)θpv~(p)(ppc)3, as ppc.v_{p}(\mu)\gtrsim\theta_{p}\widetilde{v}(p)\gtrsim(p-p_{c})^{3},\quad\text{ as }p\downarrow p_{c}.

Thus, the exponent shown in (4.16) is likely not sharp.
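To illustrate the quadratic behavior in (6.4) numerically, the following Python sketch (ours, not part of the argument) evaluates v~(p)\widetilde{v}(p) from (6.1) for the binomial offspring law and compares it with the leading term b(b1)p2θ~p2/(6(2θ~p))b(b-1)p^{2}\widetilde{\theta}_{p}^{2}/(6(2-\widetilde{\theta}_{p})); the chosen values of bb and pp are arbitrary.

```python
import math

def gw_speed(p, b, iters=100_000):
    """Evaluate (6.1): the speed of simple random walk on a Galton-Watson tree
    with offspring law Bin(b, p), where q is the extinction probability."""
    q = 0.0                                  # iterate q -> (1 - p + p*q)^b upward
    for _ in range(iters):
        q = (1.0 - p + p * q) ** b
    speed = sum(
        (k - 1) / (k + 1)
        * math.comb(b, k) * p**k * (1 - p)**(b - k)
        * (1 - q**(k + 1)) / (1 - q**2)
        for k in range(b + 1)
    )
    theta_tilde = 1.0 - q
    leading = b * (b - 1) * p**2 * theta_tilde**2 / (6 * (2 - theta_tilde))
    return speed, leading

for p in (0.55, 0.52, 0.51):                 # b = 2, so p_c = 1/2
    print(p, gw_speed(p, b=2))
```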

In contrast to the tree case, the lower bound for the speed on general nonamenable graphs depends on an implicit constant cpc_{p} from Lemma 4.6, which in turn relies on the specific graph GG. It is interesting to investigate whether the speed admits a uniform lower bound depending only on pp, the degree dd, and the Cheeger constant, i.e., whether there exists a positive function ff such that for all p>pcp>p_{c},

infμ>0v(μ,p,G)f(p,d,Φ(G))>0.\inf_{\mu>0}v(\mu,p,G)\geq f(p,d,\Phi(G))>0.

If such a function exists, it would be desirable to obtain more quantitative estimates for it when p>pcp>p_{c}. In particular, we will be interested in determining the exact exponent of ppcp-p_{c} in v(μ,p)v(\mu,p) as ppcp\downarrow p_{c}.

Finally, an important open question concerns the monotonicity of the speed v(p,μ)v(p,\mu) as a function of μ\mu and pp for all transitive graphs. Currently, it is unknown whether the speed exhibits monotonicity with respect to either μ\mu or pp, but we conjecture that both forms of monotonicity hold. In particular, if the monotonicity in μ\mu is valid, then the speed of the random walk on static percolation should always give a lower bound for the speed on dynamical percolation. This open question was also proposed at the conclusion of [48] regarding the random walk on dynamical percolation of d{\mathbb{Z}}^{d} lattices.

Acknowledgments

Chenlin Gu is supported by the National Key R&D Program of China (No. 2023YFA1010400) and National Natural Science Foundation of China (12301166). Jianping Jiang is supported by National Natural Science Foundation of China (12271284 and 12226001). Hao Wu is supported by Beijing Natural Science Foundation (JQ20001). Fan Yang is supported by the National Key R&D Program of China (No. 2023YFA1010400).

References

  • [1] M. Aizenman and D. J. Barsky. Sharpness of the phase transition in percolation models. Communications in Mathematical Physics, 108(3):489–526, 1 1987.
  • [2] M. Aizenman and C. M. Newman. Tree graph inequalities and critical behavior in percolation models. Journal of Statistical Physics, 36(1-2):107–143, 1984.
  • [3] S. Andres, N. Gantert, D. Schmid, and P. Sousi. Biased random walk on dynamical percolation. arXiv:2301.05208.
  • [4] L. Avena, H. Güldaş, R. van der Hofstad, and F. den Hollander. Mixing times of random walks on dynamic configuration models. The Annals of Applied Probability, 28(4):1977–2002, 2018.
  • [5] L. Avena, H. Güldaş, R. van der Hofstad, and F. den Hollander. Random walks on dynamic configuration models: A trichotomy. Stochastic Processes and their Applications, 129(9):3360–3375, 2019.
  • [6] L. Avena, H. Güldaş, R. van der Hofstad, F. den Hollander, and O. Nagy. Linking the mixing times of random walks on static and dynamic random graphs. Stochastic Processes and their Applications, 153:145–182, 2022.
  • [7] A. Avez. Croissance des groupes de type fini et fonctions harmoniques. In J.-P. Conze and M. S. Keane, editors, Théorie Ergodique, pages 35–49, Berlin, Heidelberg, 1976. Springer Berlin Heidelberg.
  • [8] C. Avin, M. Koucký, and Z. Lotker. Cover time and mixing time of random walks on dynamic graphs. Random Structures & Algorithms, 52(4):576–596, 2018.
  • [9] M. T. Barlow and T. Kumagai. Random walk on the incipient infinite cluster on trees. Illinois Journal of Mathematics, 50(1-4):33–65, 2006.
  • [10] I. Benjamini, R. Lyons, Y. Peres, and O. Schramm. Critical percolation on any nonamenable group has no infinite clusters. The Annals of Probability, 27(3):1347–1356, 1999.
  • [11] I. Benjamini, R. Lyons, Y. Peres, and O. Schramm. Group-invariant percolation on graphs. Geometric and Functional Analysis, 9(1):29–66, 1999.
  • [12] I. Benjamini, R. Lyons, Y. Peres, and O. Schramm. Uniform spanning forests. The Annals of Probability, 29(1):1–65, 2 2001.
  • [13] I. Benjamini, R. Lyons, and O. Schramm. Percolation perturbations in potential theory and random walks. In Random Walks and Discrete Potential Theory (Cortona, 1997), Symposium on Mathematics, XXXIX, page 56–84, Cambridge, 1999. Cambridge University Press.
  • [14] I. Benjamini and O. Schramm. Percolation beyond d\mathbb{Z}^{d}, many questions and a few answers. Electronic Communications in Probability, 1:no. 8, 71–82, 1996.
  • [15] R. M. Burton and M. Keane. Density and uniqueness in percolation. Communications in Mathematical Physics, 121(3):501–505, 1989.
  • [16] L. Cai, T. Sauerwald, and L. Zanetti. Random walks on randomly evolving graphs. In A. W. Richa and C. Scheideler, editors, Structural Information and Communication Complexity, pages 111–128, Cham, 2020. Springer International Publishing.
  • [17] P. Caputo and M. Quattropani. Mixing time trichotomy in regenerating dynamic digraphs. Stochastic Processes and their Applications, 137:222–251, 2021.
  • [18] P. Diaconis and J. A. Fill. Strong stationary times via a new form of duality. The Annals of Probability, 18(4):1483–1522, 1990.
  • [19] H. Duminil-Copin, S. Goswami, A. Raoufi, F. Severo, and A. Yadin. Existence of phase transition for percolation using the Gaussian free field. Duke Mathematical Journal, 169(18):3539–3563, 2020.
  • [20] H. Duminil-Copin and V. Tassion. A new proof of the sharpness of the phase transition for Bernoulli percolation and the Ising model. Communications in Mathematical Physics, 343(2):725–745, 2016.
  • [21] R. Durrett. Probability: theory and examples, volume 49. Cambridge University Press, 2019.
  • [22] D. Figueiredo, G. Iacobelli, R. Oliveira, B. Reed, and R. Ribeiro. On a random walk that grows its own tree. Electronic Journal of Probability, 26(none):1–40, 2021.
  • [23] D. Figueiredo, P. Nain, B. Ribeiro, E. de Souza e Silva, and D. Towsley. Characterizing continuous time random walks on time varying graphs. arXiv:1112.5762.
  • [24] C. Gu, J. Jiang, Y. Peres, Z. Shi, H. Wu, and F. Yang. Random walk on dynamical percolation in Euclidean lattices: separating critical and supercritical regimes. In preparation.
  • [25] O. Häggström, Y. Peres, and J. E. Steif. Dynamical percolation. Annales de l’Institut Henri Poincaré (B) Probabilités et Statistiques, 33(4):497–528, 1997.
  • [26] J. Hermon and T. Hutchcroft. Supercritical percolation on nonamenable graphs: isoperimetry, analyticity, and exponential decay of the cluster size distribution. Inventiones Mathematicae, 224(2):445–486, 2021.
  • [27] J. Hermon and P. Sousi. A comparison principle for random walk on dynamical percolation. The Annals of Probability, 48(6):2952 – 2987, 2020.
  • [28] M. Heydenreich and R. van der Hofstad. Progress in high-dimensional percolation and random graphs. CRM Short Courses. Springer, Cham; Centre de Recherches Mathématiques, Montreal, QC, 2017.
  • [29] T. Hutchcroft. Percolation on hyperbolic graphs. Geometric and Functional Analysis, 29(3):766–810, 2019.
  • [30] T. Hutchcroft. The L2L^{2} boundedness condition in nonamenable percolation. Electronic Journal of Probability, 25:Paper No. 127, 27, 2020.
  • [31] T. Hutchcroft. Nonuniqueness and mean-field criticality for percolation on nonunimodular transitive graphs. Journal of the American Mathematical Society, 33(4):1101–1165, 2020.
  • [32] T. Hutchcroft. Slightly supercritical percolation on non-amenable graphs I: The distribution of finite clusters. Proceedings of the London Mathematical Society. Third Series, 125(4):968–1013, 2022.
  • [33] G. Iacobelli and D. R. Figueiredo. Edge-attractor random walks on dynamic networks. Journal of Complex Networks, 5(1):84–110, 06 2016.
  • [34] G. Iacobelli, R. Ribeiro, G. Valle, and L. Zuaznábar. Tree builder random walk: Recurrence, transience and ballisticity. Bernoulli, 28(1):150–180, 2022.
  • [35] V. A. Kaimanovich and A. M. Vershik. Random walks on discrete groups: boundary and entropy. The Annals of Probability, 11(3):457–490, 1983.
  • [36] H. Kesten. Subdiffusive behavior of random walk on a random cluster. Annales de l’Institut Henri Poincaré (B) Probabilités et Statistiques, 22(4):425–487, 1986.
  • [37] G. Kozma and A. Nachmias. The Alexander-Orbach conjecture holds in high dimensions. Inventiones Mathematicae, 178(3):635–654, 2009.
  • [38] A. Lelli and A. Stauffer. Mixing time of random walk on dynamical random cluster. arXiv:2209.03227.
  • [39] D. Levin and Y. Peres. Markov Chains and Mixing Times. American Mathematical Society, Providence, RI, 2017.
  • [40] R. Lyons. Random walks and the growth of groups. Comptes Rendus de l’Académie des Sciences. Série I. Mathématique, 320(11):1361–1366, 1995.
  • [41] R. Lyons, R. Pemantle, and Y. Peres. Conceptual proofs of LlogLL\log L criteria for mean behavior of branching processes. The Annals of Probability, 23(3):1125–1138, 1995.
  • [42] R. Lyons and Y. Peres. Probability on Trees and Networks. Cambridge University Press, Cambridge, 2016.
  • [43] R. Lyons and O. Schramm. Indistinguishability of percolation clusters. The Annals of Probability, 27(4):1809–1836, 1999.
  • [44] M. V. Menshikov. Coincidence of critical points in percolation problems. Dokl. Akad. Nauk SSSR, 288(6):1308–1311, 1986.
  • [45] B. Morris and Y. Peres. Evolving sets, mixing and heat kernel bounds. Probability Theory and Related Fields, 133(2):245–266, 2005.
  • [46] Y. Peres, P. Sousi, and J. E. Steif. Quenched exit times for random walk on dynamical percolation. Markov Processes and Related Fields, 24(5):715–732, 2018.
  • [47] Y. Peres, P. Sousi, and J. E. Steif. Mixing time for random walk on supercritical dynamical percolation. Probability Theory and Related Fields, 176(3):809–849, 2020.
  • [48] Y. Peres, A. Stauffer, and J. E. Steif. Random walks on dynamical percolation: mixing times, mean squared displacement and hitting times. Probability Theory and Related Fields, 162(3):487–530, 2015.
  • [49] T. Sauerwald and L. Zanetti. Random walks on dynamic graphs: Mixing times, hitting times, and return probabilities. arXiv:1903.01342.
  • [50] R. H. Schonmann. Multiplicity of phase transitions and mean-field criticality on highly non-amenable graphs. Communications in Mathematical Physics, 219(2):271–322, 2001.
  • [51] P. Sousi and S. Thomas. Cutoff for random walk on dynamical Erdős-Rényi graph. Annales de l’Institut Henri Poincaré (B) Probabilités et Statistiques, 56(4):2745 – 2773, 2020.
  • [52] A. Teixeira. Percolation and local isoperimetric inequalities. Probability Theory and Related Fields, 165(3-4):963–984, 2016.
  • [53] D. B. Wilson. Generating random spanning trees more quickly than the cover time. In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, STOC ’96, page 296–303, New York, NY, USA, 1996. Association for Computing Machinery.