
Graphical constructions of simple exclusion processes with applications to random environments

Alessandra Faggionato. Department of Mathematics, University La Sapienza, P.le Aldo Moro 2, 00185 Rome, Italy. Email: faggiona@mat.uniroma1.it
Abstract.

We show that the symmetric simple exclusion process (SSEP) on a countable set is well defined by the stirring graphical construction as soon as the dynamics of a single particle is. The resulting process is Feller, its Markov generator is derived on local functions, and duality holds at the level of the empirical density field. We also provide a general criterion assuring that local functions form a core for the generator. We then move to the simple exclusion process (SEP) and show that the graphical construction leads to a well defined Feller process under a percolation-type assumption corresponding to subcriticality in a percolation model with random inhomogeneous parameters. We derive its Markov generator on local functions which, under an additional general assumption, form a core for the generator. We discuss applications of the above results to SSEPs and SEPs in random environments, where the standard assumptions used to construct the process and investigate its basic properties (by the analytic approach or by graphical constructions) are typically violated. As detailed in [14], our results for the SSEP also allow one to extend the quenched hydrodynamic limit in path space obtained in [11] by removing Assumption (SEP) used there.

Keywords: Feller process, Markov generator, exclusion process, graphical construction, duality, empirical density field, random environment.

MSC2020 Subject Classification: 60K35, 60K37, 60G55, 82D30

1. Introduction

Given a countable set $S$ and non-negative numbers $c_{x,y}$ associated to $(x,y)\in S\times S$ with $x\not=y$, the simple exclusion process (SEP) with rates $c_{x,y}$ is the interacting particle system on $S$ roughly described as follows. At most one particle can lie on a site and each particle, when sitting at site $x$, attempts to jump to a site $y$ with probability rate $c_{x,y}$; the jump is then allowed only if the site $y$ is empty. One can think of a family of continuous-time random walks with jump probability rates $c_{x,y}$, apart from the hard-core interaction. The SEP is called symmetric (and will be denoted by SSEP below) when $c_{x,y}=c_{y,x}$. Of course, conditions have to be imposed to have a well defined process for all times. For example, when the particle system is given by a single particle, the random walk with jump probability rates $c_{x,y}$ has to be well defined for all times: the holding time parameter $c_{x}:=\sum_{y\in S:y\not=x}c_{x,y}$ is finite for all $x\in S$ and a.s. no explosion occurs (whatever the starting site is).

The analytic approach in [20] assures that the SEP is well defined and is a Feller process with state space $\{0,1\}^{S}$ if

\sup_{x\in S}\sum_{y\in S:y\not=x}\max\{c_{x,y},c_{y,x}\}<+\infty\,.   (1)

This follows by combining the Hille-Yosida Theorem (cf. [20, Theorem 2.9, Chapter 1]) with [20, Theorem 3.9, Chapter 1]. Indeed, conditions (3.3) and (3.8) in [20, Theorem 3.9, Chapter 1] are both equivalent to (1), as derived in Appendix A. For the SSEP, (1) can be rewritten as $\sup_{x\in S}c_{x}<+\infty$. It turns out that (1) is a too restrictive assumption when dealing with SEPs or SSEPs in a random environment, i.e. with random rates $c_{x,y}$ and possibly with a random set $S$. In this case we will write $c_{x,y}(\omega)$ and $S(\omega)$ in order to stress the dependence on the environment $\omega$. For example one could consider the SSEP on $S={\mathbb{Z}}^{d}$ with nearest-neighbor jumps and i.i.d. unbounded jump probability rates associated to the undirected edges. Or, starting with a simple point process $\omega:=\{x_{i}\}$ on ${\mathbb{R}}^{d}$ (e.g. a Poisson point process), one could consider the SSEP on $S(\omega):=\omega$ with jump probability rates of the form $c_{x,y}=g(|x-y|)$ for all $x\not=y$ in $\omega$ and for a fixed decaying function $g$ (a simple point process on ${\mathbb{R}}^{d}$ is a random locally finite subset of ${\mathbb{R}}^{d}$ [10]). One could also consider Mott variable range hopping (v.r.h.), without any mean-field approximation, which describes the electron transport in amorphous solids such as doped semiconductors in the regime of strong Anderson localization at low temperature [1, 22, 23]. Starting with a marked simple point process $\omega:=\{(x_{i},E_{i})\}$, where $x_{i}\in{\mathbb{R}}^{d}$ and $E_{i}\in[-A,A]$ (by definition of marked simple point process, $\{x_{i}\}$ is a simple point process and $E_{i}$ is called the mark of $x_{i}$ [10]), Mott v.r.h. corresponds to the SEP on $S(\omega):=\{x_{i}\}$ where, for $i\not=j$,

c_{x_{i},x_{j}}(\omega):=\exp\{-|x_{i}-x_{j}|-\max\{E_{j}-E_{i},0\}\}\,.

The list of examples could be made much longer (cf. e.g. [11, 13, 14] for others). The above models anyway show that, in some contexts with disorder, condition (1) is typically not satisfied (i.e. for almost all $\omega$ (1) is violated with $S=S(\omega)$ and $c_{x,y}=c_{x,y}(\omega)$). On the other hand, it is natural to ask whether a.s. the above SEPs exist and are Feller processes, how their Markov generator behaves on good (e.g. local) functions, when local functions form a core, and so on.

To address the above questions we leave the analytic approach of [20] and move to graphical constructions. The graphical approach has a long tradition in interacting particle systems, in particular also for the investigation of attractiveness and duality. Graphical constructions of SEP and SSEP are discussed e.g. in [18, 19] (also for more general exclusion processes), in [25, Chapter 2], and briefly in [20, p. 383, Chapter VIII.2] for SEP and [20, p. 399, Chapter VIII.6] for SSEP as stirring processes. For the graphical constructions of other particle systems we mention in particular [20, Chapter III.6] and [7] (see also the references in [20, Chapter III.7]). On the other hand, the above references again make assumptions not compatible with many applications to particles in a random environment (e.g. finite range jumps or rates $c_{x,y}$ of the form $p(x,y)$, $p$ being a probability kernel).

Let us describe our contributions. In Section 3 we consider the SSEP on a countable set $S$ (of course, the interesting case is $S$ infinite). Under the only assumption that the continuous time random walk on $S$ with jump probability rates $c_{x,y}$ is well defined for all times $t\geq 0$ (called Assumption SSEP below), we show that the stirring graphical construction leads to a well defined Feller process, with the right form of the generator on local functions and on other good functions (see Propositions 3.2, 3.3, 3.4, 3.5 in Section 3). We also provide a general criterion assuring that local functions form a core for the generator (see Proposition 3.6). Due to its relevance for the study of hydrodynamics and fluctuations of the empirical field for SSEPs in random environments (see e.g. [5, 11, 16]), we also investigate duality properties of the SSEP at the level of the empirical field (see Section 3.3). Finally, in Section 4 we discuss some applications to SSEPs in a random environment. We point out that the construction of the SSEP on a countable set $S$, when the single random walk is well defined, can be obtained also by duality and Kolmogorov's theorem as in [4, Appendix A]. On the other hand, the analysis there is limited to the existence of the stochastic process.

We then move to the SEP. Under what we call Assumption SEP, which is inspired by Harris' percolation argument [9, 18], in Section 5 we show that the graphical construction leads to a well defined Feller process and derive explicitly its generator on local functions and other good functions (see Propositions 5.8, 5.10, 5.11, 5.12). The analysis generalizes the one in [25, Chapter 2] (done for $S={\mathbb{Z}}^{d}$ and $c_{x,y}$ of the form $p(x-y)$ for a finite range probability $p(\cdot)$ on ${\mathbb{Z}}^{d}$). Checking the validity of Assumption SEP consists of proving subcriticality in suitable percolation models with random inhomogeneous parameters. Also for the SEP we provide a general criterion assuring that local functions form a core for the generator (see Propositions 5.14, 5.16). In Section 6 we discuss some applications to SEPs in a random environment. We point out that in [11] we assumed what we called there "Assumption (SEP)", which corresponds to the validity of the present Assumption SEP for a.a. realizations of the environment. In particular, in [11] we checked its validity for some classes of SEPs in a random environment. In Section 6 we recall these results in the present language. As a byproduct, we also derive the existence (and several properties of its Markov semigroup on continuous functions) of Mott v.r.h. on a marked Poisson point process. We point out that the SEPs treated in [11] are indeed SSEPs. As a byproduct of our results in Section 3, the quenched hydrodynamic limit derived in [11] remains valid also after removing Assumption (SEP) there. This application will be detailed in [14].

Outline of the paper. Section 2 is devoted to notation and preliminaries. In Section 3 we describe the stirring graphical construction and our main results for the SSEP (the analogous results for the SEP are given in Section 5). In Section 4 we discuss some applications to SSEPs in a random environment (the analogous applications for the SEP are given in Section 6). The other sections and Appendix A are devoted to proofs.

2. Notation and preliminaries

Given a topological space $\mathcal{W}$ we denote by $\mathcal{B}(\mathcal{W})$ the $\sigma$-algebra of its Borel subsets. We think of $\mathcal{W}$ as a measurable space with $\sigma$-algebra of measurable subsets given by $\mathcal{B}(\mathcal{W})$.

Given a metric space $\mathcal{W}$ with metric $d$ satisfying $d(x,y)\leq 1$ for all $x,y\in\mathcal{W}$ (this can be assumed at the cost of replacing $d$ by $d\wedge 1$), $D_{\mathcal{W}}:=D({\mathbb{R}}_{+},\mathcal{W})$ is the space of càdlàg paths from ${\mathbb{R}}_{+}:=[0,+\infty)$ to $\mathcal{W}$ endowed with the Skorohod distance associated to $d$. We denote this distance by $d_{\rm S}$ and for completeness we recall its definition (see [8, Chapter 3]). Let $\Lambda$ be the family of strictly increasing bijective functions $\lambda:{\mathbb{R}}_{+}\to{\mathbb{R}}_{+}$ such that

\gamma(\lambda):=\sup_{s>t\geq 0}\Big|\log\frac{\lambda(s)-\lambda(t)}{s-t}\Big|<+\infty\,.

Then, given $\zeta,\xi\in D_{\mathcal{W}}$, the distance $d_{\rm S}(\zeta,\xi)$ is defined as

d_{\rm S}(\zeta,\xi):=\inf_{\lambda\in\Lambda}\left\{\gamma(\lambda)\lor\int_{0}^{\infty}e^{-u}\Big[\sup_{t\geq 0}d\Big(\zeta(t\wedge u),\xi(\lambda(t)\wedge u)\Big)\Big]du\right\}\,.

Due to [8, Proposition 5.3, Chapter 3], $\zeta_{n}\to\zeta$ in $D_{\mathcal{W}}$ if and only if there exists a sequence $\lambda_{n}\in\Lambda$ such that

\gamma(\lambda_{n})\to 0 \qquad\text{ and }\qquad \sup_{0\leq t\leq T}d\big(\zeta_{n}(t)\,,\,\zeta(\lambda_{n}(t))\big)\to 0\quad\text{for all }T>0\,.   (2)

If $\mathcal{W}$ is separable, then the Borel $\sigma$-algebra $\mathcal{B}(D_{\mathcal{W}})$ of $D_{\mathcal{W}}$ coincides with the $\sigma$-algebra generated by the coordinate maps $\zeta\mapsto\zeta(t)$, $t\in{\mathbb{R}}_{+}$ (see [8, Proposition 7.1, Chapter 3]). If $\mathcal{W}$ is Polish (i.e. it is a complete separable metric space), then also $D_{\mathcal{W}}$ is Polish (see [8, Theorem 5.6, Chapter 3]).

We now discuss two examples, frequently used in the rest of the paper. In what follows we take ${\mathbb{N}}=\{0,1,2,\dots\}$ endowed with the discrete topology and in particular with distance $\mathds{1}(x\not=y)$ between $x,y\in{\mathbb{N}}$. $D_{\mathbb{N}}$ will be endowed with the Skorohod metric, denoted in this case by $\mathfrak{d}$. Since ${\mathbb{N}}$ is separable, $\mathcal{B}(D_{\mathbb{N}})$ is generated by the coordinate maps $D_{\mathbb{N}}\ni\xi\mapsto\xi(t)\in{\mathbb{N}}$, $t\in{\mathbb{R}}_{+}$.

Given a countably infinite set $S$, we fix once and for all an enumeration

S=\{s_{n}\,:\,n=1,2,\dots\}   (3)

of $S$ and we endow $\{0,1\}^{S}$ with the metric

d(\xi,\xi^{\prime}):=\sum_{n\geq 1}2^{-n}|\xi(s_{n})-\xi^{\prime}(s_{n})|\,.   (4)

This metric induces the product topology on $\{0,1\}^{S}$ ($\{0,1\}$ having the discrete topology). We point out that $\{0,1\}^{S}$ is a Polish space. Indeed, $\{0,1\}^{S}$ is a compact metric space and therefore it is also complete and separable. As a consequence $D_{\{0,1\}^{S}}$ is Polish as well. Given a path $\xi\in D_{\{0,1\}^{S}}$ and a time $t\geq 0$, we will sometimes write $\xi_{t}$ instead of $\xi(t)$. In particular, $\xi_{t}(x)$ will be the value at $x\in S$ of the configuration $\xi_{t}$. Moreover, we will usually write $\xi_{\cdot}$ instead of $\xi$ to denote a generic path in $D_{\{0,1\}^{S}}$.

Since $\{0,1\}^{S}$ is separable, the $\sigma$-algebra $\mathcal{B}(D_{\{0,1\}^{S}})$ is generated by the coordinate maps $D_{\{0,1\}^{S}}\ni\xi_{\cdot}\mapsto\xi_{t}\in\{0,1\}^{S}$, $t\in{\mathbb{R}}_{+}$. Since $\mathcal{B}(\{0,1\}^{S})$ is generated by the coordinate maps $\{0,1\}^{S}\ni\xi\mapsto\xi(x)\in\{0,1\}$, we conclude that $\mathcal{B}(D_{\{0,1\}^{S}})$ is generated by the maps $D_{\{0,1\}^{S}}\ni\xi_{\cdot}\mapsto\xi_{t}(x)\in\{0,1\}$ as $t,x$ vary in ${\mathbb{R}}_{+}$ and $S$, respectively. This will be used in what follows.

3. Graphical construction, Markov generator and duality of SSEP

Let $S=\{s_{n}:n=1,2,\dots\}$ be an infinite countable set. We denote by $\mathcal{E}_{S}$ the family of unordered pairs of elements of $S$, i.e.

\mathcal{E}_{S}:=\{\{x,y\}\,:\,x\not=y,\;x,y\in S\}\,.   (5)

To each pair $\{x,y\}\in\mathcal{E}_{S}$ we associate a number $c_{\{x,y\}}\in[0,+\infty)$. To simplify the notation, we write $c_{x,y}$ instead of $c_{\{x,y\}}$. Note that $c_{x,y}=c_{y,x}$ for all $x\not=y$ in $S$. Moreover, to simplify the formulas below, we set

c_{x,x}:=0\qquad\forall x\in S\,.

The following assumption, in force throughout this section, will assure that the graphical construction of the symmetric simple exclusion process (SSEP) as a stirring process is well posed.

Assumption SSEP. We assume that the following two conditions are satisfied:

  • (C1)

    For all $x\in S$ it holds $c_{x}:=\sum_{y\in S}c_{x,y}\in[0,+\infty)$;

  • (C2)

    For each $x\in S$ the continuous-time random walk on $S$ starting at $x$ and with jump probability rates $(c_{y,z}\,:\,\{y,z\}\in\mathcal{E}_{S})$ a.s. has no explosion.

When Conditions (C1) and (C2) are satisfied, we say that the random walk $(X_{t})_{t\geq 0}$ on $S$ with jump rates (or conductances) $c_{x,y}$ is well defined (for all times). This random walk (also called the conductance model, cf. [3]) is built in terms of waiting times and jumps as follows. Having arrived at (or starting at) $x$, the random walk waits at $x$ an exponential time with mean $1/c_{x}\in(0,+\infty]$. If $c_{x}=0$, this waiting time is infinite. If $c_{x}>0$, once its waiting time is over, the random walk jumps to another site $y$ chosen with probability $c_{x,y}/c_{x}$ (independently from the rest). Condition (C2) says that, a.s., the jump times in the above construction have no accumulation point and therefore the random walk is well defined for all times.
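To make the waiting-time/jump construction above concrete, here is a minimal Python sketch simulating the conductance model on a finite state space (the finite truncation, the toy rate values and all function names are illustrative assumptions, not part of the paper):

import random

def simulate_conductance_walk(c, x0, t_max, rng=random.Random(0)):
    """Simulate the random walk with symmetric conductances c[frozenset({x, y})] up to time t_max.
    Returns the list of (jump_time, site) visited, starting from (0.0, x0)."""
    def c_tot(x):
        # holding-time parameter c_x = sum_y c_{x,y}
        return sum(rate for pair, rate in c.items() if x in pair)

    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        cx = c_tot(x)
        if cx == 0.0:                  # absorbing site: infinite waiting time
            return path
        t += rng.expovariate(cx)       # exponential waiting time with mean 1/c_x
        if t > t_max:
            return path
        # jump to y with probability c_{x,y} / c_x
        neighbours = [(next(iter(pair - {x})), rate)
                      for pair, rate in c.items() if x in pair and rate > 0]
        u, acc = rng.random() * cx, 0.0
        for y, rate in neighbours:
            acc += rate
            if u <= acc:
                x = y
                break
        path.append((t, x))

# toy example: nearest-neighbour conductances on {0, ..., 9}
c = {frozenset({i, i + 1}): 1.0 + i for i in range(9)}
print(simulate_conductance_walk(c, x0=0, t_max=5.0))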

3.1. Graphical construction of the SSEP

We consider the product space $D_{\mathbb{N}}^{\mathcal{E}_{S}}$ endowed with the product topology (recall that $D_{\mathbb{N}}=D({\mathbb{R}}_{+},{\mathbb{N}})$ is endowed with the Skorohod metric $\mathfrak{d}$, see Section 2). We write $\mathcal{K}=(\mathcal{K}_{x,y})_{\{x,y\}\in\mathcal{E}_{S}}$ for a generic element of $D_{\mathbb{N}}^{\mathcal{E}_{S}}$. The product topology on $D_{\mathbb{N}}^{\mathcal{E}_{S}}$ is induced by the metric

d(\mathcal{K},\mathcal{K}^{\prime}):=\sum_{i,j:\,1\leq i<j}2^{-(i+j)}\min\left\{1,\mathfrak{d}(\mathcal{K}_{s_{i},s_{j}},\mathcal{K}_{s_{i},s_{j}}^{\prime})\right\}\,.   (6)
Definition 3.1 (Probability measure $\mathbb{P}$).

We associate to each pair $\{x,y\}\in\mathcal{E}_{S}$ a Poisson process $(N_{x,y}(t))_{t\geq 0}$ with intensity $c_{x,y}$ and with $N_{x,y}(0)=0$, such that the $N_{x,y}(\cdot)$'s are independent processes when varying the pair $\{x,y\}$. We define $\mathbb{P}$ as the law on $D_{\mathbb{N}}^{\mathcal{E}_{S}}$ of the random object $(N_{x,y}(\cdot))_{\{x,y\}\in\mathcal{E}_{S}}$ and we denote by $\mathbb{E}[\cdot]$ the expectation associated to $\mathbb{P}$.

We stress that, since the pairs $\{x,y\}$ are unordered, we have $\mathcal{K}_{x,y}=\mathcal{K}_{y,x}$ and $N_{x,y}=N_{y,x}$.

We briefly recall the graphical construction of the SSEP as a stirring process (see also Figure 1). The detailed description and the proof that it is a.s. well posed will be provided in Section 7.

Given $\mathcal{K}\in D_{\mathbb{N}}^{\mathcal{E}_{S}}$ and $x\in S$ we define $X_{t}^{x}[\mathcal{K}]$ as the output of the following algorithm (see Definition 7.4 for details). Start at $x$ and consider the set of all jump times not exceeding $t$ of the paths of the form $\mathcal{K}_{x,y}(\cdot)$ with $y\in S$. If this set is empty, then stop and define $X_{t}^{x}[\mathcal{K}]:=x$; otherwise take the maximum value $t_{1}$ in this set. If $t_{1}$ is the jump time of $\mathcal{K}_{x,x_{1}}(\cdot)$, then move to $x_{1}$ and consider now the set of all jump times strictly smaller than $t_{1}$ of the paths of the form $\mathcal{K}_{x_{1},y}(\cdot)$ with $y\in S$. If this set is empty, then stop and define $X_{t}^{x}[\mathcal{K}]:=x_{1}$; otherwise take the maximum value $t_{2}$ in this set and repeat the above step. Iterate this procedure until the set of jump times is empty. Then the algorithm stops and its output $X_{t}^{x}[\mathcal{K}]$ is the last site of $S$ visited by the algorithm. Roughly, to determine $X_{t}^{x}[\mathcal{K}]$ it is enough to follow the path in the graph of Figure 1, starting at $x$ at time $t$, going back in time and crossing a horizontal edge every time one appears. Then $X_{t}^{x}[\mathcal{K}]$ is the site visited by the path at time $0$. In Section 7 we will prove that the above construction is well posed for all $x\in S$ and $t\geq 0$ if $\mathcal{K}\in\Gamma_{*}$, where $\Gamma_{*}$ is a suitable Borel subset of $D_{\mathbb{N}}^{\mathcal{E}_{S}}$ with $\mathbb{P}(\Gamma_{*})=1$.

Figure 1. Graphical construction of $X_{t}^{x}[\mathcal{K}]$ when $S={\mathbb{Z}}$ and $c_{y,z}>0$ if and only if $|y-z|=1$. $\mathcal{K}$ is typical (jumps occur only at edges $\{y,z\}$ with $|y-z|=1$). Vertical segments associated to the edge $\{y,z\}$ correspond to the jump times of $\mathcal{K}_{y,z}(\cdot)$. The vertexes $x_{1},x_{2},\dots$ built in the algorithm are the ones visited by the bold path moving from time $t$ to time $0$. At the end one gets $X_{t}^{x}[\mathcal{K}]=x+1$.

Having defined $X^{x}_{t}[\mathcal{K}]$, given $\sigma\in\{0,1\}^{S}$ we set

\eta^{\sigma}_{t}[\mathcal{K}](x):=\sigma\bigl(X^{x}_{t}[\mathcal{K}]\bigr)\,.

Then $\eta^{\sigma}_{t}[\mathcal{K}]\in\{0,1\}^{S}$. In Lemma 7.12 in Section 7 we show that $\eta^{\sigma}_{\cdot}[\mathcal{K}]=\bigl(\eta^{\sigma}_{t}[\mathcal{K}]\bigr)_{t\geq 0}\in D_{\{0,1\}^{S}}$ and that the map $\Gamma_{*}\ni\mathcal{K}\mapsto\eta^{\sigma}_{\cdot}[\mathcal{K}]\in D_{\{0,1\}^{S}}$ is measurable in $\mathcal{K}$.
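As an illustration of the backward-tracing algorithm and of the definition $\eta^{\sigma}_{t}[\mathcal{K}](x):=\sigma(X^{x}_{t}[\mathcal{K}])$, the following Python sketch computes $X_{t}^{x}[\mathcal{K}]$ and $\eta^{\sigma}_{t}[\mathcal{K}]$ on a finite set from prescribed jump times (the finite setting and all names are illustrative assumptions):

def X_t(x, t, K):
    """Backward tracing: K maps frozenset({y, z}) to the list of jump times of K_{y,z}.
    Returns X_t^x[K]."""
    cur, cutoff = x, t
    strict = False            # first step uses jump times <= t; later steps use strictly smaller times
    while True:
        candidates = []
        for pair, times in K.items():
            if cur in pair:
                ok = [s for s in times if (s < cutoff if strict else s <= cutoff)]
                if ok:
                    other = next(iter(pair - {cur}))
                    candidates.append((max(ok), other))
        if not candidates:
            return cur
        cutoff, cur = max(candidates)   # largest admissible jump time and the site across that edge
        strict = True

def eta_t(sigma, t, K):
    """Stirring configuration at time t: eta_t^sigma[K](x) = sigma(X_t^x[K])."""
    return {x: sigma[X_t(x, t, K)] for x in sigma}

# toy example on S = {0, 1, 2} with jump times on the two nearest-neighbour pairs
K = {frozenset({0, 1}): [0.4, 1.3], frozenset({1, 2}): [0.9]}
sigma = {0: 1, 1: 0, 2: 0}
print(eta_t(sigma, t=1.0, K=K))   # the particle initially at 0 has been stirred to site 2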

We write $\mathcal{F}$ for the $\sigma$-algebra $\mathcal{B}(D_{\{0,1\}^{S}})$ of Borel subsets of $D_{\{0,1\}^{S}}$. Since $\{0,1\}^{S}$ is separable, $\mathcal{F}$ is generated by the coordinate maps $D_{\{0,1\}^{S}}\ni\eta_{\cdot}\mapsto\eta_{t}\in\{0,1\}^{S}$, $t\geq 0$ (see Section 2). We also define $\mathcal{F}_{t}$ as the $\sigma$-algebra generated by the coordinate maps $D_{\{0,1\}^{S}}\ni\eta_{\cdot}\mapsto\eta_{s}\in\{0,1\}^{S}$ with $s\in[0,t]$. Then $(D_{\{0,1\}^{S}},(\mathcal{F}_{t})_{t\geq 0},\mathcal{F})$ is a filtered measurable space. For each $\sigma\in\{0,1\}^{S}$ we define $\mathbb{P}^{\sigma}$ as the probability measure on the above filtered measurable space given by

\mathbb{P}^{\sigma}(A):=\mathbb{P}(\mathcal{K}\in\Gamma_{*}\,:\,\eta^{\sigma}_{\cdot}[\mathcal{K}]\in A)\,,\qquad A\in\mathcal{F}\,.

In what follows, we write $\mathbb{E}^{\sigma}$ for the expectation w.r.t. $\mathbb{P}^{\sigma}$. Similarly to [25, Theorem 2.4] we get:

Proposition 3.2 (Construction of SSEP).

The family $\big\{\mathbb{P}^{\sigma}:\sigma\in\{0,1\}^{S}\big\}$ of probability measures on the filtered measurable space $(D_{\{0,1\}^{S}},(\mathcal{F}_{t})_{t\geq 0},\mathcal{F})$ is a Markov process (called the symmetric simple exclusion process with conductances $c_{x,y}$), i.e.

  • (i)

    $\mathbb{P}^{\sigma}(\eta_{0}=\sigma)=1$ for all $\sigma\in\{0,1\}^{S}$;

  • (ii)

    for any $A\in\mathcal{F}$ the function $\{0,1\}^{S}\ni\sigma\mapsto\mathbb{P}^{\sigma}(A)\in[0,1]$ is measurable;

  • (iii)

    for any $\sigma\in\{0,1\}^{S}$ and $A\in\mathcal{F}$ it holds $\mathbb{P}^{\sigma}(\eta_{t+\cdot}\in A\,|\,\mathcal{F}_{t})=\mathbb{P}^{\eta_{t}}(A)$ $\mathbb{P}^{\sigma}$-a.s.

For the proof of the above proposition see Section 7.1.

3.2. Markov semigroup and infinitesimal generator

We write $C(\{0,1\}^{S})$ for the space of real continuous functions on $\{0,1\}^{S}$ endowed with the uniform norm.

Proposition 3.3 (Feller property).

Given $f\in C(\{0,1\}^{S})$ and $t\geq 0$, the map $S(t)f:\{0,1\}^{S}\to{\mathbb{R}}$ defined as $\big(S(t)f\big)(\sigma):={\mathbb{E}}\left[f\big(\eta^{\sigma}_{t}[\mathcal{K}]\big)\right]=\int d\mathbb{P}^{\sigma}(\eta_{\cdot})f(\eta_{t})$ belongs to $C(\{0,1\}^{S})$. In particular, the SSEP with conductances $c_{x,y}$ is a Feller process.

For the proof of the above proposition see Section 7.2.

Due to the Markov property in Proposition 3.2, $(S(t))_{t\geq 0}$ is a semigroup on $C(\{0,1\}^{S})$. Moreover, by using dominated convergence and the fact that $t\mapsto\eta^{\sigma}_{t}[\mathcal{K}]$ is right-continuous for $\mathcal{K}\in\Gamma_{*}$ (cf. Lemma 7.12), it is simple to check that $(S(t))_{t\geq 0}$ is a strongly continuous semigroup. Its infinitesimal generator $\mathcal{L}$ is then the Markov generator of the SSEP with conductances $c_{x,y}$. We recall that $\mathcal{L}$ has domain

\mathcal{D}(\mathcal{L}):=\big\{f\in C(\{0,1\}^{S})\,:\,\tfrac{S(t)f-f}{t}\text{ has a limit in }C(\{0,1\}^{S})\text{ as }t\downarrow 0\big\}

and is defined as $\mathcal{L}f=\lim_{t\downarrow 0}\frac{S(t)f-f}{t}$, where the above limit is taken in $C(\{0,1\}^{S})$.

Proposition 3.4 (Infinitesimal generator on local functions).

Local functions belong to the domain $\mathcal{D}(\mathcal{L})$. Moreover, for any local function $f$, we have

\mathcal{L}f(\eta)=\sum_{x\in S}\sum_{y\in S}c_{x,y}\eta(x)\bigl(1-\eta(y)\bigr)\left[f(\eta^{x,y})-f(\eta)\right]\,,\qquad\eta\in\{0,1\}^{S}\,,   (7)

and

\mathcal{L}f(\eta)=\sum_{\{x,y\}\in\mathcal{E}_{S}}c_{x,y}\bigl[f(\eta^{x,y})-f(\eta)\bigr]\,,\qquad\eta\in\{0,1\}^{S}\,.   (8)

The sums on the r.h.s. of (7) and (8) are absolutely convergent series of functions in $C(\{0,1\}^{S})$.

The configuration $\eta^{x,y}$ is obtained from $\eta$ by exchanging the occupation variables at $x$ and $y$, i.e.

\eta^{x,y}(z)=\begin{cases}\eta(y)&\text{ if }z=x\,,\\ \eta(x)&\text{ if }z=y\,,\\ \eta(z)&\text{ otherwise}\,.\end{cases}   (9)

Moreover, we recall that a function $f:\{0,1\}^{S}\to{\mathbb{R}}$ is called local if, for some finite $A\subset S$, $f(\eta)$ is determined by $(\eta(x))_{x\in A}$ (note that any local function is also continuous on $\{0,1\}^{S}$). In what follows, we will denote by $\mathcal{C}$ the set of local functions, which is dense in $C(\{0,1\}^{S})$.
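As an elementary illustration of (7) and (8) (an immediate computation, not stated explicitly in the paper), take the local function $f(\eta):=\eta(z)$ for a fixed $z\in S$. Then $f(\eta^{x,y})-f(\eta)$ vanishes unless $z\in\{x,y\}$, and both (7) and (8) reduce to

\mathcal{L}f(\eta)\;=\;\sum_{y\in S}c_{z,y}\bigl[\eta(y)-\eta(z)\bigr]\;=\;\sum_{x\in S}\eta(x)\,\tilde{\mathbb{L}}\mathds{1}_{z}(x)\,,

where $\mathds{1}_{z}$ denotes the indicator function of $\{z\}$ and the second equality follows from the symmetry $c_{x,y}=c_{y,x}$. Since $f(\eta)=\pi[\eta](\mathds{1}_{z})$, this is the simplest instance of the duality relation (16) of Section 3.3.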

The proof of Proposition 3.4, given in Section 7.3, has several similarities with the one in [11, Appendix B], where another graphical construction is used.

As in [20], given $f\in C(\{0,1\}^{S})$, we set

\Delta_{f}(x):=\sup\big\{|f(\eta)-f(\xi)|\,:\,\eta,\xi\in\{0,1\}^{S}\text{ with }\eta(y)=\xi(y)\;\forall y\in S\setminus\{x\}\big\}\,.   (10)

We also define

|||f|||:=\sum_{x\in S}\Delta_{f}(x)\qquad\text{ and }\qquad|||f|||_{*}:=\sum_{x\in S}c_{x}\Delta_{f}(x)\,.   (11)

Then Proposition 3.4 can be extended to a larger class of functions by approximation. Indeed we have:

Proposition 3.5 (Infinitesimal generator on further good functions).

Let $f\in C(\{0,1\}^{S})$ satisfy $|||f|||<+\infty$ and $|||f|||_{*}<+\infty$. Then $f\in\mathcal{D}(\mathcal{L})$ and $\mathcal{L}f(\eta)=\sum_{\{x,y\}\in\mathcal{E}_{S}}c_{x,y}\bigl[f(\eta^{x,y})-f(\eta)\bigr]$, where the r.h.s. is an absolutely convergent series of functions in $C(\{0,1\}^{S})$.

The proof of the above proposition is given in Section 7.4. Note that if $f$ is a local function, then $\Delta_{f}(x)=0$ except for a finite set of elements $x$. In particular, local functions satisfy both $|||f|||<+\infty$ and $|||f|||_{*}<+\infty$.
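For instance (a simple example, not taken from the paper), for the local function $f(\eta)=\eta(z)$ one has $\Delta_{f}(x)=\mathds{1}(x=z)$, so that $|||f|||=1$ and $|||f|||_{*}=c_{z}<+\infty$ by Condition (C1). On the other hand, for the continuous non-local function $f(\eta)=\sum_{n\geq 1}2^{-n}\eta(s_{n})$ one has $\Delta_{f}(s_{n})=2^{-n}$, hence $|||f|||=1$ while $|||f|||_{*}=\sum_{n\geq 1}2^{-n}c_{s_{n}}$ may be finite or infinite depending on the conductances; Proposition 3.5 covers exactly those non-local functions for which both quantities are finite.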

For the next result we recall that a set $\mathcal{A}$ is a core of $\mathcal{L}$ if $\mathcal{A}\subset\mathcal{D}(\mathcal{L})$ and the graph of $\mathcal{L}$ in $C(\{0,1\}^{S})\times C(\{0,1\}^{S})$ is the closure of the set $\{(f,\mathcal{L}f)\,:\,f\in\mathcal{A}\}$. We denote by $(X_{t})_{t\geq 0}$ the continuous-time random walk on $S$ with jump probability rates $c_{x,y}$ (which is well defined by Assumption SSEP) and we denote by $E_{x}[\cdot]$ the expectation referred to the random walk $(X_{t})_{t\geq 0}$ starting at $x$. Moreover, we set ${\mathbb{Q}}_{+}:={\mathbb{R}}_{+}\cap{\mathbb{Q}}$ and we recall that $c_{x}:=\sum_{y\in S}c_{x,y}$.

Proposition 3.6 (Core for $\mathcal{L}$).

Suppose that

E_{x}[c_{X_{t}}]<+\infty\qquad\forall t\in{\mathbb{Q}}_{+}\,,\;\forall x\in S\,.   (12)

Then the family $\mathcal{C}$ of local functions is a core for $\mathcal{L}$.

The proof of the above proposition is given in Section 7.5.

Remark 3.7.

Trivially, by combining Proposition 3.5 and Proposition 3.6, we get that the set $\{f\in C(\{0,1\}^{S})\,:\,|||f|||<+\infty\,,\;|||f|||_{*}<+\infty\}$ is a core for $\mathcal{L}$ under condition (12).

Our choice to deal in (12) with $t\in{\mathbb{Q}}_{+}$ and not with $t\in{\mathbb{R}}_{+}$ is motivated by the applications to random walks in random environment (see Section 4). Indeed, dealing with the countable set ${\mathbb{Q}}_{+}$, (12) is valid for almost any environment if, for fixed $t\in{\mathbb{Q}}_{+}$, for almost any environment it holds $E_{x}[c_{X_{t}}]<+\infty$ for all $x\in S$.

We note that, due to the symmetry $c_{x,y}=c_{y,x}$, condition (1), assuring the validity of the analytic construction of the SSEP in [20] and in particular of [20, Theorem 3.9], reads

\sup_{x\in S}c_{x}<+\infty\,.   (13)

When (13) is satisfied, [20, Theorem 3.9] provides also a core for the generator of the SSEP, given by $\{f\in C(\{0,1\}^{S})\,:\,|||f|||<+\infty\}$. It is then standard to derive from this result that $\mathcal{C}$ is a core for $\mathcal{L}$ (see e.g. Remark 7.13 in Section 7.4 and use that the graph of $\mathcal{L}$ is closed since $\mathcal{L}$ is a Markov generator [20, Chapter 1]). All the above results from [20] are indeed included in ours since, under (13), Assumption SSEP is fulfilled, condition (12) is automatically satisfied and $|||f|||_{*}<+\infty$ whenever $|||f|||<+\infty$.

3.3. Duality with the random walk $(X_{t})_{t\geq 0}$

Due to its relevance for the study of hydrodynamics and fluctuations of the empirical field for SSEPs in random environments (see e.g. [5, 11, 16]), we focus here on duality at the level of the density field. Apart from Lemma 3.9, the notions and results presented below are discussed in [11, Sections 6 and 8] (the proofs in [11] can be easily adapted to our notation: it is enough to take $\varepsilon=1$ there and to replace $\hat{\omega}$ and $C_{\rm loc}(\varepsilon\hat{\omega})$ there by $S$ and $C_{c}(S)$ respectively). We denote by $C_{c}(S)$ the set of functions $f:S\to{\mathbb{R}}$ which are zero outside a finite subset of $S$. Since $S$ has the discrete topology, $C_{c}(S)$ corresponds to the set of real functions with compact support. We write $\mathfrak{n}$ for the counting measure on $S$ and we introduce the set $\mathcal{D}$ given by

\mathcal{D}:=\big\{f\in L^{2}(\mathfrak{n})\,:\,\sum_{x\in S}\sum_{y\in S}c_{x,y}\bigl(f(y)-f(x)\bigr)^{2}<+\infty\big\}\,.

We then consider the bilinear form $\mathcal{E}$ with domain $\mathcal{D}$ given by

\mathcal{E}(f,g):=\frac{1}{2}\sum_{x\in S}\sum_{y\in S}c_{x,y}\big(f(y)-f(x)\big)\big(g(y)-g(x)\big)\,,\qquad f,g\in\mathcal{D}\,.

On $\mathcal{D}$ we introduce the norm $\|f\|_{\mathcal{D}}$ with $\|f\|^{2}_{\mathcal{D}}:=\|f\|^{2}_{L^{2}(\mathfrak{n})}+\mathcal{E}(f,f)$. One can easily derive from Condition (C1) that $C_{c}(S)\subset\mathcal{D}$. We then call $\mathcal{D}_{*}$ the closure of $C_{c}(S)$ in $\mathcal{D}$ w.r.t. the norm $\|\cdot\|_{\mathcal{D}}$ (see [11, Section 6]). By arguing as in [15, Example 1.2.5], the bilinear form $\mathcal{E}$ restricted to $\mathcal{D}_{*}$ is a regular Dirichlet form. As a consequence, there exists a unique nonpositive self-adjoint operator ${\mathbb{L}}$ in $L^{2}(\mathfrak{n})$ such that $\mathcal{D}_{*}$ equals the domain of $\sqrt{-{\mathbb{L}}}$ and $\mathcal{E}(f,f)=\|\sqrt{-{\mathbb{L}}}f\|^{2}_{L^{2}(\mathfrak{n})}$ for any $f\in\mathcal{D}_{*}$ (cf. [15, Theorem 1.3.1]). By [15, Lemma 1.3.2 and Exercise 4.4.1], ${\mathbb{L}}$ is the infinitesimal generator of the strongly continuous Markov semigroup $(P_{t})_{t\geq 0}$ on $L^{2}(\mathfrak{n})$ associated to the random walk $(X_{t})_{t\geq 0}$ on $S$ with jump probability rates $c_{x,y}$ (defined in terms of holding times and jump probabilities as after Assumption SSEP). In particular, we have $P_{t}f(x):=E_{x}\bigl[f(X_{t})\bigr]$, where $E_{x}$ is the expectation referred to the random walk starting at $x$. In what follows we write $\mathcal{D}({\mathbb{L}})$ for the domain of the operator ${\mathbb{L}}$.
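On a finite set the objects introduced above reduce to elementary matrices; the following Python sketch (a finite toy version, with all names and rate values being illustrative assumptions) builds the generator $\mathbb{L}$ with off-diagonal entries $c_{x,y}$ and diagonal entries $-c_{x}$, the semigroup $P_{t}=e^{t\mathbb{L}}$, and checks its symmetry in $L^{2}$ of the counting measure:

import numpy as np
from scipy.linalg import expm

# symmetric conductances on the toy set S = {0, 1, 2, 3}
n = 4
C = np.zeros((n, n))
C[0, 1] = C[1, 0] = 2.0
C[1, 2] = C[2, 1] = 0.5
C[2, 3] = C[3, 2] = 1.0

L = C - np.diag(C.sum(axis=1))    # (L f)(x) = sum_y c_{x,y} (f(y) - f(x))
P_t = expm(1.5 * L)               # semigroup at time t = 1.5; each row sums to 1

f = np.array([1.0, 0.0, 0.0, 0.0])
print(P_t @ f)                    # (P_t f)(x) = E_x[f(X_t)]
print(np.allclose(P_t, P_t.T))    # reversibility w.r.t. the counting measure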

Definition 3.8.

Given a function $f:S\to{\mathbb{R}}$ such that $\sum_{x\in S}c_{x}|f(x)|<+\infty$, we define $\tilde{{\mathbb{L}}}f:S\to{\mathbb{R}}$ as $\tilde{{\mathbb{L}}}f(x):=\sum_{y\in S}c_{x,y}\big(f(y)-f(x)\big)$.

Note that the series on the r.h.s. of the definition of $\tilde{{\mathbb{L}}}f(x)$ is absolutely convergent by the assumption on $f$ and the symmetry of the $c_{x,y}$'s (indeed $\sum_{y\in S}c_{x,y}|f(x)|=c_{x}|f(x)|<+\infty$, while $\sum_{y\in S}c_{x,y}|f(y)|$ is finite since $\sum_{x\in S}\sum_{y\in S}c_{x,y}|f(y)|=\sum_{x\in S}\sum_{y\in S}c_{y,x}|f(y)|=\sum_{y\in S}c_{y}|f(y)|$). In particular, if $f\in C_{c}(S)$ then $\tilde{\mathbb{L}}f$ is well defined.

Although not necessary to prove the hydrodynamic limit of SSEPs on point processes, the following result has its own interest since it makes the generator ${\mathbb{L}}$ explicit on local functions:

Lemma 3.9.

If $f\in C_{c}(S)$, then $f\in\mathcal{D}({\mathbb{L}})$ and ${\mathbb{L}}f=\tilde{\mathbb{L}}f$.

The above lemma is proved in Section 7.6.

We can now describe the duality between the SSEP with conductances $c_{x,y}$ and the random walk with probability rates $c_{x,y}$ at the level of the density field (i.e. the empirical measure). To this aim we recall that, given $\eta\in\{0,1\}^{S}$, the empirical measure $\pi[\eta]$ is the atomic measure on $S$ given by

\pi[\eta]:=\sum_{x\in S}\eta(x)\delta_{x}\,.   (14)

Given a real function $f$ on $S$ integrable w.r.t. $\pi[\eta]$, we will write $\pi[\eta](f)$ or simply $\pi(f)$ for the sum $\sum_{x\in S}f(x)\eta(x)$ of $f$ w.r.t. $\pi[\eta]$. Trivially, if $\sum_{x\in S}|f(x)|<+\infty$ as in the lemma below, then $f$ is integrable w.r.t. $\pi[\eta]$ for all $\eta\in\{0,1\}^{S}$.

Recall that $\mathcal{L}:\mathcal{D}(\mathcal{L})\to C(\{0,1\}^{S})$ is the infinitesimal generator of the semigroup $(S(t))_{t\geq 0}$ on $C(\{0,1\}^{S})$ associated to the SSEP with conductances $c_{x,y}$.

Lemma 3.10 (see [11, Lemma 8.2]).

Suppose that $f:S\to{\mathbb{R}}$ satisfies

\sum_{x\in S}|f(x)|<+\infty\qquad\text{ and }\qquad\sum_{x\in S}c_{x}|f(x)|<+\infty\,.   (15)

Then the map $\{0,1\}^{S}\ni\eta\mapsto\pi[\eta](f)\in{\mathbb{R}}$ is continuous and indeed $\sum_{x\in S}f(x)\eta(x)$ is an absolutely convergent series in $C(\{0,1\}^{S})$. This map belongs to the domain $\mathcal{D}(\mathcal{L})$ of $\mathcal{L}$ and

\mathcal{L}\big(\pi(f)\big)=\sum_{x\in S}\eta(x)\tilde{{\mathbb{L}}}f(x)\,,   (16)

the r.h.s. of (16) being an absolutely convergent series in $C(\{0,1\}^{S})$.

If in addition to (15) we have $f\in\mathcal{D}({\mathbb{L}})\subset L^{2}(\mathfrak{n})$ (for example, if $f\in C_{c}(S)$), then ${\mathbb{L}}f=\tilde{\mathbb{L}}f$ and in particular we have the duality relation

\mathcal{L}\big(\pi(f)\big)=\sum_{x\in S}\eta(x){\mathbb{L}}f(x)\,.   (17)

Identities of the form (17) are relevant to the study of hydrodynamics and fluctuations of the density field since they are associated to Dynkin's martingales. We point out that above we have considered the Markov semigroup of the random walk on $L^{2}(\mathfrak{n})$ since this is particularly convenient for the stochastic homogenization analysis as in [12]. Of course, one could have considered as well the Markov semigroup on other functional spaces, such as the space $C_{0}(S)$ of continuous functions on $S$ vanishing at infinity endowed with the uniform norm.
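For the reader's convenience we recall the standard Dynkin's martingale alluded to above (a well-known general fact for Feller processes, not restated in the paper): for $f$ as in Lemma 3.10 with $f\in\mathcal{D}(\mathbb{L})$, under $\mathbb{P}^{\sigma}$ the process

M_{t}:=\pi[\eta_{t}](f)-\pi[\eta_{0}](f)-\int_{0}^{t}\sum_{x\in S}\eta_{s}(x)\,{\mathbb{L}}f(x)\,ds

is a martingale w.r.t. the filtration $(\mathcal{F}_{t})_{t\geq 0}$, by Dynkin's formula applied to the function $\eta\mapsto\pi[\eta](f)\in\mathcal{D}(\mathcal{L})$ together with (17). This identity is the starting point for the hydrodynamic and fluctuation analysis of the density field.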

4. Applications to SSEPs in a random environment

We discuss some applications of the results presented in the previous section to SSEPs in a random environment. We consider for simplicity $S={\mathbb{Z}}^{d}$, but the arguments and results we will present can be extended to more general graphs (see e.g. [14]).

We take $S={\mathbb{Z}}^{d}$. We denote by ${\mathbb{E}}_{d}$ the set of undirected edges of the lattice ${\mathbb{Z}}^{d}$. We take $\Omega:={\mathbb{R}}_{+}^{{\mathbb{E}}_{d}}$, endowed with the product topology and the Borel $\sigma$-algebra $\mathcal{B}(\Omega)$. We let $\mathcal{P}$ be a probability measure on $(\Omega,\mathcal{B}(\Omega))$. Given $\omega\in\Omega$ we write $\omega_{x,y}$ instead of $\omega_{\{x,y\}}$. Given $z\in{\mathbb{Z}}^{d}$ we write $\tau_{z}:\Omega\to\Omega$ for the shift $(\tau_{z}\omega)_{x,y}=\omega_{x-z,y-z}$.

Given the generic environment $\omega\in\Omega$, we set $c_{x,y}(\omega):=\omega_{x,y}$ if $\{x,y\}\in{\mathbb{E}}_{d}$ and $c_{x,y}(\omega):=0$ otherwise. We write $(X_{t}^{\omega})_{t\geq 0}$ for the continuous-time random walk in the environment $\omega$ with jump rates $c_{x,y}(\omega)$ and with state space ${\mathbb{Z}}^{d}\cup\{\partial\}$, $\partial$ being a cemetery state (in case of explosion). The properties stated in Section 3 hold $\mathcal{P}$-a.s. if Conditions (C1) and (C2) are satisfied $\mathcal{P}$-a.s. Trivially, (C1) is always satisfied. For (C2) (i.e. the random walk $(X_{t}^{\omega})_{t\geq 0}$ a.s. does not explode) we have the following criterion:

Proposition 4.1.

Condition (C2) is satisfied $\mathcal{P}$-a.s. in the following three cases:

  • (i)

    $d=1$ and $\mathcal{P}$ is stationary w.r.t. shifts;

  • (ii)

    $\mathcal{P}$ is stationary w.r.t. shifts and $\int\mathcal{P}(d\omega)c_{0}(\omega)<+\infty$;

  • (iii)

    $d\geq 2$ and under $\mathcal{P}$ the coordinates $\omega_{x,y}$, as $\{x,y\}$ varies in ${\mathbb{E}}_{d}$, are i.i.d.

The proof of Item (ii) will be an extension of the arguments used in [6, Lemma 4.3], since we are not assuming here that $\mathcal{P}$ is ergodic nor that $\mathcal{P}(\omega_{x,y}>0)=1$ for all $\{x,y\}\in{\mathbb{E}}_{d}$. Item (iii) will follow from the results of [2].

Proof.

We start with Item (i). Since $\mathcal{P}$ is stationary, it is enough to restrict to the random walk starting at the origin. Let $\omega$ be an environment for which the random walk $X^{\omega}_{t}$ starting at the origin explodes with positive probability. Since the sum of infinitely many independent exponential variables with parameters bounded from above by a finite constant diverges a.s., if the random walk explodes with positive probability (in path space) then the holding time parameter $c_{x}(\omega)$ has to diverge as $x\to+\infty$ or as $x\to-\infty$.

Now, given $M\in{\mathbb{N}}$, consider the random set $\hat{\omega}_{M}:=\{x\in{\mathbb{Z}}\,:\,c_{x}(\omega)\leq M\}$. For each $k\in{\mathbb{N}}$ consider the event $A_{M,k}\in\mathcal{B}(\Omega)$ defined as $A_{M,k}:=\{\omega\in\Omega\,:\,\max\hat{\omega}_{M}=k\}$. By the stationarity of $\mathcal{P}$, $\mathcal{P}(A_{M,k})$ does not depend on $k$. On the other hand, the events $A_{M,k}$, $k\in{\mathbb{N}}$, are disjoint. Since $1\geq\mathcal{P}(\cup_{k}A_{M,k})=\sum_{k}\mathcal{P}(A_{M,k})$ we conclude that $\mathcal{P}(A_{M,k})=0$ for each $M$ and $k$. By reasoning similarly for $\min\hat{\omega}_{M}$, we conclude that $\mathcal{P}$-a.s., for any $M\in{\mathbb{N}}$, the set $\hat{\omega}_{M}$ is either empty or unbounded from the left and from the right. This implies that $\mathcal{P}$-a.s. $\lim_{x\to+\infty}c_{x}(\omega)=+\infty$ is violated and, similarly, $\mathcal{P}$-a.s. $\lim_{x\to-\infty}c_{x}(\omega)=+\infty$ is violated. This allows to conclude that for $\mathcal{P}$-a.a. $\omega$ the random walk $X_{t}^{\omega}$ a.s. does not explode.

We now move to Item (ii). At the cost of enlarging the probability space by marking edges with i.i.d. non-degenerate random variables, independent from the rest, we can assume that

\mathcal{P}\big(\omega\in\Omega\,:\,\tau_{z}\omega\not=\tau_{z^{\prime}}\omega\text{ for all }z\not=z^{\prime}\text{ in }{\mathbb{Z}}^{d}\big)=1\,.   (18)

We let $\mathcal{Z}:=\int\mathcal{P}(d\omega)c_{0}(\omega)<+\infty$. If $\mathcal{Z}=0$ then $c_{0}=0$ $\mathcal{P}$-a.s. and, by stationarity, $c_{x}=0$ for all $x\in{\mathbb{Z}}^{d}$ $\mathcal{P}$-a.s. In this case, $\mathcal{P}$-a.s., all sites are absorbing for the random walk and therefore (C2) is trivially satisfied. From now on we restrict to the case $\mathcal{Z}>0$. We define $\mathcal{Q}$ as the probability measure on $\Omega$ given by $\mathcal{Q}(d\omega)=\mathcal{Z}^{-1}c_{0}(\omega)\mathcal{P}(d\omega)$. Due to the stationarity of $\mathcal{P}$, it is enough to consider the random walk starting at the origin and show that a.s. there is no explosion.

We introduce the discrete-time Markov chain on $\Omega$ with transition probabilities $r(\omega,\omega^{\prime})$ defined as follows: if $c_{0}(\omega)=0$ we set $r(\omega,\omega^{\prime}):=\delta_{\omega,\omega^{\prime}}$; if $c_{0}(\omega)>0$ we set $r(\omega,\omega^{\prime}):=c_{0,x}(\omega)/c_{0}(\omega)$ when $\omega^{\prime}=\tau_{x}\omega$ for some $x\in{\mathbb{Z}}^{d}$ and $r(\omega,\omega^{\prime}):=0$ otherwise. We write $(\bar{\omega}_{n})_{n\geq 0}$ for the above Markov chain when starting at $\omega$. Note that, due to (18), $(\bar{\omega}_{n})_{n\geq 0}$ is well defined for $\mathcal{P}$-a.a. $\omega$ and therefore for $\mathcal{Q}$-a.a. $\omega$. Since $c_{0}(\omega)r(\omega,\omega^{\prime})=c_{0}(\omega^{\prime})r(\omega^{\prime},\omega)$, the probability measure $\mathcal{Q}$ is reversible for the above Markov chain. We write $P_{\mathcal{Q}}$ for the law on the path space $\Omega^{{\mathbb{N}}}$ of the Markov chain $(\bar{\omega}_{n})_{n\geq 0}$ with initial distribution $\mathcal{Q}$. Since $\mathcal{Q}$ is reversible, $P_{\mathcal{Q}}$ is invariant w.r.t. shifts.

We now introduce a sequence $(T_{n})_{n\geq 0}$ of i.i.d. exponential times of mean one defined on another probability space $(\Theta,P)$. We can take $\Theta:={\mathbb{R}}_{+}^{{\mathbb{N}}}$ with the product topology, endowed with the Borel $\sigma$-algebra, and we can take $T_{k}(\theta):=\theta_{k}$ for all $\theta\in\Theta$. Then $P_{\mathcal{Q}}\otimes P$ is stationary w.r.t. time-shifts when thought of as a probability measure on the path space $(\Omega\times{\mathbb{R}}_{+})^{{\mathbb{N}}}$. We write $\mathcal{I}$ for the $\sigma$-algebra of shift-invariant subsets of $(\Omega\times{\mathbb{R}}_{+})^{{\mathbb{N}}}$. By the ergodic theorem the limit $\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}T_{k}(\theta)/c_{0}(\bar{\omega}_{k})$ exists $P_{\mathcal{Q}}\otimes P$-a.s. and equals the expectation of $T_{0}(\theta)/c_{0}(\bar{\omega}_{0})$ w.r.t. $P_{\mathcal{Q}}\otimes P$ conditioned on $\mathcal{I}$, which is a random variable with values in $(0,+\infty]$. As a consequence, $P_{\mathcal{Q}}\otimes P(\mathcal{W})=1$ where $\mathcal{W}:=\{\sum_{k=0}^{\infty}T_{k}(\theta)/c_{0}(\bar{\omega}_{k})=+\infty\}$. We observe that $\mathcal{Q}$ is concentrated on $\{\omega:c_{0}(\omega)>0\}$, and $\mathcal{Q}$ and $\mathcal{P}$ are mutually absolutely continuous when restricted to this set. On the other hand, if $c_{0}(\omega)=0$, we trivially have that $P_{\delta_{\omega}}\otimes P(\mathcal{W})=1$ (the definition of $P_{\delta_{\omega}}$ is similar to that of $P_{\mathcal{Q}}$). We conclude that $P_{\mathcal{P}}\otimes P\,(\mathcal{W})=1$.

Finally we can build the continuous-time random walk $X^{\omega}_{\cdot}=(X^{\omega}_{t})_{t\geq 0}$ with conductances $c_{x,y}(\omega)$ and starting at the origin by defining its jump process (i.e. the sequence of states visited by $X^{\omega}_{\cdot}$ in chronological order) as an additive functional of the Markov chain $(\bar{\omega}_{n})_{n\geq 0}$ (here we use again (18)) and by using the exponential times $T_{k}(\theta)/c_{0}(\bar{\omega}_{k})$ as waiting times. The construction is standard: when the environment is $\omega$, $X^{\omega}_{\cdot}$ starts at the origin and remains there until time $T_{0}(\theta)/c_{0}(\bar{\omega}_{0})=T_{0}(\theta)/c_{0}(\omega)$, afterwards it jumps to the site $x\in{\mathbb{Z}}^{d}$ such that $\bar{\omega}_{1}=\tau_{x}\omega$ and remains there for a time $T_{1}(\theta)/c_{0}(\bar{\omega}_{1})$, and so on. By (18), for $\mathcal{P}$-a.a. $\omega$ the above construction is well defined (e.g. the above $x$ is univocally determined). The event $\mathcal{W}$ then corresponds to non-explosion of the trajectory. Since $P_{\mathcal{P}}\otimes P\,(\mathcal{W})=1$, we conclude that for $\mathcal{P}$-a.a. $\omega$ condition (C2) is fulfilled.

We conclude with Item (iii). We define $\Gamma(\omega)$ as the graph with edges $\{x,y\}\in{\mathbb{E}}_{d}$ such that $\omega_{x,y}>0$ and with vertexes given by the points belonging to the above edges. We write $V(\omega)$ and $E(\omega)$ for the vertex set and the edge set of $\Gamma(\omega)$, respectively. For $e\in E(\omega)$ we set $t(e):=\min\{1,\omega_{e}^{-1/2}\}$ and, given $x,y\in V(\omega)$, we set $\tilde{d}(x,y):=\inf\{\sum_{i=1}^{n}t(e_{i})\}$, where the infimum is taken over all paths $(e_{1},e_{2},\dots,e_{n})$ from $x$ to $y$ in $\Gamma(\omega)$. Then, by [2, Lemma 2.5], the r.w. $X_{t}^{\omega}$ a.s. does not explode if for any connected component $\mathcal{C}$ of $\Gamma(\omega)$ there exist $x\in\mathcal{C}$ and $\theta>0$ such that

\sum_{y\in\mathcal{C}}\exp\{-\theta\tilde{d}(x,y)\}<+\infty\,.   (19)

We now define $\omega^{\prime}_{e}:=\max\{1,\omega_{e}\}$ and $t^{\prime}(e):=\min\{1,(\omega^{\prime}_{e})^{-1/2}\}$ for any $e\in{\mathbb{E}}_{d}$. Given $x,y\in{\mathbb{Z}}^{d}$, we set $\tilde{d}^{\prime}(x,y):=\inf\{\sum_{i=1}^{n}t^{\prime}(e_{i})\}$, where the infimum is taken over all paths $(e_{1},e_{2},\dots,e_{n})$ from $x$ to $y$ in the lattice ${\mathbb{Z}}^{d}$. For $e\in E(\omega)$ we have $t(e)\geq t^{\prime}(e)$ since $\omega^{\prime}_{e}\geq\omega_{e}$. Since in addition paths in $\Gamma(\omega)$ are also paths in the lattice ${\mathbb{Z}}^{d}$, we get that $\tilde{d}^{\prime}(x,y)\leq\tilde{d}(x,y)$ for any $x,y\in V(\omega)$. In particular, given $x\in\mathcal{C}$ as in (19), the bound in (19) holds if

\sum_{y\in{\mathbb{Z}}^{d}}\exp\{-\theta\tilde{d}^{\prime}(x,y)\}<+\infty\,.   (20)

We observe that under $\mathcal{P}$ the new conductances $\omega^{\prime}_{x,y}$, with $\{x,y\}\in{\mathbb{E}}_{d}$, are i.i.d. and bounded from below by 1. This is exactly the context of [2]. Then the bound (20) holds for $\mathcal{P}$-a.a. $\omega$ due to Theorem 4.3 and Lemma 2.11 in [2]. We conclude that for $\mathcal{P}$-a.a. $\omega$ condition (C2) is fulfilled. ∎

The result in Proposition 4.1-(ii) is extended in [14] to prove the a.s. non-explosion (i.e. Condition (C2)) for $\mathcal{P}$-a.a. $\omega$ for a very large class of random walks on random graphs in ${\mathbb{R}}^{d}$, possibly with a random vertex set (given by a simple point process).

We now give a simple criterion (based on Proposition 3.6) to verify that the set of local functions $\mathcal{C}$ is a core for the generator $\mathcal{L}=\mathcal{L}(\omega)$ for $\mathcal{P}$-a.a. environments $\omega$:

Proposition 4.2.

Suppose that $\mathcal{P}$ is stationary w.r.t. shifts and $\int\mathcal{P}(d\omega)c_{0}(\omega)<+\infty$. Then for $\mathcal{P}$-a.a. $\omega$ condition (12) is satisfied. In particular, for $\mathcal{P}$-a.a. $\omega$ the family $\mathcal{C}$ of local functions is a core for the generator $\mathcal{L}=\mathcal{L}(\omega)$.

We point out that the SSEP considered in Proposition 4.2 is well defined due to Proposition 4.1–(ii).

Proof.

Given the environment $\omega\in\Omega$, consider the process "environment viewed from the particle" $(\bar{\omega}_{t})_{t\geq 0}$, i.e. $\bar{\omega}_{t}:=\tau_{X_{t}^{\omega}}\omega$. By the definition of the translations $\tau_{z}:\Omega\to\Omega$ we have $c_{X^{\omega}_{t}}(\omega)=c_{0}(\tau_{X^{\omega}_{t}}\omega)=c_{0}(\bar{\omega}_{t})$. Moreover, because of the symmetry of the jump rates, $\mathcal{P}$ is a reversible (and therefore invariant) probability measure for this process. As a consequence, we have

\int\mathcal{P}(d\omega){\rm E}_{\omega}\big[c_{0}(\bar{\omega}_{t})\big]=\int\mathcal{P}(d\omega)c_{0}(\omega)\,,   (21)

where ${\rm E}_{\omega}$ is the expectation w.r.t. the process environment viewed from the particle starting at the environment $\omega$. By our assumption, (21) is finite. This implies that, for any $t\geq 0$, ${\rm E}_{\omega}\big[c_{0}(\bar{\omega}_{t})\big]<+\infty$ for $\mathcal{P}$-a.a. $\omega$. Hence, there exists a measurable set $\mathcal{A}\subset\Omega$ with $\mathcal{P}(\mathcal{A})=1$ such that, for all $\omega\in\mathcal{A}$, ${\rm E}_{\omega}\big[c_{0}(\bar{\omega}_{t})\big]<+\infty$ for any $t\in{\mathbb{Q}}_{+}$. This means that, for all $\omega\in\mathcal{A}$, $E_{0}\big[c_{X^{\omega}_{t}}(\omega)\big]<+\infty$ for all $t\in{\mathbb{Q}}_{+}$. We set $\mathcal{A}_{*}:=\cap_{z\in{\mathbb{Z}}^{d}}\tau_{z}\mathcal{A}$. By the stationarity of $\mathcal{P}$ we have $\mathcal{P}(\tau_{z}\mathcal{A})=\mathcal{P}(\mathcal{A})=1$ and therefore $\mathcal{P}(\mathcal{A}_{*})=1$. Moreover, for all $\omega\in\mathcal{A}_{*}$ and $x\in{\mathbb{Z}}^{d}$, $\tau_{x}\omega\in\mathcal{A}$ and therefore it holds $E_{0}\big[c_{X^{\tau_{x}\omega}_{t}}(\tau_{x}\omega)\big]<+\infty$ for all $t\in{\mathbb{Q}}_{+}$. Using that $E_{0}\big[c_{X^{\tau_{x}\omega}_{t}}(\tau_{x}\omega)\big]=E_{x}\big[c_{X^{\omega}_{t}}(\omega)\big]$, we get that (12) is satisfied for all $\omega\in\mathcal{A}_{*}$. Proposition 3.6 allows to conclude. ∎

The proof of Proposition 4.2 can be easily adapted to more general SSEPs in a random environment $\omega$, with symmetric jump rates $c_{x,y}(\omega)$ and on a random graph $\mathcal{G}(\omega)$ as in [12, 14]. More precisely, when considering models as in [12] with symmetric jump rates, the condition assuring (12) becomes $\int\mathcal{P}_{0}(d\omega)c_{0}(\omega)<+\infty$, where $\mathcal{P}_{0}$ is the Palm distribution associated to $\mathcal{P}$ (of course, one does not need to require all the assumptions in [12]). One can apply the above observation for example to the random walk on the infinite cluster of a supercritical percolation on ${\mathbb{Z}}^{d}$, also with random conductances (assuring anyway stationarity). In this case $\mathcal{P}_{0}$ would be the probability measure $\mathcal{P}$ conditioned to the event that $0$ is in the infinite cluster (see [12, Eq. (12)]).

5. Graphical construction and Markov semigroup of SEP

We discuss here the graphical construction and the Markov semigroup of the simple exclusion process (SEP) on the countable set $S$, when the jump rates $c_{x,y}$ are not necessarily symmetric.

We denote by $\mathcal{E}^{o}_{S}$ the family of ordered pairs of elements of $S$, i.e.

\mathcal{E}^{o}_{S}:=\{(x,y)\,:\,x\not=y,\;x,y\in S\}\,.

To each pair $(x,y)\in\mathcal{E}^{o}_{S}$ we associate a number $c_{x,y}\in[0,+\infty)$. It is convenient to set

c_{x,x}:=0\qquad\forall x\in S\,.

Note that $c_{x,y}$ is not assumed to be symmetric in $x$, $y$.

We consider the product space $D_{\mathbb{N}}^{\mathcal{E}^{o}_{S}}$ endowed with the product topology. This topology is induced by a metric $d(\cdot,\cdot)$ defined similarly to (6): $d(\mathcal{K},\mathcal{K}^{\prime}):=\sum_{i\geq 1}\sum_{j\geq 1}2^{-(i+j)}\min\big\{1,\mathfrak{d}(\mathcal{K}_{s_{i},s_{j}},\mathcal{K}_{s_{i},s_{j}}^{\prime})\big\}$. We write $\mathcal{K}=(\mathcal{K}_{x,y})_{(x,y)\in\mathcal{E}^{o}_{S}}$ for a generic element of $D_{\mathbb{N}}^{\mathcal{E}^{o}_{S}}$.

Definition 5.1 (Probability measure $\mathbb{P}$).

We associate to each pair $(x,y)\in\mathcal{E}^{o}_{S}$ a Poisson process $(N_{x,y}(t))_{t\geq 0}$ with intensity $c_{x,y}$ and with $N_{x,y}(0)=0$, such that the $N_{x,y}(\cdot)$'s are independent processes when varying the pair $(x,y)$ in $\mathcal{E}^{o}_{S}$. We define $\mathbb{P}$ as the law on $D_{\mathbb{N}}^{\mathcal{E}^{o}_{S}}$ of the random object $(N_{x,y}(\cdot))_{(x,y)\in\mathcal{E}^{o}_{S}}$ and we denote by $\mathbb{E}[\cdot]$ the expectation associated to $\mathbb{P}$.

The graphical construction of the SEP presented below is based on Harris’ percolation argument [7, 18]. To justify this construction we need a percolation-type assumption. To this aim we define

\mathcal{K}_{x,y}^{s}(t):=\mathcal{K}_{x,y}(t)+\mathcal{K}_{y,x}(t)\qquad\text{ for }t\geq 0\,,\;\{x,y\}\in\mathcal{E}_{S}\,.

Here $\mathcal{E}_{S}$ is defined as in (5). Note the symmetry relation $\mathcal{K}_{x,y}^{s}(t)=\mathcal{K}_{y,x}^{s}(t)$ and that $\mathcal{K}^{s}:=(\mathcal{K}^{s}_{x,y})_{\{x,y\}\in\mathcal{E}_{S}}$ belongs to $D_{\mathbb{N}}^{\mathcal{E}_{S}}$. When $\mathcal{K}$ is sampled with distribution $\mathbb{P}$, $\mathcal{K}^{s}$ is a collection of independent processes and in particular $\mathcal{K}^{s}_{x,y}$ is a Poisson process with parameter

c^{s}_{x,y}:=c_{x,y}+c_{y,x}\,.

We set $c^{s}_{x,x}:=0$ for all $x\in S$.

From now on we make the following assumption, in force throughout this section:

Assumption SEP. There exists $t_{0}>0$ such that for $\mathbb{P}$-a.a. $\mathcal{K}\in D_{\mathbb{N}}^{\mathcal{E}_{S}^{o}}$ the undirected graph $\mathcal{G}_{t_{0}}(\mathcal{K})$ with vertex set $S$ and edge set $\{\{x,y\}\in\mathcal{E}_{S}\,:\,\mathcal{K}^{s}_{x,y}(t_{0})>\mathcal{K}^{s}_{x,y}(0)\}$ has only connected components of finite cardinality.

We point out that for $\mathbb{P}$-a.a. $\mathcal{K}$ it holds $\mathcal{K}_{x,y}(0)=0$ for all $(x,y)\in\mathcal{E}_{S}^{o}$ and therefore $\mathcal{K}_{x,y}^{s}(0)=0$ for all $\{x,y\}\in\mathcal{E}_{S}$. Hence, Assumption SEP remains unchanged if we replace $\mathcal{K}^{s}_{x,y}(0)$ there by zero. On the other hand, the above choice is better suited for the construction of similar graphs on further time intervals as in Lemma 5.2 below. Due to the loss of memory of the Poisson point process, Assumption SEP implies the following property (we omit the proof since it is standard):

Lemma 5.2.

For {\mathbb{P}}–a.a. 𝒦DSo\mathcal{K}\in D_{{\mathbb{N}}}^{\mathcal{E}_{S}^{o}} the following holds: r\forall r\in{\mathbb{N}} the undirected graph 𝒢t0r(𝒦)\mathcal{G}^{r}_{t_{0}}(\mathcal{K}) with vertex set SS and edge set {{x,y}S:𝒦x,ys((r+1)t0)>𝒦x,ys(rt0)}\{\{x,y\}\in\mathcal{E}_{S}\,:\,\mathcal{K}^{s}_{x,y}((r+1)t_{0})>\mathcal{K}^{s}_{x,y}(rt_{0})\} has only connected components with finite cardinality.

Trivially, 𝒢t0r(𝒦)=𝒢t0(𝒦)\mathcal{G}^{r}_{t_{0}}(\mathcal{K})=\mathcal{G}_{t_{0}}(\mathcal{K}) for r=0r=0. We also point out that the properties appearing in Assumption SEP and Lemma 5.2 indeed define measurable subsets of DSoD_{\mathbb{N}}^{\mathcal{E}^{o}_{S}}:

Lemma 5.3.

Given rr\in{\mathbb{N}} the set Γr\Gamma_{r} of configurations 𝒦DSo\mathcal{K}\in D_{{\mathbb{N}}}^{\mathcal{E}_{S}^{o}} such that the graph 𝒢t0r(𝒦)\mathcal{G}^{r}_{t_{0}}(\mathcal{K}) has only connected components with finite cardinality is a Borel subset of DSoD_{{\mathbb{N}}}^{\mathcal{E}_{S}^{o}}.

The proof of the above lemma is trivial and therefore omitted (simply note that Γrc\Gamma_{r}^{c} corresponds to the fact that, for some site sns_{n}, for each integer k1k\geq 1 there exist distinct sites y1,y2,,yky_{1},y_{2},\dots,y_{k} in S{sn}S\setminus\{s_{n}\} such that 𝒦yi,yi+1s((r+1)t0)>𝒦yi,yi+1s(rt0)\mathcal{K}^{s}_{y_{i},y_{i+1}}((r+1)t_{0})>\mathcal{K}^{s}_{y_{i},y_{i+1}}(rt_{0}) for all i=0,1,,k1i=0,1,\dots,k-1, where y0:=sny_{0}:=s_{n}).

Trivially, Assumption SEP can be reformulated as follows:

Equivalent formulation of Assumption SEP: Given t0>0t_{0}>0, consider the random graph with vertex set SS obtained by putting an edge between xyx\not=y in SS with probability 1exp{cx,yst0}1-\exp\{-c^{s}_{x,y}t_{0}\}, independently when varying {x,y}\{x,y\} among S\mathcal{E}_{S}. Then, for some t0>0t_{0}>0, the above random graph has a.s. only connected components with finite cardinality.
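On a finite vertex set this reformulation can be explored directly by Monte Carlo: sample the edges independently with the prescribed probabilities and inspect the sizes of the resulting connected components. A minimal Python sketch follows (all names are ours and purely illustrative; on an infinite graph this is of course only a heuristic check on finite boxes).

```python
import math
import random
from collections import defaultdict

def component_sizes(sites, cs, t0, rng=random):
    """Sample the random graph of the equivalent formulation on the finite
    vertex set `sites`: the edge {x,y} is kept with probability
    1 - exp(-c^s_{x,y} * t0), independently over unordered pairs.

    cs: dict mapping frozenset({x, y}) -> c^s_{x,y}; missing pairs have rate 0.
    Returns the list of connected-component sizes (computed by a DFS).
    """
    adj = defaultdict(set)
    for edge, rate in cs.items():
        x, y = tuple(edge)
        if rng.random() < 1.0 - math.exp(-rate * t0):
            adj[x].add(y)
            adj[y].add(x)
    sizes, seen = [], set()
    for s in sites:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        sizes.append(size)
    return sizes
```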

Remark 5.4.

By stochastic domination, to check Assumption SEP one can as well replace {cx,ys:{x,y}S}\{c^{s}_{x,y}\,:\,\{x,y\}\in\mathcal{E}_{S}\} by any other family {c¯x,y:{x,y}S}\{\bar{c}_{x,y}\,:\,\{x,y\}\in\mathcal{E}_{S}\} such that cx,ysc¯x,yc^{s}_{x,y}\leq\bar{c}_{x,y} for any {x,y}S\{x,y\}\in\mathcal{E}_{S}.

In the above Assumption SEP we have not required any summability property analogous to Condition (C1) in Assumption SSEP. Indeed, this is not necessary due to the following fact proved in Section 8:

Lemma 5.5.

Assumption SEP implies for all xSx\in S that cxs:=yScx,ys<+c^{s}_{x}:=\sum_{y\in S}c^{s}_{x,y}<+\infty, yScx,y<+\sum_{y\in S}c_{x,y}<+\infty, yScy,x<+\sum_{y\in S}c_{y,x}<+\infty.

Recall the definition of Γr\Gamma_{r} given in Lemma 5.3.

Definition 5.6 (Set Γ\Gamma_{*}).

We define Γ\Gamma_{*} as the family of 𝒦DSo\mathcal{K}\in D_{{\mathbb{N}}}^{\mathcal{E}^{o}_{S}} such that

  • (i)

    𝒦rΓr\mathcal{K}\in\cap_{r\in{\mathbb{N}}}\Gamma_{r};

  • (ii)

the sum ySx𝒦x,ys(t)=ySx(𝒦x,y(t)+𝒦y,x(t))\sum_{y\in S\setminus x}\mathcal{K}^{s}_{x,y}(t)=\sum_{y\in S\setminus x}(\mathcal{K}_{x,y}(t)+\mathcal{K}_{y,x}(t)) is finite for all xSx\in S and t+t\in{\mathbb{R}}_{+};

  • (iii)

given any (x,y)(x,y)(x,y)\not=(x^{\prime},y^{\prime}) in So\mathcal{E}^{o}_{S} the set of jump times of 𝒦x,y\mathcal{K}_{x,y} and the set of jump times of 𝒦x,y\mathcal{K}_{x^{\prime},y^{\prime}} are disjoint and moreover all jumps equal +1+1;

  • (iv)

    𝒦x,y(0)=0\mathcal{K}_{x,y}(0)=0 for all (x,y)So(x,y)\in\mathcal{E}_{S}^{o}.

It is simple to check (also by using Lemmas 5.3 and 5.5) the following:

Lemma 5.7.

Γ\Gamma_{*} is measurable, i.e. Γ(DSo)\Gamma_{*}\in\mathcal{B}(D_{{\mathbb{N}}}^{\mathcal{E}^{o}_{S}}), and (Γ)=1{\mathbb{P}}(\Gamma_{*})=1.

We briefly describe the graphical construction of the SEP for 𝒦Γ\mathcal{K}\in\Gamma_{*} under Assumption SEP.

Given σ{0,1}S\sigma\in\{0,1\}^{S} we first define a trajectory (ηtσ[𝒦])t0(\eta^{\sigma}_{t}[\mathcal{K}])_{t\geq 0} in D{0,1}SD_{\{0,1\}^{S}} starting at σ\sigma by an iterative procedure. We set η0σ[𝒦]:=σ\eta^{\sigma}_{0}[\mathcal{K}]:=\sigma. Suppose that the trajectory has been defined up to time rt0rt_{0}, rr\in{\mathbb{N}}. As 𝒦Γ\mathcal{K}\in\Gamma_{*} all connected components of 𝒢t0r(𝒦)\mathcal{G}^{r}_{t_{0}}(\mathcal{K}) have finite cardinality. Let 𝒞\mathcal{C} be such a connected component and let

{s1<s2<<sk}={s:𝒦x,y(s)=𝒦x,y(s)+1 for some xy in 𝒞,rt0<s(r+1)t0}.\begin{split}&\{s_{1}<s_{2}<\cdots<s_{k}\}=\\ &\bigl{\{}s\,:\mathcal{K}_{x,y}(s)=\mathcal{K}_{x,y}(s-)+1\text{ for some }x\not=y\text{ in }\mathcal{C},\;rt_{0}<s\leq(r+1)t_{0}\bigr{\}}\,.\end{split} (22)

The local evolution ηtσ[𝒦](z)\eta^{\sigma}_{t}[\mathcal{K}](z) with z𝒞z\in\mathcal{C} and rt0<t(r+1)t0rt_{0}<t\leq(r+1)t_{0} is described as follows. Start with ηrt0σ[𝒦]\eta^{\sigma}_{rt_{0}}[\mathcal{K}] as configuration at time rt0rt_{0} in 𝒞\mathcal{C}. At time s1s_{1} move a particle from xx to yxy\not=x with x,y𝒞x,y\in\mathcal{C} if, just before time s1s_{1}, it holds:

  • (i)

    site xx is occupied and site yy is empty;

  • (ii)

    𝒦x,y(s1)=𝒦x,y(s1)+1\mathcal{K}_{x,y}(s_{1})=\mathcal{K}_{x,y}(s_{1}-)+1.

Note that, since 𝒦Γ\mathcal{K}\in\Gamma_{*}, the set (22) is indeed finite and there exists at most one ordered pair (x,y)(x,y) satisfying (i) and (ii). After this first step, repeat the same operation as above, in order, at the times s2,s3,,sks_{2},s_{3},\dots,s_{k}. Then move to another connected component of 𝒢t0r(𝒦)\mathcal{G}_{t_{0}}^{r}(\mathcal{K}) and repeat the above construction, and so on. As the connected components are disjoint, the resulting path does not depend on the order in which we choose the connected components in the above algorithm (we could as well proceed simultaneously with all connected components). This procedure defines (ηtσ[𝒦])rt0<t(r+1)t0(\eta^{\sigma}_{t}[\mathcal{K}])_{rt_{0}<t\leq(r+1)t_{0}}. Starting with r=0r=0 and progressively increasing rr by +1+1 we get the trajectory ησ[𝒦]=(ηtσ[𝒦])t0\eta^{\sigma}_{\cdot}[\mathcal{K}]=(\eta^{\sigma}_{t}[\mathcal{K}])_{t\geq 0}.
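The elementary step performed inside a single connected component and a single time slab can be written compactly as follows; this is a minimal sketch (the input format is ours and purely illustrative), not part of the formal construction.

```python
def evolve_component(eta, jumps, t_start, t_end):
    """One slab (r t_0, (r+1) t_0] of the SEP graphical construction on one
    finite connected component.

    eta:   dict site -> 0/1, configuration at time t_start on the component
           (modified in place and returned).
    jumps: iterable of (s, x, y): the directed clock K_{x,y} rings at time s,
           with x != y in the component; the ringing times are assumed distinct.
    """
    for s, x, y in sorted(jumps):                    # handle the ringing times in order
        if t_start < s <= t_end and eta[x] == 1 and eta[y] == 0:
            eta[x], eta[y] = 0, 1                    # the particle at x jumps to y
    return eta
```

Since distinct connected components share no sites, applying this step component by component (in any order) and then moving to the next time slab reproduces the trajectory described above.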

The filtered measurable space (D{0,1}S,(t)t0,)(D_{\{0,1\}^{S}},(\mathcal{F}_{t})_{t\geq 0},\mathcal{F}) is defined as in Section 3.1. Again the space C({0,1}S)C(\{0,1\}^{S}) of real continuous functions on {0,1}S\{0,1\}^{S} is endowed with the uniform topology. Given σ{0,1}S\sigma\in\{0,1\}^{S}, we define σ{\mathbb{P}}^{\sigma} as the probability measure on the above filtered measurable space given by σ(A):=(𝒦Γ:ησ[𝒦]A){\mathbb{P}}^{\sigma}(A):={\mathbb{P}}(\mathcal{K}\in\Gamma_{*}\,:\,\eta^{\sigma}_{\cdot}[\mathcal{K}]\in A) for all AA\in\mathcal{F}. By Lemma 8.3 in Section 8 the set {𝒦Γ:ησ[𝒦]A}\{\mathcal{K}\in\Gamma_{*}\,:\,\eta^{\sigma}_{\cdot}[\mathcal{K}]\in A\} is indeed measurable and therefore σ{\mathbb{P}}^{\sigma} is well defined.

Similarly to Propositions 3.2, 3.3 and 3.4 we have the following results (see Section 8.1 and 8.2 for their proofs):

Proposition 5.8 (Construction of SEP).

The family {σ:σ{0,1}S}\big{\{}{\mathbb{P}}^{\sigma}:\sigma\in\{0,1\}^{S}\big{\}} of probability measures on the filtered measurable space (D{0,1}S,(t)t0,)(D_{\{0,1\}^{S}},(\mathcal{F}_{t})_{t\geq 0},\mathcal{F}) is a Markov process (called simple exclusion process with rates cx,yc_{x,y}), i.e.

  • (i)

σ(η0=σ)=1{\mathbb{P}}^{\sigma}(\eta_{0}=\sigma)=1 for all σ{0,1}S\sigma\in\{0,1\}^{S};

  • (ii)

    for any AA\in\mathcal{F} the function {0,1}Sσσ(A)[0,1]\{0,1\}^{S}\ni\sigma\mapsto{\mathbb{P}}^{\sigma}(A)\in[0,1] is measurable;

  • (iii)

    for any σ{0,1}S\sigma\in\{0,1\}^{S} and AA\in\mathcal{F} it holds σ(ηt+A|t)=ηt(A){\mathbb{P}}^{\sigma}(\eta_{t+\cdot}\in A\,|\,\mathcal{F}_{t})={\mathbb{P}}^{\eta_{t}}(A) σ{\mathbb{P}}^{\sigma}–a.s.

Remark 5.9.

By changing t0t_{0} in the graphs 𝒢t0r(𝒦)\mathcal{G}_{t_{0}}^{r}(\mathcal{K}), for {\mathbb{P}}–a.a. 𝒦\mathcal{K} the path ησ[𝒦]\eta^{\sigma}_{\cdot}[\mathcal{K}] constructed above does not change, for any σ\sigma. In particular, the above SEP does not depend on the particular t0t_{0} for which Assumption SEP holds.

Proposition 5.10 (Feller property).

Given fC({0,1}S)f\in C(\{0,1\}^{S}) and given t0t\geq 0, the map Stf(σ):=𝔼[f(ηtσ[𝒦])]=𝑑σ(η)f(ηt)S_{t}f(\sigma):={\mathbb{E}}\left[f\big{(}\eta^{\sigma}_{t}[\mathcal{K}]\big{)}\right]=\int d{\mathbb{P}}^{\sigma}(\eta_{\cdot})f(\eta_{t}) belongs to C({0,1}S)C(\{0,1\}^{S}). In particular, the SEP with rates cx,yc_{x,y} is a Feller process.

Proposition 5.11 (Infinitesimal generator on local functions).

Local functions belong to the domain 𝒟()\mathcal{D}(\mathcal{L}) of the infinitesimal generator \mathcal{L} of the SEP with rates cx,yc_{x,y}. Moreover, for any local function ff, we have

f(η)=xSyScx,yη(x)(1η(y))[f(ηx,y)f(η)],η{0,1}S.\mathcal{L}f(\eta)=\sum_{x\in S}\sum_{y\in S}c_{x,y}\,\eta(x)\bigl{(}1-\eta(y)\bigr{)}\left[f(\eta^{x,y})-f(\eta)\right]\,,\;\;\eta\in\{0,1\}^{S}\,. (23)

The series in the r.h.s. is an absolutely convergent series of functions in C({0,1}S)C(\{0,1\}^{S}).

We now show that Proposition 5.11 can be extended to a larger class of functions. To this aim, given fC({0,1}S)f\in C(\{0,1\}^{S}), recall the definition of Δf(x)\Delta_{f}(x) given in (10) and recall that |f|:=xSΔf(x){\left|\kern-1.07639pt\left|\kern-1.07639pt\left|f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}:=\sum_{x\in S}\Delta_{f}(x) (cf. (11)). While in the symmetric case we considered |f|:=xScxΔf(x){\left|\kern-1.07639pt\left|\kern-1.07639pt\left|f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}_{*}:=\sum_{x\in S}c_{x}\Delta_{f}(x), we now set

|f|:=xScxsΔf(x) where cxs:=yScx,ys.{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}_{\star}:=\sum_{x\in S}c^{s}_{x}\Delta_{f}(x)\;\text{ where }\;c^{s}_{x}:=\sum_{y\in S}c_{x,y}^{s}\,.

Similarly to Proposition 3.5 we have the following:

Proposition 5.12 (Infinitesimal generator on further good functions).

Let fC({0,1}S)f\in C(\{0,1\}^{S}) satisfy |f|<+{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}<+\infty and |f|<+{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}_{\star}<+\infty. Then f𝒟()f\in\mathcal{D}(\mathcal{L}) and f(η)=xSyScx,yη(x)(1η(y))[f(ηx,y)f(η)]\mathcal{L}f(\eta)=\sum_{x\in S}\sum_{y\in S}c_{x,y}\eta(x)(1-\eta(y))\bigl{[}f(\eta^{x,y})-f(\eta)\bigr{]}, where the r.h.s. is an absolutely convergent series of functions in C({0,1}S)C(\{0,1\}^{S}).

We point out that local functions satisfy both |f|<+{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}<+\infty and |f|<+{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}_{\star}<+\infty, hence Proposition 5.12 is an extension of Proposition 5.11. The proof of Proposition 5.12 is given in Section 8.3.

Finally we provide a criterion assuring that the family of local functions is a core for the generator \mathcal{L}. To this aim we need the following:

Definition 5.13 (Set Br,x(𝒦)B_{r,x}(\mathcal{K}) and 𝒞t,x(𝒦)\mathcal{C}_{t,x}(\mathcal{K})).

Given xSx\in S, rr\in{\mathbb{N}} and 𝒦Γ\mathcal{K}\in\Gamma_{*} we define the set Br,x(𝒦)B_{r,x}(\mathcal{K}) as follows. First we let C0C_{0} be the connected component of xx in the graph 𝒢t0r(𝒦)\mathcal{G}^{r}_{t_{0}}(\mathcal{K}). Then, we let C1C_{1} be the union of the connected components in the graph 𝒢t0r1(𝒦)\mathcal{G}^{r-1}_{t_{0}}(\mathcal{K}) of yy as yy varies in C0C_{0}. In general, we introduce iteratively C1,C2,,CrC_{1},C_{2},\dots,C_{r} by defining CjC_{j} as the union of the connected components in the graph 𝒢t0rj(𝒦)\mathcal{G}^{r-j}_{t_{0}}(\mathcal{K}) of yy as yy varies in Cj1C_{j-1}. We then set Br,x(𝒦):=CrB_{r,x}(\mathcal{K}):=C_{r} and 𝒞t,x(𝒦):=Br,x(𝒦)=Cr\mathcal{C}_{t,x}(\mathcal{K}):=B_{r,x}(\mathcal{K})=C_{r} for rt0<t(r+1)t0rt_{0}<t\leq(r+1)t_{0}, 𝒞0,x:={x}\mathcal{C}_{0,x}:=\{x\}.
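In algorithmic form, the iteration defining B_{r,x}(\mathcal{K}) reads as follows. This is only a sketch: the helper returning the connected component of a site in \mathcal{G}^{j}_{t_{0}}(\mathcal{K}) is assumed to be given, and all names are illustrative.

```python
def B_r_x(r, x, component):
    """B_{r,x}(K) as in Definition 5.13.

    component: function (j, site) -> set of sites, the connected component
               of `site` in the graph G^j_{t0}(K), for j = 0, ..., r.
    """
    C = component(r, x)                                      # C_0
    for j in range(1, r + 1):
        C = set().union(*(component(r - j, y) for y in C))   # C_j
    return C                                                 # B_{r,x}(K) = C_r
```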

Properties and relevance of Br,x(𝒦)B_{r,x}(\mathcal{K}) and 𝒞t,x(𝒦)\mathcal{C}_{t,x}(\mathcal{K}) will be discussed in Remark 8.1 in Section 8. We can now state the above mentioned criterion:

Proposition 5.14 (Core for \mathcal{L}).

Suppose that

𝔼[|𝒞t,x|]<+xS,t+,\displaystyle{\mathbb{E}}\big{[}|\mathcal{C}_{t,x}|\big{]}<+\infty\qquad\;\;\;\,\forall x\in S\,,\forall t\in{\mathbb{R}}_{+}\,, (24)
𝔼[z𝒞t,xczs]<+xS,t+.\displaystyle{\mathbb{E}}\big{[}\sum_{z\in\mathcal{C}_{t,x}}c_{z}^{s}\big{]}<+\infty\qquad\forall x\in S\,,\forall t\in{\mathbb{R}}_{+}\,. (25)

Then the family 𝒞\mathcal{C} of local functions is a core for \mathcal{L}.

The proof of Proposition 5.14 is given in Section 8.4.

Remark 5.15.

Trivially, by combining Proposition 5.12 and Proposition 5.14, we get that the set {fC({0,1}S):|f|<+,|f|<+}\{f\in C(\{0,1\}^{S})\,:\,{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}<+\infty\,,\;{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}_{\star}<+\infty\} is a core for \mathcal{L} under conditions (24) and (25).

The above conditions (24) and (25) correspond to a countable family of requirements. Indeed, since (24) and (25) are trivially satisfied for t=0t=0 as 𝒞0,x={x}\mathcal{C}_{0,x}=\{x\}, (24) and (25) are equivalent respectively to (26) and (27):

𝔼[|Br,x(𝒦)|]<+r,xS,\displaystyle{\mathbb{E}}\big{[}|B_{r,x}(\mathcal{K})|\big{]}<+\infty\qquad\;\;\forall r\in{\mathbb{N}}\,,\;\forall x\in S\,, (26)
𝔼[zBr,x(𝒦)czs]<+r,xS.\displaystyle{\mathbb{E}}\big{[}\sum_{z\in B_{r,x}(\mathcal{K})}c_{z}^{s}\big{]}<+\infty\qquad\forall r\in{\mathbb{N}}\,,\;\forall x\in S\,. (27)

Below we write that xyx\longleftrightarrow y in 𝒢t0(𝒦)\mathcal{G}_{t_{0}}(\mathcal{K}) if xx and yy belong to the same connected component of 𝒢t0(𝒦)\mathcal{G}_{t_{0}}(\mathcal{K}). Since 𝒞t0,x(𝒦)\mathcal{C}_{t_{0},x}(\mathcal{K}) corresponds to the connected component of xx in 𝒢t0(𝒦)\mathcal{G}_{t_{0}}(\mathcal{K}), we have xyx\longleftrightarrow y if and only if y𝒞t0,x(𝒦)y\in\mathcal{C}_{t_{0},x}(\mathcal{K}). Given x,yx,y in SS, let

p(x,y):=(xy in 𝒢t0).p(x,y):={\mathbb{P}}(x\longleftrightarrow y\text{ in }\mathcal{G}_{t_{0}})\,. (28)

The following result reduces the verification of (24) and (25) to a percolation problem associated to the random graph 𝒢t0(𝒦)\mathcal{G}_{t_{0}}(\mathcal{K}):

Proposition 5.16.

Condition (24) is satisfied if for all xSx\in S and n+n\in{\mathbb{N}}_{+}

x1,x2,,xnSp(x,x1)p(x1,x2)p(xn1,xn)<+;\sum_{x_{1},x_{2},\dots,x_{n}\in S}p(x,x_{1})p(x_{1},x_{2})\cdots p(x_{n-1},x_{n})<+\infty\,; (29)

equivalently if for all xSx\in S and nn\in{\mathbb{N}}

x1,x2,,xnSp(x,x1)p(x1,x2)p(xn1,xn)𝔼[|𝒞t0,xn|]<+.\sum_{x_{1},x_{2},\dots,x_{n}\in S}p(x,x_{1})p(x_{1},x_{2})\cdots p(x_{n-1},x_{n}){\mathbb{E}}\big{[}|\mathcal{C}_{t_{0},x_{n}}|\big{]}<+\infty\,. (30)

Condition (25) is satisfied if for all xSx\in S and n+n\in{\mathbb{N}}_{+}

x1,x2,,xnSp(x,x1)p(x1,x2)p(xn1,xn)cxns<+;\sum_{x_{1},x_{2},\cdots,x_{n}\in S}p(x,x_{1})p(x_{1},x_{2})\cdots p(x_{n-1},x_{n})c^{s}_{x_{n}}<+\infty\,; (31)

equivalently if for all xSx\in S and nn\in{\mathbb{N}}

x1,x2,,xnSp(x,x1)p(x1,x2)p(xn1,xn)𝔼[w𝒞t0,xncws]<+.\sum_{x_{1},x_{2},\cdots,x_{n}\in S}p(x,x_{1})p(x_{1},x_{2})\cdots p(x_{n-1},x_{n}){\mathbb{E}}\big{[}\sum_{w\in\mathcal{C}_{t_{0},x_{n}}}c_{w}^{s}\big{]}<+\infty\,. (32)

Proposition 5.16 is proved in Section 8.5.

For n=0n=0, (30) and (32) have to be thought of as 𝔼[|𝒞t0,x|]<+{\mathbb{E}}\big{[}|\mathcal{C}_{t_{0},x}|\big{]}<+\infty and 𝔼[w𝒞t0,xcws]<+{\mathbb{E}}\big{[}\sum_{w\in\mathcal{C}_{t_{0},x}}c_{w}^{s}\big{]}<+\infty. Since ySp(x,y)=𝔼[|𝒞t0,x|]\sum_{y\in S}p(x,y)={\mathbb{E}}\big{[}|\mathcal{C}_{t_{0},x}|\big{]}, (30) and (32) are automatically satisfied for all xSx\in S and nn\in{\mathbb{N}} if

supxS𝔼[|𝒞t0,x|]<+ and supxS𝔼[w𝒞t0,xcws]<+.\sup_{x\in S}{\mathbb{E}}\big{[}|\mathcal{C}_{t_{0},x}|\big{]}<+\infty\;\;\text{ and }\;\;\sup_{x\in S}{\mathbb{E}}\big{[}\sum_{w\in\mathcal{C}_{t_{0},x}}c_{w}^{s}\big{]}<+\infty\,. (33)

If supxScxs<+\sup_{x\in S}c_{x}^{s}<+\infty, then (33) reduces to supxS𝔼[|𝒞t0,x|]<+\sup_{x\in S}{\mathbb{E}}\big{[}|\mathcal{C}_{t_{0},x}|\big{]}<+\infty (this is indeed what is checked in [25] in order to prove that 𝒞\mathcal{C} is a core for \mathcal{L} for the exclusion process on d{\mathbb{Z}}^{d} considered in [25, Proposition 4.5]).

6. Applications to SEPs in a random environment

We now consider the SEP with state space S(ω)S(\omega) and jump probability rates cx,y(ω)c_{x,y}(\omega) depending on a random environment ω\omega. More precisely, we have a probability space (Ω,𝒢,𝒫)(\Omega,\mathcal{G},\mathcal{P}) and the environment ω\omega is a generic element of Ω\Omega.

Due to the results presented in Section 5, to have 𝒫\mathcal{P}–a.s. a well defined process built by the graphical construction and enjoying the properties stated in Propositions 5.8, 5.10 and 5.11, it is sufficient to check that for 𝒫\mathcal{P}–a.a. ω\omega Assumption SEP is valid with S=S(ω)S=S(\omega) and cx,ys=cx,ys(ω):=cx,y(ω)+cy,x(ω)c^{s}_{x,y}=c^{s}_{x,y}(\omega):=c_{x,y}(\omega)+c_{y,x}(\omega). This becomes an interesting problem in percolation theory since one has a percolation problem in a random environment. By restating the content of Propositions 5.1 and 5.2 in [11] in the present setting (cx,ys(ω)c^{s}_{x,y}(\omega) here plays the role of the conductance of {x,y}\{x,y\} there) we have the two criteria below, where 𝔼do{\mathbb{E}}^{o}_{d} (𝔼d{\mathbb{E}}_{d}) denotes the set of directed (undirected) edges of the lattice d{\mathbb{Z}}^{d}.

Proposition 6.1.

[11, Proposition 5.1] Let Ω:=[0,+)𝔼do\Omega:=[0,+\infty)^{{\mathbb{E}}^{o}_{d}}. Suppose that 𝒫\mathcal{P} is stationary w.r.t. shifts. Take S(ω):=dS(\omega):={\mathbb{Z}}^{d} and cx,y(ω):=ωx,yc_{x,y}(\omega):=\omega_{x,y} for (x,y)𝔼do(x,y)\in{\mathbb{E}}^{o}_{d} and cx,y(ω):=0c_{x,y}(\omega):=0 otherwise. Then for 𝒫\mathcal{P}–a.a. ω\omega Assumption SEP is satisfied if at least one of the following conditions is fulfilled:

  • (i)

    𝒫\mathcal{P}–a.s. there exists a constant C(ω)C(\omega) such that ωx,yC(ω)\omega_{x,y}\leq C(\omega) for all (x,y)𝔼do(x,y)\in{\mathbb{E}}^{o}_{d};

  • (ii)

    under 𝒫\mathcal{P} the random variables cx,ys(ω)=ωx,y+ωy,xc^{s}_{x,y}(\omega)=\omega_{x,y}+\omega_{y,x} are independent when varying {x,y}\{x,y\} in 𝔼d{\mathbb{E}}_{d};

  • (iii)

    for some k>0k>0 under 𝒫\mathcal{P} the random variables cx,ys(ω)=ωx,y+ωy,xc^{s}_{x,y}(\omega)=\omega_{x,y}+\omega_{y,x} are kk–dependent when varying {x,y}\{x,y\} in 𝔼d{\mathbb{E}}_{d}.

The kk–dependence in Item (iii) means that, given A,BdA,B\subset{\mathbb{Z}}^{d} with distance at least kk, the random fields (cx,ys(ω):x,yA,{x,y}𝔼d)(c_{x,y}^{s}(\omega)\,:x,y\in A,\,\{x,y\}\in{\mathbb{E}}_{d}) and (cx,ys(ω):x,yB,{x,y}𝔼d)(c_{x,y}^{s}(\omega)\,:x,y\in B,\,\{x,y\}\in{\mathbb{E}}_{d}) are independent (see e.g. [17, Section 7.4]).

Proposition 6.2.

[11, Proposition 5.2] Suppose that there is a measurable map ωω^\omega\mapsto\hat{\omega} into the set of locally finite subsets of d{\mathbb{R}}^{d}, such that ω^\hat{\omega} is a Poisson point process (PPP) when ω\omega is sampled according to 𝒫\mathcal{P}. Take S(ω):=ω^S(\omega):=\hat{\omega}. Suppose that, for 𝒫\mathcal{P}–a.a. ω\omega, cx,ys(ω)g(|xy|)c^{s}_{x,y}(\omega)\leq g(|x-y|) for any x,yS(ω)x,y\in S(\omega), where g(r)g(r) is a fixed bounded function such that the map xg(|x|)x\mapsto g(|x|) belongs to L1(d,dx)L^{1}({\mathbb{R}}^{d},dx). Then for 𝒫\mathcal{P}–a.a. ω\omega Assumption SEP is satisfied.

By taking g(r)=2erg(r)=2e^{-r} the above proposition implies the following (recall Mott v.r.h. discussed in the Introduction):

Corollary 6.3.

For 𝒫\mathcal{P}–a.a. ω\omega the Mott v.r.h. on a marked Poisson point process (without the mean field approximation) can be built as SEP by the graphical construction and it is a Feller process, whose Markov generator is given by (23) on local functions.

We now give an application of Proposition 5.16 to check that 𝒞\mathcal{C} is a core for the generator. We discuss a special case, where (33) can be violated (of course, further generalizations are possible, we just aim to illustrate the criterion):

Proposition 6.4.

Let Ω:=[0,+)𝔼do\Omega:=[0,+\infty)^{{\mathbb{E}}^{o}_{d}}. Take S(ω):=dS(\omega):={\mathbb{Z}}^{d} and cx,y(ω):=ωx,yc_{x,y}(\omega):=\omega_{x,y} for (x,y)𝔼do(x,y)\in{\mathbb{E}}^{o}_{d} and cx,y(ω):=0c_{x,y}(\omega):=0 otherwise. Suppose that the random variables cx,ys(ω)=ωx,y+ωy,xc^{s}_{x,y}(\omega)=\omega_{x,y}+\omega_{y,x} with {x,y}𝔼d\{x,y\}\in{\mathbb{E}}_{d} are i.i.d. and have finite (1+ε)(1+\varepsilon)-moment for some ε>0\varepsilon>0 (i.e. the expectation of cx,ys(ω)1+εc^{s}_{x,y}(\omega)^{1+\varepsilon} is finite). Then, for 𝒫\mathcal{P}–a.a. environments ω\omega, the family 𝒞\mathcal{C} of local functions is a core for the generator of the SEP.

We point out that for the above model Assumption SEP is satisfied due to Proposition 6.1.

Proof.

Given the environment ω\omega, we write ω{\mathbb{P}}_{\omega} and pω(x,y)p_{\omega}(x,y) instead of {\mathbb{P}} and p(x,y)p(x,y) (cf. Definition 5.1 and Eq. (28)) in order to stress the dependence on ω\omega. Let pcp_{c} be the critical probability for the Bernoulli bond percolation on d{\mathbb{Z}}^{d}. pc=1p_{c}=1 for d=1d=1, while pc(0,1)p_{c}\in(0,1) for d2d\geq 2. In both cases we can fix a>0a>0 such that 𝒫(cx,ysa)<pc/2\mathcal{P}(c^{s}_{x,y}\geq a)<p_{c}/2 for all {x,y}𝔼d\{x,y\}\in{\mathbb{E}}_{d}. Given (ω,𝒦)(\omega,\mathcal{K}) we consider the undirected graph ¯𝒢t0(ω,𝒦)\bar{}\mathcal{G}_{t_{0}}(\omega,\mathcal{K}) with vertex set d{\mathbb{Z}}^{d} and edges given by the pairs {x,y}𝔼d\{x,y\}\in{\mathbb{E}}_{d} such that (i) cx,ys(ω)ac^{s}_{x,y}(\omega)\geq a or (ii) cx,ys(ω)<ac^{s}_{x,y}(\omega)<a and 𝒦x,ys(t0)>𝒦x,ys(0)\mathcal{K}^{s}_{x,y}(t_{0})>\mathcal{K}^{s}_{x,y}(0). When {x,y}𝔼d\{x,y\}\in{\mathbb{E}}_{d} is an edge of ¯𝒢t0(ω,𝒦)\bar{}\mathcal{G}_{t_{0}}(\omega,\mathcal{K}) we say that it is open, otherwise we say that it is closed. Note that 𝒢t0(𝒦)\mathcal{G}_{t_{0}}(\mathcal{K}) is a subgraph of ¯𝒢t0(ω,𝒦)\bar{}\mathcal{G}_{t_{0}}(\omega,\mathcal{K}). Moreover, under 𝒫(dω)ω\mathcal{P}(d\omega)\otimes{\mathbb{P}}_{\omega}, the random graph ¯𝒢t0(ω,𝒦)\bar{}\mathcal{G}_{t_{0}}(\omega,\mathcal{K}) corresponds to a Bernoulli bond percolation on d{\mathbb{Z}}^{d} [17] with parameter

p:=𝒫(cx,ys(ω)a)+𝒫(dω)𝟙(cx,ys(ω)<a)(1ecx,ys(ω)t0)<pc2+1eat0.p:=\mathcal{P}(c^{s}_{x,y}(\omega)\geq a)+\int\mathcal{P}(d\omega)\mathds{1}(c^{s}_{x,y}(\omega)<a)\left(1-e^{-c_{x,y}^{s}(\omega)t_{0}}\right)<\frac{p_{c}}{2}+1-e^{-at_{0}}\,.

Since a>0a>0 we can fix t0>0t_{0}>0 small enough to ensure that p<pcp<p_{c} (it suffices that 1eat0<pc/21-e^{-at_{0}}<p_{c}/2). As a consequence, 𝒫(dω)ω\mathcal{P}(d\omega)\otimes{\mathbb{P}}_{\omega}–a.s. the graph ¯𝒢t0(ω,𝒦)\bar{}\mathcal{G}_{t_{0}}(\omega,\mathcal{K}) is subcritical. Due to the results on subcritical Bernoulli bond percolation (cf. [17, Theorem (5.4)]), there exists c>0c>0 such that the 𝒫(dω)ω\mathcal{P}(d\omega)\otimes{\mathbb{P}}_{\omega}–probability that two points x,ydx,y\in{\mathbb{Z}}^{d} are connected in ¯𝒢t0(ω,𝒦)\bar{}\mathcal{G}_{t_{0}}(\omega,\mathcal{K}) is bounded by ec|xy|e^{-c|x-y|}. Hence also the 𝒫(dω)ω\mathcal{P}(d\omega)\otimes{\mathbb{P}}_{\omega}–probability that two points x,ydx,y\in{\mathbb{Z}}^{d} are connected in 𝒢t0(𝒦)\mathcal{G}_{t_{0}}(\mathcal{K}) is bounded by ec|xy|e^{-c|x-y|}, i.e.

E[pω(x,y)]ec|xy|x,yS,{\rm E}\left[p_{\omega}(x,y)\right]\leq e^{-c|x-y|}\qquad\forall x,y\in S\,, (34)

where E[]{\rm E}[\cdot] denotes the expectation w.r.t. 𝒫\mathcal{P}.

To get that 𝒞\mathcal{C} is a core for =(ω)\mathcal{L}=\mathcal{L}(\omega) 𝒫\mathcal{P}–a.s. we apply Proposition 5.16. Since we only need that, 𝒫\mathcal{P}–a.s., the countable family of conditions (29) indexed by xSx\in S and n+n\in{\mathbb{N}}_{+} holds, it is enough to prove that, given xSx\in S and n+n\in{\mathbb{N}}_{+}, condition (29) holds 𝒫\mathcal{P}–a.s.; to this aim it is enough to show that

x1,x2,,xnSE[pω(x,x1)pω(x1,x2)pω(xn1,xn)]<+.\sum_{x_{1},x_{2},\dots,x_{n}\in S}{\rm E}\left[p_{\omega}(x,x_{1})p_{\omega}(x_{1},x_{2})\cdots p_{\omega}(x_{n-1},x_{n})\right]<+\infty\,. (35)

By applying the Cauchy–Schwarz inequality several times, using that pω(a,b)αpω(a,b)p_{\omega}(a,b)^{\alpha}\leq p_{\omega}(a,b) for α1\alpha\geq 1 since pω(a,b)[0,1]p_{\omega}(a,b)\in[0,1], and using (34), as detailed in an example below, we can bound the expectation in (35) by

exp{C(|xx1|+|x1x2|++|xn1xn|)},\exp\{-C(|x-x_{1}|+|x_{1}-x_{2}|+\cdots+|x_{n-1}-x_{n}|)\}\,, (36)

for some constant C>0C>0 determined by cc and nn. This allows to get (35). For example, for n=3n=3, we have

E[pω(x,x1)pω(x1,x2)pω(x2,x3)]E[pω(x,x1)]12E[pω(x1,x2)pω(x2,x3)]12E[pω(x,x1)]12E[pω(x1,x2)]14E[pω(x2,x3)]14exp{c4(|xx1|+|x1x2|+|x2x3|)}.\begin{split}{\rm E}\left[p_{\omega}(x,x_{1})p_{\omega}(x_{1},x_{2})p_{\omega}(x_{2},x_{3})\right]&\leq{\rm E}\left[p_{\omega}(x,x_{1})\right]^{\frac{1}{2}}{\rm E}\left[p_{\omega}(x_{1},x_{2})p_{\omega}(x_{2},x_{3})\right]^{\frac{1}{2}}\\ &\leq{\rm E}\left[p_{\omega}(x,x_{1})\right]^{\frac{1}{2}}{\rm E}\left[p_{\omega}(x_{1},x_{2})\right]^{\frac{1}{4}}{\rm E}\left[p_{\omega}(x_{2},x_{3})\right]^{\frac{1}{4}}\\ &\leq\exp\{-\frac{c}{4}(|x-x_{1}|+|x_{1}-x_{2}|+|x_{2}-x_{3}|)\}\,.\end{split}
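For general n the same iteration can be bookkept as follows (with x_{0}:=x); we record it only as a guide, since the precise value of the constant plays no role in (35). Iterating the Cauchy–Schwarz step one gets

{\rm E}\bigl[p_{\omega}(x_{0},x_{1})\cdots p_{\omega}(x_{n-1},x_{n})\bigr]\leq\prod_{i=1}^{n}{\rm E}\bigl[p_{\omega}(x_{i-1},x_{i})\bigr]^{\alpha_{i}}\,,\qquad\alpha_{i}:=2^{-\min\{i,\,n-1\}}\,,

and, since \alpha_{i}\geq 2^{-(n-1)} and {\rm E}[p_{\omega}(x_{i-1},x_{i})]\leq e^{-c|x_{i-1}-x_{i}|}\leq 1 by (34), the expectation in (35) is bounded by (36) with C:=c\,2^{-(n-1)}.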

By similar arguments one can check condition (31). In particular, as above, it is enough to prove that given xSx\in S and n+n\in{\mathbb{N}}_{+} it holds

x1,x2,,xnSE[pω(x,x1)pω(x1,x2)pω(xn1,xn)cxns(ω)]<+.\sum_{x_{1},x_{2},\cdots,x_{n}\in S}{\rm E}\left[p_{\omega}(x,x_{1})p_{\omega}(x_{1},x_{2})\cdots p_{\omega}(x_{n-1},x_{n})c^{s}_{x_{n}}(\omega)\right]<+\infty\,. (37)

The only difference here is that one has to use Hölder’s inequality in the first step. In particular, one has to bound the expectation in (37) by

E[pω(x,x1)pω(x1,x2)pω(xn1,xn)]ε1+εE[cxns(ω)1+ε]11+ε<+.{\rm E}\left[p_{\omega}(x,x_{1})p_{\omega}(x_{1},x_{2})\cdots p_{\omega}(x_{n-1},x_{n})\right]^{\frac{\varepsilon}{1+\varepsilon}}{\rm E}\left[c^{s}_{x_{n}}(\omega)^{1+\varepsilon}\right]^{\frac{1}{1+\varepsilon}}<+\infty\,. (38)

Due to the stationarity of 𝒫\mathcal{P} and our moment assumption, the expectation E[cxns(ω)1+ε]{\rm E}\left[c^{s}_{x_{n}}(\omega)^{1+\varepsilon}\right] is bounded uniformly in xnx_{n}, while the first expectation in (38) is bounded by (36) as already observed. This allows to conclude.

7. Graphical construction, Markov generator and duality of SSEP: proofs

In this section we detail the graphical construction of the SSEP, which requires some care with measurability issues. At the end we provide the proofs of Propositions 3.2, 3.3, 3.4, 3.5, 3.6 and Lemma 3.9 presented in Section 3.

Let us start with the graphical construction. For later use we note that (DS)\mathcal{B}(D_{\mathbb{N}}^{\mathcal{E}_{S}}) is the σ\sigma–algebra generated by the coordinate maps DS𝒦𝒦x,yDD_{\mathbb{N}}^{\mathcal{E}_{S}}\ni\mathcal{K}\mapsto\mathcal{K}_{x,y}\in D_{\mathbb{N}}, as {x,y}\{x,y\} varies in S\mathcal{E}_{S} (see the metric (6)). Since (D)\mathcal{B}(D_{{\mathbb{N}}}) is generated by the coordinate maps Dξξ(t)D_{{\mathbb{N}}}\ni\xi\mapsto\xi(t)\in{\mathbb{N}} with t+t\in{\mathbb{R}}_{+} (see Section 2), we get that (DS)\mathcal{B}(D_{\mathbb{N}}^{\mathcal{E}_{S}}) is the σ\sigma–algebra generated by the maps 𝒦𝒦x,y(t)\mathcal{K}\mapsto\mathcal{K}_{x,y}(t) as {x,y}\{x,y\} varies in S\mathcal{E}_{S} and tt varies in +{\mathbb{R}}_{+}.

Given 𝒦DS\mathcal{K}\in D_{\mathbb{N}}^{\mathcal{E}_{S}}, 𝒦x,y\mathcal{K}_{x,y} is a càdlàg path with values in {\mathbb{N}}, hence 𝒦x,y\mathcal{K}_{x,y} has a finite set of jump times on any interval [0,T][0,T], T>0T>0. Given xSx\in S and 𝒦DS\mathcal{K}\in D_{\mathbb{N}}^{\mathcal{E}_{S}}, we define 𝒦x:+{+}\mathcal{K}_{x}:{\mathbb{R}}_{+}\to{\mathbb{N}}\cup\{+\infty\} as

𝒦x(t):=yS:yx𝒦x,y(t).\mathcal{K}_{x}(t):=\sum_{y\in S:y\not=x}\mathcal{K}_{x,y}(t)\,. (39)

Since each map 𝒦𝒦x,y(t)\mathcal{K}\mapsto\mathcal{K}_{x,y}(t) is measurable, the same holds for the map 𝒦𝒦x(t)\mathcal{K}\mapsto\mathcal{K}_{x}(t). We point out that the path 𝒦x\mathcal{K}_{x} is not necessarily càdlàg. For example, take 𝒦DS\mathcal{K}\in D_{{\mathbb{N}}}^{\mathcal{E}_{S}} such that, for 1i<j1\leq i<j,

𝒦si,sj(t):={0 if i1,𝟙(t1+1/j) if i=1.\mathcal{K}_{s_{i},s_{j}}(t):=\begin{cases}0&\text{ if }i\not=1\,,\\ \mathds{1}(t\geq 1+1/j)&\text{ if }i=1\,.\end{cases}

Then 𝒦s1(t)\mathcal{K}_{s_{1}}(t) equals zero for t1t\leq 1 and equals ++\infty for t>1t>1.

Definition 7.1.

We define ΓDS\Gamma\subset D_{{\mathbb{N}}}^{\mathcal{E}_{S}} as the family of 𝒦DS\mathcal{K}\in D_{{\mathbb{N}}}^{\mathcal{E}_{S}} such that

  • (i)

    for all xSx\in S, Kx(t)K_{x}(t) is finite for all t+t\in{\mathbb{R}}_{+} and the map Kx:+K_{x}:{\mathbb{R}}_{+}\to{\mathbb{N}} is càdlàg with jumps of value +1+1;

  • (ii)

    for any {x,y}S\{x,y\}\in\mathcal{E}_{S} the path 𝒦x,y\mathcal{K}_{x,y} has only jumps of value +1+1;

  • (iii)

    given any {x,y}{x,y}\{x,y\}\not=\{x^{\prime},y^{\prime}\} in S\mathcal{E}_{S}, the set of jump times of 𝒦x,y\mathcal{K}_{x,y} and the set of jump times of 𝒦x,y\mathcal{K}_{x^{\prime},y^{\prime}} are disjoint;

  • (iv)

    𝒦x,y(0)=0\mathcal{K}_{x,y}(0)=0 for all {x,y}S\{x,y\}\in\mathcal{E}_{S}.

Lemma 7.2.

Γ\Gamma is measurable, i.e. Γ(DS)\Gamma\in\mathcal{B}(D_{{\mathbb{N}}}^{\mathcal{E}_{S}}), and (Γ)=1{\mathbb{P}}(\Gamma)=1.

Proof.

It is a standard fact that the 𝒦\mathcal{K}’s satisfying (ii), (iii) and (iv) form a measurable set 𝒟DS\mathcal{D}\subset D_{\mathbb{N}}^{\mathcal{E}_{S}} with {\mathbb{P}}–probability one. By property (ii) for all 𝒦𝒟\mathcal{K}\in\mathcal{D} and xSx\in S the path KxK_{x} is weakly increasing. Hence 𝒜:=xS{𝒦𝒟:𝒦x(t)<+t0}=xSt+{𝒦𝒟:𝒦x(t)<+}\mathcal{A}:=\cap_{x\in S}\{\mathcal{K}\in\mathcal{D}\,:\,\mathcal{K}_{x}(t)<+\infty\;\forall t\geq 0\}=\cap_{x\in S}\cap_{t\in{\mathbb{Q}}_{+}}\{\mathcal{K}\in\mathcal{D}\,:\,\mathcal{K}_{x}(t)<+\infty\}, where +:=+{\mathbb{Q}}_{+}:={\mathbb{Q}}\cap{\mathbb{R}}_{+}. Since the map 𝒦𝒦x(t)\mathcal{K}\mapsto\mathcal{K}_{x}(t) is measurable, also the set {𝒦𝒟:𝒦x(t)<+}\{\mathcal{K}\in\mathcal{D}\,:\,\mathcal{K}_{x}(t)<+\infty\} is measurable. Moreover this set has {\mathbb{P}}-probability 11 since (𝒟)=1{\mathbb{P}}(\mathcal{D})=1 and, by (C1), 𝔼[𝒦x(t)]=tyS:yxcx,y=tcx<+{\mathbb{E}}[\mathcal{K}_{x}(t)]=t\sum_{y\in S:y\not=x}c_{x,y}=tc_{x}<+\infty. This allows to conclude that 𝒜\mathcal{A} is measurable and (𝒜)=1{\mathbb{P}}(\mathcal{A})=1.

To conclude it is enough to show that Γ=𝒜\Gamma=\mathcal{A}. Trivially Γ𝒜\Gamma\subset\mathcal{A}. To prove that 𝒜Γ\mathcal{A}\subset\Gamma we just need to show that, given 𝒦𝒜\mathcal{K}\in\mathcal{A} and xSx\in S, Kx:+K_{x}:{\mathbb{R}}_{+}\to{\mathbb{N}} is càdlàg with jumps of value +1+1. Let us first prove that KxK_{x} is càdlàg . To this aim we fix t0t\geq 0 and take T>tT>t. We note that yS:yx𝒦x,y(T)=𝒦x(T)<+\sum_{y\in S:y\not=x}\mathcal{K}_{x,y}(T)=\mathcal{K}_{x}(T)<+\infty, 𝒦x,y(s)𝒦x,y(T)\mathcal{K}_{x,y}(s)\leq\mathcal{K}_{x,y}(T) for any s(t,T)s\in(t,T) and limst𝒦x,y(s)=𝒦x,y(t)\lim_{s\downarrow t}\mathcal{K}_{x,y}(s)=\mathcal{K}_{x,y}(t) (for the first two properties use that 𝒦𝒜\mathcal{K}\in\mathcal{A}, for the third one use that 𝒦x,yD\mathcal{K}_{x,y}\in D_{\mathbb{N}}). Then, by dominated convergence applied to the sum among ySy\in S with yxy\not=x, we get that limst𝒦x(s)=yS:yxlimst𝒦x,y(s)=𝒦x(t)\lim_{s\downarrow t}\mathcal{K}_{x}(s)=\sum_{y\in S:y\not=x}\lim_{s\downarrow t}\mathcal{K}_{x,y}(s)=\mathcal{K}_{x}(t) (i.e. 𝒦x\mathcal{K}_{x} is right-continuous). Similarly, for t>0t>0, we get that limst𝒦x(s)=yS:yxlimst𝒦x,y(s)=yS:yx𝒦x,y(t)\lim_{s\uparrow t}\mathcal{K}_{x}(s)=\sum_{y\in S:y\not=x}\lim_{s\uparrow t}\mathcal{K}_{x,y}(s)=\sum_{y\in S:y\not=x}\mathcal{K}_{x,y}(t-) (i.e. 𝒦x\mathcal{K}_{x} has left limits). Due to the above identities, if tt is a jump time of the càdlàg path 𝒦x\mathcal{K}_{x}, then the jump value is 𝒦x(t)𝒦x(t)=yS:yx𝒦x,y(t)yS:yx𝒦x,y(t)\mathcal{K}_{x}(t)-\mathcal{K}_{x}(t-)=\sum_{y\in S:y\not=x}\mathcal{K}_{x,y}(t)-\sum_{y\in S:y\not=x}\mathcal{K}_{x,y}(t-). The above expression and properties (ii) and (iii) imply that 𝒦x(t)𝒦x(t)=+1\mathcal{K}_{x}(t)-\mathcal{K}_{x}(t-)=+1. ∎

Definition 7.3.

Let 𝒦Γ\mathcal{K}\in\Gamma. Given W+W\subset{\mathbb{R}}_{+} and {x,y}S\{x,y\}\in\mathcal{E}_{S} we define JW(𝒦x,y)J_{W}(\mathcal{K}_{x,y}) as the set of jump times of 𝒦x,y\mathcal{K}_{x,y} which lie in WW. Similarly, given xSx\in S, we define JW(𝒦x)J_{W}(\mathcal{K}_{x}) as the set of jump times of 𝒦x\mathcal{K}_{x} which lie in WW.

Note that, if 𝒦Γ\mathcal{K}\in\Gamma, then JW(𝒦x)=yS{x}JW(𝒦x,y)J_{W}(\mathcal{K}_{x})=\cup_{y\in S\setminus\{x\}}J_{W}(\mathcal{K}_{x,y}) and J[0,T](𝒦x)J_{[0,T]}(\mathcal{K}_{x}) is finite for all xSx\in S and T0T\geq 0. Moreover, 0 cannot be a jump time of Kx,yK_{x,y} or KxK_{x} since both paths Kx,yK_{x,y} and KxK_{x} are càdlàg. In particular, J(0,t](𝒦x)=J[0,t](𝒦x)J_{(0,t]}(\mathcal{K}_{x})=J_{[0,t]}(\mathcal{K}_{x}) and J(0,t](𝒦x,y)=J[0,t](𝒦x,y)J_{(0,t]}(\mathcal{K}_{x,y})=J_{[0,t]}(\mathcal{K}_{x,y}).


We introduce \partial as an abstract state not in SS. Given 𝒦Γ\mathcal{K}\in\Gamma and given t0t\geq 0, we associate to each xSx\in S an element Xtx[𝒦]X^{x}_{t}[\mathcal{K}] in S{}S\cup\{\partial\} as follows:

Definition 7.4 (Definition of Xtx[𝒦]X^{x}_{t}\text{[}\mathcal{K}\text{]} for 𝒦Γ\mathcal{K}\in\Gamma and t0t\geq 0).

  
\bullet We first consider the set A1:=J[0,t](𝒦x)A_{1}:=J_{[0,t]}(\mathcal{K}_{x}). If A1A_{1} is empty, then we set Xtx[𝒦]:=xX^{x}_{t}[\mathcal{K}]:=x and we stop. If A1A_{1} is nonempty, then we define t1t_{1} as the largest time in A1A_{1} and x1x_{1} as the unique point in SS such that t1J[0,t](𝒦x,x1)t_{1}\in J_{[0,t]}(\mathcal{K}_{x,x_{1}}) (note that t1t_{1} and x1x_{1} are well defined since 𝒦Γ\mathcal{K}\in\Gamma).

\bullet In general, arrived at tk,xkt_{k},x_{k} with k1k\geq 1, we consider the set Ak+1:=J[0,tk)(𝒦xk)A_{k+1}:=J_{[0,t_{k})}(\mathcal{K}_{x_{k}}). If Ak+1A_{k+1} is empty, then we set Xtx[𝒦]:=xkX^{x}_{t}[\mathcal{K}]:=x_{k} and we stop. If Ak+1A_{k+1} is nonempty, then we define tk+1t_{k+1} as the largest time in Ak+1A_{k+1} and xk+1x_{k+1} as the unique point in SS such that tk+1J[0,tk)(𝒦xk,xk+1)t_{k+1}\in J_{[0,t_{k})}(\mathcal{K}_{x_{k},x_{k+1}}).

\bullet If the algorithm stops in a finite number of steps, then Xtx[𝒦]X^{x}_{t}[\mathcal{K}] has been defined by the algorithm itself and belongs to SS. If the algorithm does not stop, then we set Xtx[𝒦]:=X^{x}_{t}[\mathcal{K}]:=\partial.

Remark 7.5.

Note that X0x[𝒦]=xX_{0}^{x}[\mathcal{K}]=x according to the above algorithm (indeed, in this case A1=A_{1}=\emptyset as 𝒦x\mathcal{K}_{x} has no jump at time 0 being càdlàg).
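To make the algorithm of Definition 7.4 concrete, we give a minimal Python sketch of the backward tracing for finitely many clocks with distinct jump times, as is the case on Γ\Gamma. The data format, the function name and the toy example are ours and purely illustrative.

```python
import bisect

def trace_back(x, t, clocks):
    """Compute X^x_t[K] by the backward tracing of Definition 7.4.

    clocks: dict mapping frozenset({u, v}) -> sorted list of the jump times
            of K_{u,v} (finitely many clocks, all jump times distinct).
    """
    cur, cutoff, inclusive = x, t, True   # first step scans [0, t], later steps [0, t_k)
    while True:
        best_time, best_nbr = None, None
        for edge, times in clocks.items():
            if cur not in edge:
                continue
            # index of the latest admissible ring of this clock
            i = (bisect.bisect_right if inclusive else bisect.bisect_left)(times, cutoff) - 1
            if i >= 0 and (best_time is None or times[i] > best_time):
                best_time = times[i]
                (best_nbr,) = edge - {cur}
        if best_time is None:             # the current set A_k is empty: stop
            return cur
        cur, cutoff, inclusive = best_nbr, best_time, False

# Toy example in the spirit of Figure 2 below: the clock of {x,y} rings at
# time 1 and the clock of {y,z} rings at time 2.
clocks = {frozenset({"x", "y"}): [1.0], frozenset({"y", "z"}): [2.0]}
assert trace_back("x", 0.5, clocks) == "x"
assert trace_back("x", 1.5, clocks) == "y"
assert trace_back("y", 2.0, clocks) == "z" and trace_back("z", 2.0, clocks) == "x"
```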

We endow the countable set S{}S\cup\{\partial\} with the discrete topology. Below, when considering functions with domain Γ\Gamma, we consider Γ(DS)\Gamma\in\mathcal{B}(D_{\mathbb{N}}^{\mathcal{E}_{S}}) as a measurable space with σ\sigma–algebra {AΓ:A(DS)}={A(DS):AΓ}\{A\cap\Gamma\,:A\in\mathcal{B}(D_{\mathbb{N}}^{\mathcal{E}_{S}})\}=\{A\in\mathcal{B}(D_{\mathbb{N}}^{\mathcal{E}_{S}}):A\subset\Gamma\}.

Lemma 7.6.

Given xSx\in S and t0t\geq 0, the map Γ𝒦Xtx[𝒦]S{}\Gamma\ni\mathcal{K}\mapsto X^{x}_{t}[\mathcal{K}]\in S\cup\{\partial\} is measurable.

Proof.

We just need to show that, given ySy\in S, the set 𝒲={𝒦Γ:Xtx[𝒦]=y}\mathcal{W}=\{\mathcal{K}\in\Gamma\,:\,X^{x}_{t}[\mathcal{K}]=y\} is measurable. Let us call Nx(𝒦)N_{x}(\mathcal{K}) the number of steps in the algorithm of Definition 7.4. Below x1,t1,x2,t2,..x_{1},t_{1},x_{2},t_{2},.. are as in the above algorithm. Moreover, to simplify the notation, we take t=1t=1.

We first take yxy\not=x. The above set 𝒲\mathcal{W} is then the union of the countable family of sets {𝒦Γ:Nx(𝒦)=m,x1=a1,x2=a2,xm1=am1,xm=y}\{\mathcal{K}\in\Gamma\,:\,N_{x}(\mathcal{K})=m,\,x_{1}=a_{1},\,x_{2}=a_{2}\,,\,...\,x_{m-1}=a_{m-1}\,,\;x_{m}=y\}, where m{1,2,}m\in\{1,2,\dots\}, a0:=xa_{0}:=x, am:=ya_{m}:=y and a1,a2,,am1Sa_{1},a_{2},\dots,a_{m-1}\in S satisfy aiai+1a_{i}\not=a_{i+1} for i=0,1,,m1i=0,1,\dots,m-1. Let us show for example that the set 𝒜:={𝒦Γ:Nx(𝒦)=1,x1=y}\mathcal{A}:=\{\mathcal{K}\in\Gamma\,:\,N_{x}(\mathcal{K})=1,\,x_{1}=y\} is measurable (the case with Nx(𝒦)=m2N_{x}(\mathcal{K})=m\geq 2 is similar, just more involved from a notational viewpoint). We claim that

𝒜=n01(nn0(k=12n𝒜n,k)),\mathcal{A}=\cup_{n_{0}\geq 1}\left(\cap_{n\geq n_{0}}\left(\cup_{k=1}^{2^{n}}\mathcal{A}_{n,k}\right)\right)\,, (40)

where

𝒜n,k:={𝒦Γ:𝒦x(1)=𝒦x(k2n),𝒦x(k2n)𝒦x((k1)2n)=1,𝒦x,y(k2n)𝒦x,y((k1)2n)=1,𝒦y((k1)2n)=𝒦y(0)}.\begin{split}\mathcal{A}_{n,k}:=\bigl{\{}&\mathcal{K}\in\Gamma\,:\,\mathcal{K}_{x}(1)=\mathcal{K}_{x}(k2^{-n})\,,\;\mathcal{K}_{x}(k2^{-n})-\mathcal{K}_{x}((k-1)2^{-n})=1\,,\;\\ &\mathcal{K}_{x,y}(k2^{-n})-\mathcal{K}_{x,y}((k-1)2^{-n})=1\,,\;\mathcal{K}_{y}((k-1)2^{-n})=\mathcal{K}_{y}(0)\bigr{\}}\,.\end{split}

To prove our claim take 𝒦𝒜\mathcal{K}\in\mathcal{A}. Since 𝒦Γ\mathcal{K}\in\Gamma, we know that the map 𝒦x\mathcal{K}_{x} takes values in {\mathbb{N}} and is càdlàg with jumps equal to +1+1. Recall that t1>0t_{1}>0 is the last time in the nonempty set J[0,1](𝒦x)J_{[0,1]}(\mathcal{K}_{x}). This implies that 𝒦x(1)=𝒦x(s)\mathcal{K}_{x}(1)=\mathcal{K}_{x}(s) for t1s1t_{1}\leq s\leq 1. Given n0n_{0} and nn0n\geq n_{0} let k{1,,2n}k\in\{1,\dots,2^{n}\} with (k1)2n<t1k2n(k-1)2^{-n}<t_{1}\leq k2^{-n}. Then, by the above observation, 𝒦x(1)=𝒦x(k2n)\mathcal{K}_{x}(1)=\mathcal{K}_{x}(k2^{-n}). Since 𝒦x\mathcal{K}_{x} is càdlàg, there exists ε>0\varepsilon>0 small such that 𝒦x(t1)1=𝒦x(s)\mathcal{K}_{x}(t_{1})-1=\mathcal{K}_{x}(s) for all s[t1ε,t1)s\in[t_{1}-\varepsilon,t_{1}). Therefore, by taking n0n_{0} such that 2n0<ε2^{-n_{0}}<\varepsilon and using that 𝒦𝒜\mathcal{K}\in\mathcal{A}, we have that 𝒦x(k2n)𝒦x((k1)2n)=𝒦x,y(k2n)𝒦x,y((k1)2n)=1\mathcal{K}_{x}(k2^{-n})-\mathcal{K}_{x}((k-1)2^{-n})=\mathcal{K}_{x,y}(k2^{-n})-\mathcal{K}_{x,y}((k-1)2^{-n})=1. By definition of 𝒜\mathcal{A} we know that 𝒦y\mathcal{K}_{y} has no jumps in [0,t1)[0,t_{1}), thus implying that 𝒦y((k1)2n)=𝒦y(0)\mathcal{K}_{y}((k-1)2^{-n})=\mathcal{K}_{y}(0). This concludes the proof that, if 𝒦𝒜\mathcal{K}\in\mathcal{A}, then 𝒦\mathcal{K} belongs to the r.h.s. of (40).

Suppose now that 𝒦\mathcal{K} belongs to the r.h.s. of (40). Let us prove that 𝒦𝒜\mathcal{K}\in\mathcal{A}. First we observe that, for some n01n_{0}\geq 1 and for all nn0n\geq n_{0}, we can find kn{1,2,3,,2n}k_{n}\in\{1,2,3,\dots,2^{n}\} such that 𝒦𝒜n,kn\mathcal{K}\in\mathcal{A}_{n,k_{n}}. By definition of 𝒜n,kn\mathcal{A}_{n,k_{n}} we have 𝒦x(1)=𝒦x(s)\mathcal{K}_{x}(1)=\mathcal{K}_{x}(s) for all s[kn2n,1]s\in[k_{n}2^{-n},1] and 𝒦x(kn2n)𝒦x((kn1)2n)=1\mathcal{K}_{x}(k_{n}2^{-n})-\mathcal{K}_{x}((k_{n}-1)2^{-n})=1. To simplify the notation let a:=(kn1)2na:=(k_{n}-1)2^{-n} and b:=kn2nb:=k_{n}2^{-n}. By the above properties and since 𝒦Γ\mathcal{K}\in\Gamma, 𝒦x\mathcal{K}_{x} has exactly one jump time in (a,b](a,b]. Setting c:=(a+b)/2c:=(a+b)/2, a<c<ba<c<b are subsequent points in 2n12^{-n-1}{\mathbb{N}}. If the above jump time lies in (c,b](c,b], then it must be kn+12n1=b=kn2nk_{n+1}2^{-n-1}=b=k_{n}2^{-n} and (kn+11)2n1=c>a=(kn1)2n(k_{n+1}-1)2^{-n-1}=c>a=(k_{n}-1)2^{-n}. If the above jump time lies in (a,c](a,c], then it must be kn+12n1=c<kn2n=bk_{n+1}2^{-n-1}=c<k_{n}2^{-n}=b and (kn+11)2n1=a=(kn1)2n(k_{n+1}-1)2^{-n-1}=a=(k_{n}-1)2^{-n}. This proves that the sequence kn2nk_{n}2^{-n} is weakly decreasing in [0,1][0,1], while the sequence (kn1)2n(k_{n}-1)2^{-n} is weakly increasing in [0,1][0,1]. Since in addition the terms (kn1)2n(k_{n}-1)2^{-n} and kn2nk_{n}2^{-n} differ by 2n2^{-n}, we obtain that (kn1)2n(k_{n}-1)2^{-n} converges to some u[0,1]u\in[0,1] from the left and kn2nk_{n}2^{-n} converges to the same u[0,1]u\in[0,1] from the right. It cannot be u=0u=0 otherwise it would be (kn1)2n=0(k_{n}-1)2^{-n}=0 for nn0n\geq n_{0} and we would have 𝒦x,y(2n)𝒦x,y(0)=1\mathcal{K}_{x,y}(2^{-n})-\mathcal{K}_{x,y}(0)=1 (due to the definition of 𝒜n,kn\mathcal{A}_{n,k_{n}}), thus contradicting the right continuity of 𝒦x,y\mathcal{K}_{x,y}. Hence u>0u>0. We claim that 𝒦𝒜\mathcal{K}\in\mathcal{A} and u=t1u=t_{1}. By taking the limit n+n\to+\infty in the properties defining 𝒜n,kn\mathcal{A}_{n,k_{n}} and using that 𝒦x,y\mathcal{K}_{x,y}, 𝒦x\mathcal{K}_{x} and 𝒦y\mathcal{K}_{y} are càdlàg, we get 𝒦x(1)=𝒦x(u)\mathcal{K}_{x}(1)=\mathcal{K}_{x}(u), 𝒦x(u)𝒦x(u)=1\mathcal{K}_{x}(u)-\mathcal{K}_{x}(u-)=1, 𝒦x,y(u)𝒦x,y(u)=1\mathcal{K}_{x,y}(u)-\mathcal{K}_{x,y}(u-)=1, 𝒦y(u)=𝒦y(0)\mathcal{K}_{y}(u-)=\mathcal{K}_{y}(0). This implies that 𝒦𝒜\mathcal{K}\in\mathcal{A} and u=t1u=t_{1}. At this point we have proved our claim (40).

Having (40), which involves countable intersections and unions, the measurability of 𝒜\mathcal{A} follows from the measurability of 𝒜n,k\mathcal{A}_{n,k} (for the latter recall that the maps DS𝒦𝒦x,y(s)D_{\mathbb{N}}^{S}\ni\mathcal{K}\to\mathcal{K}_{x,y}(s)\in{\mathbb{N}} and DS𝒦𝒦x(s)D_{\mathbb{N}}^{S}\ni\mathcal{K}\to\mathcal{K}_{x}(s)\in{\mathbb{N}} are measurable).

When y=xy=x, the analysis is the same. One just has to consider in addition the event {𝒦Γ:Nx(𝒦)=0}={𝒦Γ:𝒦x(0)=𝒦x(t)}\{\mathcal{K}\in\Gamma\,:\,N_{x}(\mathcal{K})=0\}=\{\mathcal{K}\in\Gamma\,:\,\mathcal{K}_{x}(0)=\mathcal{K}_{x}(t)\}, which is trivially measurable. ∎

Lemma 7.7.

Given 𝒦Γ\mathcal{K}\in\Gamma and xSx\in S, the path +tXtx[𝒦]S{}{\mathbb{R}}_{+}\ni t\mapsto X^{x}_{t}[\mathcal{K}]\in S\cup\{\partial\} is càdlàg.

Proof.

We prove the càdlàg property at t>0t>0; the case t=0t=0 is similar. We fix 𝒦Γ\mathcal{K}\in\Gamma, xSx\in S. Since 𝒦Γ\mathcal{K}\in\Gamma, 𝒦x\mathcal{K}_{x} is càdlàg and therefore 𝒦x\mathcal{K}_{x} has no jump in (t,t+ε](t,t+\varepsilon] and in [tε,t)[t-\varepsilon,t) for ε>0\varepsilon>0 small enough. By the first step in the algorithm in Definition 7.4 we conclude respectively that Xsx[𝒦]=Xtx[𝒦]X^{x}_{s}[\mathcal{K}]=X^{x}_{t}[\mathcal{K}] for any s[t,t+ε]s\in[t,t+\varepsilon] and Xsx[𝒦]=Xtεx[𝒦]X^{x}_{s}[\mathcal{K}]=X^{x}_{t-\varepsilon}[\mathcal{K}] for any s[tε,t)s\in[t-\varepsilon,t). ∎

Let us point out an important symmetry (called below reflection invariance) of the law {\mathbb{P}}. We denote by τx,y(1)<τx,y(2)<\tau_{x,y}^{(1)}<\tau_{x,y}^{(2)}<\cdots the jump times of the random path Nx,y()N_{x,y}(\cdot). Setting τx,y(0):=0\tau_{x,y}^{(0)}:=0, we have that τx,y(k)τx,y(k1)\tau_{x,y}^{(k)}-\tau_{x,y}^{(k-1)}, k1k\geq 1, are i.i.d. exponential random variables of parameter cx,yc_{x,y}. Another characterization is that {τx,y(k):k1}\{\tau_{x,y}^{(k)}\,:\,k\geq 1\} is a Poisson point process with intensity cx,yc_{x,y}. As a consequence, given T>0T>0, the set (0,T){τx,y(k):k1}(0,T)\cap\{\tau_{x,y}^{(k)}\,:\,k\geq 1\} is a Poisson point process on (0,T)(0,T) with intensity cx,yc_{x,y}, i.e. the numbers of points in disjoint Borel subsets of (0,T)(0,T) are independent random variables and the number of points in a given Borel subset AA of (0,T)(0,T) is a Poisson random variable with parameter cx,yc_{x,y} times the Lebesgue measure of AA. Due to the above characterization of the Poisson point process, we have that also (0,T){Tτx,y(k):k1}(0,T)\cap\{T-\tau_{x,y}^{(k)}\,:\,k\geq 1\} is a Poisson point process on (0,T)(0,T) with intensity cx,yc_{x,y}.

Due to the above reflection invariance, the independence of the Poisson processes and the algorithm in Definition 7.4, we get that, by sampling 𝒦\mathcal{K} with probability {\mathbb{P}}, the random variable Xtx[𝒦]X^{x}_{t}[\mathcal{K}] is distributed as the state at time tt of the random walk with conductances cy,zc_{y,z} starting at xx, with the convention that Xtx[𝒦]=X^{x}_{t}[\mathcal{K}]=\partial corresponds to the explosion of the above random walk.

The above observation will be crucial in proving Lemma 7.9 below. We first give the following definition:

Definition 7.8.

We define the set ΓΓ\Gamma_{*}\subset\Gamma as Γ:={𝒦Γ:Xtx[𝒦]S for all xS,t0}\Gamma_{*}:=\{\mathcal{K}\in\Gamma\,:\,X^{x}_{t}[\mathcal{K}]\in S\text{ for all }x\in S,t\geq 0\}.

Lemma 7.9.

The set Γ\Gamma_{*} is measurable and (Γ)=1{\mathbb{P}}(\Gamma_{*})=1.

Proof.

Let Γ:={𝒦Γ:Xtx[𝒦]S for all xS,t+}\Gamma_{\circ}:=\{\mathcal{K}\in\Gamma\,:\,X^{x}_{t}[\mathcal{K}]\in S\text{ for all }x\in S,t\in{\mathbb{Q}}\cap{\mathbb{R}}_{+}\}. By Lemma 7.7, Γ=Γ\Gamma_{\circ}=\Gamma_{*}. Due to Lemma 7.6 the set Γt:={𝒦Γ:Xtx[𝒦]SxS}\Gamma_{t}:=\{\mathcal{K}\in\Gamma\,:\,X^{x}_{t}[\mathcal{K}]\in S\;\;\forall x\in S\} is measurable. Due to the above interpretation of the distribution of Xtx[𝒦]X^{x}_{t}[\mathcal{K}] in terms of the random walk with random conductances and due to Condition (C2), we have (Γt)=1{\mathbb{P}}(\Gamma_{t})=1. Then, since Γ\Gamma_{\circ} is the countable intersection of the measurable sets Γt\Gamma_{t}, t+t\in{\mathbb{R}}_{+}\cap{\mathbb{Q}}, we conclude that Γ\Gamma_{\circ} is measurable and (Γ)=1{\mathbb{P}}(\Gamma_{\circ})=1. As Γ=Γ\Gamma_{\circ}=\Gamma_{*}, the same holds for Γ\Gamma_{*}. ∎

Definition 7.10.

For each 𝒦Γ\mathcal{K}\in\Gamma_{*}, t0t\geq 0 and σ{0,1}S\sigma\in\{0,1\}^{S}, we define ηtσ[𝒦]{0,1}S\eta^{\sigma}_{t}[\mathcal{K}]\in\{0,1\}^{S} as

ηtσ[𝒦](x):=σ(Xtx[𝒦]).\eta^{\sigma}_{t}[\mathcal{K}](x):=\sigma\bigl{(}X^{x}_{t}[\mathcal{K}]\bigr{)}\,. (41)

We note that, given 𝒦Γ\mathcal{K}\in\Gamma_{*}, it holds η0σ[𝒦]=σ\eta^{\sigma}_{0}[\mathcal{K}]=\sigma by Remark 7.5. We refer to Figure 2 for an example illustrating our definitions.

For 0\leq t<s_{1}: X^{x}_{t}[\mathcal{K}]=x, X^{y}_{t}[\mathcal{K}]=y, X^{z}_{t}[\mathcal{K}]=z, \eta^{\sigma}_{t}[\mathcal{K}]=(..,1,0,1,..).
For s_{1}\leq t<s_{2}: X^{x}_{t}[\mathcal{K}]=y, X^{y}_{t}[\mathcal{K}]=x, X^{z}_{t}[\mathcal{K}]=z, \eta^{\sigma}_{t}[\mathcal{K}]=(..,0,1,1,..).
For t=s_{2}: X^{x}_{t}[\mathcal{K}]=y, X^{y}_{t}[\mathcal{K}]=z, X^{z}_{t}[\mathcal{K}]=x, \eta^{\sigma}_{t}[\mathcal{K}]=(..,0,1,1,..).
Figure 2. (Top, figure omitted) Horizontal edges represent the jump times of 𝒦x,y\mathcal{K}_{x,y} and 𝒦y,z\mathcal{K}_{y,z}. (Bottom) Value of Xtx[𝒦]X^{x}_{t}[\mathcal{K}], Xty[𝒦]X^{y}_{t}[\mathcal{K}], Xtz[𝒦]X^{z}_{t}[\mathcal{K}] and ηtσ[𝒦]\eta^{\sigma}_{t}[\mathcal{K}] for t[0,s2]t\in[0,s_{2}], as listed above.
Lemma 7.11.

Fixed t0t\geq 0, the map {0,1}S×Γ(σ,𝒦)ηtσ[𝒦]{0,1}S\{0,1\}^{S}\times\Gamma_{*}\ni(\sigma,\mathcal{K})\mapsto\eta^{\sigma}_{t}[\mathcal{K}]\in\{0,1\}^{S} is continuous in σ\sigma (for fixed 𝒦\mathcal{K}) and measurable in 𝒦\mathcal{K} (for fixed σ\sigma).

Proof.

We prove the continuity in σ\sigma for fixed 𝒦\mathcal{K}. Recall the metric d(,)d(\cdot,\cdot) on {0,1}S\{0,1\}^{S} defined in (4). Fix N+N\in{\mathbb{N}}_{+}. We have that d(ηtσ[𝒦],ηtσ[𝒦])2Nd(\eta^{\sigma}_{t}[\mathcal{K}],\eta^{\sigma^{\prime}}_{t}[\mathcal{K}])\leq 2^{-N} if ηtσ[𝒦](sn)=ηtσ[𝒦](sn)\eta^{\sigma}_{t}[\mathcal{K}](s_{n})=\eta^{\sigma^{\prime}}_{t}[\mathcal{K}](s_{n}) for all n=1,2,,Nn=1,2,\dots,N. By (41) this holds if σ(y)=σ(y)\sigma(y)=\sigma^{\prime}(y) for all yA:={Xtsn[𝒦]:n=1,2,,N}y\in A:=\{X^{s_{n}}_{t}[\mathcal{K}]\,:\,n=1,2,\dots,N\}. Let us now choose MM large enough that A{s1,s2,,sM}A\subset\{s_{1},s_{2},\dots,s_{M}\}. Since we have σ(sn)=σ(sn)\sigma(s_{n})=\sigma^{\prime}(s_{n}) for all n=1,2,,Mn=1,2,\dots,M if d(σ,σ)<2Md(\sigma,\sigma^{\prime})<2^{-M}, we conclude that whenever d(σ,σ)<2Md(\sigma,\sigma^{\prime})<2^{-M} we have σ(y)=σ(y)\sigma(y)=\sigma^{\prime}(y) for all yAy\in A and therefore d(ηtσ[𝒦],ηtσ[𝒦])2Nd(\eta^{\sigma}_{t}[\mathcal{K}],\eta^{\sigma^{\prime}}_{t}[\mathcal{K}])\leq 2^{-N}. This proves the continuity in σ\sigma.

We prove the measurability in 𝒦\mathcal{K} for fixed σ\sigma. The Borel σ\sigma–algebra ({0,1}S)\mathcal{B}(\{0,1\}^{S}) is generated by the sets {η{0,1}S:η(x)=1}\{\eta\in\{0,1\}^{S}\,:\,\eta(x)=1\} as xx varies in SS. Then to prove the measurability in 𝒦\mathcal{K}, for fixed σ\sigma, we just need to show that B:={𝒦Γ:ηtσ[𝒦](x)=1}B:=\{\mathcal{K}\in\Gamma_{*}\,:\,\eta_{t}^{\sigma}[\mathcal{K}](x)=1\} is measurable for any xSx\in S. By (41) we can rewrite BB as the countable union yS:σ(y)=1{𝒦Γ:Xtx[𝒦]=y}\cup_{y\in S\,:\sigma(y)=1}\{\mathcal{K}\in\Gamma_{*}\,:\,X^{x}_{t}[\mathcal{K}]=y\}. Then the measurability of BB follows from the measurability of the sets {𝒦Γ:Xtx[𝒦]=y}\{\mathcal{K}\in\Gamma_{*}\,:\,X^{x}_{t}[\mathcal{K}]=y\}, which follows from Lemmas 7.6 and 7.9. ∎

For the next result recall that D{0,1}S=D(+,{0,1}S)D_{\{0,1\}^{S}}=D({\mathbb{R}}_{+},\{0,1\}^{S}).

Lemma 7.12.

For each σ{0,1}S\sigma\in\{0,1\}^{S} and 𝒦Γ\mathcal{K}\in\Gamma_{*}, the path ησ[𝒦]:=(ηtσ[𝒦])t0\eta^{\sigma}_{\cdot}[\mathcal{K}]:=\bigl{(}\eta^{\sigma}_{t}[\mathcal{K}]\bigr{)}_{t\geq 0} belongs to D{0,1}SD_{\{0,1\}^{S}}. Moreover, fixed σ{0,1}S\sigma\in\{0,1\}^{S}, the map

Γ𝒦ησ[𝒦]D{0,1}S\Gamma_{*}\ni\mathcal{K}\mapsto\eta^{\sigma}_{\cdot}[\mathcal{K}]\in D_{\{0,1\}^{S}} (42)

is measurable in 𝒦\mathcal{K}.

Proof.

Let σ{0,1}S\sigma\in\{0,1\}^{S} and 𝒦Γ\mathcal{K}\in\Gamma_{*}. We first check that the path (ηtσ[𝒦])t0\bigl{(}\eta^{\sigma}_{t}[\mathcal{K}]\bigr{)}_{t\geq 0} belongs to D{0,1}SD_{\{0,1\}^{S}}. We first prove its right continuity, i.e. given t0t\geq 0 we show that limutηuσ[𝒦]=ηtσ[𝒦]\lim_{u\downarrow t}\eta^{\sigma}_{u}[\mathcal{K}]=\eta^{\sigma}_{t}[\mathcal{K}]. Due to (4) we just need to show that limutηuσ[𝒦](x)=ηtσ[𝒦](x)\lim_{u\downarrow t}\eta^{\sigma}_{u}[\mathcal{K}](x)=\eta^{\sigma}_{t}[\mathcal{K}](x) for any xSx\in S. By (41) it is enough to show that, for any xSx\in S, Xux[𝒦]=Xtx[𝒦]X^{x}_{u}[\mathcal{K}]=X^{x}_{t}[\mathcal{K}] for any u>tu>t sufficiently near to tt. This follows from Lemma 7.7. We now prove that the path ησ[𝒦]\eta^{\sigma}_{\cdot}[\mathcal{K}] has limit from the left at t>0t>0. Given xSx\in S, by Lemma 7.7, the left limit Xtx[𝒦]X^{x}_{t-}[\mathcal{K}] is well defined. Moreover, since 𝒦Γ\mathcal{K}\in\Gamma_{*}, this limit is in SS. Let ξ(x):=σ(Xtx[𝒦]){0,1}\xi(x):=\sigma(X^{x}_{t-}[\mathcal{K}])\in\{0,1\} for any xSx\in S. We claim that limutηuσ[𝒦]=ξ\lim_{u\uparrow t}\eta^{\sigma}_{u}[\mathcal{K}]=\xi. Again, due to (4), we just need to show that limutηuσ[𝒦](x)=ξ(x)\lim_{u\uparrow t}\eta^{\sigma}_{u}[\mathcal{K}](x)=\xi(x), i.e. limutσ(Xux[𝒦])=ξ(x)=σ(Xtx[𝒦])\lim_{u\uparrow t}\sigma(X^{x}_{u}[\mathcal{K}])=\xi(x)=\sigma(X^{x}_{t-}[\mathcal{K}]), for any xSx\in S. This follows from the fact that, for each xSx\in S, Xux[𝒦]=Xtx[𝒦]X^{x}_{u}[\mathcal{K}]=X^{x}_{t-}[\mathcal{K}] for all u[tε,t)u\in[t-\varepsilon,t), for ε>0\varepsilon>0 small enough. This concludes the proof that (ηtσ[𝒦])t0\bigl{(}\eta^{\sigma}_{t}[\mathcal{K}]\bigr{)}_{t\geq 0} belongs to D{0,1}SD_{\{0,1\}^{S}}.

As discussed at the end of Section 2, (D{0,1}S)\mathcal{B}(D_{\{0,1\}^{S}}) is generated by the maps ηηt\eta_{\cdot}\mapsto\eta_{t} with tt varying in +{\mathbb{R}}_{+}. This means that (D{0,1}S)\mathcal{B}(D_{\{0,1\}^{S}}) is the smallest σ\sigma–algebra such that the sets of the form 𝒰:={ηD{0,1}S:ηtB}\mathcal{U}:=\{\eta_{\cdot}\in D_{\{0,1\}^{S}}\,:\,\eta_{t}\in B\}, with B({0,1}S)B\in\mathcal{B}(\{0,1\}^{S}), are measurable. To prove the measurability of (42) in 𝒦\mathcal{K} for fixed σ\sigma, we just need to prove that the inverse image of 𝒰\mathcal{U} via (42) is measurable, i.e. that the set {𝒦Γ:ηtσ[𝒦]B}\{\mathcal{K}\in\Gamma_{*}\,:\,\eta^{\sigma}_{t}[\mathcal{K}]\in B\} is measurable. But this is exactly the measurability in 𝒦\mathcal{K} (for fixed σ\sigma) of the map in Lemma 7.11. ∎

7.1. Proof of Proposition 3.2

\bullet Item (i) holds since, as already observed, given 𝒦Γ\mathcal{K}\in\Gamma_{*} it holds η0σ[𝒦]=σ\eta^{\sigma}_{0}[\mathcal{K}]=\sigma by Remark 7.5.

\bullet We move to Item (ii). We will use Dynkin’s π\pi-λ\lambda Theorem (see e.g. [9]) as in the proof of Item (b) in [25, Theorem 2.4]. We fix 0t1t2tn0\leq t_{1}\leq t_{2}\leq\cdots\leq t_{n}, x1,x2,,xnSx_{1},x_{2},\dots,x_{n}\in S and k1,k2,,kn{0,1}k_{1},k_{2},\dots,k_{n}\in\{0,1\}. Due to Lemma 7.11, for fixed 𝒦Γ\mathcal{K}\in\Gamma_{*}, the maps {0,1}Sσηtiσ[𝒦](xi){0,1}\{0,1\}^{S}\ni\sigma\mapsto\eta^{\sigma}_{t_{i}}[\mathcal{K}](x_{i})\in\{0,1\} are continuous. Therefore, also

fσ(𝒦):=i=1n(1|ηtiσ[𝒦](xi)ki|)f_{\sigma}(\mathcal{K}):=\prod_{i=1}^{n}\left(1-\big{|}\eta^{\sigma}_{t_{i}}[\mathcal{K}](x_{i})-k_{i}\big{|}\right)

is continuous in σ\sigma for each fixed 𝒦\mathcal{K}. Trivially fσ(𝒦)=1f_{\sigma}(\mathcal{K})=1 if and only if ηtiσ[𝒦](xi)=ki\eta^{\sigma}_{t_{i}}[\mathcal{K}](x_{i})=k_{i} for all i=1,2,,ni=1,2,\dots,n, otherwise fσ(𝒦)=0f_{\sigma}(\mathcal{K})=0. We then consider the set A:={ηD{0,1}S:ηti(xi)=kii=1,2,,n}A:=\{\,\eta_{\cdot}\in D_{\{0,1\}^{S}}\,:\,\eta_{t_{i}}(x_{i})=k_{i}\;\forall i=1,2,\dots,n\,\}. Note that AA\in\mathcal{F} as \mathcal{F} is generated by the coordinate maps, moreover fσ(𝒦)f_{\sigma}(\mathcal{K}) is measurable in 𝒦\mathcal{K} by Lemma 7.11. Since σ(A)=𝑑(𝒦)fσ(𝒦){\mathbb{P}}^{\sigma}(A)=\int d{\mathbb{P}}(\mathcal{K})f_{\sigma}(\mathcal{K}) and 0fσ(𝒦)10\leq f_{\sigma}(\mathcal{K})\leq 1, by dominated convergence and the above continuity in σ\sigma of fσ(𝒦)f_{\sigma}(\mathcal{K}), we get that the map {0,1}Sσσ(A)[0,1]\{0,1\}^{S}\ni\sigma\mapsto{\mathbb{P}}^{\sigma}(A)\in[0,1] is continuous.

We now consider the family \mathcal{L} of sets AA\in\mathcal{F} such that the map {0,1}Sσσ(A)[0,1]\{0,1\}^{S}\ni\sigma\mapsto{\mathbb{P}}^{\sigma}(A)\in[0,1] is measurable. Due to the above discussion, \mathcal{L} contains the family 𝒫\mathcal{P} given by events of the form {ηD{0,1}S:ηti(xi)=kii=1,2,,n}\{\,\eta_{\cdot}\in D_{\{0,1\}^{S}}\,:\,\eta_{t_{i}}(x_{i})=k_{i}\;\forall i=1,2,\dots,n\,\} with n1n\geq 1, 0t1t2tn0\leq t_{1}\leq t_{2}\leq\cdots\leq t_{n}, x1,x2,,xnSx_{1},x_{2},\dots,x_{n}\in S and k1,k2,,kn{0,1}k_{1},k_{2},\dots,k_{n}\in\{0,1\}. The above family 𝒫\mathcal{P} is a π\pi–system, i.e. AB𝒫A\cap B\in\mathcal{P} if A,B𝒫A,B\in\mathcal{P}. We claim that, on the other hand, \mathcal{L} is a λ\lambda–system, i.e. (a) D{0,1}SD_{\{0,1\}^{S}}\in\mathcal{L}, (b) if A,BA,B\in\mathcal{L} and ABA\subset B, then BAB\setminus A\in\mathcal{L}, (c) if AnA_{n}\in\mathcal{L} and AnAA_{n}\uparrow A (i.e. A1A2A_{1}\subset A_{2}\subset\cdots and nAn=A\cup_{n}A_{n}=A), then AA\in\mathcal{L}. Before justifying our claim, let us conclude the proof of Item (ii). By Dynkin’s π\pi-λ\lambda Theorem, \mathcal{L} contains the σ\sigma–algebra generated by 𝒫\mathcal{P}, which is indeed \mathcal{F} as discussed at the end of Section 2. Hence =\mathcal{L}=\mathcal{F}, thus corresponding to Item (ii).

We conclude by proving our claim. The check of (a) and (b) is trivial. Let us focus on (c). Let AnA_{n}\in\mathcal{L} with AnAA_{n}\uparrow A. We need to prove that AA\in\mathcal{L}. We first observe that for each path ηD{0,1}S\eta_{\cdot}\in D_{\{0,1\}^{S}} it holds 𝟙An(η)𝟙A(η)\mathds{1}_{A_{n}}(\eta_{\cdot})\to\mathds{1}_{A}(\eta_{\cdot}) as n+n\to+\infty by definition of AnAA_{n}\uparrow A. This implies that, for each σ{0,1}S\sigma\in\{0,1\}^{S} and 𝒦Γ\mathcal{K}\in\Gamma_{*}, gn(𝒦)g(𝒦)g_{n}(\mathcal{K})\to g(\mathcal{K}) as n+n\to+\infty, where gn(𝒦):=𝟙An((ηtσ[𝒦])t0)g_{n}(\mathcal{K}):=\mathds{1}_{A_{n}}\big{(}\bigl{(}\eta^{\sigma}_{t}[\mathcal{K}]\bigr{)}_{t\geq 0}\big{)} and g(𝒦):=𝟙A((ηtσ[𝒦])t0)g(\mathcal{K}):=\mathds{1}_{A}\big{(}\bigl{(}\eta^{\sigma}_{t}[\mathcal{K}]\bigr{)}_{t\geq 0}\big{)}. Then, by dominated convergence, σ(An)=𝑑(𝒦)gn(𝒦)𝑑(𝒦)g(𝒦)=σ(A){\mathbb{P}}^{\sigma}(A_{n})=\int d{\mathbb{P}}(\mathcal{K})g_{n}(\mathcal{K})\to\int d{\mathbb{P}}(\mathcal{K})g(\mathcal{K})={\mathbb{P}}^{\sigma}(A) as n+n\to+\infty, for any σ{0,1}S\sigma\in\{0,1\}^{S}. Since AnA_{n}\in\mathcal{L} the map {0,1}Sσσ(An)[0,1]\{0,1\}^{S}\ni\sigma\mapsto{\mathbb{P}}^{\sigma}(A_{n})\in[0,1] is measurable. As a byproduct of the above limit, we get that the map {0,1}Sσσ(A)[0,1]\{0,1\}^{S}\ni\sigma\mapsto{\mathbb{P}}^{\sigma}(A)\in[0,1] is the pointwise limit of measurable functions, and therefore it is measurable. This concludes the proof of (c).

\bullet We now focus on Item (iii). The proof is very close to the one of Item (c) in [25, Theorem 2.4], although the graphical construction is different. Given 𝒦DS\mathcal{K}\in D_{{\mathbb{N}}}^{\mathcal{E}_{S}} and t0t\geq 0, we define θt𝒦\theta_{t}\mathcal{K} as the element of DSD_{{\mathbb{N}}}^{\mathcal{E}_{S}} such that (θt𝒦)x,y(s):=𝒦x,y(t+s)𝒦x,y(t)(\theta_{t}\mathcal{K})_{x,y}(s):=\mathcal{K}_{x,y}(t+s)-\mathcal{K}_{x,y}(t). We note that θt𝒦Γ\theta_{t}\mathcal{K}\in\Gamma_{*} for any 𝒦Γ\mathcal{K}\in\Gamma_{*}. We also denote by 𝒦[0,t]\mathcal{K}_{[0,t]} the collection of functions [0,t]s𝒦x,y(s)[0,t]\ni s\mapsto\mathcal{K}_{x,y}(s)\in{\mathbb{N}} as {x,y}\{x,y\} varies in S\mathcal{E}_{S}. We claim that for all 𝒦Γ\mathcal{K}\in\Gamma_{*} and t,s0t,s\geq 0 it holds

ηt+sσ[𝒦]=ηsξ[θt𝒦]ξ:=ηtσ[𝒦].\eta^{\sigma}_{t+s}[\mathcal{K}]=\eta^{\xi}_{s}[\theta_{t}\mathcal{K}]\qquad\xi:=\eta^{\sigma}_{t}[\mathcal{K}]\,. (43)

To check the above identity, observe that by the graphical construction Xt+sx[𝒦]=Xty[𝒦]X_{t+s}^{x}[\mathcal{K}]=X_{t}^{y}[\mathcal{K}] where y:=Xsx[θt𝒦]y:=X_{s}^{x}[\theta_{t}\mathcal{K}], and therefore (defining ξ\xi as in (43))

ηt+sσ[𝒦](x)=σ(Xt+sx[𝒦])=σ(Xty[𝒦])=ηtσ[𝒦](y)=ξ(y)=ξ(Xsx[θt𝒦])=ηsξ[θt𝒦].\eta^{\sigma}_{t+s}[\mathcal{K}](x)=\sigma(X_{t+s}^{x}[\mathcal{K}])=\sigma(X_{t}^{y}[\mathcal{K}])=\eta^{\sigma}_{t}[\mathcal{K}](y)=\xi(y)=\xi(X_{s}^{x}[\theta_{t}\mathcal{K}])=\eta^{\xi}_{s}[\theta_{t}\mathcal{K}]\,.

Take now AA\in\mathcal{F} and BtB\in\mathcal{F}_{t}. We can think of BB as a subset of D([0,t],{0,1}S)D([0,t],\{0,1\}^{S}). We set η[0,t]:=(ηs)0st\eta_{[0,t]}:=(\eta_{s})_{0\leq s\leq t}. Then, using (43),

σ(ηt+A,η[0,t]B)=Γ𝑑(𝒦)𝟙A(ηt+σ[𝒦])𝟙B(η[0,t]σ[𝒦])=Γ𝑑(𝒦)𝟙A(ηηtσ[𝒦][θt𝒦])𝟙B(η[0,t]σ[𝒦]).\begin{split}&{\mathbb{P}}^{\sigma}\left(\eta_{t+\cdot}\in A,\eta_{[0,t]}\in B\right)=\int_{\Gamma_{*}}d{\mathbb{P}}(\mathcal{K})\mathds{1}_{A}\left(\eta^{\sigma}_{t+\cdot}[\mathcal{K}]\right)\mathds{1}_{B}\left(\eta^{\sigma}_{[0,t]}[\mathcal{K}]\right)\\ &=\int_{\Gamma_{*}}d{\mathbb{P}}(\mathcal{K})\mathds{1}_{A}\left(\eta^{\eta^{\sigma}_{t}[\mathcal{K}]}_{\cdot}[\theta_{t}\mathcal{K}]\right)\mathds{1}_{B}\left(\eta^{\sigma}_{[0,t]}[\mathcal{K}]\right)\,.\end{split} (44)

Since $\mathds{1}_{B}\bigl(\eta^{\sigma}_{[0,t]}[\mathcal{K}]\bigr)$ and $\eta^{\sigma}_{t}[\mathcal{K}]$ depend only on $\mathcal{K}_{[0,t]}$, which under ${\mathbb{P}}$ is independent of $\theta_{t}\mathcal{K}$, and since $\theta_{t}\mathcal{K}$ and $\mathcal{K}$ have the same law under ${\mathbb{P}}$, we have

\int_{\Gamma_{*}}d{\mathbb{P}}(\mathcal{K})\,\mathds{1}_{A}\left(\eta^{\eta^{\sigma}_{t}[\mathcal{K}]}_{\cdot}[\theta_{t}\mathcal{K}]\right)\mathds{1}_{B}\left(\eta^{\sigma}_{[0,t]}[\mathcal{K}]\right)\\=\int_{\Gamma_{*}}d{\mathbb{P}}(\mathcal{K})\int_{\Gamma_{*}}d{\mathbb{P}}(\mathcal{K}^{\prime})\,\mathds{1}_{A}\left(\eta^{\eta^{\sigma}_{t}[\mathcal{K}]}_{\cdot}[\mathcal{K}^{\prime}]\right)\mathds{1}_{B}\left(\eta^{\sigma}_{[0,t]}[\mathcal{K}]\right)={\mathbb{E}}^{\sigma}\left[\mathds{1}_{B}\left(\eta_{[0,t]}\right){\mathbb{P}}^{\eta_{t}}(A)\right]\,.

By collecting our observations we get

{\mathbb{P}}^{\sigma}\left(\eta_{t+\cdot}\in A,\,\eta_{[0,t]}\in B\right)={\mathbb{E}}^{\sigma}\left[\mathds{1}_{B}\left(\eta_{[0,t]}\right){\mathbb{P}}^{\eta_{t}}(A)\right]\qquad\forall A\in\mathcal{F}\,,\;\forall B\in\mathcal{F}_{t}\,.

The above family of identities leads to Item (iii).
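
Although not needed for the proofs, the flow identity (43) is easy to test numerically. The following Python sketch is purely illustrative (the site set, the rates and the times are arbitrary choices, and the function tracer implements the backward tracer of the stirring construction); it checks (43) on a single realization of the Poisson clocks of a finite stirring system.

```python
import random

# Illustrative sketch (not part of the proof): check the flow identity behind (43)
# on one realization of the clocks of a small stirring system.
# Sites, rates, T, t, s are arbitrary example values.
random.seed(0)
sites = range(5)
edges = [(i, j) for i in sites for j in sites if i < j]
rates = {e: random.uniform(0.2, 1.0) for e in edges}
T = 3.0

clocks = []                      # all ring times in [0, T], as (time, edge) pairs
for e, c in rates.items():
    u = 0.0
    while True:
        u += random.expovariate(c)
        if u > T:
            break
        clocks.append((u, e))
clocks.sort()

def tracer(x, t, rings):
    """Backward tracer X_t^x[K]: traverse the rings in [0, t] in reverse time order,
    swapping across an edge whenever it contains the current position."""
    pos = x
    for u, (y, z) in reversed([r for r in rings if r[0] <= t]):
        if pos == y:
            pos = z
        elif pos == z:
            pos = y
    return pos

def shift(rings, t):
    """Clocks of theta_t K: ring times after t, shifted back by t."""
    return [(u - t, e) for (u, e) in rings if u > t]

t, s = 1.1, 0.7
for x in sites:
    y = tracer(x, s, shift(clocks, t))                        # y := X_s^x[theta_t K]
    assert tracer(x, t + s, clocks) == tracer(y, t, clocks)   # X_{t+s}^x[K] = X_t^y[K]
print("flow identity checked on this realization")
```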

7.2. Proof of Proposition 3.3

Since {0,1}S\{0,1\}^{S} is compact, ff is uniformly bounded. Fixed σ{0,1}S\sigma\in\{0,1\}^{S} the map Fσ:Γ𝒦f(ηtσ[𝒦])F_{\sigma}:\,\Gamma_{*}\ni\mathcal{K}\mapsto f\big{(}\eta^{\sigma}_{t}[\mathcal{K}]\big{)}\in{\mathbb{R}} is measurable (as composition of measurable functions, see Lemma 7.11) and is bounded in modulus by f\|f\|_{\infty}. Fixed ξ\xi and a sequence (ξn)(\xi_{n}) in {0,1}S\{0,1\}^{S} with ξnξ\xi_{n}\to\xi, due to the continuity in σ\sigma of ηtσ[𝒦]\eta^{\sigma}_{t}[\mathcal{K}] (see Lemma 7.11) and due to the continuity of ff, we have Fξn(𝒦)Fξ(𝒦)F_{\xi_{n}}(\mathcal{K})\to F_{\xi}(\mathcal{K}) as n+n\to+\infty for all 𝒦Γ\mathcal{K}\in\Gamma_{*}. Then, by dominated convergence, we get that (S(t)f)(ξn)=𝔼[Fξn]𝔼[Fξ]=(S(t)f)(ξ)\big{(}S(t)f\big{)}(\xi_{n})={\mathbb{E}}[F_{\xi_{n}}]\to{\mathbb{E}}[F_{\xi}]=\big{(}S(t)f\big{)}(\xi) as n+n\to+\infty.

7.3. Proof of Proposition 3.4

Let $f:\{0,1\}^{S}\to{\mathbb{R}}$ be a local function and take $A\subset S$ finite such that $f(\eta)$ depends only on $\eta(x)$ with $x\in A$. We set $\mathcal{E}(A):=\{\{x,y\}\in\mathcal{E}_{S}\,:\,\{x,y\}\cap A\not=\emptyset\}$. By (C1) we have

cA:={x,y}(A)cx,yxAyScx,y=xAcx<+.c_{A}:=\sum_{\{x,y\}\in\mathcal{E}(A)}c_{x,y}\leq\sum_{x\in A}\sum_{y\in S}c_{x,y}=\sum_{x\in A}c_{x}<+\infty\,. (45)

From the above bound and since f<+\|f\|_{\infty}<+\infty it is simple to prove that the r.h.s.’s of (7) and (8) are absolutely convergent series in C({0,1}S)C(\{0,1\}^{S}) defining the same function, that we denote by ^f\hat{}\mathcal{L}f. Hence we just need to prove that f=^f\mathcal{L}f=\hat{}\mathcal{L}f.

From now on $\mathcal{K}$ is assumed to be in $\Gamma_{*}$ (see Definition 7.8), which has ${\mathbb{P}}$-probability one. Since $\mathcal{K}_{A}(t):=\sum_{\{x,y\}\in\mathcal{E}(A)}\mathcal{K}_{x,y}(t)$, with $\mathcal{K}$ sampled from ${\mathbb{P}}$, is a Poisson random variable with finite parameter $c_{A}t$, it holds

(𝒦A(t)2)=1ecAt(1+cAt)Ct2.{\mathbb{P}}(\mathcal{K}_{A}(t)\geq 2)=1-e^{-c_{A}t}(1+c_{A}t)\leq Ct^{2}\,. (46)

Recall that $S=\{s_{1},s_{2},\dots\}$. We introduce on $S$ the total order $\preceq$ such that $s_{i}\preceq s_{j}$ if and only if $i\leq j$. When $\mathcal{K}_{A}(t)=1$, we define the pair $\{x_{0},y_{0}\}$ as the unique edge in $\mathcal{E}(A)$ such that $\mathcal{K}_{x_{0},y_{0}}(t)=1$, with the rule that we call $x_{0}$ the minimal (w.r.t. $\preceq$) point in $\{x_{0},y_{0}\}\cap A$. This rule is introduced in order to have a uniquely defined labelling of the points of the above edge.

Recall (39). We observe that 𝒦A(t)=1\mathcal{K}_{A}(t)=1 implies that 𝒦e(t)=0\mathcal{K}_{e}(t)=0 for all e(A)e\in\mathcal{E}(A) with e{x0,y0}e\not=\{x_{0},y_{0}\}, 𝒦x0,y0(t)=1\mathcal{K}_{x_{0},y_{0}}(t)=1, 𝒦x0(t)=1\mathcal{K}_{x_{0}}(t)=1, 𝒦x(t)=0\mathcal{K}_{x}(t)=0 for all xA{x0,y0}x\in A\setminus\{x_{0},y_{0}\} and 𝒦y0(t)1\mathcal{K}_{y_{0}}(t)\geq 1. 𝒦A(t)=1\mathcal{K}_{A}(t)=1 implies also that 𝒦y0(t)=1\mathcal{K}_{y_{0}}(t)=1 if y0Ay_{0}\in A. Let H:={𝒦A(t)=1}{𝒦y0(t)=1}H:=\{\mathcal{K}_{A}(t)=1\}\cap\{\mathcal{K}_{y_{0}}(t)=1\}. By the above observations and the graphical construction in Definition 7.4, HH also implies that Xtx[𝒦]=xX^{x}_{t}[\mathcal{K}]=x for xA{x0,y0}x\in A\setminus\{x_{0},y_{0}\}, Xtx0[𝒦]=y0X^{x_{0}}_{t}[\mathcal{K}]=y_{0} and Xty0[𝒦]=x0X^{y_{0}}_{t}[\mathcal{K}]=x_{0}. Hence, ηtσ[𝒦]=σx0,y0\eta_{t}^{\sigma}[\mathcal{K}]=\sigma^{x_{0},y_{0}} on AA when HH occurs.

As already observed, if 𝒦A(t)=1\mathcal{K}_{A}(t)=1 and y0Ay_{0}\in A, then HH must occur. Hence

{𝒦A(t)=1}H={𝒦A(t)=1,𝒦y0(t)>1,y0A}.\{\mathcal{K}_{A}(t)=1\}\setminus H=\{\mathcal{K}_{A}(t)=1,\;\mathcal{K}_{y_{0}}(t)>1,\;y_{0}\not\in A\}\,. (47)

We claim that, as t0t\downarrow 0,

({𝒦A(t)=1}H)=o(t).{\mathbb{P}}(\{\mathcal{K}_{A}(t)=1\}\setminus H)=o(t)\,. (48)

To prove our claim we estimate

(𝒦A(t)=1,𝒦y0(t)>1,y0A)xAySA(𝒦x,y(t)=1,zS(A{y})𝒦y,z(t)1)txAyScx,yecx,yt(1ecyt).\begin{split}&{\mathbb{P}}(\mathcal{K}_{A}(t)=1,\;\mathcal{K}_{y_{0}}(t)>1,\;y_{0}\not\in A)\\ &\leq\sum_{x\in A}\sum_{y\in S\setminus A}{\mathbb{P}}(\mathcal{K}_{x,y}(t)=1\,,\sum_{z\in S\setminus(A\cup\{y\})}\mathcal{K}_{y,z}(t)\geq 1)\\ &\leq t\sum_{x\in A}\sum_{y\in S}c_{x,y}e^{-c_{x,y}t}(1-e^{-c_{y}t})\,.\end{split} (49)

Note that in the last bound we have used that $\sum_{z\in S\setminus(A\cup\{y\})}\mathcal{K}_{y,z}(t)$ is a Poisson random variable with parameter $t\sum_{z\in S\setminus(A\cup\{y\})}c_{y,z}\leq tc_{y}$, independent of $\mathcal{K}_{x,y}(t)$. By (45) and the dominated convergence theorem applied to the r.h.s. of (49), we get that the r.h.s. of (49) is $o(t)$. By combining this result with (47), we get (48).

Using that f<+\|f\|_{\infty}<+\infty, f(ηtσ[𝒦])=f(σ)f(\eta^{\sigma}_{t}[\mathcal{K}])=f(\sigma) when 𝒦A(t)=0\mathcal{K}_{A}(t)=0, (46) and (48), we can write

S(t)f(σ)f(σ)=𝔼[(f(ηtσ[𝒦])f(σ))𝟙H]+o(t)={x,y}(A)(f(σx,y)f(σ))(H,{x0,y0}={x,y})+o(t).\begin{split}&S(t)f(\sigma)-f(\sigma)={\mathbb{E}}\left[\left(f(\eta^{\sigma}_{t}[\mathcal{K}])-f(\sigma)\right)\mathds{1}_{H}\right]+o(t)\\ &=\sum_{\{x,y\}\in\mathcal{E}(A)}\left(f(\sigma^{x,y})-f(\sigma)\right){\mathbb{P}}(H,\{x_{0},y_{0}\}=\{x,y\})+o(t)\,.\end{split} (50)

Above, to simplify the notation, we have written a comma instead of the symbol $\cap$ for the intersection of events (we keep the same convention also below).

If in (50) we replace (H,{x0,y0}={x,y}){\mathbb{P}}(H,\{x_{0},y_{0}\}=\{x,y\}) by (𝒦A(t)=1,{x0,y0}={x,y}){\mathbb{P}}(\mathcal{K}_{A}(t)=1,\{x_{0},y_{0}\}=\{x,y\}), the global error is of order o(t)o(t). Indeed, since the first event is included in the second one, we have

{x,y}(A)|(𝒦A(t)=1,{x0,y0}={x,y})(H,{x0,y0}={x,y})|={x,y}(A)({𝒦A(t)=1}H,{x0,y0}={x,y})=({𝒦A(t)=1}H),\begin{split}&\sum_{\{x,y\}\in\mathcal{E}(A)}\big{|}{\mathbb{P}}(\mathcal{K}_{A}(t)=1,\{x_{0},y_{0}\}=\{x,y\})-{\mathbb{P}}(H,\{x_{0},y_{0}\}=\{x,y\})\big{|}\\ &=\sum_{\{x,y\}\in\mathcal{E}(A)}{\mathbb{P}}(\{\mathcal{K}_{A}(t)=1\}\setminus H,\{x_{0},y_{0}\}=\{x,y\})={\mathbb{P}}(\{\mathcal{K}_{A}(t)=1\}\setminus H)\,,\end{split}

and the last expression is of order o(t)o(t) due to (48). By making the above replacement we have

S(t)f(σ)f(σ)={x,y}(A)(f(σx,y)f(σ))(𝒦A(t)=1,{x0,y0}={x,y})+o(t)=t{x,y}(A)(f(σx,y)f(σ))cx,yecx,ytexp{t{x,y}(A):{x,y}{x,y}cx,y}+o(t)=t{x,y}(A)(f(σx,y)f(σ))cx,yecAt+o(t).\begin{split}&S(t)f(\sigma)-f(\sigma)\\ &=\sum_{\{x,y\}\in\mathcal{E}(A)}\left(f(\sigma^{x,y})-f(\sigma)\right){\mathbb{P}}(\mathcal{K}_{A}(t)=1,\{x_{0},y_{0}\}=\{x,y\})+o(t)\\ &=t\sum_{\{x,y\}\in\mathcal{E}(A)}\left(f(\sigma^{x,y})-f(\sigma)\right)c_{x,y}e^{-c_{x,y}t}\exp\Big{\{}-t\sum_{\begin{subarray}{c}\{x^{\prime},y^{\prime}\}\in\mathcal{E}(A):\\ \{x^{\prime},y^{\prime}\}\not=\{x,y\}\end{subarray}}c_{x^{\prime},y^{\prime}}\Big{\}}+o(t)\\ &=t\sum_{\{x,y\}\in\mathcal{E}(A)}\left(f(\sigma^{x,y})-f(\sigma)\right)c_{x,y}e^{-c_{A}t}+o(t)\,.\end{split} (51)

We apply the dominated convergence theorem to the measure on (A)\mathcal{E}(A) giving weight cx,yc_{x,y} to {x,y}\{x,y\} and to the tt–parametrized functions Ft({x,y}):=(f(σx,y)f(σ))[ecAt1]F_{t}(\{x,y\}):=\left(f(\sigma^{x,y})-f(\sigma)\right)\left[e^{-c_{A}t}-1\right]. The above functions are dominated by the constant function 2f2\|f\|_{\infty}, which is integrable as {x,y}(A)cx,y=cA<+\sum_{\{x,y\}\in\mathcal{E}(A)}c_{x,y}=c_{A}<+\infty. By dominated convergence we conclude that limt0{x,y}(A)cx,yFt({x,y})=0\lim_{t\downarrow 0}\sum_{\{x,y\}\in\mathcal{E}(A)}c_{x,y}F_{t}(\{x,y\})=0. By combining this observation with (51) we conclude that

S(t)f(σ)f(σ)=t{x,y}(A)(f(σx,y)f(σ))cx,y+o(t)=t{x,y}S(f(σx,y)f(σ))cx,y+o(t)=t^f(σ)+o(t).\begin{split}S(t)f(\sigma)-f(\sigma)&=t\sum_{\{x,y\}\in\mathcal{E}(A)}\left(f(\sigma^{x,y})-f(\sigma)\right)c_{x,y}+o(t)\\ &=t\sum_{\{x,y\}\in\mathcal{E}_{S}}\left(f(\sigma^{x,y})-f(\sigma)\right)c_{x,y}+o(t)=t\hat{}\mathcal{L}f(\sigma)+o(t)\,.\end{split}

The above expression implies that f𝒟()f\in\mathcal{D}(\mathcal{L}) and that f=^f\mathcal{L}f=\hat{}\mathcal{L}f.
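
Although not part of the argument, the expansion (50)-(51) can be explored numerically on a finite system, where the summability issues treated above do not arise. In the following Python sketch (purely illustrative: the rates, the local function $f$ and the Monte Carlo parameters are arbitrary example choices) the finite-time difference $(S(t)f(\sigma)-f(\sigma))/t$ is estimated by sampling the stirring clocks and compared with the series defining $\hat{\mathcal{L}}f(\sigma)$.

```python
import random

# Illustrative Monte Carlo check (not part of the proof) of S(t)f - f ~ t * Lhat f
# for small t, on a finite stirring system with arbitrary rates.
random.seed(1)
sites = range(4)
edges = [(i, j) for i in sites for j in sites if i < j]
rates = {e: random.uniform(0.2, 1.0) for e in edges}

def f(eta):                      # example local function, depending on sites 0 and 1
    return eta[0] * (1 - eta[1])

def L_hat(eta):                  # sum over edges of c_{x,y} [ f(eta^{x,y}) - f(eta) ]
    total = 0.0
    for (x, y), c in rates.items():
        swapped = list(eta)
        swapped[x], swapped[y] = swapped[y], swapped[x]
        total += c * (f(swapped) - f(eta))
    return total

def evolve(eta, t):              # one realization of the stirring dynamics up to time t
    rings = []
    for (x, y), c in rates.items():
        u = 0.0
        while True:
            u += random.expovariate(c)
            if u > t:
                break
            rings.append((u, x, y))
    eta = list(eta)
    for _, x, y in sorted(rings):
        eta[x], eta[y] = eta[y], eta[x]
    return eta

sigma, t, samples = (1, 0, 1, 0), 0.02, 100000
mc = sum(f(evolve(sigma, t)) for _ in range(samples)) / samples
print("(S(t)f - f)/t  estimate:", (mc - f(sigma)) / t)
print("Lhat f(sigma)          :", L_hat(sigma))   # agree up to O(t) and Monte Carlo error
```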

7.4. Proof of Proposition 3.5

We first point out a property which will be used frequently also in the rest: if $h\in C(\{0,1\}^{S})$ and $\eta,\xi\in\{0,1\}^{S}$ satisfy $\eta(x)=\xi(x)$ for all $x\in A$, for some $A\subset S$ (possibly $A=\emptyset$), then it holds

|h(η)h(ξ)|xSAΔh(x).|h(\eta)-h(\xi)|\leq\sum_{x\in S\setminus A}\Delta_{h}(x)\,. (52)

The proof of (52) is standard. Indeed, define σ(n){0,1}S\sigma^{(n)}\in\{0,1\}^{S}, nn\in{\mathbb{N}}, by setting σ(n)(sk)=η(sk)\sigma^{(n)}(s_{k})=\eta(s_{k}) if knk\leq n and σ(n)(sk)=ξ(sk)\sigma^{(n)}(s_{k})=\xi(s_{k}) if k>nk>n (recall that S={s1,s2,}S=\{s_{1},s_{2},\dots\}). Then σ(0)=ξ\sigma^{(0)}=\xi and σ(n)η\sigma^{(n)}\to\eta as n+n\to+\infty, thus implying that h(η)=limn+h(σ(n))h(\eta)=\lim_{n\to+\infty}h(\sigma^{(n)}) due to the continuity of hh and therefore |h(η)h(ξ)|k=0|h(σ(k+1))h(σ(k))||h(\eta)-h(\xi)|\leq\sum_{k=0}^{\infty}|h(\sigma^{(k+1)})-h(\sigma^{(k)})|. To conclude the proof of (52) it is enough to observe that σ(k+1)\sigma^{(k+1)} and σ(k)\sigma^{(k)} differ at most at sk+1s_{k+1} (thus implying that |h(σ(k+1))h(σ(k))|Δh(sk+1)|h(\sigma^{(k+1)})-h(\sigma^{(k)})|\leq\Delta_{h}(s_{k+1})) and are equal if sk+1As_{k+1}\in A.
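
As a toy illustration of (52), not used in the proofs, the following Python sketch verifies the oscillation bound on a small finite configuration space, taking $A$ to be the set of coordinates where the two configurations agree; the function $h$ and the size of the system are arbitrary examples.

```python
import itertools
import random

# Toy illustration (not part of the proof) of the oscillation bound (52):
# h is an arbitrary function on {0,1}^S for a small finite S, and delta[x]
# is the maximal change of h under flipping the occupation number at x.
random.seed(2)
n = 4
configs = list(itertools.product((0, 1), repeat=n))
h = {eta: random.random() for eta in configs}

def flip(eta, x):
    return eta[:x] + (1 - eta[x],) + eta[x + 1:]

delta = [max(abs(h[eta] - h[flip(eta, x)]) for eta in configs) for x in range(n)]

for eta in configs:
    for xi in configs:
        disagreement = [x for x in range(n) if eta[x] != xi[x]]   # plays the role of S \ A
        assert abs(h[eta] - h[xi]) <= sum(delta[x] for x in disagreement) + 1e-12
print("oscillation bound (52) verified on this example")
```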

Let now ff be as in Proposition 3.5. We set Tx,yf(η):=f(ηx,y)f(η)T_{x,y}f(\eta):=f(\eta^{x,y})-f(\eta). Since |f(ηx,y)f(η)|Δf(x)+Δf(y)|f(\eta^{x,y})-f(\eta)|\leq\Delta_{f}(x)+\Delta_{f}(y) by (52), we get Tx,yfΔf(x)+Δf(y)\|T_{x,y}f\|_{\infty}\leq\Delta_{f}(x)+\Delta_{f}(y). Then, since |f|<+{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}_{*}<+\infty, it holds

{x,y}Scx,yTx,yf{x,y}Scx,y(Δf(x)+Δf(y))=xScxΔf(x)<+.\sum_{\{x,y\}\in\mathcal{E}_{S}}c_{x,y}\|T_{x,y}f\|_{\infty}\leq\sum_{\{x,y\}\in\mathcal{E}_{S}}c_{x,y}(\Delta_{f}(x)+\Delta_{f}(y))=\sum_{x\in S}c_{x}\Delta_{f}(x)<+\infty\,. (53)

This proves the absolute convergence of {x,y}Scx,y[f(ηx,y)f(η)]\sum_{\{x,y\}\in\mathcal{E}_{S}}c_{x,y}\bigl{[}f(\eta^{x,y})-f(\eta)\bigr{]} as series of functions in C({0,1}S)C(\{0,1\}^{S}). Let us call h(η)h(\eta) the resulting function.

Since \mathcal{L} is a Markov generator, its graph is closed in C({0,1}S)×C({0,1}S)C(\{0,1\}^{S})\times C(\{0,1\}^{S}) (cf. [20, Chapter 1]). Hence, to conclude, it is enough to exhibit a sequence of local functions fnf_{n} such that fnf0\|f_{n}-f\|_{\infty}\to 0 and fnh0\|\mathcal{L}f_{n}-h\|_{\infty}\to 0 as n+n\to+\infty (indeed this implies that f𝒟()f\in\mathcal{D}(\mathcal{L}) and h=fh=\mathcal{L}f). To this aim, given η{0,1}S\eta\in\{0,1\}^{S}, we define η(n)(x)=η(x)\eta^{(n)}(x)=\eta(x) if xAn:={s1,s2,,sn}x\in A_{n}:=\{s_{1},s_{2},\dots,s_{n}\} and η(n)(x)=0\eta^{(n)}(x)=0 otherwise. We also set fn(η):=f(η(n))f_{n}(\eta):=f(\eta^{(n)}). Trivially fnf_{n} is a local function. Moreover, by (52), ffnxAnΔf(x)\|f-f_{n}\|_{\infty}\leq\sum_{x\not\in A_{n}}\Delta_{f}(x), which goes to zero as n+n\to+\infty since |f|<+{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}<+\infty. It remains to prove that fnh0\|\mathcal{L}f_{n}-h\|_{\infty}\to 0. To this aim we fix ε>0\varepsilon>0. Due to (53) and since |f|<+{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}_{*}<+\infty, we can fix NN such that

{x,y}S:{x,y}ANcx,yTx,yfε.\sum_{\{x,y\}\in\mathcal{E}_{S}:\\ \{x,y\}\not\subset A_{N}}c_{x,y}\|T_{x,y}f\|_{\infty}\leq\varepsilon\,. (54)

Note that Tx,yfnTx,yf\|T_{x,y}f_{n}\|_{\infty}\leq\|T_{x,y}f\|_{\infty}, hence (54) holds also with fnf_{n} instead of ff for any n=1,2,n=1,2,\dots. As a consequence and due to the series representation of hh and fn\mathcal{L}f_{n} (cf. (8)) we conclude that

hfn2ε+{x,y}S:{x,y}ANcx,y(Tx,yfTx,yfn)2ε+{x,y}S:{x,y}ANcx,yTx,yfTx,yfn.\begin{split}\|h-\mathcal{L}f_{n}\|_{\infty}&\leq 2\varepsilon+\|\sum_{\{x,y\}\in\mathcal{E}_{S}:\{x,y\}\subset A_{N}}c_{x,y}\left(T_{x,y}f-T_{x,y}f_{n}\right)\|_{\infty}\\ &\leq 2\varepsilon+\sum_{\{x,y\}\in\mathcal{E}_{S}:\{x,y\}\subset A_{N}}c_{x,y}\|T_{x,y}f-T_{x,y}f_{n}\|_{\infty}\,.\end{split} (55)

For nNn\geq N and {x,y}ANAn\{x,y\}\subset A_{N}\subset A_{n} we have that ηx,y\eta^{x,y} and (η(n))x,y(\eta^{(n)})^{x,y} coincide on AnA_{n} and also η\eta and η(n)\eta^{(n)} coincide on AnA_{n} and therefore by (52)

|Tx,yf(η)Tx,yf(n)(η)||f(ηx,y)f((η(n))x,y)|+|f(η)f(η(n))|2zAnΔf(z).|T_{x,y}f(\eta)-T_{x,y}f^{(n)}(\eta)|\leq|f(\eta^{x,y})-f((\eta^{(n)})^{x,y})|+|f(\eta)-f(\eta^{(n)})|\leq 2\sum_{z\not\in A_{n}}\Delta_{f}(z)\,. (56)

By combining (55) and (56) and setting $C(N):=\sum_{\{x,y\}\in\mathcal{E}_{S}:\{x,y\}\subset A_{N}}c_{x,y}$, we conclude that $\|h-\mathcal{L}f_{n}\|_{\infty}\leq 2\varepsilon+2C(N)\sum_{z\not\in A_{n}}\Delta_{f}(z)$ for $n\geq N$. Hence, by the bound $|||f|||<+\infty$, $\limsup_{n\to+\infty}\|h-\mathcal{L}f_{n}\|_{\infty}\leq 2\varepsilon$. By the arbitrariness of $\varepsilon$ we conclude that $\lim_{n\to+\infty}\|h-\mathcal{L}f_{n}\|_{\infty}=0$, thus proving the proposition.

Remark 7.13.

For later use we stress that, in the proof of Proposition 3.5, for any ff with |f|<+{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}<+\infty and |f|<+{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}_{*}<+\infty we have built a sequence fn𝒞𝒟()f_{n}\in\mathcal{C}\subset\mathcal{D}(\mathcal{L}) such that ffn0\|f-f_{n}\|_{\infty}\to 0 and ffn0\|\mathcal{L}f-\mathcal{L}f_{n}\|_{\infty}\to 0 as n+n\to+\infty.

7.5. Proof of Proposition 3.6

We set 𝒲:={fC({0,1}S):|S(t)f|<+ and |S(t)f|<+t+}\mathcal{W}:=\{f\in C(\{0,1\}^{S})\,:\,{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|S(t)f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}<+\infty\text{ and }{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|S(t)f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}_{*}<+\infty\;\forall t\in{\mathbb{Q}}_{+}\}. We start showing that if 𝒲\mathcal{W} is a core then also 𝒞\mathcal{C} is a core. If 𝒲\mathcal{W} is a core, then for any h𝒟()h\in\mathcal{D}(\mathcal{L}) there exists a sequence hn𝒲𝒟()h_{n}\in\mathcal{W}\subset\mathcal{D}(\mathcal{L}) such that hnh0\|h_{n}-h\|_{\infty}\to 0 and hnh0\|\mathcal{L}h_{n}-\mathcal{L}h\|_{\infty}\to 0 as n+n\to+\infty. To prove that 𝒞\mathcal{C} is a core, it is then enough to show that for any f𝒲f\in\mathcal{W} there exists a sequence fn𝒞f_{n}\in\mathcal{C} such that fnf0\|f_{n}-f\|_{\infty}\to 0 and fnf0\|\mathcal{L}f_{n}-\mathcal{L}f\|_{\infty}\to 0. This follows from the proof of Proposition 3.5 and in particular from Remark 7.13.

Now it remains to show that 𝒲\mathcal{W} is a core for \mathcal{L}. By a slight modification of [8, Proposition 3.3] it is enough to show the following: (i) 𝒞\mathcal{C} is dense in C({0,1}S)C(\{0,1\}^{S}); (ii) 𝒞𝒲𝒟()\mathcal{C}\subset\mathcal{W}\subset\mathcal{D}(\mathcal{L}) and (iii) S(t)f𝒲S(t)f\in\mathcal{W} for any f𝒞f\in\mathcal{C} and t+t\in{\mathbb{Q}}_{+}. The difference with [8, Proposition 3.3] is that in (iii) we require that S(t)f𝒲S(t)f\in\mathcal{W} only for t+t\in{\mathbb{Q}}_{+} and not for all t+t\in{\mathbb{R}}_{+}. The reader can easily check that the short proof of [8, Proposition 3.3] still works. Indeed, the invariance of 𝒲\mathcal{W} is used in the proof only to show that, given f𝒞f\in\mathcal{C}, fn:=1nk=0n2eλk/nS(k/n)ff_{n}:=\frac{1}{n}\sum_{k=0}^{n^{2}}e^{-\lambda k/n}S(k/n)f belongs to 𝒲\mathcal{W}.

Let us check the above properties (i), (ii) and (iii). Property (i) is well known [20]. The inclusion 𝒲𝒟()\mathcal{W}\subset\mathcal{D}(\mathcal{L}) in (ii) follows from Proposition 3.5 since |f|=|S(0)f|<+{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}={\left|\kern-1.07639pt\left|\kern-1.07639pt\left|S(0)f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}<+\infty and |f|=|S(0)f|<+{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}_{*}={\left|\kern-1.07639pt\left|\kern-1.07639pt\left|S(0)f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}_{*}<+\infty for any f𝒲f\in\mathcal{W}. If we prove that 𝒞𝒲\mathcal{C}\subset\mathcal{W} (thus completing (ii)), then automatically we have (iii) since, by definition of 𝒲\mathcal{W}, S(t)𝒲𝒲S(t)\mathcal{W}\subset\mathcal{W} for any t+t\in{\mathbb{Q}}_{+}. Hence, it remains to prove that 𝒞𝒲\mathcal{C}\subset\mathcal{W}. To this aim we first show that, given fC({0,1}S)f\in C(\{0,1\}^{S}),

ΔS(t)f(u)xSΔf(x)Px(Xt=u),\Delta_{S(t)f}(u)\leq\sum_{x\in S}\Delta_{f}(x)P_{x}(X_{t}=u)\,, (57)

where the probability PxP_{x} refers to the random walk starting at xx. To prove (57) suppose that σ\sigma and ξ\xi in {0,1}S\{0,1\}^{S} are equal except at uu. Recall that, by the graphical construction, S(t)f(σ)=𝔼[f(ηtσ[𝒦])]S(t)f(\sigma)={\mathbb{E}}[f(\eta_{t}^{\sigma}[\mathcal{K}])] and S(t)f(ξ)=𝔼[f(ηtξ[𝒦])]S(t)f(\xi)={\mathbb{E}}[f(\eta_{t}^{\xi}[\mathcal{K}])]. Since ηtσ[𝒦](x)=σ(Xtx[𝒦])\eta_{t}^{\sigma}[\mathcal{K}](x)=\sigma(X_{t}^{x}[\mathcal{K}]) and ηtξ[𝒦](x)=ξ(Xtx[𝒦])\eta_{t}^{\xi}[\mathcal{K}](x)=\xi(X_{t}^{x}[\mathcal{K}]), we get that ηtσ[𝒦](x)=ηtξ[𝒦](x)\eta_{t}^{\sigma}[\mathcal{K}](x)=\eta_{t}^{\xi}[\mathcal{K}](x) for all xx such that Xtx[𝒦]uX_{t}^{x}[\mathcal{K}]\not=u. Therefore, by (52),

|f(ηtσ[𝒦])f(ηtξ[𝒦])|xSΔf(x)𝟙(Xtx[𝒦]=u).|f(\eta_{t}^{\sigma}[\mathcal{K}])-f(\eta_{t}^{\xi}[\mathcal{K}])|\leq\sum_{x\in S}\Delta_{f}(x)\mathds{1}(X_{t}^{x}[\mathcal{K}]=u)\,.

It then follows that

|S(t)f(σ)S(t)f(ξ)|xSΔf(x)(Xtx[𝒦]=u)=xSΔf(x)Px(Xt=u).|S(t)f(\sigma)-S(t)f(\xi)|\leq\sum_{x\in S}\Delta_{f}(x){\mathbb{P}}(X_{t}^{x}[\mathcal{K}]=u)=\sum_{x\in S}\Delta_{f}(x)P_{x}(X_{t}=u)\,.

From the above estimate we get (57).

From (57) we then have

|S(t)f|=uSΔS(t)f(u)uSxSΔf(x)Px(Xt=u)=xSΔf(x)=|f|,\displaystyle{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|S(t)f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}=\sum_{u\in S}\Delta_{S(t)f}(u)\leq\sum_{u\in S}\sum_{x\in S}\Delta_{f}(x)P_{x}(X_{t}=u)=\sum_{x\in S}\Delta_{f}(x)={\left|\kern-1.07639pt\left|\kern-1.07639pt\left|f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}\,,
|S(t)f|=uScuΔS(t)f(u)uScuxSΔf(x)Px(Xt=u)=xSΔf(x)Ex[cXt].\displaystyle{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|S(t)f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}_{*}=\sum_{u\in S}c_{u}\Delta_{S(t)f}(u)\leq\sum_{u\in S}c_{u}\sum_{x\in S}\Delta_{f}(x)P_{x}(X_{t}=u)=\sum_{x\in S}\Delta_{f}(x)E_{x}\big{[}c_{X_{t}}\big{]}\,.

From the above two bounds and (12) we get immediately that 𝒞𝒲\mathcal{C}\subset\mathcal{W}.
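
The coupling behind (57) can also be visualised on a finite system: starting from two configurations that differ only at a site $u$ and using the same stirring clocks, at time $t$ the two evolutions disagree exactly at the unique site whose backward tracer equals $u$. The following Python sketch is purely illustrative (sites, rates and times are arbitrary examples) and checks this on one realization.

```python
import random

# Purely illustrative sketch of the coupling behind (57): two configurations
# differing only at site u, evolved with the SAME stirring clocks, disagree at
# time t exactly at the unique site whose backward tracer is u.
random.seed(3)
sites = range(6)
edges = [(i, j) for i in sites for j in sites if i < j]
rates = {e: random.uniform(0.1, 0.8) for e in edges}
t, u = 1.5, 2

rings = []
for (x, y), c in rates.items():
    s = 0.0
    while True:
        s += random.expovariate(c)
        if s > t:
            break
        rings.append((s, x, y))
rings.sort()

def evolve(eta):                  # forward stirring evolution up to time t
    eta = list(eta)
    for _, x, y in rings:
        eta[x], eta[y] = eta[y], eta[x]
    return eta

def tracer(x):                    # backward tracer X_t^x: rings traversed in reverse order
    for _, y, z in reversed(rings):
        if x == y:
            x = z
        elif x == z:
            x = y
    return x

sigma = [random.randint(0, 1) for _ in sites]
xi = list(sigma)
xi[u] = 1 - xi[u]
a, b = evolve(sigma), evolve(xi)
assert [x for x in sites if a[x] != b[x]] == [x for x in sites if tracer(x) == u]
print("discrepancy at time t carried exactly by the tracer of u")
```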

7.6. Proof of Lemma 3.9

We need to prove for any fCc(S)f\in C_{c}(S) that

limt0xS(Ptf(x)f(x)t𝕃~f(x))2=0.\lim_{t\downarrow 0}\sum_{x\in S}\Big{(}\frac{P_{t}f(x)-f(x)}{t}-\tilde{\mathbb{L}}f(x)\Big{)}^{2}=0\,. (58)

Since functions fCc(S)f\in C_{c}(S) are finite linear combinations of Kronecker’s functions, it is enough to consider the case f(x)=𝟙(x=x0)=δx,x0f(x)=\mathds{1}(x=x_{0})=\delta_{x,x_{0}} for a fixed x0Sx_{0}\in S. To this aim we write NN for the number of jumps performed by the random walk in the time window [0,t][0,t]. Recall that Ptf(x)=Ex[f(Xt)]P_{t}f(x)=E_{x}\big{[}f(X_{t})\big{]}. We write PxP_{x} for the probability on the path space associated to the random walk starting at xx. We have

Ptf(x)f(x)t𝕃~f(x)=f(x)[Px(N=0)1tcx]+yf(y)[Px(N=1,Xt=y)tcx,y]+Px(N2,Xt=x0)t.\begin{split}&\frac{P_{t}f(x)-f(x)}{t}-\tilde{\mathbb{L}}f(x)=f(x)\Big{[}\frac{P_{x}(N=0)-1}{t}-c_{x}\Big{]}\\ &+\sum_{y}f(y)\Big{[}\frac{P_{x}(N=1,\,X_{t}=y)}{t}-c_{x,y}\Big{]}+\frac{P_{x}(N\geq 2,\,X_{t}=x_{0})}{t}\,.\end{split}

As a consequence (recall that f(x)=δx,x0f(x)=\delta_{x,x_{0}}) to get (58) it is enough to prove the following:

limt0[ecx0t1tcx0]2=0,\displaystyle\lim_{t\downarrow 0}\Big{[}\frac{e^{-c_{x_{0}}t}-1}{t}-c_{x_{0}}\Big{]}^{2}=0\,, (59)
limt0xS[cx,x0(0t𝑑secxsecx0(ts)t1)]2=0,\displaystyle\lim_{t\downarrow 0}\sum_{x\in S}\Big{[}c_{x,x_{0}}\Big{(}\frac{\int_{0}^{t}ds\,e^{-c_{x}s}e^{-c_{x_{0}}(t-s)}}{t}-1\Big{)}\Big{]}^{2}=0\,, (60)
limt0xS1t2Px(Xt=x0,N2)2=0.\displaystyle\lim_{t\downarrow 0}\sum_{x\in S}\frac{1}{t^{2}}P_{x}\big{(}X_{t}=x_{0},N\geq 2\big{)}^{2}=0\,. (61)

(59) is trivial. (60) can be rewritten as (recall that cx,x0=cx0,xc_{x,x_{0}}=c_{x_{0},x})

limt0xScx0,x2Ft(x)=0,Ft(x):=(0t𝑑secxsecx0(ts)t1)2.\lim_{t\downarrow 0}\sum_{x\in S}c_{x_{0},x}^{2}F_{t}(x)=0\,,\qquad F_{t}(x):=\Big{(}\frac{\int_{0}^{t}ds\,e^{-c_{x}s}e^{-c_{x_{0}}(t-s)}}{t}-1\Big{)}^{2}\,.

Since Ft1\|F_{t}\|_{\infty}\leq 1 and limt0Ft(x)=0\lim_{t\downarrow 0}F_{t}(x)=0 for all xSx\in S, the above limit follows from the dominated convergence theorem if we show that xScx0,x2<+\sum_{x\in S}c_{x_{0},x}^{2}<+\infty. To this aim we observe that, since cx0=xScx0,x<+c_{x_{0}}=\sum_{x\in S}c_{x_{0},x}<+\infty, it must be supxScx0,x<+\sup_{x\in S}c_{x_{0},x}<+\infty. Then we can bound xScx0,x2cx0supxScx0,x<+\sum_{x\in S}c_{x_{0},x}^{2}\leq c_{x_{0}}\sup_{x\in S}c_{x_{0},x}<+\infty.

To prove (61) we use the symmetry of the random walk to get the bound

xSPx(Xt=x0,N2)2=xSPx0(Xt=x,N2)2[xSPx0(Xt=x,N2)]Px0(N2)=Px0(N2)2.\begin{split}&\sum_{x\in S}P_{x}\big{(}X_{t}=x_{0},N\geq 2\big{)}^{2}=\sum_{x\in S}P_{x_{0}}\big{(}X_{t}=x,N\geq 2\big{)}^{2}\\ &\leq\Big{[}\sum_{x\in S}P_{x_{0}}\big{(}X_{t}=x,N\geq 2\big{)}\Big{]}P_{x_{0}}\big{(}N\geq 2\big{)}=P_{x_{0}}\big{(}N\geq 2\big{)}^{2}\,.\end{split} (62)

Now we note that

1tPx0(N2)=xScx0,xGt(x),Gt(x):=1t0t𝑑secx0s(1ecx(ts)).\frac{1}{t}P_{x_{0}}\big{(}N\geq 2\big{)}=\sum_{x\in S}c_{x_{0},x}G_{t}(x)\,,\qquad G_{t}(x):=\frac{1}{t}\int_{0}^{t}ds\,e^{-c_{x_{0}}s}(1-e^{-c_{x}(t-s)})\,.

We have Gt1\|G_{t}\|_{\infty}\leq 1 and limt0Gt(x)=0\lim_{t\downarrow 0}G_{t}(x)=0. Since in addition xScx0,x=cx0<+\sum_{x\in S}c_{x_{0},x}=c_{x_{0}}<+\infty, by the dominated convergence theorem we conclude that limt0Px0(N2)/t=0\lim_{t\downarrow 0}P_{x_{0}}\big{(}N\geq 2\big{)}/t=0. As a byproduct of the above limit and (62) we get (61).

8. Graphical construction and Markov generator of SEP: proofs

We start with the proof of Lemma 5.5.

Proof of Lemma 5.5.

Trivially it is enough to prove the first bound. To this aim we argue by contradiction, assuming that Assumption SEP holds and that $c_{x}^{s}=+\infty$ for some $x\in S$. Without loss of generality, we can suppose that $x=s_{1}$ (see (3)). Due to the superposition property of independent Poisson point processes, under ${\mathbb{P}}$, $M_{n}(t):=\sum_{i=2}^{n}\mathcal{K}^{s}_{x,s_{i}}(t)$, $t\geq 0$, is a Poisson process with parameter $\lambda_{n}:=\sum_{i=2}^{n}c^{s}_{x,s_{i}}$. Hence, fixed $k\in{\mathbb{N}}$ and $t>0$, ${\mathbb{P}}(M_{n}(t)\leq k)=\sum_{j=0}^{k}e^{-\lambda_{n}t}(\lambda_{n}t)^{j}/j!$. Since by hypothesis $\lim_{n\to+\infty}\lambda_{n}=c^{s}_{x}=+\infty$, we conclude that $\lim_{n\to+\infty}{\mathbb{P}}(M_{n}(t)\leq k)=0$. Since in addition the sequence of events $n\mapsto\{M_{n}(t)\leq k\}$ is non-increasing, we get that ${\mathbb{P}}(\cap_{n\geq 2}\{M_{n}(t)\leq k\})=0$. Note that $\cap_{n\geq 2}\{M_{n}(t)\leq k\}$ means that $x$ has degree at most $k$ in the graph $\mathcal{G}_{t}(\mathcal{K})$. As a consequence, for any fixed $t>0$, ${\mathbb{P}}$–a.s. the vertex $x$ has infinite degree in the graph $\mathcal{G}_{t}(\mathcal{K})$, thus contradicting Assumption SEP. ∎
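
For the reader's convenience, the Poisson tail used in the above contradiction argument can be tabulated explicitly; the short Python sketch below is purely illustrative, with arbitrary values of $\lambda_{n}$, $t$ and $k$, and shows how fast ${\mathbb{P}}(M_{n}(t)\leq k)$ vanishes as $\lambda_{n}$ grows.

```python
import math

# Illustrative computation (not part of the proof): P(M_n(t) <= k) for a Poisson
# process with rate lambda_n, i.e. sum_{j <= k} e^{-l} l^j / j! with l = lambda_n * t.
# The values of lambda_n, t, k below are arbitrary examples.
def poisson_cdf(l, k):
    return sum(math.exp(-l) * l**j / math.factorial(j) for j in range(k + 1))

t, k = 1.0, 10
for lam in (1.0, 10.0, 50.0, 200.0):
    print(f"lambda_n = {lam:6.1f}   P(M_n(t) <= {k}) = {poisson_cdf(lam * t, k):.3e}")
```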

Recall the set $\Gamma_{*}\subset D_{\mathbb{N}}^{\mathcal{E}_{S}^{o}}$ introduced in Definition 5.6 and the sets $B_{r,x}(\mathcal{K})$ and $\mathcal{C}_{t,x}$ introduced in Definition 5.13.

Remark 8.1.

By the graphical construction and since $\mathcal{K}\in\Gamma_{*}$, $B_{r,x}(\mathcal{K})$ is a finite set and $\{\mathcal{K}\in\Gamma_{*}\,:\,B_{r,x}(\mathcal{K})=B\}$ is a Borel subset of $D_{\mathbb{N}}^{\mathcal{E}_{S}^{o}}$ for all $B\subset S$. Fix now $t$ with $rt_{0}<t\leq(r+1)t_{0}$. Then, for any $\sigma\in\{0,1\}^{S}$, the value $\eta_{t}^{\sigma}[\mathcal{K}](x)$ depends on $\sigma$ only through the restriction of $\sigma$ to $\mathcal{C}_{t,x}(\mathcal{K})=B_{r,x}(\mathcal{K})$ and, knowing that $\mathcal{C}_{t,x}(\mathcal{K})=B_{r,x}(\mathcal{K})=B$, it depends on $\mathcal{K}$ only through the values $\mathcal{K}_{y,z}(s)$ with $y\not=z$ in $B$ and $s\in[0,t]$.

The following lemma is the SEP version of Lemma 7.11:

Lemma 8.2.

Fixed t0t\geq 0, the map {0,1}S×Γ(σ,𝒦)ηtσ[𝒦]{0,1}S\{0,1\}^{S}\times\Gamma_{*}\ni(\sigma,\mathcal{K})\mapsto\eta^{\sigma}_{t}[\mathcal{K}]\in\{0,1\}^{S} is continuous in σ\sigma (for fixed 𝒦\mathcal{K}) and measurable in 𝒦\mathcal{K} (for fixed σ\sigma).

Proof.

For t=0t=0 we have ηtσ[𝒦]=σ\eta^{\sigma}_{t}[\mathcal{K}]=\sigma and the claim is trivially true. We fix t>0t>0 and take rr\in{\mathbb{N}} such that rt0<t(r+1)t0rt_{0}<t\leq(r+1)t_{0}.

We first prove the continuity in σ\sigma for fixed 𝒦\mathcal{K}. By (4), given N+N\in{\mathbb{N}}_{+}, we have d(ηtσ[𝒦],ηtσ[𝒦])2Nd(\eta^{\sigma}_{t}[\mathcal{K}],\eta^{\sigma^{\prime}}_{t}[\mathcal{K}])\leq 2^{-N} if ηtσ[𝒦](sn)=ηtσ[𝒦](sn)\eta^{\sigma}_{t}[\mathcal{K}](s_{n})=\eta^{\sigma^{\prime}}_{t}[\mathcal{K}](s_{n}) for all n=1,2,,Nn=1,2,\dots,N. Recall Definition 5.13 and set A(𝒦):=n=1NBr,sn(𝒦)A(\mathcal{K}):=\cup_{n=1}^{N}B_{r,s_{n}}(\mathcal{K}). By Remark 8.1 we have ηtσ[𝒦](sn)=ηtσ[𝒦](sn)\eta^{\sigma}_{t}[\mathcal{K}](s_{n})=\eta^{\sigma^{\prime}}_{t}[\mathcal{K}](s_{n}) for all n=1,2,,Nn=1,2,\dots,N whenever σ\sigma and σ\sigma^{\prime} coincide on A(𝒦)A(\mathcal{K}), which is automatically satisfied if d(σ,σ)d(\sigma,\sigma^{\prime}) is small enough since A(𝒦)A(\mathcal{K}) is finite. This proves the continuity in σ\sigma.

We now prove the measurability in 𝒦\mathcal{K} for fixed σ\sigma. We just need to prove that, fixed σ{0,1}S\sigma\in\{0,1\}^{S} and xSx\in S, the set {𝒦Γ:ηtσ[𝒦](x)=1}\{\mathcal{K}\in\Gamma_{*}\,:\,\eta_{t}^{\sigma}[\mathcal{K}](x)=1\} is measurable in DSoD_{{\mathbb{N}}}^{\mathcal{E}_{S}^{o}}. Given BSB\subset S finite, call 𝒜B\mathcal{A}_{B} the event in DBoD_{{\mathbb{N}}}^{\mathcal{E}^{o}_{B}} that the SEP on BB built by the standard graphical construction on BB has value 11 at xx at time tt (Bo\mathcal{E}^{o}_{B} is given by the pairs (y,z)(y,z) with yzy\not=z in BB). Since in this case all is finite, 𝒜B\mathcal{A}_{B} is Borel in DBoD_{{\mathbb{N}}}^{\mathcal{E}^{o}_{B}}. We define 𝒜B\mathcal{A}_{B}^{*} as the Borel set in DSoD_{{\mathbb{N}}}^{\mathcal{E}^{o}_{S}} obtained as product set of 𝒜B\mathcal{A}_{B} with DSoBoD_{{\mathbb{N}}}^{\mathcal{E}^{o}_{S}\setminus\mathcal{E}^{o}_{B}}. Then, see Remark 8.1,

\{\mathcal{K}\in\Gamma_{*}\,:\,\eta_{t}^{\sigma}[\mathcal{K}](x)=1\}=\cup_{B\subset S\text{ finite}}\left(\{\mathcal{K}\in\Gamma_{*}\,:\,B_{r,x}(\mathcal{K})=B\}\cap\mathcal{A}_{B}^{*}\right)\,.

By Remark 8.1 the set {𝒦Γ:Br,x(𝒦)=B}\{\mathcal{K}\in\Gamma_{*}\,:\,B_{r,x}(\mathcal{K})=B\} is Borel in DSoD_{{\mathbb{N}}}^{\mathcal{E}_{S}^{o}}, thus allowing to conclude. ∎

Lemma 8.3.

For each $\sigma\in\{0,1\}^{S}$ and $\mathcal{K}\in\Gamma_{*}$, the path $\eta^{\sigma}_{\cdot}[\mathcal{K}]:=\bigl(\eta^{\sigma}_{t}[\mathcal{K}]\bigr)_{t\geq 0}$ belongs to $D_{\{0,1\}^{S}}$. Moreover, for fixed $\sigma\in\{0,1\}^{S}$, the map

Γ𝒦ησ[𝒦]D{0,1}S\Gamma_{*}\ni\mathcal{K}\mapsto\eta^{\sigma}_{\cdot}[\mathcal{K}]\in D_{\{0,1\}^{S}} (63)

is measurable in 𝒦\mathcal{K}.

Proof.

Let us show that ησ[𝒦]\eta^{\sigma}_{\cdot}[\mathcal{K}] belongs to D{0,1}SD_{\{0,1\}^{S}}. We start with the right continuity. Fix t0t\geq 0 and let rt0<t(r+1)t0rt_{0}<t\leq(r+1)t_{0} with r{1}r\in{\mathbb{N}}\cup\{-1\}. Due to (4) we just need to show that limutηuσ[𝒦](x)=ηtσ[𝒦](x)\lim_{u\downarrow t}\eta^{\sigma}_{u}[\mathcal{K}](x)=\eta^{\sigma}_{t}[\mathcal{K}](x) for any xSx\in S. By Remark 8.1, as 𝒦Γ\mathcal{K}\in\Gamma_{*}, the set B:=Br+1,x(𝒦)B:=B_{r+1,x}(\mathcal{K}) is finite and ηuσ[𝒦](x)=ηtσ[𝒦](x)\eta^{\sigma}_{u}[\mathcal{K}](x)=\eta^{\sigma}_{t}[\mathcal{K}](x) if 𝒦y,z\mathcal{K}_{y,z} has no jump time in (t,u](t,u] for all (y,z)So(y,z)\in\mathcal{E}_{S}^{o} with y,zBy,z\in B. Since BB is finite and the jump times of 𝒦y,z\mathcal{K}_{y,z} form a locally finite set, we conclude that the above condition is satisfied for uu sufficiently close to tt. This proves the right continuity. To prove that ησ[𝒦]\eta^{\sigma}_{\cdot}[\mathcal{K}] has left limit in t>0t>0, by (4) we just need to show that, for any xSx\in S, ησ[𝒦](x)\eta^{\sigma}_{\cdot}[\mathcal{K}](x) has left limit in t>0t>0, i.e. it is constant in the time window (u,t)(u,t) for some u<tu<t. By Remark 8.1 and since 𝒦Γ\mathcal{K}\in\Gamma_{*} this follows from the fact that, by taking uu close to tt, for all (y,z)So(y,z)\in\mathcal{E}_{S}^{o} with y,zBy,z\in B the path 𝒦y,z\mathcal{K}_{y,z} has no jump in (u,t)(u,t) (as BB is finite).

The proof of the measurability of the map (63) follows the same steps as the proof of the measurability of the map (42) (just replacing there Lemma 7.11 by Lemma 8.2). ∎

8.1. Proof of Propositions 5.8 and 5.10

The proof of Proposition 5.8 is identical to the proof of Proposition 3.2 once Lemma 7.11 is replaced by Lemma 8.2, with the exception of the derivation of (43) (which can anyway be obtained directly from the graphical construction). The proof of Proposition 5.10 is identical to the proof of Proposition 3.3, again replacing Lemma 7.11 by Lemma 8.2.

8.2. Proof of Proposition 5.11

The proof largely follows [11, Appendix B]. Since the notation there is slightly different and some steps have to be changed, we give the proof for completeness.

Below 𝒦\mathcal{K} will always vary in Γ\Gamma_{*}, without further mention. Given t(0,t0]t\in(0,t_{0}] we denote by 𝒢t(𝒦)\mathcal{G}_{t}(\mathcal{K}) the undirected graph with vertex set SS and edge set {{x,y}S:𝒦x,ys(t)>0}\{\{x,y\}\in\mathcal{E}_{S}\,:\,\mathcal{K}^{s}_{x,y}(t)>0\}. As 𝒢t(𝒦)\mathcal{G}_{t}(\mathcal{K}) is a subgraph of 𝒢t0(𝒦)\mathcal{G}_{t_{0}}(\mathcal{K}), the graph 𝒢t(𝒦)\mathcal{G}_{t}(\mathcal{K}) has only connected components of finite cardinality. Moreover, as tt0t\leq t_{0}, it is simple to check that ηtσ[𝒦]\eta_{t}^{\sigma}[\mathcal{K}] can be obtained by the graphical construction of Section 5 but working with the graph 𝒢t(𝒦)\mathcal{G}_{t}(\mathcal{K}) instead of 𝒢t0(𝒦)\mathcal{G}_{t_{0}}(\mathcal{K}).

Let f:{0,1}Sf:\{0,1\}^{S}\to{\mathbb{R}} be a local function such that f(η)f(\eta) is defined in terms only of η(x)\eta(x) with xAx\in A and ASA\subset S finite. We set A:={{x,y}S:{x,y}A}\mathcal{E}_{A}:=\{\{x,y\}\in\mathcal{E}_{S}\,:\,\{x,y\}\cap A\not=\emptyset\}. By Lemma 5.5 we have

cAs:={x,y}Acx,ysxAyScx,ys=xAcxs<+.c^{s}_{A}:=\sum_{\{x,y\}\in\mathcal{E}_{A}}c^{s}_{x,y}\leq\sum_{x\in A}\sum_{y\in S}c^{s}_{x,y}=\sum_{x\in A}c^{s}_{x}<+\infty\,. (64)

By (64) it is simple to check that the r.h.s. of (23) is an absolutely convergent series in $C(\{0,1\}^{S})$, thus defining a function in $C(\{0,1\}^{S})$ that we denote by $\hat{\mathcal{L}}f$. Hence we just need to prove that $\mathcal{L}f=\hat{\mathcal{L}}f$.

Since $\mathcal{K}^{s}_{A}(t):=\sum_{\{x,y\}\in\mathcal{E}_{A}}\mathcal{K}^{s}_{x,y}(t)$ is a Poisson random variable with finite parameter $c^{s}_{A}t$, we have ${\mathbb{P}}(\mathcal{K}\,:\,\mathcal{K}^{s}_{A}(t)\geq 2)=1-e^{-c^{s}_{A}t}(1+c^{s}_{A}t)\leq Ct^{2}$. When $\mathcal{K}^{s}_{A}(t)=1$, we define the pair $\{x_{0},y_{0}\}$ as the unique edge in $\mathcal{E}_{A}$ such that $\mathcal{K}^{s}_{x_{0},y_{0}}(t)=1$. To have a uniquely defined labelling, as in the proof of Proposition 3.4, if the pair has only one point in $A$, then we call this point $x_{0}$ and the other one $y_{0}$; otherwise, we call $x_{0}$ the minimal (w.r.t. the enumeration (3)) point of the pair.

Claim 8.4.

Let FF be the event that (i) 𝒦As(t)=1\mathcal{K}^{s}_{A}(t)=1 and (ii) {x0,y0}\{x_{0},y_{0}\} is not a connected component of 𝒢t(𝒦)\mathcal{G}_{t}(\mathcal{K}). Then (F)=o(t){\mathbb{P}}(F)=o(t).

Proof of Claim 8.4.

We first show that FGF\subset G, where

G={𝒦As(t)=1,x0A,y0A,zS(A{y0}) with 𝒦y0,zs(t)1}.G=\bigl{\{}\mathcal{K}^{s}_{A}(t)=1\,,\;x_{0}\in A\,,\;y_{0}\not\in A\,,\;\\ \exists z\in S\setminus(A\cup\{y_{0}\})\text{ with }\mathcal{K}^{s}_{y_{0},z}(t)\geq 1\bigr{\}}\,.

To this aim suppose first that 𝒦As(t)=1\mathcal{K}^{s}_{A}(t)=1 and x0,y0Ax_{0},y_{0}\in A. Then {x0,y0}\{x_{0},y_{0}\} must be a connected component in 𝒢t(𝒦)\mathcal{G}_{t}(\mathcal{K}) (otherwise we would contradict 𝒦As(t)=1\mathcal{K}^{s}_{A}(t)=1). Hence, FF implies that x0Ax_{0}\in A and y0Ay_{0}\not\in A. By FF, {x0,y0}\{x_{0},y_{0}\} is not a connected component of 𝒢t(𝒦)\mathcal{G}_{t}(\mathcal{K}), and therefore there exists a point zS{x0,y0}z\in S\setminus\{x_{0},y_{0}\} such that 𝒦x0,zs(t)1\mathcal{K}^{s}_{x_{0},z}(t)\geq 1 or 𝒦y0,zs(t)1\mathcal{K}^{s}_{y_{0},z}(t)\geq 1. The first case cannot occur as 𝒦As(t)=1\mathcal{K}^{s}_{A}(t)=1, x0Ax_{0}\in A and 𝒦x0,y0s(t)1\mathcal{K}^{s}_{x_{0},y_{0}}(t)\geq 1. By the same reason, in the second case it must be zAz\not\in A. Hence, there exists zS(A{y0})z\in S\setminus(A\cup\{y_{0}\}) such that 𝒦y0,zs(t)1\mathcal{K}^{s}_{y_{0},z}(t)\geq 1, thus concluding the proof that FGF\subset G.

As FGF\subset G to prove that (F)=o(t){\mathbb{P}}(F)=o(t) it is enough to show that (G)=o(t){\mathbb{P}}(G)=o(t). To this aim we first estimate (G){\mathbb{P}}(G) by

(G)xAySA(𝒦x,ys(t)=1,zS(A{y})𝒦y,zs(t)1)txAyScx,ysecx,yst(1ecyst).\begin{split}{\mathbb{P}}(G)&\leq\sum_{x\in A}\sum_{y\in S\setminus A}{\mathbb{P}}(\mathcal{K}^{s}_{x,y}(t)=1\,,\sum_{z\in S\setminus(A\cup\{y\})}\mathcal{K}^{s}_{y,z}(t)\geq 1)\\ &\leq t\sum_{x\in A}\sum_{y\in S}c^{s}_{x,y}e^{-c^{s}_{x,y}t}(1-e^{-c^{s}_{y}t})\,.\end{split}

By applying the dominated convergence theorem to the measure giving weight cx,ysc_{x,y}^{s} to the pair (x,y)(x,y) with xAx\in A and ySy\in S and by using (64), we get limt0(G)/t=0\lim_{t\downarrow 0}{\mathbb{P}}(G)/t=0. ∎

We define HH as the event that (i) 𝒦As(t)=1\mathcal{K}^{s}_{A}(t)=1 and (ii) {x0,y0}\{x_{0},y_{0}\} is a connected component of 𝒢t(𝒦)\mathcal{G}_{t}(\mathcal{K}). Since (𝒦As(t)2)Ct2{\mathbb{P}}(\mathcal{K}^{s}_{A}(t)\geq 2)\leq Ct^{2} and due to Claim 8.4 we get

({𝒦As(t)=0}H)=1o(t).{\mathbb{P}}(\{\mathcal{K}^{s}_{A}(t)=0\}\cup H)=1-o(t)\,. (65)

We set $\mathcal{E}_{A}^{o}:=\{(x,y)\in\mathcal{E}_{S}^{o}\,:\,\{x,y\}\in\mathcal{E}_{A}\}$. Given $(x,y)\in\mathcal{E}_{A}^{o}$ we set

Wx,y:={𝒦As(t)=1 and {x0,y0}={x,y}}{𝒦x,y(t)=1}.W_{x,y}:=\{\mathcal{K}^{s}_{A}(t)=1\text{ and }\{x_{0},y_{0}\}=\{x,y\}\,\}\cap\{\mathcal{K}_{x,y}(t)=1\}\,.\,

Trivially, by the graphical construction, if 𝒦As(t)=0\mathcal{K}^{s}_{A}(t)=0, then ηtσ[𝒦](z)=σ(z)\eta^{\sigma}_{t}[\mathcal{K}](z)=\sigma(z) for all zAz\in A. On the other hand, if Wx,yHW_{x,y}\cap H takes place, then ηtσ[𝒦](z)=σx,y(z)\eta^{\sigma}_{t}[\mathcal{K}](z)=\sigma^{x,y}(z) for all zAz\in A if σ(x)=1\sigma(x)=1 and σ(y)=0\sigma(y)=0, otherwise ηtσ[𝒦](z)=σ(z)\eta^{\sigma}_{t}[\mathcal{K}](z)=\sigma(z) for all zAz\in A. By (65) and the above observations and since Stf(σ):=𝔼[f(ηtσ[𝒦])]S_{t}f(\sigma):={\mathbb{E}}\left[f\big{(}\eta^{\sigma}_{t}[\mathcal{K}]\big{)}\right], we can write

S(t)f(σ)f(σ)=(x,y)Aoσ(x)(1σ(y))[f(σx,y)f(σ)](HWx,y)+o(t).S(t)f(\sigma)-f(\sigma)=\sum_{(x,y)\in\mathcal{E}^{o}_{A}}\sigma(x)(1-\sigma(y))[f(\sigma^{x,y})-f(\sigma)]{\mathbb{P}}(H\cap W_{x,y})+o(t)\,.

As (F)=o(t){\mathbb{P}}(F)=o(t) and ff is bounded we can rewrite the above r.h.s. as

\begin{split}&\sum_{(x,y)\in\mathcal{E}^{o}_{A}}\sigma(x)(1-\sigma(y))[f(\sigma^{x,y})-f(\sigma)]\,{\mathbb{P}}(W_{x,y})+o(t)\\&=t\sum_{(x,y)\in\mathcal{E}^{o}_{A}}\sigma(x)(1-\sigma(y))[f(\sigma^{x,y})-f(\sigma)]\,c_{x,y}e^{-c^{s}_{A}t}+o(t)\,.\end{split}

As $\lim_{t\downarrow 0}o(t)/t=0$ uniformly in $\sigma$, by the dominated convergence theorem (use (64)) we can conclude that $\mathcal{L}f=\hat{\mathcal{L}}f$.

8.3. Proof of Proposition 5.12

The proof can be obtained by slight modifications from the proof of Proposition 3.5 for the symmetric case. We give it for the reader’s convenience. Let ff be as in Proposition 5.12 and let Tx,yf(η):=f(ηx,y)f(η)T_{x,y}f(\eta):=f(\eta^{x,y})-f(\eta). Since |f(ηx,y)f(η)|Δf(x)+Δf(y)|f(\eta^{x,y})-f(\eta)|\leq\Delta_{f}(x)+\Delta_{f}(y) by (52), Tx,yfΔf(x)+Δf(y)\|T_{x,y}f\|_{\infty}\leq\Delta_{f}(x)+\Delta_{f}(y) and

xSyScx,yη(x)(1η(y))Tx,yf{x,y}Scx,ysTx,yf{x,y}Scx,ys(Δf(x)+Δf(y))=xScxsΔf(x)|f|<+.\begin{split}&\sum_{x\in S}\sum_{y\in S}\|c_{x,y}\eta(x)(1-\eta(y))T_{x,y}f\|_{\infty}\leq\sum_{\{x,y\}\in\mathcal{E}_{S}}c^{s}_{x,y}\|T_{x,y}f\|_{\infty}\\ &\leq\sum_{\{x,y\}\in\mathcal{E}_{S}}c^{s}_{x,y}(\Delta_{f}(x)+\Delta_{f}(y))=\sum_{x\in S}c^{s}_{x}\Delta_{f}(x)\leq{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}_{\star}<+\infty\,.\end{split} (66)

This proves the absolute convergence of xSyScx,yη(x)(1η(y))Tx,yf(η)\sum_{x\in S}\sum_{y\in S}c_{x,y}\eta(x)(1-\eta(y))T_{x,y}f(\eta) as series of functions in C({0,1}S)C(\{0,1\}^{S}). Let us call h(η)h(\eta) the resulting function.

Since the graph of $\mathcal{L}$ is closed, to conclude it is enough to exhibit a sequence of local functions $f_{n}$ such that $\|f_{n}-f\|_{\infty}\to 0$ and $\|\mathcal{L}f_{n}-h\|_{\infty}\to 0$ as $n\to+\infty$. To this aim define $A_{n}$, $\eta^{(n)}$ and $f_{n}(\eta):=f(\eta^{(n)})$ as in the proof of Proposition 3.5. By what was proved there, we know that $f_{n}$ is a local function and $\|f-f_{n}\|_{\infty}\leq\sum_{x\not\in A_{n}}\Delta_{f}(x)$, which goes to zero as $n\to+\infty$ since $|||f|||<+\infty$. It remains to prove that $\|\mathcal{L}f_{n}-h\|_{\infty}\to 0$. We fix $\varepsilon>0$. Due to (66) we can fix $N$ such that

(x,y)S×S:{x,y}ANcx,yη(x)(1η(y))Tx,yf{x,y}S:{x,y}ANcx,ysTx,yfε.\sum_{\begin{subarray}{c}(x,y)\in S\times S:\\ \{x,y\}\not\subset A_{N}\end{subarray}}\|c_{x,y}\eta(x)(1-\eta(y))T_{x,y}f\|_{\infty}\leq\sum_{\begin{subarray}{c}\{x,y\}\in\mathcal{E}_{S}:\\ \{x,y\}\not\subset A_{N}\end{subarray}}c^{s}_{x,y}\|T_{x,y}f\|_{\infty}\leq\varepsilon\,. (67)

Since Tx,yfnTx,yf\|T_{x,y}f_{n}\|_{\infty}\leq\|T_{x,y}f\|_{\infty}, (67) holds also with fnf_{n} instead of ff for any n=1,2,n=1,2,\dots. As a consequence, due to the series representation of hh and fn\mathcal{L}f_{n} (cf. (23)) and due to (56), we conclude that for nNn\geq N

hfn2ε+{x,y}S:{x,y}ANcx,ysTx,yfTx,yfn2ε+2{x,y}S:{x,y}ANcx,yszAnΔf(z).\begin{split}\|h-\mathcal{L}f_{n}\|_{\infty}&\leq 2\varepsilon+\sum_{\{x,y\}\in\mathcal{E}_{S}:\{x,y\}\subset A_{N}}c^{s}_{x,y}\|T_{x,y}f-T_{x,y}f_{n}\|_{\infty}\\ &\leq 2\varepsilon+2\sum_{\{x,y\}\in\mathcal{E}_{S}:\{x,y\}\subset A_{N}}c^{s}_{x,y}\sum_{z\not\in A_{n}}\Delta_{f}(z)\,.\end{split} (68)

Setting C(N):={x,y}S:{x,y}ANcx,ysC(N):=\sum_{\{x,y\}\in\mathcal{E}_{S}:\{x,y\}\subset A_{N}}c^{s}_{x,y}, we have proved that hfn2ε+2C(N)zAnΔf(z)\|h-\mathcal{L}f_{n}\|_{\infty}\leq 2\varepsilon+2C(N)\sum_{z\not\in A_{n}}\Delta_{f}(z) for nNn\geq N. Hence, by the bound |f|<+{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}<+\infty and the arbitrariness of ε\varepsilon, limn+hfn=0\lim_{n\to+\infty}\|h-\mathcal{L}f_{n}\|_{\infty}=0.

Remark 8.5.

For later use we stress that, in the above proof, for any ff with |f|<+{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}<+\infty and |f|<+{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}_{\star}<+\infty we have built a sequence fn𝒞𝒟()f_{n}\in\mathcal{C}\subset\mathcal{D}(\mathcal{L}) such that ffn0\|f-f_{n}\|_{\infty}\to 0 and ffn0\|\mathcal{L}f-\mathcal{L}f_{n}\|_{\infty}\to 0 as n+n\to+\infty.

8.4. Proof of Proposition 5.14

We set 𝒲:={fC({0,1}S):|S(t)f|<+ and |S(t)f|<+t+}\mathcal{W}:=\{f\in C(\{0,1\}^{S})\,:\,{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|S(t)f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}<+\infty\text{ and }{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|S(t)f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}_{\star}<+\infty\;\forall t\in{\mathbb{R}}_{+}\}. As in the proof of Proposition 3.6, due to Remark 8.5, we get that if 𝒲\mathcal{W} is a core then also 𝒞\mathcal{C} is a core. Moreover, due to [8, Proposition 3.3], to show that 𝒲\mathcal{W} is a core it is enough to prove the following: (i) 𝒞\mathcal{C} is dense in C({0,1}S)C(\{0,1\}^{S}); (ii) 𝒞𝒲𝒟()\mathcal{C}\subset\mathcal{W}\subset\mathcal{D}(\mathcal{L}) and (iii) S(t)f𝒲S(t)f\in\mathcal{W} for any f𝒞f\in\mathcal{C} and t+t\in{\mathbb{R}}_{+}. Having Proposition 5.12, the only nontrivial property to check is that 𝒞𝒲\mathcal{C}\subset\mathcal{W}. So we focus on this.

First we show that, for any fC({0,1}S)f\in C(\{0,1\}^{S}) and xSx\in S, it holds

ΔS(t)f(x)ySΔf(y)(x𝒞t,y(𝒦)).\Delta_{S(t)f}(x)\leq\sum_{y\in S}\Delta_{f}(y){\mathbb{P}}(x\in\mathcal{C}_{t,y}(\mathcal{K}))\,. (69)

Given $x\in S$, let $\xi,\sigma\in\{0,1\}^{S}$ be equal except possibly at $x$. Then, denoting by ${\mathbb{E}}$ the expectation w.r.t. ${\mathbb{P}}$, we get

|S(t)f(ξ)S(t)f(σ)|=|𝔼[f(ηtσ[𝒦])]𝔼[f(ηtξ[𝒦])]|𝔼[|f(ηtσ[𝒦])f(ηtξ[𝒦])|].\begin{split}\left|S(t)f(\xi)-S(t)f(\sigma)\right|&=\big{|}{\mathbb{E}}[f(\eta^{\sigma}_{t}[\mathcal{K}])]-{\mathbb{E}}[f(\eta^{\xi}_{t}[\mathcal{K}])]\big{|}\\ &\leq{\mathbb{E}}\big{[}|f(\eta^{\sigma}_{t}[\mathcal{K}])-f(\eta^{\xi}_{t}[\mathcal{K}])|\big{]}\,.\end{split} (70)

By Remark 8.1 and the graphical construction of the SEP, we get that $\eta^{\sigma}_{t}[\mathcal{K}](y)=\eta^{\xi}_{t}[\mathcal{K}](y)$ if $x\not\in\mathcal{C}_{t,y}(\mathcal{K})$. Hence, by (52), we can bound the last term in (70) by

𝔼[y:x𝒞t,y(𝒦)Δf(y)]=ySΔf(y)(x𝒞t,y(𝒦)).{\mathbb{E}}\big{[}\sum_{y:x\in\mathcal{C}_{t,y}(\mathcal{K})}\Delta_{f}(y)\big{]}=\sum_{y\in S}\Delta_{f}(y){\mathbb{P}}(x\in\mathcal{C}_{t,y}(\mathcal{K}))\,. (71)

By combining (70) and (71) we have

ΔS(t)f(x)ySΔf(y)(x𝒞t,y(𝒦)).\Delta_{S(t)f}(x)\leq\sum_{y\in S}\Delta_{f}(y){\mathbb{P}}(x\in\mathcal{C}_{t,y}(\mathcal{K}))\,. (72)

From (72) we get

|S(t)f|xSySΔf(y)(x𝒞t,y(𝒦))=ySΔf(y)𝔼[|𝒞t,y(𝒦)|],\displaystyle{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|S(t)f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}\leq\sum_{x\in S}\sum_{y\in S}\Delta_{f}(y){\mathbb{P}}(x\in\mathcal{C}_{t,y}(\mathcal{K}))=\sum_{y\in S}\Delta_{f}(y){\mathbb{E}}\big{[}|\mathcal{C}_{t,y}(\mathcal{K})|\big{]}\,, (73)
|S(t)f|xScxsySΔf(y)(x𝒞t,y(𝒦))=ySΔf(y)𝔼[x𝒞t,y(𝒦)cxs].\displaystyle{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|S(t)f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}_{\star}\leq\sum_{x\in S}c_{x}^{s}\sum_{y\in S}\Delta_{f}(y){\mathbb{P}}(x\in\mathcal{C}_{t,y}(\mathcal{K}))=\sum_{y\in S}\Delta_{f}(y){\mathbb{E}}\big{[}\sum_{x\in\mathcal{C}_{t,y}(\mathcal{K})}c_{x}^{s}\big{]}\,. (74)

Given f𝒞f\in\mathcal{C} we have Δf(y)=0\Delta_{f}(y)=0 for all yy except a finite set. Hence, we get |S(t)f|<+{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|S(t)f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}<+\infty due to (24) and (73), while |S(t)f|<+{\left|\kern-1.07639pt\left|\kern-1.07639pt\left|S(t)f\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}_{\star}<+\infty due to (25) and (74). This completes the proof that 𝒞𝒲\mathcal{C}\subset\mathcal{W}.

8.5. Proof of Proposition 5.16

As already observed after Proposition 5.14, (24) is equivalent to (26) and (25) is equivalent to (27). Both (26) and (27) can be rewritten as

𝔼[zBr,xaz]<+r,xS,{\mathbb{E}}\big{[}\sum_{z\in B_{r,x}}a_{z}\big{]}<+\infty\qquad\forall r\in{\mathbb{N}}\,,\;x\in S\,, (75)

where az:=1a_{z}:=1 for (26) and az:=czsa_{z}:=c_{z}^{s} for (27).

By definition of Br,xB_{r,x}, given x0Sx_{0}\in S we have the following equality of events (where 𝒦\mathcal{K} is the random object):

{x0Br,x}=x1,x2,,xrS(Fx0,x10Fx1,x21Fxr,xr),\{x_{0}\in B_{r,x}\}=\cup_{x_{1},x_{2},\dots,x_{r}\in S}\left(F^{0}_{x_{0},x_{1}}\cap F^{1}_{x_{1},x_{2}}\cap\cdots\cap F^{r}_{x_{r},x}\right)\,, (76)

where Fa,b0:={ab in 𝒢t00}F^{0}_{a,b}:=\{a\longleftrightarrow b\text{ in }\mathcal{G}^{0}_{t_{0}}\}, Fa,b1:={ab in 𝒢t01}F^{1}_{a,b}:=\{a\longleftrightarrow b\text{ in }\mathcal{G}^{1}_{t_{0}}\} and so on. We point out that when r=0r=0 (76) has to be thought of as {x0B0,x}=Fx0,x0\{x_{0}\in B_{0,x}\}=F^{0}_{x_{0},x}. Since under {\mathbb{P}} the graphs 𝒢t00\mathcal{G}^{0}_{t_{0}}, 𝒢t01\mathcal{G}^{1}_{t_{0}}, 𝒢t02\mathcal{G}^{2}_{t_{0}},… are i.i.d. and since (Fa,bi)=p(a,b){\mathbb{P}}(F^{i}_{a,b})=p(a,b), by (76) and a union bound we have

(x0Br,x)x1,x2,,xrSp(x0,x1)p(x1,x2)p(xr1,xr)p(xr,x).{\mathbb{P}}\big{(}x_{0}\in B_{r,x}\big{)}\leq\sum_{x_{1},x_{2},\dots,x_{r}\in S}p(x_{0},x_{1})p(x_{1},x_{2})\dots p(x_{r-1},x_{r})p(x_{r},x)\,. (77)

Hence we get

𝔼[x0Br,xax0]=x0S(x0Br,x)ax0x0,x1,x2,xrSax0p(x0,x1)p(x1,x2)p(xr1,xr)p(xr,x)=yr,yr1,,y0Sayrp(yr,yr1)p(yr1,yr2)p(y1,y0)p(y0,x).\begin{split}&{\mathbb{E}}\big{[}\sum_{x_{0}\in B_{r,x}}a_{x_{0}}\big{]}=\sum_{x_{0}\in S}{\mathbb{P}}\big{(}x_{0}\in B_{r,x}\big{)}a_{x_{0}}\\ &\leq\sum_{x_{0},x_{1},x_{2}\dots,x_{r}\in S}a_{x_{0}}p(x_{0},x_{1})p(x_{1},x_{2})\dots p(x_{r-1},x_{r})p(x_{r},x)\\ &=\sum_{y_{r},y_{r-1},\dots,y_{0}\in S}a_{y_{r}}p(y_{r},y_{r-1})p(y_{r-1},y_{r-2})\dots p(y_{1},y_{0})p(y_{0},x)\,.\end{split} (78)

Trivially, by setting n=r+1n=r+1 and wi+1:=yiw_{i+1}:=y_{i} and using that p(a,b)=p(b,a)p(a,b)=p(b,a), the last sum can be written as w1,w2,,wnSp(x,w1)p(w1,w2)p(wn1,wn)awn\sum_{w_{1},w_{2},\dots,w_{n}\in S}p(x,w_{1})p(w_{1},w_{2})\cdots p(w_{n-1},w_{n})a_{w_{n}}.

By the above observations, we get (24) if (29) holds for all $x\in S$ and $n\in{\mathbb{N}}_{+}$, and similarly we get (25) if (31) holds for all $x\in S$ and $n\in{\mathbb{N}}_{+}$. Finally, it is trivial to check that (29) holds for all $x\in S$ and $n\in{\mathbb{N}}_{+}$ if and only if (30) holds for all $x\in S$ and $n\in{\mathbb{N}}$. The same equivalence can be easily checked for (31) and (32).
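
We point out that, on a finite truncation, the iterated sums appearing in (29)-(32) are simply repeated applications of the symmetric kernel $p$ to the weight vector $a$ (with $a_{z}=1$, resp. $a_{z}=c_{z}^{s}$). The following Python sketch is purely illustrative (kernel, weights and truncation size are arbitrary examples) and computes them directly.

```python
import random

# Illustrative finite-truncation computation (not part of the proof) of the
# iterated-kernel sums in (29)-(32): after n applications of the symmetric kernel p,
# vec[x] = sum over w_1,...,w_n of p(x,w_1) p(w_1,w_2) ... p(w_{n-1},w_n) a_{w_n}.
random.seed(4)
m = 6                                                     # truncation size (arbitrary)
p = [[0.0] * m for _ in range(m)]
for i in range(m):
    for j in range(i + 1, m):
        p[i][j] = p[j][i] = random.uniform(0.0, 0.3)      # p(x,y) = p(y,x)

a = [random.uniform(0.5, 2.0) for _ in range(m)]          # stands for a_z = 1 or a_z = c_z^s

def apply_kernel(vec):
    return [sum(p[x][y] * vec[y] for y in range(m)) for x in range(m)]

vec = list(a)
for n in range(1, 5):
    vec = apply_kernel(vec)
    # conditions (30)/(32) require the sup over x of these quantities to be finite for every n
    print(f"n = {n}:  sup_x =", round(max(vec), 5))
```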

Appendix A Derivation of (1)

In this appendix we show that Conditions (3.3) and (3.8) in [20, Chapter I.3] are both equivalent to (1). In the notation of [20, Chapter I.3], given xyx\not=y in SS, one defines the measure c{x,y}(η,dζ)c_{\{x,y\}}(\eta,d\zeta) on {0,1}{x,y}\{0,1\}^{\{x,y\}} as cx,yη(x)(1η(y))δ(0,1)(dζ)+cy,x(1η(x))η(y)δ(1,0)(dζ)c_{x,y}\eta(x)(1-\eta(y))\delta_{(0,1)}(d\zeta)+c_{y,x}(1-\eta(x))\eta(y)\delta_{(1,0)}(d\zeta), and one sets cT(η,dζ):=0c_{T}(\eta,d\zeta):=0 for TST\subset S with |T|2|T|\not=2.

Condition (3.3) in [20, Chapter I.3] is given by supxSTxcT<+\sup_{x\in S}\sum_{T\ni x}c_{T}<+\infty, where cT=sup{cT(η,{0,1}T):η{0,1}S}c_{T}=\sup\left\{c_{T}(\eta,\{0,1\}^{T})\,:\,\eta\in\{0,1\}^{S}\right\}. In our case we have

cT={max{cx,y,cy,x}if T={x,y} for some xy,0otherwise,c_{T}=\begin{cases}\max\{c_{x,y},c_{y,x}\}&\text{if }T=\{x,y\}\text{ for some }x\not=y\,,\\ 0&\text{otherwise}\,,\end{cases}

and therefore the above mentioned condition is equivalent to (1).

Given uSu\in S and TST\subset S, as in [20, Chapter I.3] we define

cT(u):=sup{cT(η1,dζ)cT(η2,dζ)TV:η1(z)=η2(z) for all zu},c_{T}(u):=\sup\left\{\|c_{T}(\eta_{1},d\zeta)-c_{T}(\eta_{2},d\zeta)\|_{TV}:\eta_{1}(z)=\eta_{2}(z)\text{ for all }z\not=u\right\}\,,

where TV\|\cdot\|_{TV} denotes the total variation norm. In our case, if |T|2|T|\not=2 then cT(u)=0c_{T}(u)=0. If T={x,y}T=\{x,y\} with xyx\not=y in SS, then

cT(u)={max{cx,y,cy,x}if uT,0otherwise.c_{T}(u)=\begin{cases}\max\{c_{x,y},c_{y,x}\}&\text{if }u\in T\,,\\ 0&\text{otherwise}\,.\end{cases}

Since Condition (3.8) in [20, Chapter I.3] is given by supxSTxuxcT(u)<+\sup_{x\in S}\sum_{T\ni x}\sum_{u\not=x}c_{T}(u)<+\infty, also this condition is equivalent to (1).


Acknowledgements. I thank the anonymous referee for the corrections and comments. I am very grateful to Ráth Balázs for the stimulating discussions during the Workshop “Large Scale Stochastic Dynamics” (2022) at MFO. I kindly acknowledge also the organizers of this event. I thank the very heterogeneous population (made of different animal and vegetable species) of my homes in Codroipo and Rome, where this work has been written.

References

  • [1] V. Ambegoakar, B.I. Halperin, J.S. Langer; Hopping conductivity in disordered systems, Phys. Rev. B 4, 2612–2620 (1971).
  • [2] M.T. Barlow, J.-D. Deuschel; Invariance principle for the random conductance model with unbounded conductances. Ann. Probab. 38, 234–276. (2010).
  • [3] M. Biskup; Recent progress on the random conductance model. Probability Surveys, Vol. 8, 294-373 (2011).
  • [4] A. Chiarini, S. Floreani, F. Redig, F. Sau; Fractional kinetics equation from a Markovian system of interacting Bouchaud trap models. arXiv:2302.10156
  • [5] A. Chiarini, A. Faggionato; Density fluctuations of symmetric simple exclusion processes on random graphs in d{\mathbb{R}}^{d} with random conductances. In preparation.
  • [6] A. De Masi, P.A. Ferrari, S. Goldstein, W.D. Wick; An invariance principle for reversible Markov processes. Applications to random motions in random environments. J. Stat. Phys. 55, 787–855 (1989).
  • [7] R. Durrett; Ten lectures on particle systems. In: Bernard, P. (eds) Lectures on Probability Theory. Lecture Notes in Mathematics 1608, 97–20. Springer, Berlin, 1995.
  • [8] S.N. Ethier, T.G. Kurtz; Markov processes : characterization and convergence. J. Wiley, New York, 1986.
  • [9] R. Durrett; Probability - Theory and Examples. Cambridge Series in Statistical and Probabilistic Mathematics, Cambridge University Press, Cambridge 2019.
  • [10] D.J. Daley, D. Vere-Jones; An Introduction to the Theory of Point Processes. New York, Springer Verlag, 1988.
  • [11] A. Faggionato; Hydrodynamic limit of simple exclusion processes in symmetric random environments via duality and homogenization. Probab. Theory Relat. Fields. 184, 1093–1137 (2022).
  • [12] A. Faggionato; Stochastic homogenization of random walks on point processes. Ann. Inst. H. Poincaré Probab. Statist. 59, 662–705 (2023).
  • [13] A. Faggionato; Scaling limit of the directional conductivity of random resistor networks on simple point processes. Ann. Inst. H. Poincaré Probab. Statist., to appear (preprint arXiv:2108.11258)
  • [14] A. Faggionato; Graphs with random conductances on point processes: quenched RW homogenization, SSEP hydrodynamics and resistor networks. Forthcoming.
  • [15] M. Fukushima, Y. Oschima, M. Takeda; Dirichlet forms and symmetric Markov processes. Second edition. De Gruyter, Berlin, 2010.
  • [16] P. Gonçalves, M. Jara; Scaling limit of gradient systems in random environment. J. Stat. Phys. 131, 691–716 (2008).
  • [17] G. Grimmett; Percolation. Second edition. Die Grundlehren der mathematischen Wissenschaften 321. Springer, Berlin, 1999.
  • [18] T.E. Harris; Nearest neighbor Markov interaction processes on multidimensional lattices. Adv. in Math. 9, 66–89 (1972).
  • [19] T.E. Harris; Additive set-valued Markov processes and graphical methods. Ann. Probab. 6, 355–378 (1978).
  • [20] T. M. Liggett; Interacting particle systems. Grundlehren der Mathematischen Wissenschaften 276, Springer, Berlin, 1985.
  • [21] T. M. Liggett; Stochastic interacting systems: contact, voter and exclusion processes. Grundlehren der mathematischen Wissenschaften 324, Springer, Berlin, 1999.
  • [22] A. Miller, E. Abrahams; Impurity conduction at low concentrations. Phys. Rev. 120, 745–755 (1960).
  • [23] N.F. Mott; Electrons in glass. Nobel Lecture, 8 December, 1977. Available online at https://www.nobelprize.org/uploads/2018/06/mott-lecture.pdf
  • [24] R. Meester, R. Roy; Continuum percolation. Cambridge Tracts in Mathematics 119. First edition, Cambridge University Press, Cambridge, 1996.
  • [25] T. Seppäläinen; Translation Invariant Exclusion Processes. Online book available at https://www.math.wisc.edu/~seppalai/excl-book/ajo.pdf, 2008.