
Differentiability of limit shapes in continuous first passage percolation models

Yuri Bakhtin Courant Institute of Mathematical Sciences, New York University, 251 Mercer St, New York, NY 10012, USA bakhtin@cims.nyu.edu  and  Douglas Dow Courant Institute of Mathematical Sciences, New York University, 251 Mercer St, New York, NY 10012, USA dd3103@cims.nyu.edu
Abstract.

We introduce and study a class of abstract continuous action minimization problems that generalize continuous first and last passage percolation. In this class of models a limit shape exists. Our main result provides a framework under which that limit shape can be shown to be differentiable. We then describe examples of continuous first passage percolation models that fit into this framework. The first example is of a family of Riemannian first passage percolation models and the second is a discrete time model based on Poissonian points.

2020 Mathematics Subject Classification:
Primary 60K37, 82B44, 60K35

1. Introduction

The main goal of this paper is to show that for a broad class of models of first passage percolation and last passage percolation type in continuous space, the associated shape functions are differentiable away from zero, implying that the boundary of the limit shape is differentiable.

Optimal paths in disordered environments have been extensively studied in the literature. A variety of interesting models has been introduced. The general scheme is the following: each admissible path $\gamma$ in a Euclidean space $\mathbb{R}^{d}$ is assigned a random action/cost/energy $A_{\omega}(\gamma)$ defined through the intrinsic geometry of the path and interactions of the path with the realization of a random environment associated with a random outcome $\omega\in\Omega$. For every pair of points $x$ and $y$ in $\mathbb{R}^{d}$, $\mathcal{A}_{\omega}(x,y)$ is defined as the optimal action over the space $\mathcal{S}_{x,y}$ of admissible paths connecting $x$ to $y$:

$\mathcal{A}_{\omega}(x,y)=\inf_{\gamma\in\mathcal{S}_{x,y}}A_{\omega}(\gamma).$

For many interesting models of this kind, one can use stationarity and ergodicity of the environment to apply the subadditive ergodic theorem and prove that the asymptotic growth of the action $\mathcal{A}_{\omega}(0,x)$ is linear in the Euclidean norm $|x|$ as $|x|\to\infty$, with the rate of growth depending on the direction. More precisely, the limit

(1.1) $\Lambda(v)=\lim_{T\to+\infty}\frac{1}{T}\mathcal{A}_{\omega}(0,Tv)$

is well-defined and deterministic for each $v$ in $\mathbb{R}^{d}$ or, for models of LPP (Last Passage Percolation) type, in a smaller convex cone $\mathcal{C}\subset\mathbb{R}^{d}$ of admissible asymptotic directions determined by the structure of the set of admissible paths. For example, in $1+1$-dimensional models of LPP type, where a certain directionality condition is imposed on paths (i.e., they cannot backtrack), $\mathcal{C}$ may be the quadrant $\{x\in\mathbb{R}^{2}:x_{1},x_{2}\geq 0\}$ or the half-plane $\{(t,x)\in\mathbb{R}^{2}:t>0\}$. In FPP (First Passage Percolation) type models, $\mathcal{C}=\mathbb{R}^{d}$, i.e., there are no restrictions on asymptotic directions of paths.

In some models, paths and their endpoints are restricted to certain subsets of $\mathbb{R}^{d}$ (such as $\mathbb{Z}^{d}$ or $\mathbb{Z}\times\mathbb{R}^{d-1}$), and since $Tv$ may fail to belong to this set, the claim (1.1) needs to be modified appropriately.

The function $\Lambda:\mathcal{C}\to\mathbb{R}$ characterizing the rate of growth of optimal action as a function of direction $v$ is called the shape function, and in the context of homogenization for stochastic Hamilton–Jacobi–Bellman (HJB) equations it can be interpreted as the effective Lagrangian. The term shape function comes from the fact that for models where the action $A_{\omega}(\gamma)$ is nonnegative and plays the role of random length of $\gamma$, the shape function (also nonnegative in this case) can be used to describe the limit shape of normalized balls with respect to the random metric given by $\mathcal{A}_{\omega}(x,y)$. Namely, for many models one can prove that if

(1.2) $E_{\omega}(T)=\{x\in\mathcal{C}:\mathcal{A}_{\omega}(0,x)\leq T\},$

then, for a properly understood notion of convergence of sets, with probability $1$,

(1.3) $\lim_{T\to+\infty}\frac{1}{T}E_{\omega}(T)=E_{\Lambda},$

where

(1.4) $E_{\Lambda}=\{v\in\mathcal{C}:\Lambda(v)\leq 1\}.$

Thus the set $E_{\Lambda}$ plays the role of the limit shape. Its boundary $\{v\in\mathcal{C}:\Lambda(v)=1\}$ plays the role of the effective front characterizing the homogenized wave propagation in the disordered environment. The classical works on limit shapes and shape functions are [HW65], [Kin68], [Kin73], [Ric73], [CD81]. Also, see the monograph [ADH17a] and references therein.

Shape functions are always $1$-homogeneous, i.e., they satisfy $\Lambda(cv)=c\Lambda(v)$ for $c>0$. Also, due to a simple subadditivity argument, they are always convex. Thus, the limit shape $E_{\Lambda}$ is always a convex set.

Convex functions and boundaries of convex sets may have corners and flat pieces, and the problem of characterizing further regularity properties of limit shapes and shape functions beyond simple convexity has been one of the recurrent themes in the theory of FPP and LPP models and stochastic HJB equations.

Typical fluctuations of long minimizers and their actions are tightly related to the regularity of the shape function. It is broadly believed that for a vast class of models with fast decay of correlations the shape function and the boundary of the limit shape must be differentiable and strictly convex. This kind of quadratic behavior of the shape function is associated with the KPZ universality. Ergodic properties of stochastic HJB equations also depend on the shape function regularity. We refer to [BK18] for a discussion of this circle of questions.

Despite the importance of the issue, progress on the regularity properties has been limited. It is known since [HM95] that if one does not require sufficiently fast decay of correlations in the environment, any convex set respecting the symmetries of the model can be the limit shape. Thus, any convex 1-homogeneous function with the same symmetries can be realized as a shape function. A set of examples with flat edges is provided by lattice LPP/FPP models with i.i.d. environments based on distributions with atoms, see [DL81]. In a recent paper [BKMV23], an LPP model in a nonatomic product-type environment is shown to have both a corner and a flat edge. We also note that in the deterministic weak KAM theory for spacetime-periodic problems, the shape functions (known as Mather’s beta-functions) are strictly convex in the slope variable, differentiable at all irrational slopes, and typically have corners at all rational slopes, see [Mat90].

There are several continuous space models with distributional symmetries that translate into a precise analytic form of the shape function, with regularity properties trivially implied. For the Hammersley process (and its generalizations), an LPP-type model based on upright paths collecting Poissonian points from the positive quadrant, the precise form of the shape function is inherited from the fact that linear area-preserving automorphisms of the quadrant preserve the admissible paths and the distribution of the Poisson point process, see [Ham72], [AD95], [CP11]. Rotationally invariant Euclidean FPP models introduced in [HN97] obviously produce Euclidean balls as the limit shapes. The LPP-type models (including a positive temperature Gibbs polymer version) studied in [BCK14], [Bak16], [BL19], [BL18] in the context of the stochastic Burgers equation allow for a form of invariance under shear transformations resulting in quadratic shape functions. In addition, shape functions have been computed for a handful of exactly solvable models, see [Ros81], [Bar01], [GTW01], [HMO02], [MO07], [Sep12], [JRAS22].

It is natural to conjecture that these results can be extended to a broader family of models, on discrete lattices and in continuous spaces. However, they are based on very precise restrictive properties of the models in question, and there seems to be a gap between the universality claims and the concreteness of these models. In [BD23b] and [BD23a], we gave results on differentiability of shape functions in the interior of $\mathcal{C}$ for a large class of LPP-type models in continuous space. In [BD23b], we gave a simple argument for $1+1$-dimensional time-discrete and white-in-time models, and in [BD23a] we adapted our method to multidimensional spacetime-continuous nonwhite environments. The latter setup allows for an interpretation in terms of differentiability of the effective Lagrangian in the homogenization problem for HJB equations with random forcing. Although these models are not distributionally shear-invariant, our argument is based on a form of approximate distributional invariance of the model under a family of shear transformations.

In the present paper, we show that our approach is applicable to continuous space models of FPP type.

In fact, our main result is applicable to continuous models of both FPP and LPP types and loosely can be stated as follows:

Theorem 1.1.

Under a set of mild conditions on the random action $A_{\omega}$ that we describe in Section 2, there is a deterministic convex function $\Lambda$ such that (1.1) holds for each $v\in\mathcal{C}$. Moreover, the convergence in (1.1) is uniform on compact sets. The limit shape theorem (1.3) holds. The shape function $\Lambda$ is differentiable at every nonzero interior point of $\mathcal{C}$. For positive actions, the effective front (the boundary of the limit shape) is differentiable at all its points in the interior of $\mathcal{C}$.

Remark 1.

We also prove a formula for $\nabla\Lambda$.

Remark 2.

It is known since [Szn98] (see also [LW10]) that shape functions for FPP-type models have a corner at the origin. For example, for rotationally invariant FPP models, the graph of the shape function is a cone with a spherical section. So for these models, we can claim differentiability of the shape function only at nonzero points. In our previous results on LPP-type models, zero was automatically excluded since it did not belong to the interior of the LPP cone $\mathcal{C}$. In fact, we only considered directions of the form $(1,v)\in\mathbb{R}^{1+d}$ (and, by $1$-homogeneity, their multiples).

Our results from [BD23a] on LPP-type models fit the framework of the present paper (the time-discrete models of [BD23b] need more adjustments). The conditions we require for our main results here were checked for these models in that paper. See the discussion in Section 5.

Moreover, in the present paper, we check that the conditions of our main results are satisfied for two classes of anisotropic FPP-type models. One of them is a random Riemannian metric model in the spirit of [LW10], see Section 3, and another is a random metric based on broken line paths between Poissonian points inspired by [HN97], see Section 4. Our method should apply to a variety of similar models, but we chose these two classes where the application is relatively straightforward.


The paper is organized as follows. In Section 2 we describe the general setup and state the main general conditions and main results, rigorous counterparts of the informal Theorem 1.1. We also give a proof of our central differentiability result in its general form in that section. Proofs of all the other results stated in this section are postponed until Section 6. In Sections 3 and 4, we describe two classes of FPP-type models that our results apply to. In Section 5, we explain that the results from [BD23a] fit the framework of the present paper. The remaining sections contain proofs of various results from the first four sections. Section 6 contains proofs of the results from Section 2. Sections 7–9 contain proofs of the results from Section 3. Section 10 contains the proof of the result from Section 4.

Acknowledgments. YB and DD are grateful to the National Science Foundation for partial support via Awards DMS-1811444 and DMS-2243505. We thank Peter Morfe for pointing to [TZ24], where a method similar to ours is used in a related but different problem.

2. General conditions and results

In this section, we give a general framework and state our main results. Our focus is on the fully continuous case, though some discrete in time problems can be embedded into our framework. In Section 2.1, we introduce a general set of assumptions and state the (standard) results on convergence to the shape function and limit shape. The central part of this paper is Section 2.2 where we state our main results on differentiability.

2.1. General setup and standard limit shape results.

First we will define the spaces we will work with. For $x,y\in\mathbb{R}^{d}$ and $t>0$, let

$\mathcal{S}_{x,y,t}=\{\gamma\in W^{1,1}([0,t];\mathbb{R}^{d})\,:\,\gamma_{0}=x,\,\gamma_{t}=y\}.$

Then we can define

$\mathcal{S}_{x,y,*}=\bigcup_{t>0}\mathcal{S}_{x,y,t},\quad x,y\in\mathbb{R}^{d},$
$\mathcal{S}=\mathcal{S}_{*,*,*}=\bigcup_{x,y\in\mathbb{R}^{d},\,t>0}\mathcal{S}_{x,y,t},$

and other similar spaces such as $\mathcal{S}_{*,*,t}$. For $\gamma\in\mathcal{S}_{*,*,t}$, we define $t(\gamma)=t$.

The space $\mathcal{S}$ is a separable metric space when equipped with the Sobolev metric given by

(2.1) $d(\gamma,\psi)=\int_{0}^{1}|\gamma_{t_{1}s}-\psi_{t_{2}s}|\,ds+\int_{0}^{1}|t_{1}\dot{\gamma}_{t_{1}s}-t_{2}\dot{\psi}_{t_{2}s}|\,ds+|t_{1}-t_{2}|$

for $\gamma\in\mathcal{S}_{*,*,t_{1}}$ and $\psi\in\mathcal{S}_{*,*,t_{2}}$. The spaces $\mathcal{S}_{x,y,t}$ and $\mathcal{S}_{x,y,*}$ are endowed with the induced topology, which coincides with the $W^{1,1}$ topology on these spaces.
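As a purely numerical illustration of (2.1), not used anywhere in the paper's arguments, the following Python sketch approximates the metric for two time-rescaled paths; the grid size, the finite-difference velocities, and the two test paths are our own choices.

```python
import numpy as np

def sobolev_distance(gamma, t1, psi, t2, n=1000):
    """Approximate the metric (2.1) between gamma on [0, t1] and psi on [0, t2].

    gamma, psi are callables returning points in R^d; both integrals are taken
    over the common parameter s in [0, 1] after rescaling time.
    """
    s = np.linspace(0.0, 1.0, n + 1)
    ds = 1.0 / n
    g = np.array([gamma(t1 * u) for u in s])      # gamma_{t1 s}
    p = np.array([psi(t2 * u) for u in s])        # psi_{t2 s}
    # time-rescaled velocities t1*gamma'(t1 s) and t2*psi'(t2 s) via finite differences in s
    dg = np.gradient(g, ds, axis=0)
    dp = np.gradient(p, ds, axis=0)
    term1 = np.trapz(np.linalg.norm(g - p, axis=1), dx=ds)
    term2 = np.trapz(np.linalg.norm(dg - dp, axis=1), dx=ds)
    return term1 + term2 + abs(t1 - t2)

# example: two straight-line paths with different endpoints and time horizons
gamma = lambda t: np.array([t, 0.0])           # a path in S_{0,(1,0),1}
psi = lambda t: np.array([t / 2.0, t / 2.0])   # a path in S_{0,(1,1),2}
print(sobolev_distance(gamma, 1.0, psi, 2.0))
```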

For any $x,y,z\in\mathbb{R}^{d}$ and $t_{1},t_{2}>0$, if $\gamma\in\mathcal{S}_{x,y,t_{1}}$ and $\psi\in\mathcal{S}_{y,z,t_{2}}$, then $\gamma\psi\in\mathcal{S}_{x,z,t_{1}+t_{2}}$ denotes their concatenation, defined by

$(\gamma\psi)_{s}=\begin{cases}\gamma_{s},&s\in[0,t_{1}],\\ \psi_{s-t_{1}},&s\in[t_{1},t_{1}+t_{2}].\end{cases}$

For $x\in\mathbb{R}^{d}$, the spatial shift $\theta^{x}:\mathbb{R}^{d}\to\mathbb{R}^{d}$ is defined by

(2.2) $\theta^{x}y=y+x,\quad y\in\mathbb{R}^{d}.$

This definition lifts to transformations of $\mathcal{S}$, namely, for $x\in\mathbb{R}^{d}$, $t>0$, and $\gamma\in\mathcal{S}_{*,*,t}$, the spatial shift $\theta^{x}\gamma\in\mathcal{S}_{*,*,t}$ is defined by

$(\theta^{x}\gamma)_{s}=\gamma_{s}+x,\quad s\in[0,t].$

We consider a complete probability space $(\Omega,\mathcal{F},\mathbb{P})$ and assume that the group $\mathbb{R}^{d}$ acts on $\Omega$ ergodically. Namely, we assume that we are given a family $(\theta_{*}^{x})_{x\in\mathbb{R}^{d}}$ of measurable transformations $\theta_{*}^{x}:\Omega\to\Omega$ preserving $\mathbb{P}$, ergodic and having the group property: for all $x,y\in\mathbb{R}^{d}$, $\theta_{*}^{x+y}=\theta_{*}^{x}\theta_{*}^{y}$, and $\theta_{*}^{0}$ is the identity map.

We consider a jointly measurable action/energy/cost function

$A:\Omega\times\mathcal{S}\to\mathbb{R}\cup\{\infty\},\qquad(\omega,\gamma)\mapsto A_{\omega}(\gamma),$

which is a random field indexed by absolutely continuous paths. Once $\gamma\in\mathcal{S}$ is fixed, $A(\gamma)=A_{\cdot}(\gamma):\Omega\to\mathbb{R}\cup\{\infty\}$ is a random variable. Once $\omega\in\Omega$ is fixed, $A_{\omega}:\mathcal{S}\to\mathbb{R}\cup\{\infty\}$ assigns actions to all absolutely continuous paths. One of the goals of allowing for infinite action values is to accommodate the LPP settings where only certain “directed” paths are admissible. Nonadmissible paths will be assigned infinite action and thus will be excluded from variational problems.

The following two assumptions on $A$ are fundamental. The first relates the action of $A$ on the concatenation of paths to the sum of the actions of each individual path. In the common set-up where $A$ is given by a local energy summed or integrated along a path, the relation is an identity rather than an inequality. The second assumption, called skew-invariance, implies statistical stationarity and ergodicity conditions on $A$.

(A1) (subadditivity) For all $\omega\in\Omega$, $x,y,z\in\mathbb{R}^{d}$, $\gamma\in\mathcal{S}_{x,y,*}$, and $\psi\in\mathcal{S}_{y,z,*}$,

$A_{\omega}(\gamma\psi)\leq A_{\omega}(\gamma)+A_{\omega}(\psi).$

(A2) (skew-invariance) For all $\omega\in\Omega$, $x\in\mathbb{R}^{d}$, and all $\gamma\in\mathcal{S}$,

$A_{\omega}(\gamma)=A_{\theta_{*}^{x}\omega}(\theta^{x}\gamma).$

We want to study the minimization problem

(2.3) $\mathcal{A}(x,y)=\mathcal{A}_{\omega}(x,y):=\inf\{A(\gamma)\,:\,\gamma\in\mathcal{S}_{x,y,*}\}.$

We will need the following assumption:

(A3) The minimal action $\mathcal{A}$ is jointly measurable as a function from $\Omega\times\mathbb{R}^{d}\times\mathbb{R}^{d}$ to $\mathbb{R}\cup\{+\infty\}$.

In order to accommodate LPP-type problems, we need to introduce a cone $\mathcal{C}\subset\mathbb{R}^{d}$ of admissible directions. For the Hammersley process the role of $\mathcal{C}$ is played by the positive quadrant. For HJB equations with dynamic random forcing considered in [BD23a], the role of $\mathcal{C}$ is played by the half-space $(0,\infty)\times\mathbb{R}^{d-1}$. In FPP-type problems with no constraints on directions of paths, $\mathcal{C}=\mathbb{R}^{d}$. We require the following properties of the cone $\mathcal{C}$ and the action $A$:

(A4) $\mathcal{C}\subset\mathbb{R}^{d}$ is a convex and nonempty cone. For all $\omega\in\Omega$ and for every $x\in\mathcal{C}$, there is a random path $\gamma(x)=\gamma_{\omega}(x)\in\mathcal{S}_{0,x,*}$ achieving the infimum in (2.3), i.e., $\mathcal{A}_{\omega}(0,x)=A(\gamma_{\omega}(x))$ for all $\omega\in\Omega$. Additionally, for all $x\in\mathcal{C}$, $\sup_{r\in[0,1]}|\mathcal{A}(0,rx)|$ is measurable and

(2.4) $\mathbb{E}\Big[\sup_{r\in[0,1]}|\mathcal{A}(0,rx)|\Big]<\infty.$

Conditions (A2) and (A4) imply that $\mathbb{E}[|\mathcal{A}(x,y)|]<\infty$ for all $x,y\in\mathbb{R}^{d}$ satisfying $y-x\in\mathcal{C}$. For simplicity of presentation we assume that conditions (A1)–(A4) hold for all $\omega\in\Omega$, but with minor adjustments one may allow for a single exceptional set of zero measure on which conditions (A1)–(A4) fail.

For $v\in\mathcal{C}$ define

(2.5) $\mathcal{A}^{T}(v)=\mathcal{A}(0,Tv).$

Theorem 2.1.

Under assumptions (A1)–(A4), there is a convex, deterministic function $\Lambda:\mathcal{C}\to[-\infty,\infty)$ such that for all $v\in\mathcal{C}$, with probability one,

(2.6) $\Lambda(v)=\lim_{T\to\infty}\frac{1}{T}\mathcal{A}^{T}(v).$

Additionally, $\Lambda(sv)=s\Lambda(v)$ for all $v\in\mathcal{C}$ and $s>0$, and if $0\in\mathcal{C}$, then $\Lambda(0)=0$.

Kingman’s Subadditive Ergodic Theorem was proved with problems like this in mind. We recall the standard argument in Section 6 for completeness.

In all of our examples, $\Lambda$ is, in fact, finite. Let us supplement our setup with an additional assumption guaranteeing finiteness of $\Lambda$ and uniform convergence in (2.6). We need the latter to prove convergence to a limit shape.

We say that a cone $\mathcal{C}^{\prime}\subset\mathcal{C}$ is properly contained in $\mathcal{C}$ if $\overline{\mathcal{C}^{\prime}}\setminus\{0\}\subset\mathcal{C}^{\circ}$.


(A5) For every cone $\mathcal{C}^{\prime}\subset\mathcal{C}$ properly contained in $\mathcal{C}$, there is $\kappa<\infty$ such that

(2.7) $\mathbb{P}\bigg\{\sup_{x\in\mathcal{C}^{\prime},\,|x|>1}\frac{|\mathcal{A}(0,x)|}{|x|}<\kappa\bigg\}>0,$

and

(2.8) $\mathbb{P}\bigg\{\sup_{x\in\mathcal{C}^{\prime},\,|x|>1}\frac{|\mathcal{A}(-x,0)|}{|x|}<\kappa\bigg\}>0.$

In most applications, (2.7) and (2.8) are equivalent due to distributional symmetries of the action. Although in some settings (2.7) and (2.8) hold for $\mathcal{C}^{\prime}=\mathcal{C}$, there are LPP settings where the action $\mathcal{A}(0,x)$ goes to infinity as $x$ approaches the boundary of $\mathcal{C}$, and so using the notion of properly contained cones is unavoidable.

Theorem 2.2.

Under assumptions (A1)–(A5), $\Lambda(v)>-\infty$ for all $v\in\mathcal{C}^{\circ}$, and there is a full measure set $\Omega_{0}$ such that for all $\omega\in\Omega_{0}$ and all compact sets $K\subset\mathcal{C}^{\circ}$,

(2.9) $\lim_{T\to\infty}\sup_{w\in K}\Big|\frac{1}{T}\mathcal{A}^{T}(w)-\Lambda(w)\Big|=0.$

We will prove this theorem in Section 6.

These results can be understood in terms of limit shapes. Limit shapes are usually defined in the context of FPP, where $\mathcal{C}=\mathbb{R}^{d}$ and the action $\mathcal{A}(x,y)$ is positive and can be interpreted as a random metric between points $x$ and $y$. The set $E_{\omega}(T)$ defined in (1.2) can be viewed as a ball of radius $T$ in this metric. For any set $K\subset\mathbb{R}^{d}$ and a number $a\in\mathbb{R}$, we denote $aK=\{ax:x\in K\}$.

We will say that a family of sets $N_{T}\subset\mathcal{C}$, $T>0$, converges locally to a set $N\subset\mathcal{C}$ and write

$N_{T}\stackrel{\mathrm{loc}}{\longrightarrow}N,\quad T\to\infty,$

if for every compact set $K\subset\mathcal{C}^{\circ}$ and every $\epsilon>0$, there is $T_{0}>0$ such that

(2.10) $((1-\epsilon)N)\cap K\subset N_{T}\cap K\subset((1+\epsilon)N)\cap K,\quad T>T_{0}.$

The following result shows that $E_{\Lambda}$ defined in (1.4) is the deterministic limit shape associated with the random action $A_{\omega}$.

Theorem 2.3.

Under assumptions (A1)–(A5), with probability $1$,

(2.11) $\frac{1}{T}E_{\omega}(T)\stackrel{\mathrm{loc}}{\longrightarrow}E_{\Lambda},\quad T\to\infty.$

Remark 3.

In fact, a stronger form of convergence often holds. Namely, if in addition to the conditions of Theorem 2.3, we require $\mathcal{C}=\mathbb{R}^{d}$ and $\Lambda(v)>0$ for all $v\neq 0$ (this holds for a typical FPP setting and, in particular, for our examples studied in Sections 3 and 4), we can prove that with probability 1, for every $\epsilon>0$, there is $T_{0}>0$ such that

(2.12) $(1-\epsilon)E_{\Lambda}\subset\frac{1}{T}E_{\omega}(T)\subset(1+\epsilon)E_{\Lambda},\quad T>T_{0},$

i.e., a version of (2.10) with $K$ replaced by $\mathcal{C}=\mathbb{R}^{d}$ holds. We prove this claim along with Theorem 2.3 in Section 6.3.

Remark 4.

In directed settings, where the cone boundary $\partial\mathcal{C}$ is nonempty, the stronger convergence (2.12) is also often true and can be derived from a stronger version of Theorem 2.2, where the uniform convergence holds up to $\partial\mathcal{C}$. In our general setting, even continuity of $\Lambda$ up to the boundary is not guaranteed, and one needs extra regularity conditions to ensure nice behavior of $\mathcal{A}$ and $\Lambda$ near $\partial\mathcal{C}$. Paths on the boundary of $\mathcal{C}$ are more constrained than paths in $\mathcal{C}^{\circ}$, and so in practice different techniques are often used near the boundary (see, e.g., [Mar04]). We do not address these issues in detail and concentrate on differentiability in the interior of $\mathcal{C}$.

2.2. Differentiability

In this section, we assume the setup described in Section 2.1 and present general conditions guaranteeing differentiability of $\Lambda$ at a point $v\in\mathcal{C}^{\circ}\setminus\{0\}$. Our main result is Theorem 2.4. We will see below (Lemma 2.1) that the desired differentiability of $\Lambda$ at $v$ follows from differentiability along a sufficiently large set of directions. Thus the assumptions introduced in this section will be targeted at checking this directional differentiability. These assumptions are easy to verify. In Sections 3 and 4 we will check them for two classes of FPP-type models. The proofs of differentiability for LPP-type models in [BD23b], [BD23a] were essentially based on checking these assumptions, too.

We denote $\mathsf{B}(x,r)=\{y\in\mathbb{R}^{d}:\ |x-y|<r\}$. We need the following in our setup:

(B1) There is $\delta>0$ and a $(d-1)$-dimensional subspace $H\subset\mathbb{R}^{d}$ not containing $v$ with the following properties: for all $T>0$ and $w\in H(\delta)$, where

(2.13) $H(\delta)=(v+H)\cap\mathcal{C}^{\circ}\cap\mathsf{B}(v,\delta),$

there is a pair of maps: a measurable bijection $\Xi_{v\to w}:\mathcal{S}_{0,Tv,*}\to\mathcal{S}_{0,Tw,*}$ and a measure preserving map $\Xi_{v\to w}^{*}:\Omega\to\Omega$. In addition, $\Xi_{v\to v}^{*}$ and $\Xi_{v\to v}$ are identity maps.

We drop the dependence on $T$ from $\Xi_{v\to w}$ and $\Xi_{v\to w}^{*}$ for brevity. In applications, the maps $\Xi_{v\to w}$ are usually lifted from a transformation of $\mathbb{R}^{d}$ that does not depend on $T$.

Before describing the further requirements on $\delta$, $H$ and the maps $\Xi_{v\to w}$, $\Xi^{*}_{v\to w}$, we need to introduce further notation.

The function $B$ defined as

(2.14) $B_{\omega}(w,v,\gamma)=A_{\Xi_{v\to w}^{*}\omega}(\Xi_{v\to w}\gamma),\quad\gamma\in\mathcal{S}_{0,Tv,*},$

is a transformed version of $A$.

The optimal transformed action is given by

(2.15) $\mathcal{B}^{T}(w,v)=\inf\{B(w,v,\gamma)\,:\,\gamma\in\mathcal{S}_{0,Tv,*}\}.$

If $\gamma^{T}(v)=\gamma^{T}_{\omega}(v)=\gamma_{\omega}(Tv)$ is the selection of a path realizing the infimum in the definition of $\mathcal{A}^{T}(v)$, see condition (A4), we define

$\psi^{T}(w,v)=\psi^{T}_{\omega}(w,v)=\Xi_{v\to w}^{-1}\gamma^{T}_{\Xi_{v\to w}^{*}\omega}(w)\in\mathcal{S}_{0,Tv,*}$

to be our selection of a path attaining the infimum in (2.15).

Since $\Xi_{v\to v}$ and $\Xi_{v\to v}^{*}$ are identity maps, we have

(2.16) $\psi^{T}(v,v)=\gamma^{T}(v)=\gamma(Tv),$

and

(2.17) $\mathcal{B}^{T}(v,v)=B(v,v,\gamma^{T}(v)).$

By (2.14) and the assumption that $\Xi_{v\to w}^{*}$ is measure preserving, for every $w\in(v+H)\cap\mathcal{C}$,

(2.18) $\Lambda(w)=\lim_{T\to\infty}\frac{1}{T}\mathcal{B}^{T}(w,v)$

$\mathbb{P}$-almost surely.

For a function $f:H(\delta)\to\mathbb{R}$ ($H(\delta)$ is defined in (2.13)), one can define $\nabla_{H}f$ and $\nabla_{H}^{2}f$ as, respectively, its first and second derivatives relative to $H$. These derivatives can be identified with elements of $H$ and $H^{2}$, respectively, so that

$f(w^{\prime})=f(w)+\langle\nabla_{H}f(w),w^{\prime}-w\rangle+\langle\nabla^{2}_{H}f(w)(w^{\prime}-w),w^{\prime}-w\rangle+o(|w^{\prime}-w|^{2}),$

as $H(\delta)\ni w^{\prime}\to w$, where the inner product on $H$ is induced by the inner product on $\mathbb{R}^{d}$. Note that if $f$ is in fact defined on an open set in $\mathbb{R}^{d}$ and is twice differentiable at $v$, then $\nabla_{H}f=P_{H}\nabla f$ and $\nabla_{H}^{2}f=P_{H}\nabla^{2}f$, where $\nabla f$ and $\nabla^{2}f$ are the usual derivatives in $\mathbb{R}^{d}$ and $P_{H}$ is the orthogonal linear projection onto $H$.

Since $v$ is fixed, we consider $B(w,v,\gamma^{T}(v))$ as a function of its first argument $w$.


We are now ready to state the crucial assumption and the main differentiability theorem:

(B2) $\nabla_{H}B(w,v,\gamma^{T}(v))$ and $\nabla_{H}^{2}B(w,v,\gamma^{T}(v))$ exist for all $w\in H(\delta)$, and

(2.19) $M_{\infty}:=\limsup_{T\to\infty}\frac{1}{T}\sup_{w\in H(\delta)}\|\nabla_{H}^{2}B(w,v,\gamma^{T}(v))\|<\infty$

almost surely.

Theorem 2.4.

Under assumptions (A1)–(A4) and (B1)–(B2), the function $\Lambda$ is differentiable at $v$. Additionally, for $w\in H$,

(2.20) $\langle\nabla\Lambda(v),w\rangle=\lim_{T\to\infty}\frac{1}{T}\langle\nabla_{H}B(v,v,\gamma^{T}(v)),w\rangle.$

Remark 5.

A convex function is differentiable on an open set iff it is $C^{1}$ on that set. Thus, if the conditions of the theorem hold for all $v\in\mathcal{C}^{\circ}$, then $\Lambda\in C^{1}(\mathcal{C}^{\circ})$.

The proof is based on the representation (2.18) and the following two lemmas. The first of them implies that it suffices to check differentiability of $\Lambda$ relative to $H$. The second one is at the core of our argument. It is a minor modification of Lemma 3.3 of [BD23a]. We give proofs of these lemmas in Section 6.

Lemma 2.1.

Let $f:\mathcal{C}^{\circ}\to\mathbb{R}$ be a continuous function such that its restriction to $H(\delta)$ is differentiable at $v$. Also, suppose $f$ satisfies $f(sw)=sf(w)$ for $w$ in a neighborhood of $v$ and all sufficiently small $s>0$. Then $f$ is differentiable at $v$.

Lemma 2.2.

Let $\mathcal{D}\subset H(\delta)$ be dense in $H(\delta)$, $(f_{n})_{n\in\mathbb{N}}$ be a sequence of functions from $H(\delta)$ to $\mathbb{R}$, and $f:H(\delta)\to\mathbb{R}$ be a function such that for all $w\in\mathcal{D}\cup\{v\}$,

(2.21) $\lim_{n\to\infty}f_{n}(w)=f(w).$

Suppose also that there exists a sequence of vectors $(\xi_{n})_{n\in\mathbb{N}}$ in $H$ and a function $h:H(\delta)\to\mathbb{R}$ such that the following holds:

(1) For all $w\in\mathcal{D}$ and $n\in\mathbb{N}$,

(2.22) $f_{n}(w)-f_{n}(v)\leq\langle\xi_{n},w-v\rangle+h(w),$

(2) $\lim_{w\to v}\frac{h(w)}{|w-v|}=0$.

If $f$ is convex, then $f$ is differentiable at $v$ (relative to $H$), the sequence $(\xi_{n})_{n\in\mathbb{N}}$ converges, and

(2.23) $\nabla_{H}f(v)=\lim_{n\to\infty}\xi_{n}.$
Proof of Theorem 2.4.

Let $w\in v+H$ be such that $|w-v|<\delta$. We have

$\mathcal{B}^{T}(w,v)\leq B(w,v,\gamma^{T}(v))=B(v,v,\gamma^{T}(v))+B(w,v,\gamma^{T}(v))-B(v,v,\gamma^{T}(v))\leq\mathcal{B}^{T}(v,v)+\langle\nabla_{H}B(v,v,\gamma^{T}(v)),w-v\rangle+\frac{1}{2}\sup_{u\in H(\delta)}\|\nabla^{2}_{H}B(u,v,\gamma^{T}(v))\|\cdot|v-w|^{2},$

where we used (2.17) and the Taylor expansion. It follows that for sufficiently large $T$,

(2.24) $\frac{1}{T}\mathcal{B}^{T}(w,v)\leq\frac{1}{T}\mathcal{B}^{T}(v,v)+\frac{1}{T}\langle\nabla_{H}B(v,v,\gamma^{T}(v)),w-v\rangle+\frac{1}{2}|w-v|^{2}(M_{\infty}+1),$

where $M_{\infty}$ is as defined in (2.19). Taking an arbitrary countable dense set $\mathcal{D}\subset H(\delta)$, $f_{n}(\cdot)=n^{-1}\mathcal{B}^{n}(\cdot,v)$, $f=\Lambda$, $\xi_{n}=n^{-1}\nabla_{H}B(v,v,\gamma^{n}(v))$, $h(w)=\frac{1}{2}|w-v|^{2}(M_{\infty}+1)$, noticing that (2.21) for $w\in\mathcal{D}\cup\{v\}$ is a consequence of (2.18) and that (2.22) is a consequence of (2.24), and recalling that $\Lambda$ is convex, we apply Lemma 2.2 to conclude that $\Lambda$ is differentiable at $v$ relative to $H$ with derivative given by (2.20). To complete the proof of the theorem, it now suffices to apply Lemma 2.1. ∎

Remark 6.

If instead of (2.19) we assume that for some $v,w\in\mathcal{C}^{\circ}$ there is $\delta>0$ such that

(2.25) $\limsup_{T\to\infty}\frac{1}{T}\sup_{w^{\prime}\in w+H\,:\,|w^{\prime}-w|<\delta}\|\nabla_{H}^{2}B(w^{\prime},v,\psi^{T}(w,v))\|<\infty,$

then a proof similar to that of Theorem 2.4, except using a Taylor expansion around $w$ rather than around $v$, shows that $\Lambda$ is differentiable at $w$ and, for all $u\in H$,

$\langle\nabla\Lambda(w),u\rangle=\lim_{T\to\infty}\frac{1}{T}\langle\nabla_{H}B(w,v,\psi^{T}(w,v)),u\rangle$

almost surely.

Remark 7.

One can also state a version of Theorem 2.4 where $H(\delta)$ is replaced by a $(d-1)$-dimensional $C^{2}$-hypersurface in $\mathbb{R}^{d}$ containing $v$ and transversal to the radial direction at $v$, provided that the analogue of condition (2.19) holds with $\nabla_{H}^{2}B$ replaced by the intrinsic second derivative.

Remark 8.

The $C^{2}$ condition in (2.19) or (2.25) could be replaced by an analogous $C^{1+\alpha}$ condition (that is, a bound on the Hölder constant of the first derivative). In this case, a bound similar to (2.24) will hold except that the $|v-w|^{2}$ term is replaced by $|v-w|^{1+\alpha}$. Since this term is $o(|v-w|)$, Lemma 2.2 can still be applied.

Finally, we can state a result on differentiability of the boundary of the limit shape defined by

$M=\partial E_{\Lambda}\cap\mathcal{C}^{\circ}=\{v\in\mathcal{C}^{\circ}:\Lambda(v)=1\}.$

The following theorem follows directly from Theorem 2.4, Remark 3, $1$-homogeneity of $\Lambda$, and the implicit function theorem.

Theorem 2.5.

Suppose $M$ is nonempty, and assume that the conditions of Theorem 2.4 are satisfied for all $v\in M$. Then $M$ is a $C^{1}$ manifold. If

(2.26) $\mathcal{C}=\mathbb{R}^{d}\ \text{ and }\ \Lambda(v)>0\ \text{ for all }v\neq 0,$

then $M$ is $C^{1}$-diffeomorphic to the $(d-1)$-dimensional sphere.

Condition (2.26) means that the graph of $\Lambda$ is a cone with vertex at the origin and section $M$.

Theorem 1.1 stated informally in Section 1 is a combination of the rigorous Theorems 2.1–2.5.

Of course, the power of these theorems is that they apply to a broad class of situations satisfying requirements (A1)–(A5) and (B1)–(B2). In Sections 3 and 4, we give two families of such models of FPP type. In Section 5, we explain that these results also apply to the directed setting of stochastic HJB equations studied in [BD23a].

3. Example I: Riemannian First Passage Percolation

The goal of this section is to describe a class of situations where the general requirements of Section 2 hold. This class is defined via random Riemannian metrics.

3.1. General setup for Riemannian FPP

A Riemannian metric on $\mathbb{R}^{d}$ is a function from $\mathbb{R}^{d}$ to the space of positive definite symmetric matrices

$\mathcal{M}^{d}_{+}:=\{M\in\mathbb{R}^{d\times d}\,:\,M^{\top}=M,\,M\text{ is positive definite}\}.$

Note that $M\in\mathcal{M}^{d}_{+}$ can be identified with the quadratic form given by $M(u,u)=\langle Mu,u\rangle$.

For an absolutely continuous curve $\gamma:[0,t]\to\mathbb{R}^{d}$, its length under a Riemannian metric $g$, defined by

(3.1) $A(\gamma)=\int_{0}^{t}\sqrt{g_{\gamma_{s}}(\dot{\gamma}_{s},\dot{\gamma}_{s})}\,ds,$

plays the role of action. The distance between arbitrary $x,y\in\mathbb{R}^{d}$ is defined by

(3.2) $\mathcal{A}(x,y)=\inf_{\gamma\in\mathcal{S}_{x,y,*}}A(\gamma),$

and throughout this section the cone of admissible directions $\mathcal{C}$ is set to be $\mathbb{R}^{d}$.
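As an aside for intuition only, here is a minimal numerical sketch of the action (3.1) for a piecewise linear path; the deterministic toy metric and the midpoint quadrature are our own illustrative choices and are not part of the model.

```python
import numpy as np

def toy_metric(x):
    """A deterministic stand-in for g_x: a multiple of the identity, smooth and positive definite."""
    d = len(x)
    return np.eye(d) * (1.0 + 0.5 * np.sin(x[0]) ** 2)

def riemannian_action(points, times, metric=toy_metric):
    """Approximate A(gamma) = int_0^t sqrt(g_{gamma_s}(gamma'_s, gamma'_s)) ds
    for a piecewise linear path through `points` at the given `times`."""
    total = 0.0
    for k in range(len(points) - 1):
        dt = times[k + 1] - times[k]
        velocity = (points[k + 1] - points[k]) / dt
        midpoint = 0.5 * (points[k] + points[k + 1])
        g = metric(midpoint)
        total += np.sqrt(velocity @ g @ velocity) * dt
    return total

# a straight segment from the origin to (3, 0), traversed at unit speed
pts = np.linspace([0.0, 0.0], [3.0, 0.0], 60)
ts = np.linspace(0.0, 3.0, 60)
print(riemannian_action(pts, ts))
```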

We will use the notation $\|M\|$ for the operator norm of a matrix $M$. For a $C^{2}$ function $f:\mathbb{R}^{d}\to\mathcal{M}^{d}_{+}$, we let

$\|f\|_{C^{2},x}=\|f(x)\|+\sum_{i=1}^{d}\|\partial_{x_{i}}f(x)\|+\sum_{i,j=1}^{d}\|\partial_{x_{j}x_{i}}f(x)\|,$
$\|f\|_{C^{2}}=\sup_{x\in\mathbb{R}^{d}}\|f\|_{C^{2},x}.$

The space $C^{2}_{loc}(\mathbb{R}^{d};\mathcal{M}^{d}_{+})$ is the space of such functions $f$ such that $\sup_{x\in A}\|f\|_{C^{2},x}<\infty$ for all bounded sets $A\subset\mathbb{R}^{d}$. We endow $C^{2}_{loc}(\mathbb{R}^{d};\mathcal{M}^{d}_{+})$ with the topology induced by the family of semi-norms given by $\sup_{x\in\mathsf{B}(0,n)}\|f\|_{C^{2},x}$ for $n\in\mathbb{N}$.

In this section, we require that $g$ is a random element of $C^{2}_{loc}(\mathbb{R}^{d};\mathcal{M}^{d}_{+})$, i.e., it is a measurable map $g:\Omega\to C^{2}_{loc}(\mathbb{R}^{d};\mathcal{M}^{d}_{+})$. For a fixed $x\in\mathbb{R}^{d}$, this map gives a random matrix $g_{x}=g_{x,\omega}$.

Let us specify a set of general conditions that guarantee that the setting and assumptions of Section 2 hold for the distance given in (3.1)–(3.2). Then our results on the shape function and limit shape including the differentiability result will be applicable to this class of models.

Our first condition, concerning stationarity and skew-invariance of $g$, is stated in terms of the spatial translations defined in (2.2):

(C1) The probability space $(\Omega,\mathcal{F},\mathbb{P})$ is equipped with an ergodic $\mathbb{P}$-preserving group action $(\theta_{*}^{x})_{x\in\mathbb{R}^{d}}$ synchronized with the translations $(\theta^{x})_{x\in\mathbb{R}^{d}}$ on $\mathbb{R}^{d}$: for every $x\in\mathbb{R}^{d}$, $\omega\in\Omega$, and $y\in\mathbb{R}^{d}$, we have $g_{\theta^{y}x,\theta^{y}_{\ast}\omega}=g_{x,\omega}$.

For our second condition, we need to fix an arbitrary $v\neq 0$, introduce a space $H$, and define a family of transformations $(\Xi_{v\to w})_{w\in v+H}$ that will satisfy the conditions in (B1). Once $v$ is fixed, we define $H$ as the orthogonal complement to the line spanned by $v$. For $v,w\neq 0$ we define the transformation $\Xi_{v\to w}$ of $\mathbb{R}^{d}$ by

(3.3) $\Xi_{v\to w}x=\frac{\langle v,x\rangle}{|v|^{2}}w-\frac{\langle v,x\rangle}{|v|^{2}}v+x=\frac{\langle v,x\rangle}{|v|^{2}}(w-v)+x.$

This is a convenient choice for the models we are mostly concerned with, but other choices of $H$ and $\Xi_{v\to w}$ are possible.

The map $\Xi_{v\to w}$ acts on $\mathcal{S}$ in a pointwise manner: $(\Xi_{v\to w}\gamma)_{s}=\Xi_{v\to w}\gamma_{s}$. Note that $\Xi_{v\to v}$ is the identity map. These maps satisfy (B1) and preserve volume, which is useful in applications. We summarize these facts in the lemma below.

Lemma 3.1.

If $v\neq 0$ and $w\in v+H$, then $\Xi_{v\to w}$ is a volume preserving transformation of $\mathbb{R}^{d}$; additionally, $\Xi_{v\to w}$ is a measurable bijection from $\mathcal{S}_{0,Tv,*}$ to $\mathcal{S}_{0,Tw,*}$.

We postpone a proof of these properties until Section 9. Here we only mention that the volume preserving property implies that the homogeneous Poisson process in $\mathbb{R}^{d}$ (that our example models will be based upon) is distributionally invariant under $\Xi_{v\to w}$ for all $w\in v+H$.
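For concreteness, here is a small numerical sketch (our own illustration, with arbitrarily chosen $v$ and $w$) of the map (3.3): it builds the matrix of $\Xi_{v\to w}$, checks that its determinant equals one (the volume preserving property of Lemma 3.1), and checks that it sends $Tv$ to $Tw$.

```python
import numpy as np

def shear_matrix(v, w):
    """Matrix of the map x -> <v,x>/|v|^2 (w - v) + x from (3.3)."""
    v = np.asarray(v, dtype=float)
    w = np.asarray(w, dtype=float)
    return np.eye(len(v)) + np.outer(w - v, v) / np.dot(v, v)

# an arbitrary direction v and a perturbation w = v + h with h orthogonal to v
v = np.array([2.0, 1.0, 0.0])
h = np.array([-1.0, 2.0, 3.0])           # <h, v> = 0, so w lies in v + H
w = v + h

Xi = shear_matrix(v, w)
T = 7.0
print(np.linalg.det(Xi))                 # equals 1: volume preserving
print(np.allclose(Xi @ (T * v), T * w))  # True: Tv is mapped to Tw
```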

Our next condition means that $g$ and its image under $\Xi_{v\to w}$ can be efficiently coupled, with controlled errors.

(C2) There is a family of $\mathbb{P}$-preserving transformations $(\Xi_{v\to w}^{*})_{w\in v+H}$ on $\Omega$ (with $\Xi_{v\to v}^{*}$ the identity), a number $\delta\in(0,1)$, and a random field $Y:\mathbb{R}^{d}\to[0,\infty)$ such that:

(i) for all $x\in\mathbb{R}^{d}$, all $w\in\mathbb{R}^{d}$ satisfying $|w-v|<\delta$, and all $\omega\in\Omega$, the random field $g^{w,v}_{x,\omega}:=g_{\Xi_{v\to w}x,\Xi_{v\to w}^{*}\omega}$ satisfies

(3.4) $\|g_{x}^{w,v}\|+\sum_{i=1}^{d}\|\partial_{w_{i}}g_{x}^{w,v}\|+\sum_{i,j=1}^{d}\|\partial_{w_{j}}\partial_{w_{i}}g_{x}^{w,v}\|\leq Y(x);$

(ii) $Y$ satisfies the following conditions:

(a) (stationarity) $Y$ is stationary with respect to lattice shifts: for all $a\in\mathbb{Z}^{d}$, the collection $(Y(x+a))_{x\in\mathbb{R}^{d}}$ is equal in distribution to $(Y(x))_{x\in\mathbb{R}^{d}}$;

(b) (finite range) $Y$ has a finite range of dependence: there is $R>0$ such that if $\inf_{x\in A,y\in B}|x-y|>R$ holds for sets $A,B\subset\mathbb{R}^{d}$, then $(Y(x))_{x\in A}$ and $(Y(x))_{x\in B}$ are independent;

(c) (finite moments) $\sup_{x\in[0,1]^{d}}|Y(x)|$ is measurable and, for some $\beta>4d$,

$\mathbb{E}\Big[\sup_{x\in[0,1]^{d}}|Y(x)|^{\beta}\Big]<\infty.$

Condition (C2) implies that $\|g_{x}\|\leq Y(x)$ for all $x\in\mathbb{R}^{d}$ since $\Xi_{v\to v}$ and $\Xi_{v\to v}^{*}$ are the identity maps. Also, note that we do not require finite range dependence of the field $g$ itself in (C2); we only need that it is dominated by a finite range field.

The last condition we need is uniform positive definiteness of the random Riemannian metric:

(C3) There is $\lambda>0$ such that $g_{x,\omega}(p,p)\geq\lambda|p|^{2}$ for all $p\in\mathbb{R}^{d}$, $x\in\mathbb{R}^{d}$, and $\omega\in\Omega$.

Since for any path $\gamma$ we have $\frac{d}{ds}(\Xi_{v\to w}\gamma)_{s}=\Xi_{v\to w}\dot{\gamma}_{s}$, the transformed action introduced in (2.14) can be rewritten for this model as

(3.5) $B(w,v,\gamma)=\int_{0}^{t}\sqrt{g^{w,v}_{\gamma_{s}}(\Xi_{v\to w}\dot{\gamma}_{s},\Xi_{v\to w}\dot{\gamma}_{s})}\,ds,\quad\gamma\in\mathcal{S}_{0,Tv,t},$

and the minimal transformed action $\mathcal{B}^{T}(w,v)$ is defined according to (2.15). For $i=1,\dots,d$ and $x,p,v\in\mathbb{R}^{d}$ we let $h_{x}^{i}(p;v)$ denote $\partial_{w_{i}}g^{w,v}_{x}(p,p)\big|_{w=v}$.

Theorem 3.1.

Under assumptions (C1)–(C3) and the notation defined above, all theorems of Section 2 hold.

Remark 9.

Under the assumptions of Theorem 3.1,

(1) condition (2.26) holds due to (C3);

(2) we can derive an expression for (2.20) in this example: for all $v\in\mathbb{R}^{d}$ and $w\in v+H$, setting $\gamma^{T}(v):=\gamma(0,Tv)$ to be the selection of minimizer in (A4), we have

(3.6) $\langle\nabla\Lambda(v),w\rangle=\lim_{T\to\infty}\frac{1}{T}\int_{0}^{t}\Big[\frac{1}{2}\sum_{i=1}^{d}w_{i}h^{i}_{\gamma_{s}^{T}(v)}(\dot{\gamma}_{s}^{T}(v))+\frac{\langle v,\dot{\gamma}_{s}^{T}(v)\rangle}{|v|^{2}}g_{\gamma_{s}^{T}(v)}(w,\dot{\gamma}_{s}^{T}(v))\Big]ds.$

We will prove Theorem 3.1 in Section 7. Part 1 of Remark 9 is a direct consequence of (C3). We will justify part 2 in Section 7.2.

In Section 3.2 we present two examples of random Riemannian metrics that satisfy the conditions of this section.

3.2. Examples of random Riemannian metrics

Let us give two examples of random Riemannian metrics satisfying the above requirements. Let $K\subset\mathbb{R}^{d}$ be a compact set and let $\mathsf{Q}$ be a probability measure on the space

$C_{K}(\mathbb{R}^{d};\mathcal{M}^{d}_{+})=\bigl\{f\in C_{loc}(\mathbb{R}^{d};\mathcal{M}^{d}_{+}):\mathrm{supp}(f)\subset K\bigr\}.$

We let $\mathbf{N}$ be a Poisson measure on $\mathbb{R}^{d}\times C_{K}(\mathbb{R}^{d};\mathcal{M}^{d}_{+})$ with intensity measure given by $\mathrm{Leb}\otimes\mathsf{Q}$. Here $\mathrm{Leb}$ is the Lebesgue measure on $\mathbb{R}^{d}$. In other words, this is a marked Poisson process with unit intensity on $\mathbb{R}^{d}$ and i.i.d. marks distributed in $C_{K}(\mathbb{R}^{d};\mathcal{M}^{d}_{+})$ according to $\mathsf{Q}$.

It is convenient to work with the canonical space $(\Omega,\mathcal{F},\mathbb{P})$ of locally finite Poisson point configurations on $\mathbb{R}^{d}\times C_{K}(\mathbb{R}^{d};\mathcal{M}^{d}_{+})$ equipped with the topology of vague convergence. The role of $\omega$ of Section 3.1 is played by $\mathbf{N}$.

For $y\in\mathbb{R}^{d}$, the translation $\theta^{y}$ on $\mathbb{R}^{d}$ also gives rise to a $\mathbb{P}$-preserving transformation $\theta^{y}_{*}$ of $\Omega$: each Poissonian point $(x_{i},\varphi_{i})$ is mapped into a Poissonian point $(\theta^{y}x_{i},\varphi_{i})$ of the Poisson point process $\theta^{y}_{*}\mathbf{N}$. Here the transformation $\theta^{y}$ applies only to the base point $x_{i}$ in $\mathbb{R}^{d}$ but not to the mark $\varphi_{i}$. Equivalently, for all continuous functions $f$ with bounded support,

(3.7) $\int f(x,\varphi)(\theta_{*}^{y}\mathbf{N})(dx,d\varphi)=\int f(\theta^{y}x,\varphi)\mathbf{N}(dx,d\varphi).$
Example 1.

We can let

(3.8) $g_{x}=\int\varphi(x-y)\mathbf{N}(dy,d\varphi)+\lambda I=\sum_{(x_{i},\varphi_{i})}\varphi_{i}(x-x_{i})+\lambda I,$

where the summation extends over all Poissonian points $(x_{i},\varphi_{i})$. We must also assume that for some $\beta>4d$,

(3.9) $\mathsf{Q}\|\varphi\|_{C^{2}}^{\beta}<\infty.$
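The following sketch is our own illustration of (3.8): an arbitrary family of compactly supported matrix-valued kernels stands in for the abstract mark distribution $\mathsf{Q}$, the marked Poisson process is sampled in a box, and $g_x$ is evaluated at a point.

```python
import numpy as np

rng = np.random.default_rng(0)
d, lam, box = 2, 0.1, 10.0   # dimension, the constant lambda in (3.8), box half-width

def sample_marks(n):
    """i.i.d. marks: smooth positive semidefinite kernels supported in the unit ball."""
    marks = []
    for _ in range(n):
        a = rng.standard_normal((d, d))
        psd = a @ a.T                                 # random positive semidefinite matrix
        amp = rng.uniform(0.5, 1.5)
        marks.append((psd, amp))
    return marks

def make_kernel(psd, amp):
    # bump profile (1 - |z|^2)^3 on the unit ball, extended by zero outside
    return lambda z: amp * max(1.0 - z @ z, 0.0) ** 3 * psd

# Poisson (intensity 1) number of points in [-box, box]^d, uniform locations, i.i.d. marks
n_points = rng.poisson((2 * box) ** d)
points = rng.uniform(-box, box, size=(n_points, d))
kernels = [make_kernel(*m) for m in sample_marks(n_points)]

def g(x):
    """The random metric (3.8) at x: kernels centered at the Poissonian points plus lam*I."""
    x = np.asarray(x, dtype=float)
    return sum(k(x - p) for k, p in zip(kernels, points)) + lam * np.eye(d)

print(g([0.0, 0.0]))   # a symmetric positive definite 2x2 matrix
```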
Example 2.

We can also consider a product version of Example 1. Specifically, we can let

(3.10) $g_{x}=\exp\Big(\int\varphi(x-y)\mathbf{N}(dy,d\varphi)\Big).$

We additionally require that there is some $C>0$ such that $\|\varphi\|_{C^{2}}\leq C$ with probability one.

Theorem 3.2.

Examples 1 and 2 satisfy our general conditions (C1)–(C3).

We will prove this theorem in Section 9. A crucial step is to define useful transformations $\Xi_{v\to w}^{*}$, $w\in v+H$, on $\Omega$ preserving the distribution of the Poisson measure. That can be done via the following identity, valid for all continuous functions $f$ with bounded support:

(3.11) $\int f(x,\varphi)(\Xi_{v\to w}^{*}\mathbf{N})(dx,d\varphi)=\int f(\Xi_{v\to w}x,\varphi)\mathbf{N}(dx,d\varphi).$

In other words, a marked Poissonian point $(x_{i},\varphi_{i})$ of the Poisson point process $\mathbf{N}$ gives rise to a point $(\Xi_{v\to w}x_{i},\varphi_{i})$ of $\Xi_{v\to w}^{*}\mathbf{N}$, where the transformation $\Xi_{v\to w}$ applies only to the base point $x_{i}$ in $\mathbb{R}^{d}$ but not to the random kernel $\varphi_{i}$. Due to Lemma 3.1, these transformations preserve $\mathbb{P}$, the distribution of the Poisson measure.

4. Example II: Broken line Poisson FPP.

The goal of this section is to introduce another family of random metrics on $\mathbb{R}^{d}$ and state our main results for these models. In Section 10, we show that they follow from our general approach by checking the conditions of Section 2.

We will work with a homogeneous Poisson point process of constant intensity $1$ on $\mathbb{R}^{d}$. Similarly to Section 3.2, it is convenient to work with the canonical space $(\Omega,\mathcal{F},\mathbb{P})$ of locally finite Poisson point configurations on $\mathbb{R}^{d}$ equipped with the topology of vague convergence. We usually denote elements of $\Omega$ by $\omega$. They can be viewed either as locally finite point configurations or as $\sigma$-finite Borel measures with values in $\mathbb{N}\cup\{0\}$ on bounded Borel sets. For $x\in\mathbb{R}^{d}$ and $\omega\in\Omega$, we will write $x\in\omega$ if and only if $\omega(\{x\})=1$. The space $\Omega$ is equipped with the group of $\mathbb{P}$-preserving transformations $\theta^{x}_{*}:\Omega\to\Omega$, $x\in\mathbb{R}^{d}$. Namely, for $\omega\in\Omega$ and $x\in\mathbb{R}^{d}$, $\theta^{x}_{*}\omega$ is defined as the pushforward of the measure $\omega$ under the transformation $\theta^{x}$ defined in (2.2).

For every $x\in\mathbb{R}^{d}$, we introduce a random variable

$F(x)=F_{\omega}(x)=1-\omega(\{x\})=\begin{cases}1,&x\notin\omega,\\ 0,&x\in\omega.\end{cases}$

It will serve as a cost for a path to contain $x$. In other words, no cost will be associated with Poissonian points, while all the other points (we will call them penalty points) add cost $1$ to a path.

We will also need a cost function, or Lagrangian, $L$. Throughout this section, we will assume that $L$ satisfies the following conditions:

(D1) $L:\mathbb{R}^{d}\to[0,\infty)$ is a convex function such that $L(x)=L(-x)$ for all $x\in\mathbb{R}^{d}$ and $L(x)=0$ if and only if $x=0$.

(D2) $L\in C^{2}(\mathbb{R}^{d})$.

(D3) There is $c>0$ such that $L(x)\geq c|x|^{2}$ for $|x|\leq 1$.

(D4) $\lim_{|x|\to\infty}\frac{L(x)}{|x|}=+\infty.$

We first introduce the random action for discrete paths, i.e., sequences of points $\gamma=(\gamma_{0},\ldots,\gamma_{n})$ in $\mathbb{R}^{d}$. Points $\gamma_{k}\in\mathbb{R}^{d}$, $k=0,1,\ldots,n$, are called vertices of $\gamma$.

For $x,y\in\mathbb{R}^{d}$ and $n\in\mathbb{N}$, we define

$\mathcal{P}_{x,y,n}=\big\{\gamma:\{0,1,\ldots,n\}\to\mathbb{R}^{d}:\ \gamma_{0}=x,\ \gamma_{n}=y\big\},$
(4.1) $\mathcal{P}_{x,y,*}=\bigcup_{n\in\mathbb{N}}\mathcal{P}_{x,y,n},$
$\mathcal{P}_{*,*,n}=\bigcup_{x,y\in\mathbb{R}^{d}}\mathcal{P}_{x,y,n},\quad\mathrm{etc.}$

The set $\mathcal{P}_{*,*,n}$ can be identified with $\mathbb{R}^{(n+1)d}$ and equipped with the Euclidean topology. The set $\mathcal{P}=\mathcal{P}_{*,*,*}=\bigcup_{n}\mathcal{P}_{*,*,n}$ is equipped with the disjoint union topology. We can embed $\mathcal{P}_{x,y,n}$ into $\mathcal{S}_{x,y,n}$ by considering the piecewise linear interpolations of paths in $\mathcal{P}_{x,y,n}$, see Section 10 for details.

For a path $\gamma\in\mathcal{P}_{*,*,n}$, we define its action by

(4.2) $A_{\omega}(\gamma)=\sum_{i=0}^{n-1}L(\Delta_{i}\gamma)+\frac{1}{2}\sum_{i=0}^{n-1}\big(F_{\omega}(\gamma_{i})+F_{\omega}(\gamma_{i+1})\big)=\sum_{i=0}^{n-1}L(\Delta_{i}\gamma)+\frac{1}{2}F_{\omega}(\gamma_{0})+\sum_{i=1}^{n-1}F_{\omega}(\gamma_{i})+\frac{1}{2}F_{\omega}(\gamma_{n}),$

where $\Delta_{i}\gamma=\gamma_{i+1}-\gamma_{i}$.

Our results apply to various similar definitions, but we choose this one because it is additive under concatenation of paths and invariant under path reversal, see the proof of Lemma 4.1 below.
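To make the definition (4.2) concrete, here is a small sketch, our own illustration with $L(x)=|x|^{2}$ chosen only for definiteness (it satisfies (D1)–(D4)), that evaluates the action of a discrete path against a sampled Poisson configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
d, box = 2, 20.0
poisson_points = rng.uniform(0.0, box, size=(rng.poisson(box ** d), d))

def L(x):
    # a Lagrangian satisfying (D1)-(D4), chosen here only for illustration
    return float(np.dot(x, x))

def F(x, atol=1e-9):
    """The penalty F_omega(x): 0 at a Poissonian point, 1 elsewhere."""
    if len(poisson_points) == 0:
        return 1.0
    return 0.0 if np.min(np.linalg.norm(poisson_points - x, axis=1)) < atol else 1.0

def action(path):
    """The action (4.2): L summed over increments plus penalty terms,
    with the two endpoint penalties counted with weight 1/2."""
    path = np.asarray(path, dtype=float)
    steps = sum(L(path[i + 1] - path[i]) for i in range(len(path) - 1))
    penalties = 0.5 * F(path[0]) + sum(F(p) for p in path[1:-1]) + 0.5 * F(path[-1])
    return steps + penalties

# a path from the origin through two of the sampled Poissonian points
gamma = [np.zeros(d), poisson_points[0], poisson_points[1], np.array([box, box])]
print(action(gamma))
```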

For distinct $x,y\in\mathbb{R}^{d}$ and every $\omega\in\Omega$, we can define

(4.3) $\mathcal{A}_{\omega}(x,y)=\inf_{\gamma\in\mathcal{P}_{x,y,*}}A_{\omega}(\gamma).$

We also set $\mathcal{A}_{\omega}(x,x)=0$ for all $x\in\mathbb{R}^{d}$. This is compatible with definition (4.2) with $n=0$ and $\gamma=(x)\in\mathcal{P}_{x,x,0}$. If $\gamma\in\mathcal{P}_{x,y,*}$ satisfies $\mathcal{A}_{\omega}(x,y)=A_{\omega}(\gamma)$, then we call $\gamma$ a geodesic between $x$ and $y$ under $\omega$.

Lemma 4.1.

For all $\omega\in\Omega$, $\mathcal{A}_{\omega}$ is a finite metric on $\mathbb{R}^{d}$.

Proof.

For any two points $x,y\in\mathbb{R}^{d}$, let $\gamma$ be a path in $\mathcal{P}_{x,y,*}$ satisfying $|\gamma_{i+1}-\gamma_{i}|\leq 1$ and having at most $\lceil|x-y|\rceil+1$ steps; then

(4.4) $\mathcal{A}_{\omega}(x,y)\leq A(\gamma)\leq(\lceil|x-y|\rceil+1)\Big(\sup_{|x|\leq 1}L(x)+1\Big)<\infty.$

The symmetry of $\mathcal{A}_{\omega}$ follows from the invariance under path reversal: for all $\omega$, $n$, and $\gamma\in\mathcal{P}_{*,*,n}$, we have $A_{\omega}(\gamma_{n},\gamma_{n-1},\ldots,\gamma_{1},\gamma_{0})=A_{\omega}(\gamma)$.

To prove the triangle inequality, we introduce path concatenation: for any point $x\in\mathbb{R}^{d}$, if $\gamma=(\gamma_{0},\gamma_{1},\ldots,\gamma_{n-1},\gamma_{n})\in\mathcal{P}_{*,x,*}$ and $\psi=(\psi_{0},\psi_{1},\ldots,\psi_{m-1},\psi_{m})\in\mathcal{P}_{x,*,*}$, we define

$\gamma\psi=(\gamma_{0},\gamma_{1},\ldots,\gamma_{n-1},x,\psi_{1},\ldots,\psi_{m-1},\psi_{m}).$

It follows that if the concatenation of paths $\gamma$ and $\psi$ is well-defined, then, for all $\omega$,

(4.5) $A_{\omega}(\gamma\psi)=A_{\omega}(\gamma)+A_{\omega}(\psi),$

a property similar to (A1) for continuous paths, implying the triangle inequality.

The relation $\mathcal{A}_{\omega}(x,y)>0$ for distinct $x,y\in\mathbb{R}^{d}$ is also easy to see. Namely, paths containing at least one penalty point have action at least $1/2$, and a path $\gamma$ containing no penalty points contains only Poissonian points, so its action is bounded below, due to (D3), by $c(\Delta_{\omega}^{2}(x)\wedge 1)$, where $\Delta_{\omega}(x)$ is the Euclidean distance from $x$ to the closest Poissonian point distinct from $x$. ∎
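As a numerical illustration only (it plays no role in the proofs), the sketch below upper-bounds $\mathcal{A}_{\omega}(0,Tv)$ by restricting the vertices of the path to the Poissonian points of a box around the segment together with the two endpoints, again with the illustrative choice $L(x)=|x|^{2}$; dividing by $T$ for a few increasing values of $T$ gives a crude impression of the limit in (2.6).

```python
import heapq
import numpy as np

rng = np.random.default_rng(2)

def upper_bound_action(T, v=np.array([1.0, 0.0]), width=3.0):
    """Upper bound on A_omega(0, Tv) in the broken line model with L(x)=|x|^2:
    Dijkstra on the complete graph over the endpoints 0, Tv and the Poissonian
    points of a box around the segment, edge weight L(b-a) + (F(a)+F(b))/2."""
    target = T * np.asarray(v, dtype=float)
    n = rng.poisson((T + 2 * width) * (2 * width))       # unit-intensity Poisson count
    xs = rng.uniform(-width, T + width, n)
    ys = rng.uniform(-width, width, n)
    nodes = np.vstack([np.zeros(2), target, np.column_stack([xs, ys])])
    penalty = np.zeros(len(nodes))
    penalty[0] = penalty[1] = 1.0                         # endpoints are (a.s.) penalty points

    dist = np.full(len(nodes), np.inf)
    dist[0] = 0.0
    heap = [(0.0, 0)]
    while heap:
        d_a, a = heapq.heappop(heap)
        if d_a > dist[a]:
            continue
        if a == 1:                                        # reached Tv
            break
        diffs = nodes - nodes[a]
        weights = np.einsum('ij,ij->i', diffs, diffs) + 0.5 * (penalty[a] + penalty)
        for b, w in enumerate(weights):
            if b != a and d_a + w < dist[b]:
                dist[b] = d_a + w
                heapq.heappush(heap, (d_a + w, b))
    return dist[1]

for T in (5, 10, 20, 40):
    print(T, upper_bound_action(T) / T)   # rough upper estimates of Lambda(v)
```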

Theorem 4.1.

Under assumptions (D1)–(D4), all theorems of Section 2 hold.

Remark 10.

Under the assumptions of Theorem 4.1:

(1) The shape function positivity condition (2.26) holds. Thus, according to Theorem 2.5, the boundary of the limit shape is diffeomorphic to a sphere.

(2) For all $v\in\mathbb{R}^{d}$ and $w\in v+H$, setting $\gamma^{T}(v):=\gamma(0,Tv)$ to be the selection of minimizer in (A4), we have

(4.6) $\langle\nabla\Lambda(v),w\rangle=\lim_{T\to\infty}\frac{1}{T|v|^{2}}\sum_{i=0}^{n-1}\langle\nabla L(\Delta_{i}\gamma^{T}(v)),w\rangle\langle v,\Delta_{i}\gamma^{T}(v)\rangle.$

We prove Theorem 4.1 in Section 10, where we interpret our model in terms of continuous paths from $\mathcal{S}$, check the conditions of Section 2, and apply Theorems 2.1–2.5. Part 1 of Remark 10 will follow from Lemma 10.7, and part 2 will follow from computations in Section 10.

5. The Directed Setting

In Sections 3 and 4, we discussed examples of FPP type involving no restrictions on admissible path directions. The goal of this section is to explain how the directed setting of [BD23a], where the time coordinate plays a distinguished role, also fits the general framework of the present paper, although we used slightly different notation and definitions in that paper.

Let $\mathcal{S}^{\uparrow}$ denote the subset of $\mathcal{S}$ given by paths $\gamma$ in $\mathbb{R}^{d}$ satisfying $\langle\dot{\gamma},e_{1}\rangle\equiv 1$. In this directed setting, the first coordinate is interpreted as time. For $x\in\mathbb{R}^{d}$, we let $x^{\uparrow}$ be the vector composed of the remaining $d-1$ coordinates of $x$. For $\gamma\in\mathcal{S}^{\uparrow}$, the path $\gamma^{\uparrow}$ is defined by $(\gamma^{\uparrow})_{s}=\gamma^{\uparrow}_{s}$. In [BD23a], the action was given by

$A_{\omega}(\gamma)=\int_{0}^{t}L(\dot{\gamma}_{s}^{\uparrow})\,ds+\int_{0}^{t}F_{\omega}(\gamma_{s})\,ds,\quad\gamma\in\mathcal{S}^{\uparrow}\cap\mathcal{S}_{*,*,t},$

for $L:\mathbb{R}^{d-1}\to\mathbb{R}$ a convex function and $F$ a random twice differentiable function, both satisfying certain assumptions.

To apply the general set-up of Section 2.1, we can set the action to be infinite on all other paths and define the cone 𝒞\mathcal{C} of condition 4 to be {xd:x,e1>0}\{x\in\mathbb{R}^{d}\,:\,\langle x,e_{1}\rangle>0\}. To ensure condition (B1), we let H={(0,x):xd}H=\{(0,x)\,:\,x\in\mathbb{R}^{d}\} and, for v,w𝒞v,w\in\mathcal{C}, we define

Ξvwx=x+(wv)x,e1v,e1,xd.\Xi_{v\to w}x=x+(w-v)\frac{\langle x,e_{1}\rangle}{\langle v,e_{1}\rangle},\quad x\in\mathbb{R}^{d}.

Note that this shear transformation satisfies ΞvwTv=Tw\Xi_{v\to w}Tv=Tw for all TT. It can be lifted to a map on 𝒮\mathcal{S} by setting (Ξvwγ)s=Ξvwγs(\Xi_{v\to w}\gamma)_{s}=\Xi_{v\to w}\gamma_{s}. If wv+Hw\in v+H (i.e., the time coordinates of vv and ww coincide), the map Ξvw\Xi_{v\to w}^{*} on Ω\Omega is chosen as a measure preserving transformation such that the derivatives of FΞvwω(Ξvwx)F_{\Xi_{v\to w}^{*}\omega}(\Xi_{v\to w}x) with respect to ww allow for a bound in terms of a stationary, finite dependence range stochastic process with sufficiently high moments. This implies the bound on Bω(w,v,γ)B_{\omega}(w,v,\gamma) required in 2 since

in this setting

B_{\omega}(w,v,\gamma)=A_{\Xi^{*}_{v\to w}\omega}(\Xi_{v\to w}\gamma)=\int_{0}^{t}L(\dot{\gamma}^{\uparrow}_{s}+(w-v)^{\uparrow})ds+\int_{0}^{t}F_{\Xi_{v\to w}^{*}\omega}(\Xi_{v\to w}\gamma_{s})ds

for γ𝒮.\gamma\in\mathcal{S}^{\uparrow}. We refer the reader to [BD23a] for technical details.
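As an illustration of the shear just described, the following short Python sketch (the helper name shear and the parameter choices are ours and purely illustrative, not taken from [BD23a]) verifies numerically that \Xi_{v\to w}Tv=Tw for all TT, that the time coordinate \langle x,e_{1}\rangle is preserved whenever wv+Hw\in v+H, and that the map has unit determinant.

import numpy as np

def shear(v, w):
    """Matrix of x |-> x + (w - v) <x, e1> / <v, e1>."""
    d = len(v)
    e1 = np.zeros(d); e1[0] = 1.0
    return np.eye(d) + np.outer(w - v, e1) / v[0]

rng = np.random.default_rng(0)
d = 4
v = rng.normal(size=d); v[0] = 1.0             # time coordinate <v, e1> = 1
w = v.copy(); w[1:] += rng.normal(size=d - 1)  # w in v + H: same time coordinate
Xi = shear(v, w)

T = 3.7
print(np.allclose(Xi @ (T * v), T * w))        # Xi_{v->w}(Tv) = Tw
x = rng.normal(size=d)
print(np.isclose((Xi @ x)[0], x[0]))           # the time coordinate is preserved
print(np.isclose(np.linalg.det(Xi), 1.0))      # the shear is volume preserving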

6. Proofs of results from Section 2

In this and following sections, we give proofs of results stated in Sections 24.

6.1. Proof of Theorem 2.1

Since θz\theta^{z} maps 𝒮x,y,\mathcal{S}_{x,y,*} bijectively to 𝒮x+z,y+z,,\mathcal{S}_{x+z,y+z,*}, condition (A2) implies

(6.1) 𝒜ω(x,y)=𝒜θzω(x+z,y+z).\mathcal{A}_{\omega}(x,y)=\mathcal{A}_{\theta_{*}^{z}\omega}(x+z,y+z).

If γ1𝒮0,x,\gamma_{1}\in\mathcal{S}_{0,x,*} and γ2𝒮x,x+y,\gamma_{2}\in\mathcal{S}_{x,x+y,*}, then γ1γ2𝒮0,x+y,.\gamma_{1}\gamma_{2}\in\mathcal{S}_{0,x+y,*}. We deduce that for all x,ydx,y\in\mathbb{R}^{d}

(6.2) 𝒜ω(0,x+y)infγ1𝒮0,x,,γ2𝒮x,x+y,(A(γ1)+A(γ2))𝒜ω(0,x)+𝒜θxω(0,y).\mathcal{A}_{\omega}(0,x+y)\leq\inf_{\gamma_{1}\in\mathcal{S}_{0,x,*},\,\gamma_{2}\in\mathcal{S}_{x,x+y,*}}(A(\gamma_{1})+A(\gamma_{2}))\leq\mathcal{A}_{\omega}(0,x)+\mathcal{A}_{\theta_{*}^{-x}\omega}(0,y).

If v0v\neq 0, combining this with (2.4) of condition 4, we obtain existence of the limit in (2.6) from Kingman’s Subadditive Ergodic Theorem (see Theorem 5 in [Kin73]). The fact that Λ\Lambda is deterministic follows from ergodicity of θ.\theta_{*}. If v=0v=0, then |𝒜ω(0,T0)|=|𝒜ω(0,0)|<|\mathcal{A}_{\omega}(0,T0)|=|\mathcal{A}_{\omega}(0,0)|<\infty due to (2.4), which implies Λ(0)=0\Lambda(0)=0.

To prove convexity of Λ\Lambda, we note that if z=αx+(1α)y𝒞z=\alpha x+(1-\alpha)y\in\mathcal{C} for some x,y𝒞x,y\in\mathcal{C}, α(0,1)\alpha\in(0,1) then (6.2) implies

1T𝒜ω(0,Tz)1T𝒜ω(0,Tαx)+1T𝒜θTαxω(0,T(1α)y).\frac{1}{T}\mathcal{A}_{\omega}(0,Tz)\leq\frac{1}{T}\mathcal{A}_{\omega}(0,T\alpha x)+\frac{1}{T}\mathcal{A}_{\theta_{*}^{-T\alpha x}\omega}(0,T(1-\alpha)y).

The left-hand side converges almost surely to Λ(z)\Lambda(z), and the right-hand side converges in probability to αΛ(x)+(1α)Λ(y)\alpha\Lambda(x)+(1-\alpha)\Lambda(y) as TT\to\infty. Hence, Λ\Lambda is convex.

The property Λ(sv)=sΛ(v)\Lambda(sv)=s\Lambda(v) for positive ss follows directly from (2.6) because \frac{1}{T}\mathcal{A}(0,Tsv)=s\frac{1}{Ts}\mathcal{A}(0,Tsv).
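To illustrate the subadditive limit (2.6) in the simplest possible setting, the following Python sketch (a toy nearest-neighbor lattice FPP with i.i.d. Exp(1) edge weights, which is only an analogue of the continuous models considered here; all names and constants are ours) estimates 𝒜(0,Te_{1})/T by Dijkstra's algorithm for several values of TT; the printed ratios stabilize near a constant, in line with the existence of Λ(e_{1}).

import heapq, random

def passage_time(T, seed=0):
    """Dijkstra first-passage time from (0,0) to (T,0) on the box [-T,2T]^2."""
    random.seed(seed)
    lo, hi = -T, 2 * T
    weights = {}                               # lazily sampled i.i.d. Exp(1) edge weights

    def w(a, b):
        key = (min(a, b), max(a, b))
        if key not in weights:
            weights[key] = random.expovariate(1.0)
        return weights[key]

    dist = {(0, 0): 0.0}
    heap = [(0.0, (0, 0))]
    target = (T, 0)
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue
        x, y = u
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if lo <= nb[0] <= hi and lo <= nb[1] <= hi:
                nd = d + w(u, nb)
                if nd < dist.get(nb, float("inf")):
                    dist[nb] = nd
                    heapq.heappush(heap, (nd, nb))
    return float("inf")

for T in (10, 20, 40, 80):
    print(T, round(passage_time(T) / T, 3))    # the ratios stabilize near a constant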

6.2. Proof of Theorem 2.2

Our argument closely follows that of Theorem 2.16 in [ADH17b]. We need an auxiliary lemma first.

Lemma 6.1.

For all w𝒞w\in\mathcal{C}^{\circ} and all ϵ>0\epsilon>0 there are v,v+𝒞v^{-},v^{+}\in\mathcal{C}^{\circ} and δ>0\delta>0 such that |v±w|ϵ|v^{\pm}-w|\leq\epsilon and the following estimates hold with probability 11:

(6.3) lim supT1Tsupw𝖡(Tw,Tδ)𝒜(0,w)Λ(v)+ϵ,\limsup_{T\to\infty}\frac{1}{T}\sup_{w^{\prime}\in\mathsf{B}(Tw,T\delta)}\mathcal{A}(0,w^{\prime})\leq\Lambda(v^{-})+\epsilon,
(6.4) lim infT1Tinfw𝖡(Tw,Tδ)𝒜(0,w)Λ(v+)ϵ.\liminf_{T\to\infty}\frac{1}{T}\inf_{w^{\prime}\in\mathsf{B}(Tw,T\delta)}\mathcal{A}(0,w^{\prime})\geq\Lambda(v^{+})-\epsilon.
Proof.

We will first establish (6.3).

We will take ϵ>0\epsilon^{\prime}>0 (to be specified later) satisfying ϵϵ.\epsilon^{\prime}\leq\epsilon. Take δ>0\delta>0, v𝒞v^{-}\in\mathcal{C}^{\circ}, and a cone 𝒞𝒞\mathcal{C}^{-}\subset\mathcal{C} properly contained in 𝒞\mathcal{C} (see our definition of proper containment just before 5) such that 𝖡(w,δ)v+𝒞\mathsf{B}(w,\delta)\subset v^{-}+\mathcal{C}^{-} and |wv|<ϵ.|w-v^{-}|<\epsilon^{\prime}. Note that this implies that 𝖡(Rw,Rδ)rv+𝒞\mathsf{B}(Rw,R\delta)\subset rv^{-}+\mathcal{C}^{-} for all rr and RR satisfying 0<rR.0<r\leq R.

Due to (2.7) of condition 5, ergodicity with respect to θ\theta_{*}, and skew-invariance condition (A2), we obtain that with probability one there exists an increasing sequence (nk)k(n_{k})_{k\in\mathbb{N}} such that nkn_{k}\to\infty as k,k\to\infty, nk+1/nk1n_{k+1}/n_{k}\to 1 as k,k\to\infty, and

(6.5) 𝒜(nkv,w)κ|wnkv|,wnkv+𝒞.\mathcal{A}(n_{k}v^{-},w^{\prime})\leq\kappa|w^{\prime}-n_{k}v^{-}|,\quad\forall w^{\prime}\in n_{k}v^{-}+\mathcal{C}^{-}.

Additionally, almost surely 1T𝒜(0,Tv)Λ(v)\frac{1}{T}\mathcal{A}(0,Tv^{-})\to\Lambda(v^{-}) as T.T\to\infty. Let us now consider ω\omega satisfying these two conditions.

By concatenating paths from 0 to n_{k}v^{-} and from n_{k}v^{-} to ww^{\prime}, we deduce from (6.5) that for all kk\in\mathbb{N} and all wnkv+𝒞,w^{\prime}\in n_{k}v^{-}+\mathcal{C}^{-},

(6.6) 𝒜(0,w)𝒜(0,nkv)+𝒜(nkv,w)𝒜(0,nkv)+κ|wnkv|.\mathcal{A}(0,w^{\prime})\leq\mathcal{A}(0,n_{k}v^{-})+\mathcal{A}(n_{k}v^{-},w^{\prime})\leq\mathcal{A}(0,n_{k}v^{-})+\kappa|w^{\prime}-n_{k}v^{-}|.

For T>0T>0 sufficiently large, let k(T)k(T) be such that

nk(T)Tnk(T)+1.n_{k(T)}\leq T\leq n_{k(T)+1}.

For all w𝖡(Tw,Tδ)𝒞w^{\prime}\in\mathsf{B}(Tw,T\delta)\cap\mathcal{C}, (6.6) and |wTw|<Tδ|w^{\prime}-Tw|<T\delta imply

𝒜(0,w)𝒜(0,nk(T)v)+κ(|Twnk(T)v|+Tδ).\mathcal{A}(0,w^{\prime})\leq\mathcal{A}(0,n_{k(T)}v^{-})+\kappa(|Tw-n_{k(T)}v^{-}|+T\delta).

Since nk(T)/T1n_{k(T)}/T\to 1 as TT\to\infty and |wv|<ϵ|w-v^{-}|<\epsilon^{\prime}, the right-hand side of this inequality is bounded above for sufficiently large TT by

𝒜(0,nk(T)v)+κ(2Tϵ+Tδ).\mathcal{A}(0,n_{k(T)}v^{-})+\kappa(2T\epsilon^{\prime}+T\delta).

Taking ϵ<ϵ/(4κ)\epsilon^{\prime}<\epsilon/(4\kappa) and δ<ϵ/2\delta<\epsilon/2, we obtain

supw𝖡(Tw,Tδ)𝒜(0,w)𝒜(0,nk(T)v)+Tϵ.\sup_{w^{\prime}\in\mathsf{B}(Tw,T\delta)}\mathcal{A}(0,w^{\prime})\leq\mathcal{A}(0,n_{k(T)}v^{-})+T\epsilon.

Dividing by TT and taking TT\to\infty establishes (6.3) on an event of full probability.

The argument for the claim (6.4) is almost identical to the preceding argument but using (2.8) in place of (2.7), so we will only give a sketch of the proof. Choose ϵ′′ϵ\epsilon^{\prime\prime}\leq\epsilon and v+w+𝒞v^{+}\in w+\mathcal{C}^{\prime} satisfying |v+w|<ϵ′′|v^{+}-w|<\epsilon^{\prime\prime}. Decreasing δ\delta if necessary, let 𝒞+\mathcal{C}^{+} be a cone properly contained in 𝒞\mathcal{C} such that 𝖡(w,δ)v+𝒞+.\mathsf{B}(w,\delta)\subset v^{+}-\mathcal{C}^{+}. Almost surely there is an increasing sequence nkn_{k}\to\infty satisfying nk+1/nk1n_{k+1}/n_{k}\to 1 and such that

\mathcal{A}(w^{\prime},n_{k}v^{+})\leq\kappa|w^{\prime}-n_{k}v^{+}|,\quad\forall w^{\prime}\in n_{k}v^{+}-\mathcal{C}^{+}.

If k=k(T)k=k(T) is such that nk(T)Tnk(T)+1n_{k(T)}\leq T\leq n_{k(T)+1}, then for all w𝖡(Tw,Tδ),w^{\prime}\in\mathsf{B}(Tw,T\delta),

𝒜(0,nkv+)𝒜(0,w)+𝒜(w,nkv+)\displaystyle\mathcal{A}(0,n_{k}v^{+})\leq\mathcal{A}(0,w^{\prime})+\mathcal{A}(w^{\prime},n_{k}v^{+}) 𝒜(0,w)+κ(|nkv+Tw|+Tδ)\displaystyle\leq\mathcal{A}(0,w^{\prime})+\kappa(|n_{k}v^{+}-Tw|+T\delta)
𝒜(0,w)+Tϵ\displaystyle\leq\mathcal{A}(0,w^{\prime})+T\epsilon

for ϵ′′\epsilon^{\prime\prime} and δ\delta sufficiently small. We can then conclude that (6.4) holds on a full measure set by dividing by TT and taking T.T\to\infty. ∎

Proof of Theorem 2.2.

First note that (2.7) implies Λ(v)>\Lambda(v)>-\infty for all v𝒞.v\in\mathcal{C}^{\circ}. Indeed, if 𝒞\mathcal{C}^{\prime} is a cone properly contained in 𝒞\mathcal{C} that contains vv, then

Λ(v)=|v|limT𝒜T(v)|v|T|v|supx𝒞,|x|>1|𝒜(0,x)||x|>|v|κ>\Lambda(v)=|v|\lim_{T\to\infty}\frac{\mathcal{A}^{T}(v)}{|v|T}\geq-|v|\sup_{x\in\mathcal{C}^{\prime},\,|x|>1}\frac{|\mathcal{A}(0,x)|}{|x|}>-|v|\kappa>-\infty

with positive probability. Since Λ(v)\Lambda(v) is nonrandom, we have Λ(v)>\Lambda(v)>-\infty.

For every w𝒞w\in\mathcal{C}^{\circ} and ϵ>0\epsilon>0, let v(w,ϵ),v+(w,ϵ)𝒞v^{-}(w,\epsilon),v^{+}(w,\epsilon)\in\mathcal{C}^{\circ} and δ(w,ϵ)\delta(w,\epsilon) be such that the conclusions of Lemma 6.1 hold.

Since \bigcup_{w\in\mathcal{C}^{\circ}}\mathsf{B}(w,\delta(w,\epsilon)) is an open cover of 𝒞,\mathcal{C}^{\circ}, for every ϵ>0,\epsilon>0, there is a countable set of tuples (wk(ϵ),vk(ϵ),vk+(ϵ),δk(ϵ))(w_{k}(\epsilon),v_{k}^{-}(\epsilon),v_{k}^{+}(\epsilon),\delta_{k}(\epsilon)) indexed by kk\in\mathbb{N} such that the conclusions of Lemma 6.1 hold for each tuple and such that k𝖡(wk(ϵ),δk(ϵ))\bigcup_{k\in\mathbb{N}}\mathsf{B}(w_{k}(\epsilon),\delta_{k}(\epsilon)) is an open cover of 𝒞.\mathcal{C}^{\circ}.

Define the event Ω(k,ϵ)\Omega(k,\epsilon) as

Ω(k,ϵ)={ω:(6.3) and (6.4) hold for (wk(ϵ),vk(ϵ),vk+(ϵ),δk(ϵ))}\displaystyle\Omega(k,\epsilon)=\Big{\{}\omega\,:\,\textrm{\eqref{eq:AApproxUpperBound} and \eqref{eq:AApproxLowerBound} hold for }(w_{k}(\epsilon),v_{k}^{-}(\epsilon),v_{k}^{+}(\epsilon),\delta_{k}(\epsilon))\Big{\}}

Finally, define the event

Ω0=mkΩ(k,m1).\Omega_{0}=\bigcap_{m\in\mathbb{N}}\bigcap_{k\in\mathbb{N}}\Omega(k,m^{-1}).

By Lemma 6.1, (Ω0)=1.\mathbb{P}(\Omega_{0})=1. We will prove that (2.9) holds for all ωΩ0.\omega\in\Omega_{0}.

Suppose ωΩ0\omega\in\Omega_{0}, let ϵ>0\epsilon>0, and let KK be a compact subset of 𝒞\mathcal{C}^{\circ}. Let mm\in\mathbb{N} be such that m1<ϵm^{-1}<\epsilon and such that if |vw|<m1|v-w|<m^{-1} and wK,w\in K, then

(6.7) |Λ(v)Λ(w)|<ϵ.|\Lambda(v)-\Lambda(w)|<\epsilon.

By compactness of KK, we can find indices i1,,iki_{1},\dots,i_{k} such that

K=1k𝖡(wi(m1),m1).K\subset\bigcup_{\ell=1}^{k}\mathsf{B}(w_{i_{\ell}}(m^{-1}),m^{-1}).

If w𝖡(wi(m1),m1)w\in\mathsf{B}(w_{i_{\ell}}(m^{-1}),m^{-1}) for some {1,,k}\ell\in\{1,\ldots,k\}, then

\frac{1}{T}\mathcal{A}^{T}(w)-\Lambda(w)\leq\Big{(}\frac{1}{T}\mathcal{A}^{T}(w)-\Lambda(v^{-}_{i_{\ell}}(m^{-1}))\Big{)}+|\Lambda(v_{i_{\ell}}^{-}(m^{-1}))-\Lambda(w)|.

Applying (6.3) and (6.7), we obtain

\limsup_{T\to\infty}\sup_{w\in\mathsf{B}(w_{i_{\ell}}(m^{-1}),m^{-1})}\Big{[}\frac{1}{T}\mathcal{A}^{T}(w)-\Lambda(w)\Big{]}\leq 2\epsilon.

The above holds for all ϵ>0\epsilon>0 and all =1,,k\ell=1,\dots,k. Therefore,

lim supTsupwK(1T𝒜T(w)Λ(w))0.\limsup_{T\to\infty}\sup_{w\in K}(\frac{1}{T}\mathcal{A}^{T}(w)-\Lambda(w))\leq 0.

The reverse inequality can be proven similarly using (6.4), and (2.9) follows. ∎

6.3. Proof of Theorem 2.3

Fix ϵ>0\epsilon>0 and a compact set K𝒞K\subset\mathcal{C}^{\circ}. The set K:=((1+ϵ)EΛ)KK^{\prime}:=((1+\epsilon)E_{\Lambda})\cap K is a compact subset of 𝒞\mathcal{C}^{\circ}. Thus, Theorem 2.2 implies that if ωΩ0\omega\in\Omega_{0}, then

(6.8) limTΔT=0,\lim_{T\to\infty}\Delta_{T}=0,

where

ΔT=supwK|Λ(w)1T𝒜(0,Tw)|.\Delta_{T}=\sup_{w\in K^{\prime}}|\Lambda(w)-\frac{1}{T}\mathcal{A}(0,Tw)|.

If v((1ϵ)EΛ)Kv\in((1-\epsilon)E_{\Lambda})\cap K, then Λ(v)1ϵ\Lambda(v)\leq 1-\epsilon due to 11-homogeneity of Λ\Lambda. Therefore,

1T𝒜(0,Tv)Λ(v)+ΔT1ϵ+ΔT.\frac{1}{T}\mathcal{A}(0,Tv)\leq\Lambda(v)+\Delta_{T}\leq 1-\epsilon+\Delta_{T}.

If ΔT<ϵ\Delta_{T}<\epsilon then v1TEω(T)v\in\frac{1}{T}E_{\omega}(T). Additionally, if v1TEω(T)Kv\in\frac{1}{T}E_{\omega}(T)\cap K, then

Λ(v)1T𝒜(0,Tv)+ΔT1+ΔT,\Lambda(v)\leq\frac{1}{T}\mathcal{A}(0,Tv)+\Delta_{T}\leq 1+\Delta_{T},

implying that vKv\in K^{\prime} if ΔTϵ.\Delta_{T}\leq\epsilon. Therefore, (6.8) implies that there is T0>0T_{0}>0 such that the inclusion (2.10) (with N=EΛN=E_{\Lambda}) holds and thus Theorem 2.3 is established.

To prove the claim made in Remark 3, we note that if 𝒞=d\mathcal{C}=\mathbb{R}^{d} and Λ(v)>0\Lambda(v)>0 for all v0v\neq 0, then EΛE_{\Lambda} is compact and contained in 𝒞\mathcal{C}^{\circ}. Thus, (2.12) follows from (2.11) since we can take K=(1+ϵ)EΛK=(1+\epsilon)E_{\Lambda} in the definition of local convergence.

6.4. Proof of Lemma 2.1

Differentiability of ff relatively to HH means that there is a vector FHF\in H such that

(6.9) f(w)=f(v)+F,wv+o(|wv|),H(δ)wv.f(w)=f(v)+\langle F,w-v\rangle+o(|w-v|),\quad H(\delta)\ni w\to v.

We let eie_{i} be the basis vector in d\mathbb{R}^{d} with 1 in the iith coordinate and 0 elsewhere. Let h1,,hd1h_{1},\dots,h_{d-1} be an orthonormal basis for HH and let 𝐇\mathbf{H} be the linear map satisfying 𝐇ei=hi\mathbf{H}e_{i}=h_{i} for i=1,,d1i=1,\dots,d-1 and 𝐇ed=v.\mathbf{H}e_{d}=v. Since vH,v\notin H, 𝐇\mathbf{H} is invertible. Note that since u=𝐇𝐇1u,u=\mathbf{H}\mathbf{H}^{-1}u, the vector 𝐇1u\mathbf{H}^{-1}u is the representation of uu in the basis given by (h1,,hd1,v).(h_{1},\dots,h_{d-1},v).

Define the map

s(w)=1ed,𝐇1w,wd{0}.s(w)=\frac{1}{\langle e_{d},\mathbf{H}^{-1}w\rangle},\quad w\in\mathbb{R}^{d}\setminus\{0\}.

We claim that s(w)wv+Hs(w)w\in v+H for all wd{0}w\in\mathbb{R}^{d}\setminus\{0\}. We have uHu\in H if and only if ed,𝐇1u=0,\langle e_{d},\mathbf{H}^{-1}u\rangle=0, because

u=𝐇𝐇1u=i=1d1ei,𝐇1uhi+ed,𝐇1uv.u=\mathbf{H}\mathbf{H}^{-1}u=\sum_{i=1}^{d-1}\langle e_{i},\mathbf{H}^{-1}u\rangle h_{i}+\langle e_{d},\mathbf{H}^{-1}u\rangle v.

Since 𝐇1v=ed\mathbf{H}^{-1}v=e_{d}, the claim s(w)wv+Hs(w)w\in v+H follows from

ed,𝐇1(s(w)wv)=1ed,𝐇1wed,𝐇1wed,𝐇1v=0.\langle e_{d},\mathbf{H}^{-1}(s(w)w-v)\rangle=\frac{1}{\langle e_{d},\mathbf{H}^{-1}w\rangle}\langle e_{d},\mathbf{H}^{-1}w\rangle-\langle e_{d},\mathbf{H}^{-1}v\rangle=0.

We must show that f(w)f(v)f(w)-f(v) is o(|wv|)o(|w-v|)-close to a linear map, as wv.w\to v. Define I1I_{1} and I2I_{2} in the following way:

f(w)f(v)\displaystyle f(w)-f(v) =(f(w)f(s(w)w))+(f(s(w)w)f(v))=I1+I2.\displaystyle=(f(w)-f(s(w)w))+(f(s(w)w)-f(v))=I_{1}+I_{2}.

Using the homogeneity of ff, differentiability of ss, the fact that s(v)=1s(v)=1, and continuity of ff, we obtain

I1=f(w)(1s(w))=f(w)s(v),wv+o(|wv|)=f(v)s(v),wv+o(|wv|).I_{1}=f(w)(1-s(w))=-f(w)\langle\nabla s(v),w-v\rangle+o(|w-v|)\\ =-f(v)\langle\nabla s(v),w-v\rangle+o(|w-v|).

Additionally, since s(w)wvH,s(w)w-v\in H, we can use (6.9) to write

I2=F,s(w)wv+o(|s(w)wv|).I_{2}=\langle F,s(w)w-v\rangle+o(|s(w)w-v|).

We have s(w)wv=(s(w)1)w+wvs(w)w-v=(s(w)-1)w+w-v, so o(|s(w)wv|)=o(|wv|)o(|s(w)w-v|)=o(|w-v|). Therefore,

I2\displaystyle I_{2} =F,s(w)wv+o(|wv|)\displaystyle=\langle F,s(w)w-v\rangle+o(|w-v|)
=F,wv+(s(w)1)F,w+o(|wv|)\displaystyle=\langle F,w-v\rangle+(s(w)-1)\langle F,w\rangle+o(|w-v|)
=F,wv+s(v),wvF,v+o(|wv|).\displaystyle=\langle F,w-v\rangle+\langle\nabla s(v),w-v\rangle\langle F,v\rangle+o(|w-v|).

In sum,

f(w)f(v)=f(v)s(v),wv+F,wv+s(v),wvF,v+o(|wv|).f(w)-f(v)=-f(v)\langle\nabla s(v),w-v\rangle+\langle F,w-v\rangle+\langle\nabla s(v),w-v\rangle\langle F,v\rangle+o(|w-v|).

Hence, ff is differentiable at vv with gradient given by (F,vf(v))s(v)+F(\langle F,v\rangle-f(v))\nabla s(v)+F.
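The gradient formula just obtained can be checked numerically. The following Python sketch (our own illustration, not part of the proof; it assumes ff is positively 1-homogeneous, here f(x)=\sqrt{\langle x,Ax\rangle}, and uses finite differences for the relative derivative FF) compares (\langle F,v\rangle-f(v))\nabla s(v)+F with the true gradient of ff at vv.

import numpy as np

rng = np.random.default_rng(1)
d = 5
M = rng.normal(size=(d, d))
A = M @ M.T + d * np.eye(d)                                 # positive definite
f = lambda z: float(np.sqrt(z @ A @ z))                     # 1-homogeneous function
grad_f = lambda z: A @ z / f(z)                             # its exact gradient

v = rng.normal(size=d)
# orthonormal basis h_1, ..., h_{d-1} of the hyperplane H = v^perp
Q, _ = np.linalg.qr(np.column_stack([v] + [rng.normal(size=d) for _ in range(d - 1)]))
h = Q[:, 1:]
Hmat = np.column_stack([h, v])                              # H e_i = h_i, H e_d = v
Hinv = np.linalg.inv(Hmat)

# F in H: derivative of f relative to H at v, via central finite differences
eps = 1e-6
F = sum(((f(v + eps * h[:, i]) - f(v - eps * h[:, i])) / (2 * eps)) * h[:, i]
        for i in range(d - 1))

grad_s_at_v = -Hinv.T[:, -1]                # gradient of s(w) = 1/<e_d, Hinv w> at w = v
formula = (F @ v - f(v)) * grad_s_at_v + F
print(np.allclose(formula, grad_f(v), atol=1e-5))           # expect True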

6.5. Proof of Lemma 2.2

In what follows, we continue to fix v𝒞,v\in\mathcal{C}^{\circ}, δ>0\delta>0 and define H(δ)=(v+H)𝒞𝖡(v,δ).H(\delta)=(v+H)\cap\mathcal{C}^{\circ}\cap\mathsf{B}(v,\delta). For a function f:H(δ)f:H(\delta)\to\mathbb{R} we define the subdifferential to be

\partial^{\vee}_{H}f(x)=\{\xi\in H\,:\,\forall y\in H(\delta),\,f(y)-f(x)\geq\langle\xi,y-x\rangle\}.

If ff is convex, then Hf(x)\partial^{\vee}_{H}f(x) is nonempty. A convex function is differentiable on v+Hv+H at vv if and only if Hf(v)\partial^{\vee}_{H}f(v) is a one-point set. Let 𝒢H{}\mathcal{G}\subset H\cup\{\infty\} denote the set of limit points of the sequence (ξn)n(\xi_{n})_{n\in\mathbb{N}} and let ξ𝒢\xi^{*}\in\mathcal{G} be a limit point of some subsequence (ξnk)k(\xi_{n_{k}})_{k\in\mathbb{N}}. First we rule out the case ξ=.\xi^{*}=\infty. If ξnk\xi_{n_{k}}\to\infty we may, by taking a further subsequence, assume that ξnk/|ξnk|\xi_{n_{k}}/|\xi_{n_{k}}| converges to some vector ξ.\xi^{*}_{\infty}. Take a vector w𝒟w\in\mathcal{D} such that ξ,wv<0.\langle\xi^{*}_{\infty},w-v\rangle<0. Then, ξnk/|ξnk|,vw>0\langle\xi_{n_{k}}/|\xi_{n_{k}}|,v-w\rangle>0 for sufficiently large kk and by (2.22) and (2.21), |ξnk|ξnk/|ξnk|,vw|\xi_{n_{k}}|\langle\xi_{n_{k}}/|\xi_{n_{k}}|,v-w\rangle is bounded above. It follows that the sequence (|ξn|)n(|\xi_{n}|)_{n\in\mathbb{N}} is bounded.

Taking kk\to\infty in the inequality

fnk(w)fnk(v)ξnk,wv+h(w),f_{n_{k}}(w)-f_{n_{k}}(v)\leq\langle\xi_{n_{k}},w-v\rangle+h(w),

we obtain

(6.10) f(w)f(v)ξ,wv+h(w)f(w)-f(v)\leq\langle\xi^{*},w-v\rangle+h(w)

for all w𝒟w\in\mathcal{D}. Let p\in\partial^{\vee}_{H}f(v). Then, the inequality

f(w)f(v)p,wvf(w)-f(v)\geq\langle p,w-v\rangle

along with (6.10) implies

(6.11) 0ξp,wv+h(w)0\leq\langle\xi^{*}-p,w-v\rangle+h(w)

for all w𝒟w\in\mathcal{D}. Let ζH\zeta\in H satisfy |ζ|=1|\zeta|=1. Since 𝒟\mathcal{D} is dense in H(δ),H(\delta), there is a sequence (wm(ζ))m(w_{m}(\zeta))_{m\in\mathbb{N}} in 𝒟\mathcal{D} converging to vv such that

limmwm(ζ)v|wm(ζ)v|=ζ.\lim_{m\to\infty}\frac{w_{m}(\zeta)-v}{|w_{m}(\zeta)-v|}=\zeta.

Inequality (6.11) then implies

0ξp,ζ.0\leq\langle\xi^{*}-p,\zeta\rangle.

Repeating this procedure with ζ-\zeta gives us

0=ξp,ζ.0=\langle\xi^{*}-p,\zeta\rangle.

Since the above holds for all ζH\zeta\in H satisfying |ζ|=1|\zeta|=1, we have ξ=p.\xi^{*}=p. But ξ𝒢\xi^{*}\in\mathcal{G} and pf(v)p\in\partial^{\vee}f(v) were arbitrary, and so, in fact, limnξn\lim_{n\to\infty}\xi_{n} is well-defined and

𝒢=f(v)={limnξn}.\mathcal{G}=\partial^{\vee}f(v)=\big{\{}\lim_{n\to\infty}\xi_{n}\big{\}}.

Hence, ff is differentiable at vv with derivative equal to limnξn.\lim_{n\to\infty}\xi_{n}.

7. Proof of Theorem 3.1

Theorem 3.1 will follow once we prove that the action and minimal action described in Section 3.1 satisfy the conditions outlined in Section 2.

Conditions (A1)–(A4) are checked in Section 7.1. Conditions (B1)–(B2) and Part 2 of Remark 9 are checked in Section 7.2. Condition 5 is checked in Section 7.3. Proofs of some auxiliary results are given in Section 8.

7.1. Conditions (A1)–(A4)

The map A:Ω×𝒮A:\Omega\times\mathcal{S}\to\mathbb{R} defined in (3.1) is measurable due to measurability of gg. A stronger (strictly additive) version of the subadditivity condition (A1) follows due to additivity of integrals. The skew invariance condition (A2) is implied by assumption (C1) and the fact that ddt(θzγ)s=θzγ˙s\frac{d}{dt}(\theta^{z}\gamma)_{s}=\theta^{z}\dot{\gamma}_{s}.

The following lemma verifies the measurability of the minimal action required in condition 3 as well as the existence of a measurable selection of minimizer required in condition 4. See Section 7.4 for the proof.

Proposition 7.1.
  1. (1)

    There is a measurable mapping γ\gamma^{*} from d×d×Ω\mathbb{R}^{d}\times\mathbb{R}^{d}\times\Omega to 𝒮\mathcal{S} such that γ(x,y,ω)𝒮x,y,\gamma^{*}(x,y,\omega)\in\mathcal{S}_{x,y,*} and

    (7.1) A(γ(x,y,ω))=𝒜ω(x,y)A(\gamma^{*}(x,y,\omega))=\mathcal{A}_{\omega}(x,y)

    for all (x,y,ω).(x,y,\omega). Additionally, γ\gamma^{*} can be chosen so that

    (7.2) gγs(x,y)(γ˙s(x,y),γ˙s(x,y))=1,s[0,t].g_{\gamma^{*}_{s}(x,y)}(\dot{\gamma}^{*}_{s}(x,y),\dot{\gamma}^{*}_{s}(x,y))=1,\quad s\in[0,t].
  2. (2)

    𝒜:Ω×d×d{}\mathcal{A}:\Omega\times\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\cup\{\infty\} is jointly measurable.

The requirement (2.4) of assumption 4 follows from

𝔼sups[0,1]|𝒜(0,sx)|𝔼sups[0,1]0sgux(x,x)𝑑u|x|01𝔼gux1/2𝑑u=|x|𝔼g01/2<.\mathbb{E}\sup_{s\in[0,1]}|\mathcal{A}(0,sx)|\leq\mathbb{E}\sup_{s\in[0,1]}\int_{0}^{s}\sqrt{g_{ux}(x,x)}du\\ \leq|x|\int_{0}^{1}\mathbb{E}\|g_{ux}\|^{1/2}du=|x|\mathbb{E}\|g_{0}\|^{1/2}<\infty.

7.2. Conditions (B1)–(B2)

Condition (B1) is implied by Lemma 3.1 and 2.

We now verify 2. We fix vd{0}v\in\mathbb{R}^{d}\setminus\{0\} (and HH) and, for δ>0\delta>0, define H(δ)H(\delta) according to (2.13).

We let GwG^{w} be the Riemannian metric defined by Gxw(p,p)=gxw,v(Ξvwp,Ξvwp)G^{w}_{x}(p,p)=g^{w,v}_{x}(\Xi_{v\to w}p,\Xi_{v\to w}p). In particular,

(7.3) Gxv=gx,xd.G^{v}_{x}=g_{x},\quad x\in\mathbb{R}^{d}.

The transformed action in (3.5) can be rewritten as

(7.4) B(w,v,γ)=0tGγsw(γ˙s,γ˙s)𝑑s.B(w,v,\gamma)=\int_{0}^{t}\sqrt{G^{w}_{\gamma_{s}}(\dot{\gamma}_{s},\dot{\gamma}_{s})}ds.

Recall that YY is the random function given in condition 2. We postpone the proof of the following lemma until Section 8.

Lemma 7.1.

For each v0v\neq 0, there are c,C>0c,C>0 (depending on vv) such that with probability one, for all i,j=1,,di,j=1,\dots,d, xdx\in\mathbb{R}^{d}, and wH(δ)w\in H(\delta) (where δ\delta is defined in 2(2i)),

(7.5) Gxw+wiGxw+wjwiGxwCY(x),\displaystyle\|G^{w}_{x}\|+\|\partial_{w_{i}}G^{w}_{x}\|+\|\partial_{w_{j}w_{i}}G^{w}_{x}\|\leq CY(x),

and, for all pdp\in\mathbb{R}^{d},

(7.6) Gxw(p,p)c|p|2.G^{w}_{x}(p,p)\geq c|p|^{2}.

Proposition 7.1 implies that there is t>0t>0 and a path γT=γT(v)𝒮0,Tv,t\gamma^{T}=\gamma^{T}(v)\in\mathcal{S}_{0,Tv,t} such that

(7.7) B(v,v,γT(v))=T(v)B(v,v,\gamma^{T}(v))=\mathcal{B}^{T}(v)

and

(7.8) GγsT(v)v(γ˙sT(v),γ˙sT(v))=1,s[0,t],G^{v}_{\gamma_{s}^{T}(v)}(\dot{\gamma}^{T}_{s}(v),\dot{\gamma}^{T}_{s}(v))=1,\quad\forall s\in[0,t],

where t=\mathcal{A}^{T}(v). We note that

(7.9) wiB(w,v,γT)=120twiGγsTw(γ˙sT,γ˙sT)GγsTw(γ˙sT,γ˙sT)𝑑s\displaystyle\partial_{w_{i}}B(w,v,\gamma^{T})=\frac{1}{2}\int_{0}^{t}\frac{\partial_{w_{i}}G^{w}_{\gamma_{s}^{T}}(\dot{\gamma}_{s}^{T},\dot{\gamma}_{s}^{T})}{\sqrt{G^{w}_{\gamma_{s}^{T}}(\dot{\gamma}_{s}^{T},\dot{\gamma}_{s}^{T})}}ds

and

(7.10) wjwiB(w,v,γT)=120twjwiGγsTw(γ˙sT,γ˙sT)GγsTw(γ˙sT,γ˙sT)𝑑s140twjGγsTw(γ˙sT,γ˙sT)wiGγsTw(γ˙sT,γ˙sT)GγsTw(γ˙sT,γ˙sT)3/2𝑑s\partial_{w_{j}}\partial_{w_{i}}B(w,v,\gamma^{T})=\frac{1}{2}\int_{0}^{t}\frac{\partial_{w_{j}w_{i}}G^{w}_{\gamma_{s}^{T}}(\dot{\gamma}_{s}^{T},\dot{\gamma}_{s}^{T})}{\sqrt{G^{w}_{\gamma_{s}^{T}}(\dot{\gamma}_{s}^{T},\dot{\gamma}_{s}^{T})}}ds\\ -\frac{1}{4}\int_{0}^{t}\frac{\partial_{w_{j}}G^{w}_{\gamma_{s}^{T}}(\dot{\gamma}_{s}^{T},\dot{\gamma}_{s}^{T})\partial_{w_{i}}G^{w}_{\gamma_{s}^{T}}(\dot{\gamma}_{s}^{T},\dot{\gamma}_{s}^{T})}{G^{w}_{\gamma_{s}^{T}}(\dot{\gamma}_{s}^{T},\dot{\gamma}_{s}^{T})^{3/2}}ds

for i,j=1,,di,j=1,\dots,d.

First we use (7.9) to derive B(v,v,γT(v)),w\langle\nabla B(v,v,\gamma^{T}(v)),w\rangle for wdw\in\mathbb{R}^{d}, which will imply the formula in (3.6) once Theorem 3.1 is established. Recall the definition hxi(p;v)=wigxw,v(p,p)|w=vh_{x}^{i}(p;v)=\partial_{w_{i}}g^{w,v}_{x}(p,p)\Big{|}_{w=v} from Section 3.1.

Lemma 7.2.

For v,wdv,w\in\mathbb{R}^{d} and xd,x\in\mathbb{R}^{d},

(7.11) B(v,v,γ),w=0t[12i=1dhγsi(γ˙s;v)wi+v,γ˙s|v|2gγs(w,γ˙s)]𝑑s,\langle\nabla B(v,v,\gamma^{*}),w\rangle=\int_{0}^{t}\Big{[}\frac{1}{2}\sum_{i=1}^{d}h_{\gamma_{s}^{*}}^{i}(\dot{\gamma}_{s}^{*};v)w_{i}+\frac{\langle v,\dot{\gamma}_{s}^{*}\rangle}{|v|^{2}}g_{\gamma_{s}^{*}}(w,\dot{\gamma}_{s}^{*})\Big{]}ds,

where \gamma^{*}:=\gamma^{*}(0,x)\in\mathcal{S}_{0,x,t}.

Proof.

Note that (7.9) and (7.2) imply

(7.12) wiB(v,v,γ)=120twiGγsv(γ˙s,γ˙s)ds.\partial_{w_{i}}B(v,v,\gamma^{*})=\frac{1}{2}\int_{0}^{t}\partial_{w_{i}}G^{v}_{\gamma_{s}^{*}}(\dot{\gamma}_{s}^{*},\dot{\gamma}_{s}^{*})ds.

Also,

(7.13) wiΞvwp=v,p|v|2ei,\partial_{w_{i}}\Xi_{v\to w}p=\frac{\langle v,p\rangle}{|v|^{2}}e_{i},

where eide_{i}\in\mathbb{R}^{d} is the vector with 0 in all coordinates jij\neq i and 11 in coordinate ii. Thus,

\partial_{w_{i}}G^{w}_{x}(p,p)=\partial_{w_{i}}\big{(}g^{w,v}_{x}(\Xi_{v\to w}p,\Xi_{v\to w}p)\big{)}=(\partial_{w_{i}}g^{w,v}_{x})(\Xi_{v\to w}p,\Xi_{v\to w}p)+2g_{x}^{w,v}\Big{(}\frac{\langle v,p\rangle}{|v|^{2}}e_{i},\Xi_{v\to w}p\Big{)}.

The formula (7.12), the identity Ξvv=I,\Xi_{v\to v}=I, and the above imply that

B(v,v,γ),w\displaystyle\langle\nabla B(v,v,\gamma^{*}),w\rangle =0t[12i=1dhγsi(γ˙s;v)wi+i=1dv,γ˙s|v|2gγs(ei,γ˙s)wi]𝑑s\displaystyle=\int_{0}^{t}\Big{[}\frac{1}{2}\sum_{i=1}^{d}h_{\gamma^{*}_{s}}^{i}(\dot{\gamma}^{*}_{s};v)w_{i}+\sum_{i=1}^{d}\frac{\langle v,\dot{\gamma}_{s}^{*}\rangle}{|v|^{2}}g_{\gamma_{s}^{*}}(e_{i},\dot{\gamma}_{s}^{*})w_{i}\Big{]}ds
=0t[12i=1dhγsi(γ˙s;v)wi+v,γ˙s|v|2gγs(w,γ˙s)]𝑑s\displaystyle=\int_{0}^{t}\Big{[}\frac{1}{2}\sum_{i=1}^{d}h_{\gamma^{*}_{s}}^{i}(\dot{\gamma}^{*}_{s};v)w_{i}+\frac{\langle v,\dot{\gamma}_{s}^{*}\rangle}{|v|^{2}}g_{\gamma_{s}^{*}}(w,\dot{\gamma}_{s}^{*})\Big{]}ds

completing the proof. ∎

For a measurable set SdS\subset\mathbb{R}^{d} and a path γ\gamma let τS(γ)=0t𝟙γsS𝑑s.\tau_{S}(\gamma)=\int_{0}^{t}\mathds{1}_{\gamma_{s}\in S}ds. For kdk\in\mathbb{Z}^{d}, let

(7.14) Ik=k+[0,1)d.I_{k}=k+[0,1)^{d}.

We will also need the Euclidean length of the path γ𝒮,,t\gamma\in\mathcal{S}_{*,*,t} defined as

(7.15) Θ(γ)=0t|γ˙s|𝑑s.\Theta(\gamma)=\int_{0}^{t}|\dot{\gamma}_{s}|ds.

The following lemma is an application of the theory of greedy lattice animals (see, e.g., [CGGK93]) to bounding actions of continuous paths. Its proof can be found in Section 8.

Lemma 7.3.

Let ZZ be a random function that satisfies the conditions of 2(2ii) for some β>2d\beta>2d. Let (Xk)kd(X_{k})_{k\in\mathbb{Z}^{d}} be a collection of nonnegative random variables satisfying the following conditions:

  1. (1)

    XX is stationary with respect to lattice shifts, meaning for every ad,a\in\mathbb{Z}^{d}, (Xx+a)xd(X_{x+a})_{x\in\mathbb{Z}^{d}} is equal in distribution to (Xx)xd(X_{x})_{x\in\mathbb{Z}^{d}}.

  2. (2)

    XX has finite range dependence on the lattice.

  3. (3)

    𝔼[|X0|β]<\mathbb{E}[|X_{0}|^{\beta}]<\infty for some β>2d.\beta>2d.

Let

ΓX:={γ𝒮0,,:kd,τIk(γ)Xk}.\Gamma_{X}:=\{\gamma\in\mathcal{S}_{0,\ast,\ast}\,:\,\forall k\in\mathbb{Z}^{d},\,\tau_{I_{k}}(\gamma)\leq X_{k}\}.

Then, with probability one,

(7.16) supγΓX0t(γ)Z(γs)𝑑sΘ(γ)+1<.\sup_{\gamma\in\Gamma_{X}}\frac{\int_{0}^{t(\gamma)}Z(\gamma_{s})ds}{\Theta(\gamma)+1}<\infty.

Using Lemma 7.3 we can now prove the following lemma, which verifies 2 since H2B2B\|\nabla_{H}^{2}B\|\leq\|\nabla^{2}B\|.

Lemma 7.4.

For δ\delta given by Lemma 7.1, with probability one,

lim supT1TsupwH(δ)2B(w,v,γT)<.\limsup_{T\to\infty}\frac{1}{T}\sup_{w\in H(\delta)}\|\nabla^{2}B(w,v,\gamma^{T})\|<\infty.
Proof.

In this proof, the notation CC refers to a nonrandom positive number that may change line by line. By Lemma 7.1 and (7.10), for all wH(δ)w\in H(\delta) we have

2B(w,v,γT)\displaystyle\|\nabla^{2}B(w,v,\gamma^{T})\| C0t|γ˙sT|2Y(γsT)|γ˙sT|𝑑s+C0t|γ˙sT|4Y2(γsT)|γ˙sT|3𝑑s\displaystyle\leq C\int_{0}^{t}\frac{|\dot{\gamma}_{s}^{T}|^{2}Y(\gamma_{s}^{T})}{|\dot{\gamma}^{T}_{s}|}ds+C\int_{0}^{t}\frac{|\dot{\gamma}_{s}^{T}|^{4}Y^{2}(\gamma_{s}^{T})}{|\dot{\gamma}_{s}^{T}|^{3}}ds
Csups[0,t]|γ˙sT|0t|max(Y(γsT),1)|2𝑑s.\displaystyle\leq C\sup_{s\in[0,t]}|\dot{\gamma}_{s}^{T}|\int_{0}^{t}|\max(Y(\gamma_{s}^{T}),1)|^{2}ds.

Since (7.8) holds and GvG^{v} is uniformly positive definite, |\dot{\gamma}_{s}^{T}|^{2}\leq c^{-1}G^{v}_{\gamma_{s}^{T}}(\dot{\gamma}_{s}^{T},\dot{\gamma}_{s}^{T})=c^{-1}, so |\dot{\gamma}_{s}^{T}|\leq c^{-1/2} for all s[0,t].s\in[0,t]. Thus, there is C>0C>0 such that for all T>0T>0,

(7.17) \|\nabla^{2}B(w,v,\gamma^{T})\|\leq C\int_{0}^{t}\max(Y(\gamma_{s}^{T}),1)^{2}ds.

We will now bound \int_{0}^{t}\max(Y(\gamma_{s}^{T}),1)^{2}ds using Lemma 7.3. Since YY satisfies 2(2ii) with some β>4d\beta>4d, the function \max(Y,1)^{2} satisfies the conditions of Lemma 7.3 with exponent β/2>2d\beta/2>2d. The collection (Xk)kd(X_{k})_{k\in\mathbb{Z}^{d}} in Lemma 7.3 can be chosen to be

(7.18) Xk=CsupzIk|Y(z)|1/2X_{k}=C\sup_{z\in I_{k}}|Y(z)|^{1/2}

for a sufficiently large constant CC. Since γT\gamma^{T} is a geodesic with unit speed, we have for all kdk\in\mathbb{Z}^{d},

(7.19) τIk(γT)\displaystyle\tau_{I_{k}}(\gamma^{T}) =0t𝟙γsTIk𝑑s\displaystyle=\int_{0}^{t}\mathds{1}_{\gamma_{s}^{T}\in I_{k}}ds
=0t𝟙γsTIkGγsTv(γ˙sT,γ˙sT)𝑑s\displaystyle=\int_{0}^{t}\mathds{1}_{\gamma_{s}^{T}\in I_{k}}\sqrt{G^{v}_{\gamma_{s}^{T}}(\dot{\gamma}_{s}^{T},\dot{\gamma}_{s}^{T})}ds
Xk\displaystyle\leq X_{k}

if CC in (7.18) is sufficiently large. Indeed, if we let sks_{k} and rkr_{k} be, respectively, the first and last times ss such that γsTIk¯\gamma_{s}^{T}\in\overline{I_{k}}, then

skrkGγsTv(γ˙sT,γ˙sT)𝑑s=infγ:γskTγrkT0tGγsv(γ˙s,γ˙s)𝑑s.\int_{s_{k}}^{r_{k}}\sqrt{G^{v}_{\gamma_{s}^{T}}(\dot{\gamma}_{s}^{T},\dot{\gamma}_{s}^{T})}ds=\inf_{\gamma:\gamma_{s_{k}}^{T}\to\gamma_{r_{k}}^{T}}\int_{0}^{t}\sqrt{G^{v}_{\gamma_{s}}(\dot{\gamma}_{s},\dot{\gamma}_{s})}ds.

The right-hand side is bounded above by supx,yIk|xy|supzIkGzv1/2\sup_{x,y\in I_{k}}|x-y|\sup_{z\in I_{k}}\|G^{v}_{z}\|^{1/2} because the path γ\gamma that is linear between γskT\gamma^{T}_{s_{k}} and \gamma_{r_{k}}^{T} is admissible. We can then conclude by Lemma 7.1 that (7.19) holds with XkX_{k} defined in (7.18) for some CC.

Inequality (7.19) implies that γT\gamma^{T} is in the set ΓX\Gamma_{X} defined in Lemma 7.3. So, Lemma 7.3 implies that

(7.20) supT>00t|max(Y(γsT),1)|2𝑑sΘ(γT)+1<.\sup_{T>0}\frac{\int_{0}^{t}|\max(Y(\gamma_{s}^{T}),1)|^{2}ds}{\Theta(\gamma^{T})+1}<\infty.

Because \sup_{s\in[0,t]}|\dot{\gamma}^{T}_{s}|\leq c^{-1/2},

(7.21) \Theta(\gamma^{T})=\int_{0}^{t}|\dot{\gamma}_{s}^{T}|ds\leq c^{-1/2}t=c^{-1/2}\mathcal{A}(\gamma^{T})=c^{-1/2}\mathcal{A}^{T}(v).

Since 𝒜(0,x)01gsx(x,x)𝑑s|x|01gsx1/2𝑑s\mathcal{A}(0,x)\leq\int_{0}^{1}\sqrt{g_{sx}(x,x)}ds\leq|x|\int_{0}^{1}\|g_{sx}\|^{1/2}ds, ergodicity of gg with respect to spatial shifts implies that lim supT𝒜T(v)T<\limsup_{T\to\infty}\frac{\mathcal{A}^{T}(v)}{T}<\infty with probability one. Thus,

(7.22) lim supTΘ(γT)T<\limsup_{T\to\infty}\frac{\Theta(\gamma^{T})}{T}<\infty

with probability one. Displays (7.22), (7.20), and (7.17) imply Lemma 7.4. ∎

7.3. Condition 5

Since 𝒜(0,x)=𝒜(x,0)\mathcal{A}(0,x)=\mathcal{A}(x,0) for all xdx\in\mathbb{R}^{d} and all ωΩ\omega\in\Omega, it suffices to prove (2.7). The latter is implied by the following lemma:

Lemma 7.5.

With probability one,

(7.23) supxd𝒜(0,x)|x|+1<.\sup_{x\in\mathbb{R}^{d}}\frac{\mathcal{A}(0,x)}{|x|+1}<\infty.
Proof.

We will apply Proposition 8.1. For xdx\in\mathbb{R}^{d} let γx𝒮0,x,|x|\gamma^{x}\in\mathcal{S}_{0,x,|x|} denote the path γsx=sx|x|.\gamma^{x}_{s}=\frac{sx}{|x|}. Let K(x)K(x) denote those kdk\in\mathbb{Z}^{d} such that γxIk\gamma^{x}\cap I_{k}\neq\emptyset. Define Xk=supzIk|Y(z)|1/2.X_{k}=\sup_{z\in I_{k}}|Y(z)|^{1/2}. The collection (Xk)kd(X_{k})_{k\in\mathbb{Z}^{d}} satisfies the conditions of Proposition 8.1 due to 22ii. Then,

𝒜(0,x)0|x|gγsx(γ˙sx,γ˙sx)𝑑s0|x|gsx|x|1/2𝑑skK(x)0|x|Xk𝟙sx|x|Ik𝑑sdkK(x)Xk.\mathcal{A}(0,x)\leq\int_{0}^{|x|}\sqrt{g_{\gamma^{x}_{s}}(\dot{\gamma}^{x}_{s},\dot{\gamma}^{x}_{s})}ds\leq\int_{0}^{|x|}\|g_{\frac{sx}{|x|}}\|^{1/2}ds\\ \leq\sum_{k\in K(x)}\int_{0}^{|x|}X_{k}\mathds{1}_{\frac{sx}{|x|}\in I_{k}}ds\leq\sqrt{d}\sum_{k\in K(x)}X_{k}.

In the last line we use the fact that 0|x|𝟙sx|x|Ik𝑑sd\int_{0}^{|x|}\mathds{1}_{\frac{sx}{|x|}\in I_{k}}ds\leq\sqrt{d} for all xdx\in\mathbb{R}^{d} and kd.k\in\mathbb{Z}^{d}. Also, there is C>0C>0 such that for all xd,x\in\mathbb{R}^{d}, |K(x)|C|x|+C|K(x)|\leq C|x|+C, where |K(x)||K(x)| denotes the number of elements in the finite set K(x).K(x).

supxd𝒜(0,x)|x|+1supxdCd|K(x)|kK(x)Xk.\sup_{x\in\mathbb{R}^{d}}\frac{\mathcal{A}(0,x)}{|x|+1}\leq\sup_{x\in\mathbb{R}^{d}}\frac{C\sqrt{d}}{|K(x)|}\sum_{k\in K(x)}X_{k}.

Note also that K(x)K(x) is a \ast-connected set as defined in Section 8. Proposition 8.1 implies that the right-hand side of the above is finite with probability one, and so our proof of Lemma 7.5 is complete. ∎
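The counting estimate |K(x)|\leq C|x|+C used above can be visualized with the following small Python sketch (an approximate enumeration of the cells met by the straight segment, based on sampling; the constants are not optimized): the ratio |K(x)|/(|x|+1) stays bounded as |x| grows.

import numpy as np

def cells_hit(x, n_samples=100_000):
    """Approximate K(x): unit cells I_k met by the segment s*x, s in [0,1]."""
    s = np.linspace(0.0, 1.0, n_samples)
    pts = np.outer(s, x)
    return {tuple(k) for k in np.floor(pts).astype(int)}

rng = np.random.default_rng(2)
d = 3
for r in (5, 10, 20, 40):
    x = r * rng.normal(size=d) / np.sqrt(d)
    K = cells_hit(x)
    print(round(float(np.linalg.norm(x)), 1), len(K),
          round(len(K) / (np.linalg.norm(x) + 1), 2))
# the last column stays bounded, consistent with |K(x)| <= C|x| + C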

7.4. Proof of Proposition 7.1

We will appeal to an abstract measurable selection lemma, which we first state below. See Section 8 for its proof. Recall that if YY is a Banach space, then the weak topology on YY refers to the topology induced by maps f:Yf:Y\to\mathbb{R} in the dual space Y.Y^{*}. A ball BB in a Banach space YY refers to a set of the form {yY:yy0Y<r}\{y\in Y\,:\,\|y-y_{0}\|_{Y}<r\} for y0Yy_{0}\in Y and r>0.r>0. The notation B¯\overline{B} refers to the closure of the ball.

Lemma 7.6.

Let (X,)(X,\mathcal{F}) be a measurable space and (Y,Y)(Y,\|\cdot\|_{Y}) be a separable Banach space. Suppose F:X×Y{}F:X\times Y\to\mathbb{R}\cup\{\infty\} satisfies the following.

  1. (1)

    For every xXx\in X and RR\in\mathbb{R} the set F1(x;R):={yY:F(x,y)R}F^{-1}(x;R):=\{y\in Y\,:\,F(x,y)\leq R\} is weakly compact and, additionally, the set {yY:F(x,y)<}\{y\in Y\,:\,F(x,y)<\infty\} is non-empty.

  2. (2)

    There is a countable subset 𝒢X\mathcal{G}\subset X such that for all balls BYB\subset Y, xXx\in X, R<R<\infty, and ϵ>0\epsilon>0 there is x=x(B,x,R,ϵ)𝒢x^{\prime}=x^{\prime}(B,x,R,\epsilon)\in\mathcal{G} such that

    |F(x,y)F(x,y)|<ϵ|F(x,y)-F(x^{\prime},y)|<\epsilon

    for all yB¯y\in\overline{B} satisfying either F(x,y)RF(x,y)\leq R or F(x,y)R.F(x^{\prime},y)\leq R.

  3. (3)

    For every yYy\in Y the map xF(x,y)x\mapsto F(x,y) is measurable.

Then, there is a measurable function f:(X,)(Y,)f:(X,\mathcal{F})\to(Y,\|\cdot\|) satisfying

(7.24) F(x,f(x))=infyYF(x,y)F(x,f(x))=\inf_{y\in Y}F(x,y)

for all xX.x\in X.

Proof of Proposition 7.1.

Let X0X_{0} be the space of those gCloc2(d;+d)g\in C^{2}_{loc}(\mathbb{R}^{d};\mathcal{M}^{d}_{+}) that satisfy gx(p,p)λ|p|2g_{x}(p,p)\geq\lambda|p|^{2} for all x,pd.x,p\in\mathbb{R}^{d}. We will apply Lemma 7.6 with X=d×d×X0X=\mathbb{R}^{d}\times\mathbb{R}^{d}\times X_{0} and Y={hL1([0,1];d):h=0}Y=\{h\in L^{1}([0,1];\mathbb{R}^{d})\,:\,\int h=0\} (equipped with the L1L^{1} norm). Given (x,y)d×d(x,y)\in\mathbb{R}^{d}\times\mathbb{R}^{d} and hYh\in Y we let γ=γ[x,y,h]𝒮x,y,1\gamma=\gamma[x,y,h]\in\mathcal{S}_{x,y,1} be given by γs=(1s)x+sy+0sh(r)𝑑r\gamma_{s}=(1-s)x+sy+\int_{0}^{s}h(r)dr. Note that the map (x,y,h)γ[x,y,h](x,y,h)\mapsto\gamma[x,y,h] is continuous and γ˙s[x,y,h]=yx+hs\dot{\gamma}_{s}[x,y,h]=y-x+h_{s}.

Let us define the “energy functional” F:X×Y{}F:X\times Y\to\mathbb{R}\cup\{\infty\} by

F(x,y,g,h)=01gγs(γ˙s,γ˙s)𝑑s,F(x,y,g,h)=\int_{0}^{1}g_{\gamma_{s}}(\dot{\gamma}_{s},\dot{\gamma}_{s})ds,

where γ=γ[x,y,h]\gamma=\gamma[x,y,h].

We claim that to prove part (1) of Proposition 7.1, it suffices to prove the existence of a measurable selection f:XYf:X\to Y satisfying

(7.25) F(x,y,g,f(x,y,g))=infhYF(x,y,g,h).F(x,y,g,f(x,y,g))=\inf_{h\in Y}F(x,y,g,h).

To see this, we first let γ~\tilde{\gamma} denote the mapping (x,y,g)γ[x,y,f(x,y,g)](x,y,g)\mapsto{\gamma}[x,y,f(x,y,g)], which is also measurable if ff is. Suppose (x,y,g)X(x,y,g)\in X. By Lemma 2.3 in Chapter 9 of [dC92], the energy minimizing path γ~=γ~(x,y,g)\tilde{\gamma}=\tilde{\gamma}(x,y,g) has constant speed: gγ~(γ~˙,γ~˙)cg_{\tilde{\gamma}}(\dot{\tilde{\gamma}},\dot{\tilde{\gamma}})\equiv c for some c>0,c>0, and the path γ=γ(x,y,g)𝒮\gamma^{*}=\gamma^{*}(x,y,g)\in\mathcal{S} defined by γs=γ~c1/2s\gamma^{*}_{s}=\tilde{\gamma}_{c^{-1/2}s} minimizes

\int_{0}^{t}\sqrt{g_{\psi_{s}}(\dot{\psi}_{s},\dot{\psi}_{s})}ds,\quad t>0,\,\psi\in\mathcal{S}_{x,y,t}.

Additionally, γ\gamma^{*} has unit Riemannian speed: gγ(γ˙,γ˙)1g_{\gamma^{*}}(\dot{\gamma}^{*},\dot{\gamma}^{*})\equiv 1. Due to measurability of the map ωg,ω\omega\mapsto g_{\cdot,\omega} and measurability of the reparametrization operation γ~γ\tilde{\gamma}\mapsto\gamma^{*}, the mapping (x,y,ω)γ(x,y,ω)(x,y,\omega)\mapsto\gamma^{*}(x,y,\omega) is measurable and satisfies (7.1). This proves our claim that it suffices to find ff satisfying (7.25).

To prove existence of a measurable selection f:XYf:X\to Y satisfying (7.25), we will use Lemma 7.6.

First we verify condition (1) of Lemma 7.6. Because the map pgx(p,p)p\mapsto g_{x}(p,p) is convex and nonnegative, Corollary 3.24 of [Dac07] implies that the map γ01gγs(γ˙s,γ˙s)𝑑s\gamma\mapsto\int_{0}^{1}g_{\gamma_{s}}(\dot{\gamma}_{s},\dot{\gamma}_{s})ds is weakly lower semicontinuous in W1,1([0,1];d).W^{1,1}([0,1];\mathbb{R}^{d}). Additionally, by condition 3, the set {hY:F(x,y,g,h)R}\{h\in Y\,:\,F(x,y,g,h)\leq R\} is bounded in L2L^{2} norm and thus is contained in a weakly compact set. It follows that {hY:F(x,y,g,h)R}\{h\in Y\,:\,F(x,y,g,h)\leq R\} is weakly compact for every R>0.R>0.

The set {hY:F(x,y,g,h)<}\{h\in Y\,:\,F(x,y,g,h)<\infty\} is nonempty since it contains the function h0h\equiv 0. This completes the verification of condition (1) of Lemma 7.6.

Now we establish condition 2 of Lemma 7.6. Since the space X0X_{0} is separable, we can find a countable subset 𝒢0\mathcal{G}_{0} satisfying

(7.26) infg𝒢0supxKggC2,x=0\inf_{g\in\mathcal{G}_{0}}\sup_{x\in K}\|g^{\prime}-g\|_{C^{2},x}=0

for all gX0g^{\prime}\in X_{0} and all compact sets Kd.K\subset\mathbb{R}^{d}. Then, let the set 𝒢\mathcal{G} in Lemma 7.6 be d×d×𝒢0.\mathbb{Q}^{d}\times\mathbb{Q}^{d}\times\mathcal{G}_{0}.

Fix (x,y)d×d(x,y)\in\mathbb{R}^{d}\times\mathbb{R}^{d}, gX0,g\in X_{0}, and BB an open ball in the Banach space YY. There is a constant CC depending on x,y,Bx,y,B such that for all hBh\in B, Θ(γ[x,y,h])|xy|+hL1C.\Theta(\gamma[x,y,h])\leq|x-y|+\|h\|_{L^{1}}\leq C. So, there is a compact set K(x,y,B)dK(x,y,B)\subset\mathbb{R}^{d} such that

(7.27) γs[x,y,h]K(x,y,B),hB¯,s[0,1].\gamma_{s}[x,y,h]\in K(x,y,B),\quad\forall h\in\overline{B},\forall s\in[0,1].

Also, 3 implies that for all (x,y,g)X(x,y,g)\in X and hYh\in Y,

(7.28) hL2|xy|+γ˙L2|xy|+λ1/2F(x,y,g,h)1/2.\|h\|_{L^{2}}\leq|x-y|+\|\dot{\gamma}\|_{L^{2}}\leq|x-y|+\lambda^{-1/2}F(x,y,g,h)^{1/2}.

In the computation below, we let hB,h\in B, (x,y,g),(x,y,g)X(x,y,g),(x^{\prime},y^{\prime},g^{\prime})\in X, γ=γ[x,y,h]\gamma=\gamma[x,y,h] and γ=γ[x,y,h]\gamma^{\prime}=\gamma[x^{\prime},y^{\prime},h]. Then,

|F(x,y,g,\displaystyle|F(x^{\prime},y^{\prime},g^{\prime}, h)F(x,y,g,h)|01|gγs(γ˙s,γ˙s)gγs(γ˙s,γ˙s)|ds\displaystyle h)-F(x,y,g,h)|\leq\int_{0}^{1}|g_{\gamma^{\prime}_{s}}^{\prime}(\dot{\gamma}^{\prime}_{s},\dot{\gamma}^{\prime}_{s})-g_{\gamma_{s}}(\dot{\gamma}_{s},\dot{\gamma}_{s})|ds
(7.29) 01|gγs(γ˙sγ˙s,γ˙s)|+|gγs(γ˙s,γ˙sγ˙s)|+|(gγsgγs)(γ˙s,γ˙s)|ds.\displaystyle\leq\int_{0}^{1}|g_{\gamma^{\prime}_{s}}^{\prime}(\dot{\gamma}^{\prime}_{s}-\dot{\gamma}_{s},\dot{\gamma}^{\prime}_{s})|+|g^{\prime}_{\gamma^{\prime}_{s}}(\dot{\gamma}_{s},\dot{\gamma}_{s}^{\prime}-\dot{\gamma}_{s})|+|(g_{\gamma_{s}^{\prime}}^{\prime}-g_{\gamma_{s}})(\dot{\gamma}_{s},\dot{\gamma}_{s})|ds.

Let R<R<\infty and suppose either F(x,y,g,h)RF(x^{\prime},y^{\prime},g^{\prime},h)\leq R or F(x,y,g,h)RF(x,y,g,h)\leq R. By (7.28), hL2max(|xy|,|xy|)+λ1/2R1/2\|h\|_{L^{2}}\leq\max(|x-y|,|x^{\prime}-y^{\prime}|)+\lambda^{-1/2}R^{1/2} and so γ˙L2\|\dot{\gamma}\|_{L^{2}} and γ˙L2\|\dot{\gamma}^{\prime}\|_{L^{2}} are both bounded by a:=2max(|xy|,|xy|)+λ1/2R1/2.a:=2\max(|x-y|,|x^{\prime}-y^{\prime}|)+\lambda^{-1/2}R^{1/2}.

Note that γ˙sγ˙s=(xx)+(yy)\dot{\gamma}^{\prime}_{s}-\dot{\gamma}_{s}=(x-x^{\prime})+(y^{\prime}-y). We can bound the first and second terms in the right-hand side of (7.29) by

sups[0,1]gγsγ˙γ˙L2max(γ˙L2,γ˙L2)a(|xx|+|yy|)supxKgx,\displaystyle\sup_{s\in[0,1]}\|g^{\prime}_{\gamma^{\prime}_{s}}\|\|\dot{\gamma}^{\prime}-\dot{\gamma}\|_{L^{2}}\max(\|\dot{\gamma}^{\prime}\|_{L^{2}},\|\dot{\gamma}\|_{L^{2}})\leq a(|x-x^{\prime}|+|y-y^{\prime}|)\sup_{x\in K}\|g^{\prime}_{x}\|,

where KK is defined as the union of K(x,y,B)K(x,y,B) and K(x,y,B)K(x^{\prime},y^{\prime},B) from (7.27).

We can bound the third term of (7.29) by

sups[0,1]gγs\displaystyle\sup_{s\in[0,1]}\|g^{\prime}_{\gamma_{s}^{\prime}} gγsγ˙L22(supxKgxgx+supxKgC1,xγ˙γ˙L)γ˙L22\displaystyle-g_{\gamma_{s}}\|\|\dot{\gamma}\|_{L^{2}}^{2}\leq\Big{(}\sup_{x\in K}\|g^{\prime}_{x}-g_{x}\|+\sup_{x\in K}\|g\|_{C^{1},x}\|\dot{\gamma}^{\prime}-\dot{\gamma}\|_{L^{\infty}}\Big{)}\|\dot{\gamma}\|_{L^{2}}^{2}
a2(supxKgxgx+supxKgC1,x(|xx|+|yy|)).\displaystyle\leq a^{2}\Big{(}\sup_{x\in K}\|g^{\prime}_{x}-g_{x}\|+\sup_{x\in K}\|g\|_{C^{1},x}(|x-x^{\prime}|+|y-y^{\prime}|)\Big{)}.

Fix now R>0,R>0, x,ydx,y\in\mathbb{R}^{d}, an open ball BY,B\subset Y, and ϵ>0.\epsilon>0. By choosing x,ydx^{\prime},y^{\prime}\in\mathbb{Q}^{d} sufficiently close to xx and yy, respectively, and g𝒢0g^{\prime}\in\mathcal{G}_{0} such that supxKggC2,x\sup_{x\in K}\|g^{\prime}-g\|_{C^{2},x} is sufficiently small, we can guarantee that (7.29) is less than ϵ\epsilon for all h\in\overline{B} satisfying either F(x,y,g,h)RF(x^{\prime},y^{\prime},g^{\prime},h)\leq R or F(x,y,g,h)R.F(x,y,g,h)\leq R. This implies 2 of Lemma 7.6.

Condition 3 is satisfied because for every fixed h\in Y the mapping (x,y,g)\mapsto F(x,y,g,h)=\int_{0}^{1}g_{\gamma_{s}}(\dot{\gamma}_{s},\dot{\gamma}_{s})ds, with \gamma=\gamma[x,y,h], is continuous, hence measurable, and so the proof is complete. ∎

8. Proofs of Lemmas from Section 7

Proof of Lemma 7.1.

For every i,j=1,,d,i,j=1,\dots,d, and ydy\in\mathbb{R}^{d}

(8.1) wiΞvwy=v,y|v|2ei,wi,wjΞvwy=0.\partial_{w_{i}}\Xi_{v\to w}y=\frac{\langle v,y\rangle}{|v|^{2}}e_{i},\quad\partial_{w_{i},w_{j}}\Xi_{v\to w}y=0.

In particular, wiΞvw\|\partial_{w_{i}}\Xi_{v\to w}\| and wi,wjΞvw\|\partial_{w_{i},w_{j}}\Xi_{v\to w}\| are bounded by some constant depending only on vv. Additionally, Ξvw\|\Xi_{v\to w}\| itself is bounded uniformly for ww in a neighborhood of vv.

Using the product rule for matrices, we can derive

wiGxw\displaystyle\|\partial_{w_{i}}G^{w}_{x}\| 2ΞvwwiΞvwgxw,v+Ξvw2wigxw,v\displaystyle\leq 2\|\Xi_{v\to w}\|\|\partial_{w_{i}}\Xi_{v\to w}\|\|g^{w,v}_{x}\|+\|\Xi_{v\to w}\|^{2}\|\partial_{w_{i}}g^{w,v}_{x}\|
C3max(gxw,v,wigxw,v)\displaystyle\leq C_{3}\max(\|g_{x}^{w,v}\|,\|\partial_{w_{i}}g^{w,v}_{x}\|)
(8.2) C4Y(x)\displaystyle\leq C_{4}Y(x)

for |wv|δ|w-v|\leq\delta, with δ\delta as in 2(2i). Similarly,

wjwiGxw\displaystyle\|\partial_{w_{j}w_{i}}G^{w}_{x}\| C5max(gxw,v,wigxw,v,wjwigxw,v)\displaystyle\leq C_{5}\max(\|g^{w,v}_{x}\|,\|\partial_{w_{i}}g^{w,v}_{x}\|,\|\partial_{w_{j}w_{i}}g^{w,v}_{x}\|)
(8.3) C6Y(x).\displaystyle\leq C_{6}Y(x).

Displays (8.2) and (8.3) complete our proof of (7.5).

To see (7.6) simply note that gw,vg^{w,v} itself is uniformly positive definite, and Ξvw\Xi_{v\to w} is invertible for every wv+Hw\in v+H by Lemma 3.1. So, for every wv+Hw\in v+H there is c(w)>0c(w)>0 such that for all x,pd,x,p\in\mathbb{R}^{d},

Gxw(p,p)=gxw,v(Ξvwp,Ξvwp)λ|Ξvwp|2λc(w)|p|2.G^{w}_{x}(p,p)=g^{w,v}_{x}(\Xi_{v\to w}p,\Xi_{v\to w}p)\geq\lambda|\Xi_{v\to w}p|^{2}\geq\lambda c(w)|p|^{2}.

The constant c(w)c(w) can be chosen as the square of the smallest singular value of Ξvw\Xi_{v\to w}. Because the map wΞvww\mapsto\Xi_{v\to w} is continuous, there is a c>0c>0 such that G^{w}_{x}(p,p)\geq c|p|^{2} for all x,pdx,p\in\mathbb{R}^{d} and wv+Hw\in v+H satisfying |vw|1,|v-w|\leq 1, and so the lemma is proved. ∎

To prove Lemma 7.3 we use a greedy lattice animal estimate implied by the results in [CGGK93] and [Mar02]. We can consider d\mathbb{Z}^{d} as a graph where xx is connected to yy whenever

maxi=1,,d|xiyi|=1.\max_{i=1,\dots,d}|x_{i}-y_{i}|=1.

We say that AdA\subset\mathbb{Z}^{d} is \ast-connected whenever it is connected as a subgraph of the aforementioned graph. We let 𝒞(n)\mathcal{C}(n) denote the set of all \ast-connected subsets of d\mathbb{Z}^{d} with nn elements containing the origin.

Proposition 8.1.

Let (Xk)kd(X_{k})_{k\in\mathbb{Z}^{d}} be a collection of nonnegative random variables obeying the following conditions:

  1. (1)

    (Xk)kd(X_{k})_{k\in\mathbb{Z}^{d}} is stationary with respect to lattice shifts, meaning for every ad,a\in\mathbb{Z}^{d}, (Xk+a)kd(X_{k+a})_{k\in\mathbb{Z}^{d}} is equal in distribution to (Xk)kd(X_{k})_{k\in\mathbb{Z}^{d}}.

  2. (2)

    (Xk)kd(X_{k})_{k\in\mathbb{Z}^{d}} has finite range dependence on the lattice.

  3. (3)

    0(1F(x))1/d𝑑x<,\int_{0}^{\infty}(1-F(x))^{1/d}dx<\infty, where FF is the cdf of X0X_{0}.

Then, with probability one,

(8.4) supn1nmaxA𝒞(n)kAXk<.\sup_{n\in\mathbb{N}}\frac{1}{n}\max_{A\in\mathcal{C}(n)}\sum_{k\in A}X_{k}<\infty.
Remark 11.

As remarked in [Mar02], if 𝔼|X0|β<\mathbb{E}|X_{0}|^{\beta}<\infty for some β>d,\beta>d, then condition 3 is satisfied: 0(1F(x))1/d𝑑x<\int_{0}^{\infty}(1-F(x))^{1/d}dx<\infty.

Proof.

We will use Theorem 1.1 in [Mar02]. There are two differences between our setting and that of [Mar02]. First, in [Mar02], two nodes x,ydx,y\in\mathbb{Z}^{d} are connected by an edge if

i=1d|xiyi|=1.\sum_{i=1}^{d}|x_{i}-y_{i}|=1.

We call BdB\subset\mathbb{Z}^{d} 1\ell^{1}-connected if it is connected with respect to this 1\ell^{1} graph structure. We denote by 𝒞1(n)\mathcal{C}_{1}(n) the set of all 1\ell^{1}-connected subsets of d\mathbb{Z}^{d} of size at most nn containing the origin. If AA is a \ast-connected subset of d\mathbb{Z}^{d} of size nn, then there is an 1\ell^{1}-connected subset BdB\subset\mathbb{Z}^{d} of size at most 2dn2^{d}n such that ABA\subset B. For instance, one can take B=A+\{0,1\}^{d}, the set of all points a+ua+u with aAa\in A and u{0,1}du\in\{0,1\}^{d}. Since Xk0X_{k}\geq 0, this argument shows that

\max_{A\in\mathcal{C}(n)}\sum_{k\in A}X_{k}\leq\max_{A\in\mathcal{C}_{1}(2^{d}n)}\sum_{k\in A}X_{k},

so it suffices to prove (8.4) for the case where 𝒞(n)\mathcal{C}(n) is replaced by 𝒞1(n)\mathcal{C}_{1}(n) defined via 1\ell^{1}-connectedness, the graph structure considered in [Mar02].

The second difference with Theorem 1.1 in [Mar02] is that in our case the random variables (Xx)xd(X_{x})_{x\in\mathbb{Z}^{d}} are not independent but rather have finite range of dependence. However, we can reduce the problem to the i.i.d. case. Let KK\in\mathbb{N} be an upper bound on the dependence range of (Xk)kd(X_{k})_{k\in\mathbb{Z}^{d}} and let MK=[0,K)dd.M_{K}=[0,K)^{d}\cap\mathbb{Z}^{d}.

For kMKk\in M_{K}, we can regard

Ek:=k+Kd={k+Kx:xd}E_{k}:=k+K\mathbb{Z}^{d}=\{k+Kx\,:\,x\in\mathbb{Z}^{d}\}

as a graph isomorphic to d\mathbb{Z}^{d} with 1\ell^{1} nearest neighbor edges. Additionally,

kMKEk=d.\bigcup_{k\in M_{K}}E_{k}=\mathbb{Z}^{d}.

The i.i.d. family (Xx)xEk(X_{x})_{x\in E_{k}} satisfies the requirements of [Mar02]. Let 𝒞1(n,k)\mathcal{C}_{1}(n,k) denote the set of 1\ell^{1}-connected subsets of EkE_{k} of size at most nn containing k.k. We claim that for each kMKk\in M_{K} and A\in\mathcal{C}_{1}(n) there exists a set Fk(A)𝒞1(n,k)F_{k}(A)\in\mathcal{C}_{1}(n,k) satisfying

(8.5) AEkFk(A),A𝒞1(n).A\cap E_{k}\subset F_{k}(A),\quad\forall A\in\mathcal{C}_{1}(n).

If xdx\in\mathbb{Z}^{d}, then we can write x=k+Kyx=k+Ky for some (unique) kMKk\in M_{K} and ydy\in\mathbb{Z}^{d}. Let R(x)R(x) denote the set MK+KyM_{K}+Ky. Define the map FkF_{k} in the following way:

(8.6) Fk(A)={xEk:R(x)A}.F_{k}(A)=\{x\in E_{k}\,:\,R(x)\cap A\neq\emptyset\}.

Now suppose A𝒞1(n)A\in\mathcal{C}_{1}(n). The fact that

\#\{x\in E_{k}\,:\,R(x)\cap A\neq\emptyset\}\leq\sum_{x\in E_{k}}|A\cap R(x)|=|A|

implies that |F_{k}(A)|\leq n. Now let x,x^{\prime}\in F_{k}(A). Then, there are z\in A\cap R(x) and z^{\prime}\in A\cap R(x^{\prime}) and an \ell^{1}-connected path (w_{0},\dots,w_{m}) in A connecting z to z^{\prime}. If w^{\prime}_{i}\in E_{k} for i=0,\dots,m are such that w_{i}\in R(w^{\prime}_{i}), then (w^{\prime}_{i})_{i} is an \ell^{1}-connected path in E_{k} and w^{\prime}_{i}\in F_{k}(A) for each i. Additionally, w_{0}^{\prime}=x and w_{m}^{\prime}=x^{\prime}. It follows that x and x^{\prime} are connected by a path (considered as a subset of the graph E_{k}) in F_{k}(A)\cap E_{k} and so the set F_{k}(A) is connected as a subset of E_{k}. Thus, F_{k}(A)\in\mathcal{C}_{1}(n,k).

We have

maxA𝒞1(n)xAXx\displaystyle\max_{A\in\mathcal{C}_{1}(n)}\sum_{x\in A}X_{x} =maxA𝒞1(n)kMKxAEkXx\displaystyle=\max_{A\in\mathcal{C}_{1}(n)}\sum_{k\in M_{K}}\sum_{x\in A\cap E_{k}}X_{x}
maxA𝒞1(n)kMKxFk(A)XxkMKmaxB𝒞1(n,k)xBXx.\displaystyle\leq\max_{A\in\mathcal{C}_{1}(n)}\sum_{k\in M_{K}}\sum_{x\in F_{k}(A)}X_{x}\leq\sum_{k\in M_{K}}\max_{B\in\mathcal{C}_{1}(n,k)}\sum_{x\in B}X_{x}.

Theorem 1.1 in [Mar02] directly implies that

supn1nmaxB𝒞1(n,k)xBXx<\sup_{n\in\mathbb{N}}\frac{1}{n}\max_{B\in\mathcal{C}_{1}(n,k)}\sum_{x\in B}X_{x}<\infty

for each kMKk\in M_{K} and so Proposition 8.1 follows. ∎
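For very small animal sizes, the quantity in (8.4) can be computed by brute force. The following Python sketch (i.i.d. Exp(1) weights, d=2, sizes n\leq 5 only; an illustration, not a proof device) enumerates the \ast-connected animals containing the origin and prints the normalized maxima, which stay bounded.

import random

random.seed(3)
X = {}                                           # lazily sampled i.i.d. Exp(1) weights

def weight(k):
    if k not in X:
        X[k] = random.expovariate(1.0)
    return X[k]

def star_neighbors(k):
    x, y = k
    return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def best(n):
    """Max over *-connected animals of size n containing (0,0) of the weight sum."""
    record = 0.0

    def grow(animal, frontier):
        nonlocal record
        if len(animal) == n:
            record = max(record, sum(weight(k) for k in animal))
            return
        for k in sorted(frontier):
            grow(animal | {k},
                 (frontier | set(star_neighbors(k))) - animal - {k})

    grow(frozenset({(0, 0)}), set(star_neighbors((0, 0))))
    return record

for n in range(1, 6):
    print(n, best(n) / n)        # normalized maxima stay bounded, cf. (8.4)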

Proof of Lemma 7.3.

We will discretize our path and use Proposition 8.1. Let

Zk=supxIkZ(x),Z_{k}=\sup_{x\in I_{k}}Z(x),
χk={1,s s.t. γsIk,0,otherwise,\chi_{k}=\begin{cases}1,&\exists s\text{ s.t. }\gamma_{s}\in I_{k},\\ 0,&\text{otherwise},\end{cases}

and χ(γ)={kd:χk=1}\chi(\gamma)=\{k\in\mathbb{Z}^{d}\,:\,\chi_{k}=1\} for γ𝒮.\gamma\in\mathcal{S}. If γΓX,\gamma\in\Gamma_{X}, then

(8.7) \int_{0}^{t}Z(\gamma_{s})ds\leq\sum_{k\in\mathbb{Z}^{d}}Z_{k}\int_{0}^{t}\mathds{1}_{\gamma_{s}\in I_{k}}ds=\sum_{k\in\mathbb{Z}^{d}}Z_{k}\tau_{I_{k}}(\gamma)
kdZkXkχkkχ(γ)Zk2kχ(γ)Xk2.\displaystyle\leq\sum_{k\in\mathbb{Z}^{d}}Z_{k}X_{k}\chi_{k}\leq\sqrt{\sum_{k\in\chi(\gamma)}Z_{k}^{2}\sum_{k\in\chi(\gamma)}X_{k}^{2}}.

Also, the collections (Zk2)kd(Z_{k}^{2})_{k\in\mathbb{Z}^{d}} and (Xk2)kd(X_{k}^{2})_{k\in\mathbb{Z}^{d}} both satisfy the conditions of Proposition 8.1. Also, note that by continuity of γ\gamma, χ(γ)\chi(\gamma) is a \ast-connected set in d\mathbb{Z}^{d} that contains the origin. In particular, χ(γ)𝒞(|χ(γ)|)\chi(\gamma)\in\mathcal{C}(|\chi(\gamma)|). Proposition 8.1 implies that almost surely

(8.8) supnmaxS𝒞(n)1n2kSZk2kSXk2<\sup_{n\in\mathbb{N}}\max_{S\in\mathcal{C}(n)}\frac{1}{n^{2}}\sum_{k\in S}Z_{k}^{2}\sum_{k\in S}X_{k}^{2}<\infty

So, almost surely

(8.9) supγ𝒮0,,1|χ(γ)|2kχ(γ)Zk2kχ(γ)Xk2<.\sup_{\gamma\in\mathcal{S}_{0,\ast,\ast}}\frac{1}{|\chi(\gamma)|^{2}}\sum_{k\in\chi(\gamma)}Z_{k}^{2}\sum_{k\in\chi(\gamma)}X_{k}^{2}<\infty.

Following the argument of Lemma 4.5 in [BD23a], one can show that there is C>0C>0 such that for all paths γ𝒮0,,\gamma\in\mathcal{S}_{0,\ast,\ast},

(8.10) |χ(γ)|CΘ(γ)+C.|\chi(\gamma)|\leq C\Theta(\gamma)+C.

The claim then follows by combining (8.7), (8.9), and (8.10). ∎
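The geometric input (8.10) can also be checked empirically. The following Python sketch (random polygonal paths in the plane, with cells detected by sampling points along each segment; the constant is not optimized) shows that the number of unit cells visited by a path grows at most linearly in its Euclidean length.

import numpy as np

rng = np.random.default_rng(5)

def visited_cells_and_length(n_steps, samples_per_step=200):
    """chi(gamma) and Theta(gamma) for a random polygonal path started at 0."""
    steps = rng.normal(size=(n_steps, 2))
    nodes = np.vstack([np.zeros(2), np.cumsum(steps, axis=0)])
    length = float(np.sum(np.linalg.norm(np.diff(nodes, axis=0), axis=1)))
    s = np.linspace(0.0, 1.0, samples_per_step)
    pts = np.concatenate([a + s[:, None] * (b - a)
                          for a, b in zip(nodes[:-1], nodes[1:])])
    cells = {tuple(c) for c in np.floor(pts).astype(int)}
    return len(cells), length

for n in (5, 20, 80):
    n_cells, length = visited_cells_and_length(n)
    print(n, n_cells, round(length, 1), round(n_cells / (length + 1), 2))
# the last column stays bounded, consistent with (8.10)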

Proof of Lemma 7.6.

First, note that the Eberlein–Šmulian theorem (see Theorem 13.1 in Chapter V of [Con10]) implies that weak compactness is equivalent to weak sequential compactness in a Banach space. Thus, condition 1 of Lemma 7.6 implies that F(x,)F(x,\cdot) is weakly sequentially lower semicontinuous for all xXx\in X.

Consider the set-valued function

Ψ:X\displaystyle\Psi:X 𝒫(Y)\displaystyle\to\mathcal{P}(Y)
x\displaystyle\ x {yY:F(x,y)=infyYF(x,y)},\displaystyle\mapsto\{y\in Y\,:\,F(x,y)=\inf_{y^{\prime}\in Y}F(x,y^{\prime})\},

where 𝒫(Y)\mathcal{P}(Y) is the power set of YY. We will apply the Kuratowski–Ryll-Nardzewski (KRN) Selection Theorem (see Theorem 18.13 in [AB06]). If the conditions of this theorem are met, then there is a measurable map f:XYf:X\to Y such that f(x)Ψ(x)f(x)\in\Psi(x) for all xXx\in X, which is equivalent to (7.24). First note that as a separable Banach space YY is also a Polish space, one condition of the KRN theorem. We must additionally verify that the map Ψ\Psi takes values in closed, nonempty sets and satisfies a set-valued measurability condition known as being weakly measurable.

First we verify that Ψ(x)\Psi(x) is nonempty for all xX.x\in X. Let I(x)=infyF(x,y)<.I(x)=\inf_{y}F(x,y)<\infty. Take a sequence (yn)n(y_{n})_{n\in\mathbb{N}} such that F(x,yn)I(x)F(x,y_{n})\to I(x). The sequence (yn)n(y_{n})_{n\in\mathbb{N}} has a subsequence converging weakly to some yYy^{*}\in Y because the set {y:F(x,y)I(x)+1}\{y\,:\,F(x,y)\leq I(x)+1\} is weakly sequentially compact. Since FF is weakly sequentially lower semicontinuous, F(x,y)=I(x)F(x,y^{*})=I(x), and so in fact yΨ(x)y^{*}\in\Psi(x).

The fact that Ψ\Psi takes value in closed sets follows directly from weak sequential lower semicontinuity of FF. Indeed, if (yn)n(y_{n})_{n\in\mathbb{N}} is a sequence such that ynΨ(x)y_{n}\in\Psi(x) and ynyy_{n}\to y, then F(x,y)I(x)F(x,y)\leq I(x), which implies yΨ(x)y\in\Psi(x).

Now we prove that the map Ψ\Psi is weakly measurable. Weak measurability means that for every open set UY,U\subset Y, the set

U_{\Psi^{-1}}:=\{x\in X\,:\,\Psi(x)\cap U\neq\emptyset\}

is measurable in XX. Let 𝒜𝒫(Y)\mathcal{A}\subset\mathcal{P}(Y) denote a countable basis of open balls in YY. For x𝒢x\in\mathcal{G} (where 𝒢\mathcal{G} is as in condition 2 of the lemma) and B𝒜B\in\mathcal{A}, define

Ix,B:=inf{F(x,y):yB}.I_{x,B}:=\inf\{F(x,y)\,:\,y\in B\}.

If I_{x,B}<\infty, then there exists a y_{x,B}\in\overline{B} satisfying F(x,y_{x,B})=I_{x,B}. Indeed, \overline{B} is convex and strongly closed and thus weakly closed, so the set \{y^{\prime}\in\overline{B}\,:\,F(x,y^{\prime})\leq I_{x,B}+1\} is weakly compact; since F(x,\cdot) is weakly lower semicontinuous, the existence of y_{x,B} follows as soon as I_{x,B}<\infty. For B\in\mathcal{A}, let \mathcal{G}_{B} be those x\in\mathcal{G} such that I_{x,B}<\infty, and let Y_{\mathcal{G},\mathcal{A}}:=\{y_{x,B}\,:\,B\in\mathcal{A},\,x\in\mathcal{G}_{B}\}, a countable subset of Y. For U\subset Y open, let \mathcal{A}_{U} denote those sets B\in\mathcal{A} such that \overline{B}\subset U. We claim that

(8.11) UΨ1=B𝒜Uk=1x𝒢ByY𝒢,𝒜C(k,yx,B,y)U_{\Psi^{-1}}=\bigcup_{B\in\mathcal{A}_{U}}\bigcap_{k=1}^{\infty}\bigcup_{x^{\prime}\in\mathcal{G}_{B}}\bigcap_{y^{\prime}\in Y_{\mathcal{G},\mathcal{A}}}C(k,y_{x^{\prime},B},y^{\prime})

where

(8.12) C(k,y,y):={xX:F(x,y)F(x,y)+1/k}.C(k,y,y^{\prime}):=\{x\in X\,:\,F(x,y)\leq F(x,y^{\prime})+1/k\}.

For every yYy\in Y the map xF(x,y)x\mapsto F(x,y) is measurable, and so C(k,y,y)C(k,y,y^{\prime}) is measurable for each kk\in\mathbb{N} and y,yY.y,y^{\prime}\in Y. It follows that if (8.11) holds, then UΨ1U_{\Psi^{-1}} is measurable, and we can conclude that a measurable selection exists. We will now prove that the equality (8.11) holds.

First we will prove the forward inclusion of (8.11). Let xx be in UΨ1U_{\Psi^{-1}}. By definition there is yUy^{*}\in U such that F(x,y)=I(x)F(x,y^{*})=I(x). Since UU is open, we can find B𝒜UB\in\mathcal{A}_{U} such that yBy^{*}\in B. By condition 2, for all kk\in\mathbb{N} there exists xk𝒢x_{k}\in\mathcal{G} such that for all yB¯y\in\overline{B} satisfying either F(x,y)I(x)+1F(x,y)\leq I(x)+1 or F(xk,y)I(x)+1,F(x_{k},y)\leq I(x)+1,

(8.13) |F(xk,y)F(x,y)|<12k.|F(x_{k},y)-F(x,y)|<\frac{1}{2k}.

Since F(x,y)=I(x),F(x,y^{*})=I(x), we must have F(xk,y)I(x)+1/(2k)F(x_{k},y^{*})\leq I(x)+1/(2k) and so in particular xk𝒢B.x_{k}\in\mathcal{G}_{B}. Also, F(xk,yxk,B)I(x)+1/(2k)F(x_{k},y_{x_{k},B})\leq I(x)+1/(2k) by minimality of yxk,By_{x_{k},B}. So, (8.13) and minimality conditions of yxk,By_{x_{k},B} and yy^{*} imply

F(x,yxk,B)\displaystyle F(x,y_{x_{k},B}) F(xk,yxk,B)+12k\displaystyle\leq F(x_{k},y_{x_{k},B})+\frac{1}{2k}
F(xk,y)+12k\displaystyle\leq F(x_{k},y^{*})+\frac{1}{2k}
F(x,y)+1k\displaystyle\leq F(x,y^{*})+\frac{1}{k}
F(x,y)+1k\displaystyle\leq F(x,y^{\prime})+\frac{1}{k}

for all yY.y^{\prime}\in Y. We conclude that xx is in the right-hand side of (8.11) because there is B𝒜UB\in\mathcal{A}_{U} such that for all kk\in\mathbb{N} there is xk𝒢x_{k}\in\mathcal{G}_{\mathcal{B}} such that F(x,yxk,B)F(x,y)+1/kF(x,y_{x_{k},B})\leq F(x,y^{\prime})+1/k for all yY𝒢,𝒜.y^{\prime}\in Y_{\mathcal{G},\mathcal{A}}. This implies the forward inclusion in (8.11). Note that the sequence yk=yxk,B,y_{k}=y_{x_{k},B}, kk\in\mathbb{N}, satisfies

(8.14) limkF(x,yk)=I(x).\lim_{k\to\infty}F(x,y_{k})=I(x).

Now suppose

x\in\bigcup_{B\in\mathcal{A}_{U}}\bigcap_{k=1}^{\infty}\bigcup_{x^{\prime}\in\mathcal{G}_{B}}\bigcap_{y^{\prime}\in Y_{\mathcal{G},\mathcal{A}}}C(k,y_{x^{\prime},B},y^{\prime}).

We wish to show that there exists a yUy^{*}\in U such that F(x,y)=I(x).F(x,y^{*})=I(x). Let B𝒜UB\in\mathcal{A}_{U} be such that for all kk\in\mathbb{N}, there is xk𝒢Bx_{k}\in\mathcal{G}_{B} such that

(8.15) F(x,yxk,B)F(x,y)+1/kF(x,y_{x_{k},B})\leq F(x,y^{\prime})+1/k

for all yY𝒢,𝒜.y^{\prime}\in Y_{\mathcal{G},\mathcal{A}}.

We first claim that there is yY𝒢,𝒜y^{\prime}\in Y_{\mathcal{G},\mathcal{A}} such that F(x,y)<.F(x,y^{\prime})<\infty. Indeed, I(x)<I(x)<\infty by assumption, and there is a sequence (yk)kY𝒢,𝒜(y_{k})_{k\in\mathbb{N}}\subset Y_{\mathcal{G},\mathcal{A}} satisfying (8.15) and so there exists such a yY𝒢,𝒜.y^{\prime}\in Y_{\mathcal{G},\mathcal{A}}. Since xC(k,yxk,B,y)x\in C(k,y_{x_{k},B},y^{\prime}), it follows that F(x,yxk,B)RF(x,y_{x_{k},B})\leq R for all kk\in\mathbb{N}, where R:=F(x,y)+1R:=F(x,y^{\prime})+1.

The set F1(x;R)B¯F^{-1}(x;R)\cap\overline{B} is weakly sequentially compact, and so there is a subsequence of (yxk,B)k(y_{x_{k},B})_{k\in\mathbb{N}} that weakly converges to some yB¯U.y^{*}\in\overline{B}\subset U. By lower semicontinuity of F(x,)F(x,\cdot) and (8.15), we have F(x,y)F(x,y)F(x,y^{*})\leq F(x,y^{\prime}) for all yY𝒢,𝒜.y^{\prime}\in Y_{\mathcal{G},\mathcal{A}}. Because there exists a sequence (yk)k(y_{k})_{k\in\mathbb{N}} in Y𝒢,𝒜Y_{\mathcal{G},\mathcal{A}} such that (8.14) holds, F(x,y)=I(x).F(x,y^{*})=I(x). Since yUy^{*}\in U it follows that xUΨ1.x\in U_{\Psi^{-1}}. Thus, the reverse inclusion in (8.11) is proven, and we may conclude that UΨ1U_{\Psi^{-1}} is measurable for all open sets UY.U\subset Y.

The conditions of the Kuratowski–Ryll-Nardzewski Selection Theorem are thus satisfied, and so we may conclude the existence of the measurable selection ff, which proves the lemma. ∎

9. Proof of Theorem 3.2

Proof of Lemma 3.1.

Let $u_{1},\dots,u_{d-1}$ be an orthogonal basis for the subspace $H$ orthogonal to $v$ and define the change-of-basis matrix $\mathbf{H}=\begin{bmatrix}u_{1}&\dots&u_{d-1}&v\end{bmatrix}$.

Since $w-v\in H$, we have $w=v+\sum_{i=1}^{d-1}a_{i}u_{i}$ for some scalars $a_{1},\dots,a_{d-1}$. If $x\in H$, then $\Xi_{v\to w}x=x$, so $\Xi_{v\to w}$ acts as the identity on $H$. Also, $\Xi_{v\to w}v=w$. It follows that in the basis $\{u_{1},\dots,u_{d-1},v\}$ the matrix of $\Xi_{v\to w}$ is

(9.1) $M=\begin{bmatrix}1&0&0&\dots&a_{1}\\ 0&1&0&\dots&a_{2}\\ \vdots&&\ddots&&\vdots\\ &&&1&a_{d-1}\\ 0&&\dots&0&1\end{bmatrix}.$

The matrix $M$ is upper triangular with unit diagonal, so it has determinant one; since $\Xi_{v\to w}$ is similar to $M$, it has determinant one as well.

The above analysis also shows that $\Xi_{v\to w}$ is a nondegenerate linear map, hence the induced map on $\mathcal{S}$ is bijective. Moreover, since $\Xi_{v\to w}Tv=Tw$ for all $T>0$, the restriction of $\Xi_{v\to w}$ to $\mathcal{S}_{0,Tv,*}$ is a bijection from $\mathcal{S}_{0,Tv,*}$ to $\mathcal{S}_{0,Tw,*}$. ∎
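To make the statement of Lemma 3.1 concrete, here is a minimal numerical sketch (not part of the proof). It assumes the explicit form $\Xi_{v\to w}x=x+\frac{\langle v,x\rangle}{|v|^{2}}(w-v)$, which is consistent with the properties used above (it fixes $H$ and maps $v$ to $w$), and checks that the determinant is one and that $Tv$ is mapped to $Tw$.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
v = rng.normal(size=d)

# pick w in v + H by projecting a random vector onto H = v^perp
h = rng.normal(size=d)
h -= (h @ v) / (v @ v) * v
w = v + h

# assumed explicit form of Xi_{v->w}: x |-> x + (<v,x>/|v|^2)(w - v),
# i.e. the matrix I + (w - v) v^T / |v|^2
Xi = np.eye(d) + np.outer(w - v, v) / (v @ v)

print(np.isclose(np.linalg.det(Xi), 1.0))   # determinant one, as in Lemma 3.1
print(np.allclose(Xi @ h, h))               # acts as the identity on H
T = 3.7
print(np.allclose(Xi @ (T * v), T * w))     # maps Tv to Tw for all T > 0
```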

Proof of Theorem 3.2.

We will work out the details fully for Example 1 and give a sketch for Example 2. Take $g$ to be the function described in Example 1. We first check condition (C1). The transformations $(\theta^{x}_{*})_{x\in\mathbb{R}^{d}}$ defined in Section 3.2 are ergodic by ergodicity of marked Poisson processes with respect to spatial shifts (see [Kin93] or [DVJ03]). Now let $r\in\mathbb{R}^{d}$. From the definition of $\theta^{r}_{*}$ in this model:

\begin{align*}
g_{\theta^{r}x,\theta^{r}_{*}\omega}&=\int\varphi(\theta^{r}x-\theta^{r}y)\mathbf{N}(dy,d\varphi)+\lambda I\\
&=\int\varphi(x+r-y-r)\mathbf{N}(dy,d\varphi)+\lambda I\\
&=g_{x,\omega},
\end{align*}

and so condition (C1) is satisfied.

We now establish condition (C2). We set $\delta=1$. The fact that $\Xi_{v\to w}^{*}$ is measure preserving for all $w\in v+H$ follows from Lemma 3.1. By the uniform compact support requirement, we can find $R>0$ such that if $|y|>R$ and $|w-v|\leq 1$, then $\mathsf{Q}\{\varphi(\Xi_{v\to w}y)=0\}=1$. Thus,

$\|g^{w,v}_{x}\|\leq\lambda+\int\|\varphi(\Xi_{v\to w}(x-y))\|\mathbf{N}(dy,d\varphi)\leq\lambda+\int\|\varphi\|_{C^{2}}\mathds{1}_{|x-y|\leq R}\mathbf{N}(dy,d\varphi).$

Also, there is $C_{1}>0$ such that for all $w$ satisfying $|w-v|\leq 1$,

\begin{align*}
\|\partial_{w_{i}}g^{w,v}_{x}\|&\leq\int\|\partial_{w_{i}}\varphi(\Xi_{v\to w}(x-y))\|\mathbf{N}(dy,d\varphi)\\
&\leq\int\|\nabla\varphi(\Xi_{v\to w}(x-y))\|\,\|\partial_{w_{i}}\Xi_{v\to w}\|\,|x-y|\,\mathbf{N}(dy,d\varphi)\\
&\leq C_{1}\int\|\varphi\|_{C^{2}}\mathds{1}_{|x-y|\leq R}\mathbf{N}(dy,d\varphi).
\end{align*}

Similarly, there is $C_{2}>0$ such that for all $w$ satisfying $|w-v|\leq 1$,

$\|\partial_{w_{j}}\partial_{w_{i}}g^{w,v}_{x}\|\leq C_{2}\int\|\varphi\|_{C^{2}}\mathds{1}_{|x-y|\leq R}\mathbf{N}(dy,d\varphi).$

Let $\eta$ be a smooth function whose support is contained in the ball of radius $2R$ centered at the origin and such that $\eta(x)\geq\mathds{1}_{|x|\leq R}$ for all $x\in\mathbb{R}^{d}$. For a sufficiently large constant $C$, the random function $Y$ defined by

(9.2) $Y(x)=\lambda+C\int\|\varphi\|_{C^{2}}\eta(x-y)\mathbf{N}(dy,d\varphi)$

satisfies (3.4) and (C2)(ii). Indeed, $Y$ is stationary with respect to lattice shifts by stationarity of Poisson points, verifying (C2)(ii)(a). In addition, $Y$ has finite range dependence due to the compact support of $\eta$, and so (C2)(ii)(b) follows. Now we verify the moment condition in (C2)(ii)(c). If $(\varphi_{i})_{i\in\mathbb{N}}$ is an i.i.d. family with distribution $\mathsf{Q}$ and $N(R)$ is the number of Poisson points in a ball of radius $2R$, then, for $\ell>0$ such that $\mathsf{Q}\|\varphi_{1}\|^{\ell}<\infty$, the Marcinkiewicz–Zygmund inequality ([MZ37]) implies, for some $C,C^{\prime}>0$,

$\mathbb{E}\Big[\sup_{x\in[0,1]^{d}}|Y(x)|^{\ell}\Big]\leq C+C\,\mathbb{E}\Big[\Big|\sum_{i=1}^{N(R)}\|\varphi_{i}\|\Big|^{\ell}\Big]\leq C+C^{\prime}\mathbb{E}|N(R)|^{\ell/2}<\infty.$

Because Example 1 assumes that $\mathsf{Q}\|\varphi_{1}\|^{\beta}<\infty$ for some $\beta>4d$, the above display implies (C2)(ii)(c) of Example 1 for the same $N$.
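For readers who prefer a concrete picture of the dominating field in (9.2), the following is a small simulation sketch. The intensity, the mark distribution standing in for $\|\varphi\|_{C^{2}}$, and all numerical parameters are illustrative choices, not taken from the paper; only the structural form $Y(x)=\lambda+C\int\|\varphi\|_{C^{2}}\eta(x-y)\mathbf{N}(dy,d\varphi)$ is from (9.2).

```python
import numpy as np

rng = np.random.default_rng(1)
d, lam, C, R = 2, 1.0, 1.0, 1.0           # illustrative parameters
intensity, box = 2.0, 10.0                # Poisson intensity; points sampled in [-box, box]^d

# marked Poisson cloud: locations y_i with marks standing in for ||phi_i||_{C^2}
n_pts = rng.poisson(intensity * (2 * box) ** d)
pts = rng.uniform(-box, box, size=(n_pts, d))
marks = rng.lognormal(mean=0.0, sigma=1.0, size=n_pts)   # assumed mark distribution

def eta(z):
    """Smooth bump supported in {|z| < 2R} with eta >= 1 on {|z| <= R}."""
    s = np.linalg.norm(z, axis=-1) / (2 * R)
    out = np.zeros_like(s)
    inside = s < 1.0
    out[inside] = np.exp(4.0 / 3.0 - 1.0 / (1.0 - s[inside] ** 2))
    return out

def Y(x):
    """The dominating field of (9.2): lambda + C * sum_i ||phi_i|| * eta(x - y_i)."""
    return lam + C * np.sum(marks * eta(x - pts))

grid = np.stack(np.meshgrid(*[np.linspace(0.0, 1.0, 21)] * d), axis=-1).reshape(-1, d)
print(max(Y(x) for x in grid))            # finite sup over [0,1]^d, in the spirit of (C2)(ii)(c)
```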

The uniform positive definiteness condition in (C3) is satisfied due to the $\lambda I$ term in (3.8).

This completes the proof in the case where $g$ is as in Example 1.

Now take $g$ as given in Example 2. The only meaningful difference from the preceding argument is in computing the derivatives of $g^{w,v}$ and bounding them by an appropriate field $Y$. We have

$g^{w,v}_{x}=\exp\Big(\int\varphi(\Xi_{v\to w}(x-y))\mathbf{N}(dy,d\varphi)\Big).$

Under Example 2, $\|\varphi\|_{C^{2}}$ is bounded by a deterministic constant. We let $\eta$ be the smooth function used previously, whose support is contained in the ball of radius $2R$. Then it suffices to take

(9.3) $Y(x)=\exp\Big(C\int\eta(x-y)\mathbf{N}(dy,d\varphi)\Big)+C$

for a sufficiently large constant $C$. Specifically, there is a constant $C>0$ such that

(9.4) $\Big\|\exp\Big(\int\varphi(\Xi_{v\to w}(x-y))\mathbf{N}(dy,d\varphi)\Big)\Big\|_{C^{2},x}\leq Y(x)$

holds. The verification that (9.3) satisfies (C2)(ii) is similar to the argument for (9.2). Indeed, $\log Y$ is of the same sum form as in (9.2), and so the stationarity and finite range dependence conditions follow in the same manner. Additionally, $\mathbb{E}|\sup_{x\in[0,1]^{d}}Y(x)|^{\ell}<\infty$ for all $\ell>0$ by the compact support of $\eta$ and the fact that Poisson random variables have finite moments of all orders.

Now we will sketch the argument for (9.4). For a path $X(t)$ in matrix space,

$\frac{d}{dt}\exp(X(t))=\int_{0}^{1}e^{\alpha X(t)}\frac{dX(t)}{dt}e^{(1-\alpha)X(t)}\,d\alpha$

(see Theorem 2.19 in Chapter IX of [Kat66]). Also, for a matrix $M$, we have $\|e^{M}\|\leq e^{\|M\|}$. It follows that

$\Big\|\frac{d}{dt}e^{X(t)}\Big\|\leq e^{\|X(t)\|}\Big\|\frac{d}{dt}X(t)\Big\|.$

The above formula and the fact that, almost surely, $\|\varphi\|\leq C_{1}$ and $\|\Xi_{v\to w}\|\leq C_{2}$ for deterministic constants $C_{1},C_{2}$ can be used to show that $\partial_{w_{i}}\exp\Big(\int\varphi(\Xi_{v\to w}(x-y))\mathbf{N}(dy,d\varphi)\Big)$ and $\partial_{w_{j}}\partial_{w_{i}}\exp\Big(\int\varphi(\Xi_{v\to w}(x-y))\mathbf{N}(dy,d\varphi)\Big)$ are bounded by (9.3) for a sufficiently large deterministic constant $C$.
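As a quick numerical sanity check of the bound $\|\frac{d}{dt}e^{X(t)}\|\leq e^{\|X(t)\|}\|\frac{d}{dt}X(t)\|$ used above, here is a hedged sketch with an assumed smooth matrix path $X(t)=A+tB$ (any smooth path would do); it compares a finite-difference derivative of $\exp(X(t))$ against the right-hand side in the spectral norm.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n = 4
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, n))
X = lambda t: A + t * B                     # assumed smooth matrix path, dX/dt = B

t0, h = 0.3, 1e-6
deriv = (expm(X(t0 + h)) - expm(X(t0 - h))) / (2 * h)   # central finite difference
lhs = np.linalg.norm(deriv, 2)
rhs = np.exp(np.linalg.norm(X(t0), 2)) * np.linalg.norm(B, 2)
print(lhs <= rhs)                           # True: consistent with the bound in the text
```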

Finally, the relation $\int\varphi(x-y)\mathbf{N}(dy,d\varphi)\succeq 0$ implies $g_{x}\succeq I$, and so the uniform positive definiteness condition in (C3) holds with $\lambda=1$.

10. Proof of Theorem 4.1

10.1. Checking conditions (A1)–(A5) and (B1)–(B2)

For $r>0$, distinct $x,y\in\mathbb{R}^{d}$, $n\in\mathbb{N}$, and $\omega\in\Omega$, we define $\mathcal{Q}^{r}_{x,y,n}(\omega)$ to be the set of paths $\gamma\in\mathcal{P}_{x,y,n}$ satisfying the following condition: for all $i=0,1,\ldots,n-1$, $|\gamma_{i+1}-\gamma_{i}|\leq r$, and there are numbers $k$ and $i_{0},i_{1},\ldots,i_{k}$ such that

  (i) $0=i_{0}<i_{1}<\ldots<i_{k}=n$;

  (ii) $\gamma_{i_{j}}\in\omega$ for $j=1,\ldots,k-1$;

  (iii) $\gamma_{i_{j}}\neq\gamma_{i_{m}}$ if $j\neq m$;

  (iv) for each $j=0,1,\ldots,k-1$ and every $i\in\{i_{j},i_{j}+1,\ldots,i_{j+1}\}$,

    $\gamma_{i}=\frac{i-i_{j}}{i_{j+1}-i_{j}}\gamma_{i_{j+1}}+\frac{i_{j+1}-i}{i_{j+1}-i_{j}}\gamma_{i_{j}}.$

We also use the notation $\mathcal{Q}^{r}_{*,*,n}(\omega)$, $\mathcal{Q}^{r}_{*,*,*}(\omega)$, etc., similarly to (4.1).

Lemma 10.1.

There is a set $\Omega^{\prime}\in\mathcal{F}$ with $\mathbb{P}(\Omega^{\prime})=1$, a number $r>0$, and a jointly measurable map

\begin{align*}
\gamma:\Omega\times\mathbb{R}^{d}\times\mathbb{R}^{d}&\to\mathcal{P}\\
(\omega,x,y)&\mapsto\gamma_{\omega}(x,y)
\end{align*}

such that for all $\omega\in\Omega^{\prime}$ and $x,y\in\mathbb{R}^{d}$, $\gamma_{\omega}(x,y)\in\mathcal{Q}^{r}_{x,y,*}(\omega)$ and it is a geodesic under $\omega$.

We prove this lemma in Section 10.2.

To apply our general theorems to this model, we need to interpret the action as a function of continuous paths from $\mathcal{S}$. We will define $A_{\omega}$ separately on broken line paths and on all other paths. Namely, we set

$A_{\omega}(\psi)=A_{\omega}(\psi_{0},\psi_{1},\ldots,\psi_{n})$

if $n\in\mathbb{N}$ and $\psi\in\mathcal{S}_{*,*,n}$ satisfies $\psi_{k+t}=(1-t)\psi_{k}+t\psi_{k+1}$ for all $k=0,1,\ldots,n-1$ and $t\in[0,1]$. If a path $\psi\in\mathcal{S}$ is not of this form, we set $A_{\omega}(\psi)=+\infty$. There is a natural bimeasurable bijection between discrete paths in $\mathcal{P}$ and finite action paths in $\mathcal{S}$. In particular, Lemma 10.1 automatically provides a measurable representation of continuous optimal paths $\gamma\in\mathcal{S}$: for $x,y\in\mathbb{R}^{d}$ and almost all $\omega\in\Omega$, there is an action minimizer from $\mathcal{S}$ traveling from $x$ to $y$ and switching directions at finitely many Poissonian points. These Poissonian points and the endpoints $x$ and $y$ will be called binding vertices.

With this continuous path interpretation at hand, we can check conditions (A1)–(A5) of Section 2.1. Condition (A1) is a corollary of (4.5). Condition (A2) follows directly from the definition of the action in (4.2) and the identity $F_{\omega}(x)=F_{\theta_{*}^{-x}\omega}(0)$. Condition (A3) is implied by Lemma 10.1. Conditions (A4) and (A5) with $\mathcal{C}=\mathbb{R}^{d}$ follow from (4.4).

Let us now check conditions (B1) and (B2).

Fixing an arbitrary $v\in\mathbb{R}^{d}\setminus\{0\}$, we define $H$ as the orthogonal complement to the line spanned by $v$. The family of transformations $(\Xi_{v\to w})_{w\in v+H}$ of $\mathbb{R}^{d}$ is defined by (3.3). Also, for $w\in v+H$, we define the transformation $\Xi^{*}_{v\to w}$ of $\Omega$ as the pushforward of $\omega\in\Omega$ by $\Xi_{v\to w}$. In other words, we apply the transformation $\Xi_{v\to w}$ to each Poissonian point. Choosing $\delta=1$, we see that the setup requirement of (B1) holds. It remains to check (B2). Let us first compute, for all $y\in\mathbb{R}^{d}$:

\begin{align*}
\partial_{w_{k}}L(\Xi_{v\to w}y)&=\sum_{j=1}^{d}\partial_{j}L(\Xi_{v\to w}y)\,\partial_{w_{k}}(\Xi_{v\to w}y)_{j}\\
&=\sum_{j=1}^{d}\partial_{j}L(\Xi_{v\to w}y)\frac{\langle v,y\rangle}{|v|^{2}}\delta_{kj}=\partial_{k}L(\Xi_{v\to w}y)\frac{\langle v,y\rangle}{|v|^{2}},\quad k=1,\dots,d,
\end{align*}

and

$\partial_{w_{k}w_{j}}L(\Xi_{v\to w}y)=\partial_{kj}L(\Xi_{v\to w}y)\frac{\langle v,y\rangle^{2}}{|v|^{4}},\quad k,j=1,\dots,d.$
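As a sanity check on these chain-rule formulas (a sketch, not part of the proof), one can compare the first identity with a finite difference, assuming the explicit form $\Xi_{v\to w}y=y+\frac{\langle v,y\rangle}{|v|^{2}}(w-v)$ and an illustrative smooth Lagrangian $L$; neither is taken verbatim from the paper's assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4
v = rng.normal(size=d)
h = rng.normal(size=d)
h -= (h @ v) / (v @ v) * v                  # h in H = v^perp, so w = v + h lies in v + H
w, y = v + h, rng.normal(size=d)

L = lambda z: (z @ z) ** 2                  # illustrative smooth L, not the paper's Lagrangian
gradL = lambda z: 4.0 * (z @ z) * z

Xi = lambda w_: y + (v @ y) / (v @ v) * (w_ - v)   # assumed form of Xi_{v->w} applied to y

k, eps = 1, 1e-6
e_k = np.zeros(d)
e_k[k] = 1.0
fd = (L(Xi(w + eps * e_k)) - L(Xi(w - eps * e_k))) / (2 * eps)   # d/dw_k of L(Xi_{v->w} y)
formula = gradL(Xi(w))[k] * (v @ y) / (v @ v)                    # right-hand side above
print(np.isclose(fd, formula, rtol=1e-4))   # True
```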

Therefore, if $\gamma^{T}(v)\in\mathcal{Q}^{r}_{*,*,n}(\omega)$, then, since

\begin{align*}
B(w,v,\gamma^{T}(v))=&\sum_{i=0}^{n-1}L(\Delta_{i}\Xi_{v\to w}\gamma^{T}(v))\\
&\qquad+\frac{1}{2}\sum_{i=0}^{n-1}(F_{\Xi^{*}_{w\to v}\omega}(\Xi_{v\to w}\gamma_{i})+F_{\Xi^{*}_{w\to v}\omega}(\Xi_{v\to w}\gamma_{i+1}))\\
=&\sum_{i=0}^{n-1}L(\Xi_{v\to w}\Delta_{i}\gamma^{T}(v))+\frac{1}{2}\sum_{i=0}^{n-1}(F_{\omega}(\gamma_{i})+F_{\omega}(\gamma_{i+1})),
\end{align*}

we have

(10.1) $\partial_{w_{j}}B(w,v,\gamma^{T}(v))=\frac{1}{|v|^{2}}\sum_{i=0}^{n-1}\partial_{j}L(\Xi_{v\to w}\Delta_{i}\gamma^{T}(v))\langle v,\Delta_{i}\gamma^{T}(v)\rangle$

and

\begin{align*}
\partial_{w_{k}w_{j}}B(w,v,\gamma^{T}(v))&=\sum_{i=0}^{n-1}\partial_{w_{k}w_{j}}L(\Xi_{v\to w}\Delta_{i}\gamma^{T}(v))\\
&=\frac{1}{|v|^{4}}\sum_{i=0}^{n-1}\partial_{kj}L(\Xi_{v\to w}\Delta_{i}\gamma^{T}(v))\langle v,\Xi_{v\to w}\Delta_{i}\gamma^{T}(v)\rangle^{2}.
\end{align*}

Since $L\in C^{2}(\mathbb{R}^{d})$ and the increments of $\gamma^{T}(v)$ are bounded by $r$, we obtain that there is a number $D=D(v)$ such that if $|w-v|<1$, then

(10.2) $|\partial_{w_{k}w_{j}}B(w,v,\gamma^{T}(v))|\leq D(v)\sum_{i=0}^{n-1}|\Delta_{i}\gamma^{T}(v)|^{2}.$

Using (D3), we obtain that there is $c(r)>0$ such that if $|y|\leq r$, then

$L(y)>c(r)|y|^{2}.$

Therefore, we can extend (10.2):

\begin{align*}
|\partial_{w_{k}w_{j}}B_{\omega}(w,v,\gamma^{T}(v))|&\leq c^{-1}(r)D(v)\sum_{i=0}^{n-1}L(\Delta_{i}\gamma^{T}(v))\\
&\leq c^{-1}(r)D(v)A_{\omega}(\gamma^{T}(v)),
\end{align*}

and (B2) follows since $\limsup_{T\to\infty}(A(\gamma^{T}(v))/T)=\Lambda(v)$.

The expression for $\nabla\Lambda$ in (4.6) follows from (10.1) and (2.20).

10.2. Proof of Lemma 10.1

We will need several auxiliary lemmas first.

For every $x,y\in\mathbb{R}^{d}$ and all $n\in\mathbb{N}$, we define $\gamma(x,y,n)\in\mathcal{P}_{x,y,n}$ by

(10.3) $\gamma_{k}(x,y,n)=\frac{k}{n}y+\Big(1-\frac{k}{n}\Big)x,\quad k=0,\ldots,n.$
Lemma 10.2.
  1. For all $\omega\in\Omega$, if $\gamma=(\gamma_{0},\gamma_{1},\ldots,\gamma_{n})\in\mathcal{P}_{*,*,n}$ is a geodesic, then so is $(\gamma_{i},\gamma_{i+1},\ldots,\gamma_{k})\in\mathcal{P}_{*,*,k-i}$ for all $i,k$ satisfying $0\leq i<k\leq n$.

  2. For all $\omega\in\Omega$, $x,y\in\mathbb{R}^{d}$, $n\in\mathbb{N}$, and all $\gamma\in\mathcal{P}_{x,y,n}$, if $\gamma_{k}\notin\omega$ for all $k=1,\ldots,n-1$, then

    $A_{\omega}(\gamma)\geq A_{\omega}(\gamma(x,y,n)),$

    where $\gamma(x,y,n)$ is defined in (10.3).

  3. There is $r>1$ such that if $\gamma$ is a geodesic for some $\omega\in\Omega$, then the distance between any consecutive points of $\gamma$ is bounded by $r$.

  4. Let $r$ be the number provided in part 3. For all $\omega\in\Omega$, all distinct $x,y\in\mathbb{R}^{d}$, and every path $\gamma\in\mathcal{P}_{x,y,*}$, there is $\gamma^{\prime}\in\mathcal{Q}^{r}_{x,y,*}(\omega)$ satisfying $A_{\omega}(\gamma^{\prime})\leq A_{\omega}(\gamma)$.

  5. For all distinct $x,y\in\mathbb{R}^{d}$ and all $\omega\in\Omega$,

    (10.4) $\mathcal{A}_{\omega}(x,y)=\inf_{\gamma\in\mathcal{Q}^{r}_{x,y,*}(\omega)}A_{\omega}(\gamma).$
Proof.

Part 1 is obvious.

To prove part 2, it suffices to note that since $\sum_{i}\Delta_{i}\gamma=y-x$, convexity of $L$ implies

$\frac{1}{n}(A_{\omega}(\gamma)-A_{\omega}(\gamma(x,y,n)))\geq\frac{1}{n}\sum_{i=0}^{n-1}L(\Delta_{i}\gamma)-L\Big(\frac{y-x}{n}\Big)\geq 0.$

To prove part 3, we need to find $r$ such that if $|x-y|>r$, then there is $n\geq 2$ such that $A_{\omega}(\gamma(x,y,n))<A_{\omega}(x,y)$ (here $(x,y)\in\mathcal{P}_{x,y,1}$). It suffices to check that

(10.5) $nL((y-x)/n)+n<L(y-x)$

for some $n\geq 2$. Let $L^{*}=\sup_{|x|\leq 2}L(x)<\infty$. We can use the superlinearity condition (D4) to pick $r>2$ such that $|y-x|>r$ implies $L(y-x)>(L^{*}+1)|y-x|$.

If $|y-x|>r$, we set $n=\lfloor|y-x|\rfloor$. Then $n\geq 2$. In addition, $|y-x|/n\leq 2$ implies

$nL((y-x)/n)+n\leq|y-x|L^{*}+|y-x|<L(y-x),$

i.e., (10.5) holds.

Part 4 follows from parts 2 and 3. Part 5 follows from part 4. ∎

Lemma 10.3.

For all $x,y\in\mathbb{R}^{d}$, $\mathcal{A}_{\omega}(x,y)$ is a random variable.

Proof.

Due to (10.4), we can write

$\mathcal{A}_{\omega}(x,y)=\lim_{m\to\infty}\mathcal{A}_{\omega}(x,y,m),$

where

$\mathcal{A}_{\omega}(x,y,m)=\min_{\substack{\gamma\in\mathcal{Q}^{r}_{x,y,*}(\omega)\\ \gamma\subset\mathsf{B}(0,m)}}A_{\omega}(\gamma).$

Since computing $\mathcal{A}_{\omega}(x,y,m)$ only requires a search through the finitely many paths determined by the Poisson points they pass through, $\mathcal{A}_{\omega}(x,y,m)$ is a random variable for each fixed $m$. Therefore, the limit as $m\to\infty$, $\mathcal{A}_{\omega}(x,y)$, is also a random variable. ∎

Using part 2 of Lemma 10.2, we can define distances between any two points along a straight line:

\begin{align*}
(10.6)\qquad \rho_{\omega}(x,y)&=\inf_{n\in\mathbb{N}}A_{\omega}(\gamma(x,y,n))\\
&=\inf_{n\in\mathbb{N}}\Big(nL\Big(\frac{x-y}{n}\Big)+\sum_{i=1}^{n-1}F_{\omega}(\gamma_{i}(x,y,n))\Big)+\frac{1}{2}(F_{\omega}(x)+F_{\omega}(y)).
\end{align*}

We also define $\rho_{\omega}(x,x)=0$ for all $x\in\mathbb{R}^{d}$. Let $\bar{\Omega}$ be the set of all $\omega$ such that no three points of $\omega$ lie on the same straight line. Then $\mathbb{P}(\bar{\Omega})=1$.
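The reduction behind Lemma 10.3, namely that $\mathcal{A}_{\omega}(x,y,m)$ is computed by a finite search over binding vertices with segment costs given by $\rho_{\omega}$ in (10.6), can be illustrated by the following sketch. The Lagrangian, the field $F_{\omega}$ (taken here to be $1$ off the Poisson cloud and $0$ on it), the truncation of the infimum over $n$, and the assumption that interior points of straight segments avoid the Poisson cloud are all simplifying illustrative choices, not the paper's exact model.

```python
import numpy as np
from heapq import heappush, heappop

rng = np.random.default_rng(4)
d, m, n_max = 2, 6.0, 100                  # illustrative: dimension, ball radius, truncation of inf over n

# Poisson cloud restricted to B(0, m) (sampled in a box, then clipped)
n_pts = rng.poisson(0.3 * (2 * m) ** d)
cloud = rng.uniform(-m, m, size=(n_pts, d))
cloud = cloud[np.linalg.norm(cloud, axis=1) <= m]

L = lambda z: float(np.dot(z, z))          # illustrative Lagrangian, quadratic as in (D3)
F = lambda is_poisson: 0.0 if is_poisson else 1.0   # illustrative stand-in for F_omega

def rho(p, q, p_pois, q_pois):
    """Truncated version of (10.6): optimal straight-line action between p and q."""
    delta = q - p
    best = min(n * L(delta / n) + (n - 1) for n in range(1, n_max + 1))
    return best + 0.5 * (F(p_pois) + F(q_pois))

def A_trunc(x, y):
    """Sketch of A_omega(x, y, m): Dijkstra over binding vertices {x, y} and the cloud in B(0, m)."""
    nodes = [x, y] + [p for p in cloud]
    pois = [False, False] + [True] * len(cloud)
    dist = [np.inf] * len(nodes)
    dist[0] = 0.0
    heap = [(0.0, 0)]
    while heap:
        du, u = heappop(heap)
        if du > dist[u]:
            continue
        if u == 1:                          # reached y with the optimal accumulated action
            return du
        for idx in range(len(nodes)):
            if idx == u:
                continue
            dv = du + rho(nodes[u], nodes[idx], pois[u], pois[idx])
            if dv < dist[idx]:
                dist[idx] = dv
                heappush(heap, (dv, idx))
    return dist[1]

print(A_trunc(np.zeros(d), np.array([4.0, 2.0])))
```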

Lemma 10.4.

Let $\omega\in\bar{\Omega}$. Then for all distinct $x,y\in\mathbb{R}^{d}$, $\rho_{\omega}(x,y)<\infty$, and for all compact sets $K\subset\mathbb{R}^{d}$,

$\inf_{x\in K,\,y\in\mathbb{R}^{d},\,y\neq x}\rho_{\omega}(x,y)>0.$
Proof.

The upper bound is trivial. Now we prove the lower bound. Let $\delta\in(0,1)$ be less than the minimum distance between any Poisson point $p_{1}\in K\cap\omega$ and any other Poisson point $p_{2}\in\omega\setminus\{p_{1}\}$. If $x\in K$, $y\neq x$, and $n\in\mathbb{N}$ satisfy $|y-x|/n<\delta$, then either $x$ or $\gamma_{1}(x,y,n)$ is not a Poisson point, and so $A_{\omega}(\gamma(x,y,n))>\frac{1}{2}$. If instead $|y-x|/n\geq\delta$, then, due to (D3), $A_{\omega}(\gamma(x,y,n))\geq nL(|y-x|/n)\geq c\delta^{2}$. Thus, the infimum in question is bounded below by $\frac{1}{2}\wedge(c\delta^{2})$. ∎

We recall that $\ast$-connected sets are defined in Section 8.

For a random field $(X_{k})_{k\in\mathbb{Z}^{d}}$ and a set $U\subset\mathbb{Z}^{d}$, we denote

$X(U)=\sum_{k\in U}X_{k}.$

Lemma 10.5 ([LW10]).

Let $(X_{k})_{k\in\mathbb{Z}^{d}}$ be a stationary random field with finite dependence range. Suppose that

(10.7) $\mathbb{P}\{X_{0}=0\}=0.$

Then there is $\beta>0$ such that for all $A>0$ the following holds. With probability 1, there is $N$ such that for $n\geq N$, if $\Gamma$ is a $\ast$-connected subset of $\mathbb{Z}^{d}$ containing $0\in\mathbb{Z}^{d}$ and $X(\Gamma)\leq An$, then $|\Gamma|\leq\beta An$.

Remark 12.

Lemma 2.2 was stated slightly differently in [LW10]. We replace a condition on the atom mass at $0$ by the stronger no-atom-at-$0$ condition in (10.7). We also replace the condition on the sets $\Gamma$, which were allowed to vary with $n$, by a stricter requirement independent of $n$. In [LW10], only an estimate $|\Gamma|\leq Bn$ is stated as the conclusion of the lemma, but it follows from the proof that $B$ can be taken of the form $\beta A$, where $\beta$ depends only on the distribution of $X_{0}$.


We will use the notation $A+B=\{x+y:x\in A,\ y\in B\}$ for $A,B\subset\mathbb{R}^{d}$. Let us fix $R>2r$ and for $k\in\mathbb{Z}^{d}$ define

\begin{align*}
\mathsf{I}_{k}&=Rk+[0,R]^{d},\\
\mathsf{I}_{k}^{+}&=\mathsf{I}_{k}+[-r,r]^{d},\\
\Delta\mathsf{I}_{k}&=\mathsf{I}_{k}^{+}\setminus\mathsf{I}_{k},\\
\mathsf{J}_{k}&=\mathsf{I}_{k}+[-R,R]^{d},\\
\mathsf{J}^{-}_{k}&=\mathsf{I}_{k}+[-R+r,R-r]^{d},\\
\Delta\mathsf{J}_{k}&=\mathsf{J}_{k}\setminus\mathsf{J}^{-}_{k}.
\end{align*}
Lemma 10.6.

There is a stationary, $(0,\infty)$-valued random field $(\xi_{k})_{k\in\mathbb{Z}^{d}}$ with finite dependence range such that if $k\in\mathbb{Z}^{d}$, $n\in\mathbb{N}$, and $\gamma\in\mathcal{Q}^{r}_{*,*,n}(\omega)$ is contained entirely in $\mathsf{J}_{k}$ and satisfies $\gamma_{0}\in\Delta\mathsf{I}_{k}$, $\gamma_{n}\in\Delta\mathsf{J}_{k}$, then

(10.8) $A_{\omega}(\gamma)\geq\xi_{k}(\omega),\quad\text{a.s.}$
Proof.

For $k\in\mathbb{Z}^{d}$, let $P_{k}$ denote the set of Poisson points in $\mathsf{J}_{k}$ and define the random variable

$\xi_{k}(\omega)=\inf\{\rho_{\omega}(x,y)\,:\,x\in\Delta\mathsf{I}_{k},\ y\in P_{k}\cup\Delta\mathsf{J}_{k},\ y\neq x\}.$

First, note that the collection $(\xi_{k})_{k\in\mathbb{Z}^{d}}$ is stationary by stationarity of the Poisson process. Since each $\xi_{k}$ is a function of the Poisson points contained in the bounded set $\mathsf{J}_{k}$, the collection $(\xi_{k})_{k\in\mathbb{Z}^{d}}$ has finite range of dependence. Additionally, Lemma 10.4 implies that $\xi_{k}>0$ almost surely.

We claim that $A_{\omega}(\gamma)\geq\xi_{k}(\omega)$ for all $\gamma\in\mathcal{Q}^{r}_{*,*,n}$ satisfying the conditions of the lemma. Note that $\Delta\mathsf{I}_{k}\cap\Delta\mathsf{J}_{k}=\emptyset$. As a consequence, there exists $i^{*}\in\{1,\dots,n\}$ such that $\gamma_{i^{*}}\in P_{k}\cup\Delta\mathsf{J}_{k}$ and $\gamma_{j}\notin P_{k}\cup\Delta\mathsf{J}_{k}$ for all $j\in\{0,\dots,i^{*}-1\}$. Then $A_{\omega}(\gamma)\geq A_{\omega}(\gamma_{0},\dots,\gamma_{i^{*}})$. Finally, part 2 of Lemma 10.2 implies that

$A_{\omega}(\gamma_{0},\dots,\gamma_{i^{*}})\geq\rho_{\omega}(\gamma_{0},\gamma_{i^{*}}),$

and the right-hand side is bounded below by $\xi_{k}(\omega)$. ∎

Lemma 10.7.

There is $C>0$ such that, with probability 1, there is $D>0$ such that if $x\in\mathsf{I}_{0}$, $|y|>D$, and $\gamma\in\mathcal{Q}^{r}_{x,y,*}(\omega)$, then $A_{\omega}(\gamma)>C|y|$.

Once the existence of the shape function $\Lambda$ is established, it follows from this lemma that $\Lambda(v)>0$ for all $v\neq 0$, which, according to Theorem 2.5, implies that the boundary of the limit shape is diffeomorphic to a sphere.

Proof.

Assume that no $C$ described in the statement exists. This means that, with positive probability, for every $\varepsilon>0$ there are sequences $n_{m}\in\mathbb{N}$, $x_{m}\in\mathsf{I}_{0}$, $y_{m}\in\mathbb{R}^{d}$, $\gamma^{m}\in\mathcal{Q}^{r}_{x_{m},y_{m},n_{m}}(\omega)$ with $A_{\omega}(\gamma^{m})<\varepsilon|y_{m}|$ and $|y_{m}|\to\infty$. Recalling that $\beta$ is the constant provided by Lemma 10.5 and choosing $\varepsilon$ to satisfy

(10.9) $0<\varepsilon<(2\sqrt{d}R\beta)^{-1},$

we will arrive at a contradiction.

We are going to decompose $\gamma^{m}$ into smaller pieces, using the fact that the increments of $\gamma^{m}$ are bounded by $r$. First, we set $k_{0}=0\in\mathbb{Z}^{d}$ and

$i_{0}=\min\{s\in\mathbb{N}:\ \gamma_{s}\in\Delta\mathsf{I}_{0}\}.$

Then, inductively, for $j=0,1,2,\ldots$, we define

$i_{j+1}=\min\{s>i_{j}:\ \gamma_{s}\notin\mathsf{J}_{k_{j}}\}\wedge n_{m},$

and choose $k_{j+1}\in\mathbb{Z}^{d}$ so that $k_{j+1}-k_{j}\in\{-1,0,1\}^{d}$ and $\gamma_{i_{j+1}}\in\mathsf{I}^{+}_{k_{j+1}}$. The latter can always be accomplished since the distance between two consecutive vertices of $\gamma$ is bounded by $r$. The same argument implies $\gamma_{i_{j+1}-1}\in\Delta\mathsf{J}_{k_{j}}$. We define $N_{m}=\min\{j:\ i_{j}=n_{m}\}$ and

$\gamma^{m,j}=(\gamma^{m}_{i_{j}},\gamma^{m}_{i_{j}+1},\ldots,\gamma^{m}_{i_{j+1}-1}),\quad j=0,\ldots,N_{m}-1.$

These paths satisfy the conditions of Lemma 10.6. Hence, for $\Gamma_{m}=\{k_{0},\ldots,k_{N_{m}-1}\}$, we can use the random field $(\xi_{k})_{k\in\mathbb{Z}^{d}}$ provided by Lemma 10.6 to obtain

$\xi(\Gamma_{m})=\sum_{k\in\Gamma_{m}}\xi_{k}\leq\sum_{j=0}^{N_{m}-1}A_{\omega}(\gamma^{m,j})\leq A_{\omega}(\gamma^{m})\leq\varepsilon|y_{m}|\leq\varepsilon\lceil|y_{m}|\rceil.$

Since $\Gamma_{m}$ is $\ast$-connected, we can apply Lemma 10.5. Choosing $A=\varepsilon$ and using (10.9), we obtain, for sufficiently large $m$,

(10.10) $|\Gamma_{m}|\leq\beta\varepsilon\lceil|y_{m}|\rceil<\frac{1}{2\sqrt{d}R}\lceil|y_{m}|\rceil.$

But $\Gamma_{m}$ is a $\ast$-connected set containing both $0$ and $k_{N_{m}-1}$. Since $|y_{m}-Rk_{N_{m}-1}|\leq 2R$ and $|Rk_{N_{m}-1}|\leq|\Gamma_{m}|\sqrt{d}R$, we obtain $|y_{m}|\leq 2R+|\Gamma_{m}|\sqrt{d}R$, contradicting (10.10) and completing the proof. ∎

Now we can complete the proof of Lemma 10.1.

Proof.

Due to Lemma 10.7, for almost all $\omega\in\Omega$, the following holds for all $R>0$: there is $D_{R}=D_{R}(\omega)$ such that if $x,y\in\mathsf{B}(0,R)$ and a path $\gamma\in\mathcal{Q}^{r}_{x,y,*}(\omega)$ is not contained in $\mathsf{B}(0,D_{R})$, then $A_{\omega}(\gamma)\geq\mathcal{A}_{\omega}(x,y)+1$. Thus, paths in $\mathcal{Q}^{r}_{x,y,*}(\omega)$ with smaller action are contained in $\mathsf{B}(0,D_{R})$. Since there are finitely many paths in $\mathcal{Q}^{r}_{x,y,*}(\omega)$ contained in that ball, at least one of them realizes $\mathcal{A}_{\omega}(x,y)$. If such a path is unique, we set $\gamma_{\omega}(x,y)$ to be that path. If there are at least two minimizing paths, we need a tie-breaking rule. For example, if there is a minimizer not passing through any Poissonian points, we let $\gamma_{\omega}(x,y)$ be that minimizer (it is unique). If all minimizers pass through some Poissonian points, we choose $\gamma_{\omega}(x,y)$ to be the one containing the Poissonian point with minimal Euclidean norm. On a set of probability 1, this procedure results in a unique path. We define $\gamma_{\omega}(x,y)=(x,y)\in\mathcal{P}_{x,y,1}$ on the complement of this event.

To prove that the geodesic $\gamma$ thus defined is measurable, we note that (i) $\gamma_{\omega}(x,y)$ is the a.s. limit, as $D\to\infty$, of action minimizers restricted to the ball $\mathsf{B}(0,D)$; and (ii) these restricted minimizers are measurable since they are chosen among finitely many paths. ∎

References

  • [AB06] Charalambos D. Aliprantis and Kim C. Border. Infinite dimensional analysis: a hitchhiker’s guide. Springer, Berlin, third edition, 2006.
  • [AD95] D. Aldous and P. Diaconis. Hammersley’s interacting particle process and longest increasing subsequences. Probab. Theory Related Fields, 103(2):199–213, 1995.
  • [ADH17a] Antonio Auffinger, Michael Damron, and Jack Hanson. 50 years of first-passage percolation, volume 68 of University Lecture Series. American Mathematical Society, Providence, RI, 2017.
  • [ADH17b] Antonio Auffinger, Michael Damron, and Jack Hanson. 50 years of first-passage percolation, volume 68 of University Lecture Series. American Mathematical Society, Providence, RI, 2017.
  • [Bak16] Yuri Bakhtin. Inviscid Burgers equation with random kick forcing in noncompact setting. Electron. J. Probab., 21:50 pp., 2016.
  • [Bar01] Yu. Baryshnikov. GUEs and queues. Probab. Theory Related Fields, 119(2):256–274, 2001.
  • [BCK14] Yuri Bakhtin, Eric Cator, and Konstantin Khanin. Space-time stationary solutions for the Burgers equation. J. Amer. Math. Soc., 27(1):193–238, 2014.
  • [BD23a] Yuri Bakhtin and Douglas Dow. Differentiability of the effective Lagrangian for Hamilton-Jacobi-Bellman equations in dynamic random environments. arxiv preprint, https://arxiv.org/abs/2305.17276, 2023.
  • [BD23b] Yuri Bakhtin and Douglas Dow. Differentiability of the shape function for directed polymers in continuous space. arxiv preprint, https://arxiv.org/abs/2303.04224, 2023.
  • [BK18] Yuri Bakhtin and Konstantin Khanin. On global solutions of the random Hamilton-Jacobi equations and the KPZ problem. Nonlinearity, 31(4):R93–R121, 2018.
  • [BKMV23] Yuri Bakhtin, Konstantin Khanin, András Mészáros, and Jeremy Voltz. Last passage percolation in a product-type random environment, 2023.
  • [BL18] Yuri Bakhtin and Liying Li. Zero temperature limit for directed polymers and inviscid limit for stationary solutions of stochastic Burgers equation. J. Stat. Phys., 172(5):1358–1397, 2018.
  • [BL19] Yuri Bakhtin and Liying Li. Thermodynamic limit for directed polymers and stationary solutions of the Burgers equation. Comm. Pure Appl. Math., 72(3):536–619, 2019.
  • [CD81] J. Theodore Cox and Richard Durrett. Some limit theorems for percolation processes with necessary and sufficient conditions. Ann. Probab., 9(4):583–603, 1981.
  • [CGGK93] J. Theodore Cox, Alberto Gandolfi, Philip S. Griffin, and Harry Kesten. Greedy Lattice Animals I: Upper Bounds. The Annals of Applied Probability, 3(4):1151 – 1169, 1993.
  • [Con10] John B. Conway. A course in functional analysis, volume 96 of Graduate Texts in Mathematics. Springer Science+Business Media, New York, second edition, 2010.
  • [CP11] Eric Cator and Leandro P.R. Pimentel. A shape theorem and semi-infinite geodesics for the Hammersley model with random weights. ALEA, 8:163–175, 2011.
  • [Dac07] Bernard Dacorogna. Direct methods in the calculus of variations, volume 78. Springer Science & Business Media, 2007.
  • [dC92] Manfredo Perdigão do Carmo. Riemannian geometry. Mathematics. Theory and applications. Birkhäuser, Boston, 1992.
  • [DL81] Richard Durrett and Thomas M. Liggett. The shape of the limit set in Richardson’s growth model. Ann. Probab., 9(2):186–193, 1981.
  • [DVJ03] D. J. Daley and D. Vere-Jones. An introduction to the theory of point processes. Vol. I. Probability and its Applications (New York). Springer-Verlag, New York, second edition, 2003. Elementary theory and methods.
  • [GTW01] Janko Gravner, Craig A. Tracy, and Harold Widom. Limit theorems for height fluctuations in a class of discrete space and time growth models. J. Statist. Phys., 102(5-6):1085–1132, 2001.
  • [Ham72] J. M. Hammersley. A few seedlings of research. In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability (Univ. California, Berkeley, Calif., 1970/1971), Vol. I: Theory of statistics, pages 345–394. Univ. California Press, Berkeley, Calif., 1972.
  • [HM95] Olle Häggström and Ronald Meester. Asymptotic shapes for stationary first passage percolation. Ann. Probab., 23(4):1511–1522, 1995.
  • [HMO02] B. M. Hambly, James B. Martin, and Neil O’Connell. Concentration results for a Brownian directed percolation problem. Stochastic Process. Appl., 102(2):207–220, 2002.
  • [HN97] C. Douglas Howard and Charles M. Newman. Euclidean models of first-passage percolation. Probability Theory and Related Fields, 108:153–170, 1997. 10.1007/s004400050105.
  • [HW65] J. M. Hammersley and D. J. A. Welsh. First-Passage Percolation, Subadditive Processes, Stochastic Networks, and Generalized Renewal Theory, pages 61–110. Springer Berlin Heidelberg, Berlin, Heidelberg, 1965.
  • [JRAS22] Christopher Janjigian, Firas Rassoul-Agha, and Timo Seppäläinen. Ergodicity and synchronization of the Kardar-Parisi-Zhang equation, 2022. arXiv preprint, https://doi.org/10.48550/arxiv.2211.06779.
  • [Kat66] Tosio Kato. Perturbation Theory for Linear Operators. 1966.
  • [Kin68] J. F. C. Kingman. The ergodic theory of subadditive stochastic processes. J. Roy. Statist. Soc. Ser. B, 30:499–510, 1968.
  • [Kin73] J. F. C. Kingman. Subadditive ergodic theory. Ann. Probability, 1:883–909, 1973. With discussion by D. L. Burkholder, Daryl Daley, H. Kesten, P. Ney, Frank Spitzer and J. M. Hammersley, and a reply by the author.
  • [Kin93] J. F. C. Kingman. Poisson processes, volume 3 of Oxford Studies in Probability. The Clarendon Press Oxford University Press, New York, 1993. Oxford Science Publications.
  • [LW10] T. LaGatta and J. Wehr. A shape theorem for riemannian first-passage percolation. Journal of Mathematical Physics, 51(5):053502, 2010.
  • [Mar02] James B. Martin. Linear growth for greedy lattice animals. Stochastic Process. Appl., 98(1):43–66, 2002.
  • [Mar04] James B. Martin. Limiting shape for directed percolation models. Ann. Probab., 32(4):2908–2937, 2004.
  • [Mat90] John N. Mather. Differentiability of the minimal average action as a function of the rotation number. Bol. Soc. Brasil. Mat. (N.S.), 21(1):59–70, 1990.
  • [MO07] J. Moriarty and N. O’Connell. On the free energy of a directed polymer in a Brownian environment. Markov Process. Related Fields, 13(2):251–266, 2007.
  • [MZ37] Józef Marcinkiewicz and Antoni Zygmund. Sur les fonctions indépendantes. Fundamenta Mathematicae, 29(1):60–90, 1937.
  • [Ric73] Daniel Richardson. Random growth in a tessellation. Proc. Cambridge Philos. Soc., 74:515–528, 1973.
  • [Ros81] H. Rost. Nonequilibrium behaviour of a many particle process: density profile and local equilibria. Z. Wahrsch. Verw. Gebiete, 58(1):41–53, 1981.
  • [Sep12] Timo Seppäläinen. Scaling for a one-dimensional directed polymer with boundary conditions. Ann. Probab., 40(1):19–73, 2012.
  • [Szn98] Alain-Sol Sznitman. Brownian motion, obstacles and random media. Springer Monographs in Mathematics. Springer-Verlag, Berlin, 1998.
  • [TZ24] Son Tu and Jianlu Zhang. On the regularity of stochastic effective Hamiltonian. arxiv preprint, https://arxiv.org/abs/2312.15649, 2024.