
Excursions away from the Lipschitz minorant of a Lévy process

Steven N. Evans Department of Statistics #3860
367 Evans Hall
University of California at Berkeley
Berkeley, CA 94720-3860
USA
evans@stat.berkeley.edu
 and  Mehdi Ouaki Department of Statistics #3860
367 Evans Hall
University of California at Berkeley
Berkeley, CA 94720-3860
USA
mouaki@berkeley.edu
(Date: July 28, 2025)
Abstract.

For $\alpha>0$, the $\alpha$-Lipschitz minorant of a function $f:\mathbb{R}\rightarrow\mathbb{R}$ is the greatest function $m:\mathbb{R}\rightarrow\mathbb{R}$ such that $m\leq f$ and $|m(s)-m(t)|\leq\alpha|s-t|$ for all $s,t\in\mathbb{R}$, should such a function exist. If $X=(X_{t})_{t\in\mathbb{R}}$ is a real-valued Lévy process that is not a pure linear drift with slope $\pm\alpha$, then the sample paths of $X$ have an $\alpha$-Lipschitz minorant almost surely if and only if $\mathbb{E}[|X_{1}|]<\infty$ and $|\mathbb{E}[X_{1}]|<\alpha$. Denoting the minorant by $M$, we consider the contact set $\mathcal{Z}:=\{t\in\mathbb{R}:M_{t}=X_{t}\wedge X_{t-}\}$, which, since it is regenerative and stationary, has the distribution of the closed range of some subordinator "made stationary" in a suitable sense. We provide a description of the excursions of the Lévy process away from its contact set similar to the one presented in Itô excursion theory. We study the distribution of the excursion on the special interval straddling zero. We also give an explicit path decomposition of the other "generic" excursions in the case of Brownian motion with drift $\beta$ with $|\beta|<\alpha$. Finally, we investigate the progressive enlargement of the Brownian filtration by the random time that is the first point of the contact set after zero.

Key words and phrases:
fluctuation theory, regenerative set, subordinator, last exit decomposition, global minimum, path decomposition, enlargement of filtration
2010 Mathematics Subject Classification:
60G51, 60G55, 60J65
SNE supported in part by NSF grant DMS-1512933 and NIH grant 1R01GM109454-01.

1. Introduction

Recall that a function $g:\mathbb{R}\to\mathbb{R}$ is $\alpha$-Lipschitz for some $\alpha>0$ if $|g(s)-g(t)|\leq\alpha|s-t|$ for all $s,t\in\mathbb{R}$. Given a function $f:\mathbb{R}\to\mathbb{R}$, we say that $f$ dominates the $\alpha$-Lipschitz function $g$ if $g(t)\leq f(t)$ for all $t\in\mathbb{R}$. A necessary and sufficient condition for $f$ to dominate some $\alpha$-Lipschitz function is that $f$ is bounded below on compact intervals and satisfies $\liminf_{t\to-\infty}f(t)-\alpha t>-\infty$ and $\liminf_{t\to+\infty}f(t)+\alpha t>-\infty$. When the function $f$ dominates some $\alpha$-Lipschitz function, there is an $\alpha$-Lipschitz function $m$ dominated by $f$ such that $g(t)\leq m(t)$ for all $t\in\mathbb{R}$ for any $\alpha$-Lipschitz function $g$ dominated by $f$; we call $m$ the $\alpha$-Lipschitz minorant of $f$. The $\alpha$-Lipschitz minorant is given concretely by

(1.1)
$$m(t)=\sup\{h\in\mathbb{R}:h-\alpha|t-s|\leq f(s)\text{ for all }s\in\mathbb{R}\}=\inf\{f(s)+\alpha|t-s|:s\in\mathbb{R}\}.$$
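As a concrete illustration of the infimum formula in (1.1), the following sketch computes the minorant of a function sampled on a finite grid. This is our own discretized illustration, not part of the paper: the helper name `lipschitz_minorant` and the choice of window are assumptions, and the infimum is taken only over the grid rather than over all of $\mathbb{R}$.

```python
import numpy as np

def lipschitz_minorant(t, f, alpha):
    """Discrete approximation of the alpha-Lipschitz minorant via (1.1):
    m(t) = inf_s { f(s) + alpha*|t - s| }, with s restricted to the grid t."""
    # Broadcast to the matrix f(s_j) + alpha*|t_i - s_j| and minimize over j.
    return np.min(f[None, :] + alpha * np.abs(t[:, None] - t[None, :]), axis=1)

t = np.linspace(-1.0, 1.0, 401)
f = np.abs(t)                      # f(t) = |t|, which dominates 0
m = lipschitz_minorant(t, f, alpha=0.5)

# m is dominated by f, and a pointwise infimum of alpha-Lipschitz functions
# of t is again alpha-Lipschitz, so the grid version inherits both properties.
assert np.all(m <= f + 1e-12)
assert np.all(np.abs(np.diff(m)) / np.diff(t) <= 0.5 + 1e-9)
```

For this choice of $f$ the minorant is $m(t)=\tfrac{1}{2}|t|$, which the grid computation reproduces up to rounding.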

The purpose of the present paper is to continue the study of the $\alpha$-Lipschitz minorants of the sample paths of a two-sided Lévy process begun in [AE14].

A two-sided Lévy process is a real-valued stochastic process indexed by the real numbers that has càdlàg paths, stationary independent increments, and takes the value 0 at time 0. The distribution of a two-sided Lévy process $X$ is characterized by the Lévy-Khintchine formula $\mathbb{E}[e^{i\theta(X_{t}-X_{s})}]=e^{-(t-s)\Psi(\theta)}$ for $\theta\in\mathbb{R}$ and $-\infty<s\leq t<\infty$, where

$$\Psi(\theta)=-ia\theta+\frac{1}{2}\sigma^{2}\theta^{2}+\int_{\mathbb{R}}\left(1-e^{i\theta x}+i\theta x\mathbb{1}_{\{|x|\leq 1\}}\right)\,\Pi(dx)$$

with $a\in\mathbb{R}$, $\sigma\in\mathbb{R}_{+}$, and $\Pi$ a $\sigma$-finite measure concentrated on $\mathbb{R}\setminus\{0\}$ satisfying $\int_{\mathbb{R}}(1\wedge x^{2})\,\Pi(dx)<\infty$ (see [Ber96, Sat99] for information about (one-sided) Lévy processes; the two-sided case involves only trivial modifications). In order to avoid having to consider annoying, but trivial, special cases in what follows, we henceforth assume that $X$ is not just a deterministic linear drift $X_{t}=at$, $t\in\mathbb{R}$, for some $a\in\mathbb{R}$; that is, we assume that there is a non-trivial Brownian component ($\sigma>0$) or a non-trivial jump component ($\Pi\neq 0$).

The sample paths of $X$ have bounded variation almost surely if and only if $\sigma=0$ and $\int_{\mathbb{R}}(1\wedge|x|)\,\Pi(dx)<\infty$. In this case $\Psi$ can be rewritten as

$$\Psi(\theta)=-id\theta+\int_{\mathbb{R}}(1-e^{i\theta x})\,\Pi(dx).$$

We call $d\in\mathbb{R}$ the drift coefficient.

We now recall a few facts about the $\alpha$-Lipschitz minorants of the sample paths of $X$ from [AE14].

Either the $\alpha$-Lipschitz minorant exists for almost all sample paths of $X$ or it fails to exist for almost all sample paths of $X$. A necessary and sufficient condition for the $\alpha$-Lipschitz minorant to exist for almost all sample paths is that $\mathbb{E}[|X_{1}|]<\infty$ and $|\mathbb{E}[X_{1}]|<\alpha$. We assume from now on that this condition holds and denote the corresponding minorant process by $(M_{t})_{t\in\mathbb{R}}$. Figure 1.1 shows an example of a typical Brownian motion sample path and its associated $\alpha$-Lipschitz minorant.

Figure 1.1. A typical Brownian motion sample path and its associated $\alpha$-Lipschitz minorant.

Set $\mathcal{Z}:=\{t\in\mathbb{R}:M_{t}=X_{t}\wedge X_{t-}\}$. We call $\mathcal{Z}$ the contact set. The random closed set $\mathcal{Z}$ is non-empty, stationary, and regenerative in the sense of [FT88] (see Definition 2.1 below for a re-statement of the definition). Such a random closed set either has infinite Lebesgue measure almost surely or zero Lebesgue measure almost surely.

  • If the sample paths of $X$ have unbounded variation almost surely, then $\mathcal{Z}$ has zero Lebesgue measure almost surely.

  • If $X$ has sample paths of bounded variation and $|d|>\alpha$, then $\mathcal{Z}$ has zero Lebesgue measure almost surely.

  • If $X$ has sample paths of bounded variation and $|d|<\alpha$, then $\mathcal{Z}$ has infinite Lebesgue measure almost surely.

  • If $X$ has sample paths of bounded variation and $|d|=\alpha$, then whether the Lebesgue measure of $\mathcal{Z}$ is infinite or zero is determined by an integral condition involving the Lévy measure $\Pi$ that we omit. In particular, if $\sigma=0$, $\Pi(\mathbb{R})<\infty$, and $|d|=\alpha$, then the Lebesgue measure of $\mathcal{Z}$ is almost surely infinite.

If $\mathcal{Z}$ has zero Lebesgue measure, then $\mathcal{Z}$ is either almost surely a discrete set or almost surely a perfect set with empty interior.

  • If $\sigma>0$, then $\mathcal{Z}$ is almost surely discrete.

  • If $\sigma=0$ and $\Pi(\mathbb{R})=\infty$, then $\mathcal{Z}$ is almost surely discrete if and only if

    $$\int_{0}^{1}t^{-1}\mathbb{P}\{X_{t}\in[-\alpha t,\alpha t]\}\,dt<\infty.$$

  • If $\sigma=0$ and $\Pi(\mathbb{R})<\infty$, then $\mathcal{Z}$ is almost surely discrete if and only if $|d|>\alpha$.
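The contact set is easy to visualize numerically. The sketch below (our own grid discretization, not a construction from the paper) simulates a pinned Brownian-like path on a finite window, computes its $\alpha$-Lipschitz minorant via (1.1), and reads off the grid points where the path touches the minorant; consistently with the $\sigma>0$ case above, only a sparse set of touch points appears.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, T, n = 1.0, 20.0, 2001
t = np.linspace(-T, T, n)
dt = t[1] - t[0]

# Grid approximation of a two-sided Brownian path, shifted so that X_0 = 0.
X = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n - 1))])
X -= X[n // 2]

# alpha-Lipschitz minorant on the grid via the infimum formula (1.1).
m = np.min(X[None, :] + alpha * np.abs(t[:, None] - t[None, :]), axis=1)

contact = np.flatnonzero(np.isclose(X, m))  # grid points where X touches m
assert np.all(m <= X + 1e-9)   # m is indeed a minorant of the path
assert contact.size >= 1       # the global grid minimizer is always a contact point
assert contact.size < n // 10  # touch points are sparse, as expected when sigma > 0
```

The middle assertion holds deterministically: at the grid argmin $t^{*}$ of $X$ one has $X(s)+\alpha|t^{*}-s|\geq X(t^{*})$ for every grid point $s$, so $m(t^{*})=X(t^{*})$.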

The outline of the remainder of the paper is as follows.

In Section 2 we show that the pair $((X_{t})_{t\in\mathbb{R}},\mathcal{Z})$ is a space-time regenerative system in the sense that if $D_{t}:=\inf\{s\geq t:s\in\mathcal{Z}\}$ for $t\in\mathbb{R}$, then $((X_{D_{t}+u}-X_{D_{t}})_{u\geq 0},\mathcal{Z}\cap[D_{t},\infty)-D_{t})$ is independent of $((X_{u})_{u\leq D_{t}},\mathcal{Z}\cap(-\infty,D_{t}])$ with a distribution that does not depend on $t\in\mathbb{R}$. It follows that if $\mathcal{Z}$ is discrete, we write $0<T_{1}<T_{2}<\ldots$ for the successive positive elements of $\mathcal{Z}$, and we set $Y^{n}=(X_{T_{n}+t}-X_{T_{n}},\,0\leq t\leq T_{n+1}-T_{n})$, $n\in\mathbb{N}$, for the corresponding sequence of excursions away from the contact set, then these excursions are independent and identically distributed. When $\mathcal{Z}$ is not discrete there is a "local time" on $\mathcal{Z}\cap[0,\infty)$, and we give a description of the corresponding excursions away from the contact set as the points of a Poisson point process, analogous to Itô's description of the excursions of a Markov process away from a regular point.

Because $((X_{t})_{t\in\mathbb{R}},\mathcal{Z})$ is stationary, the key to establishing the space-time regenerative property is to show that if $D:=D_{0}$ is the first positive point in $\mathcal{Z}$, then $((X_{D+t}-X_{D})_{t\geq 0},\mathcal{Z}\cap[D,\infty)-D)$ is independent of $((X_{t})_{t\leq D},\mathcal{Z}\cap(-\infty,D])$. This is nontrivial because $D$ is most definitely not a stopping time for the canonical filtration of $X$, and so we cannot just apply the strong Markov property. We derive the claimed fact in Section 3 using a result from [Mil78] on the path decomposition of a real-valued Markov process at the time it achieves its global minimum. This result in turn is based on general last-exit decompositions from [PS72, GS74].

When the contact set is discrete, we obtain some information in Section 4 about the excursion away from the $\alpha$-Lipschitz minorant that straddles time zero, using ideas from [Tho00]. If $G$ is the last contact time before zero and $D$, as above, is the first contact time after zero, we show that $\frac{D}{D-G}$ is independent of $(X_{t}-X_{G},\,G\leq t<D)$ and uniformly distributed on $[0,1]$. This observation allows us to describe the finite-dimensional distributions of $(X_{t},\,G\leq t<D)$ in terms of those of $(X_{t},\,0\leq t<D)$, and we are able to determine the latter explicitly. The argument here is based on a generalization of the fact that if $V$ is a nonnegative random variable, $U$ is uniformly distributed on $[0,1]$, and $U$ and $V$ are independent, then it is possible to express the distribution of $V$ in terms of that of $UV$.

As before, write $Y^{n}$, $n\in\mathbb{N}$, for the independent, identically distributed sequence of excursions away from the contact set that occur at positive times in the case where the contact set is discrete. When $X$ is Brownian motion with drift $\beta$, where $|\beta|<\alpha$ in order for the $\alpha$-Lipschitz minorant to exist, we establish a path decomposition description for the common distribution of the $Y^{n}$ in Section 5. Using this path decomposition we can determine the distributions of quantities such as the length $T_{n+1}-T_{n}$ and the final value $X_{T_{n+1}}-X_{T_{n}}$. Moreover, if we write $Y^{0}$ for the excursion straddling time zero, then we have the "size-biasing" relationship $\mathbb{E}[f(Y^{0})]=\mathbb{E}[f(Y^{n})(T_{n+1}-T_{n})]/\mathbb{E}[T_{n+1}-T_{n}]$, $n\in\mathbb{N}$, for nonnegative measurable functions $f$, and this allows us to recover information about the distribution of $Y^{0}$ from a knowledge of the common distribution of the "generic" excursions $Y^{n}$, $n\in\mathbb{N}$.
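The size-biasing relationship is the familiar inspection paradox for stationary sequences of intervals. As a quick numerical sanity check (a toy renewal model of our own, not the excursion law itself), take i.i.d. Exp(1) interval lengths: the interval straddling a fixed reference time is length-biased, so its mean length is $\mathbb{E}[L^{2}]/\mathbb{E}[L]=2$ rather than $\mathbb{E}[L]=1$.

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.exponential(1.0, 200_000)   # i.i.d. "excursion lengths" with E[L] = 1
arrivals = np.cumsum(L)             # right endpoints of the successive intervals

# Sample the interval straddling many widely separated reference times.
t0s = np.linspace(10_000.0, 190_000.0, 2_000)
straddle = L[np.searchsorted(arrivals, t0s)]

# Length-biased sampling: the straddling interval has mean E[L^2]/E[L] = 2.
print(L.mean(), straddle.mean())   # close to 1 and close to 2, respectively
```

The same computation with any other length distribution illustrates the general formula: the straddling interval's law has density proportional to $x$ times the generic length density.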

As we noted above, the random time $D$ is not a stopping time for the canonical filtration of $X$. In Section 6 we investigate the filtration obtained by enlarging the Brownian filtration in such a way that $D$ becomes a stopping time. Martingales for the Brownian filtration become semimartingales in the enlarged filtration, and we are able to describe their canonical semimartingale decompositions quite explicitly.

The paper finishes with two auxiliary sections. Section 7 contains some (deterministic) results about the α\alpha-Lipschitz minorant construction that are used throughout the paper. Section 8 details two general lemmas about random times for Lévy processes that are used in Section 3 and Section 6.

2. Space-time regenerative systems

Let $\Omega^{\leftrightarrow}$ (resp. $\Omega^{\rightarrow}$) denote the space of càdlàg $\mathbb{R}$-valued paths indexed by $\mathbb{R}$ (resp. $\mathbb{R}_{+}$). For $t\in\mathbb{R}$, define $\tau_{t}:\Omega^{\leftrightarrow}\to\Omega^{\rightarrow}$ by

$$(\tau_{t}(\omega^{\leftrightarrow}))_{s}:=\omega^{\leftrightarrow}_{t+s}-\omega^{\leftrightarrow}_{t},\quad s\geq 0.$$

For $t\in\mathbb{R}$ define $x_{t}:\Omega^{\leftrightarrow}\to\mathbb{R}$ by

$$x_{t}(\omega^{\leftrightarrow}):=\omega^{\leftrightarrow}_{t}.$$

For $t\in\mathbb{R}$, define $k_{t}:\Omega^{\leftrightarrow}\to\Omega^{\leftrightarrow}$ by

$$(k_{t}(\omega^{\leftrightarrow}))_{s}:=\begin{cases}\omega^{\leftrightarrow}_{s},&\text{if }s\leq t,\\ \omega^{\leftrightarrow}_{t},&\text{if }s>t.\end{cases}$$

Let $\tilde{\Omega}^{\leftrightarrow}$ (resp. $\tilde{\Omega}^{\rightarrow}$) denote the class of closed subsets of $\mathbb{R}$ (resp. $\mathbb{R}_{+}$). For $t\in\mathbb{R}$ define $\tilde{\tau}_{t}:\tilde{\Omega}^{\leftrightarrow}\to\tilde{\Omega}^{\rightarrow}$ by

$$\tilde{\tau}_{t}(\tilde{\omega}^{\leftrightarrow}):=\{s-t:s\in\tilde{\omega}^{\leftrightarrow}\cap[t,\infty)\}.$$

For $t\in\mathbb{R}$ define $d_{t}:\tilde{\Omega}^{\leftrightarrow}\to\mathbb{R}\cup\{+\infty\}$ by

$$d_{t}(\tilde{\omega}^{\leftrightarrow}):=\inf\{s>t:s\in\tilde{\omega}^{\leftrightarrow}\}$$

and $r_{t}:\tilde{\Omega}^{\leftrightarrow}\to\mathbb{R}_{+}\cup\{+\infty\}$ by

$$r_{t}(\tilde{\omega}^{\leftrightarrow}):=d_{t}(\tilde{\omega}^{\leftrightarrow})-t.$$

With a slight abuse of notation, also use $d_{t}$ and $r_{t}$, $t\in\mathbb{R}_{+}$, to denote the analogously defined maps from $\tilde{\Omega}^{\rightarrow}$ to $\mathbb{R}_{+}\cup\{+\infty\}$.

Put $\bar{\Omega}^{\leftrightarrow}:=\Omega^{\leftrightarrow}\times\tilde{\Omega}^{\leftrightarrow}$ and $\bar{\Omega}^{\rightarrow}:=\Omega^{\rightarrow}\times\tilde{\Omega}^{\rightarrow}$. Define $\bar{\tau}_{t}:\bar{\Omega}^{\leftrightarrow}\to\bar{\Omega}^{\rightarrow}$ by

$$\bar{\tau}_{t}(\omega^{\leftrightarrow},\tilde{\omega}^{\leftrightarrow}):=(\tau_{t}(\omega^{\leftrightarrow}),\tilde{\tau}_{t}(\tilde{\omega}^{\leftrightarrow})).$$

Define $\bar{d}_{t}:\bar{\Omega}^{\leftrightarrow}\to\mathbb{R}\cup\{+\infty\}$ by

$$\bar{d}_{t}(\omega^{\leftrightarrow},\tilde{\omega}^{\leftrightarrow}):=d_{t}(\tilde{\omega}^{\leftrightarrow}).$$

Finally, for $t\in\mathbb{R}$ define the following $\sigma$-fields on $\bar{\Omega}^{\leftrightarrow}$:

$$\bar{\mathcal{G}}^{\leftrightarrow}_{t}:=\sigma\{\bar{d}_{s},k_{\bar{d}_{s}},\,s\leq t\}$$

and

$$\bar{\mathcal{G}}^{\leftrightarrow}:=\sigma\{\bar{d}_{s},k_{\bar{d}_{s}},\,s\in\mathbb{R}\}.$$

Define $\bar{\mathcal{G}}^{\rightarrow}_{t}$ and $\bar{\mathcal{G}}^{\rightarrow}$ analogously.

Definition 2.1.

Let $\bar{\mathbb{Q}}^{\leftrightarrow}$ (resp. $\bar{\mathbb{Q}}^{\rightarrow}$) be a probability measure on $(\bar{\Omega}^{\leftrightarrow},\bar{\mathcal{G}}^{\leftrightarrow})$ (resp. $(\bar{\Omega}^{\rightarrow},\bar{\mathcal{G}}^{\rightarrow})$). Then $\bar{\mathbb{Q}}^{\leftrightarrow}$ is regenerative with regeneration law $\bar{\mathbb{Q}}^{\rightarrow}$ if

  • (i) $\bar{\mathbb{Q}}^{\leftrightarrow}\{\bar{d}_{t}=+\infty\}=0$ for all $t\in\mathbb{R}$;

  • (ii) for all $t\in\mathbb{R}$ and for all $\bar{\mathcal{G}}^{\rightarrow}$-measurable nonnegative functions $F$,

    $$\bar{\mathbb{Q}}^{\leftrightarrow}\left[F(\bar{\tau}_{\bar{d}_{t}})\,|\,\bar{\mathcal{G}}^{\leftrightarrow}_{t+}\right]=\bar{\mathbb{Q}}^{\rightarrow}[F],$$

    where we write $\bar{\mathbb{Q}}^{\leftrightarrow}[\cdot]$ and $\bar{\mathbb{Q}}^{\rightarrow}[\cdot]$ for expectations with respect to $\bar{\mathbb{Q}}^{\leftrightarrow}$ and $\bar{\mathbb{Q}}^{\rightarrow}$.

Remark 2.2.

Suppose that the probability measure $\bar{\mathbb{Q}}^{\leftrightarrow}$ on $(\bar{\Omega}^{\leftrightarrow},\bar{\mathcal{G}}^{\leftrightarrow})$ is stationary; that is, that under $\bar{\mathbb{Q}}^{\leftrightarrow}$ the process $(\omega^{\leftrightarrow},\tilde{\omega}^{\leftrightarrow})\mapsto(x_{t}(\omega^{\leftrightarrow}),r_{t}(\tilde{\omega}^{\leftrightarrow}))_{t\in\mathbb{R}}$ has the same distribution as the process $(\omega^{\leftrightarrow},\tilde{\omega}^{\leftrightarrow})\mapsto(x_{s+t}(\omega^{\leftrightarrow})-x_{s}(\omega^{\leftrightarrow}),r_{s+t}(\tilde{\omega}^{\leftrightarrow}))_{t\in\mathbb{R}}$ for all $s\in\mathbb{R}$. Then, in order to check conditions (i) and (ii) of Definition 2.1, it suffices to check them for the case $t=0$.

Theorem 2.3.
  • (i) In order to check that the probability measure $\bar{\mathbb{Q}}^{\leftrightarrow}$ on $(\bar{\Omega}^{\leftrightarrow},\bar{\mathcal{G}}^{\leftrightarrow})$ is space-time regenerative with the probability measure $\bar{\mathbb{Q}}^{\rightarrow}$ on $(\bar{\Omega}^{\rightarrow},\bar{\mathcal{G}}^{\rightarrow})$ as regeneration law, it suffices to check

    • (a) $\bar{\mathbb{Q}}^{\leftrightarrow}\{\bar{d}_{t}=+\infty\}=0$ for all $t\in\mathbb{R}$;

    • (b) for all $t\in\mathbb{R}$ and for all $\bar{\mathcal{G}}^{\rightarrow}$-measurable nonnegative functions $F$,

      $$\bar{\mathbb{Q}}^{\leftrightarrow}\left[F(\bar{\tau}_{\bar{d}_{t}})\,|\,\bar{\mathcal{G}}^{\leftrightarrow}_{t}\right]=\bar{\mathbb{Q}}^{\rightarrow}[F].$$

  • (ii) Suppose that the probability measure $\bar{\mathbb{Q}}^{\leftrightarrow}$ on $(\bar{\Omega}^{\leftrightarrow},\bar{\mathcal{G}}^{\leftrightarrow})$ is space-time regenerative with the probability measure $\bar{\mathbb{Q}}^{\rightarrow}$ on $(\bar{\Omega}^{\rightarrow},\bar{\mathcal{G}}^{\rightarrow})$ as regeneration law and that $T$ is an almost surely finite $(\bar{\mathcal{G}}^{\leftrightarrow}_{t+})_{t\in\mathbb{R}}$-stopping time. Then for all $\bar{\mathcal{G}}^{\rightarrow}$-measurable nonnegative functions $F$,

    $$\bar{\mathbb{Q}}^{\leftrightarrow}\left[F(\bar{\tau}_{\bar{d}_{T}})\,|\,\bar{\mathcal{G}}^{\leftrightarrow}_{T+}\right]=\bar{\mathbb{Q}}^{\rightarrow}[F].$$
Proof.

(i) Fix $t\in\mathbb{R}$. For $n\in\mathbb{N}$ set $t_{n}:=t+2^{-n}$.

Consider $F:\bar{\Omega}^{\rightarrow}\to\mathbb{R}_{+}$ of the form

$$F((\omega^{\rightarrow},\tilde{\omega}^{\rightarrow}))=f(\omega^{\rightarrow}_{s_{1}},\ldots,\omega^{\rightarrow}_{s_{\ell}},r_{s_{1}}(\tilde{\omega}^{\rightarrow}),\ldots,r_{s_{\ell}}(\tilde{\omega}^{\rightarrow}))$$

for some $0\leq s_{1}<s_{2}<\ldots<s_{\ell}$ and bounded, continuous function $f:\mathbb{R}^{\ell}\times(\mathbb{R}\cup\{+\infty\})^{\ell}\to\mathbb{R}_{+}$. For such an $F$ we have

$$\lim_{n\to\infty}F(\bar{\tau}_{\bar{d}_{t_{n}}}(\bar{\omega}^{\leftrightarrow}))=F(\bar{\tau}_{\bar{d}_{t}}(\bar{\omega}^{\leftrightarrow}))$$

for all $\bar{\omega}^{\leftrightarrow}\in\bar{\Omega}^{\leftrightarrow}$, and it suffices by a monotone class argument to show that

$$\bar{\mathbb{Q}}^{\leftrightarrow}\left[F(\bar{\tau}_{\bar{d}_{t_{n}}})\,|\,\bar{\mathcal{G}}^{\leftrightarrow}_{t+}\right]=\bar{\mathbb{Q}}^{\rightarrow}[F]$$

for all $n\in\mathbb{N}$. This, however, is clear because $\bar{\mathcal{G}}^{\leftrightarrow}_{t+}\subseteq\bar{\mathcal{G}}^{\leftrightarrow}_{t_{n}}$ and

$$\bar{\mathbb{Q}}^{\leftrightarrow}\left[F(\bar{\tau}_{\bar{d}_{t_{n}}})\,|\,\bar{\mathcal{G}}^{\leftrightarrow}_{t_{n}}\right]=\bar{\mathbb{Q}}^{\rightarrow}[F]$$

by assumption.

(ii) For $n\in\mathbb{N}$ define a $(\bar{\mathcal{G}}^{\leftrightarrow}_{t})_{t\in\mathbb{R}}$-stopping time $T_{n}$ by declaring that $T_{n}:=\frac{k}{2^{n}}$ when $T\in[\frac{k-1}{2^{n}},\frac{k}{2^{n}})$, $k\in\mathbb{Z}$.

Let $F$ be as in the proof of part (i). For such an $F$ we have

$$\lim_{n\to\infty}F(\bar{\tau}_{\bar{d}_{T_{n}}}(\bar{\omega}^{\leftrightarrow}))=F(\bar{\tau}_{\bar{d}_{T}}(\bar{\omega}^{\leftrightarrow}))$$

for all $\bar{\omega}^{\leftrightarrow}\in\bar{\Omega}^{\leftrightarrow}$, and it suffices by a monotone class argument to show that

$$\bar{\mathbb{Q}}^{\leftrightarrow}\left[F(\bar{\tau}_{\bar{d}_{T_{n}}})\,|\,\bar{\mathcal{G}}^{\leftrightarrow}_{T+}\right]=\bar{\mathbb{Q}}^{\rightarrow}[F]$$

for all $n\in\mathbb{N}$. Since $\bar{\mathcal{G}}^{\leftrightarrow}_{T+}\subseteq\bar{\mathcal{G}}^{\leftrightarrow}_{T_{n}+}$ for all $n\in\mathbb{N}$, it further suffices to show that

$$\bar{\mathbb{Q}}^{\leftrightarrow}\left[F(\bar{\tau}_{\bar{d}_{T_{n}}})\,|\,\bar{\mathcal{G}}^{\leftrightarrow}_{T_{n}+}\right]=\bar{\mathbb{Q}}^{\rightarrow}[F].$$

Fix $n\in\mathbb{N}$ and suppose that $G$ is a nonnegative $\bar{\mathcal{G}}^{\leftrightarrow}_{T_{n}+}$-measurable random variable. We have

$$\begin{split}\bar{\mathbb{Q}}^{\leftrightarrow}\left[F(\bar{\tau}_{\bar{d}_{T_{n}}})\,G\right]&=\sum_{k\in\mathbb{Z}}\bar{\mathbb{Q}}^{\leftrightarrow}\left[F(\bar{\tau}_{\bar{d}_{T_{n}}})\,G\,\mathbb{1}\left\{T_{n}=\tfrac{k}{2^{n}}\right\}\right]\\ &=\sum_{k\in\mathbb{Z}}\bar{\mathbb{Q}}^{\leftrightarrow}\left[F(\bar{\tau}_{\bar{d}_{k/2^{n}}})\,G\,\mathbb{1}\left\{T_{n}=\tfrac{k}{2^{n}}\right\}\right]\\ &=\bar{\mathbb{Q}}^{\rightarrow}[F]\sum_{k\in\mathbb{Z}}\bar{\mathbb{Q}}^{\leftrightarrow}\left[G\,\mathbb{1}\left\{T_{n}=\tfrac{k}{2^{n}}\right\}\right]\\ &=\bar{\mathbb{Q}}^{\rightarrow}[F]\,\bar{\mathbb{Q}}^{\leftrightarrow}\left[G\right],\end{split}$$

where in the penultimate equality we used the fact that $G\,\mathbb{1}\{T_{n}=\frac{k}{2^{n}}\}$ is $\bar{\mathcal{G}}^{\leftrightarrow}_{k/2^{n}}$-measurable (see, for example, [Kal02, Lemma 7.1(ii)]). This completes the proof. ∎
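The dyadic discretization used in part (ii) replaces $T$ by the smallest level-$n$ dyadic rational strictly above it, so that $T_{n}>T$, $T_{n}-T\leq 2^{-n}$, and the $T_{n}$ decrease to $T$. A minimal numeric sketch (the helper name `dyadic_above` is our own):

```python
import math

def dyadic_above(T, n):
    """T_n = k/2^n on the event {T in [(k-1)/2^n, k/2^n)}: the smallest
    level-n dyadic strictly greater than T, i.e. (floor(T*2^n) + 1)/2^n."""
    return (math.floor(T * 2**n) + 1) / 2**n

T = 0.7371
approx = [dyadic_above(T, n) for n in range(1, 12)]
assert all(a > T for a in approx)                           # approximates from above
assert all(a - T <= 2**-n for a, n in zip(approx, range(1, 12)))
assert all(x >= y for x, y in zip(approx, approx[1:]))      # nonincreasing in n
```

Monotonicity holds because the level-$(n+1)$ dyadics refine the level-$n$ dyadics, which is exactly why the $T_{n}$ form stopping times decreasing to $T$.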

Theorem 2.4.

Suppose for the Lévy process $(X_{t}+\alpha t)_{t\in\mathbb{R}}$ that 0 is regular for $(0,\infty)$. Then the distribution of $((X_{t})_{t\in\mathbb{R}},\mathcal{Z})$ is space-time regenerative.

Proof.

Use Theorem 3.5 below, Remark 2.2, and part (i) of Theorem 2.3. ∎

Remark 2.5.

If 0 is not regular for $(0,\infty)$ for the Lévy process $(X_{t}+\alpha t)_{t\in\mathbb{R}}$, then 0 is regular for $(-\infty,0)$ for the Lévy process $(X_{t}-\alpha t)_{t\in\mathbb{R}}$. Equivalently, if 0 is not regular for $(0,\infty)$ for the Lévy process $(X_{t}+\alpha t)_{t\in\mathbb{R}}$, then 0 is regular for $(0,\infty)$ for the Lévy process $(-X_{t}+\alpha t)_{t\in\mathbb{R}}$ and hence for the Lévy process $(X_{-t-}+\alpha t)_{t\in\mathbb{R}}$. Thus, either the distribution of $((X_{t})_{t\in\mathbb{R}},\mathcal{Z})$ is space-time regenerative or the distribution of $((X_{-t-})_{t\in\mathbb{R}},\mathcal{Z})$ is space-time regenerative.

Write $\tilde{\pi}^{\leftrightarrow}:\bar{\Omega}^{\leftrightarrow}\to\tilde{\Omega}^{\leftrightarrow}$ for the projection $\bar{\omega}^{\leftrightarrow}=(\omega^{\leftrightarrow},\tilde{\omega}^{\leftrightarrow})\mapsto\tilde{\omega}^{\leftrightarrow}$. Define $\tilde{\pi}^{\rightarrow}:\bar{\Omega}^{\rightarrow}\to\tilde{\Omega}^{\rightarrow}$ similarly. If $\bar{\mathbb{Q}}^{\leftrightarrow}$ is space-time regenerative with regeneration law $\bar{\mathbb{Q}}^{\rightarrow}$, then, in the sense of [FT88], the push-forward of $\bar{\mathbb{Q}}^{\leftrightarrow}$ by $\tilde{\pi}^{\leftrightarrow}$ is regenerative with regeneration law the push-forward of $\bar{\mathbb{Q}}^{\rightarrow}$ by $\tilde{\pi}^{\rightarrow}$. It follows that $\bar{\mathbb{Q}}^{\rightarrow}\{(\omega^{\rightarrow},\tilde{\omega}^{\rightarrow}):\tilde{\omega}^{\rightarrow}\text{ is discrete}\}$ is either 1 or 0.

Suppose that the probability in question is 1. Define $(\bar{\mathcal{G}}^{\leftrightarrow}_{t})_{t\in\mathbb{R}}$-stopping times $T_{1},T_{2},\ldots$ with $0<T_{1}<T_{2}<\ldots$ almost surely by

$$T_{1}:=\bar{d}_{0}$$

and

$$T_{n+1}(\bar{\omega}^{\leftrightarrow}):=\bar{d}_{T_{n}(\bar{\omega}^{\leftrightarrow})}(\bar{\omega}^{\leftrightarrow})=T_{n}(\bar{\omega}^{\leftrightarrow})+\bar{d}_{0}\circ\bar{\theta}_{T_{n}(\bar{\omega}^{\leftrightarrow})}(\bar{\omega}^{\leftrightarrow}),\quad n\in\mathbb{N},$$

where $\bar{\theta}_{t}:\bar{\Omega}^{\leftrightarrow}\to\bar{\Omega}^{\leftrightarrow}$, $t\in\mathbb{R}$, are the shift maps given by $\bar{\theta}_{t}(\omega^{\leftrightarrow},\tilde{\omega}^{\leftrightarrow})=((\omega^{\leftrightarrow}_{t+u})_{u\in\mathbb{R}},\tilde{\omega}^{\leftrightarrow}-t)$. Let $\partial$ be an isolated cemetery state adjoined to $\mathbb{R}$. Define càdlàg $\mathbb{R}\cup\{\partial\}$-valued processes $Y^{n}=(Y^{n}_{t})_{t\in\mathbb{R}_{+}}$, $n\in\mathbb{N}$, by

$$Y^{n}_{t}(\omega^{\leftrightarrow},\tilde{\omega}^{\leftrightarrow}):=\begin{cases}(\pi^{\rightarrow}\circ\bar{\tau}_{T_{n}(\omega^{\leftrightarrow},\tilde{\omega}^{\leftrightarrow})}(\omega^{\leftrightarrow},\tilde{\omega}^{\leftrightarrow}))_{t},&0\leq t<\bar{d}_{0}\circ\bar{\theta}_{T_{n}(\bar{\omega}^{\leftrightarrow})}(\bar{\omega}^{\leftrightarrow})=:\zeta_{n},\\ \partial,&t\geq\zeta_{n},\end{cases}$$

where $\pi^{\rightarrow}:\bar{\Omega}^{\rightarrow}\to\Omega^{\rightarrow}$ is the projection $(\omega^{\rightarrow},\tilde{\omega}^{\rightarrow})\mapsto\omega^{\rightarrow}$. Then, under $\bar{\mathbb{Q}}^{\leftrightarrow}$, the sequence $Y^{n}$, $n\in\mathbb{N}$, is independent and identically distributed.

The path of each $Y^{n}$ lies in the set $\Omega^{0,\partial}$ consisting of càdlàg functions $f:\mathbb{R}_{+}\to\mathbb{R}\cup\{\partial\}$ such that $f(0)=0$, $0<\inf\{s\geq 0:f(s)=\partial\}<\infty$, and $f(t)=\partial$ for all $t\geq\inf\{s\geq 0:f(s)=\partial\}$.

When the probability in question is 0 there is a local time on our regenerative set, and we can construct a Poisson random measure on the set $\mathbb{R}\times\Omega^{0,\partial}$ that records the excursions away from the contact set and the order in which they occur. We use the following theorem, which is a restatement of [GP80, Corollary 3.1].

Theorem 2.6.

Let $(O_{k})_{k\in\mathbb{N}}$ be an increasing family of measurable sets in a measurable space $(O,\mathcal{O})$ such that $O=\bigcup_{k\in\mathbb{N}}O_{k}$. Let $\mathbf{V}$ be an $O$-valued point process; that is, $\mathbf{V}=(\mathbf{V}_{t})_{t\geq 0}$ is a stochastic process with values in $O\cup\{\dagger\}$ for some adjoined point $\dagger$ such that $\{t\geq 0:\mathbf{V}_{t}\neq\dagger\}$ is almost surely countable. Suppose that $\{t\geq 0:\mathbf{V}_{t}\in O_{k}\}$ is almost surely discrete and unbounded for all $k\in\mathbb{N}$, while $\{t\geq 0:\mathbf{V}_{t}\in O\}$ is almost surely not discrete. Suppose that the sequence $\{\mathbf{V}_{t}:\mathbf{V}_{t}\in O_{k}\}$ is independent and identically distributed for each $k\in\mathbb{N}$. For $k\in\mathbb{N}$ define $(N_{k}(t))_{t\geq 0}$ by setting $N_{k}(t)=\#\{0\leq u\leq t:\mathbf{V}_{u}\in O_{k}\}$ for $t\geq 0$. For $k\in\mathbb{N}$ set $T_{k}=\inf\{t>0:\mathbf{V}_{t}\in O_{k}\}$ and put $p_{k}=\mathbb{P}\{\mathbf{V}_{T_{k}}\in O_{1}\}$. Then, for almost all $\omega\in\Omega$, uniformly for bounded $t\geq 0$,

$$\lim_{k\rightarrow\infty}p_{k}N_{k}(t,\omega)=L(t,\omega),$$

where $L(t,\omega)$ is continuous and nondecreasing in $t\geq 0$, and strictly increasing on $\{t\geq 0:\mathbf{V}(t,\omega)\neq\dagger\}$. For such $\omega$, set

$$\mathbf{V}^{*}(s,\omega)=\begin{cases}\mathbf{V}(t,\omega),&\text{if }s=L(t,\omega),\\ \dagger,&\text{otherwise.}\end{cases}$$

Then $\mathbf{V}^{*}$ is a homogeneous Poisson point process; that is, the random measure that puts mass 1 at each point $(t,f)\in\mathbb{R}_{+}\times O$ such that $\mathbf{V}^{*}_{t}=f$ is a Poisson random measure with intensity of the form $\lambda\otimes\nu$, where $\lambda$ is Lebesgue measure on $\mathbb{R}_{+}$ and $\nu$ is a $\sigma$-finite measure on $(O,\mathcal{O})$. Moreover, for almost all $\omega\in\Omega$ and all $t$ with $\mathbf{V}(t,\omega)\neq\dagger$, we have $\mathbf{V}(t,\omega)=\mathbf{V}^{*}(L(t,\omega),\omega)$.

We can apply this theorem if we take $O$ to be the space $\Omega^{0,\partial}$ of càdlàg paths that vanish at the origin and have finite lifetimes, and take $O_{k}$ to be the subspace of paths with lifetime at least $\frac{1}{k}$. We define $\mathbf{V}$ to be the point process of excursions such that, for every $t\geq 0$, $\mathbf{V}_{t}$ is the excursion whose right endpoint is $t$, with the convention that $\mathbf{V}_{t}=\dagger$ if $t$ is not the right endpoint of an excursion. In the case where $\mathcal{Z}$ is not discrete, all the conditions of Theorem 2.6 can readily be checked and we obtain a time-changed Poisson point process.

3. The process after the first positive point in the contact set

Notation 3.1.

For $t\in\mathbb{R}$ set $G_{t}:=\sup(\mathcal{Z}\cap(-\infty,t))$ and $D_{t}:=\inf(\mathcal{Z}\cap(t,+\infty))$. Put $G:=G_{0}$ and $D:=D_{0}$.

Remark 3.2.

We have from Lemma 7.3 that

$$D=\inf\{t\geq S:X_{t}\wedge X_{t-}+\alpha t=\inf\{X_{u}+\alpha u:u\geq S\}\},$$

where

$$S=S_{0}:=\inf\{s>0:X_{s}\wedge X_{s-}-\alpha s\leq\inf\{X_{u}-\alpha u:u\leq 0\}\}$$

because almost surely $X_{S}\leq X_{S-}$. The latter result was shown in the proof of [AE14, Theorem 2.6].

Notation 3.3.

For $t\in\mathbb{R}$ put $\mathcal{F}_{t}=\bigcap_{\epsilon>0}\sigma\{X_{s}:-\infty<s\leq t+\epsilon\}$. Define the $\sigma$-field $\mathcal{F}_{U}$ for any nonnegative random time $U$ to be the $\sigma$-field generated by all random variables of the form $\xi_{U}$, where $(\xi_{t})_{t\in\mathbb{R}}$ is an optional process with respect to the filtration $(\mathcal{F}_{t})_{t\in\mathbb{R}}$. Similarly, define $\mathcal{F}_{U-}$ to be the $\sigma$-field generated by all random variables of the form $\xi_{U}$, where $(\xi_{t})_{t\in\mathbb{R}}$ is now a previsible process with respect to the filtration $(\mathcal{F}_{t})_{t\in\mathbb{R}}$.

Notation 3.4.

Let $\tilde{X}=(\tilde{\Omega},\tilde{\mathcal{F}},\tilde{\mathcal{F}}_{t},\tilde{X}_{t},\tilde{\theta}_{t},\tilde{\mathbb{P}}^{x})$ be a Hunt process such that the distribution of $\tilde{X}$ under $\tilde{\mathbb{P}}^{x}$ is that of $(x+X_{t}+\alpha t)_{t\geq 0}$. Put $\breve{T}:=\inf\{t>0:\tilde{X}_{t}\wedge\tilde{X}_{t-}<0\}$, and for $t\geq 0$ and $x,y>0$ put

$$\breve{H}_{t}(x,dy):=\tilde{\mathbb{P}}^{x}\{\tilde{X}_{t}\in dy,\,t<\breve{T}\}\frac{\tilde{\mathbb{P}}^{y}\{\breve{T}=\infty\}}{\tilde{\mathbb{P}}^{x}\{\breve{T}=\infty\}}.$$

We interpret $(\breve{H}_{t})_{t\geq 0}$ as the transition functions of the Markov process $\tilde{X}$ conditioned to stay positive.
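A discrete analogue may clarify why $\breve{H}$ is an honest transition kernel: for a nearest-neighbour random walk with upward drift, killed when it steps below 0, the Doob $h$-transform with $h(x)=\mathbb{P}^{x}\{\text{never killed}\}$ produces row sums equal to 1 because $h$ is harmonic for the killed walk. This is only an illustrative sketch of ours: the walk, the truncation level `N`, and the gambler's-ruin formula for $h$ are our additions, not the paper's setting.

```python
import numpy as np

p = 0.7          # probability of an up-step; p > 1/2 gives upward drift
q = 1.0 - p
N = 60           # truncate the state space {0, 1, ..., N} for illustration

def h(x):
    # Gambler's-ruin formula: starting from x >= 0, the walk hits level -1
    # with probability (q/p)^(x+1), so h(x) = P^x{never go below 0}.
    return 1.0 - (q / p) ** (x + 1)

# \breve H analogue: H(x, y) = p(x, y) * 1{y >= 0} * h(y) / h(x).
H = np.zeros((N, N + 1))
for x in range(N):
    if x >= 1:
        H[x, x - 1] = q * h(x - 1) / h(x)   # step down, survive, reweight
    H[x, x + 1] = p * h(x + 1) / h(x)       # step up, reweight

# Harmonicity of h for the killed walk makes every row a probability vector.
assert np.allclose(H.sum(axis=1), 1.0)
```

The conditioned kernel pushes mass upward: from state 0 the only allowed move is up, mirroring the fact that the process conditioned to stay positive cannot cross into the negative half-line.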

Theorem 3.5.

Suppose that 0 is regular for (0,)(0,\infty) for the Markov process X~\tilde{X}. Then the process (Xt+DXD)t0(X_{t+D}-X_{D})_{t\geq 0} is independent of (Xt,<tD)(X_{t},-\infty<t\leq D). Moreover, the process (Xt+DXD+αt)t0(X_{t+D}-X_{D}+\alpha t)_{t\geq 0} is Markovian with transition functions (H˘t)t0(\breve{H}_{t})_{t\geq 0} and a certain family of entrance laws (Q˘t)t0(\breve{Q}_{t})_{t\geq 0}.

Proof.

Because (Xt)t(X_{t})_{t\in\mathbb{R}} is a two-sided Lévy process and SS is a stopping time, the process Xˇ:=(Xt+SXS+αt)t0\check{X}:=(X_{t+S}-X_{S}+\alpha t)_{t\geq 0} is, by the strong Markov property, independent of S\mathcal{F}_{S} and has the same distribution as the process X~\tilde{X} under ~0\tilde{\mathbb{P}}^{0}.

By [Mil77, Proposition 2.4], the set {t0:X~tX~t=inf{X~s:s0}}\{t\geq 0:\tilde{X}_{t}\wedge\tilde{X}_{t-}=\inf\{\tilde{X}_{s}:s\geq 0\}\} consists ~x\tilde{\mathbb{P}}^{x}-almost surely of a single point T~\tilde{T} for all xx\in\mathbb{R}. Consequently, the set {t0:XˇtXˇt=inf{Xˇs:s0}}\{t\geq 0:\check{X}_{t}\wedge\check{X}_{t-}=\inf\{\check{X}_{s}:s\geq 0\}\} also consists almost surely of a single point Tˇ\check{T}. From Remark 3.2 we have D=S+TˇD=S+\check{T}.

Because 0 is regular for (0,)(0,\infty) for the Markov process X~\tilde{X} and thus X~T~=inf{X~s:s0}\tilde{X}_{\tilde{T}}=\inf\{\tilde{X}_{s}:s\geq 0\}, it follows from the sole theorem in [Mil78] that the process (X~T~+t)t0(\tilde{X}_{\tilde{T}+t})_{t\geq 0} is independent of ~T~\tilde{\mathcal{F}}_{\tilde{T}} given X~T~\tilde{X}_{\tilde{T}}.

Moreover, there exists a family of entrance laws (Qt(x;))t0(Q_{t}(x;\cdot))_{t\geq 0} for each xx\in\mathbb{R} and a family of transition functions (Ht(x;,))t0(H_{t}(x;\cdot,\cdot))_{t\geq 0} for each xx\in\mathbb{R} such that

~x{X~t+T~A|~T~}=Qt(X~T~;A),t0,\tilde{\mathbb{P}}^{x}\{\tilde{X}_{t+\tilde{T}}\in A\,|\,\tilde{\mathcal{F}}_{\tilde{T}}\}=Q_{t}(\tilde{X}_{\tilde{T}};A),\quad t\geq 0,

and

~x{X~t+T~A|~T~+s}=Hts(X~T~;X~T~+s,A),0<s<t.\tilde{\mathbb{P}}^{x}\{\tilde{X}_{t+\tilde{T}}\in A\,|\,\tilde{\mathcal{F}}_{\tilde{T}+s}\}=H_{t-s}(\tilde{X}_{\tilde{T}};\tilde{X}_{\tilde{T}+s},A),\quad 0<s<t.

Using the fact that the processes (x+X~t+T~)t0(x+\tilde{X}_{t+\tilde{T}})_{t\geq 0} under ~0\tilde{\mathbb{P}}^{0} and (X~t+T~)t0(\tilde{X}_{t+\tilde{T}})_{t\geq 0} under ~x\tilde{\mathbb{P}}^{x} have the same law, it follows that Qt(x;x+A)=Qt(0,A)Q_{t}(x;x+A)=Q_{t}(0,A) and Ht(x;x+y,x+A)=Ht(0;y,A)H_{t}(x;x+y,x+A)=H_{t}(0;y,A). Thus the process (X~t+T~X~T~)t0(\tilde{X}_{t+\tilde{T}}-\tilde{X}_{\tilde{T}})_{t\geq 0} is independent of ~T~\mathcal{\tilde{F}}_{\tilde{T}} and, moreover, this process is Markovian with the entrance law AQt(0;A)=:Q˘t(A)A\mapsto Q_{t}(0;A)=:\breve{Q}_{t}(A), t0t\geq 0 and transition functions (y,A)Ht(0;y,A)=H˘t(y,A)(y,A)\mapsto H_{t}(0;y,A)=\breve{H}_{t}(y,A), t0t\geq 0. Applying Lemma  8.2 we get that (Xt+DXD+αt)t0(X_{t+D}-X_{D}+\alpha t)_{t\geq 0} = (Xˇt+TˇXˇTˇ)t0(\check{X}_{t+\check{T}}-\check{X}_{\check{T}})_{t\geq 0} is independent of Dσ{XD}\mathcal{F}_{D-}\vee\sigma\{X_{D}\} and Markovian with transition functions (H˘t)t0(\breve{H}_{t})_{t\geq 0} and entrance laws (Q˘t)t0(\breve{Q}_{t})_{t\geq 0}.

Introduce the killed process (X¯t)t(\bar{X}_{t})_{t\in\mathbb{R}} defined by

X¯t:={Xt,t<D,XD,t=D,,t>D,\bar{X}_{t}:=\begin{cases}X_{t},&t<D,\\ X_{D},&t=D,\\ \partial,&t>D,\\ \end{cases}

where \partial is an adjoined isolated point. To complete the proof, it suffices to show that σ{Xt,<tD}σ{X¯t,t}Dσ{XD}\sigma\{X_{t},-\infty<t\leq D\}\equiv\sigma\{\bar{X}_{t},t\in\mathbb{R}\}\subseteq\mathcal{F}_{D-}\vee\sigma\{X_{D}\}; that is, that X¯t\bar{X}_{t} is Dσ{XD}\mathcal{F}_{D-}\vee\sigma\{X_{D}\}-measurable for all tt\in\mathbb{R}. For all uu\in\mathbb{R} the process (1{s>u})s(\mathbbold{1}_{\{s>u\}})_{s\in\mathbb{R}} is left-continuous, right-limited, and (s)s(\mathcal{F}_{s})_{s\in\mathbb{R}}-adapted. Therefore, for all uu\in\mathbb{R}, the random variable 1{D>u}\mathbbold{1}_{\{D>u\}} is D\mathcal{F}_{D-}-measurable and so the random variable DD is D\mathcal{F}_{D-}-measurable. In particular, the event {t>D}\{t>D\} is D\mathcal{F}_{D-}-measurable. Next, the process (Xt1t<s)s(X_{t}\mathbbold{1}_{t<s})_{s\in\mathbb{R}} is also left-continuous, right-limited, and (s)s(\mathcal{F}_{s})_{s\in\mathbb{R}}-adapted and hence the random variable Xt1{t<D}X_{t}\mathbbold{1}_{\{t<D\}} is D\mathcal{F}_{D-}-measurable. Consequently, for any Borel subset A{}A\subseteq\mathbb{R}\cup\{\partial\} we have

{X¯tA}=({X¯tA}{t<D})({X¯tA}{t=D})({X¯tA}{t>D})={({Xt1{t<D}A}{t<D})({XDA}{t=D}){t>D},A,({Xt1{t<D}A}{t<D})({XDA}{t=D}),A,Dσ{XD},\begin{split}&\{\bar{X}_{t}\in A\}\\ &\quad=(\{\bar{X}_{t}\in A\}\cap\{t<D\})\cup(\{\bar{X}_{t}\in A\}\cap\{t=D\})\cup(\{\bar{X}_{t}\in A\}\cap\{t>D\})\\ &\quad=\begin{cases}(\{X_{t}\mathbbold{1}_{\{t<D\}}\in A\}\cap\{t<D\})\cup(\{X_{D}\in A\}\cap\{t=D\})\cup\{t>D\},&\partial\in A,\\ (\{X_{t}\mathbbold{1}_{\{t<D\}}\in A\}\cap\{t<D\})\cup(\{X_{D}\in A\}\cap\{t=D\}),&\partial\notin A,\\ \end{cases}\\ &\quad\in\mathcal{F}_{D-}\vee\sigma\{X_{D}\},\\ \end{split}

as claimed. ∎

4. The excursion straddling zero

In this section we focus on the excursion away from the contact set that straddles the time zero; that is, the piece of the path of XX between the times GG and DD of Notation 3.1.

The following proposition gives an explicit path decomposition for, and hence the distribution of, the process (Xu, 0uD)(X_{u},\,0\leq u\leq D).

Proposition 4.1.

Set

I:=inf{Xuαu:u0}.I^{-}:=\inf\{X_{u}-\alpha u:u\leq 0\}.

Consider the following independent random objects:

  • a random variable Γ\Gamma with the same distribution as II^{-},

  • (Xt)t0(X^{\prime}_{t})_{t\geq 0} and (Xt′′)t0(X^{\prime\prime}_{t})_{t\geq 0} two independent copies of (Xt)t0(X_{t})_{t\geq 0}.

Define the process (Zt)t0(Z_{t})_{t\geq 0} by

Zt:={Xt,0tTΓ,XtTΓ′′+XTΓ,TΓtTΓ+T′′~,,t>TΓ+T′′~,Z_{t}:=\begin{cases}X^{\prime}_{t},&0\leq t\leq T^{\prime}_{\Gamma},\\ X^{\prime\prime}_{t-T^{\prime}_{\Gamma}}+X^{\prime}_{T^{\prime}_{\Gamma}},&T^{\prime}_{\Gamma}\leq t\leq T^{\prime}_{\Gamma}+\tilde{T^{\prime\prime}},\\ \partial,&t>T^{\prime}_{\Gamma}+\tilde{T^{\prime\prime}},\\ \end{cases}

where

TΓ:=inf{t0:XtXtαtΓ}T^{\prime}_{\Gamma}:=\inf\{t\geq 0:X^{\prime}_{t}\wedge X^{\prime}_{t-}-\alpha t\leq\Gamma\}

and

T′′~:=inf{t0:Xt′′Xt′′+αt=inf{Xu′′+αu:u0}}.\tilde{T^{\prime\prime}}:=\inf\{t\geq 0:X^{\prime\prime}_{t}\wedge X^{\prime\prime}_{t-}+\alpha t=\inf\{X^{\prime\prime}_{u}+\alpha u:u\geq 0\}\}.

Then,

(Xt, 0tD)=d(Zt, 0tTΓ+T′′~).(X_{t},\,0\leq t\leq D)\,{\buildrel d\over{=}}\,(Z_{t},\,0\leq t\leq T^{\prime}_{\Gamma}+\tilde{T^{\prime\prime}}).
Proof.

The path decomposition follows from the construction of the points SS and DD in Remark 3.2. The proof is left to the reader. ∎

Now that we have the distribution of the path of XX on [0,D][0,D], let us extend it to the whole interval [G,D][G,D]. First, we will prove that the random variable U0=GDGU_{0}=\frac{-G}{D-G} is independent of the straddling excursion (Xt+GXG, 0tDG)(X_{t+G}-X_{G},\,0\leq t\leq D-G) and has a uniform distribution on the interval [0,1)[0,1).

Our approach here uses ideas from [Tho00, Chapter 8] but with a modification of the particular shift operator considered there; see also [BB03] for a framework with general shift operators that encompasses the setting we work in. There is a large literature in this area of general Palm theory that is surveyed in [Tho00, BB03] but we mention [Nev77, Pit87] as being of particular relevance.

We will prove general results for the path space (H,)(H,\mathcal{H}) and sequence space (L,)(L,\mathcal{L}) defined by

H:={(zt)t:z is real-valued and càdlàg with z(0)=0}H:=\{(z_{t})_{t\in\mathbb{R}}:z\text{ is real-valued and c\`{a}dl\`{a}g with }z(0)=0\}

and

L:={(sk)k:<s1<0s0<s1<}.L:=\{(s_{k})_{k\in\mathbb{Z}}\in\mathbb{R}^{\mathbb{Z}}:-\infty\leftarrow\dots<s_{-1}<0\leq s_{0}<s_{1}<\dots\rightarrow\infty\}.

We take \mathcal{H} to be the σ\sigma-field on HH that makes all of the maps zztz\mapsto z_{t}, tt\in\mathbb{R}, measurable, and \mathcal{L} to be the trace of the product σ\sigma-field on LL.

For tt\in\mathbb{R} define the shift θt:H×LH×L\theta_{t}:H\times L\to H\times L by

θt((zs)s,(sk)k)=((zt+szt)s,(snt+kt)k)\theta_{t}((z_{s})_{s\in\mathbb{R}},(s_{k})_{k\in\mathbb{Z}})=((z_{t+s}-z_{t})_{s\in\mathbb{R}},(s_{n_{t}+k}-t)_{k\in\mathbb{Z}})

where nt=nn_{t}=n if and only if t[sn1,sn)t\in[s_{n-1},s_{n}). The family (θt)t(\theta_{t})_{t\in\mathbb{R}} is measurable in the sense that the mapping

((zs)s,(sk)k,t)H×L×θt((zs)s,(sk)k)H×L((z_{s})_{s\in\mathbb{R}},(s_{k})_{k\in\mathbb{Z}},t)\in H\times L\times\mathbb{R}\mapsto\theta_{t}((z_{s})_{s\in\mathbb{R}},(s_{k})_{k\in\mathbb{Z}})\in H\times L

is /\mathcal{H}\otimes\mathcal{L}\otimes\mathcal{B}/\mathcal{H}\otimes\mathcal{L} measurable, where \mathcal{B} is the Borel σ\sigma-field on \mathbb{R}.

We consider a probability space (Ω,,)(\Omega,\mathcal{F},\mathbb{P}) equipped with a random pair (K,P)(K,P) that takes values in H×LH\times L. We assume furthermore that (K,P)(K,P) is space-homogeneous stationary in the sense that

θs(K,P)=d(K,P), for all s.\theta_{s}(K,P)\,{\buildrel d\over{=}}\,(K,P),\text{ for all }s\in\mathbb{R}.
Remark 4.2.

When 𝒵\mathcal{Z} is discrete, the space-time regenerative system ((Xt)t,𝒵)((X_{t})_{t\in\mathbb{R}},\mathcal{Z}) is obviously space-homogeneous stationary, due to the two facts that for any ss\in\mathbb{R} we have (Xt+sXs)t=d(Xt)t(X_{t+s}-X_{s})_{t\in\mathbb{R}}\,{\buildrel d\over{=}}\,(X_{t})_{t\in\mathbb{R}} and that the contact set for (Xt+sXs)t(X_{t+s}-X_{s})_{t\in\mathbb{R}} is, by Lemma 7.1, just 𝒵s\mathcal{Z}-s.

Definition 4.3.
  • Write lnl_{n} for the nthn^{\mathrm{th}} cycle length defined by ln=PnPn1l_{n}=P_{n}-P_{n-1}.

  • For tt\in\mathbb{R}, put Nt=nN_{t}=n for t[Pn1,Pn)t\in[P_{n-1},P_{n}).

  • Define the relative position of tt in [PNt1,PNt)[P_{N_{t}-1},P_{N_{t}}) by Ut:=tPNt1PNtPNt1U_{t}:=\frac{t-P_{N_{t}-1}}{P_{N_{t}}-P_{N_{t}-1}}.

  • Define the random variable (K,P)(K^{\circ},P^{\circ}) by

    (K,P)=θP0(K,P)=((Kt+P0KP0)t,(PkP0)k).(K^{\circ},P^{\circ})=\theta_{P_{0}}(K,P)=((K_{t+P_{0}}-K_{P_{0}})_{t\in\mathbb{R}},(P_{k}-P_{0})_{k\in\mathbb{Z}}).

The following are two important features of the family (θt)t(\theta_{t})_{t\in\mathbb{R}} that are useful in proving results analogous to those in [Tho00, Chapter 8, Section 3].

Proposition 4.4.

The family of shifts (θt)t(\theta_{t})_{t\in\mathbb{R}} enjoys the following two properties.

  • (i)

    The family (θt)t(\theta_{t})_{t\in\mathbb{R}} is a semigroup; that is, for every t,s:θtθs=θt+st,s\in\mathbb{R}:\theta_{t}\circ\theta_{s}=\theta_{t+s}.

  • (ii)

    For all ss\in\mathbb{R} and (K,P)H×L(K,P)\in H\times L we have θs(K,P)=θPNs(K,P)\theta_{s}(K,P)^{\circ}=\theta_{P_{N_{s}}}(K,P).

Proof.

For all t,st,s\in\mathbb{R}, and (K,P)H×L(K,P)\in H\times L

ProjH[θt(θs(K,P))]u=θs(K)u+tθs(K)t=(Ku+t+sKs)(Kt+sKs)=Ku+t+sKt+s=(θt+s(K))u=ProjH[θt+s(K,P)]u\begin{split}\mathrm{Proj}_{H}[\theta_{t}(\theta_{s}(K,P))]_{u}&=\theta_{s}(K)_{u+t}-\theta_{s}(K)_{t}\\ &=(K_{u+t+s}-K_{s})-(K_{t+s}-K_{s})\\ &=K_{u+t+s}-K_{t+s}\\ &=(\theta_{t+s}(K))_{u}\\ &=\mathrm{Proj}_{H}[\theta_{t+s}(K,P)]_{u}\end{split}

where ProjH\mathrm{Proj}_{H} is the projection from H×LH\times L to HH. The proof for the action of the shift on the sequence component is given in [Tho00, Chapter 8, Section 2].

We prove (ii) in a similar manner. We have

θs(K,P)=((θs(K)t+θs(P)0θs(K)θs(P)0)t,(PNs+kPNs)k)=(((Kt+s+θs(P)0Ks)(Kθs(P)0+sKs))t,(PNs+kPNs)k)=((Kt+s+PNssKs+PNss)t,(PNs+kPNs)k)=((Kt+PNsKPNs)t,(PNs+kPNs)k)=θPNs(K,P)\begin{split}\theta_{s}(K,P)^{\circ}&=((\theta_{s}(K)_{t+\theta_{s}(P)_{0}}-\theta_{s}(K)_{\theta_{s}(P)_{0}})_{t\in\mathbb{R}},(P_{N_{s}+k}-P_{N_{s}})_{k\in\mathbb{Z}})\\ &=(((K_{t+s+\theta_{s}(P)_{0}}-K_{s})-(K_{\theta_{s}(P)_{0}+s}-K_{s}))_{t\in\mathbb{R}},(P_{N_{s}+k}-P_{N_{s}})_{k\in\mathbb{Z}})\\ &=((K_{t+s+P_{N_{s}}-s}-K_{s+P_{N_{s}}-s})_{t\in\mathbb{R}},(P_{N_{s}+k}-P_{N_{s}})_{k\in\mathbb{Z}})\\ &=((K_{t+P_{N_{s}}}-K_{P_{N_{s}}})_{t\in\mathbb{R}},(P_{N_{s}+k}-P_{N_{s}})_{k\in\mathbb{Z}})\\ &=\theta_{P_{N_{s}}}(K,P)\end{split} ∎

We state now a theorem that is analogous to parts of [Tho00, Chapter 8, Theorem 3.1]. The proof uses the same key ideas as that result and just exploits the two properties of the family of shifts laid out in Proposition 4.4.

Theorem 4.5.

The random variable U0U_{0} is uniform on [0,1)[0,1) and is independent of (K,P)(K^{\circ},P^{\circ}). Also,

𝔼[k=1Ntf(θPk(K,P))]=t𝔼[f(K,P)l0].\mathbb{E}\left[\sum_{k=1}^{N_{t}}f(\theta_{P_{k}}(K,P))\right]=t\mathbb{E}\left[\frac{f(K^{\circ},P^{\circ})}{l_{0}}\right].
Proof.

Consider a nonnegative Borel function gg on [0,1)[0,1) and a nonnegative \mathcal{H}\otimes\mathcal{L}-measurable function ff. To establish both claims of the theorem, it suffices to prove that

(4.1) t𝔼[g(U0)f(K,P)l0]=(01g(x)𝑑x)𝔼[k=1Ntf(θPk(K,P))].t\mathbb{E}\left[g(U_{0})\frac{f(K^{\circ},P^{\circ})}{l_{0}}\right]=\left(\int_{0}^{1}g(x)\,dx\right)\mathbb{E}\left[\sum_{k=1}^{N_{t}}f(\theta_{P_{k}}(K,P))\right].

By stationarity, the left-hand side of equation (4.1) is

t𝔼[g(U0)f(K,P)l0]=0t𝔼[θs[g(U0)f(K,P)l0]]𝑑st\mathbb{E}\left[g(U_{0})\frac{f(K^{\circ},P^{\circ})}{l_{0}}\right]=\int_{0}^{t}\mathbb{E}\left[\theta_{s}\left[g(U_{0})\frac{f(K^{\circ},P^{\circ})}{l_{0}}\right]\right]\,ds

Because θs(U0)=Us\theta_{s}(U_{0})=U_{s} and θs(l0)=lNs\theta_{s}(l_{0})=l_{N_{s}}, and using the fact that θs(K,P)=θPNs(K,P)\theta_{s}(K,P)^{\circ}=\theta_{P_{N_{s}}}(K,P), we have

t𝔼[g(U0)f(K,P)l0]=0t𝔼[g(Us)f(θPNs(K,P))lNs]𝑑s=𝔼[k=1NtPk1Pkg(Us)f(θPk(K,P))lk𝑑s]+𝔼[0P0g(Us)f(θPNs(K,P))lNs𝑑s]𝔼[tPNtg(Us)f(θPNs(K,P))lNs𝑑s]=𝔼[k=1Ntf(θPk(K,P))Pk1Pkg(Us)lk𝑑s].\begin{split}t\mathbb{E}\left[g(U_{0})\frac{f(K^{\circ},P^{\circ})}{l_{0}}\right]&=\int_{0}^{t}\mathbb{E}\left[g(U_{s})\frac{f(\theta_{P_{N_{s}}}(K,P))}{l_{N_{s}}}\right]\,ds\\ &=\mathbb{E}\left[\sum_{k=1}^{N_{t}}\int_{P_{k-1}}^{P_{k}}\frac{g(U_{s})f(\theta_{P_{k}}(K,P))}{l_{k}}\,ds\right]\\ &\quad+\mathbb{E}\left[\int_{0}^{P_{0}}g(U_{s})\frac{f(\theta_{P_{N_{s}}}(K,P))}{l_{N_{s}}}\,ds\right]\\ &\quad-\mathbb{E}\left[\int_{t}^{P_{N_{t}}}g(U_{s})\frac{f(\theta_{P_{N_{s}}}(K,P))}{l_{N_{s}}}\,ds\right]\\ &=\mathbb{E}\left[\sum_{k=1}^{N_{t}}f(\theta_{P_{k}}(K,P))\int_{P_{k-1}}^{P_{k}}\frac{g(U_{s})}{l_{k}}\,ds\right].\\ \end{split}

It follows from stationarity that

𝔼[0P0g(Us)f(θPNs(K,P))lNs𝑑s]=𝔼[tPNtg(Us)f(θPNs(K,P))lNs𝑑s].\mathbb{E}\left[\int_{0}^{P_{0}}g(U_{s})\frac{f(\theta_{P_{N_{s}}}(K,P))}{l_{N_{s}}}\,ds\right]=\mathbb{E}\left[\int_{t}^{P_{N_{t}}}g(U_{s})\frac{f(\theta_{P_{N_{s}}}(K,P))}{l_{N_{s}}}\,ds\right].

A change of variable in the integral shows that

Pk1Pkg(Us)lk𝑑s=0lkg(slk)lk𝑑s=01g(x)𝑑x,\int_{P_{k-1}}^{P_{k}}\frac{g(U_{s})}{l_{k}}\,ds=\int_{0}^{l_{k}}\frac{g(\frac{s}{l_{k}})}{l_{k}}\,ds=\int_{0}^{1}g(x)\,dx,

and this proves the claim (4.1). ∎
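Theorem 4.5 can be illustrated numerically in the simplest space-homogeneous stationary setting, a rate-11 Poisson process on the line. The sketch below is an illustrative Monte Carlo check, not part of the development; the sample size and tolerances are our choices. There the straddling cycle length l0l_{0} is length-biased (mean 22 rather than 11), while U0=G/(DG)U_{0}=-G/(D-G) is uniform on [0,1)[0,1).

```python
import random

# Monte Carlo illustration of Theorem 4.5 for a rate-1 Poisson process
# on the real line: the cycle straddling 0 is length-biased (Gamma(2,1),
# mean 2) while the relative position U_0 = -G/(D-G) is uniform on [0,1).
random.seed(0)

def straddling_cycle():
    """Return (G, D), the Poisson points straddling 0.

    Starting far to the left and using memorylessness of the exponential
    gaps makes the point process stationary around the origin.
    """
    t = -30.0
    prev = t
    while True:
        t += random.expovariate(1.0)
        if t >= 0.0:
            return prev, t
        prev = t

n = 20000
u_vals, lengths = [], []
for _ in range(n):
    g, d = straddling_cycle()
    u_vals.append(-g / (d - g))   # relative position U_0 of time 0
    lengths.append(d - g)         # straddling cycle length l_0

mean_u = sum(u_vals) / n
mean_len = sum(lengths) / n
print(mean_u)    # should be close to 1/2
print(mean_len)  # should be close to 2 (length-biased), not 1
```

The inflation of the straddling cycle length is the classical inspection paradox, which Corollary 4.7 below makes precise.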

The next result is the analogue of [Tho00, Chapter 8, Theorem 4.1].

Corollary 4.6.

For any nonnegative \mathcal{H}\otimes\mathcal{L}-measurable function ff and every nn\in\mathbb{Z}, we have

𝔼[f(θPn(K,P))l0]=𝔼[f(K,P)l0].\mathbb{E}\left[\frac{f(\theta_{P_{n}}(K,P))}{l_{0}}\right]=\mathbb{E}\left[\frac{f(K^{\circ},P^{\circ})}{l_{0}}\right].
Proof.

It suffices to consider the case when ff is bounded by a constant AA. Applying Theorem 4.5 with the function ff replaced by fθPnf\circ\theta_{P_{n}}, we have, for all t0t\geq 0,

t𝔼[f(θPn(K,P))l0]\displaystyle t\mathbb{E}\left[\frac{f(\theta_{P_{n}}(K,P))}{l_{0}}\right] =𝔼[k=1Ntf(θPk+n(K,P))]\displaystyle=\mathbb{E}\left[\sum_{k=1}^{N_{t}}f(\theta_{P_{k+n}}(K,P))\right]
=𝔼[k=1Ntf(θPk(K,P))]𝔼[k=1nf(θPk(K,P))]\displaystyle=\mathbb{E}\left[\sum_{k=1}^{N_{t}}f(\theta_{P_{k}}(K,P))\right]-\mathbb{E}\left[\sum_{k=1}^{n}f(\theta_{P_{k}}(K,P))\right]
+𝔼[k=Nt+1Nt+nf(θPk(K,P))]\displaystyle\quad+\mathbb{E}\left[\sum_{k=N_{t}+1}^{N_{t}+n}f(\theta_{P_{k}}(K,P))\right]
=t𝔼[f(K,P)l0]𝔼[k=1nf(θPk(K,P))]\displaystyle=t\mathbb{E}\left[\frac{f(K^{\circ},P^{\circ})}{l_{0}}\right]-\mathbb{E}\left[\sum_{k=1}^{n}f(\theta_{P_{k}}(K,P))\right]
+𝔼[k=Nt+1Nt+nf(θPk(K,P))].\displaystyle\quad+\mathbb{E}\left[\sum_{k=N_{t}+1}^{N_{t}+n}f(\theta_{P_{k}}(K,P))\right].

Hence

|𝔼[f(θPn(K,P))l0]𝔼[f(K,P)l0]|2Ant.\left|\mathbb{E}\left[\frac{f(\theta_{P_{n}}(K,P))}{l_{0}}\right]-\mathbb{E}\left[\frac{f(K^{\circ},P^{\circ})}{l_{0}}\right]\right|\leq\frac{2An}{t}.

Letting tt\rightarrow\infty finishes the proof. ∎

Of particular interest to us in our Lévy process setting is the case where the sequence ((Kt+Pn1KPn1, 0t<ln),ln)n((K_{t+P_{n-1}}-K_{P_{n-1}},\,0\leq t<l_{n}),\,l_{n})_{n\in\mathbb{Z}} is independent, in which case the sequence ((Kt+Pn1KPn1, 0t<ln),ln)n0((K_{t+P_{n-1}}-K_{P_{n-1}},\,0\leq t<l_{n}),\,l_{n})_{n\neq 0} is independent and identically distributed (cf. [Tho00, Chapter 8, Remark 4.1]). Part (i) of the following result is a straightforward consequence of Corollary 4.6. Part (ii) is immediate from part (i). We omit the proofs.

Corollary 4.7.

Suppose that the sequence ((Kt+Pn1KPn1, 0t<ln),ln)n((K_{t+P_{n-1}}-K_{P_{n-1}},\,0\leq t<l_{n}),\,l_{n})_{n\in\mathbb{Z}} is independent.

  • (i)

    For a nonnegative measurable function ff,

    𝔼[f((Kt+P0KP0, 0t<l1),l1)]=𝔼[1l0]1𝔼[f((Kt+P1KP1, 0t<l0),l0)1l0].\begin{split}&\mathbb{E}\left[f((K_{t+P_{0}}-K_{P_{0}},\,0\leq t<l_{1}),\,l_{1})\right]\\ &\quad=\mathbb{E}\left[\frac{1}{l_{0}}\right]^{-1}\mathbb{E}\left[f((K_{t+P_{-1}}-K_{P_{-1}},\,0\leq t<l_{0}),\,l_{0})\frac{1}{l_{0}}\right].\\ \end{split}
  • (ii)

    For a nonnegative measurable function ff,

    𝔼[f((Kt+P1KP1, 0t<l0),l0)]=𝔼[l1]1𝔼[f((Kt+P0KP0, 0t<l1),l1)l1].\begin{split}&\mathbb{E}\left[f((K_{t+P_{-1}}-K_{P_{-1}},\,0\leq t<l_{0}),\,l_{0})\right]\\ &\quad=\mathbb{E}[l_{1}]^{-1}\mathbb{E}\left[f((K_{t+P_{0}}-K_{P_{0}},\,0\leq t<l_{1}),\,l_{1})l_{1}\right].\\ \end{split}

We return to our Lévy process set-up and assume that 𝒵\mathcal{Z} is discrete. The pair ((Xt)t,𝒵)((X_{t})_{t\in\mathbb{R}},\mathcal{Z}) is space-homogeneous stationary and hence, by Theorem 4.5, the random variable U0=GDGU_{0}=\frac{-G}{D-G} is uniform on [0,1)[0,1) and independent of the process (Xt+DXD)t(X_{t+D}-X_{D})_{t\in\mathbb{R}}. Put

Vt:={XDX(Dt),0t<DG=:ζV,,tDG.V_{t}:=\begin{cases}X_{D}-X_{(D-t)-},&0\leq t<D-G=:\zeta_{V},\\ \partial,&t\geq D-G.\\ \end{cases}

It is easy to check that DGD-G is the first positive point of the contact set of the process (XDX(Dt))t(X_{D}-X_{(D-t)-})_{t\in\mathbb{R}} and so the process (Vt)t0(V_{t})_{t\geq 0} can be written as

V=F(XD+XD),V=F(X_{D+\cdot}-X_{D}),

where FF is a measurable function from the space of càdlàg functions on the real line to the space of càdlàg functions on the positive real line. Hence the random variable U=1U0U=1-U_{0} is independent of (Vt)t0(V_{t})_{t\geq 0}. We have already observed that we know the distribution of the process

Wt:={Vt,0t<D=U(DG)=:ζW,,tD=U(DG).W_{t}:=\begin{cases}V_{t},&0\leq t<D=U(D-G)=:\zeta_{W},\\ \partial,&t\geq D=U(D-G).\\ \end{cases}

We now show that it is possible to derive the distribution of VV from that of WW.

Corollary 4.8.

Recall that ζV\zeta_{V} (resp. ζW\zeta_{W}) is the lifetime of the process (Vt)t0(V_{t})_{t\geq 0} (resp. (Wt)t0(W_{t})_{t\geq 0}). For bounded, measurable functions f1,,fnf_{1},\ldots,f_{n} that take the value 0 at \partial and times 0t1<<tn<t<0\leq t_{1}<\ldots<t_{n}<t<\infty,

𝔼[f1(Vt1)fn(Vtn) 1{t<ζV}]=𝔼[f1(Wt1)fn(Wtn) 1{t<ζW}]+t𝔼[f1(Wt1)fn(Wtn) 1{ζWdt}]dt.\begin{split}\mathbb{E}[f_{1}(V_{t_{1}})\cdots f_{n}(V_{t_{n}})\,\mathbbold{1}\{t<\zeta_{V}\}]&=\mathbb{E}[f_{1}(W_{t_{1}})\cdots f_{n}(W_{t_{n}})\,\mathbbold{1}\{t<\zeta_{W}\}]\\ &\quad+t\frac{\mathbb{E}[f_{1}(W_{t_{1}})\cdots f_{n}(W_{t_{n}})\,\mathbbold{1}\{\zeta_{W}\in dt\}]}{dt}.\\ \end{split}
Proof.

Observe that

𝔼[f1(Wt1)fn(Wtn) 1{t<ζW}]=𝔼[f1(Vt1)fn(Vtn) 1{t<UζV}]=01𝔼[f1(Vt1)fn(Vtn) 1{t/u<ζV}]𝑑u=1𝔼[f1(Vt1)fn(Vtn) 1{ts<ζV}]1s2𝑑s=t𝔼[f1(Vt1)fn(Vtn) 1{r<ζV}]t2r21t𝑑r=tt𝔼[f1(Vt1)fn(Vtn) 1{r<ζV}]1r2𝑑r,\begin{split}&\mathbb{E}[f_{1}(W_{t_{1}})\cdots f_{n}(W_{t_{n}})\,\mathbbold{1}\{t<\zeta_{W}\}]\\ &\quad=\mathbb{E}[f_{1}(V_{t_{1}})\cdots f_{n}(V_{t_{n}})\,\mathbbold{1}\{t<U\zeta_{V}\}]\\ &\quad=\int_{0}^{1}\mathbb{E}[f_{1}(V_{t_{1}})\cdots f_{n}(V_{t_{n}})\,\mathbbold{1}\{t/u<\zeta_{V}\}]\,du\\ &\quad=\int_{1}^{\infty}\mathbb{E}[f_{1}(V_{t_{1}})\cdots f_{n}(V_{t_{n}})\,\mathbbold{1}\{ts<\zeta_{V}\}]\frac{1}{s^{2}}\,ds\\ &\quad=\int_{t}^{\infty}\mathbb{E}[f_{1}(V_{t_{1}})\cdots f_{n}(V_{t_{n}})\,\mathbbold{1}\{r<\zeta_{V}\}]\frac{t^{2}}{r^{2}}\frac{1}{t}\,dr\\ &\quad=t\int_{t}^{\infty}\mathbb{E}[f_{1}(V_{t_{1}})\cdots f_{n}(V_{t_{n}})\,\mathbbold{1}\{r<\zeta_{V}\}]\frac{1}{r^{2}}\,dr,\\ \end{split}

so that

t𝔼[f1(Vt1)fn(Vtn) 1{r<ζV}]1r2𝑑r=1t𝔼[f1(Wt1)fn(Wtn) 1{t<ζW}].\int_{t}^{\infty}\mathbb{E}[f_{1}(V_{t_{1}})\cdots f_{n}(V_{t_{n}})\,\mathbbold{1}\{r<\zeta_{V}\}]\frac{1}{r^{2}}\,dr=\frac{1}{t}\mathbb{E}[f_{1}(W_{t_{1}})\cdots f_{n}(W_{t_{n}})\,\mathbbold{1}\{t<\zeta_{W}\}].

Differentiating both sides with respect to tt and rearranging gives the result. ∎
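As a quick sanity check of this identity, take n=0n=0 (so the fif_{i} factors are absent) and let ζV\zeta_{V} be exponential with rate 11, with ζW=UζV\zeta_{W}=U\zeta_{V} for UU uniform on [0,1][0,1] independent of ζV\zeta_{V}. Writing E1(t):=tz1ez𝑑zE_{1}(t):=\int_{t}^{\infty}z^{-1}e^{-z}\,dz for the exponential integral, both sides can be computed in closed form:

```latex
% Sanity check of Corollary 4.8 with n = 0, \zeta_V \sim \mathrm{Exp}(1),
% and \zeta_W = U \zeta_V for U uniform on [0,1] independent of \zeta_V.
\mathbb{P}\{t<\zeta_{W}\}
  = \mathbb{E}\Bigl[\Bigl(1-\frac{t}{\zeta_{V}}\Bigr)^{+}\Bigr]
  = \int_{t}^{\infty}\Bigl(1-\frac{t}{z}\Bigr)e^{-z}\,dz
  = e^{-t}-tE_{1}(t),
\qquad
\frac{\mathbb{P}\{\zeta_{W}\in dt\}}{dt}=E_{1}(t).
```

The right-hand side of the identity is then (ettE1(t))+tE1(t)=et=(e^{-t}-tE_{1}(t))+tE_{1}(t)=e^{-t}=\mathbb{P}\{t<\zeta_{V}\}, as it should be.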

Remark 4.9.

(i) The proof of Corollary 4.8 is similar to that of [SH04, Appendix, Proposition 3.12], which gives an analytic link between the distributions of ZZ and UZUZ, where ZZ and UU are independent nonnegative random variables with UU uniform on [0,1][0,1].

(ii) We gave above a way to find the distribution of the excursion straddling zero. To determine the distribution of (Xt,GtD)(X_{t},\,G\leq t\leq D), we generate the process VV with lifetime ζV\zeta_{V} according to the distribution described above and take an independent random variable UU uniform on [0,1][0,1]; we then have the equality in distribution

(Xt,GtD)=d(Vt+UζVVUζV,UζVt(1U)ζV).(X_{t},\,G\leq t\leq D)\,{\buildrel d\over{=}}\,(V_{t+U\zeta_{V}}-V_{U\zeta_{V}},\,-U\zeta_{V}\leq t\leq(1-U)\zeta_{V}).

(iii) As a particular consequence of Theorem 4.5, we can find the distribution of the straddling excursion length DGD-G if we know the distribution of the right-hand endpoint DD. See [AE14, Remark 8.2], where the relevant calculations are carried out to find the Laplace transform of DGD-G.

From now on, we single out the generic excursions; that is, all the excursions that start after time DD or finish before GG. These excursions are independent and identically distributed, and they are independent of the excursion straddling zero between GG and DD. In the next section we give a description of the common distribution of the generic excursions in the case of Brownian motion with drift.

5. A generic excursion for Brownian motion with drift

Suppose in this section that XX is two-sided Brownian motion with drift β\beta such that |β|<α|\beta|<\alpha; that is, X=(Bt+βt)tX=(B_{t}+\beta t)_{t\in\mathbb{R}}, where BB is a standard linear Brownian motion.

We recall the Williams path decomposition for Brownian motion with drift (see, for example, [RW87, Chapter VI, Theorem 55.9]).

Theorem 5.1.

Let μ>0\mu>0. On some probability space, take three independent random elements:

  • (Bt(μ),t0)(B^{(-\mu)}_{t},\,t\geq 0) a Brownian motion with drift μ-\mu;

  • (Rt(μ),t0)(R^{(\mu)}_{t},t\geq 0) a diffusion that is the solution of the following SDE

    dRt(μ)=dBt+μcoth(μRt(μ))dt,R0(μ)=0,dR^{(\mu)}_{t}=dB_{t}+\mu\coth(\mu R^{(\mu)}_{t})dt,\quad R^{(\mu)}_{0}=0,

    where BB is a standard Brownian motion;

  • γ\gamma an exponential random variable with rate 2μ2\mu.

Set τ=inf{t0:Bt(μ)=γ}\tau=\inf\{t\geq 0:B^{(-\mu)}_{t}=-\gamma\} and

Ht={Bt(μ),0tτ,Rtτ(μ)γ,tτ.H_{t}=\begin{cases}B^{(-\mu)}_{t},&0\leq t\leq\tau,\\ R^{(\mu)}_{t-\tau}-\gamma,&t\geq\tau.\\ \end{cases}

Then, (Ht)t0(H_{t})_{t\geq 0} is a Brownian motion with drift μ\mu.
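One ingredient of this decomposition is easy to test numerically: the depth γ\gamma of the global minimum of a Brownian motion with drift μ>0\mu>0 is exponential with rate 2μ2\mu. The following Monte Carlo sketch is illustrative only (the step size, horizon, and tolerances are our choices); because the paths are sampled on a grid, the empirical minimum is slightly shallower than the true one.

```python
import random

# Illustrative check of one ingredient of the Williams decomposition:
# for a Brownian motion with drift mu > 0, the depth gamma = -inf_t H_t
# of the global minimum is exponential with rate 2*mu, mean 1/(2*mu).
# Grid sampling misses the bottoms of excursions, so the empirical mean
# sits slightly below 1/(2*mu).
random.seed(1)

mu, dt, horizon, n_paths = 0.5, 0.01, 10.0, 2000
n_steps = int(horizon / dt)
sqrt_dt = dt ** 0.5

depths = []
for _ in range(n_paths):
    h, running_min = 0.0, 0.0
    for _ in range(n_steps):
        h += mu * dt + sqrt_dt * random.gauss(0.0, 1.0)
        running_min = min(running_min, h)
    depths.append(-running_min)

mean_depth = sum(depths) / n_paths
print(mean_depth)  # should be close to (slightly below) 1/(2*mu) = 1.0
```

The finite horizon is harmless here: with drift 0.50.5 the path sits far above its running minimum by time 1010, so the global minimum is almost surely already attained.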

Remark 5.2.

The diffusion R(μ)R^{(\mu)} is called a 3-dimensional Bessel process with drift μ\mu, denoted BES(3,μ)\mathrm{BES}(3,\mu). We may use a superscript to refer to the starting position of this process; when there is no superscript, the process implicitly starts at zero. This process has the same distribution as the radial part of a 3-dimensional Brownian motion with drift of magnitude μ\mu [RP81, Section 3]. It may be thought of as a Brownian motion with drift μ\mu conditioned to stay positive.
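The radial-part identification also gives a convenient way to simulate BES(3,μ)\mathrm{BES}(3,\mu) without discretizing the SDE above, whose drift μcoth(μr)\mu\coth(\mu r) is singular at r=0r=0: take the norm of a 33-dimensional Brownian motion with drift of magnitude μ\mu. The sketch below is illustrative (all parameters are our choices); the Gaussian increments are exact at the grid points, so the terminal value has exactly the right distribution.

```python
import random

# Simulate BES(3, mu) as the radial part of a 3-dimensional Brownian
# motion with drift of magnitude mu, avoiding the singularity of the
# SDE drift mu*coth(mu*r) at r = 0.
random.seed(2)

def bes3_with_drift(mu, dt, n_steps):
    """Grid path of |W_t + mu*t*e_1| with W a standard 3D Brownian motion."""
    sqrt_dt = dt ** 0.5
    x, y, z = 0.0, 0.0, 0.0
    path = [0.0]
    for _ in range(n_steps):
        x += mu * dt + sqrt_dt * random.gauss(0.0, 1.0)
        y += sqrt_dt * random.gauss(0.0, 1.0)
        z += sqrt_dt * random.gauss(0.0, 1.0)
        path.append((x * x + y * y + z * z) ** 0.5)
    return path

# Terminal value at t = 1: |Z + e_1| with Z standard 3D Gaussian, so its
# mean lies strictly between |E[.]| = 1 (Jensen) and sqrt(E[|.|^2]) = 2.
mu, dt, n_steps, n_paths = 1.0, 0.01, 100, 5000
terminal = [bes3_with_drift(mu, dt, n_steps)[-1] for _ in range(n_paths)]
mean_terminal = sum(terminal) / n_paths
positive = all(v > 0.0 for v in terminal)
print(mean_terminal)  # between 1 and 2 for mu = 1 at t = 1
print(positive)
```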

We give some results about Bessel processes that will be useful later in our proofs. The first result is a last exit decomposition of a Bessel process presented in [RY05, Chapter 6, Proposition 3.9].

Proposition 5.3.

Let ρ\rho be BESx(3)\mathrm{BES}^{x}(3); that is, ρ\rho is a 3-dimensional Bessel process started at x0x\geq 0. Let TT be a stopping time with respect to the filtration ρ,J:=(σ{ρs,Js,0st})t0\mathcal{F}^{\rho,J}:=(\sigma\{\rho_{s},J_{s},0\leq s\leq t\})_{t\geq 0}, where Jt:=infstρsJ_{t}:=\inf_{s\geq t}\rho_{s} is the future infimum of ρ\rho. Then (ρT+tρT)t0(\rho_{T+t}-\rho_{T})_{t\geq 0} is a BES0(3)\mathrm{BES}^{0}(3) that is independent of (ρt, 0tT)(\rho_{t},\,0\leq t\leq T). In particular if

gx,y:=sup{t0:ρt=y}=inf{t0:ρt=Jt=y},yx,g_{x,y}:=\sup\{t\geq 0:\rho_{t}=y\}=\inf\{t\geq 0:\rho_{t}=J_{t}=y\},\quad y\geq x,

then (ρt+gx,yy)t0(\rho_{t+g_{x,y}}-y)_{t\geq 0} is a BES0(3)\mathrm{BES}^{0}(3) independent of (ρt, 0tgx,y)(\rho_{t},\,0\leq t\leq g_{x,y}).

The next result relates the time-reversed Bessel process and the Brownian motion. It is from [RY05, Chapter 7, Corollary 4.6].

Proposition 5.4.

Let b>0b>0, ρ\rho be a BES0(3)\mathrm{BES}^{0}(3), and BB be a standard linear Brownian motion. We have the equality of distributions

(ρLbt, 0tLb)=d(bBt, 0tTb),(\rho_{L_{b}-t},\,0\leq t\leq L_{b})\,{\buildrel d\over{=}}\,(b-B_{t},\,0\leq t\leq T_{b}),

where Lb:=sup{t0:ρt=b}L_{b}:=\sup\{t\geq 0:\rho_{t}=b\} is the last passage time of ρ\rho at the level bb and Tb:=inf{t0:Bt=b}T_{b}:=\inf\{t\geq 0:B_{t}=b\} is the first hitting time of bb by the Brownian motion BB started at zero. In particular,

Lb=dTb.L_{b}\,{\buildrel d\over{=}}\,T_{b}.

The final result we will need is a path decomposition of a 3-dimensional Bessel process with drift started at a positive initial state when it hits its ultimate minimum. We do not know a reference for this result, so we give its proof for the sake of completeness.

Theorem 5.5.

Let b,μ>0b,\mu>0. Consider the following three independent random elements:

  • a random variable gg with density proportional to e2μxe^{2\mu x} supported on [0,b][0,b];

  • a Brownian motion (Bt(b,μ))t0(B^{(b,-\mu)}_{t})_{t\geq 0} with drift μ-\mu started at bb;

  • a 3-dimensional Bessel process (Rt(μ))t0(R^{(\mu)}_{t})_{t\geq 0} with drift μ\mu started at zero.

Define the process (Rt(b,μ))t0(R^{(b,\mu)}_{t})_{t\geq 0} by

Rt(b,μ)={Bt(b,μ),0tT~g,g+RtT~g(μ),tT~g,R^{(b,\mu)}_{t}=\begin{cases}B^{(b,-\mu)}_{t},&0\leq t\leq\tilde{T}_{g},\\ g+R^{(\mu)}_{t-\tilde{T}_{g}},&t\geq\tilde{T}_{g},\\ \end{cases}

where T~g:=inf{t0:Bt(b,μ)=g}\tilde{T}_{g}:=\inf\{t\geq 0:B^{(b,-\mu)}_{t}=g\}.

Then, R(b,μ)=dBESb(3,μ)R^{(b,\mu)}\,{\buildrel d\over{=}}\,\mathrm{BES}^{b}(3,\mu); that is, R(b,μ)R^{(b,\mu)} is a 3-dimensional Bessel process with drift μ\mu started at bb.

Proof.

The distribution of a 3-dimensional Bessel process with drift μ\mu and started at b>0b>0 is the conditional distribution of a Brownian motion with drift μ\mu started at bb conditioned to stay positive (see the Remarks at the end of [RP81, Section 3]). The event we condition on has a positive probability, so it is just the usual naive conditioning

(bBM0(μ))|{supt0BM0(μ)tb}=dBESb(3,μ),(b-\mathrm{BM}^{0}(-\mu))\;\bigg{|}\;\left\{\sup_{t\geq 0}\mathrm{BM}^{0}(-\mu)_{t}\leq b\right\}\,{\buildrel d\over{=}}\,\mathrm{BES}^{b}(3,\mu),

where BM0(μ)\mathrm{BM}^{0}(-\mu) is a Brownian motion with drift μ-\mu and started at zero. The theorem is then just an application of the Williams path decomposition Theorem 5.1. ∎

Recall that in this section (Xt)t(X_{t})_{t\in\mathbb{R}} is a Brownian motion with drift β\beta. The discussion in Theorem 3.5 and the Williams path decomposition Theorem 5.1 show that (Xt+DXD+αt)t0(X_{t+D}-X_{D}+\alpha t)_{t\geq 0} has the same distribution as (Bt+(α+β)t)t0(B_{t}+(\alpha+\beta)t)_{t\geq 0} conditioned to stay positive. Thus,

(Xt+DXD,t0)=(Rt(α+β)αt,t0),(X_{t+D}-X_{D},\,t\geq 0)=(R^{(\alpha+\beta)}_{t}-\alpha t,\,t\geq 0),

where R(α+β)=dBES(3,α+β)R^{(\alpha+\beta)}\,{\buildrel d\over{=}}\,\text{BES}(3,\alpha+\beta). We now aim to provide a path decomposition of the first positive generic excursion away from the contact set (and thus of all generic excursions); that is, of the path of (Wt)t0:=(Xt+DXD)t0=(Xt+D0XD0)t0(W_{t})_{t\geq 0}:=(X_{t+D}-X_{D})_{t\geq 0}=(X_{t+D_{0}}-X_{D_{0}})_{t\geq 0} until it hits the first contact point DD0D0D_{D_{0}}-D_{0}.

Notation 5.6.

Using Lemma 7.4, let us define the following times, which are the analogues of SS and DD for this generic excursion.

𝔗:=inf{t>0:Wtαt0}=inf{t>0:Rt(α+β)t=2α}\mathfrak{T}:=\inf\{t>0:W_{t}-\alpha t\leq 0\}=\inf\left\{t>0:\frac{R^{(\alpha+\beta)}_{t}}{t}=2\alpha\right\}

and

ζ:=inf{t𝔗:Wt+αt=inf{Wu+αu:u𝔗}}=inf{t𝔗:Rt(α+β)=inf{Ru(α+β):u𝔗}}.\begin{split}\zeta&:=\inf\{t\geq\mathfrak{T}:W_{t}+\alpha t=\inf\{W_{u}+\alpha u:u\geq\mathfrak{T}\}\}\\ &=\inf\{t\geq\mathfrak{T}:R^{(\alpha+\beta)}_{t}=\inf\{R^{(\alpha+\beta)}_{u}:u\geq\mathfrak{T}\}\}.\end{split}

The following theorem is a path decomposition of a generic excursion away from the contact set.

Theorem 5.7.

Consider the following independent random elements:

  • a pair of random variables (τ,γ^)(\tau,\hat{\gamma}) with the joint density

    (5.1) fτ,γ^(t,x)=exp((αβ)2t22(α+β)x)2πt310x2αt,t>0,f_{\tau,\hat{\gamma}}(t,x)=\frac{\exp\left(-\frac{(\alpha-\beta)^{2}t}{2}-2(\alpha+\beta)x\right)}{\sqrt{2\pi t^{3}}}\mathbbold{1}_{0\leq x\leq 2\alpha t},\quad t>0,
  • a standard Brownian excursion 𝐞\mathbf{e} on [0,1][0,1],

  • a linear Brownian motion (B~t(α+β))t0(\tilde{B}^{-(\alpha+\beta)}_{t})_{t\geq 0} with drift (α+β)-(\alpha+\beta).

Define the process

𝔈t={τ𝐞(tτ)+2αt,0tτ,2ατ+B~tτ(α+β),τtτ+T~γ^,\mathfrak{E}_{t}=\begin{cases}\sqrt{\tau}\mathbf{e}(\frac{t}{\tau})+2\alpha t,&0\leq t\leq\tau,\\ 2\alpha\tau+\tilde{B}^{-(\alpha+\beta)}_{t-\tau},&\tau\leq t\leq\tau+\tilde{T}_{\hat{\gamma}},\\ \end{cases}

where T~γ^:=inf{t0:B~t(α+β)=γ^}\tilde{T}_{\hat{\gamma}}:=\inf\{t\geq 0:\tilde{B}^{-(\alpha+\beta)}_{t}=-\hat{\gamma}\}. Then,

(Xt+DXD+αt, 0tζ)=d(𝔈t, 0tτ+T~γ^).(X_{t+D}-X_{D}+\alpha t,\,0\leq t\leq\zeta)\,{\buildrel d\over{=}}\,(\mathfrak{E}_{t},\,0\leq t\leq\tau+\tilde{T}_{\hat{\gamma}}).
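Before the proof, one can check directly that (5.1) is indeed a probability density; the computation below uses only the stable-1/21/2 identity 0(1eλt)(2πt3)1/2𝑑t=2λ\int_{0}^{\infty}(1-e^{-\lambda t})(2\pi t^{3})^{-1/2}\,dt=\sqrt{2\lambda}.

```latex
% Consistency check: the joint density (5.1) integrates to 1.
% Integrating out x over [0, 2\alpha t]:
\int_{0}^{2\alpha t}e^{-2(\alpha+\beta)x}\,dx
  =\frac{1-e^{-4\alpha(\alpha+\beta)t}}{2(\alpha+\beta)}.
% Then, integrating in t and applying the stable-1/2 identity twice,
\int_{0}^{\infty}\frac{e^{-(\alpha-\beta)^{2}t/2}
  \bigl(1-e^{-4\alpha(\alpha+\beta)t}\bigr)}{\sqrt{2\pi t^{3}}}\,dt
  =\sqrt{(\alpha-\beta)^{2}+8\alpha(\alpha+\beta)}-(\alpha-\beta)
  =2(\alpha+\beta),
% since (\alpha-\beta)^{2}+8\alpha(\alpha+\beta)=(3\alpha+\beta)^{2}
% and \alpha-\beta>0, 3\alpha+\beta>0 because |\beta|<\alpha.
% Dividing by 2(\alpha+\beta) gives total mass 1.
```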
Proof.

Let us first find the distribution of the path of R(α+β)R^{(\alpha+\beta)} on [0,𝔗][0,\mathfrak{T}]. As 𝔗\mathfrak{T} is a stopping time (with respect to the filtration generated by R(α+β)R^{(\alpha+\beta)}) and R(α+β)R^{(\alpha+\beta)} is a time-homogeneous strong Markov process, conditioning on the value of 𝔗\mathfrak{T} (and thus on R𝔗(α+β)=2α𝔗R^{(\alpha+\beta)}_{\mathfrak{T}}=2\alpha\mathfrak{T}) is enough to obtain the independence of the two components of our path. Define (Yt)t>0(Y_{t})_{t>0} by

Yt:=tR1t(α+β),t>0.Y_{t}:=tR^{(\alpha+\beta)}_{\frac{1}{t}},\quad t>0.

By the time-inversion property of Brownian motion, YY is a BESα+β(3)\mathrm{BES}^{\alpha+\beta}(3); that is, YY is a 33-dimensional Bessel process started at α+β\alpha+\beta (with no drift). The stopping time 𝔗\mathfrak{T} can be expressed as

(5.2) 𝔗=1sup{t0:Yt=2α}=d1gα+β,2α\mathfrak{T}=\frac{1}{\sup\{t\geq 0:Y_{t}=2\alpha\}}\,{\buildrel d\over{=}}\,\frac{1}{g_{\alpha+\beta,2\alpha}}

Hence by applying Proposition  5.3 to our process YY we find that

(Gt,t0):=(Yt+1𝔗2α,t0)(G_{t},\,t\geq 0):=(Y_{t+\frac{1}{\mathfrak{T}}}-2\alpha,\,t\geq 0)

is a BES0(3)\mathrm{BES}^{0}(3) independent of σ{Yu:u1𝔗}=σ{Ru(α+β):u𝔗}\sigma\{Y_{u}:u\leq\frac{1}{\mathfrak{T}}\}=\sigma\{R^{(\alpha+\beta)}_{u}:u\geq\mathfrak{T}\}. Now, conditionally on {𝔗=T}\{\mathfrak{T}=T\}, we have:

(Ru(α+β), 0uT)\displaystyle(R^{(\alpha+\beta)}_{u},\,0\leq u\leq T) =(u(G1u1T+2α), 0uT)\displaystyle=(u(G_{\frac{1}{u}-\frac{1}{T}}+2\alpha),\,0\leq u\leq T)
=(uGTuuT+2αu, 0uT).\displaystyle=(uG_{\frac{T-u}{uT}}+2\alpha u,\,0\leq u\leq T).

However, it is known that (uGTuuT, 0uT)(uG_{\frac{T-u}{uT}},\,0\leq u\leq T) is just a Brownian excursion of length TT (that is, a 3-dimensional Bessel bridge between (0,0)(0,0) and (T,0)(T,0)). This can easily be seen from the same time transformation that maps Brownian motions to Brownian bridges. For a reference to this path transformation, see [Pit83, page 226]. Hence, given {𝔗=T}\{\mathfrak{T}=T\},

(Wu, 0uT)=(𝐞T(u)+αu,0uT)=d(T𝐞(uT)+αu, 0uT),(W_{u},\,0\leq u\leq T)=(\mathbf{e}_{T}(u)+\alpha u,0\leq u\leq T)\,{\buildrel d\over{=}}\,\left(\sqrt{T}\mathbf{e}\left(\frac{u}{T}\right)+\alpha u,\,0\leq u\leq T\right),

where 𝐞T\mathbf{e}_{T} is a Brownian excursion on [0,T][0,T], and 𝐞\mathbf{e} is a standard Brownian excursion on [0,1][0,1] obtained by Brownian scaling.
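The first fragment above can be simulated directly from this description. The following sketch (function name and discretization ours, not from the paper) realizes the standard excursion $\mathbf{e}$ as the norm of three independent Brownian bridges, which is a $3$-dimensional Bessel bridge from $0$ to $0$ and hence a Brownian excursion in law, and then applies the scaling $u\mapsto\sqrt{T}\,\mathbf{e}(u/T)+\alpha u$:

```python
import numpy as np

def brownian_excursion_fragment(T, alpha, n=1000, rng=None):
    """Sample u -> sqrt(T)*e(u/T) + alpha*u on [0, T], realizing the
    standard excursion e as the norm of three independent Brownian
    bridges (a 3-dimensional Bessel bridge from 0 to 0)."""
    rng = np.random.default_rng() if rng is None else rng
    u = np.linspace(0.0, 1.0, n + 1)
    dt = 1.0 / n
    # three independent Brownian motions on [0, 1], pinned into bridges
    inc = rng.normal(scale=np.sqrt(dt), size=(3, n))
    paths = np.concatenate([np.zeros((3, 1)), np.cumsum(inc, axis=1)], axis=1)
    bridges = paths - u * paths[:, -1:]
    e = np.sqrt((bridges ** 2).sum(axis=0))  # standard Brownian excursion (in law)
    t = T * u
    return t, np.sqrt(T) * e + alpha * t
```

By construction the sampled fragment starts at $0$, ends at $\alpha T$, and stays above the line $u\mapsto\alpha u$, matching the description of the path on $[0,\mathfrak{T}]$.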

Let us now turn to the second fragment of our path; that is, the process $W$ on $[\mathfrak{T},\zeta]$. Since $\mathfrak{T}$ is a stopping time and $R^{(\alpha+\beta)}$ is a strong Markov process, conditionally on $\{\mathfrak{T}=T\}$, the process $(R^{(\alpha+\beta)}_{t+\mathfrak{T}},\,0\leq t\leq\zeta-\mathfrak{T})$ is just a $\mathrm{BES}^{2\alpha T}(3,\alpha+\beta)$ stopped at the time it hits its ultimate minimum. Hence, by applying Theorem 5.5,

(Rt+𝔗(α+β), 0tζ𝔗)=d(B~t(2αT,(α+β)), 0tT~γ),(R^{(\alpha+\beta)}_{t+\mathfrak{T}},\,0\leq t\leq\zeta-\mathfrak{T})\,{\buildrel d\over{=}}\,(\tilde{B}^{(2\alpha T,-(\alpha+\beta))}_{t},\,0\leq t\leq\tilde{T}_{\gamma}),

where $\tilde{B}^{(2\alpha T,-(\alpha+\beta))}$ is a standard Brownian motion with drift $-(\alpha+\beta)$ started at $2\alpha T$, $\tilde{T}_{\gamma}:=\inf\{t\geq 0:\tilde{B}^{(2\alpha T,-(\alpha+\beta))}_{t}=\gamma\}$, and $\gamma$ is independent of $\tilde{B}^{(2\alpha T,-(\alpha+\beta))}$ with density on $[0,2\alpha T]$ proportional to $x\mapsto e^{2(\alpha+\beta)x}$. Finally, by setting $\hat{\gamma}=2\alpha\mathfrak{T}-\gamma$, it suffices to prove that $(\mathfrak{T},\hat{\gamma})$ has the joint density in (5.1) to finish the proof.

We know that the conditional density of γ^\hat{\gamma} given {𝔗=t}\{\mathfrak{T}=t\} is proportional to xe2(α+β)xx\mapsto e^{-2(\alpha+\beta)x} restricted to [0,2αt][0,2\alpha t]. That is,

(5.3) fγ^|𝔗=t(x)=2(α+β)e2(α+β)x1e4(α+β)αt10x2αt.f_{\hat{\gamma}|\mathfrak{T}=t}(x)=\frac{2(\alpha+\beta)e^{-2(\alpha+\beta)x}}{1-e^{-4(\alpha+\beta)\alpha t}}\mathbbold{1}_{0\leq x\leq 2\alpha t}.

To finish, let us find the distribution of 𝔗\mathfrak{T}. Recall from (5.2) that we have

𝔗=d1gα+β,2α.\mathfrak{T}\,{\buildrel d\over{=}}\,\frac{1}{g_{\alpha+\beta,2\alpha}}.

Now $g_{\alpha+\beta,2\alpha}$ is the last time a $3$-dimensional Bessel process started at $\alpha+\beta$ visits the state $2\alpha$. Let $(\tilde{Y}_{t})_{t\geq 0}$ be a $\mathrm{BES}^{0}(3)$, and let $H_{\alpha+\beta}:=\inf\{t\geq 0:\tilde{Y}_{t}=\alpha+\beta\}$ be the first hitting time of $\alpha+\beta$. Then, by the strong Markov property at time $H_{\alpha+\beta}$, we have

L2α=dHα+β+gα+β,2α,L_{2\alpha}\,{\buildrel d\over{=}}\,H_{\alpha+\beta}+g_{\alpha+\beta,2\alpha},

where $g_{\alpha+\beta,2\alpha}$ and $H_{\alpha+\beta}$ are independent, and $L_{2\alpha}$ is the last time $\tilde{Y}$ visits $2\alpha$. Hence the Laplace transform of $g_{\alpha+\beta,2\alpha}$ is

𝔼[exp(λgα+β,2α)]=𝔼[exp(λL2α)]𝔼[exp(λHα+β)].\mathbb{E}[\exp(-\lambda g_{\alpha+\beta,2\alpha})]=\frac{\mathbb{E}[\exp(-\lambda L_{2\alpha})]}{\mathbb{E}[\exp(-\lambda H_{\alpha+\beta})]}.

Using Proposition 5.4, we know with the same notation that L2α=dT2αL_{2\alpha}\,{\buildrel d\over{=}}\,T_{2\alpha}. Thus,

𝔼[exp(λT2α)]=exp(2α2λ).\mathbb{E}[\exp(-\lambda T_{2\alpha})]=\exp(-2\alpha\sqrt{2\lambda}).

On the other hand, we obtain the Laplace transform of Hα+βH_{\alpha+\beta} from [BS02, equation 2.1.4, p463], namely,

𝔼[exp(λHα+β)]=(α+β)2λsinh((α+β)2λ).\mathbb{E}[\exp(-\lambda H_{\alpha+\beta})]=\frac{(\alpha+\beta)\sqrt{2\lambda}}{\sinh((\alpha+\beta)\sqrt{2\lambda})}.

Thus,

𝔼[exp(λgα+β,2α)]=e2α2λsinh((α+β)2λ)(α+β)2λ.\mathbb{E}[\exp(-\lambda g_{\alpha+\beta,2\alpha})]=\frac{e^{-2\alpha\sqrt{2\lambda}}\sinh((\alpha+\beta)\sqrt{2\lambda})}{(\alpha+\beta)\sqrt{2\lambda}}.

Inverting this Laplace transform, we get the density of gα+β,2αg_{\alpha+\beta,2\alpha}; that is,

fgα+β,2α(t)=e(αβ)22te(3α+β)22t2(α+β)2πt.f_{g_{\alpha+\beta,2\alpha}}(t)=\frac{e^{-\frac{(\alpha-\beta)^{2}}{2t}}-e^{-\frac{(3\alpha+\beta)^{2}}{2t}}}{2(\alpha+\beta)\sqrt{2\pi t}}.

The density of 𝔗\mathfrak{T} is thus

(5.4) f𝔗(t)=1t2fgα+β,2α(1t)=e(αβ)2t2e(3α+β)2t22(α+β)2πt31t>0.f_{\mathfrak{T}}(t)=\frac{1}{t^{2}}f_{g_{\alpha+\beta,2\alpha}}\left(\frac{1}{t}\right)=\frac{e^{-\frac{(\alpha-\beta)^{2}t}{2}}-e^{-\frac{(3\alpha+\beta)^{2}t}{2}}}{2(\alpha+\beta)\sqrt{2\pi t^{3}}}\mathbbold{1}_{t>0}.

Multiplying (5.4) by (5.3) gives the desired equality. ∎
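The Laplace inversion and the time-inversion step above can be sanity-checked numerically (this is not part of the proof; parameter values and helper names are ours):

```python
import numpy as np
from scipy.integrate import quad

alpha, beta = 1.0, 0.3  # any alpha > |beta| should work

def f_g(t):
    """Density of g_{alpha+beta, 2alpha} obtained by Laplace inversion."""
    return (np.exp(-(alpha - beta) ** 2 / (2 * t))
            - np.exp(-(3 * alpha + beta) ** 2 / (2 * t))) \
        / (2 * (alpha + beta) * np.sqrt(2 * np.pi * t))

def f_T(t):
    """Density (5.4) of the time inversion T = 1/g."""
    return (np.exp(-(alpha - beta) ** 2 * t / 2)
            - np.exp(-(3 * alpha + beta) ** 2 * t / 2)) \
        / (2 * (alpha + beta) * np.sqrt(2 * np.pi * t ** 3))

# compare the numerical Laplace transform of f_g with the closed form
lam = 0.7
lt_numeric, _ = quad(lambda t: np.exp(-lam * t) * f_g(t), 0, np.inf)
s = np.sqrt(2 * lam)
lt_closed = np.exp(-2 * alpha * s) * np.sinh((alpha + beta) * s) / ((alpha + beta) * s)
# and check that the density of T integrates to one
mass, _ = quad(f_T, 0, np.inf)
```

Both quantities agree to quadrature accuracy, confirming the inversion and the change of variables $t\mapsto 1/t$.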

We now have an explicit path decomposition of a generic excursion and, by Lemma 7.5, we know the expression of the $\alpha$-Lipschitz minorant on the same interval in terms of the values of the excursion at its endpoints. It is interesting to identify the distributions of the most important features, such as:

  • the lifetime ζ\zeta of the excursion;

  • the time LL at which the α\alpha-Lipschitz minorant of the excursion attains its maximal value;

  • the final value WζW_{\zeta} of the excursion

— see Figure 5.1.

Refer to caption
Figure 5.1. A generic excursion away from the contact set.

Using the notation from Theorem 5.7 and from Lemma 7.5, we have the following expressions:

ζ=τ+T~γ^,L=τγ^2α,ζL=T~γ^+γ^2α,Wζ=α(τT~γ^γ^α).\begin{split}\zeta&=\tau+\tilde{T}_{\hat{\gamma}},\\ L&=\tau-\frac{\hat{\gamma}}{2\alpha},\\ \zeta-L&=\tilde{T}_{\hat{\gamma}}+\frac{\hat{\gamma}}{2\alpha},\\ W_{\zeta}&=\alpha\left(\tau-\tilde{T}_{\hat{\gamma}}-\frac{\hat{\gamma}}{\alpha}\right).\\ \end{split}
Proposition 5.8.
  • (i)

    The joint Laplace transform of (ζ,L,ζL,Wζ)(\zeta,L,\zeta-L,W_{\zeta}) is

    𝔼[exp((ρ1ζ+ρ2L+ρ3(ζL)+ρ4Wζ))]=4α2α+2(ρ1+ρ3αρ4)+(α+β)2+2(ρ1+ρ2+αρ4)+(αβ)2.\begin{split}&\mathbb{E}[\exp(-(\rho_{1}\zeta+\rho_{2}L+\rho_{3}(\zeta-L)+\rho_{4}W_{\zeta}))]\\ &\quad=\frac{4\alpha}{2\alpha+\sqrt{2(\rho_{1}+\rho_{3}-\alpha\rho_{4})+(\alpha+\beta)^{2}}+\sqrt{2(\rho_{1}+\rho_{2}+\alpha\rho_{4})+(\alpha-\beta)^{2}}}.\end{split}
  • (ii)

    The Laplace transform of the excursion length ζ\zeta is

    𝔼[exp(λζ)]=4α2α+2λ+(α+β)2+2λ+(αβ)2.\mathbb{E}[\exp(-\lambda\zeta)]=\frac{4\alpha}{2\alpha+\sqrt{2\lambda+(\alpha+\beta)^{2}}+\sqrt{2\lambda+(\alpha-\beta)^{2}}}.

    In particular, for β=0\beta=0 the probability density of ζ\zeta is

    l2αeα2l22πl2α2Φ¯(αl)l\mapsto 2\alpha\frac{e^{-\frac{\alpha^{2}l}{2}}}{\sqrt{2\pi l}}-2\alpha^{2}\overline{\Phi}(\alpha\sqrt{l})

    where Φ¯(x):=xeu2/22π𝑑u\overline{\Phi}(x):=\int_{x}^{\infty}\frac{e^{-u^{2}/2}}{\sqrt{2\pi}}\,du.

  • (iii)

    The Laplace transform of the time LL to the peak of the minorant during the excursion is

    𝔼[exp(λL)]=4α3α+β+2λ+(αβ)2\mathbb{E}[\exp(-\lambda L)]=\frac{4\alpha}{3\alpha+\beta+\sqrt{2\lambda+(\alpha-\beta)^{2}}}

    The corresponding density is

l\mapsto 4\alpha\frac{e^{-\frac{(\alpha-\beta)^{2}l}{2}}}{\sqrt{2\pi l}}-4\alpha(3\alpha+\beta)e^{4\alpha(\alpha+\beta)l}\overline{\Phi}\left(\sqrt{l}(3\alpha+\beta)\right).
  • (iv)

    The Laplace transform of the time ζL\zeta-L after the peak of the minorant during the excursion is

    𝔼[exp(λ(ζL))]=4α3αβ+2λ+(α+β)2\mathbb{E}[\exp(-\lambda(\zeta-L))]=\frac{4\alpha}{3\alpha-\beta+\sqrt{2\lambda+(\alpha+\beta)^{2}}}

    The corresponding density is

l\mapsto 4\alpha\frac{e^{-\frac{(\alpha+\beta)^{2}l}{2}}}{\sqrt{2\pi l}}-4\alpha(3\alpha-\beta)e^{4\alpha(\alpha-\beta)l}\overline{\Phi}\left(\sqrt{l}(3\alpha-\beta)\right).
  • (v)

    The Laplace transform of WζW_{\zeta}, the final value of the excursion, is

    𝔼[exp(λWζ)]=4α2α+(α+β)22λα+(αβ)2+2λα.\mathbb{E}[\exp(-\lambda W_{\zeta})]=\frac{4\alpha}{2\alpha+\sqrt{(\alpha+\beta)^{2}-2\lambda\alpha}+\sqrt{(\alpha-\beta)^{2}+2\lambda\alpha}}.
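Before turning to the proof, the densities in parts (ii) and (iii) can be checked numerically against the stated normalizations and means (a sanity check only; parameter values are ours, and the exponentially growing term in (iii) is evaluated through the scaled complementary error function `erfcx` for numerical stability):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfcx
from scipy.stats import norm

alpha, beta = 1.0, 0.3

def f_zeta(l):
    """Density of zeta for beta = 0 and alpha = 1, from part (ii)."""
    return 2 * np.exp(-l / 2) / np.sqrt(2 * np.pi * l) - 2 * norm.sf(np.sqrt(l))

def f_L(l):
    """Density of L from part (iii); e^{4a(a+b)l} * Phibar((3a+b) sqrt(l))
    is rewritten via erfcx to avoid overflow at large l."""
    c = 3 * alpha + beta
    damp = np.exp(-(alpha - beta) ** 2 * l / 2)
    return 4 * alpha * damp / np.sqrt(2 * np.pi * l) \
        - 2 * alpha * c * damp * erfcx(c * np.sqrt(l / 2))

mass_z, _ = quad(f_zeta, 0, np.inf)
mean_z, _ = quad(lambda l: l * f_zeta(l), 0, np.inf)
mass_L, _ = quad(f_L, 0, np.inf)
mean_L, _ = quad(lambda l: l * f_L(l), 0, np.inf)
```

The computed means match $\mathbb{E}[\zeta]=1/(2\alpha^{2})$ (for $\beta=0$) and $\mathbb{E}[L]=1/(4\alpha(\alpha-\beta))$ from Remark 5.11 (iii).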

We give the proof of Proposition 5.8 below after some preparatory results. We first recall a result about the distribution of the first hitting time of a Brownian motion with drift [BS02, equations 2.0.1 & 2.0.2, page 295].

Lemma 5.9.

Let $(B^{(\mu)}_{t})_{t\geq 0}$ be a Brownian motion with drift $\mu>0$ started at zero. Let $y>0$ and define $T_{\mu,y}:=\inf\{t\geq 0:B^{(\mu)}_{t}=y\}$. The density function of $T_{\mu,y}$ is

fTμ,y(t)=y2πt3exp((yμt)22t)f_{T_{\mu,y}}(t)=\frac{y}{\sqrt{2\pi t^{3}}}\exp\left(-\frac{(y-\mu t)^{2}}{2t}\right)

and its Laplace transform is

𝔼[exp(λTμ,y)]=ey(2λ+μ2μ).\mathbb{E}[\exp(-\lambda T_{\mu,y})]=e^{-y(\sqrt{2\lambda+\mu^{2}}-\mu)}.

For the sake of completeness, we include the proof of the following simple lemma.

Lemma 5.10.

For a,b>0a,b>0,

0eatebt2πt3𝑑t=2b2a.\int_{0}^{\infty}\frac{e^{-at}-e^{-bt}}{\sqrt{2\pi t^{3}}}\,dt=\sqrt{2b}-\sqrt{2a}.
Proof.

Suppose without loss of generality that 0<a<b0<a<b. We have

0eatebt2πt3𝑑t=12π0abtetx𝑑xt32𝑑t=12πab0t121ext𝑑t𝑑x=12πabx120u121eu𝑑u𝑑x=π2πabx12𝑑x=2b2a,\begin{split}\int_{0}^{\infty}\frac{e^{-at}-e^{-bt}}{\sqrt{2\pi t^{3}}}\,dt&=\frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}\int_{a}^{b}te^{-tx}\,dx\,t^{-\frac{3}{2}}\,dt\\ &=\frac{1}{\sqrt{2\pi}}\int_{a}^{b}\int_{0}^{\infty}t^{\frac{1}{2}-1}e^{-xt}\,dt\,dx\\ &=\frac{1}{\sqrt{2\pi}}\int_{a}^{b}x^{-\frac{1}{2}}\int_{0}^{\infty}u^{\frac{1}{2}-1}e^{-u}\,du\,dx\\ &=\frac{\sqrt{\pi}}{\sqrt{2\pi}}\int_{a}^{b}x^{-\frac{1}{2}}\,dx\\ &=\sqrt{2b}-\sqrt{2a},\\ \end{split}

where in the third equality we used the substitution u=xtu=xt and in the fourth equality we used the fact that 0u121eu𝑑u=Γ(12)=π\int_{0}^{\infty}u^{\frac{1}{2}-1}e^{-u}\,du=\Gamma(\frac{1}{2})=\sqrt{\pi}. ∎
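The identity of Lemma 5.10 is easy to confirm numerically as well (a check, not a proof; the helper name is ours):

```python
import math
from scipy.integrate import quad

def lemma_5_10(a, b):
    """Numerically evaluate the integral on the left-hand side of Lemma 5.10."""
    val, _ = quad(
        lambda t: (math.exp(-a * t) - math.exp(-b * t)) / math.sqrt(2 * math.pi * t ** 3),
        0, math.inf)
    return val
```

For any $a,b>0$ the result agrees with $\sqrt{2b}-\sqrt{2a}$ to quadrature accuracy.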

We now give the proof of Proposition  5.8.

Proof.

We claim that

(5.5) 𝔼[exp(λ1τλ2γ^λ3T~γ^)]=4α2(λ1λ3)+4αλ2+(2α+2λ3+(α+β)2)2+2λ1+(αβ)2.\begin{split}&\mathbb{E}[\exp(-\lambda_{1}\tau-\lambda_{2}\hat{\gamma}-\lambda_{3}\tilde{T}_{\hat{\gamma}})]\\ &=\frac{4\alpha}{\sqrt{2(\lambda_{1}-\lambda_{3})+4\alpha\lambda_{2}+(2\alpha+\sqrt{2\lambda_{3}+(\alpha+\beta)^{2}})^{2}}+\sqrt{2\lambda_{1}+(\alpha-\beta)^{2}}}.\\ \end{split}

The stated equation for 𝔼[exp((ρ1ζ+ρ2L+ρ3(ζL)+ρ4Wζ))]\mathbb{E}[\exp(-(\rho_{1}\zeta+\rho_{2}L+\rho_{3}(\zeta-L)+\rho_{4}W_{\zeta}))] then follows by noting that

ρ1ζ+ρ2L+ρ3(ζL)+ρ4Wζ=ρ1(τ+T~γ^)+ρ2(τγ^2α)+ρ3(T~γ^+γ^2α)+ρ4α(τT~γ^γ^α)=(ρ1+ρ2+αρ4)τ+(ρ22α+ρ32αρ4)γ^+(ρ1+ρ3αρ4)T~γ^.\begin{split}&\rho_{1}\zeta+\rho_{2}L+\rho_{3}(\zeta-L)+\rho_{4}W_{\zeta}\\ &\quad=\rho_{1}(\tau+\tilde{T}_{\hat{\gamma}})+\rho_{2}\left(\tau-\frac{\hat{\gamma}}{2\alpha}\right)+\rho_{3}\left(\tilde{T}_{\hat{\gamma}}+\frac{\hat{\gamma}}{2\alpha}\right)+\rho_{4}\alpha\left(\tau-\tilde{T}_{\hat{\gamma}}-\frac{\hat{\gamma}}{\alpha}\right)\\ &\quad=(\rho_{1}+\rho_{2}+\alpha\rho_{4})\tau+\left(-\frac{\rho_{2}}{2\alpha}+\frac{\rho_{3}}{2\alpha}-\rho_{4}\right)\hat{\gamma}+(\rho_{1}+\rho_{3}-\alpha\rho_{4})\tilde{T}_{\hat{\gamma}}.\\ \end{split}

The Laplace transforms for the individual random variables follow by specialization and the claimed expressions for densities then follow from standard inversion formulas.

Rather than deriving (5.5) in full, we will instead derive directly the Laplace transform of $\zeta$; this illustrates the method of proof with less notational overhead. We have

𝔼[exp(λζ)]\displaystyle\mathbb{E}[\exp(-\lambda\zeta)]
=𝔼[eλτ𝔼[eλT~γ^|τ,γ^]]\displaystyle\quad=\mathbb{E}[e^{-\lambda\tau}\mathbb{E}[e^{-\lambda\tilde{T}_{\hat{\gamma}}}|\tau,\hat{\gamma}]]
=𝔼[eλτeγ^(2λ+(α+β)2(α+β))]\displaystyle\quad=\mathbb{E}[e^{-\lambda\tau}e^{-\hat{\gamma}(\sqrt{2\lambda+(\alpha+\beta)^{2}}-(\alpha+\beta))}]
\quad=\int_{0}^{\infty}\int_{0}^{2\alpha t}e^{-\lambda t}e^{-x(\sqrt{2\lambda+(\alpha+\beta)^{2}}-(\alpha+\beta))}\frac{\exp\left(-\frac{(\alpha-\beta)^{2}t}{2}-2(\alpha+\beta)x\right)}{\sqrt{2\pi t^{3}}}\,dx\,dt
=12λ+(α+β)2+α+β0e(λ+(αβ)22)t(1e2αt(2λ+(α+β)2+α+β))2πt3𝑑t\displaystyle\quad=\frac{1}{\sqrt{2\lambda+(\alpha+\beta)^{2}}+\alpha+\beta}\int_{0}^{\infty}\frac{e^{-(\lambda+\frac{(\alpha-\beta)^{2}}{2})t}(1-e^{-2\alpha t(\sqrt{2\lambda+(\alpha+\beta)^{2}}+\alpha+\beta)})}{\sqrt{2\pi t^{3}}}\,dt
=12λ+(α+β)2+α+β0eatebt2πt3𝑑t\displaystyle\quad=\frac{1}{\sqrt{2\lambda+(\alpha+\beta)^{2}}+\alpha+\beta}\int_{0}^{\infty}\frac{e^{-at}-e^{-bt}}{\sqrt{2\pi t^{3}}}dt

for

a=λ+(αβ)22,b=λ+(αβ)22+2α(2λ+(α+β)2+α+β).a=\lambda+\frac{(\alpha-\beta)^{2}}{2},\quad b=\lambda+\frac{(\alpha-\beta)^{2}}{2}+2\alpha(\sqrt{2\lambda+(\alpha+\beta)^{2}}+\alpha+\beta).

A little algebra shows that

a=12(2λ+(αβ)2),b=12(2λ+(α+β)2+2α)2.a=\frac{1}{2}(2\lambda+(\alpha-\beta)^{2}),~~b=\frac{1}{2}(\sqrt{2\lambda+(\alpha+\beta)^{2}}+2\alpha)^{2}.

Hence, using Lemma 5.10, we get that

𝔼[exp(λζ)]=2α+2λ+(α+β)22λ+(αβ)22λ+(α+β)2+α+β.\mathbb{E}[\exp(-\lambda\zeta)]=\frac{2\alpha+\sqrt{2\lambda+(\alpha+\beta)^{2}}-\sqrt{2\lambda+(\alpha-\beta)^{2}}}{\sqrt{2\lambda+(\alpha+\beta)^{2}}+\alpha+\beta}.

After multiplying top and bottom by the conjugate, this takes the following simple form:

\mathbb{E}[\exp(-\lambda\zeta)]=\frac{4\alpha}{2\alpha+\sqrt{2\lambda+(\alpha+\beta)^{2}}+\sqrt{2\lambda+(\alpha-\beta)^{2}}}.

∎
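As a quick numerical sanity check of this conjugate simplification (not part of the argument; function names are ours), the two expressions for the Laplace transform of $\zeta$ can be compared at random parameter values with $|\beta|<\alpha$:

```python
import math

def lt_zeta_raw(lam, a, b):
    """Laplace transform of zeta before multiplying by the conjugate."""
    A = math.sqrt(2 * lam + (a + b) ** 2)
    B = math.sqrt(2 * lam + (a - b) ** 2)
    return (2 * a + A - B) / (A + a + b)

def lt_zeta_simplified(lam, a, b):
    """Simplified form obtained after the conjugate multiplication."""
    A = math.sqrt(2 * lam + (a + b) ** 2)
    B = math.sqrt(2 * lam + (a - b) ** 2)
    return 4 * a / (2 * a + A + B)
```

The two functions agree to machine precision for all $\lambda\geq 0$, $a>0$, $|b|<a$.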

Remark 5.11.

(i) Write $H:=W_{L}-M_{L}=\sqrt{\tau}\,\mathbf{e}(\frac{L}{\tau})$ for the difference between the Brownian motion and its minorant at time $L$ (see Figure 5.1). We can give an explicit description of the distribution of this random variable, though computing either its Laplace transform or its density seems tedious. Indeed, we know that for every $0\leq u\leq 1$, $\mathbf{e}(u)\,{\buildrel d\over{=}}\,\sqrt{u(1-u)}\,\chi_{3}$, where $\chi_{3}^{2}\,{\buildrel d\over{=}}\,Q_{1}^{2}+Q_{2}^{2}+Q_{3}^{2}$ for $Q_{1},Q_{2},Q_{3}$ three independent standard Gaussian random variables. Hence,

H\,{\buildrel d\over{=}}\,\sqrt{L\left(1-\frac{L}{\tau}\right)}\,\chi_{3}=\sqrt{L\,\frac{\hat{\gamma}}{2\alpha\tau}}\,\chi_{3}=\sqrt{\tau}\sqrt{\mathfrak{U}\left(1-\mathfrak{U}\right)}\,\chi_{3},

where 𝔘:=γ^2ατ\mathfrak{U}:=\frac{\hat{\gamma}}{2\alpha\tau}. Using the density in Theorem 5.7 and a change of variable gives that the joint density of (τ,𝔘)(\tau,\mathfrak{U}) at the point (t,u)(0,)×[0,1](t,u)\in(0,\infty)\times[0,1] is

f_{\tau,\mathfrak{U}}(t,u)=\frac{2\alpha}{\sqrt{2\pi t}}\exp\left(-\frac{(\alpha-\beta)^{2}}{2}t-4\alpha(\alpha+\beta)tu\right)

and χ3\chi_{3} independent of (τ,𝔘)(\tau,\mathfrak{U}).

(ii) Set Ψ(ρ1,ρ2,ρ3,ρ4;α,β)=𝔼[exp((ρ1ζ+ρ2L+ρ3(ζL)+ρ4Wζ))]\Psi(\rho_{1},\rho_{2},\rho_{3},\rho_{4};\alpha,\beta)=\mathbb{E}[\exp(-(\rho_{1}\zeta+\rho_{2}L+\rho_{3}(\zeta-L)+\rho_{4}W_{\zeta}))]. From the time-reversal symmetry (Bt)t=d(Bt)t(B_{t})_{t\in\mathbb{R}}\,{\buildrel d\over{=}}\,(B_{-t})_{t\in\mathbb{R}}, we expect that

Ψ(ρ1,ρ2,ρ3,ρ4;α,β)=Ψ(ρ1,ρ3,ρ2,ρ4;α,β),\Psi(\rho_{1},\rho_{2},\rho_{3},\rho_{4};\alpha,\beta)=\Psi(\rho_{1},\rho_{3},\rho_{2},-\rho_{4};\alpha,-\beta),

and this is indeed the case. This symmetry is somewhat surprising, as it is certainly not apparent from our path decomposition. Similarly, from the Brownian scaling (c1Bc2t)t=d(Bt)t(c^{-1}B_{c^{2}t})_{t\in\mathbb{R}}\,{\buildrel d\over{=}}\,(B_{t})_{t\in\mathbb{R}}, c>0c>0, we expect that

Ψ(ρ1,ρ2,ρ3,ρ4;α,β)=Ψ(c2ρ1,c2ρ2,c2ρ3,c2ρ4;cα,cβ),\Psi(\rho_{1},\rho_{2},\rho_{3},\rho_{4};\alpha,\beta)=\Psi(c^{2}\rho_{1},c^{2}\rho_{2},c^{2}\rho_{3},c^{2}\rho_{4};c\alpha,c\beta),

and this also holds.

(iii) It follows from the proposition that

𝔼[ζ]=ddλ𝔼[exp(λζ)]|λ=0=12(α2β2).\mathbb{E}[\zeta]=-\frac{d}{d\lambda}\mathbb{E}[\exp(-\lambda\zeta)]|_{\lambda=0}=\frac{1}{2(\alpha^{2}-\beta^{2})}.

Similarly,

𝔼[L]=14α(αβ),\mathbb{E}[L]=\frac{1}{4\alpha(\alpha-\beta)},
𝔼[ζL]=14α(α+β),\mathbb{E}[\zeta-L]=\frac{1}{4\alpha(\alpha+\beta)},

and

𝔼[Wζ]=β2(α2β2).\mathbb{E}[W_{\zeta}]=\frac{\beta}{2(\alpha^{2}-\beta^{2})}.

Note that since limt(Bt+βt)/t=β\lim_{t\to\infty}(B_{t}+\beta t)/t=\beta almost surely, we expect 𝔼[Wζ]=β𝔼[ζ]\mathbb{E}[W_{\zeta}]=\beta\mathbb{E}[\zeta] by a renewal–reward argument.
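The four means can be recovered directly from the joint Laplace transform $\Psi$ of part (i) by numerical differentiation at the origin (a consistency check only; step size and parameter values are ours):

```python
import math

alpha, beta = 1.0, 0.3

def Psi(r1, r2, r3, r4):
    """Joint Laplace transform of (zeta, L, zeta - L, W_zeta) from Prop. 5.8 (i)."""
    A = math.sqrt(2 * (r1 + r3 - alpha * r4) + (alpha + beta) ** 2)
    B = math.sqrt(2 * (r1 + r2 + alpha * r4) + (alpha - beta) ** 2)
    return 4 * alpha / (2 * alpha + A + B)

def mean_along(direction, h=1e-6):
    """Central-difference value of -dPsi/drho in the given coordinate direction."""
    rp = [h * d for d in direction]
    rm = [-h * d for d in direction]
    return -(Psi(*rp) - Psi(*rm)) / (2 * h)

E_zeta = mean_along((1, 0, 0, 0))
E_L    = mean_along((0, 1, 0, 0))
E_rest = mean_along((0, 0, 1, 0))
E_W    = mean_along((0, 0, 0, 1))
```

The numerical derivatives reproduce $\mathbb{E}[\zeta]$, $\mathbb{E}[L]$, $\mathbb{E}[\zeta-L]$ and $\mathbb{E}[W_{\zeta}]$ as stated above.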

(iv) The results of this section advance the study of the excursion straddling zero in the case of the Brownian motion with drift carried out in [AE14, Section 8]. Indeed, the previous study only determined the four-dimensional distribution of $(G,D,T,\tilde{H})$, where $T:=\mathrm{argmax}\{M_{t}:G\leq t\leq D\}$ and $\tilde{H}:=X_{T}-M_{T}$. Our approach here gives the distribution of the whole path of a generic excursion. Let us define

Wstraddle:=(Xt+GXG, 0tDG)W^{\mathrm{straddle}}:=(X_{t+G}-X_{G},\,0\leq t\leq D-G)

and

Wgeneric:=(Xt+DXD, 0tζ).W^{\mathrm{generic}}:=(X_{t+D}-X_{D},\,0\leq t\leq\zeta).

By Corollary 4.7, we have

𝔼[F(Wstraddle)]=𝔼[ζ]1𝔼[ζF(Wgeneric)]\mathbb{E}\left[F(W^{\mathrm{straddle}})\right]=\mathbb{E}[\zeta]^{-1}\mathbb{E}\left[\zeta F(W^{\mathrm{generic}})\right]

Because we know the distribution of $W^{\mathrm{generic}}$, the distribution of the straddling excursion can be recovered. In particular, the distribution of $D-G$ is just the size-biasing of the distribution of $\zeta$; that is, $\mathbb{E}[f(D-G)]=\mathbb{E}[\zeta]^{-1}\mathbb{E}[\zeta f(\zeta)]$ for any nonnegative measurable function $f$. For example, the joint Laplace transform of the analogues of $(\zeta,L,\zeta-L,W_{\zeta})$ for the straddling excursion is

ddρ1Ψ(ρ1,ρ2,ρ3,ρ4)𝔼[ζ].\frac{-\frac{d}{d\rho_{1}}\Psi(\rho_{1},\rho_{2},\rho_{3},\rho_{4})}{\mathbb{E}[\zeta]}.

Finally, if we denote by $\Lambda$ the Lévy measure of the subordinator associated with the regenerative set $\mathcal{Z}$, then its normalization to a probability measure has density given by the following formula:

\frac{\Lambda(dx)}{\Lambda(\mathbb{R}_{+})}=\left(2\alpha\frac{e^{-\frac{\alpha^{2}x}{2}}}{\sqrt{2\pi x}}-2\alpha^{2}\overline{\Phi}(\alpha\sqrt{x})\right)\,dx

(recall that Λ\Lambda is only defined up to a multiplicative constant).

6. Enlargement of the Brownian filtration

In this section, the Lévy process (Xt)t(X_{t})_{t\in\mathbb{R}} is the standard two-sided linear Brownian motion. Set

¯t:=σ{Xu:ut}{the null sets of },t.\mathcal{\overline{F}}_{t}:=\sigma\{X_{u}:u\leq t\}\vee\{\text{the null sets of }\mathbb{P}\},\;t\in\mathbb{R}.

By [RY05, Chapter 3, Proposition 2.10], $(\mathcal{\overline{F}}_{t})_{t\in\mathbb{R}}$ is then right-continuous and $(X_{t})_{t\in\mathbb{R}}$ is an $(\mathcal{\overline{F}}_{t})_{t\in\mathbb{R}}$-two-sided linear standard Brownian motion. We denote by $(M_{t})_{t\in\mathbb{R}}$ the $\alpha$-Lipschitz minorant of $X$, and we let $D$ be defined, as above, by

D:=inf{t0:Xt=Mt}.D:=\inf\{t\geq 0:X_{t}=M_{t}\}.

By Lemma 7.3, the random time DD can be constructed as follows. Consider first the stopping time SS given by

S=inf{t>0:Xtαt=inf{Xuαu:u0}}.S=\inf\{t>0:X_{t}-\alpha t=\inf\{X_{u}-\alpha u:u\leq 0\}\}.

Then

D=inf{tS:Xt+αt=inf{Xu+αu:uS}}.D=\inf\{t\geq S:X_{t}+\alpha t=\inf\{X_{u}+\alpha u:u\geq S\}\}.

Thus, if we introduce the one-sided Brownian motion $\check{X}=(X_{t+S}-X_{S})_{t\geq 0}$, which is independent of $\mathcal{\overline{F}}_{S}$, and we let $\check{T}$ be the time at which the process $(\check{X}_{t}+\alpha t)_{t\geq 0}$ hits its ultimate infimum (this time is almost surely unique), then

D=S+Tˇ.D=S+\check{T}.
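This two-step construction of $D$ lends itself to a simple grid simulation. The sketch below (function name, grid sizes, and the truncation `horizon` are ours) computes $S$ as the first time $X_{t}-\alpha t$ reaches the past infimum of $X_{u}-\alpha u$ over $u\leq 0$, and then $D$ as the first time after $S$ at which $X_{t}+\alpha t$ attains its minimum:

```python
import numpy as np

def contact_start(alpha=1.0, horizon=40.0, n=400_000, rng=None):
    """Grid sketch of the construction D = S + T-check on a simulated
    two-sided Brownian path; past infimum and future minimum are both
    truncated at `horizon`."""
    rng = np.random.default_rng() if rng is None else rng
    n_half = n // 2
    dt = horizon / n_half
    t = np.arange(-n_half, n_half + 1) * dt
    steps = rng.normal(scale=np.sqrt(dt), size=2 * n_half)
    X = np.concatenate([[0.0], np.cumsum(steps)])
    X -= X[n_half]                                   # anchor so that X_0 = 0
    past_inf = (X[:n_half + 1] - alpha * t[:n_half + 1]).min()
    future = np.arange(n_half + 1, len(t))
    # S: first t > 0 at which X_t - alpha*t drops to the past infimum
    hit = future[np.flatnonzero(X[future] - alpha * t[future] <= past_inf)[0]]
    S = t[hit]
    # D: first time >= S at which X_t + alpha*t attains its minimum on [S, horizon]
    V = X[hit:] + alpha * t[hit:]
    D = t[hit + np.argmin(V)]
    return S, D
```

With $\alpha$ of order one the drift makes the crossing defining $S$ happen quickly, so the truncation at `horizon` is harmless in practice.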

As we have seen previously, the random time DD is not a stopping time. However, DD is an honest time in the sense of the following definition.

Definition 6.1.

Let $L$ be a random variable with values in $[0,\infty]$. We say that $L$ is honest with respect to the filtration $(\mathcal{\overline{F}}_{t})_{t\in\mathbb{R}}$ if, for every $t\geq 0$, there exists an $\mathcal{\overline{F}}_{t}$-measurable random variable $L_{t}$ such that $L=L_{t}$ on the event $\{L<t\}$.

Lemma 6.2.

The random time DD is an honest time. Moreover, if TT is a stopping time, then {D=T}=0\mathbb{P}\{D=T\}=0.

Proof.

We can write DD on the event {D<a}\{D<a\} as

D\mathbbold{1}_{\{D<a\}}=S\mathbbold{1}_{\{S<a\}}+\inf\left\{t\geq 0:X_{t+S\mathbbold{1}_{\{S<a\}}}-X_{S\mathbbold{1}_{\{S<a\}}}+\alpha t=\inf\left\{X_{u+S\mathbbold{1}_{\{S<a\}}}-X_{S\mathbbold{1}_{\{S<a\}}}+\alpha u:0\leq u\leq a-S\mathbbold{1}_{\{S<a\}}\right\}\right\}.

The right-hand side is ¯a\mathcal{\overline{F}}_{a}-measurable and hence DD is an honest time. Also, {D=T}=0\mathbb{P}\{D=T\}=0 for any stopping time TT because {XD+t>XDαt,t>0}=1\mathbb{P}\{X_{D+t}>X_{D}-\alpha t,\,\forall t>0\}=1 whereas (ϵ>0{0<t<ϵ,XT+t<XTαt})=1\mathbb{P}(\bigcap_{\epsilon>0}\{\exists 0<t<\epsilon,X_{T+t}<X_{T}-\alpha t\})=1. ∎

We now introduce a larger filtration, namely the smallest filtration containing $(\mathcal{\overline{F}}_{t})_{t\in\mathbb{R}}$ that makes $D$ a stopping time.

Notation 6.3.

For tt\in\mathbb{R}, set

¯tD:=ϵ>0(¯t+ϵσ(D(t+ϵ))).\mathcal{\overline{F}}_{t}^{D}:=\bigcap_{\epsilon>0}(\mathcal{\overline{F}}_{t+\epsilon}\vee\sigma(D\wedge(t+\epsilon))).
Remark 6.4.

For honest times,

¯tD={A¯:At,Bt¯t,A=(At{D>t})(Bt{Dt})}\mathcal{\overline{F}}_{t}^{D}=\{A\in\mathcal{\overline{F}}_{\infty}:\exists A_{t},B_{t}\in\mathcal{\overline{F}}_{t},\,A=(A_{t}\cap\{D>t\})\cup(B_{t}\cap\{D\leq t\})\}

– see [Jeu79, Chapter 5].

Our goal now is to verify that every (¯t)t0(\mathcal{\overline{F}}_{t})_{t\geq 0}-semimartingale remains a (¯tD)t0(\mathcal{\overline{F}}_{t}^{D})_{t\geq 0}-semimartingale, and to give a formula for the canonical semimartingale decomposition in the larger filtration.

Definition 6.5.

For any random time ρ\rho, we call the (¯t)t0(\mathcal{\overline{F}}_{t})_{t\geq 0}-supermartingale defined by

Ztρ=[ρ>t|¯t]Z_{t}^{\rho}=\mathbb{P}[\rho>t\,|\,\mathcal{\overline{F}}_{t}]

the Azéma supermartingale associated with ρ\rho. We choose versions of the conditional expectations so that this process is càdlàg.

We recall the following result from [Bar78, Theorem A].

Theorem 6.6.

Let LL be an honest time. A (¯t)t0(\mathcal{\overline{F}}_{t})_{t\geq 0} local martingale (𝔐t)t0(\mathfrak{M}_{t})_{t\geq 0} is a semimartingale in the larger filtration (¯tL)t0(\mathcal{\overline{F}}_{t}^{L})_{t\geq 0} and decomposes as

𝔐t=𝔐~t+0tLd𝔐,ZLsZsLLtd𝔐,ZLs1ZsL,\mathfrak{M}_{t}=\tilde{\mathfrak{M}}_{t}+\int_{0}^{t\wedge L}\frac{d\langle\mathfrak{M},Z^{L}\rangle_{s}}{Z_{s-}^{L}}-\int_{L}^{t}\frac{d\langle\mathfrak{M},Z^{L}\rangle_{s}}{1-Z_{s-}^{L}},

where (𝔐~t)t0(\tilde{\mathfrak{M}}_{t})_{t\geq 0} is a ((¯tL)t0,)((\mathcal{\overline{F}}_{t}^{L})_{t\geq 0},\,\mathbb{P})-local martingale.

It remains to find an explicit formula for ZtDZ_{t}^{D}. Define a decreasing sequence of stopping times (Sn)n0(S_{n})_{n\geq 0} that converges almost surely to SS by

Sn:=k=0k+12n1{k2nS<k+12n}.S_{n}:=\sum_{k=0}^{\infty}\frac{k+1}{2^{n}}\mathbbold{1}_{\{\frac{k}{2^{n}}\leq S<\frac{k+1}{2^{n}}\}}.

Define the random times (Tˇn)n0(\check{T}_{n})_{n\geq 0} by

Tˇn=sup{t0:Xt+SnXSn+αt=inf{Xu+SnXSn+αu,u0}}.\check{T}_{n}=\sup\{t\geq 0:X_{t+S_{n}}-X_{S_{n}}+\alpha t=\inf\{X_{u+S_{n}}-X_{S_{n}}+\alpha u,u\geq 0\}\}.

Note that TˇnnTˇ\check{T}_{n}\underset{n\rightarrow\infty}{\rightarrow}\check{T} almost surely because Tˇn=argmin{Xu+SXS+αu:uSnS}+SSn\check{T}_{n}=\mathrm{argmin}\{X_{u+S}-X_{S}+\alpha u:u\geq S_{n}-S\}+S-S_{n} and Tˇ>0\check{T}>0 with probability 11. Hence,

ZtD={D>t|¯t}={S+Tˇ>t|¯t}=limn{Tˇn+Sn>t|¯t}=limn1{Snt}+{Tˇn>tSn,Snt|¯t}=limn1{Snt}+k=0,k+12nt{Tˇn>tk+12n,Sn=k+12n|¯t}=limn1{Snt}+k=0,k+12nt{Tˇn>tk+12n|¯t}1{Sn=k+12n}\begin{split}Z_{t}^{D}&=\mathbb{P}\{D>t\,|\,\mathcal{\overline{F}}_{t}\}=\mathbb{P}\{S+\check{T}>t\,|\,\mathcal{\overline{F}}_{t}\}\\ &=\lim_{n\rightarrow\infty}\mathbb{P}\{\check{T}_{n}+S_{n}>t\,|\,\mathcal{\overline{F}}_{t}\}\\ &=\lim_{n\rightarrow\infty}\mathbbold{1}_{\{S_{n}\geq t\}}+\mathbb{P}\{\check{T}_{n}>t-S_{n},\,S_{n}\leq t\,|\,\mathcal{\overline{F}}_{t}\}\\ &=\lim_{n\rightarrow\infty}\mathbbold{1}_{\{S_{n}\geq t\}}+\sum_{k=0,\frac{k+1}{2^{n}}\leq t}\mathbb{P}\left\{\check{T}_{n}>t-\frac{k+1}{2^{n}},\,S_{n}=\frac{k+1}{2^{n}}\,|\,\mathcal{\overline{F}}_{t}\right\}\\ &=\lim_{n\rightarrow\infty}\mathbbold{1}_{\{S_{n}\geq t\}}+\sum_{k=0,\frac{k+1}{2^{n}}\leq t}\mathbb{P}\left\{\check{T}_{n}>t-\frac{k+1}{2^{n}}\,|\,\mathcal{\overline{F}}_{t}\right\}\mathbbold{1}_{\{S_{n}=\frac{k+1}{2^{n}}\}}\\ \end{split}

If we apply Lemma 8.1 for :=Sn\mathfrak{R}:=S_{n} and 𝔛:=1{Tˇn>tk+12n}\mathfrak{X}:=\mathbbold{1}_{\{\check{T}_{n}>t-\frac{k+1}{2^{n}}\}} we get

ZtD=limn1{Snt}+k=0,k+12nt{Tˇn>tk+12n|ˇtk+12n(n)}1{Sn=k+12n}Z_{t}^{D}=\lim_{n\rightarrow\infty}\mathbbold{1}_{\{S_{n}\geq t\}}+\sum_{k=0,\frac{k+1}{2^{n}}\leq t}\mathbb{P}\left\{\check{T}_{n}>t-\frac{k+1}{2^{n}}\,|\,\mathcal{\check{F}}^{(n)}_{t-\frac{k+1}{2^{n}}}\right\}\mathbbold{1}_{\{S_{n}=\frac{k+1}{2^{n}}\}}

where $(\mathcal{\check{F}}^{(n)}_{t})_{t\geq 0}=(\bigcap_{\epsilon>0}\sigma\{X_{u+S_{n}}-X_{S_{n}}:0\leq u\leq t+\epsilon\})_{t\geq 0}$. Now we use the following result from [Nik06, Theorem 8.22].

Proposition 6.7.

Let (Nt)t0(N_{t})_{t\geq 0} be a continuous local martingale such that N0=1N_{0}=1 and limtNt=0\lim_{t\rightarrow\infty}N_{t}=0. Let St=supstNsS_{t}=\sup_{s\leq t}N_{s}. Set

g:=sup{t0:Nt=S}=sup{t0:Nt=St}g:=\sup\{t\geq 0:N_{t}=S_{\infty}\}=\sup\{t\geq 0:N_{t}=S_{t}\}

Then, the Azéma supermartingale associated with the honest time gg is given by

Ztg={g>t|t}=NtSt.Z_{t}^{g}=\mathbb{P}\{g>t\,|\,\mathcal{F}_{t}\}=\frac{N_{t}}{S_{t}}.

We apply Proposition 6.7 to our case with $g:=\check{T}_{n}$ and the filtration $(\mathcal{\check{F}}^{(n)}_{t})_{t\geq 0}$ defined above.

By definition, we have Tˇn=sup{t0:Xˇt(n)+αt=inf{Xˇu(n)+αu:u0}}\check{T}_{n}=\sup\{t\geq 0:\check{X}^{(n)}_{t}+\alpha t=\inf\{\check{X}^{(n)}_{u}+\alpha u:u\geq 0\}\}, where Xˇt(n)=Xt+SnXSn\check{X}^{(n)}_{t}=X_{t+S_{n}}-X_{S_{n}}. Set

Nt=exp(2α(Xˇt(n)+αt)).N_{t}=\exp(-2\alpha(\check{X}^{(n)}_{t}+\alpha t)).

The process $N$ is clearly a local martingale that satisfies the conditions of Proposition 6.7, and we also have

Tˇn=sup{t0:Nt=sups0Ns}.\check{T}_{n}=\sup\{t\geq 0:N_{t}=\sup_{s\geq 0}N_{s}\}.

Hence

{Tˇn>t|ˇt(n)}=exp(2α(Xˇt(n)+αt)+2α(infst(Xˇs(n)+αs))).\mathbb{P}\{\check{T}_{n}>t\,|\,\mathcal{\check{F}}^{(n)}_{t}\}=\exp\left(-2\alpha(\check{X}^{(n)}_{t}+\alpha t)+2\alpha(\inf_{s\leq t}(\check{X}^{(n)}_{s}+\alpha s))\right).
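The martingale normalization underlying this formula, $\mathbb{E}[N_{t}]=N_{0}=1$, can be spot-checked by Monte Carlo (an illustration only; the choices of $\alpha$, $t$ and sample size are ours):

```python
import numpy as np

# Monte Carlo check that N_t = exp(-2*alpha*(B_t + alpha*t)) has mean one,
# i.e. that N is a true martingale started at N_0 = 1.
alpha, t = 0.5, 1.0
rng = np.random.default_rng(0)
B_t = rng.normal(scale=np.sqrt(t), size=1_000_000)  # marginal of BM at time t
m = np.exp(-2 * alpha * (B_t + alpha * t)).mean()
```

The exponential-martingale identity $\mathbb{E}[e^{-2\alpha B_{t}}]=e^{2\alpha^{2}t}$ makes the mean exactly one, and the simulation reproduces this up to Monte Carlo error.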

Finally, we get the expression of the Azéma supermartingale associated with DD as

ZtD=limn1{Snt}+k=0,k+12ntexp(2α(Xˇtk+12n(n)+α(tk+12n))+2αinf0stk+12n(Xˇs(n)+αs))×1{Sn=k+12n}.\begin{split}Z_{t}^{D}&=\lim_{n\rightarrow\infty}\mathbbold{1}_{\{S_{n}\geq t\}}\\ &\quad+\sum_{k=0,\frac{k+1}{2^{n}}\leq t}^{\infty}\exp\left(-2\alpha\left(\check{X}^{(n)}_{t-\frac{k+1}{2^{n}}}+\alpha\left(t-\frac{k+1}{2^{n}}\right)\right)+2\alpha\inf_{0\leq s\leq t-\frac{k+1}{2^{n}}}\left(\check{X}^{(n)}_{s}+\alpha s\right)\right)\\ &\qquad\times\mathbbold{1}_{\{S_{n}=\frac{k+1}{2^{n}}\}}.\end{split}

That is,

\begin{split}Z_{t}^{D}&=\lim_{n\rightarrow\infty}\mathbbold{1}_{\{S_{n}\geq t\}}\\ &\quad+\exp\left(-2\alpha(X_{t}+\alpha(t-S_{n}))+2\alpha\inf_{s\leq t-S_{n}}(X_{s+S_{n}}+\alpha s)\right)\mathbbold{1}_{\{S_{n}<t\}}.\end{split}

Thus, by sending nn\rightarrow\infty, we get that

Z_{t}^{D}=\mathbbold{1}_{\{S\geq t\}}+\exp\left(-2\alpha(\check{X}_{t-S}+\alpha(t-S))+2\alpha\inf_{s\leq t-S}(\check{X}_{s}+\alpha s)\right)\mathbbold{1}_{\{S<t\}}.

Now, using Theorem 6.6, every (𝔐t)t0(\mathfrak{M}_{t})_{t\geq 0} (¯t)t0(\mathcal{\overline{F}}_{t})_{t\geq 0}-local martingale is a (¯tD)t0(\mathcal{\overline{F}}_{t}^{D})_{t\geq 0}-semimartingale and decomposes as follows

𝔐t=𝔐~t+0tDd𝔐,ZDsZsDDtd𝔐,ZDs1ZsD,\mathfrak{M}_{t}=\tilde{\mathfrak{M}}_{t}+\int_{0}^{t\wedge D}\frac{d\langle\mathfrak{M},Z^{D}\rangle_{s}}{Z_{s}^{D}}-\int_{D}^{t}\frac{d\langle\mathfrak{M},Z^{D}\rangle_{s}}{1-Z_{s}^{D}},

where (𝔐~t)t0(\tilde{\mathfrak{M}}_{t})_{t\geq 0} denotes a ((¯tD),)((\mathcal{\overline{F}}_{t}^{D}),\mathbb{P})-local martingale.

We develop further the expression of ZDZ^{D} to get an explicit integral representation of its local martingale part.

Lemma 6.8.

Let BB be a standard Brownian motion and α>0\alpha>0. Define the process (t)t0(\mathfrak{H}_{t})_{t\geq 0} by

t=exp(2α[(Bt+αt)infst(Bs+αs)])\mathfrak{H}_{t}=\exp\left(-2\alpha\left[(B_{t}+\alpha t)-\inf_{s\leq t}(B_{s}+\alpha s)\right]\right)

Put It=infst(Bs+αs)I_{t}=\inf_{s\leq t}(B_{s}+\alpha s). Then,

t=12α0tu𝑑Bu+2αIt.\mathfrak{H}_{t}=1-2\alpha\int_{0}^{t}\mathfrak{H}_{u}\,dB_{u}+2\alpha I_{t}.
Proof.

Applying Itô’s formula on the semimartingale t=F(Bt+αt,It)\mathfrak{H}_{t}=F(B_{t}+\alpha t,I_{t}), where F(x,y)=exp(2α(yx))F(x,y)=\exp(2\alpha(y-x)), gives

dt=2αtdBt2α2tdt+2αtdIt+12(4α2)tdt=2αtdBt+2αtdItdt=2αtdBt+2αdIt\begin{split}\,d\mathfrak{H}_{t}&=-2\alpha\mathfrak{H}_{t}dB_{t}-2\alpha^{2}\mathfrak{H}_{t}dt+2\alpha\mathfrak{H}_{t}dI_{t}+\frac{1}{2}(4\alpha^{2})\mathfrak{H}_{t}dt\\ &=-2\alpha\mathfrak{H}_{t}dB_{t}+2\alpha\mathfrak{H}_{t}dI_{t}\\ \,d\mathfrak{H}_{t}&=-2\alpha\mathfrak{H}_{t}dB_{t}+2\alpha dI_{t}\end{split}

The last line follows from the fact that the measure dItdI_{t} is carried on the set {t:Bt+αt=It}={t:t=1}\{t:B_{t}+\alpha t=I_{t}\}=\{t:\mathfrak{H}_{t}=1\}. ∎
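The pathwise identity of Lemma 6.8 can be checked on a fine grid with a left-point (Euler) approximation of the stochastic integral; the tolerance below only reflects discretization error (grid sizes and seed are ours):

```python
import numpy as np

alpha, T, n = 1.0, 1.0, 200_000
rng = np.random.default_rng(42)
dt = T / n
dB = rng.normal(scale=np.sqrt(dt), size=n)
B = np.concatenate([[0.0], np.cumsum(dB)])
t = np.linspace(0, T, n + 1)
drifted = B + alpha * t
I = np.minimum.accumulate(drifted)                  # running infimum I_t
H = np.exp(-2 * alpha * (drifted - I))              # the process H_t of Lemma 6.8
# left-point Riemann sum for the stochastic integral  int_0^t H dB
stoch_int = np.cumsum(H[:-1] * dB)
lhs = H[1:]
rhs = 1 - 2 * alpha * stoch_int + 2 * alpha * I[1:]
err = np.max(np.abs(lhs - rhs))
```

The maximal discrepancy along the path vanishes as the grid is refined, consistent with the Itô computation in the proof.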

Substituting the formula from Lemma 6.8 into the expression for $Z^{D}$, we get that

\begin{split}Z_{t}^{D}&=\mathbbold{1}_{\{S\geq t\}}\\ &\quad+\mathbbold{1}_{\{S<t\}}\left(1-2\alpha\int_{0}^{t-S}\exp\left(-2\alpha(\check{X}_{u}+\alpha u)+2\alpha\inf_{s\leq u}(\check{X}_{s}+\alpha s)\right)d\check{X}_{u}+2\alpha\inf_{s\leq t-S}(\check{X}_{s}+\alpha s)\right).\end{split}

This can also be written as

ZtD=1+2α1{S<t}infstS(Xˇs+αs)2α0(tS)0exp(2α(Xˇu+αu)+2α(infsu(Xˇs+αs)))𝑑Xˇu.\begin{split}Z_{t}^{D}&=1+2\alpha\mathbbold{1}_{\{S<t\}}\inf_{s\leq t-S}(\check{X}_{s}+\alpha s)\\ &\quad-2\alpha\int_{0}^{(t-S)\vee 0}\exp\left(-2\alpha(\check{X}_{u}+\alpha u)+2\alpha\left(\inf_{s\leq u}(\check{X}_{s}+\alpha s)\right)\right)\,d\check{X}_{u}.\\ \end{split}

Put $H_{u}:=\exp(-2\alpha(\check{X}_{u}+\alpha u)+2\alpha\inf_{s\leq u}(\check{X}_{s}+\alpha s))$. We want to write the integral $\int_{0}^{(t-S)\vee 0}H_{u}\,d\check{X}_{u}$ as a stochastic integral with respect to the original Brownian motion $X$. For that we consider the time-change $(C_{t},\,t\geq 0)$ defined by $C_{t}:=t+S$. It is clear that this is a family of stopping times such that the map $t\mapsto C_{t}$ is almost surely increasing and continuous. Using [RY05, Chapter V, Proposition 1.5], we get that for every bounded $(\mathcal{\overline{F}}_{t})_{t\geq 0}$-progressively measurable process $(H_{t})_{t\geq 0}$ we have

C0CtHu𝑑Xu=0tHCu𝑑XCu.\int_{C_{0}}^{C_{t}}H_{u}\,dX_{u}=\int_{0}^{t}H_{C_{u}}\,dX_{C_{u}}.

In our case this becomes

St+SHuS𝑑Xu=0tHu𝑑Xˇu.\int_{S}^{t+S}H_{u-S}\,dX_{u}=\int_{0}^{t}H_{u}\,d\check{X}_{u}.

Hence,

0(tS)0Hu𝑑Xˇu=S(tS)0+SHuS𝑑Xu=0t1uSHuS𝑑Xu.\int_{0}^{(t-S)\vee 0}H_{u}\,d\check{X}_{u}=\int_{S}^{(t-S)\vee 0+S}H_{u-S}\,\,dX_{u}=\int_{0}^{t}\mathbbold{1}_{u\geq S}H_{u-S}\,dX_{u}.

Finally,

ZtD=1+2α1{S<t}infstS(Xˇs+αs)2α0tAu𝑑Xu,Z_{t}^{D}=1+2\alpha\mathbbold{1}_{\{S<t\}}\inf_{s\leq t-S}(\check{X}_{s}+\alpha s)-2\alpha\int_{0}^{t}A_{u}\,dX_{u},

where

\begin{split}A_{u}&=\mathbbold{1}_{u\geq S}H_{u-S}\\ &=\mathbbold{1}_{u\geq S}\exp\left(-2\alpha(\check{X}_{u-S}+\alpha(u-S))+2\alpha\inf_{s\leq u-S}(\check{X}_{s}+\alpha s)\right);\\ \end{split}

that is,

Au=1uSexp(2α(Xu+αu)+2α(infSsu(Xs+αs))).A_{u}=\mathbbold{1}_{u\geq S}\exp\left(-2\alpha(X_{u}+\alpha u)+2\alpha\left(\inf_{S\leq s\leq u}(X_{s}+\alpha s)\right)\right).

The process $t\mapsto 2\alpha\mathbbold{1}_{\{S<t\}}\inf_{s\leq t-S}(\check{X}_{s}+\alpha s)$ is decreasing, and so the $(\mathcal{\overline{F}}_{t})_{t\geq 0}$-local martingale part of $Z^{D}$ is equal to

2α0tAu𝑑Xu.-2\alpha\int_{0}^{t}A_{u}\,dX_{u}.

From the integral representation of martingales with respect to the Brownian filtration (see [RY05, Chapter V, Theorem 3.4]), every bounded $(\mathcal{\overline{F}}_{t})_{t\geq 0}$-martingale $(\mathfrak{M}_{t})_{t\geq 0}$ can be written as

𝔐t=C+0tμs𝑑Xs.\mathfrak{M}_{t}=C+\int_{0}^{t}\mu_{s}\,dX_{s}.

Such a process decomposes as a $(\mathcal{\overline{F}}_{t}^{D})_{t\geq 0}$-semimartingale as follows:

𝔐t=𝔐~t2α0tDμsAsdsZsD+2αDtμsAsds1ZsD,\mathfrak{M}_{t}=\tilde{\mathfrak{M}}_{t}-2\alpha\int_{0}^{t\wedge D}\frac{\mu_{s}A_{s}\,ds}{Z_{s}^{D}}+2\alpha\int_{D}^{t}\frac{\mu_{s}A_{s}\,ds}{1-Z_{s}^{D}},

where (𝔐~t)t0(\tilde{\mathfrak{M}}_{t})_{t\geq 0} is a ((¯tD)t0,)((\mathcal{\overline{F}}_{t}^{D})_{t\geq 0},\mathbb{P})-local martingale.

7. General facts about the α\alpha-Lipschitz minorant

Recall that a function f:f:\mathbb{R}\mapsto\mathbb{R} admits an α\alpha-Lipschitz minorant mm if and only if ff is bounded below on compact sets, lim inftf(t)αt>\liminf_{t\rightarrow-\infty}f(t)-\alpha t>-\infty, and lim inft+f(t)+αt>\liminf_{t\rightarrow+\infty}f(t)+\alpha t>-\infty. In this case,

(7.1) m(t)=inf{f(s)+α|ts|:s},t.m(t)=\inf\{f(s)+\alpha|t-s|:s\in\mathbb{R}\},\quad t\in\mathbb{R}.
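
For concrete computations, (7.1) can be evaluated on a uniform grid in linear time: the infimum splits at $s=t$ into a forward pass over $s\leq t$ and a backward pass over $s\geq t$. A minimal sketch (grid, step, and test function are arbitrary illustrative choices, not from the paper):

```python
import numpy as np

def lipschitz_minorant(f, dt, alpha):
    """alpha-Lipschitz minorant m(t) = inf_s f(s) + alpha*|t - s| on a uniform grid."""
    n = len(f)
    fwd = np.array(f, dtype=float)   # inf over s <= t of f(s) + alpha*(t - s)
    bwd = np.array(f, dtype=float)   # inf over s >= t of f(s) + alpha*(s - t)
    for i in range(1, n):
        fwd[i] = min(fwd[i - 1] + alpha * dt, fwd[i])
        bwd[n - 1 - i] = min(bwd[n - i] + alpha * dt, bwd[n - 1 - i])
    return np.minimum(fwd, bwd)

t = np.arange(-1.0, 1.5, 0.5)        # grid -1, -0.5, 0, 0.5, 1
m = lipschitz_minorant(2 * np.abs(t), dt=0.5, alpha=1.0)
# for f(t) = 2|t| and alpha = 1 the minorant is |t|
assert np.allclose(m, np.abs(t))
```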

The following result is obvious from (7.1).

Lemma 7.1.

Suppose that f:f:\mathbb{R}\to\mathbb{R} is a function with an α\alpha-Lipschitz minorant. For x,sx,s\in\mathbb{R}, define g:g:\mathbb{R}\to\mathbb{R} by g=x+f(s+)g=x+f(s+\cdot). Write mfm_{f} and mgm_{g} for the respective α\alpha-Lipschitz minorants of ff and gg. Then mg=x+mf(s+)m_{g}=x+m_{f}(s+\cdot).

The next result is a consequence of [AE14, Corollary 9.2] and Lemma 7.1, but we include a proof for the sake of completeness.

Lemma 7.2.

Consider a function f:f:\mathbb{R}\to\mathbb{R} for which the α\alpha-Lipschitz minorant mm exists. Fix aa\in\mathbb{R} such that m(a)=f(a)m(a)=f(a). Define f:f^{\rightarrow}:\mathbb{R}\to\mathbb{R} by

f(t)={f(a)+α(ta),ta,f(t),t>a.f^{\rightarrow}(t)=\begin{cases}f(a)+\alpha(t-a),&t\leq a,\\ f(t),&t>a.\\ \end{cases}

Denote the α\alpha-Lipschitz minorant of ff^{\rightarrow} by mm^{\rightarrow}. Then m(t)=m(t)m(t)=m^{\rightarrow}(t) for all tat\geq a.

Proof.

From (7.1) applied to $f^{\rightarrow}$ we have, for every $t\geq a$,

m(t)=inf{f(s)+α|ts|:s>a}(m(a)+α(ta)).m^{\rightarrow}(t)=\inf\{f(s)+\alpha|t-s|:s>a\}\wedge(m(a)+\alpha(t-a)).

Note that

m(a)+α(ta)=inf{f(s)+α|sa|:s}+α(ta)inf{f(s)+α|sa|:sa}+α(ta)=inf{f(s)+α(as)+α(ta):sa}=inf{f(s)+α|ts|:sa}\begin{split}m(a)+\alpha(t-a)&=\inf\{f(s)+\alpha|s-a|:s\in\mathbb{R}\}+\alpha(t-a)\\ &\leq\inf\{f(s)+\alpha|s-a|:s\leq a\}+\alpha(t-a)\\ &=\inf\{f(s)+\alpha(a-s)+\alpha(t-a):s\leq a\}\\ &=\inf\{f(s)+\alpha|t-s|:s\leq a\}\end{split}

and so m(t)m(t)m^{\rightarrow}(t)\leq m(t) for tat\geq a.

For the reverse inequality, it suffices to prove that

\inf\{f(s)+\alpha|t-s|:s\in\mathbb{R}\}\leq m(a)+\alpha(t-a),\quad t\geq a.

By the triangle inequality, $|t-s|\leq|s-a|+|a-t|$, and so

f(s)+\alpha|t-s|\leq f(s)+\alpha|s-a|+\alpha(t-a)

for every $s\in\mathbb{R}$ and $t\geq a$. Taking the infimum over $s\in\mathbb{R}$ and recalling from (7.1) that $m(a)=\inf\{f(s)+\alpha|s-a|:s\in\mathbb{R}\}$ gives the claim. ∎
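
Lemma 7.2 lends itself to a quick numerical sanity check on a grid: replacing $f$ to the left of a contact point $a$ by the line of slope $\alpha$ through $(a,f(a))$ leaves the minorant unchanged on $[a,\infty)$. A sketch using brute-force evaluation of (7.1) with an arbitrary test function (not part of the paper's argument):

```python
import numpy as np

def minorant(t, f, alpha):
    # brute-force evaluation of (7.1) on a grid: m(t_i) = min_j f(t_j) + alpha*|t_i - t_j|
    return np.array([np.min(f + alpha * np.abs(t - ti)) for ti in t])

alpha = 1.0
t = np.linspace(-2.0, 2.0, 401)
f = 2 * np.abs(t)                    # contact point at a = 0, with m(0) = f(0) = 0
m = minorant(t, f, alpha)

a_idx = 200                          # a = 0 on this grid
assert np.isclose(m[a_idx], f[a_idx])

# f_arrow: line of slope alpha to the left of a, f itself to the right
f_arrow = np.where(t <= 0, f[a_idx] + alpha * (t - 0.0), f)
m_arrow = minorant(t, f_arrow, alpha)

# the two minorants agree for t >= a
assert np.allclose(m[a_idx:], m_arrow[a_idx:])
```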

The following result is [AE14, Lemma 9.4].

Lemma 7.3.

Let f:f:\mathbb{R}\to\mathbb{R} be a càdlàg function with α\alpha-Lipschitz minorant m:m:\mathbb{R}\to\mathbb{R}. Set

𝐝:=inf{t>0:f(t)f(t)=m(t)},\mathbf{d}:=\inf\{t>0:f(t)\wedge f(t-)=m(t)\},
𝐬:=inf{t>0:f(t)f(t)αtinf{f(u)αu:u0}},\mathbf{s}:=\inf\left\{t>0:f(t)\wedge f(t-)-\alpha t\leq\inf\{f(u)-\alpha u:u\leq 0\}\right\},

and

𝐞:=inf{t𝐬:f(t)f(t)+α(t𝐬)=inf{f(u)+α(u𝐬):u𝐬}}.\mathbf{e}:=\inf\left\{t\geq\mathbf{s}:f(t)\wedge f(t-)+\alpha(t-\mathbf{s})=\inf\{f(u)+\alpha(u-\mathbf{s}):u\geq\mathbf{s}\}\right\}.

Suppose that f(𝐬)f(𝐬)f(\mathbf{s})\leq f(\mathbf{s}-). Then, 𝐞=𝐝\mathbf{e}=\mathbf{d}.

Let us also state here a simple expression for the time $\mathbf{s}$ of Lemma 7.3 when zero is a contact point.

Lemma 7.4.

Let $f:\mathbb{R}\to\mathbb{R}$ be a continuous function with $\alpha$-Lipschitz minorant $m:\mathbb{R}\to\mathbb{R}$, and suppose that $m(0)=f(0)=0$. Then the time $\mathbf{s}$ defined in Lemma 7.3 is given by

𝐬=inf{t>0:f(t)=αt}.\mathbf{s}=\inf\{t>0:f(t)=\alpha t\}.
Proof.

This is straightforward: since $f(0)=0$,

0\geq\inf\{f(u)-\alpha u:u\leq 0\}\geq\inf\{f(u)+\alpha|u|:u\in\mathbb{R}\}=m(0)=0,

so $\inf\{f(u)-\alpha u:u\leq 0\}=0$ and hence $\mathbf{s}=\inf\{t>0:f(t)\leq\alpha t\}$. Since $f$ is continuous and $f(t)>\alpha t$ for $0<t<\mathbf{s}$, this infimum coincides with $\inf\{t>0:f(t)=\alpha t\}$. ∎
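
The two descriptions of $\mathbf{s}$ can be compared on a grid (a sketch with arbitrary illustrative values; the grid discretization of the infima is only an approximation of the continuous-time definitions):

```python
import numpy as np

alpha = 1.0
t = np.array([-1.0, 0.0, 1.0, 2.0, 3.0])
f = np.array([2.0, 0.0, 2.0, 0.5, 3.0])   # toy continuous path with f(0) = 0

# check the contact condition m(0) = f(0) = 0 via (7.1)
m0 = np.min(f + alpha * np.abs(t))
assert m0 == 0.0 and f[1] == 0.0

# definition of s from Lemma 7.3 (grid version; f continuous, so f(t) ∧ f(t-) = f(t))
past_inf = np.min((f - alpha * t)[t <= 0])
s_def = t[(t > 0) & (f - alpha * t <= past_inf)][0]

# simplified form from Lemma 7.4: first t > 0 with f(t) <= alpha * t
s_simple = t[(t > 0) & (f <= alpha * t)][0]

assert s_def == s_simple == 2.0
```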

The following lemma describes the shape of the α\alpha-Lipschitz minorant between two consecutive points of the contact set. It is [AE14, Lemma 8.3].

Lemma 7.5.

Suppose that $f:\mathbb{R}\to\mathbb{R}$ is a càdlàg function with $\alpha$-Lipschitz minorant $m:\mathbb{R}\to\mathbb{R}$. The set $\{t\in\mathbb{R}:m(t)=f(t)\wedge f(t-)\}$ is closed. If $t^{\prime}<t^{\prime\prime}$ are such that $f(t^{\prime})\wedge f(t^{\prime}-)=m(t^{\prime})$, $f(t^{\prime\prime})\wedge f(t^{\prime\prime}-)=m(t^{\prime\prime})$, and $f(t)\wedge f(t-)>m(t)$ for all $t^{\prime}<t<t^{\prime\prime}$, then, setting $t^{*}=(f(t^{\prime\prime})\wedge f(t^{\prime\prime}-)-f(t^{\prime})\wedge f(t^{\prime}-)+\alpha(t^{\prime\prime}+t^{\prime}))/(2\alpha)$,

m(t)=\begin{cases}f(t^{\prime})\wedge f(t^{\prime}-)+\alpha(t-t^{\prime}),&t^{\prime}\leq t\leq t^{*},\\ f(t^{\prime\prime})\wedge f(t^{\prime\prime}-)+\alpha(t^{\prime\prime}-t),&t^{*}\leq t\leq t^{\prime\prime}.\\ \end{cases}
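
Between consecutive contact points the minorant is thus a "tent": it rises at slope $\alpha$ from $t^{\prime}$ and falls at slope $-\alpha$ into $t^{\prime\prime}$, the two pieces meeting at $t^{*}$. A sketch checking this against brute-force evaluation of (7.1) (the test function is an arbitrary illustrative choice, continuous at the contact points so that $f(t)\wedge f(t-)=f(t)$ there):

```python
import numpy as np

def minorant(t, f, alpha):
    # brute-force evaluation of (7.1) on a grid
    return np.array([np.min(f + alpha * np.abs(t - ti)) for ti in t])

alpha = 1.0
t = np.linspace(0.0, 2.0, 201)
# f large strictly between the contact points t' = 0 and t'' = 2, zero at both
f = np.where((t > 0) & (t < 2), 10.0, 0.0)
m = minorant(t, f, alpha)

t1, t2 = 0.0, 2.0
t_star = (f[-1] - f[0] + alpha * (t2 + t1)) / (2 * alpha)   # = 1.0 here
tent = np.where(t <= t_star, f[0] + alpha * (t - t1), f[-1] + alpha * (t2 - t))
assert np.allclose(m, tent)
```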

8. Two random time lemmas

We detail in this section two lemmas that were used previously in Section 3 and Section 6. We take $(X_{t})_{t\in\mathbb{R}}$ to be a two-sided Lévy process with $(\mathcal{F}_{t})_{t\in\mathbb{R}}$ its canonical right-continuous filtration; that is, $\mathcal{F}_{t}:=\bigcap_{\epsilon>0}\sigma\{X_{s},\,-\infty<s\leq t+\epsilon\}$, $t\in\mathbb{R}$.

Lemma 8.1.

Let $\mathfrak{R}$ be an $(\mathcal{F}_{t})_{t\in\mathbb{R}}$-stopping time that takes values in a countable subset of $\mathbb{R}$. Define the $\sigma$-fields $\check{\mathcal{F}}_{t}:=\bigcap_{\epsilon>0}\sigma(\{X_{u+\mathfrak{R}}-X_{\mathfrak{R}}:0\leq u\leq t+\epsilon\})$, $t\geq 0$, and put $\check{\mathcal{F}}_{\infty}:=\bigvee_{t\geq 0}\check{\mathcal{F}}_{t}$. For every random variable $\mathfrak{X}$ measurable with respect to $\check{\mathcal{F}}_{\infty}$, every $t\in\mathbb{R}$, and every $r\leq t$, we have almost surely that

𝔼[𝔛|t]1{=r}=𝔼[𝔛1{=r}|t]=𝔼[𝔛|ˇtr]1{=r}.\mathbb{E}\left[\mathfrak{X}\,|\,\mathcal{F}_{t}\right]\mathbbold{1}_{\{\mathfrak{R}=r\}}=\mathbb{E}\left[\mathfrak{X}\mathbbold{1}_{\{\mathfrak{R}=r\}}\,|\,\mathcal{F}_{t}\right]=\mathbb{E}\left[\mathfrak{X}\,|\,\mathcal{\check{F}}_{t-r}\right]\mathbbold{1}_{\{\mathfrak{R}=r\}}.
Proof.

The first equality is trivial because the event {=r}\{\mathfrak{R}=r\} is r\mathcal{F}_{r}-measurable and hence t\mathcal{F}_{t}-measurable. We therefore need only prove the second equality.

By a monotone class argument, it suffices to show that the second equality holds for $\mathfrak{X}=\prod_{i=1}^{n}f_{i}(X_{u_{i}+\mathfrak{R}}-X_{\mathfrak{R}})$, where $0\leq u_{1}<u_{2}<\ldots<u_{n}$ and $f_{1},\ldots,f_{n}$ are nonnegative Borel functions.

We have for any t\mathcal{F}_{t}-measurable nonnegative random variable AtA_{t} that

𝔼[At1{=r}i=1n(fi(Xui+X))]=𝔼[At1{=r}i=1n(fi(Xui+rXr))]=𝔼[At1{=r}ui<tr(fi(Xui+rXr))×𝔼[uitr(fi(Xui+rXr))|t]]=𝔼[At1{=r}ui<tr(fi(Xui+rXr))×𝔼[uitr(fi(Xui+rXt+X(tr)+rXr))|t]].\begin{split}&\mathbb{E}\left[A_{t}\mathbbold{1}_{\{\mathfrak{R}=r\}}\prod_{i=1}^{n}(f_{i}(X_{u_{i}+\mathfrak{R}}-X_{\mathfrak{R}}))\right]\\ &\quad=\mathbb{E}\left[A_{t}\mathbbold{1}_{\{\mathfrak{R}=r\}}\prod_{i=1}^{n}(f_{i}(X_{u_{i}+r}-X_{r}))\right]\\ &\quad=\mathbb{E}\Bigg{[}A_{t}\mathbbold{1}_{\{\mathfrak{R}=r\}}\prod_{u_{i}<t-r}(f_{i}(X_{u_{i}+r}-X_{r}))\\ &\qquad\times\mathbb{E}\Bigg{[}\prod_{u_{i}\geq t-r}(f_{i}(X_{u_{i}+r}-X_{r}))\,|\,\mathcal{F}_{t}\Bigg{]}\Bigg{]}\\ &\quad=\mathbb{E}\Bigg{[}A_{t}\mathbbold{1}_{\{\mathfrak{R}=r\}}\prod_{u_{i}<t-r}(f_{i}(X_{u_{i}+r}-X_{r}))\\ &\qquad\times\mathbb{E}\Bigg{[}\prod_{u_{i}\geq t-r}(f_{i}(X_{u_{i}+r}-X_{t}+X_{(t-r)+r}-X_{r}))\,|\,\mathcal{F}_{t}\Bigg{]}\Bigg{]}.\\ \end{split}

Using the independence and stationarity of the increments of the Lévy process XX gives

𝔼[At1{=r}i=1n(fi(Xui+X))]=𝔼[At1{=r}ui<trn(fi(Xui+rXr))×uitrn(gi(X(tr)+rXr))]\begin{split}&\mathbb{E}\Bigg{[}A_{t}\mathbbold{1}_{\{\mathfrak{R}=r\}}\prod_{i=1}^{n}(f_{i}(X_{u_{i}+\mathfrak{R}}-X_{\mathfrak{R}}))\Bigg{]}\\ &\quad=\mathbb{E}\Bigg{[}A_{t}\mathbbold{1}_{\{\mathfrak{R}=r\}}\prod_{u_{i}<t-r}^{n}(f_{i}(X_{u_{i}+r}-X_{r}))\\ &\qquad\times\prod_{u_{i}\geq t-r}^{n}(g_{i}(X_{(t-r)+r}-X_{r}))\Bigg{]}\\ \end{split}

for gi:=𝔼[fi(Xui+rt+)]g_{i}:=\mathbb{E}\left[f_{i}(X_{u_{i}+r-t}+\cdot)\right]. Thus,

𝔼[At1{=r}i=1n(fi(Xui+X))]=𝔼[At1{=r}ui<tr(fi(Xui+X))×uitr(gi(X(tr)+X))].\begin{split}&\mathbb{E}\Bigg{[}A_{t}\mathbbold{1}_{\{\mathfrak{R}=r\}}\prod_{i=1}^{n}(f_{i}(X_{u_{i}+\mathfrak{R}}-X_{\mathfrak{R}}))\Bigg{]}\\ &\quad=\mathbb{E}\Bigg{[}A_{t}\mathbbold{1}_{\{\mathfrak{R}=r\}}\prod_{u_{i}<t-r}(f_{i}(X_{u_{i}+\mathfrak{R}}-X_{\mathfrak{R}}))\\ &\quad\times\prod_{u_{i}\geq t-r}(g_{i}(X_{(t-r)+\mathfrak{R}}-X_{\mathfrak{R}}))\Bigg{]}.\\ \end{split}

Because the process (Xu+X)u0(X_{u+\mathfrak{R}}-X_{\mathfrak{R}})_{u\geq 0} is itself a Lévy process with respect to the filtration (ˇt)t0(\mathcal{\check{F}}_{t})_{t\geq 0} and it has the same distribution as (Xt)t0(X_{t})_{t\geq 0}, we have

𝔼[i=1n(fi(Xui+X))|ˇtr]=ui<trn(fi(Xui+X))uitrn(gi(X(tr)+X)).\begin{split}&\mathbb{E}\left[\prod_{i=1}^{n}(f_{i}(X_{u_{i}+\mathfrak{R}}-X_{\mathfrak{R}}))\,|\,\mathcal{\check{F}}_{t-r}\right]\\ &\quad=\prod_{u_{i}<t-r}^{n}(f_{i}(X_{u_{i}+\mathfrak{R}}-X_{\mathfrak{R}}))\prod_{u_{i}\geq t-r}^{n}(g_{i}(X_{(t-r)+\mathfrak{R}}-X_{\mathfrak{R}})).\\ \end{split}

Thus we finally get the desired equality

𝔼[At1{=r}i=1n(fi(Xui+X))]=𝔼[At1{=r}𝔼[i=1n(fi(Xui+X))|ˇtr]].\begin{split}&\mathbb{E}\left[A_{t}\mathbbold{1}_{\{\mathfrak{R}=r\}}\prod_{i=1}^{n}(f_{i}(X_{u_{i}+\mathfrak{R}}-X_{\mathfrak{R}}))\right]\\ &\quad=\mathbb{E}\left[A_{t}\mathbbold{1}_{\{\mathfrak{R}=r\}}\mathbb{E}\left[\prod_{i=1}^{n}(f_{i}(X_{u_{i}+\mathfrak{R}}-X_{\mathfrak{R}}))\,|\,\mathcal{\check{F}}_{t-r}\right]\right].\\ \end{split}
∎

Lemma 8.2.

Suppose that almost surely $\lim_{t\rightarrow\infty}X_{t}=\infty$ and that zero is regular for $(0,\infty)$ for the process $(X_{t})_{t\in\mathbb{R}}$. Let $\mathfrak{R}$ be an $(\mathcal{F}_{t})_{t\in\mathbb{R}}$-stopping time. Put $(\check{X}_{t})_{t\geq 0}:=(X_{t+\mathfrak{R}}-X_{\mathfrak{R}})_{t\geq 0}$. Consider the random time $\mathfrak{L}:=\sup\{t\geq 0:\check{X}_{t}\wedge\check{X}_{t-}=\inf\{\check{X}_{u}:u\geq 0\}\}$. Then, setting $\mathfrak{D}:=\mathfrak{R}+\mathfrak{L}$, the $\sigma$-field $\sigma\{\check{X}_{t+\mathfrak{L}}-\check{X}_{\mathfrak{L}}:t\geq 0\}$ is independent of the $\sigma$-field $\mathcal{F}_{\mathfrak{D}-}\vee\sigma\{X_{\mathfrak{D}}\}$.

Proof.

We begin with an observation. Define the $\sigma$-fields $\check{\mathcal{F}}_{t}:=\bigcap_{\epsilon>0}\sigma(\{X_{s+\mathfrak{R}}-X_{\mathfrak{R}}:0\leq s\leq t+\epsilon\})$, $t\geq 0$, and put $\check{\mathcal{F}}_{\infty}:=\bigvee_{t\geq 0}\check{\mathcal{F}}_{t}$. It follows from the part of the proof of Theorem 3.5 that precedes the use of the current lemma that $\sigma\{\check{X}_{t+\mathfrak{L}}-\check{X}_{\mathfrak{L}}:t\geq 0\}$ is independent of

ˇ𝔏:=σ{ξˇ𝔏:(ξˇt)t0 is an optional process with respect to the filtration (tˇ)t0}.\check{\mathcal{F}}_{\mathfrak{L}}:=\sigma\{\check{\xi}_{\mathfrak{L}}:\,(\check{\xi}_{t})_{t\geq 0}\text{ is an optional process with respect to the filtration }(\check{\mathcal{F}_{t}})_{t\geq 0}\}.

Returning to the statement of the lemma, and noticing that $X_{\mathfrak{D}}=X_{\mathfrak{R}}+\check{X}_{\mathfrak{L}}$, it suffices to prove for any bounded, nonnegative $\sigma\{\check{X}_{t+\mathfrak{L}}-\check{X}_{\mathfrak{L}}:t\geq 0\}$-measurable random variable $\mathfrak{Y}$, any bounded, nonnegative, continuous functions $g^{1},\ldots,g^{n},h^{1},h^{2}$, and any previsible processes $\xi^{1},\ldots,\xi^{n}$ with respect to the filtration $(\mathcal{F}_{t})_{t\in\mathbb{R}}$ that

(8.1) 𝔼[𝔜i=1ngi(ξ𝔇i)h1(X)h2(Xˇ𝔏)]=𝔼[𝔜]𝔼[i=1ngi(ξ𝔇i)h1(X)h2(Xˇ𝔏)].\mathbb{E}\left[\mathfrak{Y}\prod_{i=1}^{n}g^{i}(\xi^{i}_{\mathfrak{D}})h^{1}(X_{\mathfrak{R}})h^{2}(\check{X}_{\mathfrak{L}})\right]=\mathbb{E}[\mathfrak{Y}]\mathbb{E}\left[\prod_{i=1}^{n}g^{i}(\xi^{i}_{\mathfrak{D}})h^{1}(X_{\mathfrak{R}})h^{2}(\check{X}_{\mathfrak{L}})\right].

However, $(\prod_{i=1}^{n}g^{i}(\xi^{i}_{t}))_{t\in\mathbb{R}}$ is itself a previsible process, so to establish (8.1) it suffices to prove for any bounded, nonnegative $\sigma\{\check{X}_{t+\mathfrak{L}}-\check{X}_{\mathfrak{L}}:t\geq 0\}$-measurable random variable $\mathfrak{Y}$, any bounded, nonnegative process $\xi$ that is previsible with respect to the filtration $(\mathcal{F}_{t})_{t\in\mathbb{R}}$, and any bounded, nonnegative, continuous functions $h^{1},h^{2}$ that

(8.2) 𝔼[𝔜ξ𝔇h1(X)h2(Xˇ𝔏)]=𝔼[𝔜]𝔼[ξ𝔇h1(X)h2(Xˇ𝔏)].\mathbb{E}[\mathfrak{Y}\xi_{\mathfrak{D}}h^{1}(X_{\mathfrak{R}})h^{2}(\check{X}_{\mathfrak{L}})]=\mathbb{E}[\mathfrak{Y}]\mathbb{E}[\xi_{\mathfrak{D}}h^{1}(X_{\mathfrak{R}})h^{2}(\check{X}_{\mathfrak{L}})].

A stochastic process, viewed as a map from $\Omega\times\mathbb{R}$ to $\mathbb{R}$, is previsible with respect to the filtration $(\mathcal{F}_{t})_{t\in\mathbb{R}}$ if and only if it is measurable with respect to the $\sigma$-field generated by the maps $(\omega,t)\mapsto\mathbbold{1}_{t>T(\omega)}$, where $T$ ranges through the set of $\mathbb{R}\cup\{+\infty\}$-valued $(\mathcal{F}_{t})_{t\in\mathbb{R}}$-stopping times (see [RW87, Chapter IV, Corollary 6.9] for the analogous fact about previsible processes indexed by $(0,\infty)$). Also, note that the collection of sets $\mathcal{A}=\{\{(\omega,t):t>T(\omega)\}:T\text{ is a stopping time}\}$ is a $\pi$-system, because the minimum of two stopping times is a stopping time. Hence, to establish (8.2), it suffices by a monotone class argument to show for any $\mathbb{R}\cup\{+\infty\}$-valued $(\mathcal{F}_{t})_{t\in\mathbb{R}}$-stopping time $T$ that

(8.3) 𝔼[𝔜1{𝔇>T}h1(X)h2(Xˇ𝔏)]=𝔼[𝔜]𝔼[1{𝔇>T}h1(X)h2(Xˇ𝔏)].\mathbb{E}[\mathfrak{Y}\mathbbold{1}_{\{\mathfrak{D}>T\}}h^{1}(X_{\mathfrak{R}})h^{2}(\check{X}_{\mathfrak{L}})]=\mathbb{E}[\mathfrak{Y}]\mathbb{E}[\mathbbold{1}_{\{\mathfrak{D}>T\}}h^{1}(X_{\mathfrak{R}})h^{2}(\check{X}_{\mathfrak{L}})].

Because $\mathbbold{1}_{\{\mathfrak{D}>T\}}=\lim_{n\rightarrow\infty}\mathbbold{1}_{\{\mathfrak{D}>T\wedge n\}}$, it further suffices to check (8.3) for $T$ an $\mathbb{R}$-valued $(\mathcal{F}_{t})_{t\in\mathbb{R}}$-stopping time.

Consider (8.3) in the special case when \mathfrak{R} and TT take values in the countable set {rk:=k2n,k}\{r_{k}:=\frac{k}{2^{n}},k\in\mathbb{Z}\}. We then have

𝔼[𝔜1{𝔇>T}h1(X)h2(Xˇ𝔏)]\displaystyle\mathbb{E}[\mathfrak{Y}\mathbbold{1}_{\{\mathfrak{D}>T\}}h^{1}(X_{\mathfrak{R}})h^{2}(\check{X}_{\mathfrak{L}})] =𝔼[𝔜1{+𝔏>T}h1(X)h2(Xˇ𝔏)]\displaystyle=\mathbb{E}[\mathfrak{Y}\mathbbold{1}_{\{\mathfrak{R}+\mathfrak{L}>T\}}h^{1}(X_{\mathfrak{R}})h^{2}(\check{X}_{\mathfrak{L}})]
=k,l𝔼[𝔜1{𝔏>rlrk}\displaystyle\quad=\sum_{k,l\in\mathbb{Z}}\mathbb{E}[\mathfrak{Y}\mathbbold{1}_{\{\mathfrak{L}>r_{l}-r_{k}\}}
×h1(Xrk)h2(Xˇ𝔏)1{=rk,T=rl}]\displaystyle\quad\times h^{1}(X_{r_{k}})h^{2}(\check{X}_{\mathfrak{L}})\mathbbold{1}_{\{\mathfrak{R}=r_{k},T=r_{l}\}}]
=l<k𝔼[𝔜1{=rk,T=rl}h1(Xrk)h2(Xˇ𝔏)]\displaystyle=\sum_{l<k}\mathbb{E}[\mathfrak{Y}\mathbbold{1}_{\{\mathfrak{R}=r_{k},T=r_{l}\}}h^{1}(X_{r_{k}})h^{2}(\check{X}_{\mathfrak{L}})]
+kl𝔼[𝔜1{𝔏>rlrk}1{=rk,T=rl}\displaystyle\quad+\sum_{k\leq l}\mathbb{E}[\mathfrak{Y}\mathbbold{1}_{\{\mathfrak{L}>r_{l}-r_{k}\}}\mathbbold{1}_{\{\mathfrak{R}=r_{k},T=r_{l}\}}
×h1(Xrk)h2(Xˇ𝔏)]\displaystyle\quad\times h^{1}(X_{r_{k}})h^{2}(\check{X}_{\mathfrak{L}})]
=l<k𝔼[𝔜]𝔼[1{=rk,T=rl}h1(Xrk)h2(Xˇ𝔏)]\displaystyle=\sum_{l<k}\mathbb{E}[\mathfrak{Y}]\mathbb{E}[\mathbbold{1}_{\{\mathfrak{R}=r_{k},T=r_{l}\}}h^{1}(X_{r_{k}})h^{2}(\check{X}_{\mathfrak{L}})]
+kl𝔼[𝔼[𝔜1{𝔏>rlrk}h2(Xˇ𝔏)|rl]\displaystyle\quad+\sum_{k\leq l}\mathbb{E}[\mathbb{E}[\mathfrak{Y}\mathbbold{1}_{\{\mathfrak{L}>r_{l}-r_{k}\}}h^{2}(\check{X}_{\mathfrak{L}})\,|\,\mathcal{F}_{r_{l}}]
×1{=rk,T=rl}h1(Xrk)]\displaystyle\quad\times\mathbbold{1}_{\{\mathfrak{R}=r_{k},T=r_{l}\}}h^{1}(X_{r_{k}})]

By applying Lemma 8.1 with $\mathfrak{X}=\mathfrak{Y}\mathbbold{1}_{\{\mathfrak{L}>r_{l}-r_{k}\}}h^{2}(\check{X}_{\mathfrak{L}})$, we have, for $k\leq l$, that

𝔼[𝔜1{𝔏>rlrk}h2(Xˇ𝔏)|rl]1{=rk,T=rl}\displaystyle\mathbb{E}[\mathfrak{Y}\mathbbold{1}_{\{\mathfrak{L}>r_{l}-r_{k}\}}h^{2}(\check{X}_{\mathfrak{L}})|\mathcal{F}_{r_{l}}]\mathbbold{1}_{\{\mathfrak{R}=r_{k},T=r_{l}\}}
=𝔼[𝔜1{𝔏>rlrk}h2(Xˇ𝔏)|ˇrlrk]1{=rk,T=rl}.\displaystyle\quad=\mathbb{E}[\mathfrak{Y}\mathbbold{1}_{\{\mathfrak{L}>r_{l}-r_{k}\}}h^{2}(\check{X}_{\mathfrak{L}})|\mathcal{\check{F}}_{r_{l}-r_{k}}]\mathbbold{1}_{\{\mathfrak{R}=r_{k},T=r_{l}\}}.

Moreover, if we let $\check{A}$ be an event in $\mathcal{\check{F}}_{r_{l}-r_{k}}$, then

𝔼[𝔜1{𝔏>rlrk}Aˇh2(Xˇ𝔏)]=𝔼[𝔜]𝔼[1{𝔏>rlrk}Aˇh2(Xˇ𝔏)],\mathbb{E}[\mathfrak{Y}\mathbbold{1}_{\{\mathfrak{L}>r_{l}-r_{k}\}\cap\check{A}}h^{2}(\check{X}_{\mathfrak{L}})]=\mathbb{E}[\mathfrak{Y}]\mathbb{E}[\mathbbold{1}_{\{\mathfrak{L}>r_{l}-r_{k}\}\cap\check{A}}h^{2}(\check{X}_{\mathfrak{L}})],

because the process $(\check{\xi}_{t})_{t\geq 0}=(\mathbbold{1}_{\{t>r_{l}-r_{k}\}\cap\check{A}}h^{2}(\check{X}_{t}))_{t\geq 0}$ is clearly an $(\mathcal{\check{F}}_{t})_{t\geq 0}$-optional process (it is the product of the left-continuous, right-limited $(\mathcal{\check{F}}_{t})_{t\geq 0}$-adapted process $(\mathbbold{1}_{\{t>r_{l}-r_{k}\}\cap\check{A}})_{t\geq 0}$ and the càdlàg $(\mathcal{\check{F}}_{t})_{t\geq 0}$-adapted process $(h^{2}(\check{X}_{t}))_{t\geq 0}$). Hence

\mathbb{E}[\mathfrak{Y}\mathbbold{1}_{\{\mathfrak{L}>r_{l}-r_{k}\}}h^{2}(\check{X}_{\mathfrak{L}})\,|\,\mathcal{\check{F}}_{r_{l}-r_{k}}]=\mathbb{E}[\mathfrak{Y}]\,\mathbb{E}[\mathbbold{1}_{\{\mathfrak{L}>r_{l}-r_{k}\}}h^{2}(\check{X}_{\mathfrak{L}})\,|\,\mathcal{\check{F}}_{r_{l}-r_{k}}].

Substituting in this equality gives

𝔼[𝔜1{𝔇>T}h1(X)h2(Xˇ𝔏)]\displaystyle\mathbb{E}[\mathfrak{Y}\mathbbold{1}_{\{\mathfrak{D}>T\}}h^{1}(X_{\mathfrak{R}})h^{2}(\check{X}_{\mathfrak{L}})]
=l<k𝔼[𝔜]𝔼[1{=rk,T=rl}h1(X)h2(Xˇ𝔏)]\displaystyle\quad=\sum_{l<k}\mathbb{E}[\mathfrak{Y}]\mathbb{E}[\mathbbold{1}_{\{\mathfrak{R}=r_{k},T=r_{l}\}}h^{1}(X_{\mathfrak{R}})h^{2}(\check{X}_{\mathfrak{L}})]
\quad+\sum_{k\leq l}\mathbb{E}[\mathfrak{Y}]\mathbb{E}[\mathbbold{1}_{\{\mathfrak{L}>r_{l}-r_{k},\,\mathfrak{R}=r_{k},\,T=r_{l}\}}h^{1}(X_{\mathfrak{R}})h^{2}(\check{X}_{\mathfrak{L}})]
=𝔼[𝔜]𝔼[1{𝔏>T}h1(X)h2(Xˇ𝔏)]\displaystyle\quad=\mathbb{E}[\mathfrak{Y}]\mathbb{E}[\mathbbold{1}_{\{\mathfrak{L}>T-\mathfrak{R}\}}h^{1}(X_{\mathfrak{R}})h^{2}(\check{X}_{\mathfrak{L}})]
=𝔼[𝔜]𝔼[1{𝔇>T}h1(X)h2(Xˇ𝔏)].\displaystyle\quad=\mathbb{E}[\mathfrak{Y}]\mathbb{E}[\mathbbold{1}_{\{\mathfrak{D}>T\}}h^{1}(X_{\mathfrak{R}})h^{2}(\check{X}_{\mathfrak{L}})].

We have thus proved (8.3) when \mathfrak{R} and TT both take values in the set {rk:=k2n,k}\{r_{k}:=\frac{k}{2^{n}},k\in\mathbb{Z}\}. Suppose now that TT is an arbitrary \mathbb{R}-valued stopping time but that \mathfrak{R} still takes values in {rk:=k2n,k}\{r_{k}:=\frac{k}{2^{n}},k\in\mathbb{Z}\}. For mm\in\mathbb{N} set Tm:=k2mT_{m}:=\frac{k}{2^{m}} when k12m<Tk2m\frac{k-1}{2^{m}}<T\leq\frac{k}{2^{m}}, kk\in\mathbb{Z}. Thus (Tm)m(T_{m})_{m\in\mathbb{N}} is a decreasing sequence of (t)t(\mathcal{F}_{t})_{t\in\mathbb{R}}-stopping times converging to TT. Taking (8.3) with TT replaced by TmT_{m} and letting mm\to\infty we get (8.3) for \mathfrak{R} taking values in the set {rk:=k2n,k}\{r_{k}:=\frac{k}{2^{n}},k\in\mathbb{Z}\} and general \mathbb{R}-valued TT.
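
The dyadic discretization $T_{m}:=\frac{k}{2^{m}}$ for $\frac{k-1}{2^{m}}<T\leq\frac{k}{2^{m}}$, used here and again below for $\mathfrak{R}_{n}$, is just the dyadic ceiling; a quick numerical sketch (the value of $T$ is an arbitrary illustrative choice):

```python
import math

def dyadic_up(T, m):
    # smallest dyadic point k/2^m with T <= k/2^m, i.e. T_m = ceil(2^m T)/2^m
    return math.ceil(T * 2**m) / 2**m

T = 0.7305
approx = [dyadic_up(T, m) for m in range(1, 12)]
# a decreasing sequence of dyadic-valued approximations converging to T from above
assert all(a >= b >= T for a, b in zip(approx, approx[1:]))
assert approx[-1] - T < 2**-11
```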

We now extend to the completely general case of (8.3). Put $(\check{X}_{t}^{\mathfrak{R}})_{t\geq 0}:=(X_{t+\mathfrak{R}}-X_{\mathfrak{R}})_{t\geq 0}$, and denote the corresponding random variables $\mathfrak{L}$, $\mathfrak{Y}$, and $\mathfrak{D}$ by $\mathfrak{L}^{\mathfrak{R}}$, $\mathfrak{Y}^{\mathfrak{R}}$, and $\mathfrak{D}^{\mathfrak{R}}$, respectively. Recalling that $\mathfrak{Y}^{\mathfrak{R}}$ is an arbitrary bounded, nonnegative random variable measurable with respect to $\sigma\{\check{X}_{t+\mathfrak{L}^{\mathfrak{R}}}^{\mathfrak{R}}-\check{X}_{\mathfrak{L}^{\mathfrak{R}}}^{\mathfrak{R}}:t\geq 0\}$, it suffices by a monotone class argument to show (8.3) in the special case where

𝔜=i=1mfi(Xˇti+𝔏Xˇ𝔏)=i=1mfi(Xti+𝔏+X𝔏+)\mathfrak{Y}^{\mathfrak{R}}=\prod_{i=1}^{m}f^{i}(\check{X}_{t_{i}+\mathfrak{L}^{\mathfrak{R}}}^{\mathfrak{R}}-\check{X}_{\mathfrak{L}^{\mathfrak{R}}}^{\mathfrak{R}})=\prod_{i=1}^{m}f^{i}(X_{t_{i}+\mathfrak{L}^{\mathfrak{R}}+\mathfrak{R}}-X_{\mathfrak{L}^{\mathfrak{R}}+\mathfrak{R}})

for bounded, nonnegative, continuous functions $f^{i}$, $i=1,\dots,m$, and $0\leq t_{1}<\dots<t_{m}$.

For nn\in\mathbb{N} set n:=k2n\mathfrak{R}_{n}:=\frac{k}{2^{n}} when k12n<k2n\frac{k-1}{2^{n}}<\mathfrak{R}\leq\frac{k}{2^{n}}, kk\in\mathbb{Z}. Thus (n)n(\mathfrak{R}_{n})_{n\in\mathbb{N}} is a decreasing sequence of (t)t(\mathcal{F}_{t})_{t\in\mathbb{R}}-stopping times converging to \mathfrak{R}. Note that

𝔏𝔫=argmin{Xu+X:u𝔫}+n.\mathfrak{L}^{\mathfrak{R_{n}}}=\mathrm{argmin}\{X_{u+\mathfrak{R}}-X_{\mathfrak{R}}:u\geq\mathfrak{R_{n}}-\mathfrak{R}\}+\mathfrak{R}-\mathfrak{R}_{n}.

Thus, if 𝔏=0\mathfrak{L}^{\mathfrak{R}}=0, then 𝔇𝔫𝔇\mathfrak{D}^{\mathfrak{R_{n}}}\downarrow\mathfrak{D}^{\mathfrak{R}} by the right-continuity of the sample paths of XX. On the other hand, if 𝔏>0\mathfrak{L}^{\mathfrak{R}}>0, then, for nn large enough, we have that 𝔇𝔫=𝔇\mathfrak{D}^{\mathfrak{R_{n}}}=\mathfrak{D}^{\mathfrak{R}}. Hence, by applying the special case of (8.3) for the stopping times n\mathfrak{R}_{n} taking discrete values, and using the fact that XX has càdlàg paths we get

𝔼[𝔜1{𝔇>T}h1(X)h2(Xˇ𝔏)]\displaystyle\mathbb{E}[\mathfrak{Y}^{\mathfrak{R}}\mathbbold{1}_{\{\mathfrak{D}^{\mathfrak{R}}>T\}}h^{1}(X_{\mathfrak{R}})h^{2}(\check{X}_{\mathfrak{L}^{\mathfrak{R}}}^{\mathfrak{R}})]
=𝔼[i=1mfi(Xti+𝔇X𝔇)1{𝔇>T}h1(X)h2(X𝔇X)]\displaystyle\quad=\mathbb{E}\left[\prod_{i=1}^{m}f^{i}(X_{t_{i}+\mathfrak{D}^{\mathfrak{R}}}-X_{\mathfrak{D}^{\mathfrak{R}}})\mathbbold{1}_{\{\mathfrak{D}^{\mathfrak{R}}>T\}}h^{1}(X_{\mathfrak{R}})h^{2}(X_{\mathfrak{D}^{\mathfrak{R}}}-X_{\mathfrak{R}})\right]
=limn𝔼[i=1mfi(Xti+𝔇𝔫X𝔇𝔫)1{𝔇𝔫>T}h1(X𝔫)h2(X𝔇𝔫X𝔫)]\displaystyle\quad=\lim_{n\rightarrow\infty}\mathbb{E}\left[\prod_{i=1}^{m}f^{i}(X_{t_{i}+\mathfrak{D}^{\mathfrak{R_{n}}}}-X_{\mathfrak{D}^{\mathfrak{R_{n}}}})\mathbbold{1}_{\{\mathfrak{D}^{\mathfrak{R_{n}}}>T\}}h^{1}(X_{\mathfrak{R_{n}}})h^{2}(X_{\mathfrak{D}^{\mathfrak{R_{n}}}}-X_{\mathfrak{R_{n}}})\right]
=limn𝔼[i=1mfi(Xti+𝔇𝔫X𝔇𝔫)]𝔼[1{𝔇𝔫>T}h1(X𝔫)h2(X𝔇𝔫X𝔫)]\displaystyle\quad=\lim_{n\rightarrow\infty}\mathbb{E}\left[\prod_{i=1}^{m}f^{i}(X_{t_{i}+\mathfrak{D}^{\mathfrak{R_{n}}}}-X_{\mathfrak{D}^{\mathfrak{R_{n}}}})\right]\mathbb{E}[\mathbbold{1}_{\{\mathfrak{D}^{\mathfrak{R_{n}}}>T\}}h^{1}(X_{\mathfrak{R_{n}}})h^{2}(X_{\mathfrak{D}^{\mathfrak{R_{n}}}}-X_{\mathfrak{R_{n}}})]
=𝔼[i=1mfi(Xti+𝔇X𝔇)]𝔼[1{𝔇>T}h1(X)h2(X𝔇X)]\displaystyle\quad=\mathbb{E}\left[\prod_{i=1}^{m}f^{i}(X_{t_{i}+\mathfrak{D}^{\mathfrak{R}}}-X_{\mathfrak{D}^{\mathfrak{R}}})\right]\mathbb{E}[\mathbbold{1}_{\{\mathfrak{D}^{\mathfrak{R}}>T\}}h^{1}(X_{\mathfrak{R}})h^{2}(X_{\mathfrak{D}^{\mathfrak{R}}}-X_{\mathfrak{R}})]
=𝔼[𝔜]𝔼[1{𝔇>T}h1(X)h2(X𝔇X)],\displaystyle\quad=\mathbb{E}[\mathfrak{Y}^{\mathfrak{R}}]\mathbb{E}[\mathbbold{1}_{\{\mathfrak{D}^{\mathfrak{R}}>T\}}h^{1}(X_{\mathfrak{R}})h^{2}(X_{\mathfrak{D}^{\mathfrak{R}}}-X_{\mathfrak{R}})],

which finishes our proof. ∎

References

  • [AE14] Joshua Abramson and Steven N. Evans, Lipschitz minorants of Brownian motion and Lévy processes., Probab. Theory Relat. Fields 158 (2014), no. 3-4, 809–857 (English).
  • [Bar78] M. T. Barlow, Study of a filtration expanded to include an honest time, Z. Wahrsch. Verw. Gebiete 44 (1978), no. 4, 307–323. MR 509204
  • [BB03] François Baccelli and Pierre Brémaud, Elements of queueing theory, second ed., Applications of Mathematics (New York), vol. 26, Springer-Verlag, Berlin, 2003, Palm martingale calculus and stochastic recurrences, Stochastic Modelling and Applied Probability. MR 1957884
  • [Ber96] Jean Bertoin, Lévy processes, Cambridge Tracts in Mathematics, vol. 121, Cambridge University Press, Cambridge, 1996. MR 1406564 (98e:60117)
  • [BS02] Andrei N. Borodin and Paavo Salminen, Handbook of Brownian motion—facts and formulae, second ed., Probability and its Applications, Birkhäuser Verlag, Basel, 2002. MR 1912205 (2003g:60001)
  • [FT88] P. J. Fitzsimmons and Michael Taksar, Stationary regenerative sets and subordinators, Ann. Probab. 16 (1988), no. 3, 1299–1305. MR 942770 (89m:60176)
  • [GP80] Priscilla Greenwood and Jim Pitman, Construction of local time and Poisson point processes from nested arrays., J. Lond. Math. Soc., II. Ser. 22 (1980), 183–192 (English).
  • [GS74] R. K. Getoor and M. J. Sharpe, Last exit decompositions and distributions, Indiana Univ. Math. J. 23 (1973/74), 377–404. MR 0334335 (48 #12654)
  • [Jeu79] Thierry Jeulin, Grossissement d’une filtration et applications, Séminaire de probabilités de Strasbourg 13 (1979), 574–609 (fr). MR 544826
  • [Kal02] Olav Kallenberg, Foundations of modern probability, second ed., Probability and its Applications (New York), Springer-Verlag, New York, 2002. MR 1876169 (2002m:60002)
  • [Mil77] P. W. Millar, Zero-one laws and the minimum of a Markov process, Trans. Amer. Math. Soc. 226 (1977), 365–391. MR 0433606 (55 #6579)
  • [Mil78] by same author, A path decomposition for Markov processes, Ann. Probability 6 (1978), no. 2, 345–348. MR 0461678 (57 #1663)
  • [Nev77] J. Neveu, Processus ponctuels, École d'Été de Probabilités de Saint-Flour VI—1976, Lecture Notes in Math., vol. 598, Springer, Berlin, 1977, pp. 249–445. MR 0474493
  • [Nik06] Ashkan Nikeghbali, An essay on the general theory of stochastic processes, Probab. Surveys 3 (2006), 345–412.
  • [Pit83] J. W. Pitman, Remarks on the convex minorant of Brownian motion, Seminar on stochastic processes, 1982 (Evanston, Ill., 1982), Progr. Probab. Statist., vol. 5, Birkhäuser Boston, Boston, MA, 1983, pp. 219–227. MR 733673 (85f:60119)
  • [Pit87] Jim Pitman, Stationary excursions, Séminaire de Probabilités, XXI, Lecture Notes in Math., vol. 1247, Springer, Berlin, 1987, pp. 289–302. MR 941992
  • [PS72] A. O. Pittenger and C. T. Shih, Coterminal families and the strong Markov property, Bull. Amer. Math. Soc. 78 (1972), 439–443. MR 0297019 (45 #6077)
  • [RP81] L. C. G. Rogers and J. W. Pitman, Markov functions., Ann. Probab. 9 (1981), 573–582 (English).
  • [RW87] L. C. G. Rogers and David Williams, Diffusions, Markov processes, and martingales. Vol. 2, Wiley Series in Probability and Mathematical Statistics: Probability and Mathematical Statistics, John Wiley & Sons Inc., New York, 1987, Itô calculus. MR 921238 (89k:60117)
  • [RY05] Daniel Revuz and Marc Yor, Continuous martingales and Brownian motion, 3rd ed., 3rd corrected printing, vol. 293, Springer, Berlin, 2005 (English).
  • [Sat99] Ken-iti Sato, Lévy processes and infinitely divisible distributions, Cambridge Studies in Advanced Mathematics, vol. 68, Cambridge University Press, Cambridge, 1999, Translated from the 1990 Japanese original, Revised by the author. MR 1739520 (2003b:60064)
  • [SH04] F. W. Steutel and K. van Harn, Infinite divisibility of probability distributions on the real line, Pure and Applied Mathematics: A Series of Monographs and Textbooks, Marcel Dekker, New York, 2004 (English).
  • [Tho00] Hermann Thorisson, Coupling, stationarity, and regeneration., New York, NY: Springer, 2000 (English).