Scalar conservation laws with white noise initial data
Abstract.
The statistical description of the scalar conservation law of the form with a smooth convex Hamiltonian has been an object of interest when the initial profile is random. The special case when (Burgers equation) has in particular received extensive attention in the past and is now understood for various random initial conditions. We prove in this paper a conjecture on the profile of the solution at any time for a general class of Hamiltonians and show that it is a stationary piecewise-smooth Feller process. Along the way, we study the excursion process of the two-sided linear Brownian motion below any strictly convex function with superlinear growth and derive a generalized Chernoff distribution of the random variable . Finally, when is a white noise derived from an abrupt Lévy process, we show that the structure of shocks of the solution is a.s. discrete at any fixed time under some mild assumptions on .
Key words and phrases:
scalar conservation law, white noise, random initial data, path decomposition, Chernoff distribution, abrupt process
2010 Mathematics Subject Classification:
60G51, 60J65, 60J60, 60J75, 35L65
1. Introduction
We are interested in the following conservation law problem
(1.1)
where is a strictly convex function with superlinear growth at infinity and is a white noise. A question of interest is to describe the law of the process at any given time .
1.1. Background
There is a straightforward link between the scalar conservation law and the Hamilton-Jacobi PDE. Indeed, if one defines
and the potential
then solves the PDE
(1.2)
and is determined by the Hopf-Lax formula (see [6, Theorem 4, Chapter 3.3])
(1.3)
where is the Legendre transform of defined as
The rightmost maximizer in the equation (1.3) is called the backward Lagrangian, and is directly linked to the entropy solution of the scalar conservation law (1.1) by the Lax-Oleinik formula (see [6, Theorem 1, Chapter 3.4])
The reader may be familiar with this other form of the Hamilton-Jacobi PDE
(1.4)
If we denote by a solution of (1.4), then it is easy to see that verifies for the Hamiltonian . We will thus restrict ourselves to the version of the scalar conservation law in (1.1).
When the Hamiltonian takes the simple form , the scalar conservation law (1.1) is called Burgers equation and is written . The Lax-Oleinik formula simplifies to
(1.5)
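As an aside, the variational formulas above lend themselves to direct numerical experimentation. The sketch below is our own illustration, not a method from the paper: it evaluates the Hopf-Lax formula on a grid, keeps the rightmost maximizer (the backward Lagrangian), and reads off the Burgers solution via the Lax-Oleinik formula. The choice H(p) = p^2/2, the grid sizes, and the sign conventions are our assumptions and may need adjusting to match (1.3) and (1.5).

```python
import numpy as np

# Illustrative sketch: U(t, x) = sup_y [ U0(y) - t * L((x - y) / t) ] on a grid,
# with U0 a pinned random-walk approximation of a two-sided Brownian potential.
rng = np.random.default_rng(1)
n, half_width, t = 4001, 20.0, 1.0
y = np.linspace(-half_width, half_width, n)
dy = y[1] - y[0]
U0 = np.concatenate([np.cumsum(rng.normal(0, np.sqrt(dy), n // 2))[::-1],
                     [0.0],
                     np.cumsum(rng.normal(0, np.sqrt(dy), n - n // 2 - 1))])

L = lambda q: 0.5 * q ** 2                       # Legendre transform of H(p) = p^2 / 2

x = np.linspace(-5.0, 5.0, 201)
vals = U0[None, :] - t * L((x[:, None] - y[None, :]) / t)
idx = n - 1 - np.argmax(vals[:, ::-1], axis=1)   # rightmost maximizer index
y_back = y[idx]                                  # backward Lagrangian y(t, x)
u = (x - y_back) / t                             # Lax-Oleinik solution in the Burgers case
print(u[:5])
```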
The Burgers equation has received extensive interest when the initial data is random, in the context of Burgers turbulence. We present below the most relevant results in this area.
1.2. Burgers equation when is a Brownian white noise
This is the case when the initial potential is expressed as
(1.6)
where is a diffusion factor and is a two-sided standard linear Brownian motion. In a remarkable paper [9] with the aim of studying the global behavior of isotonic estimators, Groeneboom completely determined the statistics of the process
He showed that this process is pure-jump with jump kernels expressed in terms of Airy functions.
By the Hopf-Lax formula and (1.5), this process is related to the solution of the Burgers equation with Brownian white noise initial data.
More precisely, let be the entropy solution of Burgers equation when the initial potential is determined by (1.6). Since in the Burgers case the Hamiltonian enjoys the same scaling as the Brownian motion, it follows that for every , the process has the same law as . The following theorem gives a precise description of the law of the entropy solution at time .
Theorem 1.1 (Groeneboom 89, [9]).
The process is a stationary piecewise-linear Markov process with generator acting on a test function as
The jump density is given by the formula
where and are positive functions defined on the line and positive half-line respectively, whose Laplace transforms
are meromorphic functions on given by
where denotes the first Airy function.
Remark 1.2.
For general , the process is also a stationary piecewise-linear Markov process with generator
In particular, the linear pieces have slope .
1.3. Burgers equation when is a spectrally negative Lévy process
A Lévy process is a process with stationary and independent increments such that . By a spectrally negative Lévy process, we mean a Lévy process that has only downward jumps. For the Burgers equation, Bertoin proved in [4] a remarkable closure theorem for this class of initial data. We quote here his result.
Theorem 1.3 (Bertoin 98, [4]).
Consider Burgers equation of the form with initial data which is a spectrally negative Lévy process for and for . Assume that the expected value of is positive. Then for each fixed , the backward Lagrangian has the property that is independent of and is in the parameter a subordinator, i.e. a nondecreasing Lévy process. Its distribution is that of the first passage process
Furthermore, if we denote by and respectively the Laplace exponents of and ,
then we have the functional identity
Moreover, the process is a Lévy process, and its Laplace exponent verifies the Burgers equation
(1.7)
Remark 1.4.
This theorem is remarkable in the sense that it provides an infinite-dimensional, nonlinear dynamical system which preserves the independence and homogeneity properties of its random initial configuration. Moreover, it was observed in [14] that the evolution according to Burgers equation of the Laplace exponents in (1.7) corresponds to a Smoluchowski coagulation equation [21] with additive rate which determines the jump statistics. This connection is simply due to the Lévy-Khintchine representation of Laplace exponents.
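To make the coagulation picture concrete, the following toy Marcus-Lushnikov simulation (our own illustration; the monodisperse initial condition and all parameters are arbitrary) runs Smoluchowski coagulation with the additive kernel mentioned above.

```python
import numpy as np

# Toy Marcus-Lushnikov simulation of Smoluchowski coagulation with kernel
# K(x, y) = x + y.  Each unordered pair {i, j} merges at rate m_i + m_j, so the
# total rate is (n - 1) * sum(m).  Picking one cluster proportionally to its mass
# and the other uniformly among the rest selects {i, j} with the correct
# probability (m_i + m_j) / ((n - 1) * sum(m)).
rng = np.random.default_rng(2)
masses = list(np.ones(500))            # arbitrary monodisperse initial condition
t = 0.0
while len(masses) > 50:
    m = np.array(masses)
    n, total = len(m), m.sum()
    t += rng.exponential(1.0 / ((n - 1) * total))
    i = int(rng.choice(n, p=m / total))                    # mass-biased pick
    j = int(rng.choice([k for k in range(n) if k != i]))   # uniform among the rest
    masses[i] += masses[j]
    del masses[j]
print(f"t = {t:.3f}: {len(masses)} clusters remain, largest mass = {max(masses):.0f}")
```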
1.4. Scalar conservation law with general Hamiltonian
A natural question is whether the previous phenomenon (the entropy solution at later times having a simple form that can be explicitly described) is intrinsic to the Burgers equation, or whether the same holds for scalar conservation laws with general Hamiltonians . In an attempt to answer this question, Menon and Srinivasan in [15] proved that when the initial condition is a spectrally positive strong Markov process, the entropy solution of (1.1) at later times remains Markov and spectrally positive. However, it is not as clear whether the Feller property is preserved through time. The following conjecture was stated in that paper, together with several heuristic but convincing arguments for why it should be true.
Conjecture 1.5.
If the initial data of the scalar conservation law in (1.1) is either
(1) A white noise derived from a spectrally positive Lévy process.
(2) A stationary spectrally positive Feller process with bounded variation.
then the solution for any fixed time is a stationary spectrally positive Feller process with bounded variation. Moreover, its jump kernel and drift verify an integro-differential equation.
Remark 1.6.
By a result of Courrège (see [3, Theorem 3.5.3]), the generator of any spectrally positive Feller process with bounded variation takes the form
given that ( is the space of infinitely differentiable functions with compact support and is the domain of the generator ). Moreover, the kernel verifies the integrability condition .
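Concretely, and with the caveat that the display below is our reading of this statement rather than a quotation of it, for a spectrally positive Feller process of bounded variation one expects the generator to consist of a drift term plus an integral against a kernel of upward jumps,
\[
\mathcal{A}f(x) \;=\; b(x)\, f'(x) \;+\; \int_{(0,\infty)} \big( f(x+y) - f(x) \big)\, n(x,\mathrm{d}y), \qquad f \in C_c^{\infty}(\mathbb{R}),
\]
with the integrability condition \(\int_{(0,\infty)} (1 \wedge y)\, n(x,\mathrm{d}y) < \infty\) for every \(x\).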
A variant (under some mild conditions on the Hamiltonian , and with a slight modification of the nature of the initial data) of the second part of this conjecture, when the initial data is a piecewise-deterministic spectrally positive Feller process, was recently proved by Kaspar and Rezakhanlou in [12] and [13]. We give below an explicit exposition of their result together with the exact form of the integro-differential equation verified by the drift and the jump kernel. This equation (1.8) was formally derived by Menon and Srinivasan in [15] and shown to be equivalent to the following Lax equation
where is the generator of and is the generator of . We give explicit formulas for these generators below in the statement of Theorem 1.8.
Notation 1.7.
We write for the set of probability measures on the real line, and
Theorem 1.8 (Kaspar and Rezakhanlou 20, [13]).
Assume that the initial data is zero of and is a Markov process for that starts at . More precisely, its infinitesimal generator has the form
Furthermore, assume that
(1) The rate kernel is and is supported on
for some constants .
(2) The Hamiltonian function is , convex, has positive right-derivative at and finite left-derivative at .
(3) The initial drift is and satisfies with whenever .
Then for each fixed , the process (where is a solution of (1.1)) has marginal given by where is the unique function such that and
where is the adjoint operator of , that acts on measures with
for any test function . Moreover the process evolves for according to a Markov process with generator given by
Here and are obtained from their initial conditions
solves the ODE with parameter
and solves the following Boltzmann-like kinetic equation
(1.8)
where the velocities and are given by
the coagulation-like collision kernel is
and the linear operator is given by
The purpose of this paper is to prove the first part of the conjecture when the initial data is a Brownian white noise, and thus to extend the results of Groeneboom [9] in the Burgers case. We show that at any fixed time , the solution is a stationary piecewise-smooth Feller process and we give an explicit description of its generator. This result proves the complete integrability of scalar conservation laws for this class of initial data and moves away from the unnatural emphasis on Burgers equation. Our method, as the reader will see, extends to the case where the white noise is derived from a spectrally positive Lévy process with non-zero Brownian exponent; the shortcoming in that case is the absence of explicit formulas for the jump kernel. We also show that the structure of shocks of Burgers turbulence holds for the general scalar conservation law under the assumption of rough initial data.
Since the entropy solution is expressed via the Lax-Oleinik formula, it is natural to study the law of the process defined as
(1.9)
where is a spectrally positive Lévy process and is a strictly convex function with superlinear growth, such that for (we write if and if is bounded). The relationship between the process and the entropy solution of (1.1) is the following
Our paper is organized as follows
(1) In Section 2, we give some preliminary results on the process when is a spectrally positive Lévy process, such as its Markovian property.
(2) In Section 3, we focus on the case where is a two-sided Brownian motion and show that the process is pure jump, following ideas similar to those used by Groeneboom in [9]. The main ingredient is the path decomposition of Markov processes when they reach their ultimate maximum. This result implies that the Brownian motion has excursions below the sequence of convex functions , where are the jump times of the process (which form a discrete set by a result of Section 5). However, the justification of many manipulations used in [9] relies on the regularity and asymptotic properties of Airy functions at infinity, as those arise naturally in the expressions of transition densities used throughout the study of the Brownian motion with parabolic drift. Unfortunately, those special functions are intrinsic to this special case, as we will explain later, and one does not have similar expressions in the general case.
(3) In Section 4, we circumvent this difficulty by using a more analytic approach to prove the smoothness and integrability of the densities that were used in Section 3. Moreover, via the Girsanov theorem we manage to express explicitly the jump kernel of the process in terms of the distribution of Brownian excursion areas. Along the way, we find the joint density of the maximum and of its location for the process , where is a two-sided Brownian motion. In particular, the density of enjoys a simple expression similar to the Chernoff distribution for the parabolic drift.
(4) Finally, in Section 5 we give a sufficient condition on the Lévy process for the process to have discrete range (with the convention that a set is discrete if it is countable with no accumulation points). As a consequence, the structure of shocks of the entropy solution is discrete at any time when the initial data belongs to the large class of abrupt Lévy processes introduced by Vigon in [20]; this result generalizes the findings of Bertoin [5] and Abramson [1] when is spectrally positive.
We give here our main results
Theorem 1.9.
Suppose that the initial potential is a two-sided Brownian motion and let be the solution of the scalar conservation law . Then for every fixed , the process is a stationary piecewise-smooth Feller process. Its generator is given by
for any test function , where
(1.10)
for , and
where e is a Brownian excursion on the interval .
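Since the kernel above involves an expectation over a Brownian excursion, the following Monte Carlo sketch (our own illustration, with arbitrary parameters) indicates how such excursion functionals can be approximated, using the identity in law between the normalized excursion and a three-dimensional Bessel bridge from 0 to 0 realized as the modulus of three independent Brownian bridges.

```python
import numpy as np

# Monte Carlo sketch: approximate a Laplace-type functional E[exp(-lam * area(e))]
# of a standard Brownian excursion e on [0, 1], sampled as a three-dimensional
# Bessel bridge from 0 to 0 (the modulus of three independent Brownian bridges).
rng = np.random.default_rng(3)
n_steps, n_paths, lam = 500, 5000, 1.0
t = np.linspace(0.0, 1.0, n_steps + 1)
dt = 1.0 / n_steps

dW = rng.normal(0.0, np.sqrt(dt), (n_paths, 3, n_steps))
W = np.concatenate([np.zeros((n_paths, 3, 1)), np.cumsum(dW, axis=-1)], axis=-1)
bridges = W - t * W[..., -1:]                 # pin each component at 0 at time 1
e = np.sqrt((bridges ** 2).sum(axis=1))       # Bessel(3) bridge = Brownian excursion
area = e.mean(axis=-1)                        # integral over [0, 1] as a time average
print("E[exp(-lam * area)] ~", np.exp(-lam * area).mean())
```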
Remark 1.10.
(1) The profile of the solution at any fixed time is a concatenation of smooth pieces that evolve as solutions of ODEs with vector field (or drift) and are interrupted by stochastic upward jumps distributed via the jump kernel . We prove in Section 5 that in the Brownian white noise case, under mild assumptions on the Hamiltonian , the set of jump times is discrete, i.e., there are only finitely many jumps on any given compact interval.
(2) For any , the profile of is a piecewise-deterministic Markov process and belongs to the class of initial data considered in the second part of Conjecture 1.5. A consequence of this observation would be that the kernel in the expression (1.10) verifies the kinetic equation (1.8). However, Theorem 1.8 only considers a variant of the original statement of the conjecture, as it forces the initial data to be flat on the negative real line (whereas here we deal with a stationary process) and restricts the range of to a compact interval . These technical modifications arise from the very challenging proof of existence and uniqueness of a classical solution to (1.8) under general assumptions. Verifying from the explicit expression (1.10) that the kernel in the Brownian white noise case is a solution of the kinetic equation (1.8) also seems inaccessible at present, due to the complicated term involving the Brownian excursion. This verification was done in the Burgers case by Menon and Srinivasan in [15, Section 6] through many non-trivial calculations, but relied extensively on the connection with Airy functions and an associated Painlevé property.
The following result is a consequence of our study of the process . It gives an explicit formula for the density of the random variable where is a two-sided Brownian motion. From results of Section 4, we also have access to the joint distribution of
but we omit it here because the expression is quite large.
Theorem 1.11.
Let be the location of the maximum of the process , where is a two-sided Brownian motion. Its density is equal to
for any , and where
with
where e is a Brownian excursion on .
Remark 1.12.
In the parabolic drift case (Chernoff distribution), the term is constant and the Laplace transform of the standard Brownian excursion area is known to be expressed via Airy functions. We will expand on the connection between the formulas found by Groeneboom in [9] and ours at the end of Section 4. We also refer the reader to the survey [11] for a more detailed exposition of the distributions and Laplace transforms of various Brownian path areas.
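For a concrete picture of this generalized Chernoff distribution, one can histogram the argmax in a simulation. The sketch below is our own illustration; the drift, the truncation window, and all grid parameters are arbitrary choices.

```python
import numpy as np

# Monte Carlo sketch: empirical density of the location of the maximum of
# W(t) - psi(t) for a two-sided Brownian motion W and a strictly convex psi.
# psi(t) = t**2 corresponds (up to scaling) to Chernoff's distribution;
# psi(t) = t**4 below is an arbitrary alternative.  The window [-T, T] is a
# truncation: for superlinear psi the argmax lies inside it with overwhelming
# probability.
rng = np.random.default_rng(4)
n, T, n_paths = 2001, 6.0, 2000
t = np.linspace(-T, T, n)
dt = t[1] - t[0]
psi = t ** 4

left = np.cumsum(rng.normal(0, np.sqrt(dt), (n_paths, n // 2)), axis=1)[:, ::-1]
right = np.cumsum(rng.normal(0, np.sqrt(dt), (n_paths, n - n // 2 - 1)), axis=1)
W = np.concatenate([left, np.zeros((n_paths, 1)), right], axis=1)

argmax_loc = t[np.argmax(W - psi, axis=1)]
hist, edges = np.histogram(argmax_loc, bins=60, density=True)
print("empirical density near the origin:", hist[len(hist) // 2])
```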
We define now a class of rough Lévy processes called abrupt that were introduced by Vigon in [20].
Definition 1.13.
A Lévy process is said to be abrupt if its paths have unbounded variation and almost surely for all local maxima of we have
Remark 1.14.
A Lévy process with paths of unbounded variation is abrupt if and only if
Examples of abrupt Lévy processes include stable processes with index and any process with non-zero Brownian exponent.
Our last main result determines the structure of shocks of the scalar conservation law when the initial data is a white noise derived from an abrupt Lévy process.
Theorem 1.15.
Assume that the Lévy process is spectrally positive, abrupt, and such that for . Then the set
is almost surely discrete for any fixed time . We say then that the structure of shocks of the entropy solution is discrete.
Remark 1.16.
From the point of view of hydrodynamic turbulence, a discontinuity of the entropy solution at position means the presence of a cluster of particles at this location at time . These clusters interact with each other via inelastic shocks, and the cluster at location and time contains all the particles that were initially located in . Our result shows that at any given time , the set of clusters is discrete. When the initial data is a Lévy white noise, we can picture infinitely many particles initially scattered everywhere with i.i.d. velocities. Therefore, when we assume that this initial profile is rough (as is the case when the potential is an abrupt Lévy process), the turbulence forces all the particles to aggregate instantaneously into heavy disjoint lumps, for any time .
Acknowledgement
I would like to thank my advisor Fraydoun Rezakhanlou for many fruitful discussions.
2. Preliminaries
Notation 2.1.
We will use the notation to denote the rightmost maximizer of a function (i.e., the last time at which the function reaches its maximum).
Menon and Srinivasan proved in their paper [15] a closure theorem for scalar conservation law solutions with white noise initial data. They showed that if the initial potential is spectrally positive with independent increments, then is a spectrally positive Markov process for any fixed . The proof of this statement follows from standard use of the path decomposition of strong Markov processes at their ultimate maximum. The same holds for our process . Precisely, we have the following theorem, for which we give the proof for the sake of completeness.
Theorem 2.2.
Assume that is a spectrally positive Lévy process. Then the process is a non-decreasing Markov process. Moreover, for any , the process has the same distribution as .
Proof.
For and , we have that
By the convexity of , and hence . Also, by definition is a càdlàg process (right continuous with left hand limits). Take , then
(2.1)
The process is clearly Markov. By Millar’s theorem on the path decomposition of Markov processes when they reach their ultimate maximum (see [16]), the process is independent of given (because of the upward jumps of , the maximum is attained at the right-hand limit). Moreover, because of the independence of the increments of , the process is independent of given . Now it suffices to see that only depends on the pre-maximum process because of the monotonicity of ; this fact, alongside equation (2.1), gives the Markov property of the process . The last statement follows easily from the stationarity of the increments of . ∎
Remark 2.3.
Notice that except in the last statement, the stationarity of increments was not used in the proof of the Markovian property of the process .
3. The process in the Brownian case
In this section, we assume that is a two-sided Brownian motion. We proved in the previous section that the process is Markov and enjoys a space-time shifted stationarity property. Hence, we shall only determine its transition function at time zero, and consequently the form of its generator at this time. In this section we will differentiate and interchange integrals and derivatives without justification, as Section 4 is devoted to taking care of all those technicalities.
Notation 3.1.
In the sequel, we will deal with functions of the form where and play the role of temporal variables, and and that of spatial variables. When no confusion can arise, the notation (resp. ) refers to the partial derivative of with respect to the second variable (resp. the fourth variable).
We state here the first result regarding the transition function of the process .
Theorem 3.2.
Let and be two real numbers. Then we have that
where and
•
is the Markov process started at zero and Doob-conditioned to stay negative (i.e., to hit before ). Precisely, its transition function is given by
(3.1)
for and , and where is the first hitting time of zero of the process . The function is the transition density of the process killed at zero, at time and state , formally defined as
Moreover, the entrance law of is given by
(3.2)
•
The function is defined as where is a constant such that .
Proof.
We have that
Now, using Millar’s path decomposition of Markov processes when they reach their ultimate maximum, the expression of the transition densities of the post-maximum process in [16, Equation 9] applied to the process , and the spatial homogeneity of the Brownian motion (and thus of ), we get (3.1). To get the entrance law, it suffices to send to and to zero. ∎
Let us now introduce some notation to keep our formulas compact.
Notation 3.3.
Denote by
and define
Also denote
Furthermore, let be the process defined as . We define and analogously.
With this notation, the entrance law of the process is expressed as
(3.3)
The next result will allow us to recover the transition function of the process .
Theorem 3.4.
Let and . Define to be the unique point such that (such a time exists because of the strict convexity of that makes strictly increasing). Then we have that
Before proving this theorem, we will state a lemma that links the joint distribution of the maximum of the diffusion and its location with the functionals and .
Lemma 3.5.
Let and be respectively the maximum of the process and its location, we have then that
(3.4)
Proof.
We have by the Markov property that
Now we see that
Hence
Thus
(3.5)
Now, by Kolmogorov forward and backward equations on the diffusion we have that
and
By interchanging the time partial derivative and the integral sign in (3.5), we find by integration by parts
Now it suffices to see that vanishes at both zero and infinity, from which the first equality follows. For the second equality, it suffices to see that
Differentiating with respect to time and using the Kolmogorov forward equation in the same fashion as was done before gives the result. ∎
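For reference, and stated here only in a generic form for a one-dimensional diffusion rather than for the paper's specific process: if the diffusion has generator \(Gf = \tfrac12 \sigma^2(x) f'' + b(x) f'\) and transition density \(p_t(x,y)\), the Kolmogorov equations used above read
\[
\partial_t p_t(x,y) = \tfrac12 \sigma^2(x)\,\partial_{xx} p_t(x,y) + b(x)\,\partial_x p_t(x,y)
\qquad\text{(backward)},
\]
\[
\partial_t p_t(x,y) = \tfrac12 \partial_{yy}\big(\sigma^2(y)\, p_t(x,y)\big) - \partial_y\big(b(y)\, p_t(x,y)\big)
\qquad\text{(forward)}.
\]
Integration by parts against a test function moves the forward operator onto the density, which is the manipulation carried out in the proof above.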
Remark 3.6.
All these differentiations and integrations by parts are justified by the fact that and are sufficiently smooth and integrable away from . This fact will be proved in the next section.
Proof of Theorem 3.4.
We have that
Because for , we have that , then by the Markov property we get that
Let us focus first on the second term of this product. The law of the Markov process is that of the process conditioned to stay below . However, when starts from the state at time , the event we condition on has positive probability and hence it is just the naive conditioning. Thus, we can write
This probability is equal to the ratio of the probability
over the probability
For the first probability , notice that on the event that , we always have that , because for . Thus
Now we have that
Hence
Thus by using Lemma 3.5 for and , we get that
Therefore
(3.6)
Finally for the first term , we have that
Now it is not hard to see that we have the following equality
(3.7)
This is true because both those functions verify the same PDE with the same boundary and growth conditions, by combining the backward and forward Kolmogorov equations. Hence
Hence, by Lemma 3.5
Thus
(3.8)
Multiplying equations (3.6) and (3.8) and integrating with respect to on gives the result. ∎
We are ready now to state the main result of this section.
Theorem 3.7.
The transition function of the process is given by
Moreover, the process is pure-jump and its generator at zero is given by its action on any test function
where
Proof.
By integrating the formula in Theorem 3.4 with respect to between and and (as is pointwise at most ), we get that
Now it suffices to do the change of variables and to get the transition density. As for the generator part, it suffices to do the following Taylor expansion for
∎
4. Regularity of the transition functions and explicit formulas
The goal of this section is to prove the regularity of the transition density away from the line , so that we can justify all the operations we did in the previous section and to deduce along the way explicit formulas for the jump kernel of the process .
Processes such as the three-dimensional Bessel process, the three-dimensional Bessel bridges, and the Brownian motion killed at zero will be mentioned in some of the results of this section. We refer the unfamiliar reader to [17, Chapters 3, 6, 11] for basic facts about these processes.
The following proposition gives a closed formula for the density .
Proposition 4.1.
Let and , the density is given by the formula
where is a three-dimensional Bessel process, and is the transition density function of the Brownian motion killed at zero, given explicitly by
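For the reader's convenience, we recall the classical reflection-principle expression for the transition density of Brownian motion killed at zero (this recollection is ours and is only meant as a reminder of the standard formula, stated for two points on the same side of the origin):
\[
p^{\mathrm{kill}}_t(x,y) \;=\; \frac{1}{\sqrt{2\pi t}}\left( e^{-(x-y)^2/(2t)} - e^{-(x+y)^2/(2t)} \right), \qquad x,\,y < 0 \ \ (\text{or } x,\,y > 0).
\]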
Proof.
The process can be expressed as
Thus by Girsanov theorem, is a Brownian motion under the measure with Radon-Nikodym derivative given by
where is the canonical filtration of . Thus for any function we have that
where
In particular for , we have that
Now denote by the Brownian motion killed at zero, whose law is defined as
Thus
where is the same as with replaced by , and is the transition density function of the process . However it is a well-known fact that , and the law of the Brownian motion killed at zero between and conditioned on its extreme values is the law of the reflection of a three-dimensional Bessel bridge between and (as our killed Brownian motion stays negative and the Bessel bridges are by definition positive). Finally, by using an integration by parts we have that
Integrating between and , we get the desired result. ∎
Remark 4.2.
From the last proposition, one can readily see that for fixed and
where and are locally bounded, and is locally bounded from below by a positive constant.
Let us now prove that is smooth. First of all, one can extend to the positive line as well by defining
Then verifies in the distribution sense the following PDE (Kolmogorov forward equation)
(4.1)
and with boundary conditions , and obviously . Now, it is well-known that the function that we defined in Proposition 4.1 verifies the heat equation
with the same boundary conditions as . Moreover, if one defines the function as
it is also a solution for the heat equation but with boundary condition . Thus, in order to study the regularity properties of the solution to (4.1), one might use Duhamel’s principle to get a representation formula for . More precisely, we shall prove the following theorem
Theorem 4.3.
Fix . There exists a function (where here is the space of continuous functions on the real line that are uniformly bounded and absolutely integrable), such that
Furthermore, is smooth.
Proof.
Let us fix . Define the functional from into itself equipped with the norm
by
It is clear that sends to itself due to the growth rate of the Green functions and at infinity in space. Moreover we have that for any two functions and in
Now we see that
Hence
Thus
A similar bound holds for the norm. Thus, for close enough to , the operator becomes a contraction, and by the Picard fixed-point theorem it admits a unique fixed point.
Now define
Suppose that . Then it is easy to see by Gronwall’s inequality that for any sequence such that , the sequence is Cauchy in and thus converges strongly to a unique limit that we denote . This extension thus belongs to . However, for small , one can further extend the fixed point to by the same contraction argument. This contradicts the definition of , and thus , from which follows the existence of a global solution. The smoothness of follows readily from that of the Green function and the dominated convergence theorem. ∎
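The contraction mechanism in this proof is the classical Picard iteration for a Volterra-type integral equation. The toy sketch below (our own, in one variable rather than space-time, with arbitrary g and K) illustrates it numerically.

```python
import numpy as np

# Toy Picard iteration for a scalar Volterra equation
#   u(t) = g(t) + int_0^t K(t, s) u(s) ds,
# discretized with a simple rectangle rule; the point is the geometric
# convergence of the fixed-point iterates.
n, T = 400, 1.0
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]
g = np.cos(t)
K = np.exp(-(t[:, None] - t[None, :]))        # kernel K(t, s)
lower = t[None, :] <= t[:, None]              # keep only s <= t

u = np.zeros(n)
for k in range(100):
    u_new = g + dt * np.where(lower, K * u[None, :], 0.0).sum(axis=1)
    if np.max(np.abs(u_new - u)) < 1e-12:
        break
    u = u_new
print(f"Picard iteration converged after {k + 1} steps")
```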
We are now ready to prove the following result
Theorem 4.4.
The function is everywhere smooth in the variables , in particular the function is smooth away from .
Proof.
Define the function by
where is the global solution from Theorem 4.3. By integration by parts we have that
Now it suffices to see that
and thus the function verifies the PDE (4.1) with the boundary conditions . The result will follow once we prove that . Consider the function ; it verifies the PDE (4.1) with vanishing initial condition. The growth condition of at infinity in space ensures that can be viewed as a tempered distribution. By taking the Fourier transform in space in the PDE (4.1), we get that
Thus
which means that the distribution is constant along the time variable . Moreover, we also have that
in the tempered distribution sense. Indeed, for any in the Schwartz space , if we denote by the diffusion killed at zero, we have that
as and by using the dominated convergence theorem. Thus by continuity of the Fourier transform, one deduces that is zero everywhere, and hence as desired. ∎
Let us introduce now a function that is going to play a fundamental role in our calculations. Define by
(4.2)
for and , where is a three-dimensional Bessel process. Because is smooth away from , the same holds for . We have then the following lemma.
Lemma 4.5.
The function verifies the following PDE
(4.3)
for .
Proof.
We can replace the Bessel process by the Brownian motion killed at zero in the expression of in (4.2), for the same reasons we gave earlier. Now let be a test function. We apply Itô’s formula to the following semi-martingale
where is a Brownian motion started at . We get then
We integrate between and (where is the first hitting time of zero of ). As the first term is a bounded local martingale (and hence a true martingale), by taking the expectation we get that
Therefore
By sending and conditioning on the value of , we get
Thus we get the PDE in the distribution sense, but also in the classical sense because is smooth on the interior of its domain. ∎
We give now an explicit formula for the functional that was introduced in the previous section.
Proposition 4.6.
The function can be expressed as
for and . Here, is a three-dimensional Bessel bridge from to .
Proof.
From Lemma 3.5, we have that
Since
and
it suffices to prove that
as . We have by Hopital’s rule applied twice
In the fourth line we used the fact that . This follows from the PDE (4.3) verified by and the fact that . Moreover because , we can conclude that the limit is equal to zero in the penultimate equality.
To finish the proof, we refer to the fact that the weak limit of the law of the three-dimensional Bessel process conditioned to end at when goes to zero is that of the corresponding three-dimensional Bessel bridge, and thus the result follows from the expression of the Green function . ∎
We are ready to give an explicit formula of the kernel .
Proposition 4.7.
The kernel has the following expression
for , where is a Brownian excursion on .
Proof.
Recall that is given by
Remember that is the same as with the function replaced by . Hence
Consider now a Brownian excursion e on . Conditionally on its value at , the two paths and are independent, and each has the distribution of a three-dimensional Bessel bridge. Furthermore, because of the Brownian scaling we have that
(4.4)
where is a standard Brownian excursion. Thus, using the fact that
it follows that for
Thus by the time-reversal property of the three-dimensional Bessel bridges we have that
By integrating with respect to and we get the desired result. ∎
The next theorem gives a closed formula for the function .
Theorem 4.8.
Let , define the function on by
where e is a Brownian excursion on . Then
Proof.
The function is defined as
where the function is defined as
It verifies the following PDE
(4.5)
Because of the asymptotic behavior of in space at infinity, we can define for every the Laplace transform
From the representation formula of the function (and thus that of ) in the statement of Theorem 4.3 and the fast decay of the Green functions and in space, we can interchange the order of differentiation and integration for the Laplace transform , hence
by integration by parts and using the fact that . From the expression of we deduce that
Since and are fixed for now, we will often omit them when writing out expressions where they do not vary. Thus, the PDE verified by takes the form
This is a first-order nonlinear PDE that can be solved by the method of characteristics. If we denote the variables by and , and the value of the function by , the characteristic ODEs take the form
We choose the initial conditions such that and for . Hence
Introduce the function defined by
Then it is clear that
In order to avoid the singularity at , we thus integrate between and for to get that
which is equivalent to
By taking , we get
(4.6)
As . By sending to in the expression (4.6), we have
It follows that
(4.7)
since , and we can interchange differentiation and the integral sign in the second term because we are away from the singularity line . Now, we have that
It is clear that is smooth in the parameters as well. Our analysis of the regularity of the function used the Kolmogorov forward equation, where the parameters were and ; using similarly the Kolmogorov backward equation, which holds for the parameters and , we see that the solution enjoys the same smoothness and integrability properties away from the line (it is formally just the adjoint problem). Hence we can differentiate inside the integral sign to get
since we have that
Thus
However, the density of a three-dimensional Bessel process is given by
(4.8)
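If, as the scaling step below suggests, the Bessel process here is started from the origin, then the density in question should be the standard one; we recall it for reference (our reading, not a quotation of (4.8)):
\[
\mathbb{P}(R_t \in \mathrm{d}y) \;=\; \sqrt{\frac{2}{\pi t^{3}}}\; y^{2}\, e^{-y^{2}/(2t)}\, \mathrm{d}y, \qquad y > 0,
\]
which in particular gives \(\mathbb{E}[1/R_t] = \sqrt{2/(\pi t)}\).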
Hence
However by Brownian scaling, we know that
Hence
It follows then that
(4.9)
for small. The expectation of the inverse of is computed using the density given in (4.8). On the other hand, for the second term in (4.7), we have
Hence
(4.10)
and thus, from combining (4.7), (4.9) and (4.10) we get
Finally, see that
and then send to zero to finish the proof. ∎
Remark 4.9.
When is parabolic (), the term in the PDE (4.5) of becomes a constant and thus it takes the simple form
By taking the Fourier transform in time we get
This is a Sturm-Liouville equation. Its solution can be expressed in terms of Airy functions, from which follow all the analytical descriptions that Groeneboom found in [9]. It is clear that when is not constant, this method fails, which makes the study more delicate, as one does not have any asymptotic or regularity properties of the function , which were a crucial part of Groeneboom’s analysis. For those reasons, we had to take advantage of the Laplace transform in the space variable.
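To make the Airy connection concrete in a model case (the computation below is ours and is not the exact equation (4.5)): a heat-type equation with a potential that is linear in the space variable reduces to Airy's equation after a Fourier transform in time,
\[
\partial_t u = \tfrac12\, \partial_{xx} u - c\, x\, u
\;\Longrightarrow\;
\tfrac12\, \partial_{xx}\hat u(\omega, x) = (c\,x + i\omega)\, \hat u(\omega, x),
\]
and the change of variable \(z = (2c)^{1/3}\big(x + i\omega/c\big)\) turns the latter into Airy's equation \(y''(z) = z\, y(z)\), whose decaying solution is \(\operatorname{Ai}(z)\). When the drift is not parabolic, the potential is no longer linear in the space variable and no such reduction to a classical special function is available.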
As a consequence of the explicit formulas for and , we are able to provide the joint distribution of the maximum of the process and of its location: it is given by the expressions of and together with Lemma 3.5. However, the formula involves many terms, in particular the Bessel bridge area. On the other hand, the density of the location of the maximum takes a simpler form. This is a generalization of the Chernoff distribution, where the parabolic drift is replaced by an arbitrary strictly convex drift .
Theorem 4.10.
Let be the location of the unique maximum of the process . Its density is equal to
where is the analogue of for the process .
Proof.
We will prove the equality for ; the case is completely identical. From Lemma 3.5 with and any
Hence
by independence of the paths and . However by time reversal of the Brownian motion we have
Thus
Notice that the right-hand side is independent of , so we can drop the conditional probability on the left-hand side. Moreover, by (3.7), we have
(4.11)
Using the expression of the entrance law of the process in (3.3), we have
(4.12)
Hence combining (4.11) and (4.12) we get
which completes the proof.
∎
Remark 4.11.
This last theorem is exactly Theorem 1.11 by noticing that and .
Remark 4.12.
Proposition 4.13.
For any we have
Proof.
From equation (1.6) in [10] (there is a typo in the published paper: the term in the denominator should be there instead of ), we have that
(4.13)
where , and is the second Airy function. By differentiating both sides with respect to and sending to zero, we get
(4.14)
as the Wronskian of the Airy functions and is constant and equal to . On the right-hand side of (4.13), we cannot differentiate inside the integral sign because the integral becomes divergent. However, for fixed , the integrand is absolutely integrable and thus we can use Fubini’s theorem. Now, from [11, Equation 384, Page 141] we have that
where is as usual a three-dimensional Bessel process. Thus, by inverse Laplace transform we have
Hence the integral in the RHS of (4.14) is equal to
(4.15)
By splitting this integral on and , we can interchange the integral and the differentiation for the integral on , and so we get after sending to zero
(4.16)
where e is as usual a Brownian excursion on the corresponding interval. As for the first term (the integral on ), by the change of variable (), it is equal to
by Brownian scaling on the Bessel process . Differentiating with respect to , we get by the Leibniz rule
(4.17)
where is equal to
However we have that for small enough (such that )
so
Similarly with the other terms we find that there is a constant (that depends on ) such that
Hence, by combining (4.16) and (4.17), the limit of the derivative of the expression in (4.15) when goes to zero is equal to
Now it suffices to see that
By sending to zero we get the desired result. ∎
We are now ready to prove the Theorem 1.9.
Proof of Theorem 1.9.
Recall that our solution is expressed as
Hence, is stationary by Theorem 2.2, and so it is a time-homogeneous Markov process whose generator is determined by
where
By a change of variables we have
Similarly
where
The theorem then follows by appropriately defining the kernel . ∎
Remark 4.14.
While our main study focused on the case where the initial potential is a two-sided Brownian motion, it is not hard to see that the result about the profile of the entropy solution extends to the case where the potential is a spectrally positive Lévy process with non-zero Brownian exponent. The main ingredients used were, respectively, the path decomposition of Markov processes at their ultimate maximum and the regularity properties of the transition function . Both facts hold true in the Lévy case when the initial potential has a non-zero Brownian exponent, as the only difference in the Kolmogorov forward equation is an added integral operator accounting for the jumps of the Lévy process. A similar approach leads to the same smoothness property away from the singularity line (the presence of the heat operator is key to parabolic smoothing), which makes all the operations of Section 3 valid. Moreover, one should be able to extract similar expressions for the jump kernel by using the version of the Girsanov theorem for Lévy processes. We chose in this paper to discuss only the Brownian motion case because it gives a general idea of how things work and because it greatly simplifies the computations. One would expect similar formulas in which the analogue of the Brownian excursion is the Lévy bridge, informally defined as a Lévy process conditioned to stay positive and to start and end at zero. Those bridges are discussed in [19].
5. Structure of shocks of the entropy solution
A priori, from the involved expression of the generator in Theorem 1.9, one cannot easily tell whether the structure of shocks of the solution is discrete or not. Indeed, this amounts to checking whether the following integrability condition on the jump kernel holds
However, using the recent theory of Lipschitz minorants of Lévy processes developed in [2] and [7], and following some of the arguments from the study of the structure of shocks in Burgers equation of [1], it turns out that when the initial potential is an abrupt spectrally positive Lévy process, one can prove that the set of jump times of the solution is discrete.
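To visualize the objects used in the proof below, the following sketch (our own illustration, with arbitrary parameters) computes the Lipschitz majorant of a discretized random-walk path and its contact set; the papers [2] and [7] treat the analogous Lipschitz minorant of a Lévy process.

```python
import numpy as np

# Illustrative sketch: alpha-Lipschitz majorant M(t) = sup_s ( f(s) - alpha*|t - s| )
# of a discretized two-sided random-walk path f, and its contact set {t : M(t) = f(t)}.
rng = np.random.default_rng(0)
n, half_width, alpha = 2000, 10.0, 5.0
t = np.linspace(-half_width, half_width, n)
dt = t[1] - t[0]
f = np.concatenate([np.cumsum(rng.normal(0, np.sqrt(dt), n // 2))[::-1],
                    [0.0],
                    np.cumsum(rng.normal(0, np.sqrt(dt), n - n // 2 - 1))])
M = np.max(f[None, :] - alpha * np.abs(t[:, None] - t[None, :]), axis=1)
contact = t[np.isclose(M, f, atol=1e-9)]
print(f"{contact.size} contact points out of {n} grid points")
```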
As we did with Theorem 1.9, we will prove a general statement for the process from which Theorem 1.15 will follow. We thus state the following theorem.
Theorem 5.1.
Assume that is an abrupt spectrally positive Lévy process and is a strictly convex function such that and almost surely. Then the range of is a.s. discrete.
Proof.
From Theorem 2.2, we know that for every
hence it suffices to prove that the set is a.s. discrete. Moreover, we can restrict the process to . Indeed, we claim that the probability of the event
goes to zero as goes to infinity. To show this claim, assume that there exists a sequence such that and . By definition we have that
(5.1)
Up to taking subsequences, we have either that or . If , take in (5.1), then
(5.2)
As is strictly increasing, we must have , and thus is decreasing for . Hence from (5.2) and the fact that , we get
(5.3)
for large enough. However, because has the same distribution as , we have almost surely , which contradicts (5.3). The case is similar, taking in (5.1), thus proving our claim.
Define now the event as
It suffices to prove that .
Suppose initially that and let . Because of our assumption on , for large enough we have that . For any such that , we have for all
(5.4)
For such that , let us consider now the process that is the -Lipschitz majorant of , defined formally as
We refer the reader to the two papers [2] and [7] for a detailed study of the Lipschitz minorant of a Lévy process. Consider (resp. ) to be the last contact point before (resp. the first contact point after ) of with , i.e.
for any . Moreover, let be the contact set of and defined as
Then on the event , from the inequality (5.4), we have
Hence for , we have
Similarly for we get the same result. Together with (5.4), we deduce that for any such that , is in the contact set . However, when is abrupt, we know from [2, proof of Proposition 6.1] that this set is discrete, and hence is finite. Thus
(5.5)
Now it is not hard to see that for , we have that . Hence, for large enough we have
(5.6)
where is independent of . However, from [2, Theorem 2.6] we know that the set is stationary and regenerative (see [8] for the precise definition of stationary regenerative sets); thus the random variables and have the same distribution as . Moreover, from [2, Equation (4.7)], we have that
where is the Lévy measure of the subordinator associated with the contact set (the stationarity of ensuring that ). It thus follows from (5.6) that the right-hand side of (5.5) goes to zero as , from which we get the desired result that the range of is discrete when .
Now, if , consider for any the truncated process , that is, the process started at zero and with its jumps of size greater than removed. It is formally defined as
(5.7)
We have that , as any Lévy process with uniformly bounded jumps has finite moments of any order (see [18, Lemma 8.2]). Denote by the process where we replace by . By what we proved previously, we have that almost surely, the set is finite for every (as the finiteness of the moment of order of ensures, by the law of large numbers, that ). By the arguments provided before, it suffices to prove that is finite for every . Now, for and we have that
and similarly for . Thus almost surely
as by the law of large numbers . Let such that for all , we have almost surely
Let be such that is increasing on and decreasing on . Then for and , we have
Hence there exists large enough such that
(5.8)
Now, the largest jump size of the process on any compact interval is almost surely finite, because
where is the Lévy measure of . Hence there exists a random such that on , and thus if is infinite, then there exist infinitely many such that
which in light of (5.8) implies that
and this is a contradiction with the fact that is finite, thus completing the proof. ∎
Finally, we are left to prove Theorem 1.15
Proof of Theorem 1.15.
In light of Theorem 5.1, it suffices to check that for any we have
Indeed, due to the convexity of , the function is increasing and thus the limits
exist. However, due to the superlinear growth of (and thus of ), it must be that and , which gives the desired result. ∎
Remark 5.2.
The class of abrupt Lévy processes mentioned in Theorem 1.15 is quite large. Indeed, it contains any linear combination of a Brownian motion with linear drift and stable Lévy processes with index , with their negative jumps removed.
References
- [1] J. Abramson. Structure of shocks in Burgers turbulence with Lévy noise initial data. J. Stat. Phys., 152(3):541–568, 2013.
- [2] J. Abramson and S. N. Evans. Lipschitz minorants of Brownian motion and Lévy processes. Probab. Theory Relat. Fields, 158(3-4):809–857, 2014.
- [3] D. Applebaum. Lévy processes and stochastic calculus, volume 93 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2004.
- [4] J. Bertoin. The inviscid Burgers equation with Brownian initial velocity. Communications in Mathematical Physics, 193(2):397, 1998.
- [5] J. Bertoin. Structure of shocks in Burgers turbulence with stable noise initial data. Communications in Mathematical Physics, 203(3):729, 1999.
- [6] L. C. Evans. Partial differential equations. American Mathematical Society, Providence, R.I., 2010.
- [7] S. N. Evans and M. Ouaki. Excursions away from the Lipschitz minorant of a Lévy process. arXiv e-prints, page arXiv:1905.07038, May 2019.
- [8] P. J. Fitzsimmons and M. Taksar. Stationary regenerative sets and subordinators. Ann. Probab., 16(3):1299–1305, 1988.
- [9] P. Groeneboom. Brownian motion with a parabolic drift and Airy functions. Probability Theory and Related Fields, 81(1):79–109, 1989.
- [10] P. Groeneboom, S. Lalley, and N. Temme. Chernoff’s distribution and differential equations of parabolic and Airy type. J. Math. Anal. Appl., 423(2):1804–1824, 2015.
- [11] S. Janson. Brownian excursion area, Wright’s constants in graph enumeration, and other Brownian areas. Probab. Surv., 4:80–145, 2007.
- [12] D. C. Kaspar and F. Rezakhanlou. Scalar conservation laws with monotone pure-jump Markov initial conditions. Probab. Theory Relat. Fields, 165(3-4):867–899, 2016.
- [13] D. C. Kaspar and F. Rezakhanlou. Kinetic statistics of scalar conservation laws with piecewise-deterministic Markov process data. Archive for Rational Mechanics and Analysis, 237(1):259, 2020.
- [14] G. Menon and R. L. Pego. Universality classes in Burgers turbulence. Comm. Math. Phys., 273(1):177–202, 2007.
- [15] G. Menon and R. Srinivasan. Kinetic theory and Lax equations for shock clustering and Burgers turbulence. Journal of Statistical Physics, 140(6):1–29, 2010.
- [16] P. W. Millar. A path decomposition for Markov processes. The Annals of Probability, 6(2):345, 1978.
- [17] D. Revuz and M. Yor. Continuous martingales and Brownian motion, volume 293 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 1991.
- [18] R. L. Schilling. An introduction to Lévy and Feller processes. In From Lévy-type processes to parabolic SPDEs, Adv. Courses Math. CRM Barcelona, pages 1–126. Birkhäuser/Springer, Cham, 2016.
- [19] G. Uribe Bravo. Bridges of Lévy processes conditioned to stay positive. Bernoulli, 20(1):190–206, 2014.
- [20] V. Vigon. Abrupt Lévy processes. Stochastic Process. Appl., 103(1):155–168, 2003.
- [21] M. von Smoluchowski. Drei Vorträge über Diffusion, Brownsche Bewegung und Koagulation von Kolloidteilchen. Z. Phys., 17:557–585, 1916.