
HJB equation for maximization of wealth under insider trading

Jorge A. León Departamento de Control Automático, Cinvestav-IPN, Apartado Postal 14-740, 07000 CDMX, Mexico jleon@ctrl.cinvestav.mx    Liliana Peralta Corresponding author: Departamento de Matemáticas, Facultad de Ciencias, Universidad Nacional Autónoma de México, Circuito Exterior, C.U., 04510 CDMX, Mexico. lylyaanaa@ciencias.unam.mx    Iván Rodríguez Centro de Investigación en Matemáticas, Jalisco S/N, Col.Valenciana, C.P. 36000, Guanajuato, Mexico ivan.rodriguez@cimat.mx
Abstract

In this paper, we combine the techniques of enlargement of filtrations and stochastic control theory to establish an extension of the verification theorem, in which the coefficients of the controlled stochastic equation are adapted to the underlying filtration while the controls are adapted to a filtration \mathbf{G} bigger than the one generated by the corresponding Brownian motion B. Using the forward integral defined by Russo and Vallois [17], we show that there is a \mathbf{G}-adapted optimal control with respect to a certain cost functional if and only if the Brownian motion B is a \mathbf{G}-semimartingale. The extended verification theorem allows us to study a financial market with an insider who takes advantage of the extra information he/she has from the beginning. Finally, we apply the extended verification theorem to two examples that arise in financial markets with an insider.

Keywords: Cost and value functions, Enlargement of filtrations, Forward integral, HJB-equation, Itô’s formula for adapted random fields, Semimartingales, Verification theorem
Mathematical Subject Classification: 93E20 34H05 49L99 60H05

1 Introduction

The theory of enlargement of a filtration was initiated in 1976 by Itô [6]. He pointed out that one way to extend the domain of the stochastic integral (in the Itô sense) with respect to an \mathbf{F}-martingale Y is to enlarge the filtration \mathbf{F} to another filtration \mathbf{G} in such a way that Y remains a semimartingale with respect to the new and bigger filtration \mathbf{G}. In this way, we can integrate processes that are \mathbf{G}-adapted, which include processes that are not necessarily adapted to the underlying filtration \mathbf{F}. In particular, Itô [6] shows that if \mathbf{G}_{1} and \mathbf{G}_{2} are two filtrations such that \mathbf{G}_{1}\subset\mathbf{G}_{2} and Y is a semimartingale with respect to both filtrations, then the stochastic integrals with respect to the \mathbf{G}_{1}- and \mathbf{G}_{2}-semimartingale Y agree on the intersection of the domains of both integrals. However, the case \mathbf{G}_{1}\not\subset\mathbf{G}_{2} and \mathbf{G}_{2}\not\subset\mathbf{G}_{1} was not considered in [6]. This problem was solved by Russo and Vallois [17] by means of the forward integral. The forward integral is defined as a limit in probability and agrees with the Itô integral when the integrator is a semimartingale (see Section 4 and Remark 4.2), which answers the question that Itô did not address. The forward integral is thus an anticipating integral: it allows us to integrate processes that are not adapted to the underlying filtration with respect to processes that are not necessarily semimartingales, and it coincides with the Itô integral whenever the latter is well-defined for the filtration \mathbf{G}. Consequently, the forward integral is an appropriate tool to deal with problems involving processes that are not adapted to the underlying filtration.
There are other anticipating integrals, such as the divergence operator of the Malliavin calculus as defined in Nualart [14], or the Stratonovich integral introduced in [17] (see also León [9]). However, these integrals do not agree with the Itô integral under enlargement of filtrations. Examples where anticipating integrals, together with the Malliavin calculus, can be applied include the stability of solutions to stochastic differential equations with a random variable as initial condition (León et al. [10]), the optimal portfolio of an investor with extra information from the beginning (see, for instance, Biagini and Øksendal [3], León et al. [11] and references therein, and Pikovsky and Karatzas [15]), stochastic differential equations driven by fractional Brownian motion, which is not a semimartingale (see, for example, Alòs et al. [1], or Garzón et al. [5]), and the short-time behaviour of the implied volatility investigated by Alòs et al. [2]. The last problem involves only processes adapted to the underlying filtration, but employs the future volatility as a main tool, which is a process that is not adapted (i.e., it is an anticipating process).

The use of the forward integral in financial markets was first introduced by León et al. [11] to figure out an optimal portfolio of an insider maximizing the expected logarithmic utility from terminal wealth. An insider is an investor who possesses, from the beginning, extra information on the development of the market, represented by a random variable L. In this way, we obtain an approach based on the Malliavin calculus to analyse the dynamics of the wealth equation of this insider, since the forward integral is related to the divergence and derivative operators, as shown in Nualart [14, equality (3.14)] and in Russo and Vallois [17, Remark 2.5].

It is well-known that the wealth equation is a controlled stochastic differential equation. Hence, the problem of calculating an optimal portfolio that maximizes the utility from terminal wealth is nothing else than a stochastic control problem; that is, we must compute an optimal control that maximizes/minimizes a cost functional. A main tool in stochastic control theory is the verification theorem, which involves an optimal control and the so-called Hamilton-Jacobi-Bellman equation (HJB-equation for short). The version of the classical verification theorem considered in this paper is the one given in the book by Korn and Korn [8]. In this setting, it is natural to consider, in the HJB-equation, controls that are adapted to a filtration bigger than the underlying one, as is done in Theorem 3.2 below. Thus, the first goal of this paper is to study an extension of the verification theorem that is based on a classical controlled stochastic differential equation and on a classical cost function, but with controls adapted to the filtration generated by the underlying filtration and a random variable L that stands for certain extra information on the problem (see the filtration \mathbf{G} defined in (1)).

Since the forward integral allows us to integrate with respect to stochastic processes that are not semimartingales, one could think of dealing with a forward controlled stochastic differential equation driven by a process that is a martingale with respect to the underlying filtration, but with controls adapted to a filtration bigger than the one generated by this martingale. However, we show that if an optimal control can be found in this case, then the driving process is still a semimartingale with respect to the bigger filtration. This is the second goal of this paper.

The paper is organized as follows. In Section 2, we establish the framework used in the remainder of this article. Section 3 is devoted to stating the extended verification theorem. In Section 4, we analyse an inverse-type result for the extended verification theorem; namely, we show that if there exists an optimal control with respect to a certain cost function and a certain filtration \tilde{\mathbf{G}}, then the given Brownian motion is still a \tilde{\mathbf{G}}-semimartingale. Finally, in Section 5, we provide two applications of our extended verification theorem that arise in financial markets.

2 Statement of the problem using initial enlargement of the filtrations

Let B=\{B_{t}:t\in[0,T]\} be a Brownian motion defined on a complete probability space (\Omega,P,\mathcal{F}) and let \mathbf{F}=\{\mathcal{F}_{t}\}_{t\in[0,T]} be the filtration generated by B augmented with the null sets. It is well-known that \{\mathcal{F}_{t}\}_{t\geq 0} satisfies the usual conditions. Every \sigma-algebra \mathcal{F}_{t} in \mathbf{F} contains the events whose occurrence or non-occurrence can be determined from the history of the process B up to time t. If we assume the arrival of new information from a random variable L, this leads us to consider a new filtration \mathbf{G}=\{\mathcal{G}_{t}\}_{t\in[0,T]} given by

\mathcal{G}_{t}:=\bigcap_{s>t}\left(\mathcal{F}_{s}\vee\sigma(L)\right),   (1)

which also satisfies the usual conditions. Under suitable assumptions on L (see, for example, Yor and Mansuy [12, Section 1.3], León et al. [11, Section 3] or Protter [16, Section 6]), B is still a special \mathbf{G}-semimartingale with decomposition

B_{t}=\tilde{B}_{t}+\int_{0}^{t}\alpha_{s}(L)ds,\quad t\in[0,T],   (2)

where \tilde{B} is a \mathbf{G}-Brownian motion and the information drift \alpha=\{\alpha_{s}(x):s\in[0,T]\ \mbox{and}\ x\in\mathbb{R}\} is an \mathbf{F}-adapted random field such that \alpha(L)\in L^{p}(0,t) w.p.1, for each t\in[0,T] and some p>1.
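As a concrete illustration (not part of the formal development of this paper), one may take the classical example L = B_T, for which the information drift is \alpha_{s}(L)=(B_{T}-B_{s})/(T-s) (see Pikovsky and Karatzas [15]). The following Python sketch, with purely illustrative parameters, checks decomposition (2) numerically: the process \tilde{B}_{t}=B_{t}-\int_{0}^{t}\alpha_{s}(L)ds recovered from simulated paths should have the variance of a Brownian motion.

```python
import numpy as np

# Classical example of initial enlargement: L = B_T, with information
# drift alpha_s(L) = (L - B_s)/(T - s).  Parameters are illustrative.
rng = np.random.default_rng(0)
T, n, n_paths = 1.0, 500, 10000
dt = T / n
t_grid = np.linspace(0.0, T, n + 1)

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)
L = B[:, -1]                       # the insider's extra information

# Recover B~_t = B_t - int_0^t alpha_s(L) ds at t = 0.5 (away from the
# singularity at s = T) by a left-point Riemann sum.
k = n // 2                         # t = 0.5
alpha = (L[:, None] - B[:, :k]) / (T - t_grid[None, :k])
B_tilde_t = B[:, k] - np.sum(alpha * dt, axis=1)

# If (2) holds, B~ is a G-Brownian motion, so Var(B~_t) should be near t.
print(B_tilde_t.mean(), B_tilde_t.var())
```

With 10,000 paths, the sample mean of \tilde{B}_{0.5} should be close to 0 and its sample variance close to 0.5, consistent with \tilde{B} being a \mathbf{G}-Brownian motion.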

In the financial framework, the initial enlargement of filtrations can be interpreted as follows. Consider a classical financial market with one bond and one risky asset. Then, by Karatzas [7], the wealth X of an honest investor follows the dynamics of the Itô stochastic differential equation

dX_{t}=\left(r_{t}X_{t}+(\tilde{r}_{t}-r_{t})u_{t}\right)dt+u_{t}\sigma_{t}dB_{t},\quad t\in[0,T].   (3)

Here, u stands for the amount that the investor invests in the stock (i.e., the risky asset), and the processes r, \tilde{r} and \sigma are \mathbf{F}-adapted stochastic processes that represent the rate of the bond, the rate of the stock and the volatility of the market, respectively. Now suppose that this investor is an insider; that is, he/she has from the beginning some extra knowledge of the future development of the market, given by the random variable L. So, this insider can use strategies of the form u(L) to invest in the stock and make a profit, where u=\{u_{s}(x):s\in[0,T]\ \mbox{and}\ x\in\mathbb{R}\} is an \mathbf{F}-adapted random field (see León et al. [11] or Navarro [13], and Pikovsky and Karatzas [15]). In this case, from (2) and (3), the wealth equation of the insider is

dX_{t}=\left(r_{t}X_{t}+(\tilde{r}_{t}-r_{t})u_{t}(L)+\sigma_{t}\alpha_{t}(L)u_{t}(L)\right)dt+u_{t}(L)\sigma_{t}d\tilde{B}_{t},\quad t\in[0,T].   (4)

Actually, equations (3) and (4) are equivalent (i.e., they have the same solutions). Also, in this case, equation (3) is a controlled stochastic differential equation driven by the \mathbf{G}-semimartingale B that involves controls that are \mathbf{G}-adapted. Hence, to take advantage of the extra information L, we can figure out a \mathbf{G}-adapted optimal control with respect to a certain cost function, unlike the classical stochastic control problem, where the controls are \mathbf{F}-adapted processes. We extend this setting as follows.
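The equivalence of (3) and (4) can be seen path by path: substituting dB_{t}=d\tilde{B}_{t}+\alpha_{t}(L)dt into (3) reproduces (4) exactly. A small Euler-scheme sketch (constant illustrative coefficients, and again the hypothetical example L = B_T) confirms that the two discretized equations generate the same wealth path:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 500
dt = T / n
t = np.linspace(0.0, T, n + 1)

dB = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate([[0.0], np.cumsum(dB)])
L = B[-1]                                    # extra information L = B_T
alpha = (L - B[:-1]) / (T - t[:-1])          # information drift
dB_tilde = dB - alpha * dt                   # increments of the G-Brownian motion

r, r_tilde, sigma = 0.02, 0.05, 0.3          # illustrative constant coefficients
u = np.full(n, 10.0)                         # a fixed amount invested in the stock

X3 = np.empty(n + 1); X4 = np.empty(n + 1)   # Euler schemes for (3) and (4)
X3[0] = X4[0] = 100.0
for k in range(n):
    X3[k + 1] = X3[k] + (r * X3[k] + (r_tilde - r) * u[k]) * dt \
                + u[k] * sigma * dB[k]
    X4[k + 1] = X4[k] + (r * X4[k] + (r_tilde - r) * u[k]
                + sigma * alpha[k] * u[k]) * dt + u[k] * sigma * dB_tilde[k]

print(np.max(np.abs(X3 - X4)))               # zero up to floating-point rounding
```

The drift term \sigma\alpha u\,dt added in (4) cancels exactly against the -\alpha\,dt part of d\tilde{B}, so the two schemes are algebraically identical.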

Let U and \mathcal{O} be a closed and an open subset of \mathbb{R}, respectively. For t_{0}\in[0,T), we write Q=(t_{0},T)\times\mathcal{O} and \bar{Q}=[t_{0},T]\times\bar{\mathcal{O}}. Throughout this work, we assume that the extra information is modeled by a random variable L. Now, consider two measurable functions b,\sigma:\bar{Q}\times U\to\mathbb{R}, satisfying suitable conditions given in Section 4, and the controlled stochastic differential equation for the filtration \mathbf{G}

dY_{t}=b(t,Y_{t},u_{t})dt+\sigma(t,Y_{t},u_{t})dB_{t},\quad t\in(0,T].   (5)

Here, u:[0,T]\times\Omega\rightarrow U has the form u_{s}=u_{s}(L), as in equation (4). In consequence, under assumption (2), this last equation can also be written as

dY_{t}=\left(b(t,Y_{t},u_{t})+\sigma(t,Y_{t},u_{t})\alpha_{t}(L)\right)dt+\sigma(t,Y_{t},u_{t})d\tilde{B}_{t},\quad t\in[0,T].   (6)

That is, the solution Y:\Omega\times[0,T]\to\mathcal{O} of these two equations is an Itô process adapted to the filtration \mathbf{G}. Therefore, Y is controlled only as long as it remains in the open set \mathcal{O}. Thus, it is necessary to introduce the \mathbf{G}-stopping time

\tau:=\inf\{s\in[t_{0},T]\mid(s,Y_{s})\notin Q\}.   (7)

Recall that \tau is a stopping time since the filtration \mathbf{G} satisfies the usual conditions, as established in Protter [16]. Moreover, by definition, \tau\leq T.
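In a discretized setting, \tau is simply the first grid time at which the pair (s,Y_{s}) leaves Q, with the convention \tau=T when the path never exits. A minimal sketch (the set \mathcal{O}=(0,2) and the driftless path are illustrative assumptions) is:

```python
import numpy as np

rng = np.random.default_rng(2)
t0, T, n = 0.0, 1.0, 1000
dt = (T - t0) / n
t = np.linspace(t0, T, n + 1)

# A simulated state path started at Y_{t0} = 1 (here a driftless diffusion).
Y = 1.0 + np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

lo, hi = 0.0, 2.0                      # the open set O = (0, 2), an assumption
outside = (Y <= lo) | (Y >= hi)        # (s, Y_s) leaves Q when Y_s leaves O
tau = t[np.argmax(outside)] if outside.any() else T
print(tau)
```

By construction t_{0}\leq\tau\leq T, matching the remark above.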

The main task in stochastic control consists in determining a control u^{*} that is optimal with respect to a certain cost function. In this paper, the cost function has the form

\mathcal{J}\left(t,x;u\right):=\mathbb{E}_{t,x}\left(\int_{t}^{\tau}L(s,Y_{s},u_{s})ds+\psi(\tau,Y_{\tau})\right),   (8)

where the deterministic functions L:Q\times U\to\mathbb{R} and \psi:\bar{Q}\to\mathbb{R} are the running and final cost functions, respectively. Furthermore, the expectation \mathbb{E}_{t,x} indicates that the solution Y to the controlled equation (5) has initial condition x at time t. The classical tool to solve this optimization problem is the so-called Hamilton-Jacobi-Bellman equation (HJB-equation), which is related to the value function (see (11) below) through the verification theorem. Consequently, in this paper we are interested in establishing an extension of the verification theorem that allows us to deal with controls adapted to a filtration bigger than the underlying one, for which the Brownian motion B is a semimartingale. This is done in Section 3. Conversely, in Section 4, we show that if we can find a \mathbf{G}-adapted optimal control (with respect to a certain cost function), where the filtration \mathbf{G} is bigger than the one generated by B, then B is a \mathbf{G}-semimartingale. Finally, in Section 5, we provide two examples where we apply our extended verification theorem.

3 The statement of verification theorem under enlargement of filtration

The goal of this section is to state a verification theorem for the initial enlargement of filtrations. We first introduce the general assumptions and notation that we use throughout this section.

(\Omega,P,\mathcal{F}) is a complete probability space on which a Brownian motion B=\{B_{t}:t\in[0,T]\} is defined, and L:\Omega\rightarrow\mathbb{R} is a random variable such that there exist a \mathbf{G}-Brownian motion \tilde{B}=\{\tilde{B}_{t}:t\in[0,T]\} and an \mathbf{F}-adapted random field \alpha=\{\alpha_{s}(x):s\in[0,T]\ \mbox{and}\ x\in\mathbb{R}\} satisfying equality (2), for all t\in[0,T], w.p.1. Here, \mathbf{F} is defined in Section 2 and \mathbf{G} is the filtration introduced in (1). In this paper, \mathcal{F} is not necessarily the \sigma-algebra \mathcal{F}_{T}; that is, we could have \mathcal{F}_{T}\subset\mathcal{F}.

In this section, we deal with equation (5). That is, the controlled stochastic differential equation

dX_{t}=b(t,X_{t},u_{t})dt+\sigma(t,X_{t},u_{t})dB_{t},\quad t\in(t_{0},T].

Here, t_{0}\in[0,T), and the coefficients b,\sigma and the control u satisfy the following hypothesis and definition, respectively. Recall that Q and U were introduced in Section 2.

  • (\mathbf{H})

    The coefficients b,\sigma:Q\times U\to\mathbb{R} are measurable and satisfy the following conditions:

    • i)

      b(t,\cdot,u),\sigma(t,\cdot,u)\in C^{1}(\mathcal{O}), for all (t,u)\in(t_{0},T)\times U.

    • ii)

      There exists a constant C>0 such that, for all (t,x,u)\in Q\times U,

      |\partial_{x}b|\leq C,\quad|\partial_{x}\sigma|\leq C,\quad\mbox{and}\quad|b(t,x,u)|+|\sigma(t,x,u)|\leq C(1+|x|+|u|).

Observe that equation (5) can only have a solution X up to the first time it explodes; consequently, it will be controlled only as long as it remains in the set \mathcal{O}. In this case, this means that equation (5) has a solution t\mapsto X_{t} until either it reaches the boundary \partial\mathcal{O} of the set \mathcal{O}, or t=T.

Now, we are ready to define the admissible strategies.

Definition 3.1.

Let t_{0}\in[0,T). A \mathbf{G}-progressively measurable process u:[t_{0},T]\times\Omega\rightarrow U is called an admissible control for equation (5) if

\mathbb{E}\left(\int_{t}^{T}|u_{s}|^{k}ds\right)<\infty,\quad\text{for all }k\in\mathbb{N},   (9)

and, for x\in\mathcal{O}, equation (5) has a solution X such that X_{t_{0}}=x. Moreover, we denote by \mathcal{A}(t_{0},x) the family of admissible controls defined on [t_{0},T]\times\Omega.

Note that if u\in\mathcal{A}(t_{0},x) and x\in\mathcal{O}, then equation (5) has a unique solution such that X_{t_{0}}=x, because of the definition of admissible control and Hypothesis (\mathbf{H}).ii), which implies that the coefficients b and \sigma are Lipschitz on any interval contained in \mathcal{O}, uniformly on [0,T]\times U.

The main task in stochastic control consists in determining a control u^{*} that is optimal with respect to a certain cost functional. For our purposes, the cost functional has the form given in equality (8), where the deterministic functions L:Q\times U\to\mathbb{R} and \psi:\bar{Q}\to\mathbb{R} satisfy

|L(t,x,u)|\leq C(1+|x|^{k}+|u|^{k})\quad\mbox{and}\quad|\psi(t,x)|\leq C(1+|x|^{k}),   (10)

for some k\in\mathbb{N}. Recall that the notation \mathbb{E}_{t,x} corresponds to the expectation of functionals of the solution X to equation (5) with initial condition x at time t.

Before stating the control problem for this work, we need to introduce some extra definitions and conventions.

The control problem that we consider here is to compute u^{*}\in\mathcal{A}(t,x) minimizing the cost functional (8); that is, a control u^{*} in \mathcal{A}(t,x) satisfying

V(t,x):=\inf_{u\in\mathcal{A}(t,x)}\mathcal{J}(t,x;u)=\mathcal{J}(t,x;u^{*}).   (11)

Note that the function V:[0,T]\times\mathcal{O}\rightarrow\mathbb{R} describes the evolution of the minimal cost as a function of (t,x). This function is called the value function.

In analogy with the adapted case, where \alpha\equiv 0, we use the convention

A^{u}G(t,x):=\partial_{t}G(t,x)+\frac{1}{2}\sigma^{2}(t,x,u)\partial_{xx}G(t,x)+\left(b(t,x,u)+\alpha_{t}(L)\sigma(t,x,u)\right)\partial_{x}G(t,x),   (12)

for G\in C^{1,2}(Q)\cap C(\bar{Q}) and (t,x,u)\in Q\times U. Observe that we need to deal with the extra term (t,x,u)\mapsto\alpha_{t}(L)\sigma(t,x,u)\partial_{x}G(t,x), since equation (5) is equivalent to equation (6) due to condition (2). Recall that equation (6) is a controlled stochastic differential equation driven by the \mathbf{G}-Brownian motion \tilde{B}. So, in the remainder of this section, we assume that \alpha(L) defined in (2) belongs to L^{p}([0,T]\times\Omega), for some p>1.

Now we are in a position to state the main result of this section, where we use the \mathbf{G}-stopping time \tau given in (7). Note that \tau\equiv T in the case \mathcal{O}=\mathbb{R}.

Theorem 3.2.

Let Hypothesis (\mathbf{H}) be satisfied, let G:Q\times\Omega\to\mathbb{R} be a \mathbf{G}-adapted random field, and let \Omega_{0}\subset\Omega be a set of probability 1 such that, for all \omega\in\Omega_{0},

G\in C^{1,2}(Q)\cap C(\bar{Q}),\quad|G(t,x)|\leq K(1+|x|^{m}),\quad\mbox{and}\quad|G_{x}(t,x)|\leq J(1+|x|^{n}),

for some random variables K\in L^{2}(\Omega) and J\in L^{4}(\Omega), and m,n\in\mathbb{N}. In addition, assume that G is a solution of the Hamilton-Jacobi-Bellman equation

\begin{cases}\inf_{u\in U}\left\{A^{u}G(t,x)+L(t,x,u)\right\}=0,&(t,x)\in Q,\\ \mathbb{E}_{t,x}\left(G(\tau,X_{\tau})\right)=\mathbb{E}_{t,x}\left(\psi(\tau,X_{\tau})\right),&(t,x)\in Q,\end{cases}   (13)

where X is the solution of either equation (5) or equation (6). Then, if

\mathbb{E}_{t,x}\left(\|X\|^{\beta}\right):=\mathbb{E}_{t,x}\left(\sup_{s\in[t,\tau]}|X_{s}|^{\beta}\right)<\infty,\quad\mbox{for}\ (t,x)\in Q,   (14)

with \beta=\max(2m,k), where k is the exponent in (10), we have that

a)

\mathbb{E}_{t,x}(G(t,x))\leq\mathcal{J}(t,x;u), for all (t,x)\in Q and u\in\mathcal{A}(t,x).

b)

If, for all (t,x)\in Q, there exists a control u^{*}\in\mathcal{A}(t,x) such that

u^{*}_{s}\in\text{arg}\min_{u\in U}\left(A^{u}G(s,X_{s}^{*})+L(s,X_{s}^{*},u)\right),   (15)

for all s\in[t,\tau], where X^{*} is the controlled process with X_{t}^{*}=x corresponding to u^{*} via (5), then

\mathbb{E}_{t,x}(G(t,x))=\mathcal{J}(t,x;u^{*})=V(t,x).

In particular, u^{*} is an optimal control and (t,x)\mapsto\mathbb{E}_{t,x}\left(G(t,x)\right) coincides with the value function.

Proof.

Let (t,x)\in Q and \omega_{0}\in\Omega_{0}. Also, let \tau be the \mathbf{G}-stopping time introduced in (7).

We first assume that the open set \mathcal{O} is bounded. Then, using that G is a solution of the HJB-equation (13), we have, for u\in\mathcal{A}(t,x) and s\in[t,\tau),

0\leq A^{u_{s}}G(s,X_{s})+L(s,X_{s},u_{s}).   (16)

On the other hand, consider a \mathbf{G}-stopping time \theta such that t\leq\theta\leq\tau. Hence, by (12), applying Itô’s formula (see [4, Theorem 8.1, pp. 184]) to G(\theta,X_{\theta}) and taking expectations, we obtain

\mathbb{E}_{t,x}\left(G(\theta,X_{\theta})\right)
= \mathbb{E}_{t,x}\left(G(t,x)+\int_{t}^{\theta}\partial_{s}G(s,X_{s})ds+\int_{t}^{\theta}\partial_{x}G(s,X_{s})\left[b(s,X_{s},u_{s})+\sigma(s,X_{s},u_{s})\alpha_{s}(L)\right]ds\right.
\left.+\frac{1}{2}\int_{t}^{\theta}\partial_{xx}G(s,X_{s})\sigma^{2}(s,X_{s},u_{s})ds+\int_{t}^{\theta}\partial_{x}G(s,X_{s})\sigma(s,X_{s},u_{s})d\tilde{B}_{s}\right)
= \mathbb{E}_{t,x}\left(G(t,x)+\int_{t}^{\theta}A^{u_{s}}G(s,X_{s})ds\right)+\mathbb{E}_{t,x}\left(\int_{t}^{\theta}\partial_{x}G(s,X_{s})\sigma(s,X_{s},u_{s})d\tilde{B}_{s}\right).   (17)

Now, we claim that the expectation of the stochastic integral in equality (17) is equal to zero. Indeed, since \sigma satisfies Hypothesis (\mathbf{H}), and using the assumption on G_{x}, we can write

\mathbb{E}_{t,x}\left(\int_{t}^{\theta}|\partial_{x}G(s,X_{s})\sigma(s,X_{s},u_{s})|^{2}ds\right)\leq C^{2}\mathbb{E}_{t,x}\left(\int_{t}^{\theta}J^{2}(1+|X_{s}|^{n})^{2}(1+|X_{s}|+|u_{s}|)^{2}ds\right)
= C^{2}\mathbb{E}_{t,x}\left(\int_{t}^{\theta}J^{2}(1+|X_{s}|^{n})^{2}(1+|X_{s}|)^{2}ds\right)+2C^{2}\mathbb{E}_{t,x}\left(\int_{t}^{\theta}J^{2}(1+|X_{s}|^{n})^{2}(1+|X_{s}|)|u_{s}|ds\right)
+C^{2}\mathbb{E}_{t,x}\left(\int_{t}^{\theta}J^{2}(1+|X_{s}|^{n})^{2}|u_{s}|^{2}ds\right).

Therefore, the fact that \mathcal{O} is bounded yields that there is a constant \tilde{C}>0 such that

\mathbb{E}_{t,x}\left(\int_{t}^{\theta}|\partial_{x}G(s,X_{s})\sigma(s,X_{s},u_{s})|^{2}ds\right)\leq\tilde{C}\mathbb{E}_{t,x}\left(J^{2}\right)+\tilde{C}\mathbb{E}_{t,x}\left(J^{2}\int_{t}^{\theta}|u_{s}|ds\right)+\tilde{C}\mathbb{E}_{t,x}\left(\int_{t}^{\theta}J^{2}|u_{s}|^{2}ds\right).

Thus, our claim holds, since J\in L^{4}(\Omega) and condition (9) is satisfied. That is,

\mathbb{E}_{t,x}\left(\int_{t}^{\theta}|\partial_{x}G(s,X_{s})\sigma(s,X_{s},u_{s})|^{2}ds\right)<\infty,

which implies that

\mathbb{E}_{t,x}\left(\int_{t}^{\theta}\partial_{x}G(s,X_{s})\sigma(s,X_{s},u_{s})d\tilde{B}_{s}\right)=0

because \tilde{B} is a \mathbf{G}-Brownian motion. Then, equality (17) becomes the inequality

\mathbb{E}_{t,x}\left(G(t,x)\right) = \mathbb{E}_{t,x}\left(G(\theta,X_{\theta})-\int_{t}^{\theta}A^{u_{s}}G(s,X_{s})ds\right) \leq \mathbb{E}_{t,x}\left(G(\theta,X_{\theta})+\int_{t}^{\theta}L(s,X_{s},u_{s})ds\right),   (18)

where the last inequality follows from (16). In particular, taking \theta=\tau and using the boundary condition in (13), we obtain the assertion in a).

Now consider a general open set \mathcal{O}\subset\mathbb{R}; let us see that (18) is also satisfied in this case. To do so, choose N\in\mathbb{N} such that \frac{1}{N}<T-t. For p\in\mathbb{N} such that p>N, set

\mathcal{O}_{p}:=\mathcal{O}\cap\left\{x\in\mathbb{R}\mid|x|<p,\ \text{dist}(x,\partial\mathcal{O})>\frac{1}{p}\right\},

with

Q_{p}:=\left[t,T-\frac{1}{p}\right)\times\mathcal{O}_{p}.

Let \tau_{p}=\inf\{s\in[t,T-\frac{1}{p})\mid(s,X_{s})\notin Q_{p}\}. Then, (18) implies

\mathbb{E}_{t,x}\left(G(t,x)\right)\leq\mathbb{E}_{t,x}\left(\int_{t}^{\tau_{p}}L(s,X_{s},u_{s})ds+G(\tau_{p},X_{\tau_{p}})\right),

for all (t,x)\in Q_{p} and u\in\mathcal{A}(t,x). Consequently, the dominated convergence theorem, \tau_{p}\uparrow\tau, (9), (10), (14), and the facts that G is continuous on \bar{Q} and G(t,x)\leq K(1+|x|^{m}) lead to

\mathbb{E}_{t,x}\left(G(t,x)\right)\leq\mathbb{E}_{t,x}\left(\int_{t}^{\tau}L(s,X_{s},u_{s})ds+\psi(\tau,X_{\tau})\right).

To finish the proof, we now suppose, towards a contradiction, that for all (t,x)\in Q and u\in\mathcal{A}(t,x), the following strict inequality is satisfied:

\mathbb{E}_{t,x}\left(G(t,x)\right)<\mathbb{E}_{t,x}\left(\int_{t}^{\tau}L(s,X_{s},u_{s})ds+\psi(\tau,X_{\tau})\right),

which gives

0<\mathbb{E}_{t,x}\left(\int_{t}^{\tau}(L(s,X_{s},u_{s})+A^{u_{s}}G(s,X_{s}))ds\right).   (19)

Indeed, from (17), with \theta replaced by \tau_{p}, we obtain

\mathbb{E}_{t,x}\left(G(\tau_{p},X_{\tau_{p}})-G(t,x)\right) = \mathbb{E}_{t,x}\left(\int_{t}^{\tau_{p}}A^{u_{s}}G(s,X_{s})ds\right)
= \mathbb{E}_{t,x}\left(\int_{t}^{\tau_{p}}(L(s,X_{s},u_{s})+A^{u_{s}}G(s,X_{s}))ds-\int_{t}^{\tau_{p}}L(s,X_{s},u_{s})ds\right).

Thus, inequality (16) and the dominated and monotone convergence theorems allow us to show that (19) holds. In particular, inequality (19) is true for the control u^{*} that satisfies (15); namely,

0<\mathbb{E}_{t,x}\left(\int_{t}^{\tau}(L(s,X^{*}_{s},u^{*}_{s})+A^{u^{*}_{s}}G(s,X^{*}_{s}))ds\right),

which yields a contradiction, since G is a solution of equation (13) and u^{*} attains the infimum in (15), so the integrand vanishes. Therefore,

\mathbb{E}_{t,x}\left(G(t,x)\right)=V(t,x)=\mathcal{J}(t,x;u^{*}),

and the proof of case b) is complete. ∎

4 A converse-type result for the verification theorem

The purpose of this section is to give a converse-type result of the verification theorem proved in Section 3. To this end, the main tool in this section is the forward integral with respect to the Brownian motion B. Recall that \mathbf{F} stands for the filtration generated by B augmented with the P-null sets.

Definition 4.1 (Forward integral).

Let v:[0,T]\times\Omega\rightarrow\mathbb{R} be a \mathcal{B}([0,T])\otimes\mathcal{F}-measurable process with integrable trajectories. We say that v is forward integrable with respect to B (v\in\text{Dom }\delta^{-} for short) if

\frac{1}{\epsilon}\int_{0}^{T}v_{s}(B_{(s+\epsilon)\wedge T}-B_{s})ds

converges in probability as \epsilon\downarrow 0. We denote this limit by \int_{0}^{T}v_{s}d^{-}B_{s}.

Remark 4.2.

The forward integral has the following properties:

  • i)

    Assume that v=\{v_{t}:t\in[0,T]\} is a bounded \mathcal{B}([0,T])\otimes\mathcal{F}_{T}-measurable and \mathbf{F}-adapted process. Then, Russo and Vallois [17, Proposition 1.1] have shown that v\in\text{Dom }\delta^{-} and

    \int_{0}^{T}v_{s}d^{-}B_{s}=\int_{0}^{T}v_{s}dB_{s},

    where the stochastic integral in the right-hand side is in the Itô sense.

  • ii)

    Assume that B is a \tilde{\mathbf{G}}-semimartingale, where \tilde{\mathbf{G}} is a filtration bigger than \mathbf{F}. Let X be a \tilde{\mathbf{G}}-adapted process that is integrable with respect to the \tilde{\mathbf{G}}-semimartingale B. Then X\in\text{Dom }\delta^{-} and

    \int_{0}^{T}X_{s}d^{-}B_{s}=\int_{0}^{T}X_{s}dB_{s},

    where the right-hand side is the Itô integral with respect to the \tilde{\mathbf{G}}-semimartingale B. This is also proven in Proposition 1.1 of [17].

  • iii)

    Let v\in\text{Dom }\delta^{-} and let \theta be a random variable. Then, it is easy to see that \theta v\in\text{Dom }\delta^{-} and

    \int_{0}^{T}(\theta v_{s})d^{-}B_{s}=\theta\int_{0}^{T}v_{s}d^{-}B_{s}.
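Definition 4.1 and Remark 4.2.i) can be illustrated numerically. For the adapted integrand v=B, the \epsilon-averages in Definition 4.1 should approach the Itô integral \int_{0}^{T}B_{s}dB_{s}=(B_{T}^{2}-T)/2. The sketch below (grid size and \epsilon are illustrative choices) approximates the limit on one simulated path:

```python
import numpy as np

rng = np.random.default_rng(3)
T, n = 1.0, 4000
dt = T / n
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

def forward_integral(v, B, dt, eps_steps):
    """(1/eps) * int_0^T v_s (B_{(s+eps) ^ T} - B_s) ds with eps = eps_steps*dt."""
    n = len(v)
    idx = np.minimum(np.arange(n) + eps_steps, n)   # (s + eps) ^ T on the grid
    increments = B[idx] - B[:n]
    return float(increments @ v) * dt / (eps_steps * dt)

v = B[:-1]                                      # adapted integrand v_s = B_s
approx = forward_integral(v, B, dt, eps_steps=40)   # eps = 0.01
ito_exact = (B[-1] ** 2 - T) / 2.0              # closed form of int_0^T B dB
print(approx, ito_exact)
```

For small \epsilon the two values are close, as Remark 4.2.i) predicts for bounded adapted integrands; the residual gap is of order \sqrt{\epsilon}.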

Using Definition 4.1, the stochastic equation involved for the wealth process of an investor is the following controlled forward equation (see equation (3)):

X_{t}=x+\int_{0}^{t}\left[r_{s}X_{s}+(\tilde{r}_{s}-r_{s})u_{s}\right]ds+\int_{0}^{t}u_{s}\sigma_{s}d^{-}B_{s},\quad t\in[0,T].   (20)

Here, the coefficients satisfy the following condition:

Hypothesis 4.3.

r,\tilde{r},\sigma:[0,T]\times\Omega\rightarrow\mathbb{R} are \mathcal{B}([0,T])\otimes\mathcal{F}_{T}-measurable and \mathbf{F}-adapted processes such that

  1. r is a bounded process such that (r-\tilde{r})\in L^{2}([0,T]) with probability 1.

  2. \sigma>0 is a bounded process.

Throughout this section, we assume that we have a filtration \tilde{\mathbf{G}} bigger than \mathbf{F}. The family of admissible controls is related to this filtration; that is, in this section, the family \mathcal{A}(t,x) of admissible controls is the set of \tilde{\mathbf{G}}-progressively measurable processes u\in L^{2}([0,T]\times\Omega) for which (20) has a unique solution with X_{t}=x, for all x\in\mathbb{R}. Note that, in particular, we have u\sigma\in\text{Dom }\delta^{-}. We also observe that if the filtration \tilde{\mathbf{G}} agrees with the filtration \mathbf{G} introduced in (1) and B is the \mathbf{G}-semimartingale given in (2), then equation (20) is nothing else than equation (4), due to Remark 4.2.ii). This is why the forward integral was used in [13] to solve problems related to financial markets.

Remember that we are interested in the optimal control problem defined in (11), where we consider the cost functional given by

\mathcal{J}(t,x;u):=\mathbb{E}_{t,x}\left(\int_{t}^{T}au_{s}^{2}ds-\exp\left(-\int_{0}^{T}r_{s}ds\right)X_{T}(u)\right),\text{ for }a>0. (21)

In other words, we take the classic quadratic running cost function L(t,x,u)=au^{2} and the final cost \psi(t,x)=-e^{-\int_{0}^{t}r_{s}ds}x, which can be interpreted as the present value of the quantity xx.
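The cost functional (21) can be estimated by Monte Carlo. The sketch below assumes a constant control, constant coefficients and r̃ = r (so the excess-drift term in (20) vanishes); the function name and all parameter values are illustrative assumptions.

```python
import math
import random

def cost_functional(a, r, u_const, x0, sigma, T=1.0, n=100,
                    n_paths=2000, seed=7):
    """Monte Carlo sketch of the cost (21), starting at t = 0 with X_0 = x0.

    Running cost a*u^2 accumulated over [0,T]; terminal wealth discounted
    by exp(-r T).  Wealth dynamics of (20) with r_tilde = r, discretized
    by Euler-Maruyama.  Illustrative only.
    """
    rng = random.Random(seed)
    dt = T / n
    disc = math.exp(-r * T)
    total = 0.0
    for _ in range(n_paths):
        X = x0
        for _ in range(n):
            dB = rng.gauss(0.0, math.sqrt(dt))
            X += r * X * dt + u_const * sigma * dB
        total += a * u_const ** 2 * T - disc * X
    return total / n_paths
```

For r = 0 the terminal wealth has mean x₀, so the estimate should be close to a·u²·T − x₀.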

The objective is to prove that if there exists an admissible optimal control u^{*}\in\mathcal{A}(t,x) for the problem given in (11) via the cost functional (21), then the 𝐅\mathbf{F}-Brownian motion BB is a semimartingale in the bigger filtration 𝐆~\tilde{\mathbf{G}}. To achieve this result, we use the following hypothesis, which is inspired by [3, Theorem 3.5].

Hypothesis 4.4.
  1.

    For all t[0,T)t\in[0,T) and u𝒜(t,x)u\in\mathcal{A}(t,x), the process (s,ω)us+χ(t,t+h](s)θ0(ω)(s,\omega)\mapsto u_{s}+\chi_{(t,t+h]}(s)\theta_{0}(\omega) belongs to 𝒜(t,x)\mathcal{A}(t,x), where θ0\theta_{0} is a bounded 𝐆~t\tilde{\mathbf{G}}_{t}-measurable random variable and h>0h>0 is such that t+hTt+h\leq T.

  2.

    There is a constant mm such that 0<m|σ|0<m\leq|\sigma| with probability 1.

Concerning point 1 of Hypothesis 4.4, we observe the following. Consider the 𝐅\mathbf{F}-adapted random field

Xt~(y)\displaystyle X_{\tilde{t}}(y) =\displaystyle= exp(0t~rs𝑑s)x+exp(0t~rs𝑑s)y0t~exp(0srη𝑑η)(r~srs)χ(t,t+h](s)𝑑s\displaystyle\exp\left(\int_{0}^{\tilde{t}}r_{s}ds\right)x+\exp\left(\int_{0}^{\tilde{t}}r_{s}ds\right)y\int_{0}^{\tilde{t}}\exp\left(-\int_{0}^{s}r_{\eta}d\eta\right)(\tilde{r}_{s}-r_{s})\chi_{(t,t+h]}(s)ds (22)
+exp(0t~rs𝑑s)y0t~exp(0srs𝑑s)σsχ(t,t+h](s)𝑑Bs,\displaystyle+\exp\left(\int_{0}^{\tilde{t}}r_{s}ds\right)y\int_{0}^{\tilde{t}}\exp\left(-\int_{0}^{s}r_{s}ds\right)\sigma_{s}\chi_{(t,t+h]}(s)dB_{s},

for t~[0,T]\tilde{t}\in[0,T] and yy\in\mathbb{R}. The classical Itô formula implies that X(y)X(y) is a solution to the 𝐅\mathbf{F}-adapted stochastic differential equation

Xt~(y)=x+0t~rsXs(y)𝑑s+y0t~(r~srs)χ(t,t+h](s)𝑑s+y0t~σsχ(t,t+h](s)𝑑Bs,t~[0,T].X_{\tilde{t}}(y)=x+\int_{0}^{\tilde{t}}r_{s}X_{s}(y)ds+y\int_{0}^{\tilde{t}}(\tilde{r}_{s}-r_{s})\chi_{(t,t+h]}(s)ds+y\int_{0}^{\tilde{t}}\sigma_{s}\chi_{(t,t+h]}(s)dB_{s},\quad\tilde{t}\in[0,T].

Therefore, Remarks 4.2.i) and 4.2.iii) yield that, for a 𝐆~\tilde{\mathbf{G}}-random variable θ\theta, the 𝐆~\tilde{\mathbf{G}}-adapted process X(θ)X(\theta) is a solution to equation (20) with u=θχ(t,t+h]u=\theta\chi_{(t,t+h]}. Moreover, proceeding as in León et al. [11], we can show that, in this case, equation (20) has a unique solution of the form π(θ)\pi(\theta), where \pi=\{\pi_{s}(y):(s,y)\in[0,T]\times\mathbb{R}\} is an 𝐅\mathbf{F}-adapted random field satisfying suitable conditions. We suppose in Hypothesis 4.4.1 that the process (s,\omega)\mapsto\chi_{(t,t+h]}(s)\theta_{0}(\omega) is an admissible control because we do not know the form of all the solutions to equation (20); in other words, we are assuming the uniqueness of the solution to (20) for controls of this form.
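For the deterministic part of the random field (22), that is, taking σ = 0 and constant coefficients, the closed form can be checked against an Euler scheme for the corresponding case of equation (27). A sketch under these simplifying assumptions (all names and values illustrative):

```python
import math

def closed_form(t_tilde, x, y, r, r_tilde, t, h):
    """Sigma = 0, constant-coefficient case of the random field (22):
    X(y) = e^{r t~} ( x + y (r~ - r) \\int_0^{t~} e^{-r s} 1_{(t,t+h]}(s) ds )."""
    lo, hi = max(0.0, t), min(t_tilde, t + h)
    if hi <= lo:
        integral = 0.0
    elif r != 0.0:
        integral = (math.exp(-r * lo) - math.exp(-r * hi)) / r
    else:
        integral = hi - lo
    return math.exp(r * t_tilde) * (x + y * (r_tilde - r) * integral)

def euler_solution(t_tilde, x, y, r, r_tilde, t, h, n=20000):
    """Euler scheme for the sigma = 0 case of equation (27):
    dX = [ r X + y (r~ - r) 1_{(t,t+h]}(s) ] ds."""
    dt = t_tilde / n
    X = x
    for k in range(n):
        s = k * dt
        drift = r * X + (y * (r_tilde - r) if t < s <= t + h else 0.0)
        X += drift * dt
    return X
```

Both computations agree up to the discretization error of the Euler scheme, illustrating that (22) solves (27) in this simple case.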

Now, we can prove the main result of this section.

Theorem 4.5.

Suppose that Hypotheses 4.3 and 4.4 are satisfied and that there exists an optimal control u𝒜(t,x)u^{*}\in\mathcal{A}(t,x) for the problem defined in (11) with the functional (21). Then, the 𝐅\mathbf{F}-Brownian motion BB is a 𝐆~\tilde{\mathbf{G}}-semimartingale.

Proof.

In order to simplify the notation we use the convention

bt:=0trs𝑑s,t[0,T].b_{t}:=\int_{0}^{t}r_{s}ds,\quad t\in[0,T].

Consider the functional HH defined as follows

H(u):=𝔼t,x(tTaus2𝑑sebTXT(u)),for u𝒜(t,x).H(u):=\mathbb{E}_{t,x}\left(\int_{t}^{T}au_{s}^{2}ds-e^{-b_{T}}X_{T}(u)\right),\quad\mbox{for }u\in\mathcal{A}(t,x).

Let θs(ω)=χ(t,t+h](s)θ0(ω)\theta_{s}(\omega)=\chi_{(t,t+h]}(s)\theta_{0}(\omega) be an admissible control as in Hypothesis 4.4.1 and define F(y):=H(u+yθ)F(y):=H(u^{*}+y\theta) for all yy\in\mathbb{R}. Then the directional derivative of FF is

y^F\displaystyle\nabla_{\hat{y}}F =limε01ε𝔼t,x(tTa[us+yθs+εy^θs]2dsebTXT(u+yθ+εy^θ)\displaystyle=\lim_{\varepsilon\downarrow 0}\frac{1}{\varepsilon}\mathbb{E}_{t,x}\left(\int_{t}^{T}a\left[u^{*}_{s}+y\theta_{s}+\varepsilon\hat{y}\theta_{s}\right]^{2}ds-e^{-b_{T}}X_{T}(u^{*}+y\theta+\varepsilon\hat{y}\theta)\right.
\displaystyle\quad\left.-\int_{t}^{T}a\left[u^{*}_{s}+y\theta_{s}\right]^{2}ds+e^{-b_{T}}X_{T}(u^{*}+y\theta)\right)
=y^𝔼t,x(tT2a(us+yθs)θs𝑑s)\displaystyle=\hat{y}\mathbb{E}_{t,x}\left(\int_{t}^{T}2a(u^{*}_{s}+y\theta_{s})\theta_{s}ds\right)
limε0𝔼t,x(ebT[XT(u+yθ+εy^θ;x)XT(u+yθ;x)ε])\displaystyle\quad-\lim_{\varepsilon\downarrow 0}\mathbb{E}_{t,x}\left(e^{-b_{T}}\left[\frac{X_{T}(u^{*}+y\theta+\varepsilon\hat{y}\theta;x)-X_{T}(u^{*}+y\theta;x)}{\varepsilon}\right]\right)
=y^[𝔼t,x(tT2a(us+yθs)θs𝑑s)𝔼t,x(ebTXT(θ;0))],for all y^0.\displaystyle=\hat{y}\left[\mathbb{E}_{t,x}\left(\int_{t}^{T}2a(u^{*}_{s}+y\theta_{s})\theta_{s}ds\right)-\mathbb{E}_{t,x}\left(e^{-b_{T}}X_{T}(\theta;0)\right)\right],\quad\mbox{for all }\hat{y}\neq 0. (23)

From the analysis of the random field (22), we know

ebT\displaystyle e^{-b_{T}} XT(θ;0)=0Tebs(r~srs)χ(t,t+h](s)θ0𝑑s+0Tebsσsχ(t,t+h](s)θ0dBs.\displaystyle X_{T}(\theta;0)=\int_{0}^{T}e^{-b_{s}}(\tilde{r}_{s}-r_{s})\chi_{(t,t+h]}(s)\theta_{0}ds+\int_{0}^{T}e^{-b_{s}}\sigma_{s}\chi_{(t,t+h]}(s)\theta_{0}d^{-}B_{s}. (24)

Substituting (24) into (23), we get

y^F=\displaystyle\nabla_{\hat{y}}F= y^[𝔼t,x(tt+h2a(us+yθ0)θ0ds)\displaystyle\hat{y}\left[\mathbb{E}_{t,x}\left(\int_{t}^{t+h}2a(u^{*}_{s}+y\theta_{0})\theta_{0}ds\right)\right.
𝔼t,x(tt+hebs(r~srs)θ0ds+tt+hebsσsθ0dBs)],y^0.\displaystyle-\left.\mathbb{E}_{t,x}\left(\int_{t}^{t+h}e^{-b_{s}}(\tilde{r}_{s}-r_{s})\theta_{0}ds+\int_{t}^{t+h}e^{-b_{s}}\sigma_{s}\theta_{0}d^{-}B_{s}\right)\right],\quad\hat{y}\neq 0. (25)

By hypothesis, the functional HH reaches its minimum at u^{*}. Therefore, FF has a minimum at y=0. Thus, from equation (25), together with Remarks 4.2.i) and 4.2.iii), we obtain

𝔼t,x(θ0[tt+h[2ausebs(r~srs)]𝑑stt+hebsσs𝑑Bs])=0.\mathbb{E}_{t,x}\left(\theta_{0}\left[\int_{t}^{t+h}[2au^{*}_{s}-e^{-b_{s}}(\tilde{r}_{s}-r_{s})]ds-\int_{t}^{t+h}e^{-b_{s}}\sigma_{s}dB_{s}\right]\right)=0.

Since this equality holds for every bounded 𝒢~t\tilde{\mathcal{G}}_{t}-measurable random variable θ0\theta_{0}, we have established

𝔼t,x(tt+h[2ausebs(r~srs)]𝑑stt+hebsσs𝑑Bs|𝒢~t)=0.\mathbb{E}_{t,x}\left(\left.\int_{t}^{t+h}[2au^{*}_{s}-e^{-b_{s}}(\tilde{r}_{s}-r_{s})]ds-\int_{t}^{t+h}e^{-b_{s}}\sigma_{s}dB_{s}\right|\tilde{\mathcal{G}}_{t}\right)=0. (26)

Now, for any admissible control u𝒜(t,x)u\in\mathcal{A}(t,x), we denote

Nu(t)=0t[2ausebs(r~srs)]𝑑s0tebsσs𝑑Bs,t[0,T].N_{u}(t)=\int_{0}^{t}\left[2au_{s}-e^{-b_{s}}(\tilde{r}_{s}-r_{s})\right]ds-\int_{0}^{t}e^{-b_{s}}\sigma_{s}dB_{s},\quad t\in[0,T].

In consequence, from identity (26) and point 1 of Hypothesis 4.3, we have

\mathbb{E}_{t,x}\left(\left.N_{u^{*}}(t+h)\right|\tilde{\mathcal{G}}_{t}\right)=N_{u^{*}}(t),

since N_{u^{*}}(t) is 𝒢~t\tilde{\mathcal{G}}_{t}-measurable. Thus N_{u^{*}} is a 𝐆~\tilde{\mathbf{G}}-martingale, which implies that R_{t}=\int_{0}^{t}e^{-b_{s}}\sigma_{s}dB_{s} is a 𝐆~\tilde{\mathbf{G}}-semimartingale. Finally, Hypothesis 4.4.2 gives

0tebsσs1𝑑Rs=Bt,t[0,T],\int_{0}^{t}e^{b_{s}}\sigma^{-1}_{s}dR_{s}=B_{t},\quad t\in[0,T],

which shows that BB is a 𝐆~\tilde{\mathbf{G}}-semimartingale, and the proof is complete. ∎
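The final inversion step of the proof, recovering B from R by integrating the bounded process e^{b_s}σ_s^{-1} against dR, is exact in discrete time for left-point sums. A minimal sketch with constant r and σ (all values illustrative); note that Hypothesis 4.4.2 is what licenses the division by σ:

```python
import math
import random

def recover_brownian(r, sigma, T=1.0, n=500, seed=3):
    """Discrete-time check of the inversion at the end of Theorem 4.5:
    R_t = int e^{-b_s} sigma dB_s  and  int e^{b_s} sigma^{-1} dR_s = B_t.
    Constant r and sigma; the identity is exact for left-point sums
    (up to floating-point roundoff)."""
    rng = random.Random(seed)
    dt = T / n
    B = 0.0
    B_rec = 0.0          # Brownian motion reconstructed from R
    s = 0.0
    for _ in range(n):
        dB = rng.gauss(0.0, math.sqrt(dt))
        dR = math.exp(-r * s) * sigma * dB        # increment of R
        B_rec += math.exp(r * s) * dR / sigma     # needs sigma bounded away from 0
        B += dB
        s += dt
    return B, B_rec
```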

Corollary 4.6.

Let u𝒜(t,x)u\in\mathcal{A}(t,x). Suppose that the process

Nu(t)=0t[2ausexp(0srτ𝑑τ)(r~srs)]𝑑s0texp(0srτ𝑑τ)σsdBs,N_{u}(t)=\int_{0}^{t}\left[2au_{s}-\exp\left(-\int_{0}^{s}r_{\tau}d\tau\right)(\tilde{r}_{s}-r_{s})\right]ds-\int_{0}^{t}\exp\left(-\int_{0}^{s}r_{\tau}d\tau\right)\sigma_{s}d^{-}B_{s},

is a 𝐆~\tilde{\mathbf{G}}-martingale. Then uu is an optimal control for the problem (11) with the functional (21).

Proof.

By the proof of Theorem 4.5, we have that the 𝐅\mathbf{F}-Brownian motion BB is also a 𝐆~\tilde{\mathbf{G}}-semimartingale and that (25) holds with uu in place of u^{*}. Moreover, by Remark 4.2.ii), we get

y^F=\displaystyle\nabla_{\hat{y}}F= y^[𝔼t,x(0T2ausθsds)\displaystyle\hat{y}\left[\mathbb{E}_{t,x}\left(\int_{0}^{T}2au_{s}\theta_{s}ds\right)\right.
𝔼t,x(0Tebs(r~srs)θsds+0TebsσsθsdBs)]=0.\displaystyle-\left.\mathbb{E}_{t,x}\left(\int_{0}^{T}e^{-b_{s}}(\tilde{r}_{s}-r_{s})\theta_{s}ds+\int_{0}^{T}e^{-b_{s}}\sigma_{s}\theta_{s}dB_{s}\right)\right]=0. (27)

for y^0\hat{y}\neq 0 and θ𝒜(t,x)\theta\in\mathcal{A}(t,x) of the form

θs=i=0N1θi(ω)χ(ti,ti+1](s),0sT,\theta_{s}=\sum_{i=0}^{N-1}\theta^{i}(\omega)\chi_{(t_{i},t_{i+1}]}(s),\quad 0\leq s\leq T,

where θi\theta^{i} is a bounded and 𝒢~ti\tilde{\mathcal{G}}_{t_{i}}-measurable random variable and 0=t0<t1<<tN=T0=t_{0}<t_{1}<\cdots<t_{N}=T. Let 𝒜0\mathcal{A}_{0} be the set of such processes θ\theta. Finally, using that 𝒜0\mathcal{A}_{0} is dense in the set of all the square-integrable and 𝐆~\tilde{\mathbf{G}}-progressively measurable processes, it is not difficult to see that (27) is also satisfied when θ\theta belongs to 𝒜(t,x)\mathcal{A}(t,x) and, therefore, the proof is complete. ∎

5 Application of the verification theorem under enlargement of filtrations

The aim of this section is to study two examples through the extended verification theorem analyzed in Section 3 (i.e., Theorem 3.2).

Example 5.1.

Let rr be a positive constant and σ\sigma an 𝐅\mathbf{F}-adapted bounded process. Consider the controlled stochastic process XX given by

Xt=x+0trXs𝑑s+0tusσs𝑑Bs,t[0,T].X_{t}=x+\int_{0}^{t}rX_{s}ds+\int_{0}^{t}u_{s}\sigma_{s}dB_{s},\quad t\in[0,T]. (28)

However, as we have already pointed out, if we have additional information represented by a random variable LL satisfying the conditions of Section 2 (i.e., the filtration 𝐆\mathbf{G} in (1) is such that the 𝐆\mathbf{G}-adapted process B~\tilde{B} in (2) is a 𝐆\mathbf{G}-Brownian motion), then equation (28) is equivalent to the stochastic differential equation (driven by the 𝐆\mathbf{G}-Brownian motion B~\tilde{B})

Xt=x+0t(rXs+usαs(L)σs)𝑑s+0tusσs𝑑B~s.X_{t}=x+\int_{0}^{t}\left(rX_{s}+u_{s}\alpha_{s}(L)\sigma_{s}\right)ds+\int_{0}^{t}u_{s}\sigma_{s}d\tilde{B}_{s}. (29)

Our purpose is to maximize the wealth at the terminal time TT while reducing the cost of the control uu. That is, to solve the problem

infu𝒜(t,x)𝒥(t,x;u):=infu𝒜(t,x)𝔼t,x(tTaus2𝑑sbXT),with a,b>0.\inf_{u\in\mathcal{A}(t,x)}\mathcal{J}\left(t,x;u\right):=\inf_{u\in\mathcal{A}(t,x)}\mathbb{E}_{t,x}\left(\int_{t}^{T}au_{s}^{2}ds-bX_{T}\right),\quad\mbox{with }a,b>0. (30)

So, the corresponding HJB problem associated with (29) and (30) is to find a subset Ω0Ω\Omega_{0}\subset\Omega such that P(Ω0)=1P(\Omega_{0})=1 and, on Ω0\Omega_{0}, compute a solution to the HJB equation

{infu{tG(t,x)+12u2σt2xxG(t,x)+(rx+uαt(L)σt)xG(t,x)+au2}=0in [0,T)×,𝔼t,x(G(T,XT))=b𝔼t,x(XT).\begin{cases}\inf_{u\in\mathbb{R}}\left\{\partial_{t}G(t,x)+\frac{1}{2}u^{2}\sigma_{t}^{2}\partial_{xx}G(t,x)+\left(rx+u\alpha_{t}(L)\sigma_{t}\right)\partial_{x}G(t,x)+au^{2}\right\}=0&\text{in }[0,T)\times\mathbb{R},\\ \mathbb{E}_{t,x}\left(G(T,X_{T})\right)=-b\mathbb{E}_{t,x}\left(X_{T}\right).\end{cases} (31)

Note that the argument of the infimum in equation (31) is a polynomial of degree 2 in the variable uu. Thus, using the second derivative criterion, we obtain the optimal control

u(L)=αt(L)σtxG(t,Xt)σt2xxG(t,Xt)+2a,t[0,T].u^{*}(L)=-\frac{\alpha_{t}(L)\sigma_{t}\partial_{x}G(t,X_{t})}{\sigma_{t}^{2}\partial_{xx}G(t,X_{t})+2a},\quad t\in[0,T]. (32)

Since u^{*}(L) attains the infimum, it can be substituted into (31) to obtain the equation

tG(t,x)+rxxG(t,x)12αt2(L)σt2(xG(t,x))2σt2xxG(t,x)+2a=0,(t,x)[0,T)×.\partial_{t}G(t,x)+rx\partial_{x}G(t,x)-\frac{1}{2}\frac{\alpha_{t}^{2}(L)\sigma_{t}^{2}(\partial_{x}G(t,x))^{2}}{\sigma_{t}^{2}\partial_{xx}G(t,x)+2a}=0,\quad(t,x)\in[0,T)\times\mathbb{R}. (33)
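The passage from (31) to (32) and (33) is the pointwise minimization of a convex quadratic in u. A small numerical check of this step, with all parameter values chosen purely for illustration:

```python
def u_star(A, B_lin):
    """Minimizer of q(u) = A u^2 + B_lin u + C for A > 0
    (second-derivative criterion: q'' = 2A > 0)."""
    return -B_lin / (2.0 * A)

def hjb_minimizer(sigma, alpha_L, Gx, Gxx, a):
    """Pointwise minimizer in u of the bracket in (31):
    quadratic coefficient A = sigma^2 * Gxx / 2 + a,
    linear coefficient    B = alpha_L * sigma * Gx,
    assuming A > 0.  This reproduces formula (32)."""
    return u_star(sigma ** 2 * Gxx / 2.0 + a, alpha_L * sigma * Gx)
```

Indeed, −B/(2A) = −α σ ∂ₓG / (σ² ∂ₓₓG + 2a), which is exactly (32).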

Now, we propose the function G(t,x)=f(t)x+g_{t}, where ff is a \mathcal{B}([0,T])-measurable function and gg is a 𝐆\mathbf{G}-adapted process, as a candidate solution to equation (33). Computing the partial derivatives of GG and substituting them into (33), we get

4a(xf(t)+gt+rxf(t))αt2(L)σt2f2(t)=0,(t,x)[0,T)×.4a\left(xf^{\prime}(t)+g^{\prime}_{t}+rxf(t)\right)-\alpha^{2}_{t}(L)\sigma_{t}^{2}f^{2}(t)=0,\quad(t,x)\in[0,T)\times\mathbb{R}.

In consequence,

f(t)+rf(t)=0f^{\prime}(t)+rf(t)=0

and

4agtαt2(L)σt2f2(t)=0.4ag^{\prime}_{t}-\alpha_{t}^{2}(L)\sigma_{t}^{2}f^{2}(t)=0.

Note that the last equation imposes that Ω0={ωΩ:α(L)L2([0,T])}\Omega_{0}=\{\omega\in\Omega:\alpha(L)\in L^{2}([0,T])\}. Under the conditions f(T)=bf(T)=-b and 𝔼t,x(gT)=0\mathbb{E}_{t,x}(g_{T})=0, the solutions for ff and gg are

f(t)=ber(tT)f(t)=-be^{-r(t-T)}

and

gt=b24a0tσs2αs2(L)e2r(sT)𝑑sρ0,g_{t}=\frac{b^{2}}{4a}\int_{0}^{t}\sigma_{s}^{2}\alpha_{s}^{2}(L)e^{-2r(s-T)}ds-\rho_{0},

where the constant ρ0\rho_{0} is given by

ρ0=b24a𝔼t,x(0Tσs2αs2(L)e2r(sT)𝑑s).\rho_{0}=\frac{b^{2}}{4a}\mathbb{E}_{t,x}\left(\int_{0}^{T}\sigma_{s}^{2}\alpha_{s}^{2}(L)e^{-2r(s-T)}ds\right).
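The explicit formula for ff can be sanity-checked numerically. The sketch below verifies by central finite differences that f(t) = -b e^{-r(t-T)} satisfies f' + rf = 0 and f(T) = -b, for sample parameter values (chosen only for illustration):

```python
import math

def f(t, r=0.3, b=2.0, T=1.0):
    """Candidate solution f(t) = -b e^{-r(t-T)} of f' + r f = 0, f(T) = -b."""
    return -b * math.exp(-r * (t - T))

def ode_residuals(r=0.3, b=2.0, T=1.0, eps=1e-6):
    """Residual f'(t) + r f(t) at a few sample times, with f' computed
    by a central finite difference."""
    out = []
    for t in (0.1, 0.5, 0.9):
        fp = (f(t + eps, r, b, T) - f(t - eps, r, b, T)) / (2.0 * eps)
        out.append(fp + r * f(t, r, b, T))
    return out
```

The residuals vanish up to finite-difference error, and f(T) = -b holds exactly; the equation for gg is then solved by direct integration, as above.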

Hence, the solution GG of the HJB-equation (31) is

G(t,x)=xber(tT)+b24a0tσs2αs2(L)e2r(sT)𝑑sρ0.G(t,x)=-xbe^{-r(t-T)}+\frac{b^{2}}{4a}\int_{0}^{t}\sigma_{s}^{2}\alpha_{s}^{2}(L)e^{-2r(s-T)}ds-\rho_{0}.

Therefore, equality (32) implies that the optimal control for the problem (30) is determined by

u(L)=αt(L)σtber(tT)2a,u^{*}(L)=\frac{\alpha_{t}(L)\sigma_{t}be^{-r(t-T)}}{2a}, (34)

while the value function is

V(t,x)=𝔼t,x(xber(tT)+b24atTσs2αs2(L)e2r(sT)𝑑s)V(t,x)=-\mathbb{E}_{t,x}\left(xbe^{-r(t-T)}+\frac{b^{2}}{4a}\int_{t}^{T}\sigma_{s}^{2}\alpha_{s}^{2}(L)e^{-2r(s-T)}ds\right)

due to Theorem 3.2.

Finally, in order to ensure that u^{*} given in (34) is an admissible control, by Definition 3.1, we need to verify that it belongs to L^{p}([0,T]\times\Omega), for all p>1. An example of a random variable LL such that α(L)\alpha(L), defined in (2), belongs to L^{p}([0,T]\times\Omega) for all p>1 is

L=0T1m(s)𝑑Bs.L=\int_{0}^{T_{1}}m(s)dB_{s}.

Here T_{1}>T, m\in L^{2}([0,T_{1}]) and m\neq 0 almost everywhere. We can use Mansuy and Yor [12, Section 1.3], Navarro [13, Section 3] or León et al. [11] to see that

\alpha_{t}(x)=\frac{m(t)\left(x-\int_{0}^{t}m(s)dB_{s}\right)}{\int_{t}^{T_{1}}m(s)^{2}ds},\quad t\in[0,T].

In consequence equality (34) provides an admissible control.
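For the simplest choice m ≡ 1 (so that L = B_{T_1}), the information drift reduces to the classical bridge-type expression α_t(L) = (L - B_t)/(T_1 - t), and the optimal control (34) can be evaluated directly. A sketch with purely illustrative parameter values:

```python
import math

def alpha(t, B_t, L, T1):
    """Information drift for L = B_{T1} (the case m = 1 of the example):
    alpha_t(L) = (L - B_t) / (T1 - t), valid for t < T1."""
    return (L - B_t) / (T1 - t)

def optimal_control(t, B_t, L, sigma_t, a, b, r, T, T1):
    """The candidate optimal control (34):
    u*(L) = alpha_t(L) * sigma_t * b * e^{-r(t-T)} / (2a)."""
    return alpha(t, B_t, L, T1) * sigma_t * b * math.exp(-r * (t - T)) / (2.0 * a)
```

The sign of the control reflects the insider's advantage: u* > 0 when the (known) terminal value L lies above the current level B_t, and u* < 0 otherwise.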

Example 5.2.

Here, we consider the controlled stochastic differential equation

Xt=x+0tus𝑑s+0tus𝑑Bs,t[0,T].X_{t}=x+\int_{0}^{t}u_{s}ds+\int_{0}^{t}u_{s}dB_{s},\quad t\in[0,T].

Note that in this case, 𝒜(t,x)\mathcal{A}(t,x) is the family of all the 𝐆\mathbf{G}-progressively measurable processes u:[t0,T]×Ωu:[t_{0},T]\times\Omega\rightarrow\mathbb{R} such that

𝔼(t0T|us|k𝑑s)<, for all k.\mathbb{E}\left(\int_{t_{0}}^{T}|u_{s}|^{k}ds\right)<\infty,\quad\text{ for all }k\in\mathbb{N}.

Remember that the filtration 𝐆\mathbf{G} is defined in (1).

The cost function is given by (30) again, that is,

𝒥(t,x;u):=𝔼t,x(tTaus2𝑑sbXT),with a,b>0.\mathcal{J}\left(t,x;u\right):=\mathbb{E}_{t,x}\left(\int_{t}^{T}au_{s}^{2}ds-bX_{T}\right),\quad\mbox{with }a,b>0.

We observe that in the classical theory of stochastic control (i.e., there is no extra information), an optimal control is

ub2a.u^{*}\equiv\frac{b}{2a}.

Now, as in Example 5.1, we work with 𝐆\mathbf{G}-progressively measurable controls. From Theorem 3.2, we must study the HJB-equation

\begin{cases}\inf_{u\in\mathbb{R}}\left\{\partial_{t}G(t,x)+\frac{1}{2}u^{2}\partial_{xx}G(t,x)+u\left(\alpha_{t}(L)+1\right)\partial_{x}G(t,x)+au^{2}\right\}=0,&\text{in }[0,T)\times\mathbb{R},\\ \mathbb{E}_{t,x}\left(G(T,X_{T})\right)=-b\mathbb{E}_{t,x}\left(X_{T}\right).\end{cases}

Thus, proceeding as in Example 5.1, we propose the optimal control

u(L)=(αt(L)+1)xG(t,Xt)xxG(t,Xt)+2a,t[0,T].u^{*}(L)=-\frac{\left(\alpha_{t}(L)+1\right)\partial_{x}G(t,X_{t})}{\partial_{xx}G(t,X_{t})+2a},\quad t\in[0,T]. (35)

Substituting this control into the previous HJB-equation, we have to solve the equation

tG(t,x)(αt(L)+1)2(xG(t,x))22(xxG(t,x)+2a)\displaystyle\partial_{t}G(t,x)-\frac{\left(\alpha_{t}(L)+1\right)^{2}(\partial_{x}G(t,x))^{2}}{2(\partial_{xx}G(t,x)+2a)} =\displaystyle= 0,(t,x)[0,T)×\displaystyle 0,\quad(t,x)\in[0,T)\times\mathbb{R}
𝔼t,x(G(T,XT))\displaystyle\mathbb{E}_{t,x}\left(G(T,X_{T})\right) =\displaystyle= b𝔼t,x(XT).\displaystyle-b\mathbb{E}_{t,x}\left(X_{T}\right).

In order to continue with our analysis, we proceed as in Example 5.1 again. That is, we propose a function GG of the form

G(t,x)=h(t)bx,(t,x)[0,T)×,G(t,x)=h(t)-bx,\quad(t,x)\in[0,T)\times\mathbb{R},

to show that

G(t,x)=b24a0t(αs(L)+1)2𝑑sbxρ0,G(t,x)=\frac{b^{2}}{4a}\int_{0}^{t}\left(\alpha_{s}(L)+1\right)^{2}ds-bx-\rho_{0},

is the function that we are looking for, if ρ0=b24a𝔼t,x(0T(αs(L)+1)2𝑑s),\rho_{0}=\frac{b^{2}}{4a}\mathbb{E}_{t,x}(\int_{0}^{T}\left(\alpha_{s}(L)+1\right)^{2}ds), which, together with (35), yields

u(L)=b(αt(L)+1)2a,t[0,T].u^{*}(L)=\frac{b\left(\alpha_{t}(L)+1\right)}{2a},\quad t\in[0,T].

As we have already pointed out, in the case α(L)0\alpha(L)\equiv 0 (i.e., there is no extra information), we have

ub2a.u^{*}\equiv\frac{b}{2a}.

Now, it is easy to apply Theorem 3.2 to figure out the value function.
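To see the value of the extra information quantitatively, one can compare by Monte Carlo the insider control u* = b(α+1)/(2a) with the no-information control b/(2a), taking L = B_{T_1} so that α_t(L) = (L - B_t)/(T_1 - t). The simulation below is an illustrative sketch (the forward integral is discretized by left-point sums), not a construction from the paper:

```python
import math
import random

def insider_vs_naive(a=1.0, b=1.0, x0=0.0, T=1.0, T1=2.0,
                     n=100, n_paths=4000, seed=11):
    """Monte Carlo sketch of Example 5.2 with L = B_{T1}, so that
    alpha_t(L) = (L - B_t)/(T1 - t).  Returns the average costs of the
    insider control u* = b(alpha+1)/(2a) and of the no-information
    control b/(2a), computed on the same Brownian paths."""
    rng = random.Random(seed)
    dt = T / n
    m = int(round(T1 / dt))          # steps needed to reach T1
    u_naive = b / (2.0 * a)
    cost_ins = cost_naive = 0.0
    for _ in range(n_paths):
        incs = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(m)]
        L = sum(incs)                # the insider knows B_{T1} in advance
        Bt, Xi, Xn, run = 0.0, x0, x0, 0.0
        for k in range(n):
            t = k * dt
            u = b * ((L - Bt) / (T1 - t) + 1.0) / (2.0 * a)
            dB = incs[k]
            Xi += u * dt + u * dB            # insider wealth (left-point sums)
            Xn += u_naive * dt + u_naive * dB
            run += a * u * u * dt            # insider running cost
            Bt += dB
        cost_ins += run - b * Xi
        cost_naive += a * u_naive ** 2 * T - b * Xn
    return cost_ins / n_paths, cost_naive / n_paths
```

With the default parameters the insider's average cost is markedly lower than the no-information cost, consistent with the fact that extra information can only decrease the infimum of the cost functional.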

Acknowledgment

The work of L. Peralta is supported by UNAM-DGAPA-PAPIIT grants IA100324, IN102822 (Mexico).

References

  • [1] E. Alòs, J. A. León, and D. Nualart. Stochastic Stratonovich calculus fBm for fractional Brownian motion with Hurst parameter less than 1/21/2. Taiwanese J. Math., 5(3):609–632, 2001.
  • [2] E. Alòs, J. A. León, and J. Vives. On the short-time behavior of the implied volatility for jump-diffusion models with stochastic volatility. Finance Stoch., 11(4):571–589, 2007.
  • [3] F. Biagini and B. Øksendal. A general stochastic calculus approach to insider trading. Appl. Math. Optim., 52(2):167–181, 2005.
  • [4] R. M. Dudley, H. Kunita, and F. Ledrappier. École d’Été de Probabilités de Saint-Flour XII, 1982, volume 1097. Springer, 2006.
  • [5] J. Garzón, J. A. León, and S. Torres. Fractional stochastic differential equation with discontinuous diffusion. Stoch. Anal. Appl., 35(6):1113–1123, 2017.
  • [6] K. Itô. Extension of stochastic integrals. In Proceedings of the International Symposium on Stochastic Differential Equations (Res. Inst. Math. Sci., Kyoto Univ., Kyoto, 1976), pages 95–109. Wiley, New York-Chichester-Brisbane, 1978.
  • [7] I. Karatzas. Optimization problems in the theory of continuous trading. SIAM J. Control Optim., 27(6):1221–1259, 1989.
  • [8] R. Korn and E. Korn. Option pricing and portfolio optimization, volume 31 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, 2001. Modern methods of financial mathematics, Translated from the 1999 German original by the authors.
  • [9] J. A. León. Stratonovich type integration with respect to fractional Brownian motion with Hurst parameter less than 1/21/2. Bernoulli, 26(3):2436–2462, 2020.
  • [10] J. A. León, D. Márquez-Carreras, and J. Vives. Stability of some anticipating semilinear stochastic differential equations of Skorohod type. Preprint, 2023.
  • [11] J. A. León, R. Navarro, and D. Nualart. An anticipating calculus approach to the utility maximization of an insider. volume 13, pages 171–185. 2003. Conference on Applications of Malliavin Calculus in Finance (Rocquencourt, 2001).
  • [12] R. Mansuy and M. Yor. Random times and enlargements of filtrations in a Brownian setting, volume 1873 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 2006.
  • [13] R. A. Navarro. Dos modelos estocásticos para transacciones con información privilegiada en mercados financieros y dependencia con memoria larga en tiempos de ocupación. Ph.D. Thesis. CINVESTAV, 2004.
  • [14] D. Nualart. The Malliavin calculus and related topics. Probability and its Applications (New York). Springer-Verlag, Berlin, second edition, 2006.
  • [15] I. Pikovsky and I. Karatzas. Anticipative portfolio optimization. Adv. in Appl. Probab., 28(4):1095–1122, 1996.
  • [16] Ph. E. Protter. Stochastic integration and differential equations, volume 21 of Stochastic Modelling and Applied Probability. Springer-Verlag, Berlin, 2005. Second edition. Version 2.1, Corrected third printing.
  • [17] F. Russo and P. Vallois. Forward, backward and symmetric stochastic integration. Probab. Theory Related Fields, 97(3):403–421, 1993.