HJB equation for maximization of wealth under insider trading
Abstract
In this paper, we combine the techniques of enlargement of filtrations and stochastic control theory to establish an extension of the verification theorem, in which the coefficients of the controlled stochastic equation are adapted to the underlying filtration, while the controls are adapted to a filtration bigger than the one generated by the corresponding Brownian motion . Using the forward integral defined by Russo and Vallois [17], we show that there is a -adapted optimal control with respect to a certain cost functional if and only if the Brownian motion is a -semimartingale. The extended verification theorem allows us to study a financial market with an insider and to take advantage of the extra information that the insider has from the beginning. Finally, we apply the extended verification theorem to two examples that arise in financial markets with an insider.
Keywords: Cost and value functions, Enlargement of the filtrations, Forward integral, HJB-equation, Itô’s formula for adapted random fields, Semimartingales, Verification theorem
Mathematical Subject Classification: 93E20 34H05 49L99 60H05
1 Introduction
The theory of enlargement of a filtration was initiated in 1976 by Itô [6], who pointed out that one way to extend the domain of the stochastic integral (in the Itô sense) with respect to an -martingale is to enlarge the filtration to another filtration in such a way that remains a semimartingale with respect to the new, bigger filtration . In this way, we can integrate processes that are -adapted, which include processes that are not necessarily adapted to the underlying filtration . In particular, Itô [6] shows that if and are two filtrations such that and is a semimartingale with respect to both filtrations, then the stochastic integrals with respect to the and semimartingale coincide on the intersection of the domains of both integrals. However, this problem was not considered in [6] when and . It was solved by Russo and Vallois [17] by means of the forward integral. The forward integral is a limit in probability and agrees with the Itô integral when the integrator is a semimartingale (see Section 4 and Remark 4.2), which answers the question that Itô did not address. Thus, the forward integral is an anticipating integral: it allows us to integrate processes that are not adapted to the underlying filtration with respect to processes that are not necessarily semimartingales, and it coincides with the Itô integral whenever the latter is well-defined for the filtration . Consequently, the forward integral becomes an appropriate tool to deal with problems that involve processes that are not adapted to the underlying filtration. There are other anticipating integrals, such as the divergence operator of the Malliavin calculus, as defined in Nualart [14], or the Stratonovich integral introduced in [17] (see also León [9]), but these integrals do not agree with the Itô integral when we apply the enlargement of filtrations. Examples where anticipating integrals can be applied, together with the Malliavin calculus, are the study of stability of solutions to stochastic differential equations with a random variable as initial condition (León et al. [10]), the optimal portfolio of an investor with extra information from the beginning (see, for instance, Biagini and Øksendal [3], León et al. [11] and references therein, and Pikovsky and Karatzas [15]), the study of stochastic differential equations driven by fractional Brownian motion, which is not a semimartingale (see, for example, Alòs et al. [1], or Garzón et al. [5]), and the study of the short-time behaviour of the implied volatility investigated by Alòs et al. [2]. The last problem involves only processes adapted to the underlying filtration, but it employs the future volatility as a main tool, which is not an adapted process (i.e., it is an anticipating process).
The use of the forward integral in financial markets was first introduced by León et al. [11] to figure out an optimal portfolio for an insider that maximizes the expected logarithmic utility from terminal wealth. An insider is an investor who possesses, from the beginning, extra information about the development of the market, represented by a random variable . In this way, we obtain an approach based on the Malliavin calculus to analyse the dynamics of the wealth equation of this insider, since the forward integral is related to the divergence and derivative operators, as shown in Nualart [14, equality (3.14)] and in Russo and Vallois [17, Remark 2.5].
It is well-known that the wealth equation is a controlled stochastic differential equation. So, the problem of calculating an optimal portfolio that maximizes the utility from terminal wealth is nothing but a stochastic control problem. That is, we must compute an optimal control that maximizes or minimizes a cost functional. A main tool in stochastic control theory is the verification theorem, which involves an optimal control and the so-called Hamilton-Jacobi-Bellman equation (HJB-equation for short). The version of the classical verification theorem considered in this paper is the one given in the book by Korn and Korn [8]. In this verification theorem, it is then natural to consider, in the HJB-equation, controls that are adapted to a bigger filtration than the underlying one, as is done in Theorem 3.2 below. Thus, the first goal of this paper is to study an extension of the verification theorem that is based on a classical controlled stochastic differential equation and on a classical cost function, but with controls adapted to the filtration generated by the underlying filtration and a random variable that represents certain extra information about the problem (see the filtration defined in (1)).
Since the forward integral allows us to integrate with respect to stochastic processes that are not semimartingales, one could think of dealing with a forward controlled stochastic differential equation driven by a process that is a martingale with respect to the underlying filtration, but with controls adapted to a filtration bigger than the one generated by this martingale. However, we show that if we can find an optimal control in this case, then the driving process is still a semimartingale with respect to the bigger filtration. This is the second goal of this paper.
The paper is organized as follows. In Section 2, we establish the framework that we use in the remainder of this article. Section 3 is devoted to stating the extended verification theorem. In Section 4, we analyse a converse-type result for the extended verification theorem. Namely, we show that if there exists an optimal control with respect to a certain cost function and a certain filtration , then the given Brownian motion is still a -semimartingale. Finally, in Section 5, we provide two applications of our extended verification theorem, which appear in financial markets.
2 Statement of the problem using initial enlargement of the filtrations
Let be a Brownian motion defined on a complete probability space and the filtration generated by augmented with the null sets. It is well-known that satisfies the usual conditions. We know that every -algebra in contains the events whose occurrence or non-occurrence can be determined from the history of the process up to time . If we assume the arrival of new information from a random variable , this leads us to consider a new filtration given by
(1) |
which also satisfies the usual conditions. Under suitable assumptions on (see, for example, Mansuy and Yor [12, Section 1.3], León et al. [11, Section 3] or Protter [16, Section 6]), is still a special -semimartingale with decomposition
(2) |
where is a -Brownian motion and the information drift is an -adapted random field such that w.p.1, for each and some .
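To fix ideas, the following minimal Python sketch illustrates the decomposition (2) in the classical example treated by Pikovsky and Karatzas [15], where the extra information is the terminal value of the Brownian motion; in that case the information drift is (B_T - B_t)/(T - t). All notation and parameter values in the sketch are ours and are not taken from the displayed formulas above.

```python
import numpy as np

# Hedged sketch of the decomposition (2) in the classical example of Pikovsky and
# Karatzas [15]: the extra information is the terminal value B_T, for which the
# information drift is alpha_t = (B_T - B_t) / (T - t). Notation and parameters are ours.
rng = np.random.default_rng(0)
T, n, n_paths = 1.0, 400, 5000
dt = T / n
t = np.linspace(0.0, T, n + 1)

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
B = np.concatenate([np.zeros((n_paths, 1)), dB.cumsum(axis=1)], axis=1)
B_T = B[:, -1:]

# Information drift alpha_s = (B_T - B_s) / (T - s) on the left grid points (s < T).
alpha = (B_T - B[:, :-1]) / (T - t[:-1])

# Decomposition (2): B_t = tilde_B_t + int_0^t alpha_s ds, where tilde_B is a Brownian
# motion with respect to the initially enlarged filtration.
tilde_B = B[:, 1:] - np.cumsum(alpha * dt, axis=1)

# Check: tilde_B_t is independent of B_T, so E[tilde_B_t * B_T] ~ 0, while E[B_t * B_T] = t.
k = n // 2                                     # grid index of t = T / 2
print("E[B_{T/2} * B_T]       ~", np.mean(B[:, k] * B_T[:, 0]))           # ~ 0.5
print("E[tilde_B_{T/2} * B_T] ~", np.mean(tilde_B[:, k - 1] * B_T[:, 0])) # ~ 0
```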
In the financial framework, the initial enlargement of filtrations can be interpreted as follows: Consider a classical financial market with one bond and one risky asset. Then, by Karatzas [7], the wealth of an honest investor follows the dynamics of the Itô stochastic differential equation
(3) |
Here, stands for the amount that the investor invests in the stock (i.e., the risky asset), and the processes , and are -adapted stochastic processes that represent the rate of the bond, the rate of the stock and the volatility of the market, respectively. Now suppose that this investor is an insider. That is, he/she has from the beginning some extra knowledge of the future development of the market, given by the random variable . So, this insider can use strategies of the form to invest in the stock and make a profit, where is an -adapted random field (see León et al. [11] or Navarro [13], and Pikovsky and Karatzas [15]). In this case, from (2) and (3), the wealth equation of the insider is
(4) |
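As a hedged illustration of the insider wealth dynamics (3)/(4), the following Euler-Maruyama sketch simulates the wealth of an investor whose strategy is allowed to depend on extra information (here, hypothetically, the terminal value of the driving noise). The constant market coefficients and the particular strategy are our own illustrative choices, not those of this section.

```python
import numpy as np

# Hedged Euler-Maruyama sketch of the insider wealth equation (3)/(4): the strategy may
# depend on extra information A (here, hypothetically, the terminal value of the driving
# noise). The constants r, mu, sigma and the rule below are our own illustrative
# assumptions, not the ones of this section.
rng = np.random.default_rng(1)
T, n = 1.0, 500
dt = T / n
r, mu, sigma, x0 = 0.02, 0.08, 0.3, 1.0

dB = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate([[0.0], dB.cumsum()])
A = B[-1]                                  # extra information known from the beginning

def insider_strategy(t_k, A):
    # Hypothetical rule adapted to the enlarged filtration:
    # invest more when the known terminal noise is high.
    return 1.0 + 0.5 * A

X = np.empty(n + 1)
X[0] = x0
for k in range(n):
    pi = insider_strategy(k * dt, A)                   # amount invested in the stock
    drift = r * X[k] + pi * (mu - r)                   # bond return plus excess return
    X[k + 1] = X[k] + drift * dt + pi * sigma * dB[k]  # wealth dynamics as in (3)

print("terminal wealth:", X[-1])
```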
Actually, equations (3) and (4) are equivalent (i.e., they have the same solutions). Also, in this case, we have that equation (3) is a controlled stochastic differential equation driven by the -semimartingale that involves controls that are -adapted. Hence, to take advantage of the extra information , we can figure out a -adapted optimal control with respect to a certain cost function, unlike the classical stochastic control problem, where the controls are -adapted processes. This is extended as follows.
Let and be a closed subset and an open subset of , respectively. For , we will denote and . Throughout this work, we will assume that the extra information is modeled by a random variable . Now, consider two measurable functions satisfying suitable conditions, which are given in Section 4, and the controlled stochastic differential equation for the filtration
(5) |
Here, has the same form as in equation (4). Consequently, under assumption (2), this last equation can also be written as
(6) |
That is, the solution to these two equations is an Itô process adapted to the filtration . Therefore, is only controlled as long as it remains in the open set . Thus, it is necessary to introduce the -stopping time
(7) |
Remember that is a stopping time since the filtration satisfies the usual conditions, as it is established in Protter [16]. Moreover, by definition, .
The main task in stochastic control consists in determining a control which is optimal with respect to a certain cost function. In this paper, the cost function has the form
(8) |
where the deterministic functions and are the initial and final cost functions, respectively. Furthermore, the expectation indicates that the solution to the controlled equation (5) has initial condition at time . The classical tool to solve this optimization problem is the so-called Hamilton-Jacobi-Bellman equation (HJB-equation), which is related to the value function (see (11) below) through the verification theorem. Consequently, in this paper we are interested in establishing an extension of the verification theorem that allows us to deal with controls adapted to a filtration bigger than the underlying one, for which the Brownian motion is a semimartingale. This is done in Section 3. Conversely, in Section 4, we show that if we can find an -adapted optimal control (with respect to a certain cost function), where the filtration is bigger than the one generated by , then is an -semimartingale. Finally, in Section 5, we provide two examples where we apply our extended verification theorem.
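Since the displayed formulas of this section are referenced only by number, we recall, in our own notation and in the spirit of Korn and Korn [8], the generic form of the value function and of the HJB-equation for a one-dimensional controlled diffusion; equations (8), (11) and (13) correspond to objects of this type.

```latex
% Generic objects behind (8), (11) and (13), written in our own notation for a
% one-dimensional controlled diffusion dX_s = b(s,X_s,u_s)\,ds + \sigma(s,X_s,u_s)\,dB_s
% (cf. Korn and Korn [8]); f is the running cost, g the final cost, \tau the exit time.
\[
  V(t,x) \;=\; \inf_{u}\; \mathbb{E}_{t,x}\!\left[\int_t^{\tau} f\bigl(s,X^{u}_s,u_s\bigr)\,ds
                + g\bigl(\tau,X^{u}_{\tau}\bigr)\right],
\]
\[
  \partial_t V(t,x)
  + \inf_{u\in K}\Bigl\{ b(t,x,u)\,\partial_x V(t,x)
  + \tfrac12\,\sigma^{2}(t,x,u)\,\partial_{xx} V(t,x) + f(t,x,u)\Bigr\} \;=\; 0,
\]
% together with the boundary condition V = g on the parabolic boundary of the domain.
```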
3 The statement of the verification theorem under enlargement of filtrations
The goal of this section is to state a verification theorem for the initial enlargement of filtrations. We first introduce the general assumptions and notation that we use throughout this section.
is a complete probability space on which a Brownian motion is defined, and is a random variable such that there exist a -Brownian motion and an -random field satisfying equality (2), for all , w.p.1. Here, is defined in Section 2 and is the filtration introduced in (1). In this paper, we do not necessarily have that is the -algebra . That is, we could have .
In this section, we deal with equation (5). That is, the controlled stochastic differential equation
Here, , the coefficients and the control satisfy the following hypothesis and definition, respectively. Remember that and were introduced in Section 2.
()
The coefficients are measurable and satisfy the following conditions:
i) , for all .
ii) There exists a constant such that, for all ,
Observe that equation (5) can only have a solution up to the first time it explodes and, consequently, it will only be controlled as long as it remains in the set . That is, equation (5) has a solution up to the time it either reaches the boundary of the set or .
Now, we are ready to define the admissible strategies.
Definition 3.1.
Note that if and , then equation (5) has a unique solution such that because of the definition of admissible control and Hypothesis ().ii), which implies that the coefficients and are Lipschitz on any interval contained in , uniformly on .
The main task in stochastic control consists in determining a control which is optimal with respect to a certain cost functional. For our purposes, the cost functional has the form as in equality (8) where the deterministic functions and verify
(10) |
for some . Remember that the notation corresponds to the expectation of functionals of the solution to equation (5) with an initial condition at time .
Before stating the control problem for this work, we need to introduce some extra definitions and conventions.
The control problem that we consider here is to compute , which minimizes the cost functional (8). That is, a control in satisfying
(11) |
Note that the function describes the evolution of the minimal costs as a function of . This function is called the value function.
In analogy with the adapted case, where , we use the convention
(12)
for and . We observe that we need to deal with the extra term , since equation (5) is equivalent to equation (6) due to condition (2). Remember that equation (6) is a controlled stochastic differential equation driven by the -Brownian motion . So, in the remainder of this section, we assume that defined in (2) belongs to , for some .
Now we are in a position to state the main result of this section, where we use the -stopping time given in (7). Note that in the case that .
Theorem 3.2.
Let Hypothesis () be satisfied and let be a -adapted random field and a set of probability such that, for all ,
for some random variables and , and . In addition, assume that is a solution of the Hamilton-Jacobi-Bellman equation
(13) |
where is the solution of either equation (5) or equation (6). Then, if
(14) |
with , where is the exponent in (10), we have that
a) , for all and .
b) If, for all , there exists a control such that
(15) for all , where is the controlled process with corresponding to via (5), then
In particular, is an optimal control and coincides with the value function.
Proof.
Let and . Also, let be the -stopping time introduced in (7).
We first assume that the open set is bounded. Then, using that is a solution of the HJB-equation (13), we have that, for and ,
(16) |
On the other hand, consider a -stopping time such that . Hence, by (12), applying Itô's formula (see [4, Theorem 8.1, p. 184]) to and taking expectations, we obtain
(17) |
Now, we claim that the expectation of the stochastic integral in equality (17) is equal to zero. Indeed, since satisfies Hypothesis (), and using the assumption on , we can write
Therefore, the fact that is bounded yields that there is a constant such that
Thus, our claim follows from and condition (9). That is,
which implies that
because is a -Brownian motion. Then, equality (17) becomes the inequality
(18)
where to obtain the last inequality we have used (16). In particular, with , we obtain the assertion in a).
Now consider a general open set and see that (18) is also satisfied in this case. To do so, choose such that . For such that , set
with
Let . Then, (18) implies
for all and . Consequently, the dominated convergence theorem, , (9), (10), (14), and the facts that is continuous in and lead to
To finish the proof, we now assume that, for all and , the following strict inequality is satisfied
which gives
(19) |
Indeed, from (17), where we change by , we obtain
Thus, inequality (16) and the dominated and monotone convergence theorems allow us to show that (19) holds. In particular, inequality (19) is true for the control that satisfies (15), namely
which yields a contradiction since is a solution of equation (13), thus
and the proof of case b) is complete. ∎
4 A converse-type result for the verification theorem
The purpose of this section is to give a converse-type result for the verification theorem proved in Section 3. Towards this end, the main tool in this section is the forward integral with respect to the Brownian motion . Remember that stands for the filtration generated by augmented with the -null sets.
Definition 4.1 (Forward integral).
Let be a -measurable process with integrable trajectories. We say that is forward integrable with respect to ( for short) if
converges in probability as . We denote this limit by .
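For the reader's convenience, we recall the defining limit of the Russo-Vallois forward integral [17] in our own notation, since the displayed expression of Definition 4.1 is referenced only implicitly; the limit is taken in probability.

```latex
% Forward integral of Russo and Vallois [17], written in our notation:
\[
  \int_0^t \varphi_s \, d^{-}B_s
  \;:=\; \lim_{\varepsilon \downarrow 0}\;
         \frac{1}{\varepsilon}\int_0^t \varphi_s\,\bigl(B_{s+\varepsilon}-B_s\bigr)\,ds,
  \qquad \text{limit in probability.}
\]
```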
Remark 4.2.
The forward integral has the following properties:
i) Assume that is a bounded -measurable and -adapted process. Then, Russo and Vallois [17, Proposition 1.1] have shown that and
where the stochastic integral in the right-hand side is in the Itô sense.
ii) Assume that is a -semimartingale, where is a bigger filtration than . Let be a -adapted process that is integrable with respect to the -semimartingale , then and
where the right-hand side is the Itô integral with respect to the -semimartingale . This is also proven in Proposition 1.1 of [17].
iii) Let and a random variable. Then, it is easy to see that and
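The following Python sketch is a hedged numerical check of Remark 4.2.iii) for the simplest anticipating integrand, namely the time-constant random variable B_T on [0, T]: by iii) the forward integral equals B_T(B_T - B_0) = B_T^2, whereas the Skorohod (divergence) integral of the same integrand would be B_T^2 - T. The discretization of the defining limit is our own.

```python
import numpy as np

# Hedged numerical check of Remark 4.2.iii): for the anticipating integrand phi_s = B_T
# on [0, T], the forward integral equals B_T * (B_T - B_0) = B_T**2, whereas the
# Skorohod (divergence) integral of the same integrand is B_T**2 - T.
rng = np.random.default_rng(2)
T, n = 1.0, 2000
dt = T / n
eps_steps = 10                 # epsilon = eps_steps * dt in the Russo-Vallois limit
eps = eps_steps * dt

dB = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate([[0.0], dB.cumsum()])
B_T = B[-1]

# (1/eps) * int_0^T B_T * (B_{s+eps} - B_s) ds, with B frozen at B_T after time T.
B_ext = np.concatenate([B, np.full(eps_steps, B_T)])
increments = B_ext[eps_steps:eps_steps + n] - B[:n]
forward_approx = B_T * np.sum(increments) * dt / eps

print("forward integral approx  :", forward_approx)
print("B_T**2                   :", B_T**2)
print("Skorohod value B_T**2 - T:", B_T**2 - T)
```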
Using Definition 4.1, the relevant stochastic equation for the wealth process of an investor is the following controlled stochastic process (see equation (3))
(20) |
Here, the coefficients satisfy the following condition:
Hypothesis 4.3.
are -measurable and -adapted processes such that
1. is a bounded process such that with probability 1.
2. is a bounded process.
Throughout this section, we assume that we have a filtration bigger than . The family of admissible controls is related to this filtration. That is, in this section, the family of admissible controls is the set of -progressively measurable processes , for which (20) has a unique solution with , for all . Note that, in particular, we have . We also observe that if the filtration agrees with the filtration introduced in (1) and is the -semimartingale given in (2), then equation (20) is nothing but equation (4), due to Remark 4.2.ii). This is the reason why the forward integral was first used in [13] to solve problems related to financial markets.
Remember that we are interested in the optimal control problem defined on (11), where we consider the cost functional given by
(21) |
In other words, we take the classic quadratic running cost function and the final cost is , which can be interpreted as the present value of quantity .
The objective is to prove that if there exists an admissible optimal control for the problem given in (11) via the cost functional (21), then we can conclude that the Brownian motion is a semimartingale with respect to the bigger filtration . To achieve this result, we will use the following hypothesis, which is inspired by [3, Theorem 3.5].
Hypothesis 4.4.
1. For all and , the process belongs to , where is a bounded -measurable random variable and is such that .
2. There is a constant such that with probability 1.
Concerning point 1 of Hypothesis 4.4, we observe the following. Consider the -adapted random field
(22)
for and . The classical Itô formula implies that is a solution to the -adapted stochastic differential equation
Therefore, Remarks 4.2.i) and 4.2.iii) yield that, for a -random variable , the -adapted process is a solution to equation (20) with . Moreover, proceeding as in León et al. [11], we can show that, in this case, equation (20) has a unique solution of the form , where is an -adapted random field satisfying suitable conditions. We observe that, in Hypothesis 4.4.1, we suppose that the process is an admissible control because we do not know the form of all the solutions to equation (20). In other words, we are assuming the uniqueness of the solution to (20) for controls of the form .
Now, we can prove the main result of this section.
Theorem 4.5.
Proof.
In order to simplify the notation, we use the convention
Consider the functional defined as follows
Let be an admissible control as in Hypothesis 4.4.1 and define for all . Then the directional derivative of is
(23) |
From the analysis of the random field (22), we know
(24) |
Replacing (24) into (23), we get
(25) |
By hypothesis, the functional reaches its minimum at . Therefore, has a minimum in . Thus, from equation (25), together with Remarks 4.2.i) and 4.2.iii), we obtain
Since this equality holds for every -measurable random variable , we have established
(26) |
Corollary 4.6.
Proof.
By the proof of Theorem 4.5, we have that the -Brownian motion is also a -semimartingale and that (25) holds when we write instead of . Moreover, by Remark 4.2.ii), we get
(27) |
for and of the form
where is a bounded and -measurable random variable and . Let be the set of such processes . Finally, using that is dense in the set of all the square-integrable and -progressively measurable processes, it is not difficult to see that (27) is also satisfied when belongs to and, therefore, the proof is complete. ∎
5 Application of the verification theorem under enlargement of filtrations
The aim of this section is to study two examples through the extended verification theorem analyzed in Section 3 (i.e., Theorem 3.2).
Example 5.1.
Let be a positive constant and an -adapted bounded process. Consider the controlled stochastic process given by
(28) |
However, as we have already pointed out, if we have additional information represented by a random variable satisfying the conditions of Section 2 (i.e., the filtration in (1) is such that the -adapted process in (2) is a -Brownian motion), then equation (28) is equivalent to the stochastic differential equation (driven by the -Brownian motion )
(29) |
Our purpose is to optimize the wealth at the terminal time while reducing the cost of the control . That is, we want to solve the problem
(30) |
So, the corresponding HJB problem associated with (29) and (30) is to find a subset such that and, on , compute a solution to the HJB equation
(31) |
Note that the argument of the infimum in equation (31) is a polynomial of degree 2 in the variable . Thus, using the second derivative criterion, we obtain the optimal control
(32) |
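The pointwise minimization behind (32) is the elementary one for a quadratic in the control. Since the explicit coefficients of (31) are referenced only by display number, we record the generic computation in our own notation, with a, b, c standing for the corresponding expressions in (31).

```latex
% Generic step behind (32): if the argument of the infimum in (31) reads
% a(t,x)\,u^{2} + b(t,x)\,u + c(t,x) with a(t,x) > 0 (second derivative criterion), then
\[
  \inf_{u}\bigl\{a\,u^{2} + b\,u + c\bigr\} \;=\; c - \frac{b^{2}}{4a},
  \qquad\text{attained at }\; u^{*}(t,x) = -\,\frac{b(t,x)}{2\,a(t,x)},
\]
% and substituting u^{*} back into (31) yields the reduced equation (33).
```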
Since belongs to the argument of the infimum, it can be substituted into (31) to get the equation
(33) |
Now, we propose the function , where is a -measurable function and a -adapted process, as a candidate for the solution to equation (33). We then compute the partial derivatives of and substitute them into (33) to get
In consequence,
and
Note that the last equation imposes that . Under the conditions and , the solutions for and are
and
where the constant is given by
Hence, the solution of the HJB-equation (31) is
Therefore, equality (32) implies that the optimal control for the problem (30) is determined by
(34) |
while the value function is
due to Theorem 3.2.
Finally, in order for given in (34) to be an admissible control, by Definition 3.1 we need to verify that it belongs to , for all . An example of a random variable such that defined in (2) is in , for all , is
Here , and , with probability 1. We can use Mansuy and Yor [12, Section 1.3], Navarro [13, Section 3] or León et al. [11] to see that
In consequence equality (34) provides an admissible control.
Example 5.2.
Here, we consider the controlled stochastic differential equation
Note that in this case, is the family of all the -progressively measurable processes such that
Remember that the filtration is defined in (1).
The cost function is given by (30) again, that is,
We observe that in the classical theory of stochastic control (i.e., there is no extra information), an optimal control is
Now, as in Example 5.1, we work with -progressively measurable controls. From Theorem 3.2, we must study the HJB-equation
Thus, proceeding as in Example 5.1, we propose the optimal control
(35) |
Substituting this control into the previous HJB-equation, we have to solve the equation
In order to continue with our analysis, we proceed as in Example 5.1 again. That is, we propose a function of the form
to show that
is the function that we are looking for if , which, together with (35), yields
As we have already pointed out, in the case that (i.e., there is no extra information), we have
Now, it is easy to apply Theorem 3.2 to figure out the value function.
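To close this section, the following Python sketch gives a hedged Monte Carlo illustration of the verification principle behind Theorem 3.2 on a toy linear-quadratic problem with a known explicit solution; the dynamics, cost functional and parameter values are our own choices (and the setting is the fully adapted one), not those of Examples 5.1 and 5.2.

```python
import numpy as np

# Hedged Monte Carlo illustration of the verification principle on a toy LQ problem
# (our own choice, not the paper's Example 5.2):
#   dX_t = u_t dt + sigma dB_t,  minimize  E[ int_0^T u_t^2 dt + X_T^2 ].
# The HJB solution is V(t, x) = x^2 / (1 + T - t) + sigma^2 * log(1 + T - t),
# with optimal feedback u*(t, x) = -x / (1 + T - t).
rng = np.random.default_rng(3)
T, n, n_paths, sigma, x0 = 1.0, 200, 20000, 0.5, 1.0
dt = T / n

def run_cost(feedback):
    X = np.full(n_paths, x0)
    cost = np.zeros(n_paths)
    for k in range(n):
        u = feedback(k * dt, X)
        cost += u**2 * dt
        X = X + u * dt + sigma * rng.normal(0.0, np.sqrt(dt), size=n_paths)
    return np.mean(cost + X**2)

optimal = lambda t, x: -x / (1.0 + T - t)
suboptimal = lambda t, x: -0.5 * x          # an arbitrary perturbed feedback
do_nothing = lambda t, x: np.zeros_like(x)

V0 = x0**2 / (1.0 + T) + sigma**2 * np.log(1.0 + T)
print("value function V(0, x0)   :", V0)
print("cost of optimal feedback  :", run_cost(optimal))      # ~ V0
print("cost of perturbed feedback:", run_cost(suboptimal))   # >= V0
print("cost of zero control      :", run_cost(do_nothing))   # >= V0
```

In a run with these parameters, the candidate optimal feedback attains a Monte Carlo cost close to V(0, x0), while the perturbed controls yield strictly larger costs, which is the content of parts a) and b) of the verification theorem in this simplified setting.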
Acknowledgment
The work of L. Peralta is supported by UNAM-DGAPA-PAPIIT grants IA100324, IN102822 (Mexico).
References
- [1] E. Alòs, J. A. León, and D. Nualart. Stochastic Stratonovich calculus fBm for fractional Brownian motion with Hurst parameter less than . Taiwanese J. Math., 5(3):609–632, 2001.
- [2] E. Alòs, J. A. León, and J. Vives. On the short-time behavior of the implied volatility for jump-diffusion models with stochastic volatility. Finance Stoch., 11(4):571–589, 2007.
- [3] F. Biagini and B. Øksendal. A general stochastic calculus approach to insider trading. Appl. Math. Optim., 52(2):167–181, 2005.
- [4] R. M. Dudley, H. Kunita, and F. Ledrappier. École d'Été de Probabilités de Saint-Flour XII, 1982, volume 1097. Springer, 2006.
- [5] J. Garzón, J. A. León, and S. Torres. Fractional stochastic differential equation with discontinuous diffusion. Stoch. Anal. Appl., 35(6):1113–1123, 2017.
- [6] K. Itô. Extension of stochastic integrals. In Proceedings of the International Symposium on Stochastic Differential Equations (Res. Inst. Math. Sci., Kyoto Univ., Kyoto, 1976), pages 95–109. Wiley, New York-Chichester-Brisbane, 1978.
- [7] I. Karatzas. Optimization problems in the theory of continuous trading. SIAM J. Control Optim., 27(6):1221–1259, 1989.
- [8] R. Korn and E. Korn. Option pricing and portfolio optimization, volume 31 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, 2001. Modern methods of financial mathematics, Translated from the 1999 German original by the authors.
- [9] J. A. León. Stratonovich type integration with respect to fractional Brownian motion with Hurst parameter less than . Bernoulli, 26(3):2436–2462, 2020.
- [10] J. A. León, D. Márquez-Carreras, and J. Vives. Stability of some anticipating semilinear stochastic differential equations of Skorohod type. Preprint, 2023.
- [11] J. A. León, R. Navarro, and D. Nualart. An anticipating calculus approach to the utility maximization of an insider. volume 13, pages 171–185. 2003. Conference on Applications of Malliavin Calculus in Finance (Rocquencourt, 2001).
- [12] R. Mansuy and M. Yor. Random times and enlargements of filtrations in a Brownian setting, volume 1873 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 2006.
- [13] R. A. Navarro. Dos modelos estocásticos para transacciones con información privilegiada en mercados financieros y dependencia con memoria larga en tiempos de ocupación. Ph.D. Thesis. CINVESTAV, 2004.
- [14] D. Nualart. The Malliavin calculus and related topics. Probability and its Applications (New York). Springer-Verlag, Berlin, second edition, 2006.
- [15] I. Pikovsky and I. Karatzas. Anticipative portfolio optimization. Adv. in Appl. Probab., 28(4):1095–1122, 1996.
- [16] Ph. E. Protter. Stochastic integration and differential equations, volume 21 of Stochastic Modelling and Applied Probability. Springer-Verlag, Berlin, 2005. Second edition. Version 2.1, Corrected third printing.
- [17] F. Russo and P. Vallois. Forward, backward and symmetric stochastic integration. Probab. Theory Related Fields, 97(3):403–421, 1993.