Lifting of Volterra processes:
optimal control in UMD Banach spaces
Abstract.
We study a stochastic control problem for a Volterra-type controlled forward equation with past dependence obtained via convolution with a deterministic kernel. To apply dynamic programming, we lift the problem to infinite dimensions and formulate a UMD Banach-valued Markovian problem, which is shown to be equivalent to the original finite-dimensional non-Markovian one. We characterize the optimal control for the infinite-dimensional problem and show that this also characterizes the optimal control for the finite-dimensional problem.
Keywords: Backward stochastic integral equation; Dynamic programming principle;
Hamilton-Jacobi-Bellman; Optimal control; UMD Banach space; Markovian lift;
MSC 2020: 60H10; 60H20; 93E20; 35R15; 49L20; 91B70;
1. Introduction
We intend to minimize a performance functional of the form
(1.1)
where , and is given by the controlled Volterra-type dynamics of the process :
(1.2)
Here , , , , and the convolution kernel are all measurable mappings, on which additional hypotheses will be stated later on, and is an admissible control. Also, is a real-valued Brownian motion on a complete filtered probability space .
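To fix ideas, a problem of this type has, in a typical formulation (a sketch: the exact displays (1.1)-(1.2) may differ in details such as the initial condition or the time arguments of the coefficients), the schematic form
\[
J(u)=\mathbb{E}\Big[\int_0^T l(t,X_t,u_t)\,dt+h(X_T)\Big],\qquad
X_t=x_0+\int_0^t K(t-s)\,b(s,X_s,u_s)\,ds+\int_0^t K(t-s)\,\sigma(s,X_s,u_s)\,dW_s,
\]
where the past dependence enters through the convolution with the deterministic kernel $K$.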
Stochastic control problems such as (1.1)-(1.2) appear, e.g., when studying optimal advertising strategies (see e.g. [15] for the case of Volterra dynamics in (1.2) and [17, 16, 20] for the case of delay). Other applications are found in electrodynamics [24] and in epidemiology [25]. When dealing with such problems, one cannot directly apply a dynamic programming principle (DPP) in view of the non-Markovianity of the framework. While in some particular cases it is still possible to derive the DPP also for Volterra forward dynamics (see [1, 18, 5]), most authors approached the general problem by means of a maximum principle (see, e.g., [12, 29, 30, 3, 2] and references therein). Even though the maximum principle approach might seem practical, one usually has to impose regularity conditions on both the drift and the volatility which are not always easy to satisfy. In this paper, thanks to the developments in the lift theory for Volterra processes (see [1, 9, 8, 7, 15, 5]), we aim to move the stochastic control problem (1.1)-(1.2) to an infinite-dimensional UMD Banach setting and solve the newly formulated problem by means of the DPP.
The main purpose of this lifting approach is to recover the Markov property for the forward process (1.2), which, in turn, allows us to derive a DPP in terms of the Hamilton-Jacobi-Bellman (HJB) equations. In fact, one can show that solving the lifted problem is equivalent to solving the original one, with the fundamental difference that, by moving to an infinite-dimensional setting, we work in a Markovian framework. Focusing on Markovian lifts, we assume that the kernel can be represented as , for a uniformly continuous semigroup acting on a Banach space , , with the predual of and pairing . Examples of kernels that satisfy this condition can be found both in [9, 8, 15] and in the last section of this paper.
Our goal is to find such that, for all ,
(1.3)
with as in (1.1) and for belonging to some admissible control set defined as
where is a closed convex subset of and the information flow is associated to the Brownian motion in (1.2). Our approach consists in formulating a new infinite-dimensional Banach-valued optimization problem that can be shown to be equivalent to (1.1)-(1.2) (in the sense that the optimal control and the optimal value , are the same as in the original problem) and then in solving this infinite-dimensional optimization problem, which is Markovian. The solution is achieved by exploiting Malliavin calculus for unconditional martingale difference (UMD) Banach spaces.
The Markovian lift to the infinite-dimensional setting that we present here was originally introduced in [9], then developed in [8] for the multi-dimensional case, and in [7] for Lévy drivers. Our work can be seen as a generalization of the case presented in [1], in which a kernel that can be expressed as the Laplace transform of a measure is considered, and where the performance functional (1.1) is of linear-quadratic type. Our work differs from [1] as we are able to consider a broader class of kernels and performance functionals thanks to the nature of the lift we apply.
The present work introduces an element of novelty also with respect to infinite-dimensional stochastic control. Indeed, we consider a setting which differs from both the one in [14] and the one in [22]. In [14] the authors consider a Hilbert-valued forward controlled process, whereas in [22] the forward process takes values in a general Banach space, but with a volatility term not depending on . Here we are able to take general volatility dynamics for the forward process, and this requires working in Banach spaces of the UMD type so as to be able to apply Malliavin calculus techniques.
In the context of optimal control for lifted processes, we also mention [5]. There, the authors follow an approach close to the one presented here. However, we remark that, in our framework, we are able to consider a wider class of kernels thanks to the nature of our lift, which allows us to work in UMD Banach spaces instead of Hilbert spaces. On the other hand, in [5] the authors consider a forward equation driven by a Lévy process instead of a Brownian motion (as in (1.2)). While the lift theory for Lévy-driven forward processes is available (see [7]), the optimal control of an infinite-dimensional Lévy-driven forward equation in the present setting is a topic for future research.
This paper is structured as follows: in Section 2 we present some preliminary results both on the Gâteaux differentiability in general Banach spaces and on the lift for Volterra processes. We recall the essentials on UMD Banach spaces and some results of Malliavin calculus in this framework. In Section 3 we give an existence and continuity result for the forward equation, and in Section 4 we introduce the backward equation and the Hamiltonian function associated with the lifted optimization problem. Here we present a solution method via HJB equations. To conclude, in Section 5 we present a problem of optimal consumption where we obtain a characterization of the optimal control via DPP.
2. Some preliminary results
We recall some useful notions and results that are used throughout the paper. Then we show how the Markovian lift is performed in the present context of stochastic control (1.1)-(1.2). Lastly, we introduce UMD Banach spaces and state some crucial results for Malliavin calculus in this setting. We refer to [14] for the results on Gâteaux derivatives and Banach spaces, to [9, 8, 7, 15] for the ones concerning Markovian lifts, and to [19, 23, 26, 21] for the results concerning UMD Banach spaces.
2.1. The class of Gâteaux differentiable functions
For a mapping , with two Banach spaces, the directional derivative at in the direction is defined as
whenever the limit exists in the topology of . The mapping is said to be Gâteaux differentiable at the point if it has directional derivative at in every direction and there exists an element in such that for every . We call the Gâteaux derivative at .
Definition 2.1.
A mapping belongs to if it is continuous, Gâteaux differentiable for all and is strongly continuous, i.e. the map is continuous for every .
Remark 2.2.
Let be three Banach spaces and . If , then and .
We also introduce the partial directional derivative for a mapping , with , , Banach spaces as
with , and the limit taken in the topology of . We say that is partially Gâteaux differentiable with respect to at if there exists such that for all .
Definition 2.3.
We say that belongs to the class if it is continuous, Gâteaux differentiable with respect to , for all and is strongly continuous.
For depending on additional arguments, the definition above can be easily generalized.
Lemma 2.4.
Given three Banach spaces, a continuous map belongs to provided the following conditions hold:
-
(1)
The partial directional derivatives exist at every point and in every direction .
-
(2)
For every the mapping is continuous from to .
-
(3)
For every , the mapping is continuous.
We are going to use the following parameter-depending contraction principle to study the regular dependence of the solutions of stochastic differential equations on their initial data.
Proposition 2.5.
Let be Banach spaces and let be a continuous mapping satisfying
for some and every , , . Let denote the unique fixed point of the mapping . Then is continuous. If, in addition , then and
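Schematically, the statement reads as follows (a sketch of the standard principle, cf. [14]; the symbols $\Phi$, $\alpha$, $\phi$ for the map, the contraction constant and the fixed point are our own): if $\Phi\colon X\times\Lambda\to X$ satisfies
\[
\|\Phi(x_1,\lambda)-\Phi(x_2,\lambda)\|_X\le\alpha\,\|x_1-x_2\|_X,\qquad\alpha\in[0,1),
\]
for all $x_1,x_2\in X$ and $\lambda\in\Lambda$, then the fixed-point map $\lambda\mapsto\phi(\lambda)$ is continuous, and if moreover $\Phi\in\mathcal{G}^{1,1}(X\times\Lambda;X)$, then $\phi\in\mathcal{G}^{1}(\Lambda;X)$ with
\[
\nabla\phi(\lambda)=\big(I-\nabla_x\Phi(\phi(\lambda),\lambda)\big)^{-1}\nabla_\lambda\Phi(\phi(\lambda),\lambda),
\]
the inverse existing since $\|\nabla_x\Phi\|\le\alpha<1$.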
2.2. Lift approach to optimal control
In the sequel, we exploit a lift to reformulate the optimization problem (1.1)-(1.2) in an infinite-dimensional setting. Our first step is to rewrite in (1.2) in terms of a process with values in a Banach space, using the lift procedure presented in [9]. Notice that we do not actually work in the affine framework of [9]; the approach presented here is a particular case of the one introduced in [7].
Definition 2.6.
Let be a Banach space with dual and denote with the pairing between and . We say that a kernel is liftable if there exist , and a uniformly continuous semigroup , with generator , acting on , such that
-
•
-
•
for all
-
•
for all .
For notational simplicity we write for when no confusion arises.
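A prototypical example, sketched under the assumption that the kernel is the Laplace transform of a measure: if $K(t)=\int_{\mathbb{R}_+}e^{-xt}\,\mu(dx)$, one may take $\nu=\mu$, $g\equiv 1$ and the multiplication semigroup
\[
(S_tf)(x)=e^{-xt}f(x),\qquad(Af)(x)=-x\,f(x),
\]
so that $\langle S_tg,\nu\rangle=\int_{\mathbb{R}_+}e^{-xt}\,\mu(dx)=K(t)$. Uniform continuity of the semigroup holds, e.g., when $\mu$ has bounded support, so that the generator $A$ is bounded; for the general case compare Remark 2.10.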
From now on, we make the following assumption:
Hypothesis 2.7.
The kernel in (1.2) is liftable.
We rewrite as
where
(2.1)
Defining as an element in such that , we can now rewrite (1.2) as follows:
(2.2)
where . One can then check that follows the dynamics:
(2.3)
In fact, we have that
and, rearranging the terms, we obtain (2.3).
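A sketch of this computation, under the identification (2.2) and writing $b_s$, $\sigma_s$ for the controlled coefficients evaluated along the trajectory: since the semigroup is uniformly continuous, its generator $A$ is bounded and $S_{t-s}=I+\int_s^tAS_{r-s}\,dr$, so that, exchanging the order of integration (stochastic Fubini theorem, see [27]),
\[
Z_t=\int_0^tS_{t-s}\,g\,b_s\,ds+\int_0^tS_{t-s}\,g\,\sigma_s\,dW_s
=\int_0^tAZ_r\,dr+\int_0^tg\,b_s\,ds+\int_0^tg\,\sigma_s\,dW_s,
\]
which is the integral form of (2.3).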
Remark 2.8.
By defining , and exploiting (2.2), we actually get that the function is given by the expression
Set , and plug (2.1) into (2.3); then we can rewrite (2.3) in differential notation as
(2.4)
with . We also rewrite (2.3) as
(2.5)
where
We are going to discuss existence and uniqueness results for equation (2.4) in Section 3.1.
Remark 2.9.
We point out that the term
(2.6)
in (2.5) can be regarded in two different ways. On the one hand, it can be seen as the element of :
where the integration of is done on and then lifted to by multiplying it by . On the other hand, by writing (2.6) as
we have that can be considered as a cylindrical Wiener process on , which is a Hilbert space with the scalar product . In this case we also see that .
In Section 3.1 we are going to provide sufficient conditions that guarantee the existence of a solution of (2.3)-(2.4). Due to the nature of the lift and the identification (2.2), this will, in turn, provide sufficient conditions also for the existence of a solution to (1.2).
Remark 2.10.
From [9, 8] we see that we could perform the lift also under weaker hypotheses, by taking subspaces with their respective duals such that:
-
•
and are Banach spaces and embeds continuously into .
-
•
The semigroup with generator acts in a strongly continuous way on and with respect to the respective norm topologies.
-
•
The map is weak-* continuous on and on for every .
-
•
The pre-adjoint operator of , generates a strongly continuous semigroup on with respect to the respective norm topology (but not necessarily on ).
In this case every kernel of the form with and is liftable. While this setting would allow us to work with a wider class of kernels, we would not be able to formulate the HJB equations. This is due to the fact that, when considering a kernel with , some of the inner products in the definition of the Hamilton-Jacobi-Bellman equation (3.16) would not be well defined. Since the goal of this work is a control problem, we restrict ourselves to the case , as originally stated.
In a similar fashion to what we did for (1.2), recalling that , we rewrite the performance functional (1.1) so as to make its dependence on the lifted process explicit:
(2.7)
where the functions , are lifted to the functions
where is the Banach space associated to the liftable kernel, see Definition 2.6. The lifted maps and are
The stochastic optimal control problem (1.1)-(1.3) is then lifted to
(2.8)
where the process takes values in the Banach space , and where the dynamics of the controlled process are given by (2.3)-(2.4). Notice that, while the performance functional has not changed, we write instead of in order to highlight the dependence on instead of , as underneath there is a passage from finite to infinite dimensions. Indeed, this change of notation embodies a crucial change of framework from a finite- to an infinite-dimensional setting, allowing us to move from functions , , , and taking values from to new functions , , , and that now take values from . This lift allows us to consider a new optimization problem, written on a space which is not the original one. Nonetheless, we have that for , . Also, since is fixed and depends only on the kernel representation, finding the pair that minimizes (2.5)-(2.8) is equivalent to finding the pair that solves (1.1)-(1.2).
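Schematically, with the pairing of Definition 2.6 and writing $Z$ for the lifted state (our notation for this sketch), the lifted data read
\[
\hat l(t,z,u):=l\big(t,\langle z,\nu\rangle,u\big),\quad
\hat h(z):=h\big(\langle z,\nu\rangle\big),\quad
\hat b(t,z,u):=g\,b\big(t,\langle z,\nu\rangle,u\big),\quad
\hat\sigma(t,z,u):=g\,\sigma\big(t,\langle z,\nu\rangle,u\big),
\]
so that the lifted performance functional coincides with the original one along the identification of the two state processes.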
2.3. UMD Banach spaces
In the sequel we use techniques of Malliavin calculus on the space . For this, we assume:
Hypothesis 2.11.
The space is an unconditional martingale difference (UMD) Banach space.
For convenience we report here below the essentials on UMD Banach spaces.
Definition 2.12.
Let be a Banach-space-valued martingale; the sequence is called the martingale difference sequence associated with . A Banach space is said to be a space if there exists a constant such that for all -valued -martingale difference sequences we have
where for all and . Thanks to [26] we also know that, if a Banach space is for some , then it is a Banach space for all , and we simply call it a UMD Banach space.
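Explicitly, in standard notation (see [19, 26]), the defining inequality reads: for some constant $\beta\ge1$,
\[
\mathbb{E}\,\Big\|\sum_{n=1}^N\varepsilon_nd_n\Big\|^p\le\beta^p\,\mathbb{E}\,\Big\|\sum_{n=1}^Nd_n\Big\|^p
\]
for every $N\in\mathbb{N}$, every choice of signs $\varepsilon_n\in\{-1,+1\}$ and every $L^p$-martingale difference sequence $(d_n)_{n\ge1}$.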
In the context of stochastic analysis in Banach spaces, martingale difference sequences provide a substitute for orthogonal sequences. In what follows, we will see that this hypothesis is not very restrictive, as UMD Banach spaces include all Hilbert spaces, spaces for , reflexive Sobolev spaces and many others, thus allowing us to consider a wide class of liftable kernels. In our framework, the process takes values in a UMD Banach space whenever we consider, for example, a shift operator, a quasi-exponential kernel, or a kernel that can be expressed as the Laplace transform of a measure with density in , .
Assuming that is UMD allows us to define the Malliavin derivative operator on . From [23, Proposition 2.5], we know that is a closed operator and we denote by the closure of the domain.
For the results on UMD Banach spaces exploited in what follows, we refer to [28] for the BDG inequality, to [27] for the Fubini theorem, and to [23] for general Malliavin calculus results. In this framework we will also use a Clark-Ocone formula for UMD Banach spaces (see [21]) and the following chain rule linking the Malliavin derivative and the Gâteaux derivative (see [23]).
Proposition 2.13.
Let be a UMD Banach space and let . Suppose that . If , then with
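In symbols, the conclusion takes the familiar chain-rule form (a sketch; see [23] for the precise statement and the integrability conditions):
\[
D\big(\varphi(F)\big)=\nabla\varphi(F)\,DF,
\]
where $D$ denotes the Malliavin derivative and $\nabla\varphi$ the Gâteaux derivative of $\varphi$.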
3. The optimal control problem
We are now interested in solving the lifted optimal control problem (2.8), where the process follows the controlled dynamics given by
(3.1)
For our results to hold, we add some hypotheses on , which directly translate into hypotheses on .
Hypothesis 3.1.
is measurable and for a suitable positive constant and every , , .
In order to find the optimal value , we associate the following partially coupled system of forward-backward equations
(3.2)
to (3.1). In the above, is the Hamiltonian function, defined as
(3.3)
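In frameworks of this kind (cf. [14, 22]) the Hamiltonian typically has the schematic form
\[
\psi(t,z,p)=\inf_{u\in U}\big\{\hat l(t,z,u)+p\,r(t,z,u)\big\},
\]
where $r$ encodes how the control enters the drift (our notation for this sketch); the precise form used here is the one displayed in (3.3).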
Notice that the control only appears in the Hamiltonian function. The solution of the backward equation is denoted by . We often write when we want to emphasize the dependence of and on the parameter at time . Analogously, when we want to emphasize the dependence of on the initial value at time , we write .
Define now
(3.4)
with the solution to the backward SDE in (3.2). In the sequel we show that in (2.8) is such that
(3.5)
and that the optimal control can be retrieved explicitly via a verification theorem once is known. In order to achieve (3.5) we proceed as follows. First we study the forward equation in Section 3.1; then we study the backward equation in Section 3.2, where we prove the crucial identification:
(3.6)
(see Proposition 3.11). In Section 3.3 we provide an approach to find through HJB equations and at last, in Section 3.4, we prove (3.5) and we provide a characterization of the optimal control .
Notice that, for (3.6) to hold, the backward process has to be differentiable with respect to . This can be obtained by showing that is differentiable with respect to the initial condition , and by assuming the following:
Hypothesis 3.2.
Let us assume that
-
1)
There exists such that
for every , and .
-
2)
For all , .
-
3)
For every we have .
-
4)
There exist and such that
for every , and .
-
5)
and there exists such that, for every
Further details on the continuous dependence on of the forward equation can be found in Section 3.1, while we refer to Section 3.2 for details on the differentiability of with respect to .
Remark 3.3.
3.1. On the lifted forward equation
In this section we study the lifted forward equation in (3.2). In particular, we prove that it admits a unique Markovian solution and we study its continuous dependence on the initial parameter . We thus take
(3.7)
where we recall that is the generator of a uniformly continuous semigroup on the Banach space . We assume the following:
Hypothesis 3.4.
Suppose that
-
i)
is continuous and, for all , there exists a constant such that
the map is measurable. Moreover, for all and ,
for some constant .
-
ii)
is such that, for every the map is measurable, for every , and , and
for some constant .
Moreover, for , , there exists a constant such that
-
iii)
For every , , .
Our first result is the following:
Proposition 3.5.
Assume Hypothesis 3.4 holds. For every , we have that:
-
i)
The map is in .
-
ii)
For every the partial directional derivative process , solves -a.s. the equation
-
iii)
for some constant .
We also find that
-
iv)
(3.7) admits a unique adapted solution .
Moreover, we have the following estimate
(3.8)
where is a constant depending only on , where and .
Proof.
The proof is inspired by [10, Theorem 7.4] and [14, Proposition 3.2]. The main difference with respect to those works lies in the spaces at play. Consider the map
defined as
We want to show that is a contraction with respect to the first variable. We notice that
and
where we used the linear growth conditions on and and the Burkholder-Davis-Gundy inequality for Banach spaces (see [28]). We have thus shown that is a well-defined mapping. Now, take and to be arbitrary processes in ; then
With computations similar to the ones above, exploiting the Lipschitz condition on and (see Hypothesis 3.4, i) and ii)), one finds that
and
Summing up, we have that
This means that is a contraction only when satisfies
(3.9)
Condition (3.9) on can be easily removed by considering the equation on the intervals , ,…, where satisfies (3.9). Thanks to the fixed point theorem we find that (2.5) admits a unique solution. We conclude that (3.8) holds by applying Gronwall’s Lemma with arguments in line with [10, Theorem 7.4 (iii)]. Notice now that, since is a contraction uniformly with respect to , , by Proposition 2.5 we obtain if
This is verified by a slight modification of Lemma 2.4. Indeed, we notice that is differentiable in . For more details we refer to [10, 14]. ∎
Remark 3.6.
We notice that is Markovian (see e.g. [13, Theorem 1.157]).
3.2. On the backward equation
In this section we study the backward equation
(3.10)
introduced in (3.2). We study existence and uniqueness of a solution, as well as its continuous dependence on the parameter . Later on we will exploit (3.10) to prove (3.6), as well as to show that the optimal value for the optimization problem (2.5)-(2.8) is achieved for .
We observe that the following a priori estimate for the pair holds (see [22] and [14, Proposition 4.3]):
where is a constant depending on and , where , are the coefficients in Hypothesis 3.2.
Proposition 3.8.
Assume that Hypotheses 3.2 and 3.4 hold true. Then (3.10) admits a unique solution such that the map
for , where is the space of adapted processes taking values in such that has continuous paths and
Moreover, for every .
Proof.
See [22, Proposition 4.2]. ∎
Still aiming to prove (3.6), we provide yet another crucial result that links the directional derivative of to its Malliavin derivative.
Proposition 3.9.
Proof.
Thanks to Proposition 3.5, for every and every direction , the directional derivative process , solves -a.s. the equation
Given and , we can replace by and by in the previous equation, since is measurable. Note now that
for , as a consequence of the uniqueness of the solution of (3.7). This yields
for , -a.s. This shows that the process
is a solution of the equation
where . The claim now follows from the uniqueness property, as proved e.g. in [14, Proposition 3.5]. To complete the proof of (3.12), we take a sequence such that (3.11) holds for every , and we pass to the limit (see [14]). The result follows from the regularity properties of and , as well as the closedness of the operator on UMD Banach spaces. ∎
Proposition 3.10.
Proof.
The proof follows the same arguments as [14, Proposition 5.6], though the spaces at play are different. The main tools are provided in Proposition 3.9. Thus, thanks to the extension of Malliavin calculus to UMD Banach spaces and to the chain rule linking the Malliavin derivative and the Gâteaux derivative (see Proposition 2.13), the result is secured. ∎
Finally, the next result provides the proof of (3.6).
Proposition 3.11.
3.3. The HJB equation
Formally define
We can consider the Hamilton-Jacobi-Bellman equation associated with the control problem (2.5) - (2.8), which is given by
(3.16)
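Schematically (a sketch consistent with [14, 22]; the rigorous formulation is (3.16) itself), writing $\mathcal{Y}$ for the lifted state space, the equation reads
\[
\begin{cases}
\dfrac{\partial v}{\partial t}(t,z)+\mathcal{L}_t[v(t,\cdot)](z)+\psi\big(t,z,\nabla v(t,z)\,\hat\sigma(t,z)\big)=0, & (t,z)\in[0,T]\times\mathcal{Y},\\[4pt]
v(T,z)=\hat h(z), &
\end{cases}
\]
with $\mathcal{L}_t$ the operator formally defined above and $\psi$ the Hamiltonian (3.3).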
A solution of this equation provides a way to compute in (3.4) by PDE methods (see e.g. [6]). The connection between (3.16) and (3.4) is detailed in the forthcoming Theorem 3.13 by means of the forward-backward system (3.2). Later on, in Theorem 3.18, we shall see how is connected with the optimal performance, see (3.5). Thus, we are interested in finding mild solutions to the previous equation, which we define below. This problem was tackled in [14] (for a Hilbert space) in the case of a general , and in [22] (for a Banach space) in the case of a constant . Our result thus extends the one in [22].
Let be a solution to (3.7), with , and satisfying Hypotheses 3.2 - 3.4. We recall that this solution is a -valued Markov process (see Remark 3.6). We can then define the transition semigroup on continuous and bounded functions as
Moreover, we have that this semigroup is also well defined on continuous functions with polynomial growth with respect to .
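Explicitly (a sketch, writing $Z^{t,z}_s$ for the solution of (3.7) started at time $t$ from $z$):
\[
P_{t,s}[\phi](z):=\mathbb{E}\big[\phi\big(Z^{t,z}_s\big)\big],\qquad0\le t\le s\le T,
\]
for $\phi$ continuous and bounded on the lifted state space.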
Definition 3.12.
A function is a mild solution of the Hamilton-Jacobi-Bellman equation (3.16) if:
-
•
For every , is continuous and is measurable from with values in
-
•
For every , there exists such that and , with and and positive integers.
-
•
The following equality holds.
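Schematically, in the semigroup notation just introduced (a sketch consistent with the mild-solution notion of [14, 22]), the equality reads
\[
v(t,z)=P_{t,T}[\hat h](z)+\int_t^TP_{t,s}\Big[\psi\big(s,\cdot,\nabla v(s,\cdot)\,\hat\sigma(s,\cdot)\big)\Big](z)\,ds,\qquad t\in[0,T].
\]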
In order to prove that there exists a unique solution of (3.16) we need once again the forward-backward system (3.2):
Theorem 3.13.
3.4. Solving the optimal control problem
As we have proven the identification (3.6) (see Proposition 3.11), we can finally move to the study of the optimal control problem (2.5)-(2.8). As anticipated, we want to show that the optimal value
where is defined in (3.4), solve the backward stochastic differential equation (3.2), and a solution of can be obtained through the HJB equation (3.16) (see Theorem 3.13). We define the, possibly empty, set
(3.18)
where , , .
Hypothesis 3.14.
We notice that, intuitively, represents the set of controls that realize the minimum in the Hamiltonian (3.3). We will thus assume that for all , , ,
Remark 3.15.
Thanks to the Filippov theorem (see [4]), being non-empty for all , , , there exists a Borel measurable map such that, for , and , .
Proposition 3.16.
Proof.
Let in and take to be a solution of (3.1) corresponding to the control (for the existence of such a solution see Corollary 3.7). Define
We notice that solves the equation
and, being and thus bounded, we can find a probability equivalent to such that is a Wiener process on and thus is a cylindrical Wiener process with values in (see [10, Theorem 7.2 (iii)]). We consider the backward equation w.r.t. for the unknowns , given by
(3.19)
By taking at in (3.19), we get that depends only on and . With the same approach as in [22, Proposition 5.5], one immediately obtains that is actually a -martingale. ∎
Corollary 3.17.
Proof.
The proof is in line with the one in [22, Corollary 5.6]. ∎
Theorem 3.18.
Assume that Hypotheses 3.1, 3.2, 3.4 and 3.14 hold true. For all admissible controls in , we have that
and the equality holds true if and only if
(3.21)
Moreover, let us denote by the measurable selection of defined in Remark 3.15. A control satisfying the feedback law, defined as:
(3.22)
is optimal. Define the closed loop equation:
(3.23)
Then (3.23) admits a weak solution which is unique in law, and the corresponding pair is optimal.
For more details about the definition of feedback law and closed loop equation in the case of optimal control for Hilbert spaces we refer to [13, Section 2.5].
Proof.
Using (3.20) and Proposition 3.10 we can rewrite as
The proof of the first statement now follows from Corollary 3.17. The closed loop equation can be solved in the weak sense via a Girsanov change of measure. Recall that is the probability space on which the Wiener process in (3.2) is defined. Define as
Due to the Girsanov theorem there exists a probability on such that is a Wiener process. We then notice that and are cylindrical Wiener processes with values in , and that the closed loop equation (3.23) can be rewritten under as
Then, thanks to Proposition 3.5, we have a unique solution of this new equation relative to the probability and the Wiener process , which implies that the closed loop equation (3.23) admits a solution in the weak sense. Thanks to Hypothesis 3.14, we know that is non-empty and thus, by the Filippov theorem, a measurable selection of exists and the optimal control can be obtained. This proof is in line with [22, Theorem 5.7] and [14, Theorem 7.2]. ∎
Remark 3.19.
Notice that, having solved the lifted optimization problem, thanks to (2.2)-(2.7) we have also solved the original problem (1.1)-(1.2). Indeed, a control which is optimal for (2.8), where the dynamics of the forward process are given by (3.1), is also optimal for the original problem (1.1), as by definition and .
4. A problem of optimal consumption
A cash flow admits consumption with rate according to the forward dynamics
(4.1)
where , , and satisfy Hypothesis 3.4, and satisfy Hypothesis 3.1. In this case we lift on for and such that by considering
the left shift semigroup defined as , for all , , . Then we have that
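Schematically, with this choice of semigroup the kernel is recovered through the pairing of Definition 2.6 as
\[
\langle\mu,S_tg\rangle=\int_{\mathbb{R}_+}g(t+x)\,\mu(dx)=K(t),
\]
whenever $g$ and $\mu$ are chosen so that the right-hand side reproduces $K$ (the exact identification is the displayed one).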
We consider a classical optimal control problem given by the maximization of the performance functional
(4.2)
for some functions and () satisfying Hypothesis 3.2. Linear-quadratic performance functionals such as (4.2) appear, for example, when considering optimal advertising problems (see e.g. [16, 17] and [15]). In this case the stochastic control problem can be reformulated in with forward dynamics given by
(4.3)
with such that for all . The goal is to minimize
(4.4)
In this case the Hamiltonian functional (3.3) is given by
(4.5)
and the forward-backward system is
(4.6)
In particular, using (4.5), we have that
We thus get that the set defined in (3.18) is
(4.7)
and thus the optimal can be characterized by Theorem 3.18 as
(4.8)
for a certain function such that . In this case the HJB equations (3.16) become
(4.9)
where, we recall, , , and
For more details about solving the HJB equation (4.9) we refer to [15]. Now, thanks to Theorem 3.13, we have the following:
Theorem 4.1.
Equation (4.9) has a unique mild solution . If the cost is given by (4.4), then for all admissible couples we have that , and the equality holds if and only if in (4.7), characterized as (4.8). Vice versa, if (4.8) holds, then
with initial condition , admits a weak solution, which is unique in law, and the corresponding pair is optimal.
Remark 4.2.
This gives us a characterization of the optimal control for the lifted problem (4.4), and thus also for (4.2): in fact, in our case we have that . Thanks to this we are able to find the optimal process . Moreover, we note that the optimal control for the lifted problem and the optimal control for the original problem coincide. This allows us to retrieve the optimal pair for the optimal control problem (4.2). Lastly, we notice that the HJB equation (4.9) gives us the optimal value , which in turn gives us the optimal value of , thanks to (2.7).
Remark 4.3.
Inspired by [11], we could also have considered the kernel , in (4.1). In this case we can take the space of functions on and its dual of measures with density in , where and . We have that
where , and , i.e. is the Laplace transform of . We notice that this kernel is liftable (see Definition 2.6) and that we are in a UMD Banach space. It is clear that is actually in for all . In particular, we can also take and work on the Hilbert space .
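As an illustration of this class (a known identity, stated here for orientation), the rough fractional kernel is of this Laplace-transform type: for $H\in(0,\tfrac12)$ and $t>0$,
\[
t^{H-\frac12}=\frac{1}{\Gamma(\tfrac12-H)}\int_0^{\infty}e^{-xt}\,x^{-H-\frac12}\,dx,
\]
so that $K(t)=t^{H-1/2}$ corresponds to the density $\lambda(x)=x^{-H-1/2}/\Gamma(\tfrac12-H)$. Note that this $\lambda$ is not integrable at infinity, so a cutoff may be needed to match the integrability assumptions of this remark.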
Acknowledgment
We would like to thank Anton Yurchenko-Tytarenko and Dennis Schroers for their valuable input on kernel decompositions. The research leading to these results is part of the project STORM: Stochastics for Time-Space Risk Models, receiving funding from the Research Council of Norway (RCN), project number 274410.
References
- [1] E. Abi Jaber, E. Miller, and H. Pham. Linear-quadratic control for a class of stochastic Volterra equations: solvability and approximation. The Annals of Applied Probability, 31:2244–2274, 2021.
- [2] N. Agram and B. Øksendal. Malliavin Calculus and Optimal Control of Stochastic Volterra Equations. Journal of Optimization Theory and Applications, 167:1070–1094, 2015.
- [3] N. Agram, B. Øksendal, and S. Yakhlef. Optimal Control of Forward-Backward Stochastic Volterra Equations. Non-linear Partial Differential Equations, Mathematical Physics, and Stochastic Analysis: The Helge Holden Anniversary Volume, pages 3–36, 2009.
- [4] J.P. Aubin and H. Frankowska. Set-valued analysis. Modern Birkhäuser Classics, 1990.
- [5] S. Bonaccorsi and F. Confortola. Optimal control for stochastic Volterra equations with multiplicative Lévy noise. Nonlinear Differential Equations and Applications, pages 1–26, 2020.
- [6] P. Cannarsa and G. Da Prato. Second-order Hamilton–Jacobi equations in infinite dimensions. SIAM Journal on Control and Optimization, 29(2):474–492, 1991.
- [7] C. Cuchiero and G. Di Nunno. Notes - Markovian lifts. to appear, 2019.
- [8] C. Cuchiero and J. Teichmann. Markovian lifts of positive semidefinite affine Volterra-type processes. Decisions in Economics and Finance, 42:407–448, 2019.
- [9] C. Cuchiero and J. Teichmann. Generalized Feller processes and Markovian lifts of stochastic Volterra processes: the affine case. Journal of Evolution Equations, pages 1–48, 2020.
- [10] G. Da Prato and J. Zabczyk. Stochastic Equations in Infinite Dimensions. Encyclopedia of Mathematics and its Applications. Cambridge University Press, 2nd edition, 2014.
- [11] G. Di Nunno, A. Fiacco, and E. Hove Karlsen. On the approximation of Lévy driven Volterra processes and their integrals. Journal of Mathematical Analysis and Applications, 476:120–148, 2019.
- [12] G. Di Nunno and M. Giordano. Stochastic Volterra equations with time-changed Lévy noise and maximum principles. Annals of Operations Research, 2023.
- [13] G. Fabbri, F. Gozzi, and A. Swiech. Stochastic Optimal Control in Infinite Dimension: Dynamic Programming and HJB Equations. Springer, 2017.
- [14] M. Fuhrman and G. Tessitore. Nonlinear Kolmogorov equations in infinite dimensional spaces: the backward stochastic differential equations approach and applications to optimal control. The Annals of Probability, 30:1397–1465, 2002.
- [15] M. Giordano and A. Yurchenko-Tytarenko. Optimal control in linear stochastic advertising models with memory. ArXiv, 2021.
- [16] F. Gozzi and C. Marinelli. Stochastic optimal control of delay equations arising in advertising models. In Stochastic Partial Differential Equations and Applications, Lecture Notes in Pure and Applied Mathematics, pages 133–148. Chapman and Hall/CRC, 2005.
- [17] F. Gozzi, C. Marinelli, and S. Savin. On controlled linear diffusions with delay in a model of optimal advertising under uncertainty with memory effects. Journal of optimization theory and applications, 142(2):291–321, 2009.
- [18] C. Hernandez and D. Possamaï. A unified approach to well-posedness of type-I backward stochastic Volterra integral equations. ArXiv, 2020.
- [19] T. Hytönen, J. Van Neerven, M. Veraar, and L. Weis. Analysis in Banach Spaces, volume I: Martingales and Littlewood-Paley Theory. Springer, Cham, 2016.
- [20] C. Li and W. Zhen. Stochastic optimal control problem in advertising model with delay. Journal of Systems Science and Complexity, 33:968–987, 2020.
- [21] J. Maas and J. Van Neerven. A Clark-Ocone formula in UMD Banach spaces. Electronic Communications in Probability, 2008.
- [22] F. Masiero. Stochastic optimal control problems and parabolic equations in Banach spaces. SIAM Journal on Control and Optimization, 47:251–300, 2008.
- [23] M. Pronk and M. Veraar. Tools for Malliavin calculus in UMD Banach spaces. Potential Analysis, 40:307–344, 2014.
- [24] J. Prüss. Evolutionary integral equations and applications, volume 87. Birkhäuser, 2013.
- [25] M. Saeedian, M. Khalighi, N. Azimi-Tafreshi, G.R. Jafari, and M. Ausloos. Memory effects on epidemic evolution: The susceptible-infected-recovered epidemic model. Physical Review E, 95(2):022409, 2017.
- [26] J. Van Neerven. Stochastic evolution equations. Lecture notes, 2007.
- [27] J. Van Neerven and M. Veraar. On the stochastic Fubini theorem in infinite dimensions. In Stochastic Partial Differential Equations and Applications—VII, volume 245 of Lecture Notes in Pure and Applied Mathematics, 2005.
- [28] J. Van Neerven, M. Veraar, and L. Weis. Stochastic integration in UMD Banach spaces. The Annals of Probability, 35, 2007.
- [29] J. Yong. Backward Stochastic Volterra Integral Equations and some Related Problems. Stochastic Processes and their Applications, 116:770–795, 2006.
- [30] J. Yong. Well-Posedness and Regularity of Backward Stochastic Volterra Integral Equations. Probability Theory and Related Fields, 142:21–77, 2007.