A framework for adaptive Monte-Carlo procedures
Abstract
Adaptive Monte Carlo methods are recent variance reduction techniques. In this work, we propose a mathematical setting which greatly relaxes the assumptions needed by the adaptive importance sampling techniques presented in [24, 23, 1, 2]. We establish the convergence and asymptotic normality of the adaptive Monte Carlo estimator under local assumptions which are easily verifiable in practice. We present one way of approximating the optimal importance sampling parameter using a randomly truncated stochastic algorithm. Finally, we apply this technique to some examples of valuation of financial derivatives.
1 Introduction
Monte-Carlo methods aim at computing the expectation $\mathbb{E}[Z]$ of a real-valued random variable $Z$ using samples drawn along the law of $Z$. In this work, we focus on cases where there exists a parametric representation of the expectation
$$\mathbb{E}[Z] = \mathbb{E}[g(\theta, X)] \quad \text{for all } \theta \in \mathbb{R}^q, \tag{1}$$
where $X$ is a random vector with values in $\mathbb{R}^d$ and $g : \mathbb{R}^q \times \mathbb{R}^d \to \mathbb{R}$ is a measurable function satisfying $\mathbb{E}[|g(\theta, X)|] < \infty$ for all $\theta \in \mathbb{R}^q$. We also impose that
$$v(\theta) := \operatorname{Var}\big(g(\theta, X)\big) < \infty \quad \text{for all } \theta \in \mathbb{R}^q. \tag{2}$$
We want to make the most of this free parameter $\theta$ to set up an automatic variance reduction method; see [8] for a recent survey on adaptive variance reduction. It consists in first finding a minimiser $\theta^\star$ of the variance $v$ and then plugging it into a Monte Carlo method with a narrower confidence interval. This technique heavily relies on the ability to find a parametric representation and to effectively minimise the function $v$. Many papers have been written on how to construct parametric representations for several kinds of random variables $Z$. We mainly have in mind examples based on control variates (see [4, 13, 12]) or importance sampling (see [24, 23, 1, 2]). We refer the reader to Section 4 for a presentation of a few examples.
Assume we have a parametric representation of the form $g(\theta, X)$ satisfying Equations (1) and (2). Let $(X_n)_{n \ge 1}$ be an independent and identically distributed sequence of random vectors following the law of $X$. Assume we know how to use the sequence $(X_n)_n$ to build an estimator $(\theta_n)_n$ of $\theta^\star$ adapted to the filtration $\mathcal{F}_n = \sigma(X_1, \ldots, X_n)$. Once such an approximation is available, there are at least two ways of using it to devise a variance reduction method.
The non-adaptive algorithm
Algorithm 1.1 (Non adaptive importance sampling (NADIS)).
Let $N$ be the number of samples used for the Monte Carlo computation. Draw a second set of samples $(\tilde{X}_n)_{1 \le n \le N}$ independent of $(X_n)_n$ and compute
$$\frac{1}{N} \sum_{i=1}^{N} g(\theta_N, \tilde{X}_i).$$
Since the sequence $(\theta_n)_n$ converges to $\theta^\star$, the convergence of $\frac{1}{N} \sum_{i=1}^{N} g(\theta_N, \tilde{X}_i)$ to $\mathbb{E}[Z]$ ensues from the strong law of large numbers, and the sequence satisfies a central limit theorem with asymptotic variance $v(\theta^\star)$.
This algorithm has been studied in [23, 2] and requires $2N$ samples. It may use fewer than $2N$ samples if the estimation of $\theta^\star$ is performed on a smaller number of samples, but this raises the question of how many samples to use.
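To fix ideas, here is a minimal Python sketch of this non-adaptive procedure; the helper names (`nadis`, `estimate_theta`) and the toy payoff are ours, and the estimation routine is deliberately left abstract since any estimator of $\theta^\star$ can be plugged in.

```python
import numpy as np

def nadis(g, estimate_theta, sample, n, rng):
    """Non-adaptive importance sampling (a sketch of Algorithm 1.1).

    g(theta, x)    -- parametric representation of Equation (1)
    estimate_theta -- builds an approximation of theta* from a batch
    sample(n, rng) -- draws n i.i.d. copies of X
    """
    theta = estimate_theta(sample(n, rng))      # first batch: estimate theta*
    values = g(theta, sample(n, rng))           # second, independent batch
    half_width = 1.96 * values.std(ddof=1) / np.sqrt(n)   # 95% confidence interval
    return values.mean(), half_width

# Toy usage: E[(e^G - 1)^+] for G ~ N(0,1), with the Cameron-Martin shift
# g(theta, x) = f(x + theta) exp(-theta x - theta^2/2) of Section 4.1, and a
# crude heuristic shift in place of a genuine estimator of theta*.
f = lambda x: np.maximum(np.exp(x) - 1.0, 0.0)
g = lambda th, x: f(x + th) * np.exp(-th * x - 0.5 * th**2)
rng = np.random.default_rng(0)
print(nadis(g, lambda batch: 1.0, lambda n, r: r.standard_normal(n), 100_000, rng))
```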
The adaptive algorithm
The adaptive approach is to use the same samples to compute $\theta_n$ and the Monte Carlo estimator. Compared to the sequential algorithm, the adaptive one uses half as many samples.
Algorithm 1.2 (Adaptive Importance Sampling (ADIS)).
Let $N$ be the number of samples used for the Monte Carlo computation. For $n$ fixed in $\{1, \ldots, N\}$, compute
$$\xi_n = \frac{1}{n} \sum_{i=1}^{n} g(\theta_{i-1}, X_i). \tag{3}$$
Note that the sequence $(\xi_n)_n$ can be written in a recursive manner, so that it can be updated online each time a new iterate $X_{n+1}$ is drawn:
$$\xi_{n+1} = \xi_n + \frac{1}{n+1} \big( g(\theta_n, X_{n+1}) - \xi_n \big).$$
Being able to update the sequence online has the advantage that there is no need to store the whole sequence $(X_n)_n$ for computing $\xi_n$. This adaptive algorithm was first studied in [1], in which the author studied the convergence of the sequence $(\xi_n)_n$ under assumptions to be verified along the path, which makes them hard to check in practice. In this article, we prove a new convergence result under local integrability conditions on the function $g$; namely, we impose that for any compact subset $K$ of $\mathbb{R}^q$, $\mathbb{E}\big[\sup_{\theta \in K} g(\theta, X)^2\big] < \infty$. We refer the reader to Section 2.1 for a precise statement and proof of these results. We want to emphasise that such assumptions, only involving properties of the function $g$ and not of the sequence $(\theta_n)_n$, are far easier to check in practice.
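A minimal Python sketch of this online update, with our own naming, may help; `update_theta` stands for whatever $\mathcal{F}_n$-adapted estimation rule is available, for instance the truncated stochastic algorithm of Section 3.

```python
import numpy as np

def adis(g, update_theta, sample_one, theta0, n):
    """Adaptive importance sampling (a sketch of Algorithm 1.2): the same
    samples feed both the estimator of theta* and the Monte Carlo average
    xi_n of Equation (3), which is updated online."""
    theta, xi, m2 = theta0, 0.0, 0.0
    for i in range(1, n + 1):
        x = sample_one()
        y = g(theta, x)                    # theta is F_{i-1}-measurable
        xi += (y - xi) / i                 # recursive update of xi_n
        m2 += (y * y - m2) / i             # online second moment for the CI
        theta = update_theta(theta, x, i)  # any adapted update rule
    half_width = 1.96 * np.sqrt(max(m2 - xi * xi, 0.0) / n)
    return xi, half_width
```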
So far, we have assumed that we knew how to devise a convergent estimator of $\theta^\star$, but this may not be so simple: when no closed form expression is available for $\mathbb{E}[Z]$, there is hardly any chance that the function $v$ can be computed explicitly. Hence, one needs to approximate $\theta^\star$ without being able to compute the variance itself. In this work, we recall the methodology based on stochastic approximation developed in [23, 24, 2] to estimate $\theta^\star$ using stochastic gradient style algorithms. We aim at applying this methodology to the evaluation of financial derivatives, and the main difficulty in approximating $\theta^\star$ comes from the unboundedness of the payoff functions usually considered and, consequently, the unboundedness of the functions $g(\theta, \cdot)$. To circumvent this problem, several authors, as in [24, 23], restrict the parameter $\theta$ to lie in a compact set, which is obviously unknown in practice; therefore, this compact set has to be quite large. Although this permits proving the theoretical convergence of the Robbins-Monro algorithm, it does not help to build a numerically convergent estimator of $\theta^\star$. It is well known that the practical convergence of stochastic algorithms highly relies on the fine tuning of the gain sequence, which turns out to be very difficult when dealing with an artificially bounded parameter set.
In this work, following [2], we rather use a randomly truncated algorithm, which is known to converge for a much wider class of functions. We give a unified framework with easily verifiable assumptions under which Algorithm 1.2 converges and satisfies a central limit theorem. Then, we combine this convergence result with the new results on randomly truncated stochastic algorithms from [16] to revisit the adaptive algorithm in the Gaussian framework studied in [1].
The paper is organised as follows. In Section 2, we focus on the mathematical foundation of the method and give both a strong law of large numbers and a central limit theorem for the adaptive estimator under weak assumptions. In Section 3, we present one way of constructing a convergent estimator of $\theta^\star$ and recall some recent results on stochastic approximation. Then, we give in Section 4 some examples of how to construct a parametric representation using importance sampling or other more elaborate transformations. Finally, we illustrate the convergence results obtained in Section 2 on numerical examples coming from financial problems.
2 Mathematical foundations of the method
Notations:
• We encode any element of $\mathbb{R}^q$ as a column vector.
• If $x \in \mathbb{R}^q$, $x^*$ is a row vector. We use the “∗” notation to denote the transpose operator for vectors and matrices.
• If $x, y \in \mathbb{R}^q$, $x^* y$ denotes the Euclidean scalar product of $x$ and $y$, and the associated norm is denoted by $|x|$.
In this section, $(X_n)_{n \ge 1}$ is an i.i.d. sequence following the law of $X$, and we introduce the $\sigma$-algebra it generates, $\mathcal{F}_n = \sigma(X_1, \ldots, X_n)$. For technical reasons, we assume that the variance does not vanish, i.e. $v(\theta) > 0$ for all $\theta \in \mathbb{R}^q$. If such is not the case, it means that we are actually in a better situation as far as variance reduction is concerned, but it does not fit in our framework.
2.1 An adaptive strong law of large numbers
Theorem 2.1 (Adaptive strong law of large numbers).
Proof.
For any $p \ge 1$, we define the stopping time $\tau_p = \inf\{ n \ge 0 : |\theta_n| > p \}$. The sequence $(\tau_p)_p$ is an increasing sequence of stopping times such that $\tau_p \uparrow \infty$ a.s. Let
$$M_n = \sum_{i=1}^{n} \big( g(\theta_{i-1}, X_i) - \mathbb{E}[Z] \big) = n \, (\xi_n - \mathbb{E}[Z]).$$
We introduce the stopped sequence $M^{\tau_p}$ defined by $M^{\tau_p}_n = M_{n \wedge \tau_p}$. On the set $\{ n \le \tau_p \}$, the conditional expectation $\mathbb{E}\big[ g(\theta_{n-1}, X_n)^2 \,\big|\, \mathcal{F}_{n-1} \big]$ is bounded from above by $\mathbb{E}\big[ \sup_{|\theta| \le p} g(\theta, X)^2 \big]$. Hence, the sequence $(M^{\tau_p}_n)_n$ is square integrable, and it is obvious that it is a martingale, which means that the sequence $(M_n)_n$ is a locally square integrable martingale (i.e. a local martingale which is locally square integrable).
By Condition (4), we have $\sum_{n \ge 1} n^{-2} \, \mathbb{E}\big[ g(\theta_{n-1}, X_n)^2 \,\big|\, \mathcal{F}_{n-1} \big] < \infty$ a.s. Applying the strong law of large numbers for locally square integrable martingales (see [18]) yields the result. ∎
The sequence $(\theta_n)_n$ can be any sequence adapted to $(\mathcal{F}_n)_n$, convergent or not. For instance, $(\theta_n)_n$ can be an ergodic Markov chain distributed around the minimiser $\theta^\star$, such as in Markov chain Monte Carlo algorithms.
Remark 2.2.
When the sequence $(\theta_n)_n$ converges a.s. to a deterministic constant $\theta_\infty$, it is sufficient to assume that $v$ is continuous at $\theta_\infty$ to ensure that Condition (4) is satisfied. Note that there is no need to impose that $\theta_\infty = \theta^\star$, although it is undoubtedly wished in practice. For instance, $\theta_\infty$ can be an approximation of $\theta^\star$ obtained by heuristic arguments such as large deviations estimates.
2.2 A Central limit theorem for the adaptive strong law of large numbers
To derive a central limit theorem for the adaptive estimator $\xi_n$, we need a central limit theorem for locally square integrable martingales, whose convergence rate has been extensively studied. We refer to the works of Rebolledo [21], Jacod and Shiryaev [7], Hall and Heyde [6] and Whitt [25] for different statements of central limit theorems for locally square integrable càdlàg martingales in continuous time, from which theorems for discrete time locally square integrable martingales can easily be deduced.
Theorem 2.3.
Proof.
We know from the proof of Theorem 2.1 that $(M_n)_n$, with $M_n = n (\xi_n - \mathbb{E}[Z])$, is a locally square integrable martingale and that $\frac{1}{n} \langle M \rangle_n$ converges a.s. to $v(\theta_\infty)$.
The conditional Lindeberg term is bounded thanks to the continuity of $v$ at $\theta_\infty$. Hence, the local martingale $(M_n)_n$ satisfies Lindeberg's condition. The result ensues from the central limit theorem for locally square integrable martingales. ∎
Corollary 2.4 (Effective central limit theorem with confidence interval).
Remark 2.5.
Even if $v(\theta^\star) > 0$, the on-line variance estimator may take negative values for small $n$. This corollary is really essential from a practical point of view because it proves that confidence intervals can be built just as for a crude Monte Carlo procedure. The only difference lies in the way of approximating the asymptotic variance.
The assumptions of Theorem 2.3 are fairly easy to check in practice since they are formulated independently of the sequence $(\theta_n)_n$. When $\theta_\infty = \theta^\star$, which is nonetheless not required, the limiting variance is optimal in the sense that a crude Monte Carlo computation with the optimal parameter would have led to the same limiting variance. These assumptions are satisfied in the frameworks introduced in Section 4.
3 Estimation of the optimal variance parameter
From Theorem 2.1 and Theorem 2.3, we know that if we can construct a convergent estimator of $\theta^\star$, the adaptive estimator $\xi_n$ is a convergent and asymptotically normal estimator of the expectation $\mathbb{E}[Z]$. The challenging issue is now to propose an automatic way of approximating the minimiser $\theta^\star$ of $v$. In the following, we will assume that $v$ is strictly convex, goes to infinity at infinity and is continuously differentiable. Moreover, we assume that $\nabla v$ admits a representation as an expectation
$$\nabla v(\theta) = \mathbb{E}[H(\theta, U)],$$
where $H$ is a measurable function such that $H(\theta, U)$ is integrable for every $\theta$, and $U$ is a random variable. We will see in the examples developed in Section 4 that these conditions are very easily satisfied. Stochastic algorithms such as the Robbins-Monro algorithm (see [22]) are perfectly well suited to estimate quantities defined as the root of an expectation. Because for the applications we are targeting we cannot impose the sublinear growth condition on $H$ classically required, the Robbins-Monro algorithm may fail to converge and we need a more robust algorithm. This naturally leads us to consider randomly truncated stochastic algorithms as introduced by Chen et al. [3]. When dealing with stochastic approximations, the idea of averaging the iterates comes out quite naturally to smooth the trajectories; see Section 3.2.
3.1 Randomly truncated stochastic algorithms
Let $(U_n)_{n \ge 1}$ be an i.i.d. sequence of random variables following the law of $U$ and $(\gamma_n)_{n \ge 1}$ be a decreasing sequence of positive real numbers satisfying
$$\sum_{n \ge 1} \gamma_n = \infty \quad \text{and} \quad \sum_{n \ge 1} \gamma_n^2 < \infty. \tag{5}$$
The sequence $(\gamma_n)_n$ is often called the gain sequence or the step sequence. We define the $\sigma$-field $\mathcal{F}_n = \sigma(U_1, \ldots, U_n)$. We introduce an increasing sequence $(K_j)_{j \ge 0}$ of compact sets of $\mathbb{R}^q$ satisfying
$$\bigcup_{j=0}^{\infty} K_j = \mathbb{R}^q \quad \text{and} \quad K_j \subsetneq \operatorname{int}(K_{j+1}). \tag{6}$$
Now, we can present the randomly truncated stochastic algorithm introduced in [3], which essentially consists in a truncation of the Robbins-Monro algorithm on an increasing sequence of compact sets. For $\theta_0 \in K_0$ and $\sigma_0 = 0$, we define the sequences of random variables $(\theta_n)_n$ and $(\sigma_n)_n$ by
$$\tilde{\theta}_{n+1} = \theta_n - \gamma_{n+1} H(\theta_n, U_{n+1}), \qquad
\begin{cases} \theta_{n+1} = \tilde{\theta}_{n+1} \ \text{and} \ \sigma_{n+1} = \sigma_n & \text{if } \tilde{\theta}_{n+1} \in K_{\sigma_n}, \\ \theta_{n+1} = \theta_0 \ \text{and} \ \sigma_{n+1} = \sigma_n + 1 & \text{otherwise.} \end{cases} \tag{7}$$
The candidate $\tilde{\theta}_{n+1}$ is drawn along the dynamics of the Robbins-Monro algorithm: either we accept it as the new iterate, or, when the algorithm tries to jump too far ahead in a single step, we reject it and reset the new iterate to $\theta_0$; the counter $\sigma_n$ records the number of truncations up to time $n$. In the following, we write Equation (7) in the more condensed form
$$\theta_{n+1} = \pi_{K_{\sigma_n}}\!\big( \theta_n - \gamma_{n+1} H(\theta_n, U_{n+1}) \big), \tag{8}$$
where $\pi$ denotes the truncation on the compact sets $(K_j)_j$.
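The following Python sketch implements this truncation mechanism; the ball-shaped compact sets with doubling radii are our own arbitrary choice, since any sequence satisfying (6) would do.

```python
import numpy as np

def truncated_sa(H, sample_one, theta0, gamma, radius0, n):
    """Randomly truncated Robbins-Monro algorithm, Equations (7)-(8).

    The compact sets are balls K_j = {|theta| <= radius0 * 2^j}.  When the
    raw Robbins-Monro move leaves K_{sigma_n}, the iterate is reset to
    theta0 and the truncation index sigma is increased."""
    theta = np.asarray(theta0, dtype=float)
    sigma = 0
    for k in range(1, n + 1):
        candidate = theta - gamma(k) * H(theta, sample_one())
        if np.linalg.norm(candidate) <= radius0 * 2.0**sigma:
            theta = candidate                        # accept the move
        else:
            theta = np.asarray(theta0, dtype=float)  # reset the iterate ...
            sigma += 1                               # ... and enlarge the compact set
    return theta
```

A gain sequence such as `gamma = lambda k: 1.0 / k**0.75` satisfies Condition (5).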
The use of truncations makes it possible to relax the hypotheses required to ensure convergence. From the recent results of [16], we can state the following convergence result.
Theorem 3.1.
Note that the assumptions required to ensure convergence are very weak and are formulated independently of the algorithm trajectories, which makes them easy to check. Since the variance reduction technique we set up here aims at being automatic, in the sense that it does not require any fiddling with the gain sequence $(\gamma_n)_n$ depending on the function $H$, it is quite natural to average the procedure defined by Equation (7).
3.2 Averaging a stochastic algorithm
This section is based on the remark that Cesaro type averages tend to smooth the behaviour of convergent estimators at least from a theoretical point of view. Such averaging techniques have already been studied and proved to provide asymptotically efficient estimators (see for instance [20], [14] or [19]).
At the same time, it is well known that true Cesaro averages are not so efficient from a practical point of view, because the rate at which the impact of the first iterates vanishes in the average is too slow; this induces a kind of numerical bias which in turn dramatically slows down the convergence. Combining these two facts has led us to consider a moving window average of Algorithm (7).
In this section, we restrict ourselves to gain sequences of the form $\gamma_n = \gamma_1 n^{-\alpha}$ with $\alpha \in (1/2, 1)$. Let $T > 0$ be the length of the window used for averaging. For $n \ge 1$, we introduce
$$\bar{\theta}_n = \frac{\gamma_n}{T} \sum_{i = n - \lfloor T / \gamma_n \rfloor + 1}^{n} \theta_i, \tag{9}$$
with the convention $\theta_i = \theta_0$ for $i \le 0$. The almost sure convergence of $(\bar{\theta}_n)_n$ can easily be deduced from Theorem 3.1. The asymptotic normality of the sequence $(\bar{\theta}_n)_n$ has been studied in [15]. The definition of $\bar{\theta}_n$ is a little different from the one used in [15] because we want to ensure that the sequence $(\bar{\theta}_n)_n$ is adapted to the filtration $(\mathcal{F}_n)_n$, in view of the use of $\bar{\theta}_n$ as an estimator of $\theta^\star$ in Algorithm 1.2.
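As an illustration, here is a sketch of a moving-window average in Python; for simplicity it uses a fixed window of iterates instead of the gain-dependent window of Equation (9), which it only mimics.

```python
from collections import deque

def averaged_run(one_step, theta0, window, n):
    """Moving-window Cesaro average in the spirit of Equation (9): only the
    last `window` iterates enter the average, so early iterates stop
    polluting it, unlike in a full Cesaro mean."""
    theta = theta0
    last = deque([theta0], maxlen=window)
    for k in range(1, n + 1):
        theta = one_step(theta, k)   # one update of Equation (7)
        last.append(theta)
    return sum(last) / len(last)     # the averaged estimator
```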
4 Examples of parametric Monte-Carlo settings
In this section, we give various examples of cases in which a parametric representation $\mathbb{E}[Z] = \mathbb{E}[g(\theta, X)]$ of the expectation of interest is available.
In each example, we highlight the strong convexity and the regularity of the function $v$, so that the minimiser $\theta^\star$ is uniquely defined as the unique root of $\nabla v$.
4.1 Importance sampling for normal random variables
Let $G$ be a $d$-dimensional standard normal random vector. For any measurable function $\phi : \mathbb{R}^d \to \mathbb{R}$ such that $\mathbb{E}[|\phi(G)|] < \infty$, one has for all $\theta \in \mathbb{R}^d$
$$\mathbb{E}[\phi(G)] = \mathbb{E}\!\left[ \phi(G + \theta)\, e^{-\theta^* G - \frac{|\theta|^2}{2}} \right]. \tag{10}$$
Assume we want to compute $\mathbb{E}[f(G)]$ for a measurable function $f$ such that $f(G)$ is square integrable. By applying Equality (10) to $f$ and $f^2$, one obtains that the expectation and the variance of the random variable $f(G + \theta)\, e^{-\theta^* G - \frac{|\theta|^2}{2}}$ are respectively equal to $\mathbb{E}[f(G)]$ and $v(\theta) - \mathbb{E}[f(G)]^2$, where
$$v(\theta) = \mathbb{E}\!\left[ f(G)^2\, e^{-\theta^* G + \frac{|\theta|^2}{2}} \right].$$
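Both the invariance of the mean in (10) and the expression of $v(\theta)$ are easy to check numerically; the payoff below is an arbitrary choice of ours.

```python
import numpy as np

rng = np.random.default_rng(42)
G = rng.standard_normal(1_000_000)
f = lambda x: np.maximum(np.exp(x) - 1.0, 0.0)   # arbitrary square-integrable payoff

for theta in (0.0, 0.5, 1.5):
    shifted = f(G + theta) * np.exp(-theta * G - 0.5 * theta**2)
    # Equality (10): the mean does not depend on theta ...
    print(theta, shifted.mean(), f(G).mean())
    # ... while the second moment matches v(theta) = E[f(G)^2 e^{-theta G + theta^2/2}]
    print(theta, (shifted**2).mean(),
          (f(G)**2 * np.exp(-theta * G + 0.5 * theta**2)).mean())
```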
The strict convexity of the function $v$ is already known from [23], for instance. For the sake of completeness, we prove here a slightly improved version of this result.
Proposition 4.1.
Assume that
$$\mathbb{P}\big( f(G) \neq 0 \big) > 0, \tag{11}$$
$$\mathbb{E}\big[ |f(G)|^{2 + \varepsilon} \big] < \infty \quad \text{for some } \varepsilon > 0. \tag{12}$$
Then, $v$ is infinitely continuously differentiable and strongly convex.
Proof.
The function $\theta \mapsto f(x)^2\, e^{-\theta^* x + \frac{|\theta|^2}{2}}$ is infinitely continuously differentiable for every $x$. Since
$$\Big| \nabla_\theta \Big( f(x)^2\, e^{-\theta^* x + \frac{|\theta|^2}{2}} \Big) \Big| = f(x)^2\, e^{-\theta^* x + \frac{|\theta|^2}{2}}\, |\theta - x|,$$
where the right hand side, taken at $x = G$, is integrable locally uniformly in $\theta$ by Hölder's inequality and Equation (12), Lebesgue's theorem ensures that $v$ is continuously differentiable with
$$\nabla v(\theta) = \mathbb{E}\Big[ f(G)^2\, e^{-\theta^* G + \frac{|\theta|^2}{2}}\, (\theta - G) \Big].$$
Higher order differentiability properties are obtained by similar arguments, and in particular the Hessian matrix writes
$$\nabla^2 v(\theta) = \mathbb{E}\Big[ f(G)^2\, e^{-\theta^* G + \frac{|\theta|^2}{2}}\, \big( I_d + (\theta - G)(\theta - G)^* \big) \Big].$$
The second term in the above expectation is a positive semi-definite matrix, hence
$$\nabla^2 v(\theta) \geq v(\theta)\, I_d.$$
Moreover, by the Cauchy-Schwarz inequality, $\mathbb{E}[|f(G)|]^2 \le v(\theta)\, \mathbb{E}\big[ e^{\theta^* G - \frac{|\theta|^2}{2}} \big] = v(\theta)$, and Assumption (11) ensures that $\mathbb{E}[|f(G)|] > 0$. Then, the Hessian matrix is uniformly bounded from below by the positive definite matrix $\mathbb{E}[|f(G)|]^2\, I_d$. This yields the strong convexity of the function $v$. ∎
Proposition 4.1 implies that $v$ has a unique minimiser $\theta^\star$, characterised by the first order condition, i.e. $\nabla v(\theta^\star) = 0$.
4.2 Importance sampling for processes
Equality (10) can actually be extended to the Brownian motion framework using Girsanov's theorem. Let $(B_t)_{0 \le t \le T}$ be a $d$-dimensional Brownian motion and $(\mathcal{F}_t)_t$ its natural filtration. For any measurable and predictable process $(\theta_t)_{0 \le t \le T}$ satisfying Novikov's condition $\mathbb{E}\big[ e^{\frac{1}{2} \int_0^T |\theta_t|^2 dt} \big] < \infty$, one has, for any functional $\phi$ of the path with $\mathbb{E}[|\phi(B)|] < \infty$,
$$\mathbb{E}[\phi(B)] = \mathbb{E}\!\left[ \phi\Big( B + \int_0^{\cdot} \theta_s\, ds \Big)\, e^{-\int_0^T \theta_s^*\, dB_s - \frac{1}{2} \int_0^T |\theta_s|^2\, ds} \right].$$
Assume $\mathbb{E}[\phi(B)^2] < \infty$ and restrict the process $\theta$ to deterministic functions. The variance of the random variable inside the above expectation writes $v(\theta) - \mathbb{E}[\phi(B)]^2$ with
$$v(\theta) = \mathbb{E}\!\left[ \phi(B)^2\, e^{-\int_0^T \theta_s^*\, dB_s + \frac{1}{2} \int_0^T |\theta_s|^2\, ds} \right].$$
A similar result to Proposition 4.1 holds; in particular, $v$ is infinitely continuously differentiable, strictly convex and goes to infinity at infinity.
For more general processes , we refer the reader to [17].
4.3 The exponential change of measure
The idea of tilting a probability measure to find the one that minimises the variance is a very common idea, which can also be applied to a wide range of distributions; see for instance the recent results of Kawai [11, 10], in which an exponential change of measure, also known as the Esscher transform, is applied to Lévy processes.
Consider a random variable $X$ with values in $\mathbb{R}^d$ and cumulant generating function $\psi(\theta) = \log \mathbb{E}\big[ e^{\theta^* X} \big]$. We assume that $\psi(\theta) < \infty$ for all $\theta \in \mathbb{R}^d$. Let $p$ denote the density of $X$. We define the density $p_\theta$ by
$$p_\theta(x) = e^{\theta^* x - \psi(\theta)}\, p(x).$$
Let $X_\theta$ have $p_\theta$ as a density; then
$$\mathbb{E}[f(X)] = \mathbb{E}\!\left[ f(X_\theta)\, e^{-\theta^* X_\theta + \psi(\theta)} \right].$$
The variance of $f(X_\theta)\, e^{-\theta^* X_\theta + \psi(\theta)}$ writes $v(\theta) - \mathbb{E}[f(X)]^2$ with
$$v(\theta) = \mathbb{E}\!\left[ f(X)^2\, e^{-\theta^* X + \psi(\theta)} \right].$$
Obviously, this change of measure is only valuable as a variance reduction technique if $X_\theta$ can be simulated at approximately the same cost as $X$.
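A simple instance where this holds is the Poisson distribution, whose Esscher transform is again Poisson; the rare-event payoff below is our own toy example (the density $p_\theta$ is here taken with respect to the counting measure).

```python
import numpy as np

rng = np.random.default_rng(7)
lam, theta, n = 3.0, 0.8, 1_000_000
psi = lam * (np.exp(theta) - 1.0)     # cumulant generating function of Poisson(lam)
f = lambda k: (k >= 8).astype(float)  # a rare event under Poisson(3)

X = rng.poisson(lam, n)               # crude estimator
crude = f(X)

# Tilted sampling: p_theta(k) = exp(theta k - psi(theta)) p(k) is the pmf
# of a Poisson distribution with mean lam * exp(theta).
Xt = rng.poisson(lam * np.exp(theta), n)
tilted = f(Xt) * np.exp(-theta * Xt + psi)

print(crude.mean(), crude.var())      # same mean ...
print(tilted.mean(), tilted.var())    # ... much smaller variance
```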
Proposition 4.2.
Assume that
$$\mathbb{P}\big( f(X) \neq 0 \big) > 0, \tag{13}$$
$$\mathbb{E}\big[ f(X)^2\, e^{-\theta^* X} \big] < \infty \quad \text{for all } \theta \in \mathbb{R}^d. \tag{14}$$
Then, $v$ is infinitely continuously differentiable, convex and $\lim_{|\theta| \to \infty} v(\theta) = \infty$.
Proof.
The proof follows from the same arguments as the proof of Proposition 4.1. ∎
Remark 4.3.
If $X$ is a standard normal random vector, $\psi(\theta) = \frac{|\theta|^2}{2}$ and $X_\theta$ is a normal random vector with mean $\theta$ and identity covariance matrix. Hence, we recover Equality (10).
5 Application to the Gaussian random vector framework
5.1 Presentation of the problem
We consider a multidimensional local volatility model in which each asset $S^i$, $i = 1, \ldots, d$, is supposed to be driven by the following dynamics under the risk neutral measure:
$$dS_t^i = S_t^i \big( r\, dt + \sigma_i(t, S_t^i)\, dW_t^i \big),$$
where $W = (W^1, \ldots, W^d)$ is a vector of correlated standard Brownian motions. The covariance structure of the Brownian motions is given by $\langle W^i, W^j \rangle_t = \rho_{ij}\, t$, where $\rho = (\rho_{ij})_{ij}$ is a positive definite matrix with a diagonal filled with ones. In our numerical examples, we take $\rho_{ij} = \rho_0$ for $i \neq j$ with $-\frac{1}{d-1} < \rho_0 < 1$ to ensure that the matrix $\rho$ is positive definite. The function $\sigma_i$ is the local volatility function of asset $i$, $r$ is the instantaneous interest rate and the vector $S_0$ is the vector of the spot values. In this model, we want to price path-dependent options whose payoffs can be written as a function $\psi$ of $(S_{t_1}, \ldots, S_{t_N})$. Hence, the price is given by the discounted expectation $e^{-rT}\, \mathbb{E}\big[ \psi(S_{t_1}, \ldots, S_{t_N}) \big]$. Most of the time, this expectation must be computed by Monte Carlo methods, and one has to consider an approximation of $S$ on a time grid $0 = t_0 < t_1 < \cdots < t_N = T$. Then, the quantity of interest becomes the discounted expectation of the payoff applied to the discretised path.
The discretisation of the asset can for instance be obtained using an Euler scheme, which means that the discretised path can be expressed in terms of the Brownian increments, or equivalently of a standard normal random vector. These remarks finally turn the original pricing problem into the computation of an expectation of the form $\mathbb{E}[\phi(G)]$, where $G$ is a standard normal random vector in $\mathbb{R}^{Nd}$ and $\phi$ is a measurable and integrable function. Using Equation (10), we have for all $\theta \in \mathbb{R}^q$,
$$\mathbb{E}[\phi(G)] = \mathbb{E}\!\left[ \phi(G + A\theta)\, e^{-\theta^* A^* G - \frac{|A\theta|^2}{2}} \right], \tag{15}$$
where $A$ is an $Nd \times q$ matrix. The particular choice $q = Nd$ and $A = I_{Nd}$ corresponds to Equation (10). When $d = 1$, the choice $q = 1$ and $A = \big( \sqrt{t_1 - t_0}, \ldots, \sqrt{t_N - t_{N-1}} \big)^*$ corresponds to adding a linear drift to the one dimensional standard Brownian motion, and we recover the Cameron-Martin formula.
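For concreteness, here is a sketch of an Euler discretisation expressing the path as a function of a standard normal vector $G$ of size $Nd$; the interface and the constant-correlation example are ours.

```python
import numpy as np

def euler_path(G, s0, r, sigma, corr_chol, grid):
    """Euler scheme for dS^i_t = S^i_t (r dt + sigma_i(t, S^i_t) dW^i_t),
    written as a function of the standard normal vector G in R^{N d}
    (one block of d coordinates per time step)."""
    d, N = len(s0), len(grid) - 1
    S = np.empty((N + 1, d))
    S[0] = s0
    for k in range(N):
        dt = grid[k + 1] - grid[k]
        dW = np.sqrt(dt) * corr_chol @ G[k * d:(k + 1) * d]  # correlated increments
        vol = np.array([sigma(grid[k], S[k, i], i) for i in range(d)])
        S[k + 1] = S[k] * (1.0 + r * dt + vol * dW)
    return S

# Example: d = 3 assets, equicorrelation 0.3, constant volatility 0.2.
d, N, T = 3, 12, 1.0
corr = 0.3 * np.ones((d, d)) + 0.7 * np.eye(d)
grid = np.linspace(0.0, T, N + 1)
G = np.random.default_rng(1).standard_normal(N * d)
S = euler_path(G, np.full(d, 100.0), 0.05, lambda t, s, i: 0.2,
               np.linalg.cholesky(corr), grid)
```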
5.2 Bespoke estimators for the optimal variance parameter
It ensues from Proposition 4.1 that the second moment
$$v(\theta) = \mathbb{E}\!\left[ \phi(G)^2\, e^{-\theta^* A^* G + \frac{|A\theta|^2}{2}} \right] \tag{16}$$
is strongly convex and infinitely differentiable, and that
$$\nabla v(\theta) = \mathbb{E}\!\left[ \phi(G)^2\, e^{-\theta^* A^* G + \frac{|A\theta|^2}{2}}\, A^* (A\theta - G) \right]. \tag{17}$$
If we apply Equation (10) again, we obtain another expression for $\nabla v$:
$$\nabla v(\theta) = -\mathbb{E}\!\left[ A^* G\, \phi(G + A\theta)^2\, e^{-2\theta^* A^* G - |A\theta|^2} \right]. \tag{18}$$
Let us introduce the following two functions:
$$H_1(\theta, x) = \phi(x)^2\, e^{-\theta^* A^* x + \frac{|A\theta|^2}{2}}\, A^* (A\theta - x), \tag{19}$$
$$H_2(\theta, x) = -A^* x\, \phi(x + A\theta)^2\, e^{-2\theta^* A^* x - |A\theta|^2}. \tag{20}$$
Using either Equation (17) or (18), we can write $\nabla v(\theta) = \mathbb{E}[H_1(\theta, G)] = \mathbb{E}[H_2(\theta, G)]$. These two functions $H_1$ and $H_2$ fit in the framework of Section 3 and enable us to construct two estimators $(\theta_n)_n$ and $(\hat{\theta}_n)_n$ of $\theta^\star$ following Equation (7):
$$\theta_{n+1} = \pi_{K_{\sigma_n}}\!\big( \theta_n - \gamma_{n+1} H_1(\theta_n, G_{n+1}) \big), \tag{21}$$
$$\hat{\theta}_{n+1} = \pi_{K_{\hat{\sigma}_n}}\!\big( \hat{\theta}_n - \gamma_{n+1} H_2(\hat{\theta}_n, G_{n+1}) \big), \tag{22}$$
where $(G_n)_{n \ge 1}$ is an i.i.d. sequence of random vectors following the law of $G$. We also introduce their corresponding averaged versions $(\bar{\theta}_n)_n$ and $(\bar{\hat{\theta}}_n)_n$ following Equation (9). Based on Equation (15), we define
$$g(\theta, x) = \phi(x + A\theta)\, e^{-\theta^* A^* x - \frac{|A\theta|^2}{2}}.$$
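In Python, the two gradient estimators could look as follows (a sketch matching (19)-(20) as written above); either one can be plugged into the truncated algorithm of Section 3.1.

```python
import numpy as np

def H1(theta, x, phi, A):
    """Equation (19): evaluates phi at the raw sample x."""
    w = phi(x)**2 * np.exp(-theta @ (A.T @ x) + 0.5 * np.sum((A @ theta)**2))
    return w * (A.T @ (A @ theta - x))

def H2(theta, x, phi, A):
    """Equation (20): evaluates phi at the shifted point x + A theta,
    the very point used by the adaptive Monte Carlo average."""
    shift = A @ theta
    w = phi(x + shift)**2 * np.exp(-2.0 * theta @ (A.T @ x) - np.sum(shift**2))
    return -w * (A.T @ x)
```

The fact that `H2` and the Monte Carlo update share the same evaluation of $\phi$ is precisely what makes the estimator based on (22) cheaper, as discussed in Section 5.3.1.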
Corresponding to the different estimators of $\theta^\star$ listed above, we can define as many approximations of $\mathbb{E}[\phi(G)]$ following Equation (3), for instance
$$\xi_n = \frac{1}{n} \sum_{i=1}^{n} g(\theta_{i-1}, G_i) \quad \text{and} \quad \hat{\xi}_n = \frac{1}{n} \sum_{i=1}^{n} g(\hat{\theta}_{i-1}, G_i),$$
where the sequence $(G_n)_n$ has already been used to build the estimators of $\theta^\star$. From Proposition 4.1 and Theorems 3.1, 2.1 and 2.3, we can deduce the following result.
Theorem 5.1.
Proof.
We only do the proof for $(\theta_n)_n$ and $(\xi_n)_n$, as the same ideas can be applied to $(\hat{\theta}_n)_n$ and $(\hat{\xi}_n)_n$. We know from Proposition 4.1 that the function $v$ defined by Equation (16) is strongly convex and infinitely differentiable, hence satisfies Assumption (A1). Let $M > 0$. For any $\theta$ satisfying $|\theta| \le M$, $|H_1(\theta, G)|$ is dominated by a quantity whose expectation, using Hölder's inequality, can easily be proved to be uniformly bounded for $|\theta| \le M$. Hence Assumption (A2) is satisfied. Therefore, $\theta_n$ and $\bar{\theta}_n$ both converge to $\theta^\star$. Moreover, using the integrability of $\phi(G)^2$ and Hölder's inequality, one can prove that $\mathbb{E}\big[ \sup_{|\theta| \le M} g(\theta, G)^2 \big]$ is bounded for any $M > 0$; combining this with Lebesgue's theorem, we obtain that the functions $v$ and $\nabla v$ are continuous. Therefore, the convergence and asymptotic normality of $\xi_n$ follow from Theorems 2.1 and 2.3. ∎
Remark 5.2.
Theorem 5.1 extends the result of [2, Theorem 4]. Our result is valid for any increasing sequence of compact sets satisfying (6), whereas Arouna needed a condition on the compact sets to ensure the convergence of the estimators. The only condition required is some integrability of the payoff function, and nothing has to be checked along the algorithm paths, which is a great improvement from a practical point of view.
For the vast majority of payoff functions commonly used, the assumptions of Theorem 5.1 are satisfied.
5.3 Numerical results
5.3.1 Complexity of the different approximations
In the introduction, we presented two different strategies for implementing a variance reduction method based on the approximation of the optimal variance parameter. We know from Theorem 5.1 that the adaptive and non-adaptive algorithms both converge at the same rate and with the same limiting variance. Therefore, to decide which one is better, we have to compare their computational costs. In this section, we assume that the computational cost of the different algorithms is determined by the number of evaluations of the function $\phi$. We will see in the examples later that this assumption is realistic, and therefore it becomes obvious that the averaging and non-averaging estimators of $\theta^\star$ all have the same computational costs when carefully implemented.
The non-adaptive algorithm
We know from [2, 23] that the sequential algorithm converges at the rate $\sqrt{N}$ if, having $2N$ samples at hand, we implement it by using the first $N$ samples for approximating $\theta^\star$ and the last $N$ samples for actually computing the Monte Carlo estimator with the previously computed approximation of $\theta^\star$. Whatever approximation of $\theta^\star$ is used, be it $\theta_N$, $\bar{\theta}_N$, $\hat{\theta}_N$ or $\bar{\hat{\theta}}_N$, this algorithm requires $2N$ evaluations of the function $\phi$, whereas a crude Monte Carlo method only evaluates the function $N$ times and achieves a convergence rate of $\sqrt{N}$; hence this method only becomes efficient when the variance is reduced by a factor larger than $2$.
The adaptive algorithm
From Theorem 5.1, we know that the adaptive estimators all converge at the same rate, but, as we will see, they do not have the same computational cost. First, let us concentrate on the estimators based on $\theta_n$ and $\bar{\theta}_n$: at each iteration $n$, the function $\phi$ has to be computed twice, once at the point $G_{n+1} + A\theta_n$ (or $G_{n+1} + A\bar{\theta}_n$) to update the Monte Carlo estimator and once at the point $G_{n+1}$ to update $\theta_n$ or $\bar{\theta}_n$ through $H_1$. Hence, the computation of these estimators requires $2N$ evaluations of the function $\phi$. Similarly, the computation of the estimator based on $\bar{\hat{\theta}}_n$ requires two evaluations of the function $\phi$ at each step: one at the point $G_{n+1} + A\bar{\hat{\theta}}_n$ to update the Monte Carlo estimator and one at the point $G_{n+1} + A\hat{\theta}_n$ to update the stochastic algorithm. So the overall cost is still $2N$ evaluations of the function $\phi$. But looking closely at the computation of $\hat{\xi}_n$ immediately highlights the benefit of having put the parameter back into the function $\phi$ in the expression of $H_2$: the updates of $\hat{\xi}_n$ and $\hat{\theta}_n$ both use the evaluation of the function $\phi$ at the same point $G_{n+1} + A\hat{\theta}_n$. Hence, the computation of $\hat{\xi}_n$ only needs $N$ evaluations of the function $\phi$ instead of $2N$ for all the other algorithms. Obviously, the computational costs of the different estimators cannot really be reduced to the number of times the function $\phi$ is evaluated, so one should not expect computing $\hat{\xi}_n$ to be half as costly as the other estimators, but we will see in the examples below that the estimator $\hat{\xi}_n$ is indeed faster than the others.
To conclude briefly on the complexity of the different algorithms, be they sequential or adaptive, one should bear in mind that all the estimators except $\hat{\xi}_n$ roughly require twice the computational time of the crude Monte-Carlo method.
5.3.2 Practical implementation
The choice of using Equation (7) or (9) to build an estimator of $\theta^\star$ becomes really important when one has to implement the variance reduction procedure, either with Algorithm 1.1 or with Algorithm 1.2. Both the averaging and non-averaging strategies have pros and cons. The averaging algorithm theoretically converges a little more slowly but has a much smoother behaviour with respect to the adjustment of the gain sequence $(\gamma_n)_n$. Hence, to obtain a robust estimator — in the sense that the numerical convergence of the estimator does not depend too much on the choice of the gain sequence — the averaging procedure proves to be better in practice. The non-averaging algorithm should converge a little faster, even though we do not notice it in practice, as its convergence oscillates too much and is far more sensitive to the proper choice of the sequence $(\gamma_n)_n$. In the end, both algorithms produce very similar results regarding variance reduction; the averaging one is easier to tune but requires more computational time.
In the numerical experiments of this section, we compare the different algorithms on multi-asset options. The quantity “Var MC” denotes the variance of the crude Monte Carlo estimator computed on-line on a single run of the algorithm. The variance denoted “Var $\hat{\theta}$” (resp. “Var $\bar{\hat{\theta}}$”) is the variance of the ADIS algorithm (see Algorithm 1.2) which uses $\hat{\theta}_n$ (resp. $\bar{\hat{\theta}}_n$) to estimate $\theta^\star$. These variances are computed using the on-line estimator given by Corollary 2.4. These adaptive algorithms are also compared to the sequential strategy described by Algorithm 1.1, denoted by “$\hat{\theta}$+MC” or “$\bar{\hat{\theta}}$+MC” depending on how $\theta^\star$ is approximated. In all these algorithms, the matrix $A$ introduced in Equation (15) is chosen as the identity matrix. When $A$ is not the identity matrix, its purpose is to reduce the dimension of the space in which the optimal $\theta$ is searched, and in such cases the algorithms will be called “reduced”. Note that, for the comparison to be fair between the different strategies, we have used $N$ samples for the adaptive algorithms but $2N$ for the sequential algorithms, so that they all satisfy a central limit theorem with the rate $\sqrt{N}$.
Basket options
We consider options with payoffs of the form $\big( \sum_{i=1}^{d} \omega_i S_T^i - K \big)_+$, where $\omega$ is a vector of algebraic weights (enabling us to consider exchange options).
Table 1: Prices and variances for basket options.

| | K | | Price | Var MC | Var $\hat{\theta}$ | Var $\bar{\hat{\theta}}$ |
|---|---|---|---|---|---|---|
| 0.1 | 45 | 1 | 7.21 | 12.24 | 1.59 | 1.10 |
| | 55 | 10 | 0.56 | 1.83 | 0.19 | 0.14 |
| 0.2 | 50 | 0.1 | 3.29 | 13.53 | 1.82 | 1.76 |
| 0.5 | 45 | 0.1 | 7.65 | 43.25 | 6.25 | 4.97 |
| | 55 | 0.1 | 1.90 | 14.74 | 1.91 | 1.4 |
| 0.9 | 45 | 0.1 | 8.24 | 69.47 | 10.20 | 7.78 |
| | 55 | 0.1 | 2.82 | 30.87 | 2.7 | 2.6 |
Table 2: CPU times of the different estimators.

| Estimators | MC | $\hat{\theta}$ | $\bar{\hat{\theta}}$ |
|---|---|---|---|
| CPU time | 0.85 | 0.9 | 1.64 |
The results of Table 1 indicate that the adaptive algorithm using an averaging stochastic approximation outperforms not only the crude Monte Carlo approach but also the adaptive algorithm using a non-averaging stochastic approximation. The better performance of the algorithms using averaged estimators of $\theta^\star$ comes from the greater smoothness of the averaging algorithm (see Equation (9)). Nonetheless, these good results in terms of variance reduction must be considered together with the computational costs reported in Table 2. As explained in Section 5.3.1, we notice that the computational cost of the estimator $\hat{\xi}_n$ is very close to the one of the crude Monte Carlo estimator, because the implementation made the most of the fact that the updates of $\hat{\theta}_n$ and $\hat{\xi}_n$ both need to evaluate the function $\phi$ at the same point. Note that, because this implementation trick cannot be applied to $\bar{\hat{\theta}}_n$, the adaptive algorithm using an averaging stochastic approximation is twice as slow. For a given precision, the adaptive algorithm is several times faster than the crude Monte Carlo method.
Barrier Basket Options
We consider basket options in dimension $d$ with a discrete barrier on each asset. For instance, if we consider a Down and Out Call option, the payoff writes
$$\Big( \sum_{i=1}^{d} \omega_i S_T^i - K \Big)_+ \, \mathbb{1}_{\{ \forall i, j, \; S_{t_j}^i > L_i \}},$$
where $\omega$ is a vector of positive weights, $L$ is the vector of barriers, $K$ the strike value and $0 = t_0 < t_1 < \cdots < t_N = T$ the barrier dates. We consider one time step per month, which means that for an option with maturity time $T$ (in years), the number of time steps is $N = 12\,T$. Hence, if we use the identity matrix $A = I_{Nd}$, the parameter $\theta$ is of size $Nd$. Here, we propose to reduce the dimension of $\theta$, and we will see in Table 3 that this achieves almost the same variance reduction. Of course, the matrix $A$ cannot be chosen independently of the structure of the problem. Remember that the vector $G$ actually corresponds to the normalised increments of the Brownian motion with values in $\mathbb{R}^d$ on the grid $t_0 < t_1 < \cdots < t_N$. We recall that we can simulate the Brownian motion on the time grid by using the following equality in law
$$\big( W_{t_1}, \ldots, W_{t_N} \big) \overset{\text{law}}{=} \Big( \sum_{j \le k} \sqrt{t_j - t_{j-1}}\; G_j \Big)_{1 \le k \le N},$$
where $G_1, \ldots, G_N$ are independent standard normal random vectors in $\mathbb{R}^d$.
If we choose
$$A = \big( \sqrt{t_1 - t_0}\; I_d, \; \sqrt{t_2 - t_1}\; I_d, \; \ldots, \; \sqrt{t_N - t_{N-1}}\; I_d \big)^*,$$
where $I_d$ is the identity matrix in dimension $d$, then the transformation $G \mapsto G + A\theta$ corresponds to the transformation $W \mapsto W + \theta\, t$ on the Brownian motion, and it reduces the effective dimension of the importance sampling parameter to $d$ rather than $Nd$.
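Under the block structure written above, the reduced matrix $A$ can be built as follows (a sketch with our own naming).

```python
import numpy as np

def drift_matrix(grid, d):
    """(N d) x d matrix A with blocks sqrt(t_j - t_{j-1}) I_d, so that
    G -> G + A theta shifts the Brownian path by the linear drift
    t -> theta t and reduces the parameter dimension from N d to d."""
    N = len(grid) - 1
    blocks = [np.sqrt(grid[j + 1] - grid[j]) * np.eye(d) for j in range(N)]
    return np.vstack(blocks)
```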
Table 3: Prices and variances for barrier basket options.

| K | | Price | Var MC | Var $\hat{\theta}$ | Var $\bar{\hat{\theta}}$ | Var $\hat{\theta}$+MC | Var $\hat{\theta}$ reduced | Var $\bar{\hat{\theta}}$ reduced | Var $\hat{\theta}$+MC reduced |
|---|---|---|---|---|---|---|---|---|---|
| 45 | 0.5 | 2.37 | 22.46 | 4.92 | 3.52 | 2.59 | 2.64 | 2.62 | 2.60 |
| 50 | 1 | 1.18 | 10.97 | 1.51 | 1.30 | 0.79 | 0.80 | 0.80 | 0.79 |
| 55 | 1 | 0.52 | 4.85 | 0.39 | 0.38 | 0.19 | 0.24 | 0.23 | 0.19 |
Table 4: CPU times of the different estimators for barrier basket options.

| Estimators | MC | $\hat{\theta}$ | $\bar{\hat{\theta}}$ | $\hat{\theta}$+MC | $\hat{\theta}$ reduced | $\bar{\hat{\theta}}$ reduced | $\hat{\theta}$+MC reduced |
|---|---|---|---|---|---|---|---|
| CPU time | 1.86 | 1.93 | 3.34 | 4.06 | 1.89 | 2.89 | 3.90 |
First, we note from Table 3 that the reduced and non-reduced ADIS algorithms achieve almost the same variance reduction. Actually, it is even advisable to reduce the size of the importance sampling parameter, so as to reduce the noise in the stochastic approximation and therefore in the adaptive Monte Carlo estimator. Comparing the columns “Var MC”, “Var $\hat{\theta}$” and “Var $\hat{\theta}$+MC” points out that when the convergence of the estimator of $\theta^\star$ is too slow, the first iterates of the adaptive Monte Carlo estimator use wrong values of $\theta$ and therefore cannot reach the optimal variance, whereas if a sequential algorithm is used, all the iterates of the Monte Carlo estimator use the same and better approximation of $\theta^\star$. We can see in Table 3 that the variance of “$\hat{\theta}$+MC” is half the one of “$\hat{\theta}$” or “$\bar{\hat{\theta}}$”, but its CPU time is twice the one of “$\hat{\theta}$”, as noted in Table 4.
The reduced algorithms are a little faster than the non-reduced ones, but their real advantage is to converge much more stably and to achieve the same variance as “$\hat{\theta}$+MC” in far less computational time. As in the previous examples, we still observe that the estimator “$\hat{\theta}$ reduced” is faster than the others and has a variance very close to that of the best method, which is “$\hat{\theta}$+MC”.
6 Conclusion
In this work, we have explained how one can devise an adaptive variance reduction method for computing an expectation admitting a representation with a free parameter. Different algorithms have been studied both from a theoretical point of view and in practice. Although all the adaptive algorithms satisfy the same central limit theorem, they may behave very differently in practice; in particular, adaptive algorithms using a non-averaging stochastic approximation of the optimal variance parameter can be implemented in a clever way which makes them as fast as a crude Monte Carlo approach. Nevertheless, the numerical convergence of these stochastic algorithms is very sensitive to the tuning of their gain sequence, and one way to smooth this behaviour is to plug an averaging procedure on top of the stochastic approximation; but then the computational time significantly increases, and the dependency with respect to the gain sequence remains a serious drawback. To circumvent the fine tuning of the algorithm, Jourdain and Lelong [9] have recently suggested using deterministic optimisation techniques coupled with sample approximation, but their technique cannot be implemented in an adaptive manner, which increases its computational cost.
References
- [1] B. Arouna. Adaptative Monte Carlo method, a variance reduction technique. Monte Carlo Methods Appl., 10(1):1–24, 2004.
- [2] B. Arouna. Robbins-Monro algorithms and variance reduction in finance. The Journal of Computational Finance, 7(2), Winter 2003/2004.
- [3] H.F. Chen and Y.M. Zhu. Stochastic Approximation Procedure with randomly varying truncations. Scientia Sinica Series, 1986.
- [4] Paul Glasserman. Monte Carlo methods in financial engineering, volume 53 of Applications of Mathematics (New York). Springer-Verlag, New York, 2004. Stochastic Modelling and Applied Probability.
- [5] Paul Glasserman, Philip Heidelberger, and Perwez Shahabuddin. Asymptotically optimal importance sampling and stratification for pricing path-dependent options. Math. Finance, 9(2):117–152, 1999.
- [6] P. Hall and C. C. Heyde. Martingale limit theory and its application. Academic Press Inc. [Harcourt Brace Jovanovich Publishers], New York, 1980. Probability and Mathematical Statistics.
- [7] J. Jacod and A.N. Shiryaev. Limit Theorems for Stochastic Processes. Springer-Verlag Berlin, 1987.
- [8] B. Jourdain. Advanced Financial Modelling, chapter Adaptive variance reduction techniques in finance, pages 205–222. Radon Series Comp. Appl. Math 8. Walter de Gruyter, 2009.
- [9] B. Jourdain and J. Lelong. Robust adaptive importance sampling for normal random vectors. Annals of Applied Probability, 19(5):1687–1718, 2009.
- [10] Reiichiro Kawai. Adaptive Monte Carlo variance reduction for Lévy processes with two-time-scale stochastic approximation. Methodology and Computing in Applied Probability, 10(2):199–223, 2008.
- [11] Reiichiro Kawai. Optimal importance sampling parameter search for Lévy processes via stochastic approximation. SIAM Journal on Numerical Analysis, 47(1):293–307, 2010.
- [12] S. Kim and S. G. Henderson. Adaptive control variates. In Proceedings of the 2004 Winter Simulation Conference, 2004.
- [13] Sujin Kim and Shane G. Henderson. Adaptive control variates for finite-horizon simulation. Math. Oper. Res., 32(3):508–527, 2007.
- [14] Harold J. Kushner and Jichuan Yang. Stochastic approximation with averaging of the iterates: Optimal asymptotic rate of convergence for general processes. SIAM Journal on Control and Optimization, 31(4):1045–1062, 1993.
- [15] J. Lelong. Etude asymptotique des algorithmes stochastiques et calcul du prix des options Parisiennes. PhD thesis, Ecole Nationale des Ponts et Chaussées, http://tel.archives-ouvertes.fr/tel-00201373/fr/, 2007.
- [16] J. Lelong. Almost sure convergence of randomly truncated stochastic algorithms under verifiable conditions. Statistics & Probability Letters, 78(16), 2008.
- [17] V. Lemaire and G. Pagès. Unconstrained recursive importance sampling. Annals of Applied Probability (to appear), 2009.
- [18] D. Lépingle. Sur le comportement asymptotique des martingales locales. In Séminaire de Probabilités, XII (Univ. Strasbourg, Strasbourg, 1976/1977), volume 649 of Lecture Notes in Math., pages 148–161. Springer, Berlin, 1978.
- [19] Mariane Pelletier. Asymptotic almost sure efficiency of averaged stochastic algorithms. SIAM J. Control Optim., 39(1):49–72 (electronic), 2000.
- [20] B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM J. Control Optim., 30(4):838–855, 1992.
- [21] Rolando Rebolledo. Central limit theorems for local martingales. Z. Wahrsch. Verw. Gebiete, 51(3):269–286, 1980.
- [22] Herbert Robbins and Sutton Monro. A stochastic approximation method. Ann. Math. Statistics, 22:400–407, 1951.
- [23] Yi Su and Michael C. Fu. Optimal importance sampling in securities pricing. Journal of Computational Finance, 5(4):27–50, 2002.
- [24] Felisa J. Vázquez-Abad and Daniel Dufresne. Accelerated simulation for pricing asian options. In WSC ’98: Proceedings of the 30th conference on Winter simulation, pages 1493–1500, Los Alamitos, CA, USA, 1998. IEEE Computer Society Press.
- [25] Ward Whitt. Proofs of the martingale FCLT. Probab. Surv., 4:268–302, 2007.