Global sensitivity analysis for stochastic simulators based on generalized lambda surrogate models
Abstract
Global sensitivity analysis aims at quantifying the impact of input variability on the variation of the response of a computational model. It has been widely applied to deterministic simulators, for which a set of input parameters has a unique corresponding output value. Stochastic simulators, however, have intrinsic randomness due to their use of (pseudo)random numbers, so they give different results when run twice with the same input parameters but non-common random numbers. Due to this random nature, conventional Sobol’ indices, used in global sensitivity analysis, can be extended to stochastic simulators in different ways. In this paper, we discuss three possible extensions and focus on those that depend only on the statistical dependence between input and output. This choice ignores the detailed data generating process involving the internal randomness, and can thus be applied to a wider class of problems. We propose to use the generalized lambda model to emulate the response distribution of stochastic simulators. Such a surrogate can be constructed without the need for replications. The proposed method is applied to three examples, including two case studies in finance and epidemiology. The results confirm the convergence of the approach for estimating the sensitivity indices even in the presence of strong heteroskedasticity and a small signal-to-noise ratio.
1 Introduction
Computational models, a.k.a. simulators, have been extensively used to represent physical phenomena and engineering systems. They can help assess the reliability, control the risk and optimize the behavior of complex systems early in the design stage. Conventional simulators are usually deterministic, in the sense that repeated model evaluations with the same input parameters yield the same value of the output. In contrast, several runs of a stochastic simulator for a given set of input parameters provide different results. More precisely, the output of a stochastic simulator is a random variable following an unknown probability distribution. Hence, each model evaluation with the same input values generates a realization of this random variable. Mathematically, a stochastic simulator can be defined by
$$M_s: \mathcal{D}_{\boldsymbol{X}} \times \Omega \to \mathbb{R}, \qquad (\boldsymbol{x}, \omega) \mapsto M_s(\boldsymbol{x}, \omega), \tag{1}$$
where $\boldsymbol{x}$ is the input vector that belongs to the input space $\mathcal{D}_{\boldsymbol{X}} \subset \mathbb{R}^M$, and $\Omega$ denotes the probability space that represents the stochasticity. The intrinsic randomness is due to the fact that some latent variables inside the model are not explicitly considered as a part of the input variables: given a fixed input value $\boldsymbol{x}$, the output $M_s(\boldsymbol{x}, \cdot)$ of the simulator is a random variable.
In this respect, we can consider a stochastic simulator as a random field indexed by the parameters $\boldsymbol{x}$ [1]. For a given realization $\omega_0$ of the latent variables, the simulator becomes a deterministic function of $\boldsymbol{x}$. This is realized in practice by initializing the random seed to the same value before running the simulator for different $\boldsymbol{x}$’s, a trick known as common random numbers. The (classical) functions $M_s(\cdot, \omega_0)$ will be called trajectories in this paper. One particular trajectory corresponds to one particular value $\omega_0$.
In contrast, for a given $\boldsymbol{x}$, the output of the stochastic simulator is a random variable. Its distribution can be obtained by repeatedly running the simulator with the same $\boldsymbol{x}$, yet with different realizations of the latent variables; these repeated runs are called replications.
Stochastic simulators are ubiquitous in modern engineering, finance and medical sciences. Typical examples include stochastic differential equations (e.g., financial models [2]) and agent-based models (e.g., epidemiological models [3]). To a certain extent, physical experiments can also be considered as stochastic models, because we may not be able to measure and consider all the relevant variables that can uniquely determine the experimental conditions.
In practice, the input variables may be affected by uncertainty due to noisy measurements, expert judgment or lack of knowledge. Therefore, they are modeled as random variables and grouped into a random vector $\boldsymbol{X}$, which is characterized by a joint distribution. Quantification of the contribution of input variability to the output uncertainty is a major task in sensitivity analysis [4]. It allows us to identify the most important set of input variables that dominate the output variability and also to identify non-influential variables. This information provides more insight into the simulator and can be further used for model calibration and decision making [5].
A large number of methods have been successfully developed to perform sensitivity analysis in the context of deterministic simulators [4, 6, 7]. Among others, the variance-based sensitivity analysis, also referred to as Sobol’ indices [8], is one of the most popular approaches, which relies on the analysis of variance. Several extensions of Sobol’ indices to stochastic simulators can be found in the literature, depending on the treatment of the intrinsic randomness. It is worth emphasizing that the overall uncertainty now consists of two parts, namely the inherent stochasticity in the latent variables and the uncertainty in the input parameters . Iooss and Ribatet [9] include the latent variables as a part of the input, which results in a natural extension of the classical Sobol’ indices to stochastic simulators. Hart et al. [10] and Jimenez et al. [11] define the Sobol’ indices as functions of the latent variables which, as a consequence, become random variables whose statistical properties can be studied. Recently, Azzi et al. [12] propose to represent the intrinsic randomness by the entropy of the response distribution and to calculate the classical Sobol’ indices on the latter. All in all, relatively little attention has been devoted to sensitivity analysis for stochastic simulators.
Sensitivity analysis usually requires a large number of model evaluations for different realizations of the input vector. Due to the intrinsic randomness of stochastic simulators, an additional layer of stochasticity comes on top of the input uncertainty, which requires repeated runs with the same input parameters to fully characterize the model response. As a consequence, such analyses become intractable when the simulator is expensive to evaluate. To alleviate the computational burden, surrogate models can be constructed to mimic the original numerical model at a smaller computational cost. Large efforts have been dedicated to emulating the mean and variance function of stochastic simulators [13, 14, 15]. These two functions provide only the first two moments of the response distribution and are mostly used to estimate the Sobol’ indices proposed in [9]. In recent papers [16, 17], we developed a novel surrogate model, called generalized lambda model (GLaM), to emulate the whole response distribution of stochastic simulators. This model uses generalized lambda distributions (GLD) [18] to flexibly approximate the response distribution, while the distribution parameters cast as functions of the inputs are approximated through polynomial chaos expansions (PCE) [19].
Based on these premises, the goal of this paper is to establish a clear framework to carry out global sensitivity analysis for stochastic simulators, and to propose efficient computational approaches based on GLaM stochastic emulators. Therefore, the original contributions of this paper are two-fold. On the one hand, we give a thorough review of the current development of global sensitivity analysis for stochastic simulators. We point out the nature and the properties of different extensions of Sobol’ indices, which provides a general guideline to their usage. On the other hand, we present a unified framework based on generalized lambda models to calculate a whole variety of global sensitivity indices using this single surrogate.
The paper is organized as follows. First, we review three extensions of Sobol’ indices to stochastic simulators in Section 2. In Section 3, we present the framework of GLaMs [16]. There, we recap the fitting procedure proposed in [17], where it is emphasized that there is no need for replicated runs. Then, we discuss the use of GLaMs for estimating different types of Sobol’ indices. In Section 4, we illustrate the performance of GLaMs on three examples. While the first example is analytical, the second and third ones are realistic case studies in finance and epidemiology, respectively. Finally, we summarize the main findings of the paper and provide an outlook for future research in Section 5.
2 Global sensitivity analysis of stochastic simulators
2.1 Sobol’ indices
Variance-based sensitivity analysis has been extensively studied and successfully developed in the context of deterministic simulators. For a deterministic model $Y = M(\boldsymbol{X})$, Sobol’ indices quantify the contribution of each input variable $X_i$, or combination thereof, to the variance of the model output $Y$.
In this paper, we assume that the $X_i$’s are mutually independent. Let us split the input vector into two subsets $\boldsymbol{X} = \left(\boldsymbol{X}_u, \boldsymbol{X}_{\sim u}\right)$, where $u \subset \{1, \ldots, M\}$ and $\sim u$ is the complement of $u$, i.e., $\sim u = \{1, \ldots, M\} \setminus u$. From the total variance theorem, the variance of the output can be decomposed as
$$\operatorname{Var}\left[Y\right] = \operatorname{Var}\left[\mathbb{E}\left[Y \mid \boldsymbol{X}_u\right]\right] + \mathbb{E}\left[\operatorname{Var}\left[Y \mid \boldsymbol{X}_u\right]\right]. \tag{2}$$
The first-order and total Sobol’ indices introduced by Sobol’ [8] and Homma and Saltelli [20] for the subset of input variables are defined by
$$S_u = \frac{\operatorname{Var}\left[\mathbb{E}\left[Y \mid \boldsymbol{X}_u\right]\right]}{\operatorname{Var}\left[Y\right]}, \qquad S_u^{T} = \frac{\mathbb{E}\left[\operatorname{Var}\left[Y \mid \boldsymbol{X}_{\sim u}\right]\right]}{\operatorname{Var}\left[Y\right]}. \tag{3}$$
Higher-order Sobol’ indices can be defined with the help of the first-order indices of groups of variables. For example, the second-order or two-factor interaction Sobol’ index of $X_i$ and $X_j$ is given by
$$S_{ij} = S_{\{i,j\}} - S_i - S_j, \tag{4}$$
where $S_{\{i,j\}}$ is the first-order Sobol’ index of the pair $\left(X_i, X_j\right)$, and we write $S_i$ for $S_{\{i\}}$ for the sake of simplicity.
In the context of stochastic simulators defined in Equation 1, the input variables $\boldsymbol{X}$ alone do not determine the value of the output. Iooss and Ribatet [9] extend the input vector by adding the internal source of randomness represented by latent variables $\boldsymbol{Z}$, which turns the stochastic simulator into a deterministic one. In this case, all the input variables are gathered in $\left(\boldsymbol{X}, \boldsymbol{Z}\right)$, and thus the Sobol’ indices in Equation 3 can be naturally extended to
$$S_u = \frac{\operatorname{Var}\left[\mathbb{E}\left[Y \mid \boldsymbol{X}_u\right]\right]}{\operatorname{Var}\left[Y\right]}, \qquad S_u^{T} = \frac{\mathbb{E}\left[\operatorname{Var}\left[Y \mid \boldsymbol{X}_{\sim u}, \boldsymbol{Z}\right]\right]}{\operatorname{Var}\left[Y\right]}. \tag{5}$$
Note that the first-order index $S_u$ has the same expression as in the case of deterministic simulators, whereas the total index $S_u^{T}$ contains the additional variables $\boldsymbol{Z}$ [14]. Similarly, higher-order Sobol’ indices corresponding to interactions among components of $\boldsymbol{X}$ are defined in the same way as for deterministic simulators, whereas interactions between components of $\boldsymbol{X}$ and $\boldsymbol{Z}$ involve $\boldsymbol{Z}$ in their definition. Since this is a direct extension, the Sobol’ indices defined in Equation 5 are referred to as classical Sobol’ indices in the sequel.
Another way to extend Sobol’ indices to stochastic simulators is to first eliminate the internal randomness by representing the response random variable by some summarizing statistical quantity, called here quantity of interest (QoI) and denoted by $q(\boldsymbol{x})$, such as the mean value, the variance [9], the $\alpha$-quantile [21] or the differential entropy [12]. As a result, the stochastic simulator is reduced to a deterministic function $\boldsymbol{x} \mapsto q(\boldsymbol{x})$, and we can calculate the associated QoI-based Sobol’ indices as follows:
$$S_u^{q} = \frac{\operatorname{Var}\left[\mathbb{E}\left[q(\boldsymbol{X}) \mid \boldsymbol{X}_u\right]\right]}{\operatorname{Var}\left[q(\boldsymbol{X})\right]}, \qquad S_u^{T,q} = \frac{\mathbb{E}\left[\operatorname{Var}\left[q(\boldsymbol{X}) \mid \boldsymbol{X}_{\sim u}\right]\right]}{\operatorname{Var}\left[q(\boldsymbol{X})\right]}. \tag{6}$$
A third extension is defined by considering a stochastic simulator as a random field. For a fixed internal randomness $\omega$, the stochastic simulator is a deterministic function of the input variables, which corresponds to a trajectory. Hence, the associated Sobol’ indices are well-defined. Yet, they are random variables because of their dependence on $\omega$ [10], which results in the trajectory-based Sobol’ indices:
$$S_u(\omega) = \frac{\operatorname{Var}_{\boldsymbol{X}_u}\left[\mathbb{E}_{\boldsymbol{X}_{\sim u}}\left[M_s(\boldsymbol{X}, \omega) \mid \boldsymbol{X}_u\right]\right]}{\operatorname{Var}_{\boldsymbol{X}}\left[M_s(\boldsymbol{X}, \omega)\right]}, \qquad S_u^{T}(\omega) = \frac{\mathbb{E}_{\boldsymbol{X}_{\sim u}}\left[\operatorname{Var}_{\boldsymbol{X}_u}\left[M_s(\boldsymbol{X}, \omega) \mid \boldsymbol{X}_{\sim u}\right]\right]}{\operatorname{Var}_{\boldsymbol{X}}\left[M_s(\boldsymbol{X}, \omega)\right]}, \tag{7}$$
where the indices of the variance operators correspond to those variables to which these operators apply.
2.2 Discussion
The three types of Sobol’ indices introduced above have different natures and focuses. The classical Sobol’ indices defined in Equation 5 treat the latent variables $\boldsymbol{Z}$ as a set of separate input variables. As a result, indices of this type treat $\boldsymbol{Z}$ in the same way as $\boldsymbol{X}$. The first-order index $S_u$ indicates how much the output variance can be reduced (in expectation) if we could fix the value of $\boldsymbol{X}_u$. Besides, the classical Sobol’ indices can also quantify the influence of the intrinsic randomness as well as its interactions with the input variables.
The QoI-based Sobol’ indices defined in Equation 6 help study a specific statistical quantity of the model response, which is a deterministic function of the inputs. Using a summary quantity to represent the random output might lead to a loss of information [10], unless this quantity itself is of interest. For example, we may want to find the variable(s) that have the largest effect on the 95% quantile (i.e., $q(\boldsymbol{x})$ being the $\alpha$-quantile with $\alpha = 0.95$) of the model response. However, the importance ranking of the inputs $X_i$’s can be quite different depending on the choice of the QoI.
Unlike the previous two types of indices, the trajectory-based Sobol’ indices presented in Equation 7 are random variables. This is because the latent variables and the input parameters are treated differently: conditioned on a given $\omega$, the stochastic simulator reduces to a deterministic function of $\boldsymbol{x}$, and we can calculate the associated (classical) Sobol’ indices. Evaluating the probability distribution of these trajectory-based Sobol’ indices requires that the same random seeds can be explicitly fixed in the simulator when running it for different values of $\boldsymbol{x}$. In a sense, trajectory-based Sobol’ indices emphasize the variation of the trajectories of stochastic simulators. They typically show the importance of each input variable in terms of its contribution to the variability of trajectories.
For stochastic simulators, the question “which input variable has the strongest effect?” is rather vague and cannot be answered by a single type of index. The analyst should properly pose the problem and select the appropriate sensitivity measures. If one aims at reducing the variance of the model output, the classical Sobol’ indices are of interest. If one is interested in some summary QoIs (which is often the case for applications, e.g., quantiles in reliability analysis), the QoI-based indices are more appropriate. Finally, if one can control the internal randomness and is interested in the variability of the model output for fixed intrinsic stochasticity, trajectory-based indices should be selected.
To further illustrate this discussion, let us consider the stochastic simulator
where , and are independent random variables following standard normal distributions. The classical first-order Sobol’ indices are and . Therefore, if we want to primarily reduce the variance of , should be investigated. For the response mean function , we have and , which indicates that contributes fully to the variation of the mean function. In contrast, for the variance function , and reveal a different order. Regarding the trajectory-based indices, and are two random variables (due to the randomness in ). Their probability distributions characterize how the internal randomness affects the Sobol’ indices of the trajectories obtained by fixing a single realization of the latent variable. In summary, even for this simple toy example, the various indices lead to different conclusions.
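To make the distinction concrete, the following minimal sketch estimates the three viewpoints numerically on a hypothetical simulator $Y = X_1 + X_2 Z$ with independent standard normal inputs and latent variable (this is an illustrative stand-in, not the exact example above, whose expression is omitted here):

```python
# Hypothetical toy simulator (for illustration only): Y(x1, x2, Z) = x1 + x2 * Z,
# with X1, X2 and the latent variable Z independent standard normal.
import numpy as np

rng = np.random.default_rng(0)

def simulator(x1, x2, z):
    return x1 + x2 * z

# --- Classical Sobol' index of X1: treat Z as an additional input of a
# deterministic map and use double-loop Monte Carlo for Var[E[Y | X1]].
x1 = rng.standard_normal(2000)
cond_mean = np.array([simulator(v, rng.standard_normal(500),
                                rng.standard_normal(500)).mean() for v in x1])
var_y = 2.0                                  # Var[Y] = Var[X1] + Var[X2 * Z] = 1 + 1
S1_classical = cond_mean.var() / var_y       # close to the exact value 0.5

# --- QoI-based indices: reduce the simulator to deterministic summaries of x.
# Here the mean function m(x) = x1 depends on X1 only, while the variance
# function v(x) = x2**2 depends on X2 only, so the two QoIs rank the inputs
# in opposite ways.

# --- Trajectory-based indices: freeze the internal randomness (common random
# numbers); the resulting Sobol' indices are random variables indexed by z.
z_fixed = rng.standard_normal()              # one realization of the latent variable
S1_trajectory = 1.0 / (1.0 + z_fixed**2)     # Var[X1] / Var[X1 + X2 * z_fixed]

print(S1_classical, S1_trajectory)
```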
It is worth remarking that the classical Sobol’ indices in Equation 5 share some common properties with the mean-based Sobol’ indices in Equation 6, when the QoI is the mean function $m(\boldsymbol{x}) = \mathbb{E}\left[Y \mid \boldsymbol{X} = \boldsymbol{x}\right]$:
$$S_u^{m} = \frac{\operatorname{Var}\left[\mathbb{E}\left[m(\boldsymbol{X}) \mid \boldsymbol{X}_u\right]\right]}{\operatorname{Var}\left[m(\boldsymbol{X})\right]}. \tag{8}$$
According to the law of total expectation, we have
$$\mathbb{E}\left[Y \mid \boldsymbol{X}_u\right] = \mathbb{E}\left[\mathbb{E}\left[Y \mid \boldsymbol{X}\right] \mid \boldsymbol{X}_u\right] = \mathbb{E}\left[m(\boldsymbol{X}) \mid \boldsymbol{X}_u\right]. \tag{9}$$
As a result, $S_u$ can be rewritten as
$$S_u = \frac{\operatorname{Var}\left[\mathbb{E}\left[m(\boldsymbol{X}) \mid \boldsymbol{X}_u\right]\right]}{\operatorname{Var}\left[Y\right]}. \tag{10}$$
This implies that both the classical Sobol’ indices and the mean-based Sobol’ indices provide the same ranking, as the numerators of Equations 8 and 10 are identical. However, it is worth emphasizing that $S_u$ is not equal to $S_u^{m}$, and the two measure different quantities, since the denominator of Equation 10 is $\operatorname{Var}\left[Y\right]$ whereas that of Equation 8 is $\operatorname{Var}\left[m(\boldsymbol{X})\right]$.
As a summary for all three extensions, a stochastic simulator is essentially transformed into a reduced deterministic model at a certain stage. The classical Sobol’ indices include the latent variables as a part of the inputs. The QoI-based Sobol’ indices rely on a deterministic representation. The trajectory-based indices are random variables whose statistical properties can be studied at the cost of repeating a standard Sobol’ analysis for different realizations of the latent variables separately. As a result, the three types of sensitivity indices can be estimated by modifying only slightly the standard methods based on Monte Carlo simulation developed for deterministic simulators [4].
The classical Sobol’ indices involving the latent variables (e.g., the total indices in Eq. 5) and the trajectory-based Sobol’ indices require controlling $\boldsymbol{Z}$. In practical computations, this is achieved by fixing the random seed in the computational model [14, 10]. However, for certain types of stochastic simulators, or when the data are generated by physical experiments, it may be difficult to control or even identify $\boldsymbol{Z}$. For the sake of general applicability, we focus in this paper only on Sobol’ indices that can be estimated by manipulating the input variables $\boldsymbol{X}$ alone, that is, the QoI-based Sobol’ indices and, to some extent, the classical Sobol’ indices.
Using Monte Carlo simulations to estimate these indices requires evaluating the simulator for various realizations of the input vector. In addition, it is generally necessary to evaluate the function $\boldsymbol{x} \mapsto q(\boldsymbol{x})$ to calculate the associated QoI-based Sobol’ indices. However, this function is not directly accessible due to the intrinsic randomness. Therefore, it is usually estimated by using replications: for each realization $\boldsymbol{x}^{(i)}$, the simulator is run many times, and $q\left(\boldsymbol{x}^{(i)}\right)$ is estimated from the output samples. Both factors call for a large number of model runs, which becomes impracticable for costly models. Therefore, the use of surrogate models is unavoidable.
In the sequel, we present the generalized lambda model as a stochastic surrogate. Such a model emulates the response distribution conditional on the input $\boldsymbol{X} = \boldsymbol{x}$, which fully characterizes the statistical dependence between the inputs and the output. Therefore, it can be used to estimate the considered Sobol’ indices.
3 Generalized lambda models
Generalized lambda models consist of mainly two parts: the generalized lambda distribution and polynomial chaos expansions. In this section, we briefly recap these two elements and present an algorithm to construct such a model without the need for replicated runs of the simulator. Then, we discuss how to estimate the sensitivity indices from the surrogate.
3.1 Generalized lambda distributions
The generalized lambda distribution is a flexible distribution family, which is designed to approximate many common distributions [18], e.g., normal, lognormal, Weibull and generalized extreme value distributions. A GLD is defined by its quantile function $Q(u; \boldsymbol{\lambda})$ with $u \in [0, 1]$, that is, the inverse of the cumulative distribution function $F_Y$. In this paper, we consider the GLD of the Freimer–Kollia–Mudholkar–Lin (FKML) family [22] with four parameters $\boldsymbol{\lambda} = \left(\lambda_1, \lambda_2, \lambda_3, \lambda_4\right)$, whose quantile function is defined by
$$Q(u; \boldsymbol{\lambda}) = \lambda_1 + \frac{1}{\lambda_2}\left(\frac{u^{\lambda_3} - 1}{\lambda_3} - \frac{(1 - u)^{\lambda_4} - 1}{\lambda_4}\right), \tag{11}$$
where $\lambda_1$ is the location parameter, $\lambda_2$ is the scaling parameter, and $\lambda_3$ and $\lambda_4$ are the shape parameters. $\lambda_2$ is required to be positive to produce valid quantile functions (i.e., $Q$ being non-decreasing on $[0, 1]$). Based on the quantile function defined in Equation 11, the probability density function (PDF) of a random variable $Y$ following a GLD is given by
$$f_Y(y; \boldsymbol{\lambda}) = \frac{\lambda_2}{u^{\lambda_3 - 1} + (1 - u)^{\lambda_4 - 1}}\, \mathbb{1}_{[0,1]}(u), \qquad u = Q^{-1}(y; \boldsymbol{\lambda}), \tag{12}$$
where $\mathbb{1}_{[0,1]}$ is the indicator function. A closed-form expression of $Q^{-1}$ is not available for arbitrary values of $\lambda_3$ and $\lambda_4$. Therefore, evaluating the PDF for a given $y$ usually requires solving the nonlinear equation $Q(u; \boldsymbol{\lambda}) = y$ numerically.
GLDs cover a wide range of shapes determined by $\lambda_3$ and $\lambda_4$ [16]. For instance, $\lambda_3 = \lambda_4$ produces symmetric PDFs, and $\lambda_3 < \lambda_4$ (resp. $\lambda_3 > \lambda_4$) results in left-skewed (resp. right-skewed) distributions. Moreover, $\lambda_3$ and $\lambda_4$ are closely linked to the support and the tail behavior of the corresponding PDF. More precisely, $\lambda_3$ and $\lambda_4$ control the left and right tail, respectively. Whereas $\lambda_3 > 0$ implies that the PDF support is left-bounded, $\lambda_3 \leq 0$ implies that the distribution has an unbounded lower support. Similarly, $\lambda_4 > 0$ implies that the PDF support is right-bounded, whereas it is unbounded for $\lambda_4 \leq 0$. In addition, for $\lambda_3 < 0$ (resp. $\lambda_4 < 0$), the left (resp. right) tail decays asymptotically as a power law. Hence, GLDs can also provide fat-tailed distributions. The reader is referred to [16, 22] for a more detailed presentation of GLDs.
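As an illustration, the short sketch below implements the FKML quantile function of Equation 11 and evaluates the PDF of Equation 12 by numerically inverting it with bisection; the parameter values are arbitrary:

```python
# Sketch of the FKML generalized lambda distribution: quantile function (Eq. 11)
# and PDF evaluation (Eq. 12) by numerically inverting Q with bisection.
import numpy as np

def gld_quantile(u, lam):
    """FKML quantile function Q(u; lambda), lam = (lam1, lam2, lam3, lam4)."""
    l1, l2, l3, l4 = lam
    return l1 + ((u**l3 - 1.0) / l3 - ((1.0 - u)**l4 - 1.0) / l4) / l2

def gld_pdf(y, lam, tol=1e-10):
    """PDF f_Y(y) = lam2 / (u**(l3-1) + (1-u)**(l4-1)) with u = Q^{-1}(y)."""
    l1, l2, l3, l4 = lam
    lo, hi = 0.0, 1.0
    while hi - lo > tol:                     # bisection: Q is non-decreasing on (0, 1)
        mid = 0.5 * (lo + hi)
        if gld_quantile(mid, lam) < y:
            lo = mid
        else:
            hi = mid
    u = 0.5 * (lo + hi)
    return l2 / (u**(l3 - 1.0) + (1.0 - u)**(l4 - 1.0))

# Example: a right-skewed GLD
lam = (0.0, 1.0, 0.1, 0.5)
print(gld_quantile(0.5, lam), gld_pdf(gld_quantile(0.5, lam), lam))
```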
3.2 Polynomial chaos expansions
Consider a deterministic model $\tilde{M}$ that maps a set of input parameters $\boldsymbol{x} \in \mathcal{D}_{\boldsymbol{X}} \subset \mathbb{R}^M$ to the output $y = \tilde{M}(\boldsymbol{x})$. Under the assumption that $Y = \tilde{M}(\boldsymbol{X})$ has finite variance, $\tilde{M}$ belongs to the Hilbert space of functions that are square-integrable with respect to the probability measure of $\boldsymbol{X}$. If the joint distribution of $\boldsymbol{X}$ satisfies certain conditions [23], the simulator admits a spectral representation in terms of orthogonal polynomials:
$$\tilde{M}(\boldsymbol{x}) = \sum_{\boldsymbol{\alpha} \in \mathbb{N}^M} c_{\boldsymbol{\alpha}}\, \psi_{\boldsymbol{\alpha}}(\boldsymbol{x}), \tag{13}$$
where $\psi_{\boldsymbol{\alpha}}$ is a multivariate polynomial basis function indexed by the multi-index $\boldsymbol{\alpha} \in \mathbb{N}^M$, and $c_{\boldsymbol{\alpha}}$ denotes the associated coefficient. The orthogonal basis can be obtained by using tensor products of univariate polynomials, each of which is orthogonal with respect to the probability measure of the $i$-th variable $X_i$:
$$\psi_{\boldsymbol{\alpha}}(\boldsymbol{x}) = \prod_{i=1}^{M} \phi^{(i)}_{\alpha_i}(x_i). \tag{14}$$
Details about the construction of this generalized polynomial chaos expansion can be found in [24, 25].
The PCE defined in Equation 13 contains an infinite sum of terms. However, in practice, it is only feasible to use a finite series as an approximation. To this end, truncation schemes are adopted to select a set of basis functions defined by a finite subset of multi-indices. A typical scheme is the hyperbolic (a.k.a. -norm) truncation scheme [26] given by
$$\mathcal{A}^{p,q} = \left\{\boldsymbol{\alpha} \in \mathbb{N}^M : \left\|\boldsymbol{\alpha}\right\|_q \leq p\right\}, \qquad \left\|\boldsymbol{\alpha}\right\|_q = \left(\sum_{i=1}^{M} \alpha_i^{\,q}\right)^{1/q}, \tag{15}$$
where $p$ is the maximum degree of the polynomials, and $q \in (0, 1]$ defines the quasi-norm $\left\|\cdot\right\|_q$. Note that with $q = 1$, we obtain the full basis of total degree not larger than $p$, which corresponds to the standard truncation scheme.
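A minimal sketch of how such a truncation set can be enumerated is given below (a brute-force construction, sufficient for moderate dimensions):

```python
# Sketch: build the hyperbolic (q-norm) truncation set A^{p,q} of Eq. (15)
# for M input variables, maximum degree p and quasi-norm q.
import itertools

def hyperbolic_index_set(M, p, q):
    """Return all multi-indices alpha in N^M with ||alpha||_q <= p."""
    indices = []
    for alpha in itertools.product(range(p + 1), repeat=M):
        if sum(a**q for a in alpha) ** (1.0 / q) <= p + 1e-12:
            indices.append(alpha)
    return indices

# Example: 3 inputs, degree 5, q = 0.75 keeps high-degree univariate terms
# but discards most high-order interaction terms.
A = hyperbolic_index_set(M=3, p=5, q=0.75)
print(len(A), A[:5])
```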
3.3 Formulation of generalized lambda models
Because of the flexibility of GLDs, we assume that the response random variable $Y(\boldsymbol{x}) = M_s(\boldsymbol{x}, \cdot)$ of a stochastic simulator for a given input vector $\boldsymbol{x}$ can be well approximated by a GLD. Hence, the associated distribution parameters are functions of the input variables:
$$Y(\boldsymbol{x}) \sim \mathrm{GLD}\left(\lambda_1(\boldsymbol{x}), \lambda_2(\boldsymbol{x}), \lambda_3(\boldsymbol{x}), \lambda_4(\boldsymbol{x})\right). \tag{16}$$
Under appropriate conditions discussed in Section 3.2, each component of $\boldsymbol{\lambda}(\boldsymbol{x})$ can be represented by a series of orthogonal polynomials. Because $\lambda_2$ is required to be positive (see Section 3.1), the associated polynomial chaos representation is built on the natural logarithmic transform $\log \lambda_2(\boldsymbol{x})$. This results in the following approximations:
$$\lambda_s(\boldsymbol{x}) \approx \lambda_s^{\mathrm{PC}}(\boldsymbol{x}; \boldsymbol{c}) = \sum_{\boldsymbol{\alpha} \in \mathcal{A}_s} c_{s,\boldsymbol{\alpha}}\, \psi_{\boldsymbol{\alpha}}(\boldsymbol{x}), \qquad s \in \{1, 3, 4\}, \tag{17}$$
$$\lambda_2(\boldsymbol{x}) \approx \lambda_2^{\mathrm{PC}}(\boldsymbol{x}; \boldsymbol{c}) = \exp\left(\sum_{\boldsymbol{\alpha} \in \mathcal{A}_2} c_{2,\boldsymbol{\alpha}}\, \psi_{\boldsymbol{\alpha}}(\boldsymbol{x})\right), \tag{18}$$
where $\mathcal{A}_s$ ($s = 1, \ldots, 4$) is a finite set of selected basis functions for $\lambda_s$, and the $c_{s,\boldsymbol{\alpha}}$’s are the coefficients. For the purpose of clarity, we explicitly express the dependence on $\boldsymbol{c}$ in the PC approximations to emphasize that these coefficients are the model parameters yet to be estimated from data.
3.4 GLaM constructions
We assume that our costly stochastic simulator is evaluated once for each point of the experimental design $\mathcal{X}$, and the associated model responses are collected in $\mathcal{Y}$:
$$\mathcal{X} = \left\{\boldsymbol{x}^{(1)}, \ldots, \boldsymbol{x}^{(N)}\right\}, \qquad \mathcal{Y} = \left\{y^{(1)}, \ldots, y^{(N)}\right\}, \qquad y^{(i)} = M_s\left(\boldsymbol{x}^{(i)}, \omega^{(i)}\right). \tag{19}$$
As already mentioned (and as emphasized by the notation in Equation 19), no replications are required, and we do not control the random seed. To construct a GLaM from the available data $(\mathcal{X}, \mathcal{Y})$, both the truncated sets of basis functions and the coefficients shall be determined. In this section, we summarize the method proposed in [17], which is designed to achieve both purposes without the need for replications.
Sometimes prior knowledge is available to set the basis functions. For example, when working with a standard linear regression problem, the data is supposed to be generated by
$$Y(\boldsymbol{x}) = \boldsymbol{a}^{\mathsf{T}} \boldsymbol{x} + b + \varepsilon, \tag{20}$$
where the noise term $\varepsilon$ has mean zero and is independent of $\boldsymbol{x}$. This case can be treated within the GLaM framework as follows: the basis for $\lambda_1$ contains the constant and linear terms, whereas the bases for $\lambda_2$, $\lambda_3$ and $\lambda_4$ contain only a constant term. Note that such a GLaM allows estimating the distribution of $\varepsilon$, which is not required to be normal, whereas the usual linear regression framework assumes a normally distributed $\varepsilon$.
However, in general there is no prior knowledge that would help select the truncation sets. Thus, we make the following assumptions to find appropriate hyperbolic truncation schemes, as defined in Equation 15, for each distribution parameter:

- (A1) The response distribution of $Y(\boldsymbol{x})$ can be well-approximated by a generalized lambda distribution;
- (A2) The shape of this distribution varies smoothly as a function of $\boldsymbol{x}$, so that the distribution parameters can be well approximated by low-order PCEs.
Because the shape of a GLD is controlled by $\lambda_3$ and $\lambda_4$, the associated hyperbolic truncation schemes can be set with a small value of the maximum degree $p$.
Moreover, the parameters $\lambda_1$ and $\lambda_2$ mainly affect the variation of the mean and of the variance as a function of the input $\boldsymbol{x}$, respectively. As a result, they may require a larger degree $p$. To this end, we modify the feasible generalized least-squares (FGLS) approach [27] to find suitable truncation schemes for the mean and variance functions.
Basically, FGLS iterates between a weighted least-squares (WLS) problem to fit the mean function and an ordinary least-squares (OLS) analysis to estimate the variance function.
The details of the modified FGLS are presented in Algorithm 1. In this algorithm, the inputs stand for the sets of candidate degrees and $q$-norms that are tested to expand the mean function and the variance function, respectively. Indeed, because of the low cost of least-squares analyses, various combinations of $p$ and $q$ are tested for both functions.
More precisely, AOLS denotes adaptive ordinary least-squares with degree and $q$-norm adaptivity [28, 29]. This algorithm first builds a series of PCEs, each of which is obtained by applying ordinary least-squares with a truncation scheme defined by a combination of $p$ and $q$. Then, it selects the PCE, and therefore the associated truncation scheme, with the smallest leave-one-out error (see [29] for details). Its weighted counterpart uses the estimated variance as weights to re-estimate the mean function. In this procedure, the first truncation set is selected only once (before the loop), whereas a set of candidate truncation schemes is obtained within the loop.
We finally select the candidate with the smallest leave-one-out error as the final truncated set. The number of iterations is defined by the user, typically 5–10. After applying Algorithm 1, the selected truncation sets are used as the bases for $\lambda_1$ and $\lambda_2$.
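A stripped-down sketch of this FGLS loop is shown below (one input variable, fixed polynomial bases, no degree or $q$-norm adaptivity); the log-variance model and the variable names are illustrative choices, not the paper's implementation:

```python
# Minimal sketch of the FGLS iteration: alternate between weighted least-squares
# for the mean function and ordinary least-squares for the log-variance of the
# residuals. Degree/q-norm adaptivity (AOLS) is omitted here.
import numpy as np

def fgls(x, y, deg_mean=3, deg_var=2, n_iter=5):
    Pm = np.vander(x, deg_mean + 1)          # polynomial design matrix for the mean
    Pv = np.vander(x, deg_var + 1)           # design matrix for the log-variance
    w = np.ones_like(y)                      # initial weights (plain OLS)
    for _ in range(n_iter):
        # weighted least-squares for the mean function (weights = 1 / variance)
        sw = np.sqrt(w)[:, None]
        cm, *_ = np.linalg.lstsq(Pm * sw, y * sw[:, 0], rcond=None)
        resid2 = (y - Pm @ cm) ** 2
        # ordinary least-squares on log(residual^2) keeps the variance positive
        cv, *_ = np.linalg.lstsq(Pv, np.log(resid2 + 1e-12), rcond=None)
        w = 1.0 / np.exp(Pv @ cv)            # updated weights for the next WLS step
    return cm, cv

# toy usage with a heteroskedastic simulator y = x**2 + (0.1 + x**2) * noise
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 500)
y = x**2 + (0.1 + x**2) * rng.standard_normal(500)
cm, cv = fgls(x, y)
```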
Once the basis functions are selected, we use the maximum (conditional) likelihood estimator to estimate the coefficients $\boldsymbol{c}$:
$$\hat{\boldsymbol{c}} = \arg\min_{\boldsymbol{c}} \; \ell\left(\boldsymbol{c}; \mathcal{X}, \mathcal{Y}\right), \tag{21}$$
where $\ell$ is the conditional negative log-likelihood
$$\ell\left(\boldsymbol{c}; \mathcal{X}, \mathcal{Y}\right) = -\sum_{i=1}^{N} \log f_Y\!\left(y^{(i)}; \boldsymbol{\lambda}^{\mathrm{PC}}\!\left(\boldsymbol{x}^{(i)}; \boldsymbol{c}\right)\right), \tag{22}$$
with $f_Y(\cdot; \boldsymbol{\lambda})$ being the probability density function of the GLD defined in Equation 12.
The advantages of the proposed estimator are twofold. On the one hand, the simulator is required to be evaluated only once (though more evaluations are possible) on each point of the experimental design. Thereby, replications are not necessary (yet possible), and the method is versatile in this respect. On the other hand, if the underlying computational model can be exactly represented by a GLaM for a specific choice of the coefficients, the maximum likelihood estimator is consistent (see proof in [17]).
In practice, the evaluation of the likelihood is not straightforward because the PDF of generalized lambda distributions does not have an explicit form: it is necessary to solve nonlinear equations, as shown in Eq. 12. Nevertheless, the quantile function is monotonic and defined on $[0, 1]$. Therefore, we propose using the bisection method [30] to efficiently solve these nonlinear equations.
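A compact sketch of this maximum likelihood step is given below; it assumes the `gld_pdf` helper from the sketch in Section 3.1 and, for simplicity, a hypothetical fixed basis (affine trends for $\lambda_1$ and $\log\lambda_2$, constant $\lambda_3$ and $\lambda_4$), whereas the paper's procedure uses the FGLS-selected truncation sets:

```python
# Sketch of the maximum likelihood step of Eqs. (21)-(22): fit the coefficients
# of lambda(x) by minimizing the negative log-likelihood. gld_pdf is the helper
# sketched in Section 3.1; the basis choice below is purely illustrative.
import numpy as np
from scipy.optimize import minimize

def lambdas(x, c):
    """Map a scalar input x to the four GLD parameters."""
    return (c[0] + c[1] * x,               # lambda_1: affine in x
            np.exp(c[2] + c[3] * x),       # lambda_2 > 0 via the log transform
            c[4], c[5])                    # lambda_3, lambda_4: constants

def neg_log_likelihood(c, xs, ys):
    nll = 0.0
    for x, y in zip(xs, ys):
        f = gld_pdf(y, lambdas(x, c))      # PDF evaluated by bisection (Section 3.1)
        nll -= np.log(max(f, 1e-300))      # guard against log(0)
    return nll

# xs, ys: experimental design and (single) model responses, no replications needed
# res = minimize(neg_log_likelihood, x0=np.zeros(6), args=(xs, ys), method="Nelder-Mead")
```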
3.5 Sensitivity analysis with GLaMs
3.5.1 Introduction
The various Sobol’ indices introduced in Equations 5 and 6 can be estimated by sampling from the conditional distribution approximated by the GLaM. Because of the specific form of the GLaM definition, such a sampling can be easily performed.
The generalized lambda distribution parameterizes the quantile function (see Equation 11), which can be seen as the inverse probability integral transform. In other words, if $U \sim \mathcal{U}(0, 1)$, the random variable $Q(U; \boldsymbol{\lambda})$ follows the GLD with parameters $\boldsymbol{\lambda}$. As a result, sampling from a GLD is straightforward. We define the function $\tilde{M}_s$ by
$$\tilde{M}_s(\boldsymbol{x}, U) = Q\left(U; \boldsymbol{\lambda}^{\mathrm{PC}}(\boldsymbol{x}; \hat{\boldsymbol{c}})\right), \qquad U \sim \mathcal{U}(0, 1). \tag{23}$$
$\tilde{M}_s$ is a so-called GLaM stochastic emulator, where $U$ serves as a latent variable that introduces the internal source of randomness. Precisely, $\tilde{M}_s(\boldsymbol{x}, U)$ is a random variable following the surrogate response PDF for the input $\boldsymbol{x}$, and $\tilde{M}_s(\boldsymbol{x}, u)$ provides its corresponding $u$-quantile. In other words, the GLaM is a simple stochastic surrogate model with only one latent variable (namely, $U$); this surrogate behaves similarly to the original stochastic simulator in terms of the response distribution for any $\boldsymbol{x}$.
Equation 23 emulates the conditional quantile function of the original model, so calculating Sobol’ indices of the deterministic function $\tilde{M}_s(\boldsymbol{x}, u)$ directly provides the classical Sobol’ indices defined in Equation 5. Note that $\tilde{M}_s$ also allows us to calculate classical Sobol’ indices involving the latent variable. However, since the surrogate approximates only the response distribution but cannot reproduce the trajectories, such Sobol’ indices are not representative of those of the original model, e.g., the indices whose definition (according to Equation 5) involves an inner expectation taken over a trajectory.
For the QoI-based Sobol’ indices in Equation 6, if the quantity of interest can be directly calculated from generalized lambda distributions, we can just treat it as a classical surrogate model of $q(\boldsymbol{x})$. This is the case for the mean and the variance (see Section 6.1 for details). In addition, if $q(\boldsymbol{x})$ is an $\alpha$-quantile of the response distribution, Equation 23 is used directly.
Finally, if $q(\boldsymbol{x})$ cannot be evaluated analytically, we generate a large sample set from Equation 23 by sampling $U$, and then use the associated sample statistic as a surrogate model for $q(\boldsymbol{x})$.
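A minimal sketch of this sampling-based use of the emulator is shown below; `lambda_hat` stands for a hypothetical fitted map $\boldsymbol{x} \mapsto \boldsymbol{\lambda}^{\mathrm{PC}}(\boldsymbol{x}; \hat{\boldsymbol{c}})$ and `gld_quantile` is the helper sketched in Section 3.1:

```python
# Sketch of the GLaM stochastic emulator of Eq. (23): sample from the fitted
# response distribution and build generic QoI surrogates by Monte Carlo when
# the QoI has no closed form.
import numpy as np

rng = np.random.default_rng(42)

def glam_sample(x, n, lambda_hat):
    """Draw n realizations of the emulated response at input x."""
    u = rng.uniform(0.0, 1.0, n)
    return gld_quantile(u, lambda_hat(x))

def qoi_surrogate(x, lambda_hat, n=10_000, stat=np.mean):
    """Generic QoI emulator q(x), e.g. stat=np.mean, np.var or a quantile."""
    return stat(glam_sample(x, n, lambda_hat))

# e.g. 95%-quantile surrogate:
# q95 = qoi_surrogate(x, lambda_hat, stat=lambda s: np.quantile(s, 0.95))
```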
3.5.2 Monte Carlo estimates
Because both $\tilde{M}_s(\boldsymbol{x}, u)$ and $q(\boldsymbol{x})$ are deterministic functions, we can use methods based on Monte Carlo simulations [20] to estimate the considered Sobol’ indices. Here, we illustrate the estimator suggested by Janon et al. [31] for the estimation of classical Sobol’ indices.
We first define two random variables $Y = \tilde{M}_s\left(\boldsymbol{X}_u, \boldsymbol{X}_{\sim u}, U\right)$ and $Y^u = \tilde{M}_s\left(\boldsymbol{X}_u, \boldsymbol{X}'_{\sim u}, U'\right)$, where $\boldsymbol{X}'_{\sim u}$ and $U'$ are independent copies of $\boldsymbol{X}_{\sim u}$ and $U$. This indicates that $Y^u$ is correlated to $Y$ through the shared components $\boldsymbol{X}_u$ used as arguments. In addition, $Y$ and $Y^u$ follow the same distribution, and thus they share the same moments.
Following Janon et al. [31], the first-order index $S_u$ defined in Equation 5 can be rewritten as:
$$S_u = \frac{\mathbb{E}\left[Y\, Y^u\right] - \left(\mathbb{E}\left[\dfrac{Y + Y^u}{2}\right]\right)^2}{\mathbb{E}\left[\dfrac{Y^2 + \left(Y^u\right)^2}{2}\right] - \left(\mathbb{E}\left[\dfrac{Y + Y^u}{2}\right]\right)^2}. \tag{24}$$
We generate $N_{\mathrm{MC}}$ realizations of $Y$ and $Y^u$ by sampling (independently) $\boldsymbol{X}_u$, $\boldsymbol{X}_{\sim u}$, $\boldsymbol{X}'_{\sim u}$, $U$ and $U'$. The expectations in Equation 24 can then be estimated by sample statistics, which leads to
$$\hat{S}_u = \frac{\dfrac{1}{N_{\mathrm{MC}}}\displaystyle\sum_{i=1}^{N_{\mathrm{MC}}} y_i\, y_i^u - \left(\dfrac{1}{N_{\mathrm{MC}}}\displaystyle\sum_{i=1}^{N_{\mathrm{MC}}} \dfrac{y_i + y_i^u}{2}\right)^2}{\dfrac{1}{N_{\mathrm{MC}}}\displaystyle\sum_{i=1}^{N_{\mathrm{MC}}} \dfrac{y_i^2 + \left(y_i^u\right)^2}{2} - \left(\dfrac{1}{N_{\mathrm{MC}}}\displaystyle\sum_{i=1}^{N_{\mathrm{MC}}} \dfrac{y_i + y_i^u}{2}\right)^2}, \tag{25}$$
where $y_i$ and $y_i^u$ are the $i$-th realizations of $Y$ and $Y^u$, respectively.
For the QoI-based Sobol’ indices defined in Equation 6, we follow the same procedure by replacing $\tilde{M}_s$ by the QoI surrogate $\boldsymbol{x} \mapsto q(\boldsymbol{x})$.
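The sketch below implements this pick-freeze construction and the estimator of Equation 25 for the emulator; `f` and `sample_inputs` are hypothetical callables standing for the deterministic map of Equation 23 and the input sampler:

```python
# Sketch of the Monte Carlo estimator of Eq. (25) (Janon et al.): Y and Y^u
# share the components X_u but use independent copies of X_~u and of the
# latent uniform variable U.
import numpy as np

def first_order_index(f, sample_inputs, u_idx, n=100_000,
                      rng=np.random.default_rng(0)):
    X = sample_inputs(n, rng)              # n x M matrix of inputs
    X_tilde = sample_inputs(n, rng)        # independent copy
    X_tilde[:, u_idx] = X[:, u_idx]        # "freeze" the components in u
    y = f(X, rng.uniform(size=n))          # independent latent variables as well
    y_u = f(X_tilde, rng.uniform(size=n))
    m = 0.5 * (y + y_u).mean()
    num = (y * y_u).mean() - m**2
    den = 0.5 * (y**2 + y_u**2).mean() - m**2
    return num / den
```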
3.5.3 PCE-based estimates
As discussed in Section 3.5.1, estimating the two considered types of Sobol’ indices of a GLaM surrogate model reduces to studying two deterministic functions, $(\boldsymbol{x}, u) \mapsto \tilde{M}_s(\boldsymbol{x}, u)$ and $\boldsymbol{x} \mapsto q(\boldsymbol{x})$. According to the definition in Section 2.1, $\boldsymbol{X}$ has mutually independent components, which are also independent of $U$. Both functions can be represented by polynomial chaos expansions (see Section 3.2):
$$\tilde{M}_s(\boldsymbol{x}, u) \approx \sum_{\boldsymbol{\alpha} \in \mathcal{A}} a_{\boldsymbol{\alpha}}\, \psi_{\boldsymbol{\alpha}}(\boldsymbol{x}, u), \qquad q(\boldsymbol{x}) \approx \sum_{\boldsymbol{\beta} \in \mathcal{B}} b_{\boldsymbol{\beta}}\, \psi_{\boldsymbol{\beta}}(\boldsymbol{x}), \tag{26}$$
where $\mathcal{A}$ and $\mathcal{B}$ are the truncated sets defining the basis functions $\psi_{\boldsymbol{\alpha}}$’s and $\psi_{\boldsymbol{\beta}}$’s, respectively, as discussed in Equation 14. Note that each multi-index in $\mathcal{A}$ has dimension $M + 1$ because of the additional variable $U$, and the univariate basis functions of $U$ are Legendre polynomials [24]. The advantage of using a PCE surrogate is that its Sobol’ indices (of any order) can be analytically calculated by post-processing its coefficients [32].
Several methods have been developed to construct PCEs for deterministic functions with given basis functions, such as the projection method [19] and ordinary least-squares [33]. To both determine the truncated set and estimate the associated coefficients, we opt for the hybrid-LAR algorithm [29]. This method selects the most important basis functions among a candidate set, before ordinary least-squares is used to compute the coefficients. The selection procedure of the algorithm is based on least angle regression (LAR) [34].
Practically, we first generate samples by sampling $\boldsymbol{X}$ and $U$, and evaluate the target function to obtain the associated model responses. Then, we apply the hybrid-LAR algorithm to the generated data to construct the PCE surrogate. Finally, the Sobol’ indices are calculated by post-processing the PC coefficients.
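For illustration, this post-processing step is sketched below for an orthonormal PC basis: partial variances are sums of squared coefficients, so first-order and total indices follow directly (the multi-indices and coefficients in the example are arbitrary):

```python
# Sketch: Sobol' indices from PC coefficients (Sudret, 2008), assuming an
# orthonormal basis so that each squared coefficient is a partial variance.
import numpy as np

def pce_sobol(alphas, coeffs, i):
    """First-order and total Sobol' index of variable i from a PCE."""
    alphas = np.asarray(alphas)            # shape (n_terms, M); row of zeros = constant
    coeffs = np.asarray(coeffs)
    nonconst = alphas.any(axis=1)
    var_total = np.sum(coeffs[nonconst] ** 2)
    only_i = nonconst & (alphas[:, i] > 0) & (alphas.sum(axis=1) == alphas[:, i])
    involves_i = alphas[:, i] > 0
    S_i = np.sum(coeffs[only_i] ** 2) / var_total
    S_i_total = np.sum(coeffs[involves_i] ** 2) / var_total
    return S_i, S_i_total

# Example: PCE c0 + c1*psi(x1) + c2*psi(x2) + c3*psi(x1)*psi(x2)
alphas = [(0, 0), (1, 0), (0, 1), (1, 1)]
coeffs = [1.0, 2.0, 1.0, 0.5]
print(pce_sobol(alphas, coeffs, 0))        # approx. (0.76, 0.81)
```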
In the following examples, we use the PCE-based estimates for the Sobol’ indices of the GLaM surrogate model, instead of performing Monte Carlo simulations, as the accuracy of the former turned out to be extremely good.
4 Examples
In this section, we illustrate the performance of GLaMs for global sensitivity analysis on an analytical example and two case studies. We focus on the classical first-order Sobol’ indices and QoI-based total Sobol’ indices. The choice of the QoI depends on the focus of the example. To characterize the examples, we define the signal-to-noise ratio of a stochastic simulator by
$$\mathrm{SNR} = \frac{\operatorname{Var}_{\boldsymbol{X}}\left[\mathbb{E}\left[Y \mid \boldsymbol{X}\right]\right]}{\mathbb{E}_{\boldsymbol{X}}\left[\operatorname{Var}\left[Y \mid \boldsymbol{X}\right]\right]}. \tag{27}$$
This quantity gives the ratio between the part of the variance of $Y$ that is explained by the mean function and the remaining variance.
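When the simulator is cheap enough, this ratio can be estimated by brute-force nested Monte Carlo with replications, as in the following sketch (used here only to characterize the examples; the sample sizes and callable signatures are placeholders):

```python
# Sketch: estimating the signal-to-noise ratio of Eq. (27) by nested Monte Carlo.
import numpy as np

def signal_to_noise(simulator, sample_inputs, n_outer=500, n_rep=200,
                    rng=np.random.default_rng(0)):
    X = sample_inputs(n_outer, rng)        # outer loop: realizations of the inputs
    reps = np.array([[simulator(x, rng) for _ in range(n_rep)] for x in X])
    cond_mean = reps.mean(axis=1)          # E[Y | X = x] at each design point
    cond_var = reps.var(axis=1)            # Var[Y | X = x]
    return cond_mean.var() / cond_var.mean()
```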
We use Latin hypercube sampling [35] to generate the experimental design. The stochastic simulator is evaluated only once on each combination of input parameters. The associated output values are used to construct surrogates with the proposed estimation procedure introduced in Section 3.4.
To assess the overall surrogate quality, we define the error measure
$$\varepsilon = \frac{\mathbb{E}_{\boldsymbol{X}}\left[\displaystyle\int_0^1 \left(Q\left(u; \boldsymbol{X}\right) - Q_{\mathrm{GLaM}}\left(u; \boldsymbol{X}\right)\right)^2 \mathrm{d}u\right]}{\operatorname{Var}\left[Y\right]}, \tag{28}$$
where $Q(\cdot; \boldsymbol{x})$ is the conditional quantile function of the model and $Q_{\mathrm{GLaM}}(\cdot; \boldsymbol{x})$ is that of the GLaM, following the definition in Equation 23. This error has a form similar to the Wasserstein distance between probability measures [36]. In addition, we also define an error measure to assess the accuracy of estimating the quantity of interest $q(\boldsymbol{x})$, whose approximation by the GLaM is denoted by $q_{\mathrm{GLaM}}(\boldsymbol{x})$:
$$\varepsilon_{q} = \frac{\mathbb{E}_{\boldsymbol{X}}\left[\left(q\left(\boldsymbol{X}\right) - q_{\mathrm{GLaM}}\left(\boldsymbol{X}\right)\right)^2\right]}{\operatorname{Var}\left[q\left(\boldsymbol{X}\right)\right]}. \tag{29}$$
The expectations in Equations 28 and 29 are estimated by averaging the errors over a test set of size $N_{\mathrm{test}}$.
Experimental designs of various sizes are investigated to study the convergence of the proposed method. For each size, 50 independent realizations of the experimental design are carried out to account for the statistical uncertainty in the random design. As a consequence, the estimates for each scenario are represented by box plots.
4.1 A three-dimensional toy example
The first example is defined as follows:
(30) |
where , are independent input variables, and denotes the latent variable that introduces the intrinsic randomness. The response distribution is a shifted lognormal distribution: the shift is equal to and the lognormal distribution is parameterized by . As a result, this stochastic simulator has a nonlinear location function and a strong heteroskedastic effect: the variance varies between 0.069 and 72.35. Besides, this example has a mild signal-to-noise ratio . This implies that the input variables can explain around 58% of the total variance of (i.e., ).
Figure 1(b) compares the PDFs predicted by a GLaM built on an experimental design of with the reference response PDFs of the simulator. The results show that the developed algorithm correctly identifies the shape of the underlying shifted lognormal distribution. Moreover, the PDF supports and tails are also accurately approximated.
We consider the differential entropy [12] as the QoI in this example. Because the analytical response distribution and entropy are known, we investigate the convergence of GLaM in terms of the conditional quantile function estimation (Equation 28) and the entropy estimation (Equation 29). The size of the experimental design varies among . Note that the entropy of a GLD does not have a closed form. Therefore, we use Monte Carlo samples to estimate this quantity of the GLaM for each point in the test set.
In addition, we consider another model where we approximate the response distribution with a normal distribution. The mean and variance (as functions of the input) for this approximation are chosen as the true mean and variance of the original model. In other words, this model represents the “oracle” of Gaussian-type mean-variance models.
The results are summarized in Figure 2. The proposed method exhibits a clear convergence with respect to the size of the experimental design for both error measures. We observe in Figure 2(a) that the decay of the quantile error has two regimes. For a small experimental design, the error coming from the use of finite samples dominates. When we consider a large data set, the error mainly comes from model misspecification, because the stochastic simulator cannot be exactly represented by a GLaM. This phenomenon is not significant for the entropy estimation, which demonstrates a relatively consistent decay.
The accuracy of the oracle normal approximation is reported with red dashed lines in Figure 2. The error shown is only due to model misspecification (because the true response distribution is not Gaussian), since we use the underlying true mean and variance. For both measures, the medians of the errors of GLaMs built on model runs are smaller than those of the normal approximation. For , the GLaM clearly outperforms the oracle of Gaussian-type mean-variance models. This example illustrates the limits of such Gaussian-type models in practice.
Finally, the errors of GLaMs are below for indicating that the surrogate is able to explain over 95% of the variance of the target functions.
For sensitivity analysis, we focus on the classical first-order and the entropy-based total Sobol’ indices. Figure 3 and Figure 4 show the convergence of GLaMs for estimating these quantities for each input variable. The reference values are derived from Equation 30. As shown by the two figures, this toy example is designed to have as the most important variable according to the classical first-order Sobol’ indices, which also indicates that it contributes the most to the variance of the mean function. In contrast, it has zero effect on the entropy. In comparison, is the dominant variable for the variation of the entropy. Because it mainly controls the shape of the response distribution (especially the right tail), it has a minor first-order effect on the mean function, which leads to a very small classical index. In contrast, the entropy depends on the distribution shape, and thus its entropy-based index is not negligible. The results reveal that GLaMs capture these characteristics and yield accurate estimates for both the classical and the entropy-based Sobol’ indices.
Similarly to Figure 2, we also report the sensitivity indices calculated by using Gaussian approximations with the true mean and variance. Because the classical first-order indices depend only on the mean and variance functions, the oracle Gaussian model gives the exact values. Therefore, we show only the results for the entropy-based Sobol’ indices in Figure 4. It is clear that the Gaussian approximation with the true mean and variance exhibits a significant bias. In contrast, GLaMs show nearly no bias and approximate the reference values much more accurately.
4.2 Heston model
In this example, we perform the global sensitivity analysis for a Heston model used in mathematical finance [37]. The Heston model describes the evolution of a stock price . It is an extension of the geometric Brownian motion by modeling the volatility as a stochastic process , instead of considering it as constant. Hence, the Heston model is a stochastic volatility model and consists of two coupled stochastic differential equations:
$$\begin{aligned} \mathrm{d}S_t &= \mu\, S_t\, \mathrm{d}t + \sqrt{v_t}\, S_t\, \mathrm{d}W_t^{1}, \\ \mathrm{d}v_t &= \kappa\left(\theta - v_t\right) \mathrm{d}t + \sigma \sqrt{v_t}\, \mathrm{d}W_t^{2}, \end{aligned} \tag{31}$$
with
$$\mathrm{d}W_t^{1}\, \mathrm{d}W_t^{2} = \rho\, \mathrm{d}t, \tag{32}$$
where $W^{1}$ and $W^{2}$ are two Wiener processes with correlation coefficient $\rho$, which introduce the intrinsic randomness of the stochastic model. The model parameters are summarized in Table 1. The ranges of the last five input variables are selected based on the parameters calibrated from real data (S&P 500 and Eurostoxx 50) [38]. The range of the first variable is set to to take the uncertainty of the expected return rate into account. Without loss of generality, we set $S_0 = 1$.
Table 1: Input variables of the Heston model.

| Variable | Description | Distribution |
|---|---|---|
| $\mu$ | Expected return rate | |
| $\kappa$ | Mean reversion speed of the volatility | |
| $\theta$ | Long-term mean of the volatility | |
| $\sigma$ | Volatility of the volatility | |
| $\rho$ | Correlation coefficient between $W^{1}$ and $W^{2}$ | |
| $v_0$ | Volatility at time 0 | |
In this example, we are interested in the stock price after one year, i.e., $Y = S_T$ with $T = 1$. A closed-form solution to Equation 31 is generally not available. To get samples of $S_T$, we simulate the entire time evolution of $S_t$ and $v_t$ for a given $\boldsymbol{x}$ using an Euler integration scheme with a fixed time step over the time interval $[0, T]$. Note that when simulating the bivariate process $(S_t, v_t)$, an issue may arise: since $v_t$ follows a Cox–Ingersoll–Ross process [38], the discretized scheme can generate negative values for the variance. To overcome this problem, we apply the full truncation scheme, which replaces $v_t$ by $\max\left(v_t, 0\right)$ in the update [38].
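A minimal sketch of this simulation scheme is given below (the time step, strike and sample sizes are placeholders, not the values used in the study):

```python
# Sketch of the Euler scheme with full truncation for Eq. (31): the variance v
# is replaced by max(v, 0) in the drift and diffusion updates, so the CIR
# variance process cannot feed negative values back into the scheme.
import numpy as np

def heston_paths(mu, kappa, theta, sigma, rho, v0, s0=1.0, T=1.0, dt=1e-3,
                 n_paths=10_000, rng=np.random.default_rng(0)):
    n_steps = int(T / dt)
    s = np.full(n_paths, s0)
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        v_plus = np.maximum(v, 0.0)                     # full truncation
        s = s * (1.0 + mu * dt + np.sqrt(v_plus * dt) * z1)
        v = v + kappa * (theta - v_plus) * dt + sigma * np.sqrt(v_plus * dt) * z2
    return s                                            # samples of S_T

# expected payoff of a call with strike K, estimated by Monte Carlo:
# payoff = np.maximum(heston_paths(...) - K, 0.0).mean()
```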
Figure 5 shows two response PDFs predicted by a surrogate built upon model runs. The reference histograms are obtained from repeated model runs with the same input parameters. We observe that the variance of the response distribution is not constant, e.g., 0.065 and 0.027 for the two illustrated PDFs. Moreover, the PDF shape varies: it changes from symmetric to left-skewed distributions depending on the model parameters. This would be difficult to approximate with a simple distribution family such as normal or lognormal. In contrast, GLaMs are able to accurately capture this shape variation, because of the flexibility of generalized lambda distributions.
Even though a closed form distribution of does not exist, the mean function can be derived analytically:
$$m(\boldsymbol{x}) = \mathbb{E}\left[S_T \mid \boldsymbol{X} = \boldsymbol{x}\right] = S_0\, e^{\mu T}. \tag{33}$$
As a result, we use the error defined in Equation 29 with $q(\boldsymbol{x}) = m(\boldsymbol{x})$ to assess the convergence of the surrogate. In addition, we also consider the expected payoff of a European call option. The payoff and the expected payoff are defined by
$$P = \max\left(S_T - K, 0\right), \qquad \bar{P}(\boldsymbol{x}) = \mathbb{E}\left[\max\left(S_T - K, 0\right) \mid \boldsymbol{X} = \boldsymbol{x}\right], \tag{34}$$
where $K$ is the strike price, whose value is fixed in the following analysis. In finance, the expected payoff is not only important for making investment decisions but also has a form very similar to that of the option price [39]. For the Heston model, numerical methods based on the Fourier transform have been developed to calculate the expected payoff without the need for Monte Carlo simulations [37]. For the GLaM surrogate, this quantity can also be calculated numerically (see Section 6.1). As a second performance index, we compute the associated error (Equation 29) for the convergence study.
Figure 6 shows box plots of the two error measures for the investigated sizes of the experimental design. Both errors are relatively large for small designs. This is mainly due to the fact that the variability of the model response is dominated by the intrinsic randomness: the model parameters altogether are only able to explain about 2% of the variance of the output. In other words, the stochastic simulator has a very small signal-to-noise ratio. Since GLDs are flexible, a few data points scattered in a moderately high-dimensional space may not provide enough information about the variation of the response distribution. We observe that for small designs, the selection procedure proposed in Algorithm 1 may select only constant terms for some of the distribution parameters. Such a model is too simple and thus fails to capture the variations of the scalar quantities. Consequently, it is necessary to have enough data to achieve an accurate estimate: when increasing the size of the LHS design, we observe a clear decay of the errors.
We now study the convergence of the estimation of the Sobol’ indices. According to Equation 33, the mean function depends only on the first input variable $\mu$, which contributes little (2%) to the total variance of $Y$. This implies that the classical Sobol’ indices are not informative (they are either 0 or very close to 0). However, we cannot ignore the variability of the input variables, because the response distribution demonstrates a clear dependence on the input parameters, as shown in Figure 5. Therefore, we focus on the accuracy of the expected-payoff-based total Sobol’ indices.
As a second quantity of interest, we also calculate the total Sobol’ indices associated with the 95%-superquantile. Superquantiles are also known as the conditional value-at-risk, which is an important risk measure in finance [40]. The $\alpha$-superquantile of a random variable $Y$ is defined by
$$\mathrm{SQ}_{\alpha} = \mathbb{E}\left[Y \mid Y \geq q_{\alpha}\right] = \frac{1}{1 - \alpha} \int_{\alpha}^{1} Q\left(u\right) \mathrm{d}u, \tag{35}$$
where $q_{\alpha}$ is the $\alpha$-quantile of $Y$. This quantity corresponds to the conditional expectation of $Y$ given that it is larger than its $\alpha$-quantile. For the Heston model, this quantity does not have an analytical closed form, whereas the superquantile of a GLD can be derived analytically (see Section 6.1).
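When only the quantile function is available, this quantity can also be approximated by numerical integration, as in the short sketch below:

```python
# Sketch: the alpha-superquantile of Eq. (35) computed from a quantile function
# by numerical integration, SQ_alpha = (1/(1-alpha)) * int_alpha^1 Q(u) du.
import numpy as np

def superquantile(quantile_fn, alpha=0.95, n=10_000):
    u = np.linspace(alpha, 1.0 - 1e-9, n)   # avoid u = 1 for unbounded right tails
    return np.trapz(quantile_fn(u), u) / (1.0 - alpha)
```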
We use Monte Carlo samples to evaluate (numerically) the expected payoff function and obtain a reference value for each total index. To calculate the Sobol’ indices associated with the 95%-superquantile, it is necessary to evaluate the function $\boldsymbol{x} \mapsto \mathrm{SQ}_{0.95}(\boldsymbol{x})$. Because it cannot be derived analytically for the Heston model, we use replications to calculate the sample 95%-superquantile as an estimate. Then, we treat it as a deterministic function and use Monte Carlo samples to estimate each Sobol’ index. A large total number of model runs is therefore needed to obtain the six reference 95%-superquantile-based total Sobol’ indices. Because only a finite number of samples is used to estimate each index, we use the bootstrap [41] to calculate 95% confidence intervals that account for the uncertainty of the Monte Carlo simulation.
Figures 7 and 8 confirm and quantify the convergence of GLaMs for estimating the two sets of total Sobol’ indices. For the expected payoff, the first variable $\mu$ is the most important. The estimation of its total effect converges from below the reference value, and we observe a bias in the estimate. Nevertheless, with a large enough experimental design, the GLaM always correctly identifies its importance (the bias is 0.072 for and 0.055 for ), and each classical first-order Sobol’ index of the other five variables converges to the reference line. The 95%-superquantile suggests a different ranking: $\mu$, $\theta$, $\rho$ and $v_0$ (corresponding to the first, third, fifth and sixth input variables, respectively) have similar total effects, which are larger than those of $\kappa$ and $\sigma$ (i.e., the second and fourth input variables). In addition, none of the input variables has a total effect close to 0. The GLaM surrogate model accurately reproduces these features. Moreover, the estimates generally vary around the reference values, and a larger experimental design results in a narrower spread of the box plots.
As a conclusion, GLaM surrogates allow us to represent accurately the QoI of the Heston model and carry out a detailed sensitivity analysis at the cost of runs of the stochastic simulator. Note that in this example the leave-one-out errors of the polynomial chaos expansions built on the GLaM surrogates are of the order of , which justifies the use of PCE-based Sobol’ indices.
4.3 Stochastic SIR model
In this example, we apply the proposed method to a stochastic Susceptible-Infected-Recovered (SIR) model in epidemiology [3]. This model simulates the spread of an infectious disease, which can help design appropriate epidemiological interventions to minimize the social and ethical impacts during an outbreak.
In an SIR model, a population can be partitioned at any time $t$ into three groups during the outbreak of an epidemic: susceptible, infected and recovered. Susceptible individuals are those who can get infected by contacting an infectious person. Infected individuals are suffering from the disease and are contagious. They can recover (and are then classified as recovered) and become immune to future infections. The numbers of individuals within the three groups are denoted by $S(t)$, $I(t)$ and $R(t)$, respectively. Without differentiating individuals, these three quantities characterize the configuration of the population at a given time $t$. Hence, their evolution represents the spread of the epidemic. In this study, we consider a fixed population without newborns and deaths, i.e., the total population size $N_{\mathrm{pop}}$ is constant. As a result, $S(t)$, $I(t)$ and $R(t)$ satisfy the constraint $S(t) + I(t) + R(t) = N_{\mathrm{pop}}$, and thus only the time evolution of $S(t)$ and $I(t)$ is necessary to represent the disease evolution.
Without going into the detailed assumptions of the model, we illustrate the system dynamics in Figure 9, where the black icons represent susceptible individuals, the red icons indicate infected persons, and the blue icons are those recovered. Suppose that at time $t$ the population has a given configuration $\left(S(t), I(t)\right)$ (top left of Figure 9). Infected individuals can meet susceptible individuals, or they may receive essential treatment and recover from the disease. Hence, the next configuration has two possibilities: (1) one susceptible individual is infected, i.e., $\left(S(t) - 1, I(t) + 1\right)$; (2) one infected person recovers, i.e., $\left(S(t), I(t) - 1\right)$. Which of the two candidate configurations is reached depends on two random variables, $\Delta t_I$ and $\Delta t_R$, which denote the respective times to move to the associated candidate configuration. Both random variables follow exponential distributions, yet with different parameters:
$$\Delta t_I \sim \mathrm{Exp}\left(\beta\, \frac{S(t)\, I(t)}{N_{\mathrm{pop}}}\right), \tag{36}$$
$$\Delta t_R \sim \mathrm{Exp}\left(\gamma\, I(t)\right), \tag{37}$$
where $\beta$ indicates the contact rate of an infected individual, $\gamma$ is the recovery rate, and the exponential distributions are parameterized by their rates. If $\Delta t_I < \Delta t_R$, the infection occurs first and $\left(S(t) - 1, I(t) + 1\right)$ becomes the next configuration at time $t + \Delta t_I$, and vice versa. This update step iterates until the time when no infected individuals remain, i.e., $I(t) = 0$. Because the population size is finite and recovered individuals cannot get infected again, the total number of updates is finite. This number is not a constant due to the random updating process, indicating that the number of latent variables of this simulator is also random. Note that the evolution procedure described here corresponds to the Gillespie algorithm [42].
In this case study, the total population size $N_{\mathrm{pop}}$ is fixed. The vector of input parameters gathers the initial configuration and the two rates, $\boldsymbol{X} = \left(S(0), I(0), \beta, \gamma\right)$. To account for different scenarios, the input variables are modeled as independent random variables. The uncertainty in the first two variables can be interpreted as a lack of knowledge of the initial condition, while the last two variables are affected by possible interventions, such as social distancing measures that reduce the contact rate $\beta$ and increased medical resources that improve the recovery rate $\gamma$. We are interested in the total number of newly infected individuals during the outbreak, i.e., $Y = S(0) - S\left(t_{\mathrm{end}}\right)$. The signal-to-noise ratio of this stochastic model is estimated to be relatively large.
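A minimal sketch of the Gillespie-type simulation described above is given below; the parameter values and the normalization of the infection rate by the total population size are common conventions used here for illustration, not the exact settings of the study:

```python
# Sketch of the Gillespie algorithm for the stochastic SIR model: exponential
# waiting times for infection and recovery events, run until no infected
# individuals remain; returns the total number of newly infected individuals.
import numpy as np

def sir_gillespie(s0, i0, beta, gamma, n_pop, rng=np.random.default_rng(0)):
    s, i = s0, i0
    new_infections = 0
    while i > 0:
        rate_infect = beta * s * i / n_pop            # assumed normalization
        rate_recover = gamma * i
        dt_infect = rng.exponential(1.0 / rate_infect) if rate_infect > 0 else np.inf
        dt_recover = rng.exponential(1.0 / rate_recover)
        if dt_infect < dt_recover:          # a susceptible individual gets infected
            s, i = s - 1, i + 1
            new_infections += 1
        else:                               # an infected individual recovers
            i -= 1
    return new_infections

print(sir_gillespie(s0=1950, i0=50, beta=0.6, gamma=0.3, n_pop=2000))
```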
Figure 10 shows the response PDF estimation of the surrogate model built on an experimental design of . The reference histograms are calculated from repeated model runs with the same input values. We observe that the response distribution changes from right-skewed to left-skewed distributions (so we can also find symmetric distributions in between), which is correctly represented by the surrogate. In addition, GLaMs also accurately approximate the bulk and the support of the response PDF.
In this example, we investigate the convergence of GLaMs for estimating the classical first-order Sobol’ indices and the standard-deviation-based total Sobol’ indices. To calculate the reference values, we use Monte Carlo samples for each classical Sobol’ index. Regarding the standard-deviation-based Sobol’ indices, we calculate the sample standard deviation based on replications. Then, we apply Monte Carlo simulations to estimate the associated Sobol’ indices. The total cost to obtain the reference values is thus considerable. As in the previous example, we use the bootstrap to calculate the 95% confidence intervals.
Figures 11 and 12 show the results of the convergence study. In terms of the classical Sobol’ indices, the GLaM yields accurate estimates even for small experimental designs: the box plots scatter around the reference values with a small variability. Among the four input variables, the second one, which corresponds to the number of infected individuals at time 0, is the most important. It is followed by the contact rate and the recovery rate, which show similar first-order effects. As a result, performing medical tests to better determine the initial number of infected individuals would be the most effective way to reduce the variance of the output. In contrast, Figure 12 suggests that controlling the contact rate and the recovery rate would be the best measure to reduce the variation of the standard deviation of the response. For estimating the associated Sobol’ indices, the GLaM converges within the 95% confidence intervals of the Monte Carlo estimates, and the spread of the box plots decreases significantly with increasing experimental design size.
Finally, we remark that when higher-order Sobol’ indices are of interest, Monte Carlo simulations require additional runs of the original model. For example, more model evaluations would be needed to obtain reference values for the standard-deviation-based second-order Sobol’ indices. This results in a very large total number of model runs, which becomes impracticable even for relatively cheap models. In contrast, GLaM surrogates can be used without additional cost: the PCE-based method presented in Section 3.5.3 provides analytical higher-order indices by post-processing the PC coefficients. In this example, the leave-one-out errors of the PCEs built on the GLaM surrogates are of the order , which justifies the use of the PCE estimates. Based on the surrogate model, we observe that the largest standard-deviation-based second-order Sobol’ indices, which quantify parameter interactions, are and , and . Both have a value of about 0.09, while the other second-order interactions are very small. As illustrated in Figure 12, has a total effect . Moreover, it has a relatively small first-order effect . This implies that mainly affects the variance of through its interactions with and .
5 Conclusions
In this paper, we discuss the nature and focus of three extensions of Sobol’ indices to stochastic simulators: classical Sobol’ indices, QoI-based Sobol’ indices and trajectory-based Sobol’ indices. The first two types are of interest because of their versatility and applicability to a broad class of problems. We propose to use the generalized lambda model as a stochastic emulator to estimate the considered indices. This surrogate model aims at emulating the entire response distribution, instead of focusing only on some scalar statistical quantities, e.g., mean and variance. More precisely, it relies on using the four-parameter generalized lambda distribution to approximate the response distribution. The associated distribution parameters as functions of the input are represented by polynomial chaos expansions. Such a surrogate can be constructed without the need for replications, and thus it is not restricted to a special data structure.
Because of the special formulation of GLaM, the considered sensitivity indices can be estimated by directly working with deterministic functions. This allows applying the methods developed for deterministic simulators, namely estimators based on Monte Carlo simulations and on polynomial chaos expansions. In this paper, we suggest the latter to post-process the surrogate model to achieve high computational efficiency.
The performance of the proposed method for estimating various Sobol’ indices is illustrated on three examples with different signal-to-noise ratios. The toy example is designed to have a strong heteroskedastic effect. It shows the general convergence behavior of GLaMs for approximating the conditional quantile functions and estimating the entropy of the response distributions. The second example is a Heston model from mathematical finance. This case study has a very small signal-to-noise ratio and exhibits a shape variation of the response PDF. The surrogate generally yields accurate estimates of the Sobol’ indices associated with the expected payoff and the 95%-superquantile. The last example is a stochastic SIR model in epidemiology, for which GLaMs exhibit robust estimates of the classical Sobol’ indices and the standard-deviation-based Sobol’ indices. All three examples feature a different ranking of the input variables depending on the type of Sobol’ indices, which is correctly captured by GLaMs when compared to reference values obtained by extremely costly Monte Carlo simulations. Fairly accurate results are obtained at a cost of runs of the simulator, compared to reference values based on runs by a brute-force approach.
In future work, we plan to develop algorithms to improve GLaMs for small data sets. Besides, we will investigate GLaMs for estimating distribution-based sensitivity indices [43, 44]. The estimation of these indices usually requires a large number of model runs to infer the conditional PDF, which can be easily obtained from GLaMs. In addition, appropriate contrast measures between distributions, such as the Wasserstein metric, can be developed for sensitivity analysis in the context of stochastic simulators. Finally, developing sensitivity indices for stochastic simulators with dependent input variables will allow engineers to tackle a broader group of problems.
Acknowledgements
This paper is a part of the project “Surrogate Modeling for Stochastic Simulators (SAMOS)” funded by the Swiss National Science Foundation (Grant #200021_175524), the support of which is gratefully acknowledged.
6 Appendix
6.1 Some properties of GLDs
The mean and variance of a GLD can be calculated by
$$\mathbb{E}\left[Y\right] = \lambda_1 + \frac{1}{\lambda_2}\left(v_1 - \frac{1}{\lambda_3} + \frac{1}{\lambda_4}\right), \tag{38}$$
$$\operatorname{Var}\left[Y\right] = \frac{v_2 - v_1^2}{\lambda_2^2}, \tag{39}$$
where $v_1$ and $v_2$ are defined by
$$v_1 = \frac{1}{\lambda_3\left(\lambda_3 + 1\right)} - \frac{1}{\lambda_4\left(\lambda_4 + 1\right)}, \qquad v_2 = \frac{1}{\lambda_3^2\left(2\lambda_3 + 1\right)} + \frac{1}{\lambda_4^2\left(2\lambda_4 + 1\right)} - \frac{2}{\lambda_3 \lambda_4}\, B\left(\lambda_3 + 1, \lambda_4 + 1\right), \tag{40}$$
with $B$ denoting the beta function.
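As a quick check, the sketch below evaluates these closed-form expressions and compares them with Monte Carlo estimates obtained by sampling through the quantile function (the parameter values are arbitrary):

```python
# Sketch: closed-form mean/variance of an FKML GLD (Eqs. 38-40) checked against
# Monte Carlo sampling through the quantile function.
import numpy as np
from scipy.special import beta as beta_fn

def gld_mean_var(lam):
    l1, l2, l3, l4 = lam
    v1 = 1.0 / (l3 * (l3 + 1.0)) - 1.0 / (l4 * (l4 + 1.0))
    v2 = (1.0 / (l3**2 * (2 * l3 + 1.0)) + 1.0 / (l4**2 * (2 * l4 + 1.0))
          - 2.0 / (l3 * l4) * beta_fn(l3 + 1.0, l4 + 1.0))
    mean = l1 + (v1 - 1.0 / l3 + 1.0 / l4) / l2
    var = (v2 - v1**2) / l2**2
    return mean, var

lam = (0.0, 1.0, 0.2, 0.6)
u = np.random.default_rng(0).uniform(size=200_000)
y = lam[0] + ((u**lam[2] - 1) / lam[2] - ((1 - u)**lam[3] - 1) / lam[3]) / lam[1]
print(gld_mean_var(lam), (y.mean(), y.var()))   # the two pairs should match closely
```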
The expected payoff defined in Equation 34 of a GLD with strike price $K$ is given by
$$\bar{P} = \left(\lambda_1 - K\right)\left(1 - u_K\right) + \frac{1}{\lambda_2}\left[\frac{1}{\lambda_3}\left(\frac{1 - u_K^{\lambda_3 + 1}}{\lambda_3 + 1} - \left(1 - u_K\right)\right) - \frac{1}{\lambda_4}\left(\frac{\left(1 - u_K\right)^{\lambda_4 + 1}}{\lambda_4 + 1} - \left(1 - u_K\right)\right)\right], \tag{41}$$
where $u_K$ is the solution of the nonlinear equation
$$Q\left(u_K; \boldsymbol{\lambda}\right) = K. \tag{42}$$
The $\alpha$-superquantile defined in Equation 35 of a GLD has a closed form:
$$\mathrm{SQ}_{\alpha} = \lambda_1 + \frac{1}{\lambda_2\left(1 - \alpha\right)}\left[\frac{1}{\lambda_3}\left(\frac{1 - \alpha^{\lambda_3 + 1}}{\lambda_3 + 1} - \left(1 - \alpha\right)\right) - \frac{1}{\lambda_4}\left(\frac{\left(1 - \alpha\right)^{\lambda_4 + 1}}{\lambda_4 + 1} - \left(1 - \alpha\right)\right)\right]. \tag{43}$$
References
- [1] S. Azzi, Y. Huang, B. Sudret, and J. Wiart. Surrogate modeling of stochastic functions – application to computational electromagnetic dosimetry. Int. J. Uncertainty Quantification, 9(4):351–363, 2019.
- [2] A. J. McNeil, R. Frey, and P. Embrechts. Quantitative Risk Management: Concepts, Techniques, and Tools. Princeton Series in Finance. Princeton University Press, Princeton, New Jersey, 2005.
- [3] T. Britton. Stochastic epidemic models: a survey. Math. Biosci., 225:24–35, 2010.
- [4] A. Saltelli, K. Chan, and E. M. Scott, editors. Sensitivity analysis. J. Wiley & Sons, 2000.
- [5] E. de Rocquigny, N. Devictor, and S. Tarantola. Uncertainty in Industrial Practice: A Guide to Quantitative Uncertainty Management. J. Wiley & Sons, New York, 2008.
- [6] J.C. Helton, J.D. Johnson, C.J. Sallaberry, and C.B. Storlie. Survey of sampling-based methods for uncertainty and sensitivity analysis. Reliab. Eng. Sys. Safety, 91(10–11):1175–1209, 2006.
- [7] E. Borgonovo. Sensitivity analysis – An Introduction for the Management Scientist. Springer, 2017.
- [8] I. M. Sobol’. Sensitivity estimates for nonlinear mathematical models. Math. Modeling & Comp. Exp., 1:407–414, 1993.
- [9] B. Iooss and M. Ribatet. Global sensitivity analysis of computer models with functional inputs. Reliab. Eng. Sys. Safety, 94:1194–1204, 2009.
- [10] J.L. Hart, A. Alexanderian, and P.A. Gremaud. Efficient computation of Sobol’ indices for stochastic models. SIAM J. Sci. Comput., 39:1514–1530, 2016.
- [11] M. N. Jimenez, O. P. Le Maître, and O. M. Knio. Nonintrusive polynomial chaos expansions for sensitivity analysis in stochastic differential equations. SIAM J. Uncer. Quant., 5:212–239, 2017.
- [12] S. Azzi, B. Sudret, and J. Wiart. Sensitivity analysis for stochastic simulators using differential entropy. Int. J. Uncertainty Quantification, 10:25–33, 2020.
- [13] M. Davidian and R. J. Carroll. Variance function estimation. J. Amer. Stat. Assoc., 82(400):1079–1091, 1987.
- [14] A. Marrel, B. Iooss, S. Da Veiga, and M. Ribatet. Global sensitivity analysis of stochastic computer models with joint metamodels. Stat. Comput., 22:833–847, 2012.
- [15] M. Binois, R. Gramacy, and M. Ludkovski. Practical heteroskedastic Gaussian process modeling for large simulation experiments. J. Comput. Graph. Stat., 27:808–821, 2018.
- [16] X. Zhu and B. Sudret. Replication-based emulation of the response distribution of stochastic simulators using generalized lambda distributions. Int. J. Uncertainty Quantification, 10(3):249–275, 2020.
- [17] X. Zhu and B. Sudret. Emulation of stochastic simulators using generalized lambda models. Submitted to SIAM/ASA J. Unc. Quant., 2021. (ArXiv: 2007.00996).
- [18] Z. A. Karian and E. J. Dudewicz. Fitting Statistical Distributions: The Generalized Lambda Distribution and Generalized Bootstrap Methods. Chapman and Hall/CRC, 2000.
- [19] R. Ghanem and P. Spanos. Stochastic Finite Elements: A Spectral Approach. Courier Dover Publications, Mineola, 2nd edition, 2003.
- [20] T. Homma and A. Saltelli. Importance measures in global sensitivity analysis of non linear models. Reliab. Eng. Sys. Safety, 52:1–17, 1996.
- [21] T. Browne, B. Iooss, L. Le Gratiet, J. Lonchampt, and E. Rémy. Stochastic simulators based optimization by Gaussian process metamodels – application to maintenance investments planning issues. Quality Reliab. Eng. Int., 32(6):2067–2080, 2016.
- [22] M. Freimer, G. Kollia, G. S. Mudholkar, and C. T. Lin. A study of the generalized Tukey lambda family. Comm. Stat. Theor. Meth., 17:3547–3567, 1988.
- [23] O. G. Ernst, A. Mugler, H. J. Starkloff, and E. Ullmann. On the convergence of generalized polynomial chaos expansions. ESAIM: Math. Model. and Num. Anal., 46:317–339, 2012.
- [24] D. Xiu and G. E. Karniadakis. The Wiener-Askey polynomial chaos for stochastic differential equations. SIAM J. Sci. Comput., 24(2):619–644, 2002.
- [25] B. Sudret. Polynomial chaos expansions and stochastic finite element methods, chapter 6, pages 265–300. Risk and Reliability in Geotechnical Engineering. Taylor and Francis, 2015.
- [26] G. Blatman and B. Sudret. An adaptive algorithm to build up sparse polynomial chaos expansions for stochastic finite element analysis. Prob. Eng. Mech., 25:183–197, 2010.
- [27] J.M. Wooldridge. Introductory Econometrics: A Modern Approach. Cengage Learning, 5th edition, 2013.
- [28] S. Marelli and B. Sudret. UQLab user manual – Polynomial chaos expansions. Technical report, Chair of Risk, Safety and Uncertainty Quantification, ETH Zurich, Switzerland, 2019. Report # UQLab-V1.3-104.
- [29] G. Blatman and B. Sudret. Adaptive sparse polynomial chaos expansion based on Least Angle Regression. J. Comput. Phys., 230:2345–2367, 2011.
- [30] R. L. Burden, J. D. Faires, and A. M. Burden. Numerical analysis. Cengage Learning, 2015.
- [31] A. Janon, T. Klein, A. Lagnoux, M. Nodet, and C. Prieur. Asymptotic normality and efficiency of two Sobol’ index estimators. ESAIM: Probab. Stat., 18:342–364, 2014.
- [32] B. Sudret. Global sensitivity analysis using polynomial chaos expansions. Reliab. Eng. Sys. Safety, 93:964–979, 2008.
- [33] M. Berveiller, B. Sudret, and M. Lemaire. Stochastic finite elements: a non intrusive approach by regression. Eur. J. Comput. Mech., 15(1-3):81–92, 2006.
- [34] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Ann. Statist., 32:407–499, 2004.
- [35] M. D. McKay, R. J. Beckman, and W. J. Conover. A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics, 21(2):239–245, 1979.
- [36] C. Villani. Optimal Transport: Old and New. Grundlehren der mathematischen Wissenschaften. Springer, Berlin, 2009.
- [37] S. L. Heston. A closed-form solution for options with stochastic volatility with applications to bond and currency options. The Review of Financial Studies, 6:327–343, 1993.
- [38] F.D. Rouah. The Heston Model in Matlab and C#. J. Wiley & Sons, New Jersey, 2013.
- [39] S. Shreve. Stochastic Calculus for Finance II. Springer, New York, 2004.
- [40] C. Acerbi. Spectral measures of risk: A coherent representation of subjective risk aversion. J. Bank. Finance, 26:1505–1518, 2002.
- [41] B Efron. Bootstrap methods: another look at the Jackknife. Ann. Stat., 7(1):1–26, 1979.
- [42] D. T. Gillespie. Exact stochastic simulation of coupled chemical reactions. J. Phys. Chem., 81:2340–2361, 1977.
- [43] E. Borgonovo. A new uncertainty importance measure. Reliab. Eng. Sys. Safety, 92:771–784, 2007.
- [44] Y. Huoh. Sensitivity Analysis of Stochastic Simulators with Information Theory. PhD thesis, University of California, Berkeley, 2013.