Gradient-free Importance Sampling Scheme for Efficient Reliability Estimation
Abstract
This work presents a novel gradient-free importance sampling-based framework for precisely and efficiently estimating rare event probabilities, often encountered in reliability analyses of engineering systems. The approach is formulated around our foundational Approximate Sampling Target with Post-processing Adjustment (ASTPA) methodology. ASTPA uniquely constructs and directly samples an unnormalized target distribution, relaxing the optimal importance sampling distribution (ISD). The target’s normalizing constant is then estimated using our inverse importance sampling (IIS) scheme, employing an ISD fitted based on the obtained samples. In this work, a gradient-free sampling method within ASTPA is developed through a guided dimension-robust preconditioned Crank-Nicolson (pCN) algorithm, particularly suitable for black-box computational models where analytical gradient information is not available. To boost the sampling efficiency of pCN in our context, a computationally effective, general discovery stage for the rare event domain is devised, providing (multi-modal) rare event samples used in initializing the pCN chains. A series of diverse test functions and engineering problems involving high dimensionality and strong nonlinearity is presented, demonstrating the advantages of the proposed framework compared to several state-of-the-art sampling methods.
Keywords: Rare Event · Gradient-free Sampling · Inverse Importance Sampling · Reliability Estimation · Preconditioned Crank-Nicolson · High dimensions · ASTPA.
1 Introduction
This work introduces a novel gradient-free framework for reliability analysis and rare event probability estimation in high-dimensional random variable spaces. Evaluation of system reliability and rare event uncertainty quantification (UQ) play a vital role in modern decision-making across diverse fields. Yet, as our systems and computational models grow increasingly complex, the computational cost of repeatedly evaluating them becomes a major challenge. Additionally, analytically deriving gradients for such models can sometimes be cumbersome or even infeasible, and using numerical gradient estimation methods in high-dimensional settings is impractical due to the high computational cost involved. These challenges are further compounded by the exceptionally low probabilities associated with such rare events. This necessitates the development of efficient gradient-free approaches capable of accurately estimating rare event probabilities while minimizing the number of model evaluations, a challenge that this work aims to address.
In the context of reliability estimation, failures are often considered rare events; therefore, rare event probability is also commonly referred to as failure probability. Several techniques have been developed over the years to estimate rare event (failure) probabilities, with sampling-based approaches being able to address this problem in its utmost generality. To overcome the limitations of the crude Monte Carlo approach [1], various advanced sampling techniques have been proposed. Among others, importance sampling (IS) [2], a well-known variance-reduction approach, is capable of efficiently estimating rare event probabilities. IS employs an appropriate importance sampling density (ISD), placing greater importance on rare event domains in the random variable space. This results in an increased number of rare event samples, thereby reducing the number of required model calls. Although the theoretically optimal ISD is known [3], it is challenging to sample from and impractical to implement. Notably, finding an effective near-optimal ISD is an ongoing, challenging research pursuit. The existing approaches can be largely classified into two categories [4]. The first seeks to approximate the optimal ISD by a parametric or non-parametric normalized probability density function (PDF). An early approach in this category utilizes densities centered around one or more design points [5], a powerful, practical process, yet with certain limitations for high-dimensional, complex spaces, as also discussed in [6]. A more recent, popular approach is the cross entropy-based importance sampling (CE-IS). It adaptively constructs a parametric ISD by minimizing the Kullback-Leibler (KL) divergence between the optimal ISD and a chosen parametric family of probability distributions [7, 8, 9, 10, 11]. Recent CE-IS variants incorporate information from influential rare event points, e.g., design points, to effectively initialize the cross entropy process [12, 13].
Within this first category of methods, several other different noteworthy approaches have also been proposed to approximate the optimal ISD, e.g., [14, 15, 16, 17, 18]. The second category of methods employs sequential variants of importance sampling, mainly utilizing a number of intermediate unnormalized distributions [19, 20, 21, 22]. These unnormalized distributions are generally more flexible than the PDFs usually utilized within the first category, thus being capable of providing a better approximation of the optimal ISD. Another sequential sampling method that shares similarities [4] with this second methodological category is the Subset Simulation (SuS) approach, originally presented in [6], and continuously refined [23, 24, 25, 26, 27], due to its potential to handle high-dimensional problems, despite some limitations [28]. A key to the success of all sequential methods is to design nearby intermediate distributions and to sufficiently sample them to accurately compute the sought rare event probability, a non-trivial and computationally demanding task.
Recently, we developed a novel foundational importance sampling-based approach in a non-sequential manner, termed Approximate Sampling Target with Post-processing Adjustment (ASTPA), for reliability and rare event probabilities estimation [29, 4]. The ASTPA approach innovatively decomposes the challenging rare event probability estimation process into two less demanding estimation problems. First, ASTPA uniquely constructs and directly samples an unnormalized target distribution, relaxing the optimal ISD, thereby making it comparatively much easier to sample. The obtained samples are subsequently utilized not only to compute a shifted estimate of the sought probability but also to guide the second ASTPA stage. Post-sampling, the normalizing constant of the approximate sampling target is computed through the inverse importance sampling (IIS) technique [29], which utilizes an importance sampling density (ISD) fitted based on the samples drawn in the first stage. The rare event probability is eventually computed by utilizing this computed normalizing constant to correct the shifted estimate of the first stage. Consequently, ASTPA significantly reduces the computational cost typically associated with using multiple intermediate distributions. Sampling the approximate target distribution in ASTPA has been so far performed using gradient-based Hamiltonian Markov Chain Monte Carlo (HMCMC) methods, demonstrating outstanding performance in complex, high-dimensional static and first-passage dynamic problems, both in Gaussian spaces [29] and directly in non-Gaussian spaces [30, 4]. However, the competitiveness of this gradient-based approach is hindered when analytical gradients cannot be obtained.
Gradient-free sampling has been extensively employed in various reliability estimation approaches, such as importance sampling and Subset Simulation, as discussed earlier. One common method employed is the Component-wise Metropolis-Hastings (CWMH) algorithm, often applied within Subset Simulation [6] to effectively sample high-dimensional conditional distributions by sequentially updating individual dimensions. To further enhance efficiency, delayed rejection variants of CWMH have been introduced, allowing for multiple proposal attempts within a single iteration when a proposal is rejected, thereby increasing acceptance rates and reducing sample autocorrelation [31, 32, 33, 34, 35]. Another significant development is the preconditioned Crank-Nicolson (pCN) algorithm, an MCMC method designed for high-dimensional probability distributions and infinite-dimensional settings, often encountered in Bayesian inverse problems [36, 37]. Inspired by the Crank-Nicolson discretization for solving partial differential equations, the pCN sampler proposes updates by combining the current state with a perturbation drawn from the prior distribution, scaled by a tunable parameter to control the step size. This design maintains stable acceptance probabilities even as dimensionality increases, achieving dimension-robust performance [38]. To further improve efficiency, some extensions of the standard pCN algorithm incorporate information from the posterior distribution by adapting the proposal covariance using sampling history [39, 40] or by estimating the posterior covariance through gradient information [41]. The pCN algorithm has been applied in various reliability estimation frameworks, including Subset Simulation [42, 32] and importance sampling [20, 43].
In this paper, we introduce a novel gradient-free framework for efficiently and accurately estimating rare event probabilities, building on our foundational ASTPA approach, particularly suitable for black-box computational models where analytical gradient information is not available. Central to the framework is a guided variant of the dimension-robust pCN algorithm, tailored to sample the approximate target distribution within ASTPA. The standard pCN algorithm encounters challenges with multimodal distributions or when the target is distant from prior information, as often occurs with rare event domains located in the tails of original probability distributions. To overcome these limitations, we propose a computationally efficient discovery stage in this work, properly designed to enhance sampling efficiency by generating (multimodal) rare event samples to serve as initial seeds for the pCN chains. This process begins by sampling from the original distribution or an adjusted version, such as one with a modified covariance matrix, followed by conditional sampling based on the reciprocal of the original distribution, facilitating rapid diffusion toward the rare event domain. The procedure is stopped after generating a specified number of rare event samples, from which a defined number of seeds is randomly selected to initialize the pCN chains. The generated pCN sample set is then employed within the ASTPA approach to eventually estimate the sought failure probability. Consistent with our previous findings [29, 4], the ASTPA estimator remains unbiased, and the derived analytical coefficient of variation aligns very well with the sampling one, as demonstrated by the numerical experiments presented in this work.
The structure of this paper is as follows: Section 2 defines the rare event probability estimation problem, highlighting its complexities and the considered challenging yet realistic and practical scenarios. The ASTPA framework is presented in Section 3, followed by a detailed description of the developed gradient-free sampling scheme in Section 4. A summary of the pCN-based ASTPA framework is presented in Section 5. To fully evaluate the capabilities of the proposed methodology, Section 6 demonstrates its performance, successfully comparing it with state-of-the-art Subset Simulation and advanced importance sampling methods across a series of challenging low- and high-dimensional problems. The paper then concludes in Section 7.
2 Rare Event Probability Estimation
Let $\mathbf{X}$ be a continuous random vector taking values in $\mathbb{R}^d$ and having a joint probability density function (PDF) $\pi(\mathbf{x})$. We are interested in rare events $F=\{\mathbf{x}\in\mathbb{R}^d : g(\mathbf{x})\leq 0\}$, where $g:\mathbb{R}^d\to\mathbb{R}$ is a performance function, also known as the limit-state function, defining the occurrence of a rare event. In this work, we aim to estimate the rare event probability $p_F$:
$$p_F=\mathbb{P}(\mathbf{X}\in F)=\int_{\mathbb{R}^d} I_F(\mathbf{x})\,\pi(\mathbf{x})\,\mathrm{d}\mathbf{x}=\mathbb{E}_{\pi}\big[I_F(\mathbf{X})\big] \quad (1)$$
where $I_F(\mathbf{x})$ is the indicator function, i.e., $I_F(\mathbf{x})=1$ if $\mathbf{x}\in F$, and $I_F(\mathbf{x})=0$ otherwise, and $\mathbb{E}$ is the expectation operator.
Our objective in this work is to estimate the integral in Eq. 1 under these challenging yet realistic settings: (i) the analytical calculation of Eq. 1 is generally intractable; (ii) the computational effort for evaluating $g(\mathbf{x})$ for each realization is assumed to be quite significant, often relying on computationally intensive models, necessitating the minimization of such function evaluations (model calls); (iii) the rare event probability $p_F$ is extremely low; (iv) the random variable space, $\mathbb{R}^d$, is high-dimensional; (v) the analytical gradient of the limit-state function is not available, motivating the use of gradient-free approaches; and (vi) the joint probability density, $\pi$, is assumed to follow an independent standard Gaussian distribution, i.e., $\pi=\mathcal{N}(\mathbf{0},\mathbf{I}_d)$, with $\mathbf{I}_d$ being an identity matrix, a common assumption in rare event estimation. In most cases, this assumption is not restrictive, as transforming the original random variables to the independent standard Gaussian space can be uncomplicated [44, 45, 46, 47]. However, when such Gaussian transformations are not feasible, directly estimating rare event probabilities in complex non-Gaussian spaces may require sophisticated gradient-based samplers, a challenge addressed in our Quasi-Newton mass preconditioned HMCMC (QNp-HMCMC)-based ASTPA framework, as demonstrated in [4, 48, 49]. The gradient-free sampling approach developed in this work is specifically tailored for Gaussian spaces, with ongoing efforts to extend these methods in the future to direct estimations in non-Gaussian spaces.
Under these settings, several sampling-based methods for estimating rare event probabilities become highly inefficient. For instance, the standard Monte Carlo estimator of Eq. 1 using $N$ draws from $\pi$, $\hat{p}_F^{MC}=\frac{1}{N}\sum_{i=1}^{N} I_F(\mathbf{x}_i)$, has a coefficient of variation $\sqrt{(1-p_F)/(N\,p_F)}$, rendering this estimate inefficient by requiring a prohibitively large number of samples to accurately quantify small probabilities; see setting (iii) above. As discussed in Section 1, importance sampling (IS) can efficiently compute the integral in Eq. 1, through sampling from an appropriate importance sampling density (ISD), $h(\mathbf{x})$. The ISD places greater importance on the rare event domain, $F$, in the random variable space compared to $\pi$, satisfying the sufficient condition $\mathrm{supp}(I_F\,\pi)\subseteq\mathrm{supp}(h)$, with $\mathrm{supp}(\cdot)$ denoting the support of the function. Eq. (1) can then be written as:
$$p_F=\int_{\mathbb{R}^d} I_F(\mathbf{x})\,\frac{\pi(\mathbf{x})}{h(\mathbf{x})}\,h(\mathbf{x})\,\mathrm{d}\mathbf{x}=\mathbb{E}_{h}\!\left[I_F(\mathbf{X})\,\frac{\pi(\mathbf{X})}{h(\mathbf{X})}\right] \quad (2)$$
leading to the unbiased IS estimator:
$$\hat{p}_F=\frac{1}{N}\sum_{i=1}^{N} I_F(\mathbf{x}_i)\,\frac{\pi(\mathbf{x}_i)}{h(\mathbf{x}_i)} \quad (3)$$
where $\mathbf{x}_i \sim h(\mathbf{x})$. The efficiency of IS relies heavily on the careful selection of both the ISD and the sampling algorithm. A theoretically optimal ISD $h^{\mathrm{opt}}$, providing a zero-variance IS estimator, is given by [3]:
$$h^{\mathrm{opt}}(\mathbf{x})=\frac{I_F(\mathbf{x})\,\pi(\mathbf{x})}{p_F} \quad (4)$$
However, this optimal ISD cannot be utilized because the normalizing constant in Eq. 4 is the sought probability $p_F$. Additionally, $h^{\mathrm{opt}}$ can be highly concentrated within the rare event domain, making it challenging to sample, particularly in high dimensions [4]. To address these challenges, we employ our developed importance sampling-based framework, ASTPA [29, 4], which constructs a relaxed ISD that approximates the optimal one while being much easier to sample. This relaxation allows for more efficient sampling, leading to a more accurate estimation of the sought probability. The primary focus of this work is to introduce an efficient, gradient-free sampling approach, which is especially beneficial when analytical gradients are not available, as detailed in Section 4.
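To make the inefficiency of crude Monte Carlo concrete, the following minimal sketch (function names are illustrative, not part of any library) evaluates the estimator's coefficient of variation $\sqrt{(1-p_F)/(N\,p_F)}$ and the sample size required to reach a target C.o.V:

```python
import math

def mc_cov(p_f: float, n: int) -> float:
    """C.o.V of the crude Monte Carlo estimator of a probability p_f."""
    return math.sqrt((1.0 - p_f) / (n * p_f))

def n_for_cov(p_f: float, target_cov: float) -> int:
    """Number of i.i.d. samples needed to reach a target C.o.V."""
    return math.ceil((1.0 - p_f) / (target_cov ** 2 * p_f))

# For p_F = 1e-6, a 10% C.o.V already requires on the order of 1e8
# limit-state evaluations, infeasible when each one is an expensive model call.
print(f"{n_for_cov(1e-6, 0.10):.2e}")
```

This back-of-the-envelope calculation is precisely what motivates the variance-reduction route taken by IS and ASTPA.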
3 Approximate Sampling Target with Post-processing Adjustment (ASTPA) in Gaussian Spaces
ASTPA estimates the rare event probability in Eq. 1 through an innovative non-sequential importance sampling variant, as introduced in [29, 4]. In this approach, ASTPA constructs a single unnormalized approximate sampling target $\tilde{h}(\mathbf{x})$, relaxing the optimal ISD in Eq. 4. After sampling $\tilde{h}$, its normalizing constant $C=\int_{\mathbb{R}^d}\tilde{h}(\mathbf{x})\,\mathrm{d}\mathbf{x}$ is computed utilizing inverse importance sampling. By employing the unnormalized importance sampling distribution $\tilde{h}$, Eq. 2 can be written as:
$$p_F=\mathbb{E}_{h}\!\left[I_F(\mathbf{X})\,\frac{\pi(\mathbf{X})}{\tilde{h}(\mathbf{X})}\right] C, \qquad h(\mathbf{x})=\frac{\tilde{h}(\mathbf{x})}{C} \quad (5)$$
Thus, computing $p_F$ is decomposed into two generally simpler tasks. The first involves constructing and sampling $\tilde{h}(\mathbf{x})$ (sampling the unnormalized $\tilde{h}$ is equivalent to sampling its normalized version $h$), to compute the unbiased expectation of the weighted indicator function (shifted probability estimate) as:
$$\hat{\tilde{p}}_F=\frac{1}{N}\sum_{i=1}^{N} I_F(\mathbf{x}_i)\,\frac{\pi(\mathbf{x}_i)}{\tilde{h}(\mathbf{x}_i)}, \qquad \mathbf{x}_i \sim h \quad (6)$$
The normalizing constant, $C$, is then estimated using inverse importance sampling, as discussed in Section 3.2. The ASTPA estimator for the sought rare event probability can then be computed as:
$$\hat{p}_F=\hat{\tilde{p}}_F\,\hat{C} \quad (7)$$
Fig. 1 provides a concise illustration of the ASTPA framework using a nonlinear bimodal limit state function, as detailed in Section 6.1. The construction of the approximate sampling target in ASTPA, along with recommended parameters, is discussed in Section 3.1. We then present the inverse importance sampling technique in Section 3.2. Subsequently, Section 3.3 presents the statistical properties of the ASTPA estimator in Eq. 7.
(Figure 1: Illustration of the ASTPA framework for a nonlinear bimodal limit-state function; panels (a)–(f) depict the constructed target distributions and the corresponding samples.)
3.1 Target distribution formulation
As discussed earlier, the basic idea is to construct a near-optimal sampling target distribution $\tilde{h}$, placing higher importance on the rare event domain $F$. In ASTPA, this is achieved by approximating the indicator function in Eq. 4 with a smooth likelihood function $\ell(\mathbf{x})$, chosen as a logistic cumulative distribution function (CDF) of the negative of a scaled limit state function, $-g(\mathbf{x})/g_c$, with $g_c$ being a scaling constant:
$$\ell(\mathbf{x})=F_{\mathcal{L}}\!\left(-\frac{g(\mathbf{x})}{g_c}\,\Big|\,\mu_g,\sigma\right)=\left(1+\exp\!\left(-\frac{-g(\mathbf{x})/g_c-\mu_g}{\sigma}\right)\right)^{-1} \quad (8)$$
where $\mu_g$ and $\sigma$ are the mean and dispersion factor of the logistic CDF, respectively. The approximate sampling target in ASTPA is then expressed as:
$$\tilde{h}(\mathbf{x})=\ell(\mathbf{x})\,\pi(\mathbf{x}) \quad (9)$$
The dispersion factor $\sigma$ controls the spread of the likelihood function and thus influences the shape of the constructed target $\tilde{h}$. Lower values of $\sigma$ within the recommended range generally work efficiently, whereas higher values within that range may be needed for high-dimensional cases involving strongly nonlinear effects. The mean parameter, $\mu_g$, of the logistic CDF is typically chosen so as to place the bulk of the likelihood inside the rare event domain, as derived in [4]. In Fig. 2, we illustrate how the dispersion factor, $\sigma$, influences the shape of the target distribution, based on a nonlinear bimodal limit state function, as detailed in Section 6.1. As shown, decreasing $\sigma$ results in a more concentrated target distribution inside the rare event domain. Although pushing the target further inside the rare event domain, i.e., closer to the optimal distribution $h^{\mathrm{opt}}$, can generate more rare event samples, it may also complicate the sampling process and, thus, the computation of the normalizing constant $C$, as discussed in [4].
The scaling constant $g_c$ accounts for the varying magnitudes of limit-state functions, ensuring that the input to the logistic CDF, evaluated at the mean of the original distribution, consistently falls within a predefined range. This process standardizes the input while keeping the recommended values for the CDF parameters unchanged. To ensure the input at the mean $\boldsymbol{\mu}_{\pi}$ lies within the desired range, $g_c$ is defined proportionally to the magnitude $|g(\boldsymbol{\mu}_{\pi})|$ whenever this magnitude falls outside that range, and $g_c=1$ otherwise. Our empirical analysis confirms the effectiveness of this scaling technique, demonstrating that the recommended range for $\sigma$ is practical for the studied Gaussian problems.
Fig. 3 demonstrates the effect of the scaling constant $g_c$ on constructing the target distribution $\tilde{h}$. Since the limit state function considered in this problem attains a large magnitude at the mean of the original distribution, a scaling constant $g_c>1$ is required. Fig. 3(b) shows the unnormalized target distribution constructed without applying the proper scaling constant. As a result, the input at the mean does not lie within the desired range, leading to a highly concentrated target. In contrast, Fig. 3(c) presents the corrected target distribution after applying the appropriate scaling factor $g_c$. These plots clearly demonstrate how scaling the limit-state function helps maintain an appropriately relaxed target, with the recommended parameter values remaining unchanged, thus enhancing sampling efficiency.
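The target construction of Eqs. 8-9 can be sketched as follows; the parameter defaults (`mu_g`, `sigma`), the magnitude-based scaling rule in `scale_constant`, and the toy limit state are illustrative assumptions, not the paper's exact recommended values:

```python
import numpy as np

def logistic_cdf(z, mu, sigma):
    """CDF of a logistic distribution with mean mu and dispersion sigma."""
    return 1.0 / (1.0 + np.exp(-(z - mu) / sigma))

def scale_constant(g, mean_x, bound=5.0):
    """Illustrative scaling rule: bring |g| at the prior mean into a
    predefined range; otherwise leave the limit state unscaled."""
    g_mean = abs(g(mean_x))
    return g_mean / bound if g_mean > bound else 1.0

def log_target(x, g, mu_g=0.0, sigma=0.3, g_c=1.0):
    """Unnormalized ASTPA-style target: logistic likelihood of -g/g_c
    times the standard Gaussian prior, evaluated in log space."""
    x = np.asarray(x, dtype=float)
    lik = logistic_cdf(-g(x) / g_c, mu_g, sigma)
    log_prior = -0.5 * x @ x - 0.5 * x.size * np.log(2.0 * np.pi)
    return np.log(lik + 1e-300) + log_prior

# toy limit state: failure when the radial distance exceeds 4
g = lambda x: 4.0 - np.linalg.norm(x)
g_c = scale_constant(g, np.zeros(2))
```

Working in log space keeps the target numerically stable in the far tails, where both the likelihood and the Gaussian prior underflow in linear scale.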
(Figures 2 and 3: Effect of the dispersion factor $\sigma$ and of the scaling constant $g_c$, respectively, on the shape of the constructed target distribution; panels (a)–(c) of Fig. 3 show the target construction without and with the appropriate scaling.)
3.2 Inverse importance sampling
Inverse importance sampling (IIS) is a general technique to compute normalizing constants, demonstrating exceptional performance in the context of rare event probability estimation [29]. Given an unnormalized distribution $\tilde{h}$ and samples $\{\mathbf{x}_i\}_{i=1}^{N}\sim h$, inverse importance sampling first fits an ISD $Q(\mathbf{x})$ based on the samples. IIS then estimates the normalizing constant $C$ as:
$$C=\int_{\mathbb{R}^d}\tilde{h}(\mathbf{x})\,\mathrm{d}\mathbf{x}=\int_{\mathbb{R}^d}\frac{\tilde{h}(\mathbf{x})}{Q(\mathbf{x})}\,Q(\mathbf{x})\,\mathrm{d}\mathbf{x}=\mathbb{E}_{Q}\!\left[\frac{\tilde{h}(\mathbf{X})}{Q(\mathbf{X})}\right] \quad (10)$$
By drawing $M$ i.i.d. samples from $Q$, the unbiased IIS estimator can be computed as:
$$\hat{C}=\frac{1}{M}\sum_{j=1}^{M}\frac{\tilde{h}(\mathbf{x}_j)}{Q(\mathbf{x}_j)}, \qquad \mathbf{x}_j \sim Q \quad (11)$$
In this work, a Gaussian Mixture Model (GMM), based on the available samples $\{\mathbf{x}_i\}_{i=1}^{N}$, is employed as the ISD $Q$. In low-dimensional spaces, we employ a GMM with a large number of Gaussian components that have full covariance matrices. This approach aims to closely approximate the target distribution with the ISD $Q$, thus improving the IIS estimator in Eq. 11. To address the challenges posed by high-dimensional spaces, GMMs with diagonal covariance matrices are used in the high-dimensional examples in this work. This choice reduces the number of GMM parameters to be estimated, thereby mitigating the scalability issues typically associated with density estimation methods. Notably, IIS has been shown to work effectively even with a crudely fitted $Q$ [4, 29].
The IIS technique is further enhanced using the splitting approach introduced in [4] to improve the robustness of the IIS estimator against the impact of rarely observed outliers or anomalous samples. In this approach, the sample set is split into two equal-sized subsets, and the IIS estimate is computed separately for each subset. If the two estimates are within a reasonable range of each other, the final estimate is taken as their average. Otherwise, the final estimate is conservatively set to the minimum of the two. This method has proven effective across all examples presented in this work.
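A minimal numpy-only sketch of IIS with the splitting safeguard follows; a single fitted Gaussian stands in for the recommended GMM (the text above notes that IIS tolerates a crudely fitted $Q$), and the agreement threshold between the two half-estimates is an assumed value:

```python
import numpy as np

def fit_gaussian(samples):
    """Crude ISD Q: one Gaussian fitted to the target samples (the paper
    uses a GMM; a single component keeps this sketch dependency-free)."""
    mu = samples.mean(axis=0)
    cov = np.cov(samples.T) + 1e-9 * np.eye(samples.shape[1])
    return mu, cov

def log_gauss(x, mu, cov):
    """Log-density of N(mu, cov) at x."""
    d = len(mu)
    diff = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2.0 * np.pi) + logdet
                   + diff @ np.linalg.solve(cov, diff))

def iis_estimate(log_h_tilde, samples, m, rng):
    """Inverse importance sampling with a splitting safeguard."""
    mu, cov = fit_gaussian(samples)
    xs = rng.multivariate_normal(mu, cov, size=m)
    logw = np.array([log_h_tilde(x) - log_gauss(x, mu, cov) for x in xs])
    half = m // 2
    c1, c2 = np.exp(logw[:half]).mean(), np.exp(logw[half:]).mean()
    # average if the two half-estimates agree; otherwise keep the
    # conservative minimum (the factor-of-2 threshold is an assumption)
    ratio = max(c1, c2) / max(min(c1, c2), 1e-300)
    return 0.5 * (c1 + c2) if ratio < 2.0 else min(c1, c2)
```

As a sanity check, feeding it an unnormalized Gaussian with a known constant (e.g., $5\,\mathcal{N}(\mathbf{0},\mathbf{I})$) recovers a value close to 5.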
3.3 Statistical properties of the ASTPA estimator
The ASTPA estimator of the rare event probability can finally be computed as:
$$\hat{p}_F=\hat{\tilde{p}}_F\,\hat{C}=\left(\frac{1}{N}\sum_{i=1}^{N} I_F(\mathbf{x}_i)\,\frac{\pi(\mathbf{x}_i)}{\tilde{h}(\mathbf{x}_i)}\right)\left(\frac{1}{M}\sum_{j=1}^{M}\frac{\tilde{h}(\mathbf{x}_j)}{Q(\mathbf{x}_j)}\right) \quad (12)$$
where $\{\mathbf{x}_i\}_{i=1}^{N}$ and $\{\mathbf{x}_j\}_{j=1}^{M}$ are samples from $h$ and $Q$, respectively.
As shown in [29, 4], the ASTPA estimator is unbiased, i.e., $\mathbb{E}[\hat{p}_F]=p_F$. The analytical Coefficient of Variation (C.o.V) of the ASTPA estimator can also be computed as $\sqrt{\hat{\sigma}^2(\hat{p}_F)}\,/\,\hat{p}_F$, where the variance $\hat{\sigma}^2(\hat{p}_F)$ is estimated by:
$$\hat{\sigma}^2(\hat{p}_F)=\hat{\sigma}^2(\hat{\tilde{p}}_F)\,\hat{C}^2+\hat{\sigma}^2(\hat{C})\,\hat{\tilde{p}}_F^{\,2}+\hat{\sigma}^2(\hat{\tilde{p}}_F)\,\hat{\sigma}^2(\hat{C}) \quad (13)$$
Here, $\hat{\tilde{p}}_F$ and $\hat{C}$ can be computed according to Eqs. 6 and 11, respectively. The variances $\hat{\sigma}^2(\hat{\tilde{p}}_F)$ and $\hat{\sigma}^2(\hat{C})$ can thus be estimated as follows:
$$\hat{\sigma}^2(\hat{\tilde{p}}_F)=\frac{1}{\tilde{N}}\,\widehat{\mathrm{Var}}_{h}\!\left[I_F(\mathbf{X})\,\frac{\pi(\mathbf{X})}{\tilde{h}(\mathbf{X})}\right], \qquad \hat{\sigma}^2(\hat{C})=\frac{1}{M}\,\widehat{\mathrm{Var}}_{Q}\!\left[\frac{\tilde{h}(\mathbf{X})}{Q(\mathbf{X})}\right] \quad (14)$$
where $\tilde{N}$ denotes the number of used Markov chain samples, accounting for the fact that the samples are not independent and identically distributed (i.i.d.) in this case. To address this, a thinning process is adopted to reduce the sample size from $N$ to $\tilde{N}$ based on the effective sample size (ESS) of the sample set $\{\mathbf{x}_i\}_{i=1}^{N}$. We select every $k$-th sample, where $k$, an integer, is determined by:
$$k=\left\lceil \frac{N}{\mathrm{ESS}_{\min}} \right\rceil \quad (15)$$
where $\mathrm{ESS}_{\min}$ represents the minimum effective sample size across all dimensions. To ensure $k$ remains within practical bounds, it is constrained to a small set of admissible values; any calculated value of $k$ outside this range is adjusted to the nearest bound within it. The analytical C.o.V computed according to this approach showed very good agreement with the sample C.o.V in all our numerical examples.
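The ESS-based thinning step can be sketched as below; the autocorrelation truncation rule and the clipping bounds `k_min`/`k_max` are illustrative assumptions standing in for the paper's admissible set:

```python
import numpy as np

def ess_1d(x):
    """Effective sample size of one chain dimension, using the
    autocorrelation function truncated at its first non-positive lag."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[n - 1:]
    acf = acf / (np.arange(n, 0, -1) * x.var() + 1e-300)
    s, t = 0.0, 1
    while t < n and acf[t] > 0.0:
        s += acf[t]
        t += 1
    return n / (1.0 + 2.0 * s)

def thinning_step(chain, k_min=2, k_max=10):
    """Thinning factor k = ceil(N / min ESS), clipped to practical bounds."""
    n, d = chain.shape
    ess_min = min(ess_1d(chain[:, j]) for j in range(d))
    k = int(np.ceil(n / ess_min))
    return max(k_min, min(k, k_max))
```

A nearly independent chain yields the lower bound, while a strongly autocorrelated chain is clipped at the upper bound, mirroring the constrained $k$ described above.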
4 Gradient-free Sampling of the Approximate Target () in ASTPA
In this section, we introduce a novel gradient-free approach for sampling the approximate target, $\tilde{h}$, in ASTPA. Rare event probability estimation typically faces two fundamental sampling challenges. First, efficiently sampling high-dimensional distributions is difficult due to the curse of dimensionality, where the volume of the space expands exponentially with the number of dimensions [50]. Second, the rare event domain, $F$, often lies in the tails of the probability distribution $\pi$, making it challenging to effectively guide samples toward these regions of interest, which may also exhibit multi-modality; see Fig. 1(a) for an example. In addressing the first challenge, we adopt the dimension-robust preconditioned Crank-Nicolson (pCN) algorithm within our approach, as discussed in Section 4.1. For the second challenge, we develop an effective discovery method, aimed at precisely locating rare event domains and appropriately initializing the pCN chains, as detailed in Section 4.2. Our developed algorithm, termed guided-pCN, thus enhances sampling of $\tilde{h}$ by overcoming the limitations of the standard pCN sampler in the context of rare event probability estimation.
4.1 The preconditioned Crank-Nicolson (pCN) algorithm
The preconditioned Crank-Nicolson (pCN) algorithm is a gradient-free Markov Chain Monte Carlo (MCMC) method designed for efficient sampling from high-dimensional probability distributions [36, 37]. To sample from a target distribution $\tilde{h}$, the pCN sampler proposes an update, $\mathbf{x}'$, at each iteration, $i$, by combining the current sample, $\mathbf{x}^{(i)}$, with a perturbation $\boldsymbol{\xi}$ drawn from the original (prior) Gaussian distribution, $\mathcal{N}(\mathbf{0},\boldsymbol{\Sigma})$, as follows:
$$\mathbf{x}'=\sqrt{1-\beta^2}\,\mathbf{x}^{(i)}+\beta\,\boldsymbol{\xi}, \qquad \boldsymbol{\xi}\sim\mathcal{N}(\mathbf{0},\boldsymbol{\Sigma}) \quad (16)$$
where $\beta\in(0,1]$ is a tunable scaling parameter, discussed below, and $\boldsymbol{\Sigma}$ is a covariance matrix. This update is equivalent to using a Gaussian proposal distribution $q(\mathbf{x}'\,|\,\mathbf{x}^{(i)})=\mathcal{N}\big(\sqrt{1-\beta^2}\,\mathbf{x}^{(i)},\,\beta^2\boldsymbol{\Sigma}\big)$, yielding the acceptance probability:
$$\alpha\big(\mathbf{x}',\mathbf{x}^{(i)}\big)=\min\left\{1,\ \frac{\ell(\mathbf{x}')}{\ell(\mathbf{x}^{(i)})}\right\} \quad (17)$$
where $\ell$ is the likelihood function in Eq. 8. This formula illustrates a fundamental characteristic of the pCN algorithm: its invariance with respect to the underlying Gaussian distribution $\pi$. In essence, this ensures that a proposed sample is always accepted when sampling from $\pi$ itself, making pCN highly suitable for efficiently exploring high-dimensional Gaussian spaces. This invariance can be understood by examining the Metropolis-Hastings (MH) ratio when using the pCN algorithm to sample $\pi$, as:
$$r=\frac{\pi(\mathbf{x}')\,q\big(\mathbf{x}^{(i)}\,|\,\mathbf{x}'\big)}{\pi(\mathbf{x}^{(i)})\,q\big(\mathbf{x}'\,|\,\mathbf{x}^{(i)}\big)} \quad (18)$$
Considering the marginal distribution $\mathbf{x}^{(i)}\sim\mathcal{N}(\mathbf{0},\boldsymbol{\Sigma})$ and the conditional distribution $\mathbf{x}'\,|\,\mathbf{x}^{(i)}\sim\mathcal{N}\big(\sqrt{1-\beta^2}\,\mathbf{x}^{(i)},\,\beta^2\boldsymbol{\Sigma}\big)$, the joint random vector $\big(\mathbf{x}^{(i)},\mathbf{x}'\big)$ is therefore Gaussian, with a zero mean vector and a symmetric joint covariance matrix whose cross-covariance blocks equal $\sqrt{1-\beta^2}\,\boldsymbol{\Sigma}$. The conditional distribution of $\mathbf{x}^{(i)}$ given $\mathbf{x}'$ is hence Gaussian with a conditional mean $\sqrt{1-\beta^2}\,\mathbf{x}'$ and a covariance $\beta^2\boldsymbol{\Sigma}$. We therefore get:
$$\pi(\mathbf{x}')\,q\big(\mathbf{x}^{(i)}\,|\,\mathbf{x}'\big)=p\big(\mathbf{x}^{(i)},\mathbf{x}'\big)=\pi(\mathbf{x}^{(i)})\,q\big(\mathbf{x}'\,|\,\mathbf{x}^{(i)}\big) \quad (19)$$
Consequently, the MH ratio in Eq. 18 simplifies to one, confirming that the pCN proposal is indeed invariant under the original Gaussian distribution $\pi$. As a result, when applying the pCN algorithm to sample from the target distribution $\tilde{h}=\ell\,\pi$, the acceptance probability reduces to the ratio between the likelihood function values at the proposed and the current samples:
$$\alpha\big(\mathbf{x}',\mathbf{x}^{(i)}\big)=\min\left\{1,\ \frac{\tilde{h}(\mathbf{x}')\,q\big(\mathbf{x}^{(i)}\,|\,\mathbf{x}'\big)}{\tilde{h}(\mathbf{x}^{(i)})\,q\big(\mathbf{x}'\,|\,\mathbf{x}^{(i)}\big)}\right\}=\min\left\{1,\ \frac{\ell(\mathbf{x}')}{\ell(\mathbf{x}^{(i)})}\right\} \quad (20)$$
The scaling parameter $\beta$ determines the balance between retaining information from the current sample and exploring new regions of the random variable space. When $\beta$ is close to zero, we have $\mathbf{x}'\approx\mathbf{x}^{(i)}$, which keeps the proposed sample close to the current one. This proximity leads to high acceptance rates but potentially slows the mixing of the Markov chain due to limited exploration, thus reducing the effective sample size. Conversely, as $\beta$ approaches one, we have $\mathbf{x}'\approx\boldsymbol{\xi}$, making $\mathbf{x}'$ largely independent of $\mathbf{x}^{(i)}$. This allows the sampler to explore the random variable space more broadly, but the proposed samples are more likely to land in low-probability regions, leading to an increase in rejections. To enhance the sampling efficiency, it is essential to find an optimal $\beta$ balancing these trade-offs, which is typically achieved by tuning $\beta$ to target a desired average acceptance ratio $\alpha^{*}$. The Robbins–Monro stochastic approximation algorithm provides a method for such adaptation as [51, 52]:
$$\beta_{i+1}=\beta_i+\gamma_i\left(\alpha_i-\alpha^{*}\right) \quad (21)$$
where $\alpha_i$ is the acceptance probability at iteration $i$, computed by Eq. 17, and $\gamma_i$ is a diminishing step-size sequence [32]. In this work, we target an average acceptance probability in a moderate range, with the lower values applied to high-dimensional distributions and the higher values to low-dimensional ones. This range is consistent with the values typically employed in the literature [37, 41] and has proven effective across all numerical examples considered in this work. Algorithm 1 describes the pCN algorithm, where $\mathbf{x}^{(0)}$ is the initial sample, $\beta_0$ is the initial scaling parameter, $\alpha^{*}$ is the desired average acceptance ratio, and $N_{st}$ indicates the number of steps in the pCN chain.
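The adaptive pCN chain of Eqs. 16-21 can be sketched as below, assuming an identity prior covariance; the Robbins–Monro step size $i^{-1/2}$, the clipping bounds on $\beta$, and the default $\alpha^{*}$ are illustrative choices rather than the paper's tuned values:

```python
import math
import numpy as np

def pcn(log_lik, x0, n_steps, beta0=0.5, alpha_star=0.3, rng=None):
    """Adaptive pCN chain targeting likelihood(x) * N(0, I).

    log_lik: log of the likelihood factor only (the Gaussian prior is
    handled implicitly by the pCN proposal, so it cancels in Eq. 20).
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    ll = log_lik(x)
    beta = beta0
    chain = np.empty((n_steps, x.size))
    for i in range(1, n_steps + 1):
        xi = rng.standard_normal(x.size)                # prior perturbation
        prop = np.sqrt(1.0 - beta ** 2) * x + beta * xi  # pCN proposal
        ll_prop = log_lik(prop)
        # acceptance depends only on the likelihood ratio
        alpha = 1.0 if ll_prop >= ll else math.exp(ll_prop - ll)
        if rng.random() < alpha:
            x, ll = prop, ll_prop
        # Robbins-Monro-type adaptation toward alpha_star, with clipping
        beta = min(max(beta + i ** -0.5 * (alpha - alpha_star), 1e-3), 0.999)
        chain[i - 1] = x
    return chain
```

As a quick check, with `log_lik(x) = -0.5 (x - 2)^2` in one dimension the chain targets the posterior $\mathcal{N}(1, 0.5)$ of that likelihood under a standard Gaussian prior.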
(Figure 4: Evolution of the proposed rare event domain discovery stage; panels (a)–(d) show the progression of the samples toward the rare event domain, and panel (e) shows the selected pCN seeds.)
4.2 Rare event domain discovery
In this section, we introduce a computationally efficient discovery approach for rare event domains, designed to enhance sampling efficiency by generating (multimodal) rare event samples that serve as initial seeds for the pCN chains. This approach addresses the limitations the pCN algorithm may encounter in scenarios involving multimodality and target distributions, $\tilde{h}$, that are significantly distant from the original (prior) distribution $\pi$.
(Figure 5: Discovery stage initialized with a more dispersed distribution; panels (a)–(c) show the sample evolution, and panel (d) shows the selected pCN seeds.)
This process begins by sampling from the original distribution, $\pi=\mathcal{N}(\mathbf{0},\mathbf{I}_d)$, or a more dispersed version, $\mathcal{N}(\mathbf{0},\rho^2\mathbf{I}_d)$, with the dispersion parameter $\rho\geq 1$, to facilitate broader exploration of the random variable space. This step is followed by conditional sampling based on the reciprocal of the original distribution within intermediate events $F_j$, defined as $F_j=\{\mathbf{x}: g(\mathbf{x})\leq \xi_j\}$, with decreasing thresholds $\xi_1>\xi_2>\dots\geq 0$ chosen adaptively as discussed below. The use of the reciprocal distribution significantly facilitates rapid diffusion toward the rare event domain, compared to sampling from the conditional distribution $\pi(\mathbf{x}\,|\,F_j)$. While the latter is necessary for methods sequentially computing the sought probability, our method focuses solely on generating samples in the rare event domain (without utilizing any sequential information). The procedure stops after generating a specified number of rare event samples, from which a defined number of seeds, $N_c$, is randomly selected to initialize the pCN chains, ensuring that the sampler is initialized within the rare event domain.
The detailed steps of this discovery process are outlined in Algorithm 2, where $N_s$ denotes the number of samples of each intermediate event $F_j$, $p_0$ is the conditional probability for intermediate events, controlling the selection of the threshold levels $\xi_j$, $\rho$ is a dispersion parameter for the initial distribution, and $N_c$ represents the number of the desired pCN chains. Fig. 4 outlines the proposed rare event domain discovery stage, showing the evolution of samples and seed selection, while the computational benefits of using a more dispersed initial distribution are demonstrated in Fig. 5. In this work, we use a dispersed initial distribution ($\rho>1$) for all numerical examples to facilitate broader exploration of the random variable space by the initial sample set.
Let $\{\mathbf{x}_i\}_{i=1}^{N_F}$ be the rare event sample set obtained in Algorithm 2. The probability of selecting $\mathbf{x}_i$ as a pCN seed can be given by:
$$P(\mathbf{x}_i)=\frac{\tilde{h}(\mathbf{x}_i)}{\sum_{j=1}^{N_F}\tilde{h}(\mathbf{x}_j)} \quad (22)$$
where $\tilde{h}(\mathbf{x}_i)$ is the value of the target distribution in ASTPA at sample $\mathbf{x}_i$. Then, $N_c$ pCN seeds can be sampled from the set without replacement, using the selection probability in Eq. 22. This approach ensures that pCN seeds are more likely to be selected from different rare event modes, based on their statistical weights. Fig. 4(e) and Fig. 5(d) illustrate the selected pCN seeds using this weighted selection method. While this technique is effective for generating multimodal seeds in low-dimensional cases, as demonstrated in the low-dimensional examples in this work, it may not scale well in high-dimensional, multimodal spaces due to the sensitivity of the high-dimensional target distribution values, even for relatively small perturbations. In such cases, we recommend using uniform sampling to select pCN seeds, which ensures a broader exploration of high-dimensional, multimodal rare event domains. This uniform sampling method is successfully adopted in the high-dimensional examples in this work. Two examples of this uniform sampling are shown in Fig. 6(a) and Fig. 6(b), illustrating that the selected seeds are located in the rare event domain but do not follow any relevant target distribution ($\tilde{h}$ or $\pi$). The computational efficiency of the proposed discovery approach, which requires significantly fewer model evaluations, just in order to identify the relevant domains, compared to conventional sequential methods utilized for probability estimation, is highlighted by comparing its performance in Fig. 6(a) and Fig. 6(b) with the sampling evolution of the Subset Simulation (SuS) method, as shown in Fig. 6(c).
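A loose, self-contained sketch of the discovery stage follows (Algorithm 2 itself is not reproduced here): parameter names (`n_samples`, `p0`, `rho`) mirror the description above, while the Metropolis proposal scale, the cap on intermediate levels, and the uniform final seed selection are illustrative assumptions:

```python
import numpy as np

def discover_seeds(g, d, n_chains, n_samples=300, p0=0.1, rho=2.0, rng=None):
    """Generate rare event samples and select pCN seeds among them."""
    rng = rng or np.random.default_rng()
    # dispersed initial samples for broader exploration
    x = rho * rng.standard_normal((n_samples, d))
    gv = np.apply_along_axis(g, 1, x)
    for _ in range(50):                                 # cap on levels
        xi = max(np.quantile(gv, p0), 0.0)              # adaptive threshold
        keep = gv <= xi
        x, gv = x[keep], gv[keep]
        # replenish via Metropolis moves favoring the reciprocal of the
        # standard Gaussian, restricted to the current intermediate event
        while len(x) < n_samples:
            j = rng.integers(len(x))
            prop = x[j] + 0.5 * rng.standard_normal(d)
            a = np.exp(0.5 * (prop @ prop - x[j] @ x[j]))
            if g(prop) <= xi and rng.random() < min(1.0, a):
                x = np.vstack([x, prop])
                gv = np.append(gv, g(prop))
        if xi <= 0.0:                                   # rare event reached
            break
    fail = x[gv <= 0.0]
    idx = rng.choice(len(fail), size=min(n_chains, len(fail)), replace=False)
    return fail[idx]
```

For low-dimensional multimodal problems, the uniform `rng.choice` draw could instead be weighted by the target values $\tilde{h}(\mathbf{x}_i)$, in the spirit of Eq. 22.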
[Figure 6: (a) and (b) pCN seeds selected via uniform sampling within the rare event domain; (c) sampling evolution of the Subset Simulation (SuS) method.]
5 Summary of Gradient-free ASTPA
The proposed guided pCN-based ASTPA framework can be applied as follows:
1. Construct the approximate sampling target in Eq. 9, where its two parameters and are determined as recommended in Section 3.1.
2. Run the rare event discovery stage, as described in Algorithm 2, to obtain initial seeds for the pCN sampler. , , and typically work effectively in Algorithm 2, with being more suitable for high-dimensional problems. Denote the total number of model calls in this stage as .
3. Generate pCN chains, as described in Algorithm 1, with a number of steps, , starting from the seeds obtained in the previous step, to get a sample set . A burn-in phase of length is considered, with denoting the total number of burn-in samples, i.e., , and . The required number of chains, , typically ranges between 5 and 25, with higher numbers recommended for multimodal cases.
4. Compute the expected value of the weighted indicator function, , using Eq. 6, based on .
5. Apply inverse importance sampling (IIS) to compute the normalizing constant of . This process starts by fitting a GMM, , based on , structured as recommended in Section 3.2. An i.i.d. sample set is then drawn from to compute using Eq. 11. can generally be around of .
6. Compute the sought rare event probability as .
A target total computational effort of model calls, , can then be allocated across the different stages according to the aforementioned steps. The convergence of the estimator, and/or any additionally needed model calls, can then be checked through Eq. 14.
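Step 3 above, the pCN sampling itself, can be sketched as follows. This is a minimal, hypothetical version: the scale parameter `beta` is held fixed rather than adapted via Robbins–Monro as in the paper, and `log_like` stands for the non-Gaussian factor of the ASTPA target, since the pCN proposal treats the standard Gaussian reference exactly and only this factor enters the acceptance ratio (the source of pCN's dimension-robustness).

```python
import numpy as np

def pcn_chain(log_like, x0, n_steps, beta=0.5, rng=None):
    """Preconditioned Crank-Nicolson chain for a target of the form
    pi(x) ∝ exp(log_like(x)) * N(x; 0, I).

    Proposal: x' = sqrt(1 - beta^2) * x + beta * xi, xi ~ N(0, I),
    accepted with probability min(1, exp(log_like(x') - log_like(x))).
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    ll = log_like(x)
    out = np.empty((n_steps, x.size))
    for t in range(n_steps):
        prop = np.sqrt(1.0 - beta**2) * x + beta * rng.standard_normal(x.size)
        ll_prop = log_like(prop)
        if np.log(rng.uniform()) < ll_prop - ll:   # accept/reject step
            x, ll = prop, ll_prop
        out[t] = x
    return out
```

With a flat `log_like` the chain reduces to an AR(1) process whose stationary law is the standard Gaussian, which provides a quick sanity check of the kernel.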
6 Numerical Results
This section studies several numerical examples to examine the performance and efficiency of the proposed methods. In all examples, the guided pCN-based ASTPA is implemented as summarized in Section 5. The results are compared with state-of-the-art gradient-free algorithms, including the improved Cross Entropy-based Importance Sampling (iCE-IS) algorithm [9], the Sequential Importance Sampling (SIS) method [20], and the adaptive Conditional Sampling Subset Simulation variant (aCS-SuS) [32], all of which have well documented, publicly available code implementations [53]. Comparisons are made in terms of accuracy and computational cost, based on independent simulations. Specifically, for each simulation , we record the target probability estimate , the total number of model calls , and the analytical Coefficient of Variation of ASTPA, , computed as discussed in Section 3.3. is defined for ASTPA in Section 5. Then, we report the mean of the rare event probability estimates, , the mean of the total number of model calls , and the sampling C.o.V, computed as:
(23)
The mean of the analytical C.o.V, , is also reported in parentheses. The total number of limit-state function evaluations for ASTPA has been determined based on achieving C.o.V values . Reference rare event probabilities are computed based on independent standard Monte Carlo simulations (MCs), as described in Section 2, using - samples, as appropriate. The problem dimensionality is denoted by , and all ASTPA parameters are carefully chosen for all examples but are not optimized.
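Eq. 23 is not rendered above; assuming the standard definition of the sampling C.o.V over the independent simulations (the sample standard deviation of the repeated probability estimates divided by their mean), it can be computed as:

```python
import numpy as np

def sampling_cov(estimates):
    """Sampling C.o.V of repeated, independent probability estimates:
    sample standard deviation over the sample mean (assumed form of Eq. 23)."""
    p = np.asarray(estimates, dtype=float)
    return p.std(ddof=1) / p.mean()
```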
6.1 Example 1: Nonlinear bimodal convex limit-state function
The first example involves a bimodal nonlinear limit-state function characterized by two independent standard normal random variables:
(24)
The approximate sampling target, , in ASTPA is constructed using a likelihood dispersion factor and a scaling constant , as discussed in Section 3.1. The discovery stage then starts using and , yielding 10 rare event seeds for the pCN sampler (), with each chain running for iterations. Subsequently, inverse importance sampling (IIS) is applied using samples. Table 1 compares the performance of the guided pCN-ASTPA with iCE-IS, SIS, and aCS-SuS. As shown in Table 1, the guided pCN-based ASTPA provides an estimate closely matching the reference probability obtained through standard Monte Carlo simulation (MCS). Compared to the other methods considered, the guided pCN-based ASTPA demonstrates improved performance, achieving a lower C.o.V using fewer total model calls, . Notably, iCE-IS also demonstrates strong efficiency, outperforming both SIS and aCS-SuS in this 2D example. The excellent agreement between the sampling C.o.V and the analytical C.o.V, reported in parentheses, confirms the accuracy and effectiveness of the ASTPA analytical C.o.V expression. Since this example was introduced earlier in the paper to illustrate the ASTPA framework and the discovery stage, the steps and performance of the proposed framework are visualized in Figs. 1 and 5.
| Eq. 24, 500 simulations | MCS | pCN-ASTPA | iCE-IS | SIS | aCS-SuS |
|---|---|---|---|---|---|
| Model calls | 1.00E9 | 2,373 | 2,972 | 5,525 | 5,741 |
| C.o.V (analytical in parentheses) | 0.01 | 0.16 (0.16) | 0.18 | 1.18 | 0.72 |
| Probability estimate | 9.47E-6 | 9.46E-6 | 9.52E-6 | 9.89E-6 | 9.66E-6 |
6.2 Example 2: Quartic bimodal limit-state function
In the second example, we consider a quartic bimodal limit-state function, defined in the standard Gaussian space as:
(25)
The approximate target is constructed with a likelihood dispersion factor of and a scaling constant . The discovery stage is applied using and , generating 18 rare event seeds for the pCN sampler (), with each chain running for steps. As in the previous example, inverse importance sampling (IIS) is used with samples. Fig. 7 shows the evolution of the rare event domain discovery stage, highlighting the effectiveness of the proposed algorithm in accurately identifying and sampling rare event regions.
| Eq. 25, 500 simulations | MCS | pCN-ASTPA | iCE-IS | SIS | aCS-SuS |
|---|---|---|---|---|---|
| Model calls | 1.00E9 | 3,165 | 5,000 | 10,750 | 7,945 |
| C.o.V (analytical in parentheses) | 0.30 | 0.12 (0.11) | 0.15 | 2.12 | 2.07 |
| Probability estimate | 5.91E-8 | 5.92E-8 | 5.80E-8 | 4.28E-8 | 5.79E-8 |
The results, as summarized in Table 2, show that the guided pCN-ASTPA significantly outperforms the other methods, achieving the lowest C.o.V, while using the fewest model calls (). This demonstrates the robustness and efficiency of the guided pCN-ASTPA framework, even in the presence of increased nonlinearity introduced through higher-order terms in the limit-state function. The guided pCN-based ASTPA also provides estimates in close agreement with the reference probability computed by Monte Carlo simulation (MCS). The very good agreement between the sampling and analytical C.o.V values (reported in parentheses) further confirms the reliability of the analytical C.o.V expression for ASTPA in this more challenging problem.
[Figure 7: evolution of the rare event domain discovery stage for the quartic bimodal limit-state function of Eq. 25.]
6.3 Example 3: The Himmelblau function
The Himmelblau function [54] is a commonly used fourth-order polynomial test function in nonlinear optimization. In this example, we modify the Himmelblau function to suit rare event scenarios, particularly those with multiple separated rare event domains. The modified limit-state function is defined in the standard Gaussian space as:
(26)
where and are independent standard normal random variables. The approximate target is constructed using a likelihood dispersion factor and a scaling constant , as described in Section 3.1 and Fig. 3. The discovery stage is applied using and , generating 16 rare event seeds for the pCN sampler, with each chain subsequently running for iterations. Similar to the previous examples, inverse importance sampling (IIS) is applied using samples. Fig. 8 illustrates the constructed target distribution, the rare event domain discovery process, and the pCN samples, effectively exploring all rare event modes. The results in Table 3 show that the guided pCN-ASTPA outperforms the other methods, achieving a lower C.o.V and fewer total model evaluations. This demonstrates the robustness and efficiency of the guided pCN-ASTPA framework, particularly in handling complex, multi-modal rare event domains. The estimate produced by the guided pCN-based ASTPA closely matches the reference probability, further confirming the method’s accuracy.
[Figure 8: constructed target distribution, rare event domain discovery process, and pCN samples for the modified Himmelblau function.]
| Eq. 26, 500 simulations | MCS | pCN-ASTPA | iCE-IS | SIS | aCS-SuS |
|---|---|---|---|---|---|
| Model calls | 1.00E9 | 3,430 | 6,466 | 6,216 | 7,101 |
| C.o.V (analytical in parentheses) | 0.05 | 0.18 (0.17) | 0.37 | 0.30 | 0.53 |
| Probability estimate | 2.81E-7 | 2.82E-7 | 2.36E-7 | 2.21E-7 | 3.02E-7 |
6.4 Example 4: Limit-state surface with changing topological parameter space
This example examines a limit-state function with significant topological changes, making it a challenging scenario for rare event sampling methods, as described in [28]. The function is defined as:
(27)
where and are independent standard normal random variables. This limit-state surface is particularly challenging due to its abrupt changes in topology, as shown in Fig. 9(a), requiring a robust sampling framework to accurately capture the rare event domain.
The guided pCN-ASTPA algorithm is applied with a likelihood dispersion factor of and a scaling constant , as outlined in Section 3.1. The discovery stage uses and , generating 6 rare event seeds for the pCN sampler, with each chain running for iterations. Inverse importance sampling (IIS) is employed with samples. Fig. 9 visualizes the limit-state surface, the constructed sampling target, and the evolution of samples in the rare event domain discovery process, highlighting the effectiveness of guided pCN-ASTPA in accurately capturing the influential rare event region. In contrast, Fig. 9(e) demonstrates the limitations of sequential methods like SuS, which struggle to efficiently identify the rare event mode with a competitive number of model evaluations. These rare event domain discovery limitations are more pronounced in iCE-IS in this example, which exhibits a significant error in the estimated mean of the rare event probability. As presented in Table 4, the guided pCN-ASTPA achieves an accurate rare event probability estimate, , with significantly fewer model evaluations and a lower Coefficient of Variation (C.o.V) compared to the other methods. This underscores the effectiveness of the pCN-ASTPA framework in tackling problems with complex topological changes, where traditional sequential methods often underperform.
[Figure 9: limit-state surface of Eq. 27 (a), constructed sampling target, evolution of samples in the rare event domain discovery process, and SuS sampling behavior (e).]
| Eq. 27, 500 simulations | MCS | pCN-ASTPA | iCE-IS | SIS | aCS-SuS |
|---|---|---|---|---|---|
| Model calls | 1.00E9 | 1,370 | 14,744 | 13,542 | 14,597 |
| C.o.V (analytical in parentheses) | 0.01 | 0.11 (0.13) | 0.09 | 3.33 | 1.92 |
| Probability estimate | 1.13E-5 | 1.10E-5 | 1.92E-8 | 1.32E-5 | 1.26E-5 |
6.5 Example 5: A thirty-four-story structural example
This example analyzes a thirty-four-story structure, adapted from [55], as represented in Fig. 10. The structure is subjected to thirty-four static loads , for . The floor slabs/beams are assumed to be rigid, and all columns have the same height, m, but different flexural stiffnesses , for . Both the loads and stiffnesses are treated as random variables, resulting in a total of random variables. The loads are modeled as normally distributed with a mean of 2 kN and a C.o.V of 0.4. Similarly, the stiffnesses are normally distributed with a mean of 20 MNm² and a C.o.V of 0.2. The top story displacement, , can be determined by summing the interstory displacements , expressed as:
(28)
The limit-state function indicates failure (a rare event) when the top story displacement exceeds a threshold value , set to 0.22, 0.23, and 0.235 m for different cases, to examine different levels of failure probability. All random variables are transformed into the independent standard Gaussian space for the analysis. Since , as discussed in Section 3.1, scaling is performed with . The discovery stage is performed with and to effectively identify the rare event seeds for 5 pCN chains (), each with a chain length of samples (). The inverse importance sampling (IIS) is performed with resampled points (). Table 5 highlights the outstanding performance of the guided pCN-ASTPA in this high-dimensional problem. Our framework consistently provides accurate failure probability estimates with a low Coefficient of Variation (C.o.V.), while requiring significantly fewer total model evaluations compared to the other methods considered.
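Since Eq. 28 is not rendered above, the displacement model can only be illustrated under an assumption. The sketch below uses a common shear-frame idealization, hypothetical here: the interstory drift at story i equals the story shear (the sum of the loads applied at and above that story) divided by an assumed story stiffness of 12 k_i / h³, and the top displacement is the sum of the drifts.

```python
import numpy as np

def top_displacement(F, k, h=4.0):
    """Top-story displacement of an idealized shear frame (assumed form
    of Eq. 28): drift_i = shear_i * h^3 / (12 * k_i), summed over stories.

    F : array of story loads (bottom story first), k : story stiffnesses,
    h : story height (value assumed; the source elides it).
    """
    shear = np.cumsum(F[::-1])[::-1]        # shear = sum of loads at/above each story
    return float(np.sum(shear * h**3 / (12.0 * k)))
```

For instance, with unit story height, two stories, loads of 12 each, and unit stiffnesses, the story shears are 24 and 12, giving drifts of 2 and 1 and a top displacement of 3.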
[Figure 10: the thirty-four-story structure.]
| Eq. 28, 500 simulations | MCS | pCN-ASTPA | iCE-IS | SIS | aCS-SuS |
|---|---|---|---|---|---|
| Model calls (threshold 0.22 m) | 1.00E8 | 7,540 | 7,570 | 10,082 | 9,200 |
| C.o.V (analytical in parentheses) | 0.03 | 0.14 (0.14) | 0.55 | 0.60 | 0.24 |
| Probability estimate | 2.41E-5 | 2.44E-5 | 1.93E-5 | 2.75E-5 | 2.42E-5 |
| Model calls (threshold 0.23 m) | 1.00E8 | 7,540 | 10,851 | 11,504 | 11,470 |
| C.o.V | 0.09 | 0.22 (0.20) | 0.81 | 0.63 | 0.30 |
| Probability estimate | 1.22E-6 | 1.22E-6 | 0.83E-6 | 1.25E-6 | 1.22E-6 |
| Model calls (threshold 0.235 m) | 1.00E8 | 7,540 | 11,833 | 12,782 | 12,815 |
| C.o.V | 0.20 | 0.27 (0.28) | 0.85 | 1.32 | 0.35 |
| Probability estimate | 2.46E-7 | 2.48E-7 | 1.55E-7 | 2.41E-7 | 2.41E-7 |
6.6 Example 6: Steel plate
In this example, we consider a low-carbon steel plate, as shown in Fig. 11 [56, 57]. The plate has dimensions of mm × mm, with a circular hole of radius 200 mm located at the center. Due to symmetry, only a quarter of the plate is analyzed. The Poisson ratio is assumed to be and the plate thickness is mm. A deterministic uniform load of is applied on the right edge of the plate. The elastic modulus of the plate is modeled as a two-dimensional homogeneous normal random field with a mean value of and a standard deviation of . The autocorrelation function of the random field is given as:
(29)
where and are correlation lengths, both set to 300 mm in this example. The stochastic field is described using the Karhunen-Loève (KL) expansion with 100 terms, implying 100 standard Gaussian random variables utilized within the KL expansion ().
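A discrete (mesh-based) KL representation of such a field can be sketched as follows. This is an illustration, not the paper's implementation: since Eq. 29 is not rendered above, a separable exponential autocorrelation exp(-|Δx|/lx - |Δy|/ly) is assumed, and the eigenpairs are obtained from the covariance matrix on the mesh points rather than from the continuous eigenvalue problem.

```python
import numpy as np

def kl_expansion(coords, mean, std, lx, ly, n_terms, xi):
    """Discrete Karhunen-Loeve realization of a homogeneous Gaussian field
    on mesh points `coords` (m x 2 array), with an assumed separable
    exponential autocorrelation. `xi` holds the n_terms standard Gaussian
    KL random variables."""
    dx = np.abs(coords[:, None, 0] - coords[None, :, 0])
    dy = np.abs(coords[:, None, 1] - coords[None, :, 1])
    C = std**2 * np.exp(-dx / lx - dy / ly)        # covariance matrix on the mesh
    vals, vecs = np.linalg.eigh(C)                 # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_terms]       # keep the n_terms largest modes
    lam, phi = vals[order], vecs[:, order]
    return mean + phi @ (np.sqrt(np.maximum(lam, 0.0)) * xi)
```

Setting all KL variables `xi` to zero recovers the mean field, a convenient sanity check; in the reliability analysis the `xi` are exactly the standard Gaussian random variables of the problem.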
The stress, strain, and displacement fields of the plate are governed by the Navier-Cauchy equilibrium equation [58], simplified under the plane stress assumption as:
(30)
where is the shear modulus, defined as , is the displacement vector, and is the body force acting on the plate. This equation is solved using Finite Element Analysis (FEA) with a mesh of 368 four-node quadrilateral elements, as shown in Fig. 11. The limit-state function is defined as:
(31)
where is the maximum first principal stress, with failure occurring when this stress exceeds the threshold of 410 MPa.
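Under plane stress, the first principal stress at a point follows directly from Mohr's circle, and the limit state compares its maximum over the mesh to the 410 MPa threshold. A short sketch (the element stresses `sx`, `sy`, `txy` would come from the FEA solution; the function names are illustrative):

```python
import numpy as np

def first_principal_stress(sx, sy, txy):
    """Maximum (first) principal stress under plane stress, from the normal
    stresses sx, sy and the shear stress txy (Mohr's circle)."""
    center = 0.5 * (sx + sy)
    radius = np.sqrt((0.5 * (sx - sy))**2 + txy**2)
    return center + radius

def limit_state(sx, sy, txy, threshold=410.0):
    """Eq. 31-style limit state: failure (g <= 0) when the largest first
    principal stress over the mesh exceeds the threshold (410 MPa here)."""
    return threshold - np.max(first_principal_stress(sx, sy, txy))
```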
For the guided pCN-ASTPA implementation, 5 chains () are used, each with a chain length of iterations (). The IIS step is performed with resampled points (). The likelihood dispersion factor is set to , and the scaling constant is applied. The discovery stage utilizes and . The results in Table 6 demonstrate that the guided pCN-ASTPA and iCE-IS methods provide comparable performance in this example, with pCN-ASTPA slightly better and both significantly outperforming SIS and aCS-SuS.
[Figure 11: (a) the steel plate; (b) finite element mesh of 368 four-node quadrilateral elements.]
| Eq. 31, 500 simulations | MCS | pCN-ASTPA | iCE-IS | SIS | aCS-SuS |
|---|---|---|---|---|---|
| Model calls | 1.00E7 | 7,247 | 7,611 | 11,605 | 12,996 |
| C.o.V (analytical in parentheses) | 0.29 | 0.16 (0.17) | 0.21 | 1.22 | 0.31 |
| Probability estimate | 1.04E-6 | 1.01E-6 | 1.02E-6 | 0.96E-6 | 1.01E-6 |
6.7 Example 7: Bimodal high-dimensional decic problem
To further investigate the performance of the proposed method in high-dimensional spaces with very significant nonlinearities, we consider the following challenging decic limit-state function in the standard normal space:
(32)
where controls the number of nonlinear terms, and hence the problem’s complexity. To construct the sampling target in ASTPA, the likelihood dispersion factor is set to , and the scaling constant to . The discovery stage of the proposed framework employs and . Following this stage, to pCN chains are used, each with a chain length ranging from to iterations. Higher values within this range are utilized in scenarios with greater nonlinearities to account for the increased sampling complexity. The inverse importance sampling (IIS) step employs to samples, with larger numbers chosen for more complex scenarios. As shown in Table 7, the guided pCN-ASTPA framework demonstrates remarkable adaptability to increasing nonlinearity, achieving the lowest coefficient of variation (C.o.V.) while requiring fewer model evaluations compared to other methods. In contrast, the iCE-IS method fails to converge with a comparable number of model calls in this example. The ability of the guided pCN-ASTPA to efficiently manage both high dimensionality and strong nonlinearity highlights its robustness and computational efficiency, making it particularly well-suited for complex reliability estimation problems involving intricate variable interactions.
| Eq. 32, 500 simulations | MCS | pCN-ASTPA | SIS | aCS-SuS |
|---|---|---|---|---|
| Model calls (Case 1) | 1.00E7 | 25,434 | 38,488 | 28,526 |
| C.o.V (analytical in parentheses) | 0.08 | 0.26 (0.23) | 3.39 | 0.76 |
| Probability estimate | 1.02E-5 | 1.01E-5 | 1.20E-5 | 1.06E-5 |
| Model calls (Case 2) | 1.00E7 | 26,568 | 41,921 | 29,946 |
| C.o.V | 0.12 | 0.33 (0.30) | 1.66 | 1.26 |
| Probability estimate | 6.66E-6 | 6.58E-6 | 6.42E-6 | 6.83E-6 |
| Model calls (Case 3) | 1.00E7 | 29,630 | 43,886 | 31,007 |
| C.o.V | 0.16 | 0.34 (0.31) | 1.86 | 1.56 |
| Probability estimate | 4.51E-6 | 4.46E-6 | 4.41E-6 | 4.58E-6 |
| Model calls (Case 4) | 1.00E7 | 35,072 | 45,830 | 38,520 |
| C.o.V | 0.18 | 0.33 (0.33) | 2.89 | 2.07 |
| Probability estimate | 3.12E-6 | 3.15E-6 | 3.24E-6 | 3.42E-6 |
7 Conclusions
In this work, we introduce a novel gradient-free framework for reliability and rare event probability estimation in complex, high-dimensional Gaussian spaces, particularly suited for black-box computational models where analytical gradients are not available. At the core of this framework is the foundational Approximate Sampling Target with Post-processing Adjustment (ASTPA) approach, which efficiently estimates rare event (failure) probabilities by constructing and sampling from an unnormalized target distribution, relaxing the optimal importance sampling density (ISD) [29]. This relaxation enables efficient sampling, especially in high-dimensional and highly nonlinear problem settings, with the generated samples employed to compute a shifted estimate of the sought probability. The inverse importance sampling (IIS) procedure subsequently refines the probability estimation by employing a fitted ISD using the already obtained samples to compute the target's normalizing constant, avoiding scalability challenges.
While previous work has shown that integrating gradient-based samplers with ASTPA yields outstanding performance in both Gaussian and non-Gaussian spaces [4, 29], the absence of analytical gradients hinders the efficient applicability of such approaches. The proposed gradient-free framework addresses this limitation by advancing the preconditioned Crank-Nicolson (pCN) algorithm, incorporating a rare event domain discovery stage to significantly enhance the sampling process. The scale parameter of the pCN algorithm is adapted using the Robbins–Monro stochastic approximation algorithm to achieve a – average acceptance probability, which was proven effective in our numerical examples. Despite the dimension-robustness of pCN, its effectiveness depends on carefully initializing its chains in relation to the regions of interest, particularly in high-dimensional multimodal distributions. To address this, we developed a computationally efficient rare event domain discovery technique, conditionally sampling well-designed synthetic variants of the original distribution, thereby facilitating rapid diffusion toward relevant regions in the random variable space. Our investigation into selecting rare event (pCN) seeds (from the obtained rare event sample set) shows that weighted sampling is effective in low-dimensional cases, whereas uniform sampling is more suitable for initializing the pCN chains in high-dimensional, multimodal cases, where the target distributions become sensitive to even small perturbations. This approach significantly enhances the performance of the pCN algorithm in sampling the approximate target in ASTPA.
The guided pCN-based ASTPA methodology has been tested across a range of challenging Gaussian problems, including mathematically formulated test functions and engineering examples, such as a 34-story frame structure and a steel plate, with rare event probabilities as low as . These mathematical limit-state functions are carefully designed to involve multimodality and up to decic nonlinear terms, representing challenging benchmarks. Our method consistently outperformed state-of-the-art gradient-free approaches, including improved Cross-Entropy Importance Sampling (iCE-IS), Sequential Importance Sampling (SIS), and adaptive Conditional Sampling Subset Simulation (aCS-SuS), both in computational cost and accuracy. Moreover, the ASTPA analytical coefficient of variation (C.o.V) demonstrated close agreement with empirical results, further validating the accuracy of this theoretical estimate.
This guided pCN-based ASTPA framework sets a new performance benchmark for gradient-free rare event probability estimation, offering a scalable and computationally efficient solution for high-dimensional, nonlinear, and multimodal problems. Future work will explore further extensions of the framework, including the development of gradient-free variants suited for direct applicability in non-Gaussian spaces and estimating high-dimensional first-passage problems under various settings [30].
References
- [1] Y. Bai, A. B. Dieker, and H. Lam, “Curse of dimensionality in rare-event simulation,” in 2023 Winter Simulation Conference (WSC), pp. 375–384, IEEE, 2023.
- [2] R. Y. Rubinstein and D. P. Kroese, Simulation and the Monte Carlo Method. John Wiley & Sons, 2016.
- [3] G. L. Ang, A. H.-S. Ang, and W. H. Tang, “Optimal importance-sampling density estimator,” Journal of Engineering Mechanics, vol. 118, no. 6, pp. 1146–1163, 1992.
- [4] E. Eshra, K. G. Papakonstantinou, and H. Nikbakht, “A direct importance sampling-based framework for rare event uncertainty quantification in non-Gaussian spaces,” arXiv preprint arXiv:2405.14149, 2025.
- [5] G. I. Schuëller and R. Stix, “A critical appraisal of methods to determine failure probabilities,” Structural Safety, vol. 4, no. 4, pp. 293–309, 1987.
- [6] S.-K. Au and J. L. Beck, “Estimation of small failure probabilities in high dimensions by Subset Simulation,” Probabilistic Engineering Mechanics, vol. 16, no. 4, pp. 263–277, 2001.
- [7] N. Kurtz and J. Song, “Cross-entropy-based adaptive importance sampling using Gaussian mixture,” Structural Safety, vol. 42, pp. 35–44, 2013.
- [8] Z. Wang and J. Song, “Cross-entropy-based adaptive importance sampling using von Mises-Fisher mixture for high dimensional reliability analysis,” Structural Safety, vol. 59, pp. 42–52, 2016.
- [9] I. Papaioannou, S. Geyer, and D. Straub, “Improved cross entropy-based importance sampling with a flexible mixture model,” Reliability Engineering & System Safety, vol. 191, p. 106564, 2019.
- [10] F. Uribe, I. Papaioannou, Y. M. Marzouk, and D. Straub, “Cross-entropy-based importance sampling with failure-informed dimension reduction for rare event simulation,” SIAM/ASA Journal on Uncertainty Quantification, vol. 9, no. 2, pp. 818–847, 2021.
- [11] J. Demange-Chryst, F. Bachoc, J. Morio, and T. Krauth, “Variational autoencoder with weighted samples for high-dimensional non-parametric adaptive importance sampling,” arXiv preprint arXiv:2310.09194, 2023.
- [12] S. Tong and G. Stadler, “Large deviation theory-based adaptive importance sampling for rare events in high dimensions,” SIAM/ASA Journal on Uncertainty Quantification, vol. 11, no. 3, pp. 788–813, 2023.
- [13] M. Chiron, C. Genest, J. Morio, and S. Dubreuil, “Failure probability estimation through high-dimensional elliptical distribution modeling with multiple importance sampling,” Reliability Engineering & System Safety, vol. 235, p. 109238, 2023.
- [14] S.-K. Au and J. L. Beck, “A new adaptive importance sampling scheme for reliability calculations,” Structural Safety, vol. 21, no. 2, pp. 135–158, 1999.
- [15] S.-K. Au and J. L. Beck, “Important sampling in high dimensions,” Structural Safety, vol. 25, no. 2, pp. 139–163, 2003.
- [16] A. Tabandeh, G. Jia, and P. Gardoni, “A review and assessment of importance sampling methods for reliability analysis,” Structural Safety, vol. 97, p. 102216, 2022.
- [17] M. Ehre, I. Papaioannou, and D. Straub, “Stein variational rare event simulation,” arXiv preprint arXiv:2308.04971, 2023.
- [18] T. Cui, S. Dolgov, and R. Scheichl, “Deep importance sampling using tensor trains with application to a priori and a posteriori rare events,” SIAM Journal on Scientific Computing, vol. 46, no. 1, pp. C1–C29, 2024.
- [19] P. Del Moral, A. Doucet, and A. Jasra, “Sequential Monte Carlo samplers,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 68, no. 3, pp. 411–436, 2006.
- [20] I. Papaioannou, C. Papadimitriou, and D. Straub, “Sequential importance sampling for structural reliability analysis,” Structural Safety, vol. 62, pp. 66–75, 2016.
- [21] A. Sinha, M. O’Kelly, R. Tedrake, and J. C. Duchi, “Neural bridge sampling for evaluating safety-critical autonomous systems,” Advances in Neural Information Processing Systems, vol. 33, pp. 6402–6416, 2020.
- [22] J. Xian and Z. Wang, “Relaxation-based importance sampling for structural reliability analysis,” Structural Safety, vol. 106, p. 102393, 2024.
- [23] K. M. Zuev, J. L. Beck, S.-K. Au, and L. S. Katafygiotis, “Bayesian post-processor and other enhancements of Subset Simulation for estimating failure probabilities in high dimensions,” Computers & Structures, vol. 92, pp. 283–296, 2012.
- [24] Z. Wang, M. Broccardo, and J. Song, “Hamiltonian Monte Carlo methods for Subset Simulation in reliability analysis,” Structural Safety, vol. 76, pp. 51–67, 2019.
- [25] M. D. Shields, D. G. Giovanis, and V. Sundar, “Subset Simulation for problems with strongly non-Gaussian, highly anisotropic, and degenerate distributions,” Computers & Structures, vol. 245, p. 106431, 2021.
- [26] W. Chen, Z. Wang, M. Broccardo, and J. Song, “Riemannian Manifold Hamiltonian Monte Carlo based Subset Simulation for reliability analysis in non-Gaussian space,” Structural Safety, vol. 94, p. 102134, 2022.
- [27] D. Thaler, S. L. Dhulipala, F. Bamer, B. Markert, and M. D. Shields, “Reliability analysis of complex systems using Subset Simulations with Hamiltonian Neural Networks,” arXiv preprint arXiv:2401.05244, 2024.
- [28] K. Breitung, “The geometry of limit state function graphs and Subset Simulation: Counterexamples,” Reliability Engineering & System Safety, vol. 182, pp. 98–106, 2019.
- [29] K. G. Papakonstantinou, H. Nikbakht, and E. Eshra, “Hamiltonian MCMC methods for estimating rare events probabilities in high-dimensional problems,” Probabilistic Engineering Mechanics, vol. 74, p. 103485, 2023.
- [30] K. G. Papakonstantinou, E. Eshra, and H. Nikbakht, “Hamiltonian MCMC based framework for time-variant rare event uncertainty quantification,” in 14th International Conference on Applications of Statistics and Probability in Civil Engineering (ICASP), Dublin, Ireland, 2023.
- [31] K. M. Zuev and L. S. Katafygiotis, “Modified Metropolis–Hastings algorithm with delayed rejection,” Probabilistic Engineering Mechanics, vol. 26, no. 3, pp. 405–412, 2011.
- [32] I. Papaioannou, W. Betz, K. Zwirglmaier, and D. Straub, “MCMC algorithms for Subset Simulation,” Probabilistic Engineering Mechanics, vol. 41, pp. 89–103, 2015.
- [33] A. Mira, “On Metropolis–Hastings algorithms with delayed rejection,” Metron, vol. 59, no. 3-4, pp. 231–241, 2001.
- [34] F. Miao and M. Ghosn, “Modified Subset Simulation method for reliability analysis of structural systems,” Structural Safety, vol. 33, no. 4-5, pp. 251–260, 2011.
- [35] A. Santoso, K. Phoon, and S. Quek, “Modified Metropolis–Hastings algorithm with reduced chain correlation for efficient Subset Simulation,” Probabilistic Engineering Mechanics, vol. 26, no. 2, pp. 331–341, 2011.
- [36] R. M. Neal, “Regression and classification using Gaussian process priors,” Bayesian Statistics, vol. 6, pp. 475–501, 1998.
- [37] S. L. Cotter, G. O. Roberts, A. M. Stuart, and D. White, “MCMC methods for functions: modifying old algorithms to make them faster,” Statistical Science, pp. 424–446, 2013.
- [38] M. Hairer, A. M. Stuart, and S. J. Vollmer, “Spectral gaps for a Metropolis–Hastings algorithm in infinite dimensions,” The Annals of Applied Probability, vol. 24, no. 6, pp. 2455 – 2490, 2014.
- [39] Z. Hu, Z. Yao, and J. Li, “On an adaptive preconditioned Crank–Nicolson MCMC algorithm for infinite dimensional Bayesian inference,” Journal of Computational Physics, vol. 332, pp. 492–503, 2017.
- [40] B. Carrera and I. Papaioannou, “Covariance-based MCMC for high-dimensional Bayesian updating with Sequential Monte Carlo,” Probabilistic Engineering Mechanics, vol. 77, p. 103667, 2024.
- [41] D. Rudolf and B. Sprungk, “On a generalization of the preconditioned Crank–Nicolson Metropolis algorithm,” Foundations of Computational Mathematics, vol. 18, pp. 309–343, 2018.
- [42] S.-K. Au and E. Patelli, “Rare event simulation in finite-infinite dimensional space,” Reliability Engineering & System Safety, vol. 148, pp. 67–77, 2016.
- [43] S. Xiao and W. Nowak, “Failure probability estimation with failure samples: An extension of the two-stage Markov chain Monte Carlo simulation,” Mechanical Systems and Signal Processing, vol. 212, p. 111300, 2024.
- [44] M. Hohenbichler and R. Rackwitz, “Non-normal dependent vectors in structural safety,” Journal of the Engineering Mechanics Division, vol. 107, no. 6, pp. 1227–1238, 1981.
- [45] A. Der Kiureghian and P.-L. Liu, “Structural reliability under incomplete probability information,” Journal of Engineering Mechanics, vol. 112, no. 1, pp. 85–104, 1986.
- [46] R. Lebrun and A. Dutfoy, “An innovating analysis of the Nataf transformation from the copula viewpoint,” Probabilistic Engineering Mechanics, vol. 24, no. 3, pp. 312–320, 2009.
- [47] I. Papaioannou and D. Straub, “Efficient sampling of non-Gaussian priors in high dimensions,” in Proc. 12th International Conference on Structural Safety & Reliability (ICOSSAR), Vienna, Austria, 2017.
- [48] K. G. Papakonstantinou, H. Nikbakht, and E. Eshra, “Quasi-Newton Hamiltonian MCMC sampling for reliability estimation in high-dimensional non-Gaussian spaces,” in Proc. 13th International Conference on Structural Safety & Reliability (ICOSSAR), Shanghai, China, 2022.
- [49] H. Nikbakht and K. G. Papakonstantinou, “A direct Hamiltonian MCMC approach for reliability estimation,” in Proc. 3rd International Conference on Uncertainty Quantification in Computational Sciences and Engineering (UNCECOMP), Hersonissos, Greece, 2019.
- [50] M. Betancourt, “A conceptual introduction to Hamiltonian Monte Carlo,” arXiv preprint arXiv:1701.02434, 2017.
- [51] H. Robbins and S. Monro, “A stochastic approximation method,” The Annals of Mathematical Statistics, pp. 400–407, 1951.
- [52] C. Andrieu and J. Thoms, “A tutorial on adaptive MCMC,” Statistics and Computing, vol. 18, pp. 343–373, 2008.
- [53] Engineering Risk Analysis Group, “Reliability analysis tools,” 2024. Technical University of Munich. Available at https://github.com/ERA-Software/Overview.
- [54] D. M. Himmelblau, Applied Nonlinear Programming. McGraw-Hill, 1972.
- [55] C. Bucher, Computational Analysis of Randomness in Structural Mechanics: Structures and Infrastructures Book Series, vol. 3. CRC Press, 2009.
- [56] I. Papaioannou, M. Ehre, and D. Straub, “PLS-based adaptation for efficient PCE representation in high dimensions,” Journal of Computational Physics, vol. 387, pp. 186–204, 2019.
- [57] M. Ko and K. G. Papakonstantinou, “Quadratic point estimate method for probabilistic moments computation,” Probabilistic Engineering Mechanics, vol. 79, p. 103705, 2025.
- [58] C. Johnson, Numerical Solution of Partial Differential Equations by the Finite Element Method. Dover Publications, 2012.