
Tail approximations for sums of dependent regularly varying random variables under Archimedean copula models

Hélène COSSETTE, Etienne MARCEAU, Quang Huy NGUYEN (Université Laval, École d’Actuariat, 2425, rue de l’Agriculture, Pavillon Paul-Comtois, local 4177, Québec (Québec) G1V 0A6, Canada) and Christian Y. ROBERT (Université de Lyon, Université Lyon 1, Institut de Science Financière et d’Assurances, 50 Avenue Tony Garnier, F-69007 Lyon, France)
Abstract

In this paper, we compare two numerical methods for approximating the probability that the sum of dependent regularly varying random variables exceeds a high threshold under Archimedean copula models. The first method is based on conditional Monte Carlo. We present four estimators and show that most of them have bounded relative errors. The second method is based on analytical expressions of the multivariate survival or cumulative distribution functions of the regularly varying random variables and provides sharp and deterministic bounds of the probability of exceedance. We discuss implementation issues and illustrate the accuracy of both procedures through numerical studies.

Keywords: Tail approximation; Archimedean Copulas; Dependent regularly varying random variables; Conditional Monte Carlo simulation; Numerical Bounds

1 Introduction

A well-known problem in applied probability is the evaluation of the probability that the sum of n random variables (rvs) exceeds a certain level s. This problem arises in many areas of application such as actuarial science, finance, quantitative risk management and reliability. Different methods can be used to tackle this problem depending on the type of distributions of the random variables and their interaction, as well as on the values of n and s.

In this paper, we consider the case of n positive heavy-tailed random variables that are linked through dependence structures based on Archimedean copulas. An n-dimensional copula C is a multivariate distribution on [0,1]^{n} with uniform margins. Following [Ling, 1965], any Archimedean copula can be simply written as

C(u_{1},...,u_{n})=\Phi\left(\Phi^{\leftarrow}(u_{1})+...+\Phi^{\leftarrow}(u_{n})\right),\quad(u_{1},...,u_{n})\in\left[0,1\right]^{n}, (1)

where \Phi is a non-increasing function referred to as the generator of the copula C and \Phi^{\leftarrow} is the generalized inverse function of \Phi defined as \Phi^{\leftarrow}(x)=\inf\{t\in\mathbb{R}^{+}:\Phi(t)\leq x\}. The conditions under which a generator \Phi defines a proper n-dimensional copula are given in detail in [McNeil and Nešlehová, 2009].

Let \mathbf{U}=\left(U_{1},...,U_{n}\right) be a random vector distributed as an Archimedean copula C. For i=1,\ldots,n, we assume that the survival distribution function of the i-th rv of the sum is regularly varying, i.e. it satisfies \bar{F}_{i}(x)=1-F_{i}(x)=x^{-\alpha_{i}}l_{i}(x), where \alpha_{i}>0 and l_{i} is a slowly varying function at infinity. We assume without loss of generality that \alpha_{1}\leq\ldots\leq\alpha_{n}. Throughout the paper we shall consider two sums that are linked through the Archimedean copula C in the following way

S_{n}^{X}=\sum_{i=1}^{n}X_{i}\quad\text{and}\quad S_{n}^{Y}=\sum_{i=1}^{n}Y_{i},

where

\mathbf{X}=(X_{1},...,X_{n})\overset{d}{=}(F_{1}^{\leftarrow}(U_{1}),...,F_{n}^{\leftarrow}(U_{n}))\quad\text{and}\quad\mathbf{Y}=(Y_{1},...,Y_{n})\overset{d}{=}\left(\overline{F}_{1}^{\leftarrow}(U_{1}),...,\overline{F}_{n}^{\leftarrow}(U_{n})\right),

with F_{i}^{\leftarrow}(u_{i})=\inf\{t\in\mathbb{R}^{+}:F_{i}(t)\geq u_{i}\} since F_{i} is a non-decreasing function. Note that the multivariate cumulative distribution function of \mathbf{X} is given by

\Pr(X_{1}\leq x_{1},...,X_{n}\leq x_{n})=C(F_{1}(x_{1}),...,F_{n}(x_{n})),\quad(x_{1},...,x_{n})\in\mathbb{R}^{n},

while the multivariate survival distribution function of \mathbf{Y} is given by

\Pr(Y_{1}>y_{1},...,Y_{n}>y_{n})=C\left(\overline{F}_{1}(y_{1}),...,\overline{F}_{n}(y_{n})\right),\quad(y_{1},...,y_{n})\in\mathbb{R}^{n}.

The Archimedean copula C is referred to as the copula of \mathbf{X} and as the survival copula of \mathbf{Y}.

To approximate the probabilities z_{X}(s)=\Pr\left(S_{n}^{X}>s\right) and z_{Y}(s)=\Pr\left(S_{n}^{Y}>s\right) when s is large, one could use functions that are asymptotically equivalent to these probabilities. However, there are few results concerning the asymptotic behaviours of these probabilities. Their behaviour actually depends strongly on the tails of the Archimedean copulas (see [Charpentier and Segers, 2009] for a fine analysis of the several types of tails based on characteristics of the Archimedean generator), and strong assumptions have to hold to characterize these behaviours. For example, if the upper tails of the Archimedean copula C are independent, i.e. \lim_{u\rightarrow 1}\Pr(U_{i}>u|U_{j}>u)=0 for all i\neq j, and if there exist n-1 non-negative constants c_{2},...,c_{n} such that c_{i}=\lim_{s\rightarrow\infty}\bar{F}_{i}(s)/\bar{F}_{1}(s), then it can be shown that

\lim\limits_{s\rightarrow\infty}\frac{\Pr(S_{n}^{X}>s)}{\overline{F}_{1}(s)}=1+\sum_{i=2}^{n}c_{i}.

If instead the lower tails of the Archimedean copula C are independent, i.e. \lim_{u\rightarrow 0}\Pr(U_{i}<u|U_{j}<u)=0 for all i\neq j, then

\lim\limits_{s\rightarrow\infty}\frac{\Pr(S_{n}^{Y}>s)}{\overline{F}_{1}(s)}=1+\sum_{i=2}^{n}c_{i}

(see e.g. [Jessen and H., 2006] or [Yuen and Yin, 2012]). [Sun and Li, 2010] studied the asymptotic behaviours of z_{X}(s) and z_{Y}(s) under the assumption of identically distributed marginals and specific upper- or lower-tail dependence. Let \beta>0 and l_{\Phi} be a slowly varying function at infinity. If 1-\Phi(x)=x^{\beta}l_{\Phi}\left(x^{-1}\right), they proved that

\lim\limits_{s\rightarrow\infty}\frac{\Pr(S_{n}^{X}>s)}{\Pr(X_{1}>s)}=q_{n}^{C}(\alpha,\beta),

where \alpha denotes the common tail index and

q_{n}^{C}(\alpha,\beta)=\int_{\sum_{i=1}^{n}v_{i}^{-1/\alpha}>1}\frac{\partial^{n}}{\partial v_{1}...\partial v_{n}}\sum_{1\leq i_{1},...,i_{j}\leq n}\left((-1)^{j-1}(v_{i_{1}}^{1/\beta}+...+v_{i_{j}}^{1/\beta})^{\beta}\right)\mathrm{d}v_{1}...\mathrm{d}v_{n}.

If instead the generator satisfies \Phi(x)=x^{-\beta}l_{\Phi}\left(x\right), they proved that

\lim\limits_{s\rightarrow\infty}\frac{\Pr(S_{n}^{Y}>s)}{\Pr(Y_{1}>s)}=q_{n}^{D}(\alpha,\beta),

where

q_{n}^{D}(\alpha,\beta)=\int_{\sum_{i=1}^{n}v_{i}^{-1}>1}\frac{\partial^{n}}{\partial v_{1}...\partial v_{n}}\left(v_{1}^{-\alpha/\beta}+...+v_{n}^{-\alpha/\beta}\right)^{-\beta}\mathrm{d}v_{1}...\mathrm{d}v_{n},

(see also [Wüthrich, 2003]). Although q_{n}^{C}(\alpha,\beta) and q_{n}^{D}(\alpha,\beta) are known, they do not have closed-form expressions and they cannot be easily computed.

In this paper, we aim to provide two numerical methods for approximating z_{X}(s) and z_{Y}(s) for different values of n and s and choices of parameters and functions: ((\alpha_{1},l_{1}),\ldots,(\alpha_{n},l_{n})) and \Phi.

Our first method is based on conditional Monte Carlo and is ideally suited when s and/or n are large. The classical Monte Carlo method is easy to implement and can be applied in complex situations such as high dimensional calculations. However, it is well known that it is inadequate for small probability simulation since the relative errors (variation coefficients) are too large. [Asmussen and Glynn, 2007] introduced the relative error as a measure of efficiency of an estimator and several definitions of efficient estimators. An unbiased estimator Z(s) of the probability z(s), with relative error e(Z(s))=\sqrt{\mathbb{E}[Z^{2}(s)]}/z(s), is called (i) a logarithmically efficient estimator if \lim\sup_{s\rightarrow\infty}e(Z(s))\,[z(s)]^{\epsilon}=0 for all \epsilon>0; (ii) an estimator with bounded relative error if \lim\sup_{s\rightarrow\infty}e(Z(s))<\infty; (iii) an estimator with vanishing relative error if \lim\sup_{s\rightarrow\infty}e(Z(s))=0.
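As a small illustration of these efficiency notions, the following sketch estimates the relative error e(Z(s)) of a crude Monte Carlo estimator from i.i.d. replications; the function name and the Pareto example are our own illustrative choices, not part of the paper.

```python
import numpy as np

def empirical_relative_error(samples):
    """Estimate e(Z(s)) = sqrt(E[Z^2(s)]) / z(s) from i.i.d. replications of Z(s)."""
    samples = np.asarray(samples, dtype=float)
    z_hat = samples.mean()                    # estimate of z(s)
    second_moment = np.mean(samples ** 2)     # estimate of E[Z^2(s)]
    return np.sqrt(second_moment) / z_hat

# Crude Monte Carlo for Pr(X > 20) with a Pareto tail Pr(X > x) = x^{-2}, x >= 1.
rng = np.random.default_rng(0)
x = (1.0 - rng.random(100_000)) ** (-1.0 / 2.0)
indicators = (x > 20.0).astype(float)
print(empirical_relative_error(indicators))   # roughly 1/sqrt(Pr(X > 20)) = 20
```

For an indicator estimator the relative error grows like 1/\sqrt{z(s)}, which is exactly why crude Monte Carlo deteriorates for rare events and motivates the conditional Monte Carlo constructions below.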

For sums of independent random variables, the most widely used alternatives to crude Monte Carlo computation of rare-event probabilities are conditional Monte Carlo and importance sampling. [Asmussen and Binswanger, 1997] propose a logarithmically efficient algorithm based on conditional Monte Carlo simulation using order statistics.

[Boots and Shahabuddin, 2001] use importance sampling to simulate ruin probabilities for subexponential claims and [Juneja and Shahabuddin, 2002] use importance sampling based on hazard rate twisting to simulate heavy-tailed processes. [Asmussen and Kroese, 2006] propose two algorithms which combine importance sampling and conditional Monte Carlo and study their efficiency in the Pareto and Weibull cases.

Estimating the tail distribution of a sum of dependent random variables via simulation requires a specific expression for the dependence structure or a closed-form expression for the conditional distribution functions; elliptical distributions are one such example. For an elliptical dependence structure, [Blanchet and Rojas-Nandayapa, 2011] proposed a conditional Monte Carlo estimator for the tail distribution of the sum of log-elliptical random variables and proved that it is logarithmically efficient. The sum of log-elliptical random variables was also considered by [Kortschak and Hashorva, 2013] using the simulation method introduced by [Asmussen and Kroese, 2006], with favorable results especially in the multivariate lognormal case. [Asmussen et al., 2011] and [Blanchet et al., 2008] focus on the efficient estimation of sums of correlated lognormals using importance sampling and conditional Monte Carlo strategies. [Chan and Kroese, 2010] and [Chan and Kroese, 2011] use conditional Monte Carlo, notably in a credit risk setting under the t-copula model, to estimate rare-event probabilities.

In this paper, we introduce four different estimators of the probabilities z_{X}(s)=\Pr\left(S_{n}^{X}>s\right) and z_{Y}(s)=\Pr\left(S_{n}^{Y}>s\right) using techniques of conditional Monte Carlo simulation. The main idea behind our estimators is to first isolate the known probabilities \Pr(M_{n}^{X}>s) or \Pr(M_{n}^{Y}>s), where M_{n}^{A} corresponds to the maximum element of a given vector \mathbf{A} (because \Pr(M_{n}^{X}>s) and \Pr(M_{n}^{Y}>s) have closed-form expressions in our framework), and then to simulate conditionally on the values taken by these maxima. Two effective techniques for simulating vectors from Archimedean copulas, proposed in [Brechmann et al., 2013] and [McNeil and Nešlehová, 2009], will be used. We show that most of our estimators have bounded relative errors.

Our second method is based on analytical expressions of the multivariate survival distribution function and provides sharp, deterministic numerical bounds for the probabilities, using the same ideas as developed in [Cossette et al., 2014]. This method performs very well when n is relatively small and effectively complements the conditional Monte Carlo method.

The outline of the paper is as follows. In Section 2, we present the two simulation techniques related to Archimedean copulas which are later used to develop the proposed estimators. These estimators are introduced, described and discussed in Section 3. Some results on their asymptotic efficiency are also given. Section 4 explains how to derive and compute the numerical bounds following the approach proposed in [Cossette et al., 2014]. Section 5 illustrates the accuracy of both methods through a numerical study and discusses implementation issues.

2 Simulation and conditional simulation with Archimedean copulas

The classical simulation method for a dependent vector relies on conditional distributions. Consider a random vector \mathbf{U} with density function c(u_{1},...,u_{n}) which can be decomposed as the product of conditional densities

c(u_{1},...,u_{n})=c_{n|n-1,\ldots,1}(u_{n}|u_{n-1},...,u_{1})...c_{2|1}(u_{2}|u_{1})c_{1}(u_{1}).

The classical procedure for simulating such a vector \mathbf{U} is then: simulate U_{1} based on c_{1}(u_{1}), simulate U_{2} based on c_{2|1}(u_{2}|u_{1}), ..., simulate U_{n} based on c_{n|n-1,...,1}(u_{n}|u_{n-1},...,u_{1}). Hence, a realization of the vector \mathbf{U} is obtained by computing (n-1) inverses of conditional distribution functions. However, this can be difficult and time-consuming when the distribution function of \mathbf{U} is an Archimedean copula.

Another method for an Archimedean copula is to consider the mixed exponential or frailty representation often used to model dependent lifetimes and discussed, notably, in [Marshall and Olkin, 1988], [McNeil, 2008] and [Hofert, 2008]. In this case, the Archimedean generator \Phi is the Laplace-Stieltjes transform of a non-negative random variable. Such a method thus requires inverting the Laplace-Stieltjes transform \Phi, which cannot always be done explicitly.

To circumvent these problems, one can resort to two effective simulation techniques proposed in [Brechmann et al., 2013] and [McNeil and Nešlehová, 2009]. More precisely, [Brechmann et al., 2013] use the Kendall distribution function while [McNeil and Nešlehová, 2009] suggest a simulation method which relies on \ell_{1}-norm symmetric distributions.

2.1 Brechmann, Hendrich and Czado’s approach

Arguing that the classical method does not work due to the problem of calculating the inverse functions of conditional distributions

C_{j|j-1,...,1}(u_{j}|u_{j-1},...,u_{1})=\Pr(U_{j}\leq u_{j}|u_{j-1},...,u_{1}),

[Brechmann, 2014] provides an algorithm to simulate Archimedean copulas using an intermediate variable Z whose distribution function is known as the Kendall distribution function (see [Barbe et al., 1996]). This conditional inverse simulation method eliminates the problems encountered with the numerical calculation of the inverse functions of C_{j|j-1,...,1}. We restate below two propositions which will prove useful for the understanding of the algorithm proposed by [Brechmann et al., 2013].

Proposition 1 (Barbe et al., 1996)

Let \mathbf{U} be distributed as the Archimedean copula C with generator \Phi and let the random variable Z be defined as Z=C(\mathbf{U}). Then, the density function of Z is defined in terms of the generator \Phi as

f_{Z}(z)=\frac{(-1)^{n-1}}{(n-1)!}\left(\Phi^{\leftarrow}(z)\right)^{n-1}(\Phi^{\leftarrow})^{(1)}(z)\,\Phi^{(n)}\left(\Phi^{\leftarrow}(z)\right).
Proposition 2 (Brechmann, 2014)

Let \mathbf{U} be distributed as the Archimedean copula C with generator \Phi and let the random variable Z be defined as Z=C(\mathbf{U}). Then, the conditional distribution of U_{j}|Z,U_{j-1},...,U_{1} for j=1,...,n is

F_{U_{j}|Z,U_{j-1},...,U_{1}}(u_{j}|z,u_{j-1},...,u_{1})=\left(1-\frac{\Phi^{\leftarrow}(u_{j})}{\Phi^{\leftarrow}(z)-\sum_{k=1}^{j-1}\Phi^{\leftarrow}(u_{k})}\right)^{n-j}

for 1>u_{j}>\Phi\left(\Phi^{\leftarrow}(z)-\sum_{k=1}^{j-1}\Phi^{\leftarrow}(u_{k})\right).

From these results, the inverse function of the conditional distribution function

F_{U_{j}|Z,U_{j-1},...,U_{1}}(u_{j}|z,u_{j-1},...,u_{1})

can be calculated with an explicit formula. Indeed, if we have z,u_{j-1},...,u_{1} as realizations of Z,U_{j-1},...,U_{1} respectively and v as a realization of a uniform random variable on (0,1), a realization of U_{j} is obtained with

u_{j}=\Phi\left((1-v^{1/(n-j)})\left(\Phi^{\leftarrow}(z)-\sum_{k=1}^{j-1}\Phi^{\leftarrow}(u_{k})\right)\right).

The conditional distribution of (Z|U_{1}) is given in the following proposition.

Proposition 3 (Brechmann et al., 2013)

Let \mathbf{U} be distributed as the Archimedean copula C with generator \Phi and let the random variable Z be defined as Z=C(\mathbf{U}). Then, the conditional distribution F_{Z|U_{1}}(z|u_{1}) can be calculated from the Archimedean generator and its derivatives as

F_{Z|U_{1}}(z|u_{1})=\left(\Phi^{\leftarrow}\right)^{(1)}(u_{1})\sum_{j=0}^{n-2}\frac{(-1)^{j}}{j!}\left(\Phi^{\leftarrow}(z)-\Phi^{\leftarrow}(u_{1})\right)^{j}\Phi^{(j+1)}(\Phi^{\leftarrow}(z))\quad\text{for }z\in(0,u_{1}). (2)

Given the above propositions, the following algorithm is derived from [Brechmann et al., 2013] to generate a random vector (X_{1},...,X_{n}) from an n-dimensional Archimedean copula C with generator \Phi, or a random vector (Y_{1},...,Y_{n}) from an n-dimensional survival Archimedean copula C with generator \Phi.

Algorithm 4

Brechmann et al. (2013)’s algorithm.

  1. Generate a random variable U_{1} uniformly distributed on (0,1).

  2. Generate a random variable (Z|U_{1}=u_{1}) from (2).

  3. For j=2,...,n: generate a random variable (U_{j}|Z=z,U_{1}=u_{1},...,U_{j-1}=u_{j-1}) with

     u_{j}=\Phi\left((1-v^{1/(n-j)})\left(\Phi^{\leftarrow}(z)-\sum_{k=1}^{j-1}\Phi^{\leftarrow}(u_{k})\right)\right),

     where v has been generated as a realization of a random variable V uniformly distributed on (0,1).

  4. Set X_{i}=F_{X_{i}}^{\leftarrow}\left(U_{i}\right) or Y_{i}=\bar{F}_{Y_{i}}^{\leftarrow}\left(U_{i}\right) for i=1,...,n.

As a consequence, it is easy to simulate from the conditional Archimedean copula (U_{1},...,U_{n}|U_{1}\in[a,b]), where [a,b]\subset(0,1), by first simulating U_{1} uniformly distributed on [a,b] and then following Steps 2-4 of the previous algorithm.
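To make the steps concrete, here is a minimal sketch of Algorithm 4 for the Clayton generator \Phi(t)=(1+t)^{-1/\theta}, whose derivatives are available in closed form; the bisection used to invert F_{Z|U_{1}} in Step 2, and all tolerances, are our own choices rather than part of the original algorithm.

```python
import math
import numpy as np

def clayton_phi(t, theta):
    """Clayton generator Phi(t) = (1 + t)^(-1/theta)."""
    return (1.0 + t) ** (-1.0 / theta)

def clayton_phi_inv(u, theta):
    """Pseudo-inverse Phi^{<-}(u) = u^{-theta} - 1."""
    return u ** (-theta) - 1.0

def clayton_phi_deriv(t, theta, j):
    """j-th derivative of the Clayton generator (analytic)."""
    coef = 1.0
    for k in range(j):
        coef *= 1.0 / theta + k
    return (-1.0) ** j * coef * (1.0 + t) ** (-1.0 / theta - j)

def kendall_cond_cdf(z, u1, n, theta):
    """F_{Z|U1}(z|u1) of Proposition 3, specialised to the Clayton generator."""
    pz, pu1 = clayton_phi_inv(z, theta), clayton_phi_inv(u1, theta)
    dpinv_u1 = -theta * u1 ** (-theta - 1.0)          # (Phi^{<-})'(u1)
    total = sum((-1.0) ** j / math.factorial(j) * (pz - pu1) ** j
                * clayton_phi_deriv(pz, theta, j + 1) for j in range(n - 1))
    return dpinv_u1 * total

def sample_clayton_brechmann(n, theta, rng):
    """One draw of (U_1, ..., U_n) from the Clayton copula via Algorithm 4."""
    u = np.empty(n)
    u[0] = rng.random()
    # Step 2: invert F_{Z|U1} by bisection on (0, u1).
    v = rng.random()
    lo, hi = 1e-12, u[0]
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if kendall_cond_cdf(mid, u[0], n, theta) < v else (lo, mid)
    z = 0.5 * (lo + hi)
    # Step 3: explicit conditional inverses; at j = n the factor tends to 1.
    used = clayton_phi_inv(u[0], theta)
    for j in range(2, n + 1):
        w = rng.random()
        factor = 1.0 if j == n else 1.0 - w ** (1.0 / (n - j))
        u[j - 1] = clayton_phi(factor * (clayton_phi_inv(z, theta) - used), theta)
        used += clayton_phi_inv(u[j - 1], theta)
    return u

rng = np.random.default_rng(0)
print(sample_clayton_brechmann(4, theta=1.0, rng=rng))
```

The generated vector can then be pushed through the marginal quantile functions to obtain \mathbf{X} or \mathbf{Y}, as in Step 4.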

2.2 McNeil and Nešlehová’s approach

[McNeil and Nešlehová, 2009] give the conditions under which a generator \Phi defines an n-dimensional copula by means of (1) and show a close connection between Archimedean copulas and \ell_{1}-norm symmetric distributions, introduced by [Fang and Fang, 1988], which allows a new perspective on and understanding of Archimedean copulas. With such an insight, they are able to consider cases where an Archimedean generator is not completely monotone, or equivalently is not the Laplace transform of a non-negative random variable (see [Kimberling, 1974]).

Theorem 5 (McNeil and Nešlehová, 2009)

A real function \Phi: [0,\infty)\rightarrow[0,1] is the generator of an n-dimensional Archimedean copula if and only if it is an n-monotone function on [0,\infty), i.e. it is differentiable up to order (n-2), the derivatives satisfy (-1)^{i}\Phi^{(i)}(x)\geq 0, i=0,1,...,n-2, for any x in [0,\infty), and, further, (-1)^{n-2}\Phi^{(n-2)} is non-increasing and convex on [0,\infty).

Definition 6 (Fang and Fang, 1988)

A random vector \mathbf{X} on \mathbb{R}_{+}^{n}=\left[0,\infty\right)^{n} follows an \ell_{1}-norm symmetric distribution if and only if there exists a non-negative random variable R independent of \mathbf{W}, where \mathbf{W}=(W_{1},...,W_{n}) is a random vector distributed uniformly on the unit simplex \mathfrak{s}_{n},

\mathfrak{s}_{n}=\left\{\mathbf{x}\in\mathbb{R}_{+}^{n}:\left\|\mathbf{x}\right\|_{1}=1\right\},

so that \mathbf{X} admits the stochastic representation

\mathbf{X}\overset{d}{=}R\mathbf{W}.

The random variable R is referred to as the radial part of \mathbf{X} and its distribution as the radial distribution.

The following theorem establishes the connection between \ell_{1}-norm symmetric distributions and Archimedean copulas. More details, interesting results and comments in that regard can be found in [McNeil and Nešlehová, 2009].

Theorem 7 (McNeil and Nešlehová, 2009)

Let the random vector \mathbf{U} be distributed according to an n-dimensional Archimedean copula C with generator \Phi. Then, \left(\Phi^{-1}\left(U_{1}\right),...,\Phi^{-1}\left(U_{n}\right)\right) has an \ell_{1}-norm symmetric distribution with survival copula C and radial distribution F_{R} given by

F_{R}(x)=1-\sum_{j=0}^{n-2}(-1)^{j}\frac{x^{j}}{j!}\Phi^{(j)}(x)-(-1)^{n-1}\frac{x^{n-1}}{(n-1)!}\Phi^{(n-1)+}(x),\quad x\in\left[0,\infty\right).

This last theorem implies

(U_{1},...,U_{n})\overset{d}{=}(\Phi(RW_{1}),...,\Phi(RW_{n}))

for R a positive random variable with distribution function F_{R} and \mathbf{W} a random vector uniformly distributed on the n-dimensional unit simplex \mathfrak{s}_{n}. Hence, since the vector (X_{1},...,X_{n}) has marginal distribution functions F_{1},...,F_{n} and F\left(x_{1},...,x_{n}\right)=C\left(F_{1}\left(x_{1}\right),...,F_{n}\left(x_{n}\right)\right) where C is an Archimedean copula with generator \Phi, then

(X_{1},...,X_{n})\overset{d}{=}(F_{1}^{\leftarrow}(\Phi(RW_{1})),...,F_{n}^{\leftarrow}(\Phi(RW_{n}))).

Since (Y_{1},...,Y_{n}) has marginal distribution functions F_{1},...,F_{n} with a dependence structure defined through an Archimedean survival copula C with generator \Phi, then

(Y_{1},...,Y_{n})\overset{d}{=}(\overline{F}_{1}^{\leftarrow}(\Phi(RW_{1})),...,\overline{F}_{n}^{\leftarrow}(\Phi(RW_{n}))).

This representation leads to the following sampling algorithm.

Algorithm 8

McNeil and Nešlehová’s algorithm.

  1. Generate a vector (E_{1},...,E_{n}) of n iid exponential rvs with parameter 1. Calculate W_{i}=E_{i}/\sum_{j=1}^{n}E_{j} so that \mathbf{W} is uniformly distributed on the n-dimensional unit simplex \mathfrak{s}_{n}.

  2. Generate a random variable R with distribution F_{R} (see Theorem 7).

  3. Return \mathbf{U} where U_{i}=\Phi\left(RW_{i}\right) for i=1,...,n.

  4. Set X_{i}=F_{X_{i}}^{\leftarrow}\left(U_{i}\right) or Y_{i}=\bar{F}_{Y_{i}}^{\leftarrow}\left(U_{i}\right) for i=1,...,n.
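Below is a minimal sketch of Algorithm 8, again for the Clayton generator \Phi(t)=(1+t)^{-1/\theta}; the radial distribution F_{R} of Theorem 7 is evaluated from the analytic derivatives of \Phi and inverted numerically by bracketing and bisection, which are our own implementation choices.

```python
import math
import numpy as np

def clayton_phi(t, theta):
    return (1.0 + t) ** (-1.0 / theta)

def clayton_phi_deriv(t, theta, j):
    coef = 1.0
    for k in range(j):
        coef *= 1.0 / theta + k
    return (-1.0) ** j * coef * (1.0 + t) ** (-1.0 / theta - j)

def radial_cdf(x, n, theta):
    """F_R(x) from Theorem 7; for a smooth generator Phi^{(n-1)+} = Phi^{(n-1)}."""
    if x <= 0.0:
        return 0.0
    total = sum((-1.0) ** j * x ** j / math.factorial(j)
                * clayton_phi_deriv(x, theta, j) for j in range(n))
    return 1.0 - total

def sample_clayton_mcneil(n, theta, rng):
    """One draw of (U_1, ..., U_n) from the Clayton copula via Algorithm 8."""
    e = rng.exponential(size=n)
    w = e / e.sum()                       # uniform on the unit simplex s_n
    v = rng.random()
    hi = 1.0                              # bracket the quantile of F_R ...
    while radial_cdf(hi, n, theta) < v:
        hi *= 2.0
    lo = 0.0
    for _ in range(80):                   # ... then bisect
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if radial_cdf(mid, n, theta) < v else (lo, mid)
    r = 0.5 * (lo + hi)
    return clayton_phi(r * w, theta)      # U_i = Phi(R W_i)

rng = np.random.default_rng(1)
print(sample_clayton_mcneil(4, theta=1.0, rng=rng))
```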

3 Estimators of z_{X}(s) and z_{Y}(s)

In this section, we propose four different estimators of z_{X}(s) and z_{Y}(s). All estimators rely on a similar idea, which is to decompose the probability of interest into different components. The known components are evaluated exactly and the other ones, which are our main concern, are estimated by simulation.

To begin, we need to establish basic notation. Throughout, \mathbf{X}_{-i} denotes the vector \mathbf{X}=(X_{1},...,X_{n}) with the i-th component X_{i} removed, and M_{i}^{X} (or M_{i}\{\mathbf{X}\}) corresponds to the i-th element of the vector \mathbf{X} after rearranging the elements of \mathbf{X} in non-decreasing order. Obviously, M_{1}^{X} and M_{n}^{X} correspond to the minimum and maximum elements of the vector \mathbf{X}, respectively. The same convention holds for \mathbf{Y}=(Y_{1},...,Y_{n}).

In what follows, we frequently need to evaluate the probabilities \Pr(M_{n}^{X}>s) and \Pr\left(M_{n}^{Y}>s\right), which rely on the marginal distributions F_{1},...,F_{n} and the Archimedean generator \Phi. They are obtained as follows

\Pr(M_{n}^{X}>s)=1-C\left(F_{1}(s),...,F_{n}(s)\right)

and

\Pr(M_{n}^{Y}>s)=1-\overline{C}\left(\overline{F}_{1}(s),...,\overline{F}_{n}(s)\right),

with \overline{C}\left(u_{1},...,u_{n}\right)=\Pr\left(U_{1}>u_{1},...,U_{n}>u_{n}\right).
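For an Archimedean copula both quantities reduce to evaluations of the generator: C(F_{1}(s),...,F_{n}(s))=\Phi\left(\sum_{i}\Phi^{\leftarrow}(F_{i}(s))\right) by (1), and \overline{C} can be expanded by inclusion-exclusion over the lower-dimensional margins. A small sketch, written for the Clayton generator and Pareto margins (both our illustrative choices):

```python
import itertools

def clayton_phi(t, theta):
    return (1.0 + t) ** (-1.0 / theta)

def clayton_phi_inv(u, theta):
    return u ** (-theta) - 1.0

def prob_max_X_exceeds(s, cdfs, theta):
    """Pr(M_n^X > s) = 1 - C(F_1(s), ..., F_n(s))."""
    t = sum(clayton_phi_inv(F(s), theta) for F in cdfs)
    return 1.0 - clayton_phi(t, theta)

def prob_max_Y_exceeds(s, sfs, theta):
    """Pr(M_n^Y > s) = 1 - Cbar(F1bar(s), ..., Fnbar(s)) via inclusion-exclusion."""
    n = len(sfs)
    u = [Fbar(s) for Fbar in sfs]
    cbar = 0.0
    for k in range(n + 1):
        for subset in itertools.combinations(range(n), k):
            t = sum(clayton_phi_inv(u[i], theta) for i in subset)
            cbar += (-1.0) ** k * clayton_phi(t, theta)
    return 1.0 - cbar

# Pareto margins Fbar_i(x) = (1 + x)^{-alpha_i} (illustrative).
alphas = [1.5, 2.0, 2.5]
cdfs = [lambda x, a=a: 1.0 - (1.0 + x) ** (-a) for a in alphas]
sfs = [lambda x, a=a: (1.0 + x) ** (-a) for a in alphas]
print(prob_max_X_exceeds(100.0, cdfs, theta=1.0),
      prob_max_Y_exceeds(100.0, sfs, theta=1.0))
```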

3.1 First estimator

The first estimator of z_{X}(s) and z_{Y}(s) that we propose is based on the simulation technique used by [Brechmann et al., 2013] to generate sampled values of the conditional vector \left(\mathbf{U}|U_{1}\in[a,b]\right) (see Section 2.1). The idea is to first isolate the known probability \Pr(M_{n}^{X}>s) and then condition on the value taken by the maximum M_{n}^{X}.

Hence, we have

z_{X}(s) = \Pr(M_{n}^{X}>s)+\Pr(S_{n}^{X}>s,s/n<M_{n}^{X}\leq s)
 = \Pr(M_{n}^{X}>s)+\sum_{i=1}^{n}\Pr(s/n<X_{i}\leq s)\,\Pr(S_{n}^{X}>s,X_{i}=M_{n}^{X}|s/n<X_{i}\leq s).

This leads to the following estimator Z_{NR1}^{X}(s) for z_{X}(s):

Z_{NR1}^{X}(s)=\Pr(M_{n}^{X}>s)+\sum_{i=1}^{n}\left(\overline{F}_{i}(s/n)-\overline{F}_{i}(s)\right)I_{\{S_{n}^{X_{i}}>s,X_{i}^{i}=M_{n}^{X_{i}}\}},

where S_{n}^{X_{i}}, X_{i}^{i} and M_{n}^{X_{i}} correspond to the conditional versions of S_{n}^{X}, X_{i} and M_{n}^{X} given \left(s/n<X_{i}\leq s\right). The challenging problem here is the simulation of the vector

\mathbf{X}^{i}=(X_{1}^{i},...,X_{n}^{i})=(X_{1},...,X_{n}|s/n<X_{i}\leq s)

with an Archimedean copula as dependence structure. Given that X_{i}=F_{i}^{\leftarrow}(U_{i}), this last vector \mathbf{X}^{i} can be viewed as

(X_{1},...,X_{n}|s/n<X_{i}\leq s) \overset{d}{=} (F_{1}^{\leftarrow}(U_{1}),...,F_{n}^{\leftarrow}(U_{n})|s/n<F_{i}^{\leftarrow}(U_{i})\leq s)
 \overset{d}{=} (F_{1}^{\leftarrow}(U_{1}^{i+}),...,F_{n}^{\leftarrow}(U_{n}^{i+})),

where (U_{1}^{i+},...,U_{n}^{i+})=(U_{1},...,U_{n}|F_{i}(s/n)<U_{i}\leq F_{i}(s)).

Similarly for the random vector \mathbf{Y}, whose multivariate distribution is based on the Archimedean survival copula C, we have

(Y_{1},...,Y_{n}|s/n<Y_{i}\leq s) \overset{d}{=} (\overline{F}_{1}^{\leftarrow}(U_{1}),...,\overline{F}_{n}^{\leftarrow}(U_{n})|s/n<\overline{F}_{i}^{\leftarrow}(U_{i})\leq s)
 \overset{d}{=} (\overline{F}_{1}^{\leftarrow}(U_{1}^{i-}),...,\overline{F}_{n}^{\leftarrow}(U_{n}^{i-})),

where (U_{1}^{i-},...,U_{n}^{i-})=(U_{1},...,U_{n}|\overline{F}_{i}(s/n)>U_{i}\geq\overline{F}_{i}(s)). Hence, the first estimator for z_{Y}(s) is given by

Z_{NR1}^{Y}(s)=\Pr(M_{n}^{Y}>s)+\sum_{i=1}^{n}\left(\overline{F}_{i}(s/n)-\overline{F}_{i}(s)\right)I_{\{S_{n}^{Y_{i}}>s,Y_{i}^{i}=M_{n}^{Y_{i}}\}}.

We are now in a position to propose the following algorithms to generate realizations of the estimators Z_{NR1}^{X}(s) and Z_{NR1}^{Y}(s).

Algorithm 9

(Estimator Z_{NR1}^{X}(s)) To generate a realization of Z_{NR1}^{X}(s), proceed as follows:

  1. For i=1,...,n, independently simulate U_{i}^{i+} uniformly distributed on (F_{i}(s/n),F_{i}(s)).

  2. For each U_{i}^{i+} from the first step, simulate Z from the conditional distribution F_{Z|U_{i}^{i+}} and then simulate (U_{1}^{i+},...,U_{i-1}^{i+},U_{i+1}^{i+},...,U_{n}^{i+}).

  3. For each j=1,...,n, compute X_{j}^{i}=F_{j}^{\leftarrow}(U_{j}^{i+}) and return I_{\{S_{n}^{X_{i}}>s,X_{i}^{i}=M_{n}^{X_{i}}\}}, which takes value 0 or 1.

  4. Return Z_{NR1}^{X}(s)=\Pr(M_{n}^{X}>s)+\sum_{i=1}^{n}\left(\overline{F}_{i}(s/n)-\overline{F}_{i}(s)\right)I_{\{S_{n}^{X_{i}}>s,X_{i}^{i}=M_{n}^{X_{i}}\}}.

Algorithm 10

(Estimator Z_{NR1}^{Y}(s)) To generate a realization of Z_{NR1}^{Y}(s), proceed as follows:

  1. For i=1,...,n, independently simulate U_{i}^{i-} uniformly distributed on (\bar{F}_{i}(s),\bar{F}_{i}(s/n)).

  2. For each U_{i}^{i-} from the first step, simulate Z from the conditional distribution F_{Z|U_{i}^{i-}} and then simulate (U_{1}^{i-},...,U_{i-1}^{i-},U_{i+1}^{i-},...,U_{n}^{i-}).

  3. For each j=1,2,\ldots,n, compute Y_{j}^{i}=\bar{F}_{j}^{\leftarrow}(U_{j}^{i-}) and return I_{\{S_{n}^{Y_{i}}>s,Y_{i}^{i}=M_{n}^{Y_{i}}\}}, which takes value 0 or 1.

  4. Return Z_{NR1}^{Y}(s)=\Pr(M_{n}^{Y}>s)+\sum_{i=1}^{n}\left(\overline{F}_{i}(s/n)-\overline{F}_{i}(s)\right)I_{\{S_{n}^{Y_{i}}>s,Y_{i}^{i}=M_{n}^{Y_{i}}\}}.

Proposition 11

The estimators Z_{NR1}^{X}(s) and Z_{NR1}^{Y}(s) have bounded relative errors.

Proof. See Appendix.   

It is important to note that the approach used here to estimate the tail of the sum of regularly varying random variables leads to estimators with bounded relative errors regardless of the dependence structure between the random variables. The key element is the simulation of the conditional random vector \left(\mathbf{X}|s/n<X_{i}\leq s\right). Unfortunately, the numerical performance of Z_{NR1}(s), as we will see in Section 5, is not as good as that of the other estimators.

3.2 Second estimator

The construction of the second estimator is based on the stochastic representation of an Archimedean copula proposed by [McNeil and Nešlehová, 2009]. As stated in Section 2.2, for a multivariate random vector \mathbf{X} with underlying Archimedean copula C with generator \Phi, we have

(X_{1},...,X_{n})\overset{d}{=}(F_{1}^{\leftarrow}(\Phi(RW_{1})),...,F_{n}^{\leftarrow}(\Phi(RW_{n}))),

where the distribution function of R is given as in Theorem 7 and \mathbf{W} is a random vector uniformly distributed on \mathfrak{s}_{n}. This representation of the random vector \mathbf{X} allows us to write the probability of interest z_{X}(s) as follows:

z_{X}(s) = \Pr(M_{n}^{X}>s)+\Pr(S_{n}^{X}>s,M_{n}^{X}\leq s)
 = \Pr(M_{n}^{X}>s)+\Pr\left(\sum_{i=1}^{n}F_{i}^{\leftarrow}(\Phi(RW_{i}))>s,M_{n}\left\{F_{i}^{\leftarrow}(\Phi(RW_{i}))\right\}\leq s\right)
 = \Pr(M_{n}^{X}>s)+\Pr\left(\sum_{i=1}^{n}F_{i}^{\leftarrow}\left(\Phi(RW_{i})\right)>s,R\geq M_{n}\left\{\frac{\Phi^{\leftarrow}(F_{i}(s))}{W_{i}}\right\}\right).

By conditioning on the random vector \mathbf{W}, we have

\Pr\left(\sum_{i=1}^{n}F_{i}^{\leftarrow}\left(\Phi(RW_{i})\right)>s,R\geq M_{n}\left\{\frac{\Phi^{\leftarrow}(F_{i}(s))}{W_{i}}\right\}\right)
 = E_{\mathbf{W}}\left[\Pr\left(\left.\sum_{i=1}^{n}F_{i}^{\leftarrow}\left(\Phi(RW_{i})\right)>s,R\geq M_{n}\left\{\frac{\Phi^{\leftarrow}(F_{i}(s))}{W_{i}}\right\}\right|\mathbf{W}\right)\right].

Then, we obtain the following second estimator of z_{X}(s) in terms of the known radial cumulative distribution function F_{R}, given by

Z_{NR2}^{X}(s) = \Pr(M_{n}^{X}>s)+\Pr\left(\left.\sum_{i=1}^{n}F_{i}^{\leftarrow}\left(\Phi(RW_{i})\right)>s,R\geq M_{n}\left\{\frac{\Phi^{\leftarrow}(F_{i}(s))}{W_{i}}\right\}\right|\mathbf{W}\right)
 = \Pr(M_{n}^{X}>s)+\left(F_{R}(U^{X}(\mathbf{W},s))-F_{R}(L^{X}(\mathbf{W},s))\right),

where

U^{X}(\mathbf{W},s)=\sup\{r\in\mathbb{R}^{+}:\sum_{i=1}^{n}F_{i}^{\leftarrow}(\Phi(rW_{i}))\geq s\} (3)

and

L^{X}(\mathbf{W},s)=M_{n}\left\{\frac{\Phi^{\leftarrow}(F_{i}(s))}{W_{i}}\right\}. (4)

In a similar fashion for the random vector \mathbf{Y} with an underlying Archimedean survival copula, we obtain the estimator Z_{NR2}^{Y}(s) for z_{Y}(s), which is given by

Z_{NR2}^{Y}(s)=\Pr(M_{n}^{Y}>s)+\left(F_{R}(U^{Y}(\mathbf{W},s))-F_{R}(L^{Y}(\mathbf{W},s))\right),

where

U^{Y}(\mathbf{W},s)=M_{1}\left\{\frac{\Phi^{\leftarrow}(\overline{F}_{i}(s))}{W_{i}}\right\} (5)

and

L^{Y}(\mathbf{W},s)=\inf\{r\in\mathbb{R}^{+}:\sum_{i=1}^{n}\overline{F}_{i}^{\leftarrow}(\Phi(rW_{i}))\geq s\}. (6)

Note that if the marginal distributions are continuous and strictly increasing, then U^{X}(\mathbf{W},s) and L^{Y}(\mathbf{W},s) are the unique roots of the equations \sum_{i=1}^{n}F_{i}^{\leftarrow}(\Phi(xW_{i}))=s and \sum_{i=1}^{n}\overline{F}_{i}^{\leftarrow}(\Phi(xW_{i}))=s, respectively.

The sampling algorithms to generate realizations of Z_{NR2}^{X}(s) and Z_{NR2}^{Y}(s) are outlined in the following.

Algorithm 12

(Estimator Z_{NR2}^{X}(s)) To generate a realization of Z_{NR2}^{X}(s), proceed as follows:

  1. Let (E_{1},...,E_{n}) be n iid exponential rvs with parameter 1. Calculate W_{i}=E_{i}/\sum_{j=1}^{n}E_{j}.

  2. Evaluate numerically U^{X}(\mathbf{W},s) from (3) and L^{X}(\mathbf{W},s) from (4).

  3. Calculate the derivatives \Phi^{(j)} for j=1,...,n-1 and then the radial distribution F_{R}.

  4. Return Z_{NR2}^{X}(s)=\Pr(M_{n}^{X}>s)+F_{R}(U^{X}(\mathbf{W},s))-F_{R}(L^{X}(\mathbf{W},s)).

Algorithm 13

(Estimator Z_{NR2}^{Y}(s)) To generate a realization of Z_{NR2}^{Y}(s), proceed as follows:

  1. Let (E_{1},...,E_{n}) be n iid exponential rvs with parameter 1. Calculate W_{i}=E_{i}/\sum_{j=1}^{n}E_{j}.

  2. Evaluate numerically U^{Y}(\mathbf{W},s) from (5) and L^{Y}(\mathbf{W},s) from (6).

  3. Calculate the derivatives \Phi^{(j)} for j=1,...,n-1 and then the radial distribution F_{R}.

  4. Return Z_{NR2}^{Y}(s)=\Pr(M_{n}^{Y}>s)+F_{R}(U^{Y}(\mathbf{W},s))-F_{R}(L^{Y}(\mathbf{W},s)).
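For concreteness, here is a minimal sketch of the random part of one replication of Z_{NR2}^{Y}(s) (Algorithm 13), assuming a Clayton generator and Pareto margins \bar{F}_{i}(y)=(1+y)^{-\alpha_{i}}; the bracketing/bisection root search for L^{Y}(\mathbf{W},s) and all numerical tolerances are our own choices.

```python
import math
import numpy as np

def phi(t, theta):                        # Clayton generator
    return (1.0 + t) ** (-1.0 / theta)

def phi_inv(u, theta):
    return u ** (-theta) - 1.0

def phi_deriv(t, theta, j):
    coef = 1.0
    for k in range(j):
        coef *= 1.0 / theta + k
    return (-1.0) ** j * coef * (1.0 + t) ** (-1.0 / theta - j)

def radial_cdf(x, n, theta):              # F_R of Theorem 7
    if x <= 0.0:
        return 0.0
    return 1.0 - sum((-1.0) ** j * x ** j / math.factorial(j)
                     * phi_deriv(x, theta, j) for j in range(n))

def znr2_Y_radial_part(s, alphas, theta, rng):
    """F_R(U^Y(W,s)) - F_R(L^Y(W,s)) for one simulated W (Pareto margins)."""
    n = len(alphas)
    sf = lambda y, a: (1.0 + y) ** (-a)              # Fbar_i
    sf_inv = lambda u, a: u ** (-1.0 / a) - 1.0      # Fbar_i^{-1}
    e = rng.exponential(size=n)
    w = e / e.sum()
    # U^Y(W, s): the largest r keeping every Y_i = Fbar_i^{-1}(Phi(r W_i)) below s.
    upper = min(phi_inv(sf(s, a), theta) / wi for a, wi in zip(alphas, w))
    # L^Y(W, s): the smallest r at which the sum of the Y_i reaches s (bisection).
    excess = lambda r: sum(sf_inv(phi(r * wi, theta), a)
                           for a, wi in zip(alphas, w)) - s
    lo, hi = 0.0, upper
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if excess(mid) < 0.0 else (lo, mid)
    return radial_cdf(upper, n, theta) - radial_cdf(0.5 * (lo + hi), n, theta)

rng = np.random.default_rng(2)
part = np.mean([znr2_Y_radial_part(1000.0, [1.5, 2.0, 2.5], 1.0, rng)
                for _ in range(5000)])
# z_Y(s) is then estimated by Pr(M_n^Y > s) + part, where the first term is the
# deterministic closed-form quantity computed as in the earlier sketch.
print(part)
```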

In the following proposition, the accuracy of the estimator Z_{NR2}^{Y}(s) is investigated under the assumption that the Archimedean generator is regularly varying.

Proposition 14

For the random vector \mathbf{Y} with an underlying Archimedean survival copula C whose generator \Phi satisfies \Phi^{(n-2)} differentiable and \Phi(x)=x^{-\beta}l_{\Phi}\left(x\right) with \beta>0, the estimator Z_{NR2}^{Y}(s) has a bounded relative error.

Proof. See Appendix.   

3.3 Third estimator

This section presents the third estimator of z_{X}(s), which will show better numerical performance than the two previous estimators in the numerical study presented in a later section. The third estimator of z_{Y}(s) is based on the same idea and is not discussed. Let us separate the probability z_{X}(s) into the components

\Pr(S_{n}^{X}>s)=\Pr(M_{n}^{X}>s)+z_{1}^{X}(s)+z_{2}^{X}(s),

where z_{1}^{X}(s)=\Pr(S_{n}^{X}>s,M_{n-1}^{X}\leq\lambda s,M_{n}^{X}\leq s), z_{2}^{X}(s)=\Pr(S_{n}^{X}>s,M_{n-1}^{X}>\lambda s,M_{n}^{X}\leq s) and \lambda is a positive quantity less than 1/n. In z_{1}^{X}(s), the inequality M_{n-1}^{X}\leq\lambda s implies that only one variable takes a large value. Consequently, we estimate z_{1}^{X}(s) conditionally on \mathbf{X}_{-i} when X_{i}=M_{n}^{X}. In z_{2}^{X}(s), at least two variables take large values, so it is natural to estimate z_{2}^{X}(s) conditionally on the uniform random vector \mathbf{W} defined on the unit simplex \mathfrak{s}_{n}.

3.3.1 Estimators for z_{1}^{X}(s) and z_{1}^{Y}(s)

Let us develop the probability z_{1}^{X}(s) as

\Pr(S_{n}^{X}>s,M_{n-1}^{X}\leq\lambda s,M_{n}^{X}\leq s) = \sum_{i=1}^{n}\Pr(S_{n}^{X}>s,M_{n-1}^{X}\leq\lambda s,M_{n}^{X}\leq s,X_{i}=M_{n}^{X})
 = \sum_{i=1}^{n}\Pr(S_{n}^{X}>s,\max\{\mathbf{X}_{-i}\}\leq\lambda s,X_{i}\leq s,X_{i}=M_{n}^{X}).

By conditioning on \mathbf{X}_{-i}, we obtain the following estimator Z_{NR3,1}^{[i]X}(s) for \Pr(S_{n}^{X}>s,\max\{\mathbf{X}_{-i}\}\leq\lambda s,X_{i}\leq s,X_{i}=M_{n}^{X}):

Z_{NR3,1}^{[i]X}(s)=\Psi\left(\mathbf{X}_{-i},s\right),

where

\Psi\left(\mathbf{x}_{-i},s\right)=I_{\{\max\{\mathbf{x}_{-i}\}\leq\lambda s\}}\,\Pr\left(s>X_{i}^{\ast}>s-\sum_{j=1,j\neq i}^{n}x_{j}\right)

with X_{i}^{\ast}=\left(X_{i}|\mathbf{X}_{-i}=\mathbf{x}_{-i}\right) and i=1,...,n. Note that, if \max\{\mathbf{x}_{-i}\}<\lambda s, then

s-\sum_{j=1,j\neq i}^{n}x_{j}>(1-(n-1)\lambda)s>s/n>\lambda s\geq\max\{\mathbf{x}_{-i}\},

which is consistent with X_{i}=M_{n}^{X}>\max\{\mathbf{X}_{-i}\}. The estimator Z_{NR3,1}^{X}(s) of z_{1}^{X}(s) is then defined by

Z_{NR3,1}^{X}(s)=\sum_{i=1}^{n}Z_{NR3,1}^{[i]X}(s).

Under the assumption of identically distributed random variables X_{1},...,X_{n}, the estimator Z_{NR3,1}^{X}(s) coincides with Asmussen and Kroese's estimator (see [Asmussen and Kroese, 2006]).

To perform the calculations, we need the conditional distribution of X_{i}^{\ast}=\left(X_{i}|\mathbf{X}_{-i}=\mathbf{x}_{-i}\right) for each i=1,...,n, which is given by

F_{X_{i}^{\ast}}(x_{i})=\frac{\Phi^{(n-1)}\left(\sum_{j=1}^{n}\Phi^{-1}(F_{j}(x_{j}))\right)}{\Phi^{(n-1)}\left(\sum_{j=1,j\neq i}^{n}\Phi^{-1}(F_{j}(x_{j}))\right)}.

The derivation is similar for Z_{NR3,1}^{Y}(s), except that the expression of the distribution of Y_{i}^{\ast} is slightly more difficult to obtain.

Proposition 15

Let \mathbf{Y}=(Y_{1},...,Y_{n}) have a multivariate distribution defined by an Archimedean survival copula and marginals F_{1},...,F_{n}. The conditional cumulative distribution function of Y_{i}^{\ast}=\left(Y_{i}|\mathbf{Y}_{-i}=\mathbf{y}_{-i}\right) is

F_{Y_{i}^{\ast}}(y_{i})=\Pr(Y_{i}^{\ast}\leq y_{i})=1-\frac{\Phi^{(n-1)}\left(\sum_{j=1}^{n}\Phi^{-1}(\bar{F}_{j}(y_{j}))\right)}{\Phi^{(n-1)}\left(\sum_{j=1,j\neq i}^{n}\Phi^{-1}(\bar{F}_{j}(y_{j}))\right)}.

Proof. See Appendix.   
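The conditional term \Psi(\mathbf{x}_{-i},s) is simply a difference of two evaluations of F_{X_{i}^{\ast}}. A minimal sketch for the Clayton generator with user-supplied marginal cdfs (the function names and the Pareto-style margins are our own illustrative choices); the Y case is handled analogously through Proposition 15.

```python
def clayton_phi_deriv(t, theta, j):
    """j-th derivative of the Clayton generator Phi(t) = (1 + t)^(-1/theta)."""
    coef = 1.0
    for k in range(j):
        coef *= 1.0 / theta + k
    return (-1.0) ** j * coef * (1.0 + t) ** (-1.0 / theta - j)

def cond_cdf_Xi(x_i, x_rest, cdf_i, cdfs_rest, theta, n):
    """F_{X_i^*}(x_i): ratio of (n-1)-th generator derivatives (Section 3.3.1)."""
    phi_inv = lambda u: u ** (-theta) - 1.0
    t_rest = sum(phi_inv(F(x)) for F, x in zip(cdfs_rest, x_rest))
    t_all = t_rest + phi_inv(cdf_i(x_i))
    return clayton_phi_deriv(t_all, theta, n - 1) / clayton_phi_deriv(t_rest, theta, n - 1)

def psi(x_rest, s, lam, cdf_i, cdfs_rest, theta, n):
    """Psi(x_{-i}, s) = I{max(x_{-i}) <= lam*s} * Pr(s - sum(x_{-i}) < X_i^* < s)."""
    if max(x_rest) > lam * s:
        return 0.0
    gap = s - sum(x_rest)   # positive whenever the indicator is 1 and lam < 1/n
    return cond_cdf_Xi(s, x_rest, cdf_i, cdfs_rest, theta, n) \
        - cond_cdf_Xi(gap, x_rest, cdf_i, cdfs_rest, theta, n)

# Illustrative use with Pareto margins F_j(x) = 1 - (1 + x)^{-alpha_j}, n = 3, i = 1.
alphas = [1.5, 2.0, 2.5]
cdfs = [lambda x, a=a: 1.0 - (1.0 + x) ** (-a) for a in alphas]
print(psi(x_rest=[3.0, 5.0], s=100.0, lam=0.2, cdf_i=cdfs[0],
          cdfs_rest=cdfs[1:], theta=1.0, n=3))
```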

3.3.2 Estimators for z_{2}^{X}(s) and z_{2}^{Y}(s)

Given [McNeil and Nešlehová, 2009], we can write the probability z_{2}^{X}(s)=\Pr(S_{n}^{X}>s,M_{n-1}^{X}>\lambda s,M_{n}^{X}\leq s) as

z_{2}^{X}(s) = \Pr\left(\sum_{i=1}^{n}F_{i}^{\leftarrow}\left(\Phi(W_{i}R)\right)>s,M_{n-1}\{F_{i}^{\leftarrow}\left(\Phi(W_{i}R)\right)\}>\lambda s,M_{n}\{F_{i}^{\leftarrow}\left(\Phi(W_{i}R)\right)\}\leq s\right)
 = \Pr\left(R<U^{X}(\mathbf{W},s),R<M_{n-1}\left\{\frac{\Phi^{\leftarrow}\left(F_{i}(\lambda s)\right)}{W_{i}}\right\},R\geq L^{X}(\mathbf{W},s)\right).

Conditioning on \mathbf{W}, we have

z_{2}^{X}(s)=\mathbb{E}_{\mathbf{W}}\left[\Pr\left(\left.R<U^{X}(\mathbf{W},s),R<M_{n-1}\left\{\frac{\Phi^{\leftarrow}\left(F_{i}(\lambda s)\right)}{W_{i}}\right\},R\geq L^{X}(\mathbf{W},s)\right|\mathbf{W}\right)\right],

which leads to the estimator Z_{NR3,2}^{X}(s) given by

Z_{NR3,2}^{X}(s) = F_{R}\left(U^{X}(\mathbf{W},s)\wedge M_{n-1}\left\{\frac{\Phi^{\leftarrow}\left(F_{i}(\lambda s)\right)}{W_{i}}\right\}\right)-F_{R}\left(L^{X}(\mathbf{W},s)\right)
 = F_{R}\left(U_{\lambda}^{X}(\mathbf{W},s)\right)-F_{R}\left(L^{X}(\mathbf{W},s)\right)

with U^{X}(\mathbf{W},s), L^{X}(\mathbf{W},s) as in (3) and (4) respectively, and

U_{\lambda}^{X}(\mathbf{W},s)=U^{X}(\mathbf{W},s)\wedge M_{n-1}\left\{\frac{\Phi^{\leftarrow}\left(F_{i}(\lambda s)\right)}{W_{i}}\right\}. (7)

Similarly, under an Archimedean survival copula, we have

z_{2}^{Y}(s) = \Pr\left(\sum_{i=1}^{n}\overline{F}_{i}^{\leftarrow}\left(\Phi(W_{i}R)\right)>s,M_{n-1}\{\bar{F}_{i}^{\leftarrow}\left(\Phi(W_{i}R)\right)\}\geq\lambda s,M_{n}\{\bar{F}_{i}^{\leftarrow}\left(\Phi(W_{i}R)\right)\}\leq s\right)
 = \Pr\left(R>L^{Y}(\mathbf{W},s),R\geq M_{2}\left\{\frac{\Phi^{\leftarrow}(\bar{F}_{i}(\lambda s))}{W_{i}}\right\},R\leq U^{Y}(\mathbf{W},s)\right).

The estimator Z_{NR3,2}^{Y}(s) is hence given by

Z_{NR3,2}^{Y}(s) = F_{R}\left(U^{Y}(\mathbf{W},s)\right)-F_{R}\left(L^{Y}(\mathbf{W},s)\vee M_{2}\left\{\frac{\Phi^{\leftarrow}(\overline{F}_{i}(\lambda s))}{W_{i}}\right\}\right)
 = F_{R}\left(U^{Y}(\mathbf{W},s)\right)-F_{R}\left(L_{\lambda}^{Y}(\mathbf{W},s)\right)

with U^{Y}(\mathbf{W},s), L^{Y}(\mathbf{W},s) as given in (5) and (6) respectively, and

L_{\lambda}^{Y}(\mathbf{W},s)=L^{Y}(\mathbf{W},s)\vee M_{2}\left\{\frac{\Phi^{\leftarrow}(\overline{F}_{i}(\lambda s))}{W_{i}}\right\}. (8)

3.3.3 Estimators for z_{X}(s) and z_{Y}(s)

The third estimators for z_{X}(s) and z_{Y}(s) are finally given by

Z_{NR3}^{X}(s)=\Pr(M_{n}^{X}>s)+Z_{NR3,1}^{X}(s)+Z_{NR3,2}^{X}(s)\quad\text{and}\quad Z_{NR3}^{Y}(s)=\Pr(M_{n}^{Y}>s)+Z_{NR3,1}^{Y}(s)+Z_{NR3,2}^{Y}(s).
Algorithm 16

To generate a realization of Z_{NR3}^{X}(s), proceed as follows:

  1. Let (E_{1},...,E_{n}) be n iid exponential rvs with parameter 1. Calculate W_{i}=E_{i}/\sum_{j=1}^{n}E_{j}.

  2. Compute U_{\lambda}^{X}(\mathbf{W},s) from (7) and L^{X}(\mathbf{W},s) from (4).

  3. For i=1,...,n, simulate the vector \mathbf{U}_{-i} from the (n-1)-dimensional Archimedean copula with generator \Phi. Then:

    (a) Evaluate X_{j}=F_{j}^{\leftarrow}(U_{j}) for j\neq i.

    (b) Evaluate Z_{NR3,1}^{[i]X}(s)=I_{\{\max\{\mathbf{x}_{-i}\}<\lambda s\}}\left(F_{X_{i}^{\ast}}(s)-F_{X_{i}^{\ast}}(s-\mathrm{sum}(\mathbf{x}_{-i}))\right).

  4. Return Z_{NR3}^{X}(s)=\Pr(M_{n}^{X}>s)+F_{R}(U_{\lambda}^{X}(\mathbf{W},s))-F_{R}(L^{X}(\mathbf{W},s))+\sum_{i=1}^{n}Z_{NR3,1}^{[i]X}(s).

Algorithm 17

To generate a realization of Z_{NR3}^{Y}(s), proceed as follows:

  1. Let (E_{1},...,E_{n}) be n iid exponential rvs with parameter 1. Calculate W_{i}=E_{i}/\sum_{j=1}^{n}E_{j}.

  2. Calculate U^{Y}(\mathbf{W},s) from (5) and L_{\lambda}^{Y}(\mathbf{W},s) from (8).

  3. For i=1,...,n, simulate the vector \mathbf{U}_{-i} from the (n-1)-dimensional Archimedean copula with generator \Phi and calculate Y_{j}=\bar{F}_{j}^{\leftarrow}(U_{j}) for j\neq i. Then calculate Z_{NR3,1}^{[i]Y}(s)=I_{\{\max\{\mathbf{y}_{-i}\}<\lambda s\}}\left(F_{Y_{i}^{\ast}}(s)-F_{Y_{i}^{\ast}}(s-\mathrm{sum}(\mathbf{y}_{-i}))\right).

  4. Return Z_{NR3}^{Y}(s)=\Pr(M_{n}^{Y}>s)+F_{R}(U^{Y}(\mathbf{W},s))-F_{R}(L_{\lambda}^{Y}(\mathbf{W},s))+\sum_{i=1}^{n}Z_{NR3,1}^{[i]Y}(s).

Unfortunately, the relative errors of Z_{NR3,1}^{X}(s) and Z_{NR3,1}^{Y}(s) are not bounded if no assumption is made on the Archimedean generator. Consequently, the relative errors of Z_{NR3}^{X}(s) and Z_{NR3}^{Y}(s) will not be bounded either in general. However, the numerical performance of these estimators is better than that of Z_{NR2}(s) in some situations when the parameter \lambda takes appropriate values. Moreover, in almost all cases, Z_{NR3}^{X}(s) and Z_{NR3}^{Y}(s) perform better than Z_{NR1}^{X}(s) and Z_{NR1}^{Y}(s), which we have proven to have bounded relative errors.

However, we are able to prove the following result.

Proposition 18

The estimator Z_{NR3,2}^{Y}(s) has a bounded relative error.

Proof. See Appendix.   

3.3.4 Fourth estimator

We propose in this section a fourth and final estimator of z_{X}(s), which is derived in a similar fashion as Z_{NR3}^{X}(s), meaning that we split \Pr(S_{n}^{X}>s,M_{n}^{X}\leq s) into two parts. Note that the corresponding estimator under an Archimedean survival copula structure, denoted by Z_{NR4}^{Y}(s), has a bounded relative error without any assumption on \Phi.

First, for a chosen \kappa\in(1/n,1), we decompose z_{X}(s) into

z_{X}(s)=\Pr(M_{n}^{X}>s)+\Pr(S_{n}^{X}>s,\kappa s<M_{n}^{X}\leq s)+\Pr(S_{n}^{X}>s,M_{n}^{X}\leq\kappa s).

As for Z_{NR1}^{X}(s), we use the simulation technique of [Brechmann et al., 2013] to estimate \Pr(S_{n}^{X}>s,\kappa s<M_{n}^{X}\leq s), while \Pr(S_{n}^{X}>s,M_{n}^{X}\leq\kappa s) will be estimated conditionally on \mathbf{W}\in\mathfrak{s}_{n} as for Z_{NR2}(s). Then we have

z_{X}(s) = \Pr(M_{n}^{X}>s)+\sum_{i=1}^{n}\left(\overline{F}_{i}(\kappa s)-\overline{F}_{i}(s)\right)\Pr(S_{n}^{X}>s,X_{i}=M_{n}^{X}|\kappa s<X_{i}\leq s)
 +\Pr\left(\sum_{i=1}^{n}F_{i}^{\leftarrow}\left(\Phi(RW_{i})\right)>s,R\geq M_{n}\left\{\frac{\Phi^{\leftarrow}(F_{i}(\kappa s))}{W_{i}}\right\}\right).

Following the same rationale as for the first and second estimators, we obtain the fourth estimator Z_{NR4}^{X}(s) given by

Z_{NR4}^{X}(s)=\Pr(M_{n}^{X}>s)+\sum_{i=1}^{n}\left(\overline{F}_{i}(\kappa s)-\overline{F}_{i}(s)\right)\mathbb{I}_{\{S_{n}^{X\kappa i}>s,X_{i}^{\kappa i}=M_{n}^{X\kappa i}\}}+F_{R}(U^{X}(\mathbf{W},s))-F_{R}(L_{\kappa}^{X}(\mathbf{W},s)),

with X_{j}^{\kappa i}=\left(X_{j}|\kappa s<X_{i}\leq s\right), U^{X}(\mathbf{W},s) as defined in (3), and

L_{\kappa}^{X}(\mathbf{W},s)=M_{n}\left\{\frac{\Phi^{\leftarrow}(F_{i}(\kappa s))}{W_{i}}\right\}. (9)

Similarly, for the random vector \mathbf{Y}, we have

z_{Y}(s) = \Pr(M_{n}^{Y}>s)+\sum_{i=1}^{n}\left(\bar{F}_{i}(\kappa s)-\bar{F}_{i}(s)\right)\Pr(S_{n}^{Y}>s,Y_{i}=M_{n}^{Y}|\kappa s<Y_{i}\leq s)
 +\Pr\left(\sum_{i=1}^{n}\bar{F}_{i}^{\leftarrow}(\Phi(RW_{i}))>s,R\leq M_{1}\left\{\frac{\Phi^{\leftarrow}(\bar{F}_{i}(\kappa s))}{W_{i}}\right\}\right),

which leads to

Z_{NR4}^{Y}(s)=\Pr(M_{n}^{Y}>s)+\sum_{i=1}^{n}\left(\overline{F}_{i}(\kappa s)-\overline{F}_{i}(s)\right)\mathbb{I}_{\{S_{n}^{Y\kappa i}>s,Y_{i}^{\kappa i}=M_{n}^{Y\kappa i}\}}+F_{R}(U_{\kappa}^{Y}(\mathbf{W},s))-F_{R}(L^{Y}(\mathbf{W},s))

with Y_{j}^{\kappa i}=\left(Y_{j}|\kappa s<Y_{i}\leq s\right), L^{Y}(\mathbf{W},s) as defined in (6), and

U_{\kappa}^{Y}(\mathbf{W},s)=M_{1}\left\{\frac{\Phi^{\leftarrow}(\overline{F}_{i}(\kappa s))}{W_{i}}\right\}. (10)

We are able to prove that Z_{NR4}^{Y}(s) is an estimator with bounded relative error.

Algorithm 19

To generate a realization of Z_{NR4}^{X}(s), proceed as follows:

  1. For each i=1,...,n, simulate the vector (X_{1}^{\kappa i},...,X_{n}^{\kappa i})=(X_{1},\ldots,X_{n}|\kappa s<X_{i}\leq s), then calculate Z_{NR4,1}^{X}(s)=\sum_{i=1}^{n}\left(\overline{F}_{i}(\kappa s)-\overline{F}_{i}(s)\right)I_{\{S_{n}^{X\kappa i}>s,X_{i}^{\kappa i}=M_{n}^{X\kappa i}\}}.

  2. Let (E_{1},...,E_{n}) be n iid exponential rvs with parameter 1, and calculate W_{i}=E_{i}/\sum_{j=1}^{n}E_{j}.

  3. Calculate U^{X}(\mathbf{W},s) from (3) and L_{\kappa}^{X}(\mathbf{W},s) from (9).

  4. Return Z_{NR4}^{X}(s)=\Pr(M_{n}^{X}>s)+Z_{NR4,1}^{X}(s)+F_{R}(U^{X}(\mathbf{W},s))-F_{R}(L_{\kappa}^{X}(\mathbf{W},s)).

Algorithm 20

To generate a realization of Z_{NR4}^{Y}(s), proceed as follows:

  1. For each i=1,...,n, simulate the vector (Y_{1}^{\kappa i},\ldots,Y_{n}^{\kappa i})=(Y_{1},...,Y_{n}|\kappa s<Y_{i}\leq s), then calculate Z_{NR4,1}^{Y}(s)=\sum_{i=1}^{n}\left(\overline{F}_{i}(\kappa s)-\overline{F}_{i}(s)\right)I_{\{S_{n}^{Y\kappa i}>s,Y_{i}^{\kappa i}=M_{n}^{Y\kappa i}\}}.

  2. Let (E_{1},...,E_{n}) be n iid exponential rvs with parameter 1, and calculate W_{i}=E_{i}/\sum_{j=1}^{n}E_{j}.

  3. Calculate U_{\kappa}^{Y}(\mathbf{W},s) from (10) and L^{Y}(\mathbf{W},s) from (6).

  4. Return Z_{NR4}^{Y}(s)=\Pr(M_{n}^{Y}>s)+Z_{NR4,1}^{Y}(s)+F_{R}(U_{\kappa}^{Y}(\mathbf{W},s))-F_{R}(L^{Y}(\mathbf{W},s)).

Proposition 21

Z_{NR4}^{Y}(s) is an estimator with bounded relative error.

Proof. See Appendix.   

4 Numerical bounds for z_{X}(s) and z_{Y}(s)

Inspired by the AEP algorithm of [Arbenz et al., 2011], [Cossette et al., 2014] have proposed sharp numerical bounds for \Pr(S_{n}^{X}\leq s) when a closed-form expression is available for \Pr(X_{1}\leq x_{1},...,X_{n}\leq x_{n}). These bounds are recalled in the first subsection. In the next subsection, we propose an adaptation of this method for \Pr(S_{n}^{Y}>s), assuming that a closed-form expression is available for \Pr(Y_{1}>y_{1},...,Y_{n}>y_{n}).

4.1 Numerical bounds for z_{X}\left(s\right)

Let us denote by A_{S}^{\left(l,m\right)}\left(s\right) and A_{S}^{\left(u,m\right)}\left(s\right) the bounds for \Pr\left(S_{n}^{X}\leq s\right) with precision parameter m\in\mathbb{N}^{+}, such that

A_{S}^{\left(l,m\right)}\left(s\right)\leq\Pr\left(S_{n}^{X}\leq s\right)\leq A_{S}^{\left(u,m\right)}\left(s\right),\quad s\geq 0.

Briefly, for n=2, A_{S}^{\left(l,m\right)}\left(s\right) corresponds to the sum of the probabilities associated to the 2^{m}-1 rectangles which lie strictly under the diagonal x_{1}+x_{2}=s, i.e.

A_{S}^{\left(l,m\right)}\left(s\right)=\sum_{i=1}^{2^{m}-1}\left(F_{X}\left(\frac{i}{2^{m}}s,\frac{2^{m}-i}{2^{m}}s\right)-F_{X}\left(\frac{\left(i-1\right)}{2^{m}}s,\frac{2^{m}-i}{2^{m}}s\right)\right). (11)

Similarly, for n=2, A_{S}^{\left(u,m\right)}\left(s\right) is the sum of the probabilities associated to the 2^{m} rectangles strictly above the diagonal x_{1}+x_{2}=s, i.e.

A_{S}^{\left(u,m\right)}\left(s\right)=\sum_{i=1}^{2^{m}}\left(F_{X}\left(\frac{i}{2^{m}}s,\frac{2^{m}+1-i}{2^{m}}s\right)-F_{X}\left(\frac{i-1}{2^{m}}s,\frac{2^{m}+1-i}{2^{m}}s\right)\right). (12)
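These two sums are straightforward to evaluate once the joint cdf F_{X} can be computed. A minimal sketch for the n=2 case, using a Clayton copula with Pareto margins as an illustrative joint cdf (our choice):

```python
def bounds_sum_cdf_2d(F, s, m):
    """Return (A_S^{(l,m)}(s), A_S^{(u,m)}(s)) from (11) and (12) for n = 2."""
    N = 2 ** m
    lower = sum(F(i * s / N, (N - i) * s / N) - F((i - 1) * s / N, (N - i) * s / N)
                for i in range(1, N))
    upper = sum(F(i * s / N, (N + 1 - i) * s / N) - F((i - 1) * s / N, (N + 1 - i) * s / N)
                for i in range(1, N + 1))
    return lower, upper

def joint_cdf(x1, x2, theta=1.0, a1=1.5, a2=2.0):
    """C(F_1(x1), F_2(x2)) for a Clayton copula and Pareto margins (illustrative)."""
    F1 = 1.0 - (1.0 + max(x1, 0.0)) ** (-a1)
    F2 = 1.0 - (1.0 + max(x2, 0.0)) ** (-a2)
    if F1 == 0.0 or F2 == 0.0:
        return 0.0
    return (F1 ** (-theta) + F2 ** (-theta) - 1.0) ** (-1.0 / theta)

lo, up = bounds_sum_cdf_2d(joint_cdf, s=50.0, m=10)
print(1.0 - up, 1.0 - lo)   # deterministic bounds on z_X(50) = Pr(S_2^X > 50)
```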

For n=3, the lower bound is given by

A_{S}^{\left(l,m\right)}\left(s\right)=\sum_{i_{1}=1}^{3^{m}-2}\sum_{i_{2}=1}^{3^{m}-1-i_{1}}\zeta_{X}^{\left(l,m\right)}\left(s;i_{1},i_{2}\right), (13)

where

\zeta_{X}^{\left(l,m\right)}\left(s;i_{1},i_{2}\right) = \Pr\left(\frac{i_{1}-1}{3^{m}}s<X_{1}\leq\frac{i_{1}}{3^{m}}s,\frac{i_{2}-1}{3^{m}}s<X_{2}\leq\frac{i_{2}}{3^{m}}s,X_{3}\leq\frac{3^{m}-i_{1}-i_{2}}{3^{m}}s\right)
 = F_{X}\left(\frac{i_{1}}{3^{m}}s,\frac{i_{2}}{3^{m}}s,\frac{3^{m}-i_{1}-i_{2}}{3^{m}}s\right)-F_{X}\left(\frac{i_{1}-1}{3^{m}}s,\frac{i_{2}}{3^{m}}s,\frac{3^{m}-i_{1}-i_{2}}{3^{m}}s\right)
 -F_{X}\left(\frac{i_{1}}{3^{m}}s,\frac{i_{2}-1}{3^{m}}s,\frac{3^{m}-i_{1}-i_{2}}{3^{m}}s\right)+F_{X}\left(\frac{i_{1}-1}{3^{m}}s,\frac{i_{2}-1}{3^{m}}s,\frac{3^{m}-i_{1}-i_{2}}{3^{m}}s\right),

for i_{1}=1,...,3^{m}-2 and i_{2}=1,...,3^{m}-1-i_{1}. The upper bound is given by

A_{S}^{\left(u,m\right)}\left(s\right)=\sum_{i_{1}=1}^{3^{m}}\sum_{i_{2}=1}^{3^{m}+1-i_{1}}\zeta_{X}^{\left(u,m\right)}\left(s;i_{1},i_{2}\right), (14)

with

\zeta_{X}^{\left(u,m\right)}\left(s;i_{1},i_{2}\right) = \Pr\left(\frac{i_{1}-1}{3^{m}}s<X_{1}\leq\frac{i_{1}}{3^{m}}s,\frac{i_{2}-1}{3^{m}}s<X_{2}\leq\frac{i_{2}}{3^{m}}s,X_{3}\leq\frac{3^{m}+2-i_{1}-i_{2}}{3^{m}}s\right)
 = F_{X}\left(\frac{i_{1}}{3^{m}}s,\frac{i_{2}}{3^{m}}s,\frac{3^{m}+2-i_{1}-i_{2}}{3^{m}}s\right)-F_{X}\left(\frac{i_{1}-1}{3^{m}}s,\frac{i_{2}}{3^{m}}s,\frac{3^{m}+2-i_{1}-i_{2}}{3^{m}}s\right)
 -F_{X}\left(\frac{i_{1}}{3^{m}}s,\frac{i_{2}-1}{3^{m}}s,\frac{3^{m}+2-i_{1}-i_{2}}{3^{m}}s\right)+F_{X}\left(\frac{i_{1}-1}{3^{m}}s,\frac{i_{2}-1}{3^{m}}s,\frac{3^{m}+2-i_{1}-i_{2}}{3^{m}}s\right)

for i_{1}=1,...,3^{m} and i_{2}=1,...,3^{m}+1-i_{1}. Details for n>3 are provided in [Cossette et al., 2014].

4.2 Numerical bounds for zY(s)z_{Y}\left(s\right)

In this section, we propose an adaptation of this method assuming that a closed-form expression for the survival distribution function of the random vector 𝐘\mathbf{Y} is available. Our objective is to develop sharp numerical bounds, denoted BS(l,m)(s)B_{S}^{\left(l,m\right)}\left(s\right) and BS(u,m)(s)B_{S}^{\left(u,m\right)}\left(s\right), such that

BS(l,m)(s)zY(s)BS(u,m)(s)s0.B_{S}^{\left(l,m\right)}\left(s\right)\leq z_{Y}\left(s\right)\leq B_{S}^{\left(u,m\right)}\left(s\right)\text{, }s\geq 0\text{.}

Clearly, we have

BS(l,m)(s)=1AS(u,m)(s) and BS(u,m)(s)=1AS(l,m)(s)s0.B_{S}^{\left(l,m\right)}\left(s\right)=1-A_{S}^{\left(u,m\right)}\left(s\right)\text{ and }B_{S}^{\left(u,m\right)}\left(s\right)=1-A_{S}^{\left(l,m\right)}\left(s\right)\text{, }s\geq 0.

However, to achieve our goal, the task is to rewrite the expressions in (11) and (13) for A_{S}^{\left(l,m\right)}\left(s\right), and in (12) and (14) for A_{S}^{\left(u,m\right)}\left(s\right), such that B_{S}^{\left(l,m\right)}\left(s\right) and B_{S}^{\left(u,m\right)}\left(s\right) can be defined in terms of \overline{F}_{Y}. We provide expressions of the lower and upper bounds for n=2 and n=3.

For n=2n=2, we have

BS(u,m)(s)\displaystyle B_{S}^{\left(u,m\right)}\left(s\right) =\displaystyle= A¯S(l,m)(s)=1AS(l,m)(s)\displaystyle\overline{A}_{S}^{\left(l,m\right)}\left(s\right)=1-A_{S}^{\left(l,m\right)}\left(s\right)
=\displaystyle= i=12m1(F¯Y((i1)2ms,2mi2ms)F¯Y(i2ms,2mi2ms))\displaystyle\sum_{i=1}^{2^{m}-1}\left(\overline{F}_{Y}\left(\frac{\left(i-1\right)}{2^{m}}s,\frac{2^{m}-i}{2^{m}}s\right)-\overline{F}_{Y}\left(\frac{i}{2^{m}}s,\frac{2^{m}-i}{2^{m}}s\right)\right)
+F¯Y(2m12ms,0),\displaystyle+\overline{F}_{Y}\left(\frac{2^{m}-1}{2^{m}}s,0\right),

and

BS(l,m)(s)\displaystyle B_{S}^{\left(l,m\right)}\left(s\right) =\displaystyle= A¯S(u,m)(s)=1AS(u,m)(s)\displaystyle\overline{A}_{S}^{\left(u,m\right)}\left(s\right)=1-A_{S}^{\left(u,m\right)}\left(s\right)
=\displaystyle= i=12m(F¯Y(i12ms,2m+1i2ms)F¯Y(i2ms,2m+1i2ms))+F¯Y(2m2ms,0).\displaystyle\sum_{i=1}^{2^{m}}\left(\overline{F}_{Y}\left(\frac{i-1}{2^{m}}s,\frac{2^{m}+1-i}{2^{m}}s\right)-\overline{F}_{Y}\left(\frac{i}{2^{m}}s,\frac{2^{m}+1-i}{2^{m}}s\right)\right)+\overline{F}_{Y}\left(\frac{2^{m}}{2^{m}}s,0\right).

For n=3n=3, we obtain

BS(u,m)(s)\displaystyle B_{S}^{\left(u,m\right)}\left(s\right) =\displaystyle= A¯S(l,m)(s)=1AS(l,m)(s)\displaystyle\overline{A}_{S}^{\left(l,m\right)}\left(s\right)=1-A_{S}^{\left(l,m\right)}\left(s\right)
=\displaystyle= i1=13m2i2=13m1i1(F¯Y(i13ms,i23ms,3mi1i23ms)F¯Y(i113ms,i23ms,3mi1i23ms)F¯Y(i13ms,i213ms,3mi1i23ms)+F¯Y(i113ms,i213ms,3mi1i23ms))\displaystyle\sum_{i_{1}=1}^{3^{m}-2}\sum_{i_{2}=1}^{3^{m}-1-i_{1}}\left(\begin{array}[]{c}\overline{F}_{Y}\left(\frac{i_{1}}{3^{m}}s,\frac{i_{2}}{3^{m}}s,\frac{3^{m}-i_{1}-i_{2}}{3^{m}}s\right)-\overline{F}_{Y}\left(\frac{i_{1}-1}{3^{m}}s,\frac{i_{2}}{3^{m}}s,\frac{3^{m}-i_{1}-i_{2}}{3^{m}}s\right)\\ -\overline{F}_{Y}\left(\frac{i_{1}}{3^{m}}s,\frac{i_{2}-1}{3^{m}}s,\frac{3^{m}-i_{1}-i_{2}}{3^{m}}s\right)+\overline{F}_{Y}\left(\frac{i_{1}-1}{3^{m}}s,\frac{i_{2}-1}{3^{m}}s,\frac{3^{m}-i_{1}-i_{2}}{3^{m}}s\right)\end{array}\right)
+i1=13m2(F¯Y(i113ms,3m1i13ms,0)F¯Y(i13ms,3m1i13ms,0))\displaystyle+\sum_{i_{1}=1}^{3^{m}-2}\left(\overline{F}_{Y}\left(\frac{i_{1}-1}{3^{m}}s,\frac{3^{m}-1-i_{1}}{3^{m}}s,0\right)-\overline{F}_{Y}\left(\frac{i_{1}}{3^{m}}s,\frac{3^{m}-1-i_{1}}{3^{m}}s,0\right)\right)
+F¯Y(3m23ms,0,0),\displaystyle+\overline{F}_{Y}\left(\frac{3^{m}-2}{3^{m}}s,0,0\right),

and

BS(l,m)(s)\displaystyle B_{S}^{\left(l,m\right)}\left(s\right) =\displaystyle= A¯S(u,m)(s)=1AS(u,m)(s)\displaystyle\overline{A}_{S}^{\left(u,m\right)}\left(s\right)=1-A_{S}^{\left(u,m\right)}\left(s\right)
=\displaystyle= i1=13mi2=13m+1i1(F¯Y(i13ms,i23ms,3m+2i1i23ms)F¯Y(i113ms,i23ms,3m+2i1i23ms)F¯Y(i13ms,i213ms,3m+2i1i23ms)+F¯Y(i113ms,i213ms,3m+2i1i23ms))\displaystyle\sum_{i_{1}=1}^{3^{m}}\sum_{i_{2}=1}^{3^{m}+1-i_{1}}\left(\begin{array}[]{c}\overline{F}_{Y}\left(\frac{i_{1}}{3^{m}}s,\frac{i_{2}}{3^{m}}s,\frac{3^{m}+2-i_{1}-i_{2}}{3^{m}}s\right)-\overline{F}_{Y}\left(\frac{i_{1}-1}{3^{m}}s,\frac{i_{2}}{3^{m}}s,\frac{3^{m}+2-i_{1}-i_{2}}{3^{m}}s\right)\\ -\overline{F}_{Y}\left(\frac{i_{1}}{3^{m}}s,\frac{i_{2}-1}{3^{m}}s,\frac{3^{m}+2-i_{1}-i_{2}}{3^{m}}s\right)+\overline{F}_{Y}\left(\frac{i_{1}-1}{3^{m}}s,\frac{i_{2}-1}{3^{m}}s,\frac{3^{m}+2-i_{1}-i_{2}}{3^{m}}s\right)\end{array}\right)
+i1=13m(F¯Y(i113ms,3m+1i13ms,0)F¯Y(i13ms,3m+1i13ms,0))\displaystyle+\sum_{i_{1}=1}^{3^{m}}\left(\overline{F}_{Y}\left(\frac{i_{1}-1}{3^{m}}s,\frac{3^{m}+1-i_{1}}{3^{m}}s,0\right)-\overline{F}_{Y}\left(\frac{i_{1}}{3^{m}}s,\frac{3^{m}+1-i_{1}}{3^{m}}s,0\right)\right)
+F¯Y(3m3ms,0,0).\displaystyle+\overline{F}_{Y}\left(\frac{3^{m}}{3^{m}}s,0,0\right).

Expressions for lower and upper bounds for n>3n>3 can be derived in a similar way.
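For illustration, a minimal Python sketch of the bounds for n=2 is given below (names and parameter values are ours and purely illustrative; for concreteness the survival function corresponds to the setting of Table 3, a Clayton survival copula with Pareto(2.5,1) margins and Kendall's τ equal to 1/2, but any multivariate survival function available in closed form can be used):

```python
def survival_Y2(y1, y2, alpha=2.5, theta=0.5):
    # Pr(Y1 > y1, Y2 > y2) = C(Fbar_1(y1), Fbar_2(y2)) with a Clayton copula C;
    # theta = 0.5 gives Kendall's tau = 1/2, margins are Pareto(2.5, 1) (Table 3 setting)
    u1 = (1.0 + max(y1, 0.0)) ** (-alpha)
    u2 = (1.0 + max(y2, 0.0)) ** (-alpha)
    return (u1 ** (-1.0 / theta) + u2 ** (-1.0 / theta) - 1.0) ** (-theta)

def bounds_BS_2d(s, m, Fbar=survival_Y2):
    # B_S^(l,m)(s) <= Pr(Y1 + Y2 > s) <= B_S^(u,m)(s), written only in terms of Fbar
    k = 2 ** m
    upper = sum(Fbar((i - 1) * s / k, (k - i) * s / k) - Fbar(i * s / k, (k - i) * s / k)
                for i in range(1, k)) + Fbar((k - 1) * s / k, 0.0)
    lower = sum(Fbar((i - 1) * s / k, (k + 1 - i) * s / k) - Fbar(i * s / k, (k + 1 - i) * s / k)
                for i in range(1, k + 1)) + Fbar(s, 0.0)
    return lower, upper
```

By construction, bounds_BS_2d returns B_{S}^{\left(l,m\right)}\left(s\right) and B_{S}^{\left(u,m\right)}\left(s\right), which bracket z_{Y}\left(s\right).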

5 Numerical study

The numerical performances of the four estimators and the numerical bounds are discussed in this section. We first compare both approaches by considering small n (n=2,3) and values of s ranging from moderate to large. We then study the accuracy of the four estimators for the case n=5, where the numerical bounds may not be computed in a reasonable time.

For both 𝐗\mathbf{X} and 𝐘\mathbf{Y}, we assume that the marginal distributions are Pareto(αi,1)(\alpha_{i},1), i.e. fXi(x)=αi/(1+x)αi+1f_{X_{i}}(x)=\alpha_{i}/(1+x)^{\alpha_{i}+1} for x>0x>0. For the dependence structure, we shall consider a Clayton or Gumbel copula.

The generator of the Clayton copula of parameter θ(0,)\theta\in(0,\infty) and its inverse function are given by

Φ(t)=(1+tθ)θandΦ(t)=θ(t1/θ1).\Phi(t)=\left(1+\frac{t}{\theta}\right)^{-\theta}\ \text{and}\ \Phi^{\leftarrow}(t)=\theta\left(t^{-1/\theta}-1\right).

The derivatives of the generator are calculated as follows

\Phi^{(k)}(t)=(-1)^{k}\left(1+\frac{1}{\theta}\right)\left(1+\frac{2}{\theta}\right)\ldots\left(1+\frac{k-1}{\theta}\right)\ \left(1+\frac{t}{\theta}\right)^{-\theta-k}.

The formula for the nn-dimensional copula is

C(u1,,un)=(u11/θ++un1/θ(n1))θ.C(u_{1},...,u_{n})=\left(u_{1}^{-1/\theta}+...+u_{n}^{-1/\theta}-(n-1)\right)^{-\theta}.

Its Kendall’s tau is given by τ=θ1/(2+θ1)\tau=\theta^{-1}/(2+\theta^{-1}). Note that the Clayton copula has a generator satisfying the assumptions of Proposition 14.
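For completeness, a minimal Python sketch of the Clayton generator, its inverse and its k-th derivative (using the closed form displayed above; all names are ours and purely illustrative) reads as follows:

```python
def clayton_phi(t, theta):
    # Generator Phi(t) = (1 + t/theta)^(-theta)
    return (1.0 + t / theta) ** (-theta)

def clayton_phi_inv(u, theta):
    # Generalized inverse Phi^{<-}(u) = theta * (u^(-1/theta) - 1)
    return theta * (u ** (-1.0 / theta) - 1.0)

def clayton_phi_deriv(t, theta, k):
    # k-th derivative: (-1)^k (1 + 1/theta)(1 + 2/theta)...(1 + (k-1)/theta) (1 + t/theta)^(-theta - k)
    coef = 1.0
    for j in range(1, k):
        coef *= 1.0 + j / theta
    return (-1.0) ** k * coef * (1.0 + t / theta) ** (-theta - k)
```

For instance, clayton_phi_deriv(t, theta, 1) returns -(1 + t/theta)^{-theta-1}, in agreement with the formula above.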

The generator of the Gumbel copula with parameter b(0,1)b\in(0,1) and its inverse function are given by

\Phi(t)=\exp(-t^{b})\ \text{and}\ \Phi^{\leftarrow}(t)=\left(-\log(t)\right)^{1/b}.

The first four derivatives of the generator are calculated as follows

\Phi^{(1)}(t) = \exp(-t^{b})\left(-bt^{b-1}\right)
Φ(2)(t)\displaystyle\Phi^{(2)}(t) =\displaystyle= exp(tb)(b(b1)tb2+b2t2b2)\displaystyle\exp(-t^{b})\left(-b(b-1)t^{b-2}+b^{2}t^{2b-2}\right)
Φ(3)(t)\displaystyle\Phi^{(3)}(t) =\displaystyle= exp(tb)(b(b1)(b2)tb3+3b2(b1)t2b3b3t3b3)\displaystyle\exp(-t^{b})\left(-b(b-1)(b-2)t^{b-3}+3b^{2}(b-1)t^{2b-3}-b^{3}t^{3b-3}\right)
Φ(4)(t)\displaystyle\Phi^{(4)}(t) =\displaystyle= exp(tb)×\displaystyle\exp(-t^{b})\times
(b(b1)(b2)(b3)tb4+b2(b1)(7b11)t2b46b3(b1)t3b4+b4t4b4).\displaystyle\left(-b(b-1)(b-2)(b-3)t^{b-4}+b^{2}(b-1)(7b-11)t^{2b-4}-6b^{3}(b-1)t^{3b-4}+b^{4}t^{4b-4}\right).

The formula for the nn-dimensional Gumbel copula is

C(u1,,un)=exp([(log(u1))1/b++(log(un))1/b]b).C(u_{1},\ldots,u_{n})=\exp\left(-\left[\left(-\log(u_{1})\right)^{1/b}+\ldots+\left(-\log(u_{n})\right)^{1/b}\right]^{b}\right).

Its Kendall’s tau is given by \tau=1-b.
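As in the Clayton case, the displayed formulas can be transcribed directly; the following Python sketch (illustrative names, for 0<b<1 and t>0) evaluates the Gumbel generator, its inverse and its first four derivatives:

```python
import math

def gumbel_phi(t, b):
    # Generator Phi(t) = exp(-t^b), 0 < b < 1
    return math.exp(-t ** b)

def gumbel_phi_inv(u, b):
    # Generalized inverse Phi^{<-}(u) = (-log u)^(1/b)
    return (-math.log(u)) ** (1.0 / b)

def gumbel_phi_deriv(t, b, k):
    # First four derivatives of Phi, transcribed from the display above (k = 1, ..., 4)
    e = math.exp(-t ** b)
    if k == 1:
        return e * (-b * t ** (b - 1))
    if k == 2:
        return e * (-b * (b - 1) * t ** (b - 2) + b ** 2 * t ** (2 * b - 2))
    if k == 3:
        return e * (-b * (b - 1) * (b - 2) * t ** (b - 3)
                    + 3 * b ** 2 * (b - 1) * t ** (2 * b - 3)
                    - b ** 3 * t ** (3 * b - 3))
    if k == 4:
        return e * (-b * (b - 1) * (b - 2) * (b - 3) * t ** (b - 4)
                    + b ** 2 * (b - 1) * (7 * b - 11) * t ** (2 * b - 4)
                    - 6 * b ** 3 * (b - 1) * t ** (3 * b - 4)
                    + b ** 4 * t ** (4 * b - 4))
    raise ValueError("only k = 1, ..., 4 are transcribed here")
```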

5.1 Comparison of both approaches

5.1.1 Numerical illustration for zXz_{X}

In the first example, Tables 1 and 2 provide the values of the four estimators and the numerical bounds of zX(s)z_{X}\left(s\right), for n=2n=2, 3, and s=1s=1, 10210^{2}, 10410^{4}, and 10610^{6}. The parameters of the Pareto distributions are given by α1=0.9\alpha_{1}=0.9, α2=1.8\alpha_{2}=1.8, α3=2.6\alpha_{3}=2.6, which come from Section 6 in [Arbenz et al., 2011] and Section 3.1 in [Cossette et al., 2014]. Note that, since the parameters of the Pareto distributions are different, the probability zX(s)z_{X}(s) is equivalent to Pr(X1>s)\left(X_{1}>s\right) for large ss (e.g., Pr(X1>106)=\left(X_{1}>10^{6}\right)= 3.9811E-06).

s | 1-A_S^(u,20)(s) | 1-A_S^(l,20)(s) | E(Z_NR1) | E(Z_NR2) | E(Z_NR3) | E(Z_NR4) | e(Z_NR1) | e(Z_NR2) | e(Z_NR3) | e(Z_NR4)
1 | 6.84165E-01 | 6.84165E-01 | 6.83859E-01 | 6.84258E-01 | 6.77340E-01 | 6.84340E-01 | 1.22 | 1.13 | 1.18 | 1.22
1E02 | 1.63096E-02 | 1.63096E-02 | 1.63378E-02 | 1.63088E-02 | 1.63853E-02 | 1.62861E-02 | 1.99 | 1.51 | 2.78 | 2.45
1E04 | 2.5128E-04 | 2.5128E-04 | 2.5125E-04 | 2.5128E-04 | 2.5129E-04 | 2.5125E-04 | 1.45 | 2.04 | 2.53 | 3.45
1E06 | 3.9811E-06 | 3.9811E-06 | 3.9811E-06 | 3.9811E-06 | 3.9811E-06 | 3.9811E-06 | 1.85 | 1.56 | 1.75 | 1.60

Table 1: Four estimators and numerical bounds of zX(s)z_{X}(s). Sum of two Pareto whose Clayton copula has Kendall’s τ\tau equal to 38\frac{3}{8}.

s | 1-A_S^(u,8)(s) | 1-A_S^(l,8)(s) | E(Z_NR1) | E(Z_NR2) | E(Z_NR3) | E(Z_NR4) | e(Z_NR1) | e(Z_NR2) | e(Z_NR3) | e(Z_NR4)
1 | 8.09108E-01 | 8.09173E-01 | 8.08747E-01 | 8.07925E-01 | 8.05646E-01 | 8.10322E-01 | 1.26 | 1.18 | 1.19 | 1.15
1E02 | 1.63381E-02 | 1.63428E-02 | 1.63361E-02 | 1.63411E-02 | 1.63800E-02 | 1.63198E-02 | 2.14 | 2.83 | 2.48 | 2.53
1E04 | 2.5128E-04 | 2.5128E-04 | 2.5127E-04 | 2.5127E-04 | 2.5129E-04 | 2.5127E-04 | 2.13 | 2.51 | 2.52 | 2.03
1E06 | 3.9811E-06 | 3.9811E-06 | 3.9811E-06 | 3.9811E-06 | 3.9811E-06 | 3.9811E-06 | 1.45 | 2.11 | 2.29 | 1.89

Table 2: Four estimators and numerical bounds of zX(s)z_{X}(s). Sum of three Pareto whose Clayton copula has Kendall’s τ\tau equal to 16\frac{1}{6}.

5.1.2 Numerical illustration for zYz_{Y}

For the second example, the values of the four estimators and the numerical bounds of z_{Y}\left(s\right), for n=2, 3, and s=1, 10^{2}, 10^{3}, and 10^{4}, are displayed in Tables 3 and 4. The parameters of the Pareto distributions are equal, with \alpha_{1}=\alpha_{2}=\alpha_{3}=2.5. In this case, all components of the sum contribute to its large values.

s | B_S^(l,20)(s) | B_S^(u,20)(s) | E(Z_NR1) | E(Z_NR2) | E(Z_NR3) | E(Z_NR4) | e(Z_NR1) | e(Z_NR2) | e(Z_NR3) | e(Z_NR4)
1 | 3.60712E-01 | 3.60712E-01 | 3.61435E-01 | 3.61003E-01 | 3.60885E-01 | 3.60812E-01 | 0.35 | 0.14 | 0.15 | 0.18
1E02 | 5.14701E-05 | 5.14702E-05 | 5.17560E-05 | 5.15248E-05 | 5.14659E-05 | 5.13841E-05 | 0.60 | 0.15 | 0.15 | 0.22
1E03 | 1.70171E-07 | 1.70172E-07 | 1.71695E-07 | 1.70113E-07 | 1.69997E-07 | 1.69823E-07 | 0.60 | 0.15 | 0.15 | 0.21
1E04 | 5.40553E-10 | 5.40554E-10 | 5.38901E-10 | 5.41930E-10 | 5.40113E-10 | 5.37362E-10 | 0.61 | 0.14 | 0.15 | 0.21

Table 3: Four estimators and numerical bounds of z_{Y}(s). Sum of two Pareto whose Clayton survival copula has Kendall’s \tau equal to \frac{1}{2}.

s | B_S^(l,20)(s) | B_S^(u,20)(s) | E(Z_NR1) | E(Z_NR2) | E(Z_NR3) | E(Z_NR4) | e(Z_NR1) | e(Z_NR2) | e(Z_NR3) | e(Z_NR4)
1 | 4.99666E-01 | 5.00644E-01 | 4.99080E-01 | 5.00644E-01 | 5.00006E-01 | 5.00477E-01 | 0.47 | 0.12 | 0.12 | 0.21
1E02 | 1.35825E-04 | 1.36732E-04 | 1.34754E-04 | 1.36469E-04 | 1.36570E-04 | 1.35966E-04 | 0.79 | 0.13 | 0.13 | 0.18
1E03 | 4.58967E-07 | 4.62116E-07 | 4.61180E-07 | 4.60001E-07 | 4.60699E-07 | 4.60590E-07 | 0.79 | 0.13 | 0.13 | 0.18
1E04 | 1.46118E-09 | 1.47123E-09 | 1.47018E-09 | 1.46368E-09 | 1.46630E-09 | 1.46335E-09 | 0.79 | 0.13 | 0.13 | 0.18

Table 4: Four estimators and numerical bounds of z_{Y}(s). Sum of three Pareto whose Clayton survival copula has Kendall’s \tau equal to \frac{1}{2}.

5.1.3 Comments

For both random vectors \mathbf{X} and \mathbf{Y}, the upper and lower bounds have been computed with the R Project for Statistical Computing. Computation times are rather short and vary with the number of random variables n and the precision parameter m. The evaluation of these bounds becomes time consuming from n=4 onward, in contrast to the conditional Monte Carlo estimators, which can be obtained rapidly regardless of the dimension n. For relatively small n (n=2,3), the lower and upper bounds are close for any value of s. The results of both examples show that the four conditional Monte Carlo estimators can produce values outside of the lower and upper bounds. Both methods are complementary in the sense that one would probably be more inclined to use the numerical bounds in small dimensions and the conditional Monte Carlo method for n\geq 5.

5.2 Comparison of the four estimators when n=5n=5

We now compare the performance of our four estimators for z_{X} and z_{Y}. We assume that the Pareto parameters are equal, with \alpha_{1}=...=\alpha_{5}=2.5. We consider four dependence structures: a Clayton copula, a Gumbel copula, a Clayton survival copula and a Gumbel survival copula. We study three levels of dependence, measured by Kendall's \tau: weak (\tau=0.1), intermediate (\tau=0.5) and strong (\tau=0.9).

For the estimators Z_{NR3}(s) and Z_{NR4}(s), the results are sensitive to the choices of \lambda and \kappa. We choose the values that minimize the numerical standard deviations of the estimators.

Copulas | E(Z_NR1) | E(Z_NR2) | E(Z_NR3) | E(Z_NR4) | e(Z_NR1) | e(Z_NR2) | e(Z_NR3) | e(Z_NR4)

s = 20, Kendall’s τ = 0.1:
Clayton | 0.00379306 | 0.00386807 | 0.00384904 | 0.00383534 | 2.805 | 3.174 | 1.916 | 0.923
Gumbel | 0.00734014 | 0.00722742 | 0.00711167 | 0.00718644 | 2.818 | 1.220 | 0.946 | 0.666
Survival Clayton | 0.00751367 | 0.00765628 | 0.00771573 | 0.007658015 | 2.774 | 0.088 | 0.095 | 0.145
Survival Gumbel | 0.00443284 | 0.00431637 | 0.00432776 | 0.004326611 | 2.916 | 0.297 | 0.295 | 0.191

s = 20, Kendall’s τ = 0.5:
Clayton | 0.00592816 | 0.00626043 | 0.006108816 | 0.00610554 | 2.868 | 2.132 | 1.696 | 0.829
Gumbel | 0.01522982 | 0.01551193 | 0.015515734 | 0.015427433 | 2.106 | 0.229 | 0.229 | 0.184
Survival Clayton | 0.01639236 | 0.01661593 | 0.016648815 | 0.016583226 | 2.034 | 0.112 | 0.109 | 0.126
Survival Gumbel | 0.01095976 | 0.01116367 | 0.011170268 | 0.011168049 | 2.378 | 0.065 | 0.065 | 0.110

s = 20, Kendall’s τ = 0.9:
Clayton | 0.01701347 | 0.01722898 | 0.01730004 | 0.01714553 | 1.909 | 0.636 | 0.635 | 0.363
Gumbel | 0.01818365 | 0.01918613 | 0.0191801 | 0.01919531 | 1.922 | 0.028 | 0.029 | 0.032
Survival Clayton | 0.0175496 | 0.01786598 | 0.01786844 | 0.01787014 | 1.966 | 0.017 | 0.017 | 0.023
Survival Gumbel | 0.01682336 | 0.01763007 | 0.01764852 | 0.01765862 | 1.995 | 0.088 | 0.089 | 0.091

Table 5: Four estimators of zX(s)z_{X}(s) and zY(s)z_{Y}(s). Sum of five Pareto with s=20s=20.

The numerical results are consistent with the fact that Z_{NR1} has bounded relative error. For example, when the dependence structure is a Clayton survival copula with Kendall's \tau equal to 0.5 and s increases from 20 (Table 5) to 200 (Table 6), the value of z(s) decreases from 0.01639236 to 8.67011E-05, but the relative error of Z_{NR1} remains essentially unchanged: 2.034 compared to 2.036.

Although Z_{NR1}(s) is proved to have a bounded relative error under any dependence structure, its numerical performance is no better than that of Z_{NR2}(s). Note that Z_{NR2} has bounded relative error only when the dependence structure is an Archimedean survival copula with generator \Phi(x)=x^{-\beta}l_{\Phi}\left(x\right), which is the case of the Clayton survival copula in this section. However, except in the case of the Clayton copula, Z_{NR2} produces acceptable results in most settings. For example, in Table 5 with \tau=0.5, under the Gumbel survival copula, the ratio e(Z_{NR1})/e(Z_{NR2}) is roughly 37; in Table 6 with \tau=0.9, this ratio under the Gumbel copula is approximately 15.

Copulas | E(Z_NR1) | E(Z_NR2) | E(Z_NR3) | E(Z_NR4) | e(Z_NR1) | e(Z_NR2) | e(Z_NR3) | e(Z_NR4)

s = 200, Kendall’s τ = 0.1:
Clayton | 9.27623E-06 | 9.10775E-06 | 9.07197E-06 | 9.07119E-06 | 1.701 | 3.396 | 0.480 | 0.270
Gumbel | 3.19991E-05 | 3.16144E-05 | 3.16638E-05 | 3.13636E-05 | 3.216 | 0.752 | 0.626 | 0.584
Survival Clayton | 2.1843E-05 | 2.18073E-05 | 2.17775E-05 | 2.17643E-05 | 3.582 | 0.130 | 0.165 | 0.145
Survival Gumbel | 9.18493E-06 | 9.2342E-06 | 9.22524E-06 | 9.22872E-06 | 1.568 | 0.130 | 0.070 | 0.112

s = 200, Kendall’s τ = 0.5:
Clayton | 9.54965E-06 | 9.24395E-06 | 9.37001E-06 | 9.37181E-06 | 2.023 | 2.882 | 0.182 | 0.120
Gumbel | 7.84723E-05 | 7.9696E-05 | 7.91973E-05 | 7.96030E-05 | 2.148 | 0.273 | 0.240 | 0.230
Survival Clayton | 8.67011E-05 | 8.63854E-05 | 8.60928E-05 | 8.61954E-05 | 2.036 | 0.111 | 0.113 | 0.133
Survival Gumbel | 1.49943E-05 | 1.53712E-05 | 1.53317E-05 | 1.53699E-05 | 3.559 | 0.263 | 0.274 | 0.179

s = 200, Kendall’s τ = 0.9:
Clayton | 1.0871E-05 | 1.1563E-05 | 1.03202E-05 | 1.07798E-05 | 2.867 | 6.739 | 1.077 | 0.818
Gumbel | 9.18427E-05 | 1.09482E-04 | 1.07552E-04 | 1.09196E-04 | 1.973 | 0.134 | 0.096 | 0.127
Survival Clayton | 1.14134E-04 | 9.27657E-05 | 9.27742E-05 | 9.27769E-05 | 1.721 | 0.017 | 0.017 | 0.015
Survival Gumbel | 7.74801E-05 | 7.93332E-05 | 7.87621E-05 | 7.94657E-05 | 2.155 | 0.136 | 0.198 | 0.139

Table 6: Four estimators of zX(s)z_{X}(s) and zY(s)z_{Y}(s). Sum of five Pareto with s=200s=200.

The construction of Z_{NR3} is more complex than that of Z_{NR2}; however, the third estimator shows no numerical improvement over the second one, except in the case of the Clayton copula. Indeed, in Table 6 with \tau=0.1, the relative error of Z_{NR3} is 0.480 while that of Z_{NR2} is 3.396, and in Table 6 with \tau=0.5, the relative error of Z_{NR3} is 0.182 while that of Z_{NR2} is 2.882. Under the other dependence structures, the relative errors of Z_{NR3} and Z_{NR2} are almost the same.

The fourth estimator has bounded relative error under Archimedean survival copulas and presents favorable numerical results even when the dependence structure is an Archimedean copula. For example, Z_{NR4} has the smallest relative error under the Clayton copula in all tables. Under the Gumbel copula, Z_{NR4} also has the smallest relative error, except in Table 5 (s=20, Kendall's \tau=0.9) and Table 6 (s=200, Kendall's \tau=0.9). Under Archimedean survival copulas, there is little difference between the relative errors of Z_{NR2}, Z_{NR3} and Z_{NR4}.

6 Acknowledgements

This work was partially supported by the Natural Sciences and Engineering Research Council of Canada (Cossette: 054993; Marceau: 053934) and by the Chaire en actuariat de l’Université Laval (Cossette and Marceau: FO502323).

7 Appendix

7.1 Proof of Proposition 3

From the conditional cumulative distribution function FUj|Z,Uj1,,U1(uj|z,uj1,,u1)F_{U_{j}|Z,U_{j-1},\ldots,U_{1}}(u_{j}|z,u_{j-1},\ldots,u_{1}) with j=1j=1, we derive the conditional cumulative distribution function FU1|ZF_{U_{1}|Z}

FU1|Z(u1|z)=(1Φ(u1)Φ(z))n1F_{U_{1}|Z}(u_{1}|z)=\left(1-\frac{\Phi^{\leftarrow}(u_{1})}{\Phi^{\leftarrow}(z)}\right)^{n-1}

for z<u1<1z<u_{1}<1. Because the marginal density of U1U_{1} is 1 on (0,1)(0,1), with the density of ZZ in Proposition 2, we have the conditional density of Z|U1Z|U_{1}

fZ|U1(z|u1)=(Φ)(1)(u1)(n2)!(Φ(u1)Φ(z))n2(Φ)(1)(z)Φ(n)(Φ(z))f_{Z|U_{1}}(z|u_{1})=\frac{(\Phi^{\leftarrow})^{(1)}(u_{1})}{(n-2)!}\left(\Phi^{\leftarrow}(u_{1})-\Phi^{\leftarrow}(z)\right)^{n-2}(\Phi^{\leftarrow})^{(1)}(z)\Phi^{(n)}\left(\Phi^{\leftarrow}(z)\right)

for 0<z<u_{1}. The conditional cumulative distribution function of Z given U_{1} is obtained as follows:

FZ|U1(z|u1)\displaystyle F_{Z|U_{1}}(z|u_{1}) =\displaystyle= (Φ)(1)(u1)(1)n2(n2)!0z(Φ(v)Φ(u1))n2(Φ)(1)(v)Φ(n)(Φ(v))dv\displaystyle\left(\Phi^{\leftarrow}\right)^{(1)}(u_{1})\frac{(-1)^{n-2}}{(n-2)!}\int_{0}^{z}\left(\Phi^{\leftarrow}(v)-\Phi^{\leftarrow}(u_{1})\right)^{n-2}\left(\Phi^{\leftarrow}\right)^{(1)}(v)\Phi^{(n)}\left(\Phi^{\leftarrow}(v)\right)\mathrm{d}v
=\displaystyle= (Φ)(1)(u1)(1)n2(n2)!Φ(z)(vΦ(u1))n2Φ(n)(v)dv\displaystyle-\left(\Phi^{\leftarrow}\right)^{(1)}(u_{1})\frac{(-1)^{n-2}}{(n-2)!}\int_{\Phi^{\leftarrow}(z)}^{\infty}\left(v-\Phi^{\leftarrow}(u_{1})\right)^{n-2}\Phi^{(n)}(v)\mathrm{d}v
=\displaystyle= (Φ)(1)(u1)(1)n2(n2)!Φ(z)(vΦ(u1))n2d(Φ(n1)(v))\displaystyle-\left(\Phi^{\leftarrow}\right)^{(1)}(u_{1})\frac{(-1)^{n-2}}{(n-2)!}\int_{\Phi^{\leftarrow}(z)}^{\infty}\left(v-\Phi^{\leftarrow}(u_{1})\right)^{n-2}\mathrm{d}\left(\Phi^{(n-1)}(v)\right)
=\displaystyle= (Φ)(1)(u1)(1)n2(n2)!(vΦ(u1))n2Φ(n1)(v)|Φ(z)\displaystyle-\left(\Phi^{\leftarrow}\right)^{(1)}(u_{1})\frac{(-1)^{n-2}}{(n-2)!}\left(v-\Phi^{\leftarrow}(u_{1})\right)^{n-2}\Phi^{(n-1)}(v)|_{\Phi^{\leftarrow}(z)}^{\infty}
+\left(\Phi^{\leftarrow}\right)^{(1)}(u_{1})\frac{(-1)^{n-2}}{(n-2)!}\int_{\Phi^{\leftarrow}(z)}^{\infty}(n-2)\left(v-\Phi^{\leftarrow}(u_{1})\right)^{n-3}\Phi^{(n-1)}(v)\mathrm{d}v.

Note that \lim\limits_{v\rightarrow\infty}\left(v-\Phi^{\leftarrow}(u_{1})\right)^{j}\Phi^{(j+1)}(v)=0 for all j=1,\ldots,n-2. Iterating the integration by parts, the distribution of Z conditional on U_{1} is then

FZ|U1(z|u1)\displaystyle F_{Z|U_{1}}(z|u_{1}) =\displaystyle= (Φ)(1)(u1)[(1)n2(n2)!(Φ(z)Φ(u1))n2Φ(n1)(Φ(z))\displaystyle\left(\Phi^{\leftarrow}\right)^{(1)}(u_{1})\big{[}\frac{(-1)^{n-2}}{(n-2)!}\left(\Phi^{\leftarrow}(z)-\Phi^{\leftarrow}(u_{1})\right)^{n-2}\Phi^{(n-1)}(\Phi^{\leftarrow}(z))
\displaystyle- (1)n3(n3)!Φ(z)(vΦ(u1))n3Φ(n1)(v)dv]\displaystyle\frac{(-1)^{n-3}}{(n-3)!}\int_{\Phi^{\leftarrow}(z)}^{\infty}\left(v-\Phi^{\leftarrow}(u_{1})\right)^{n-3}\Phi^{(n-1)}(v)\mathrm{d}v\big{]}
=\displaystyle= \displaystyle\ldots\ldots
=\displaystyle= (Φ)(1)(u1)j=0n2(1)jj!(Φ(z)Φ(u1))jΦ(j+1)(Φ(z))\displaystyle\left(\Phi^{\leftarrow}\right)^{(1)}(u_{1})\sum_{j=0}^{n-2}\frac{(-1)^{j}}{j!}\left(\Phi^{\leftarrow}(z)-\Phi^{\leftarrow}(u_{1})\right)^{j}\Phi^{(j+1)}(\Phi^{\leftarrow}(z))

for z(0,u1)z\in(0,u_{1}).
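As a numerical check of this closed form, the following Python sketch (Clayton generator of Section 5; names and parameterization are ours and purely illustrative) evaluates F_{Z|U_{1}}(z|u_{1}); for n=2 and the Clayton generator one can verify that it reduces to (z/u_{1})^{(\theta+1)/\theta}, which indeed increases from 0 to 1 on (0,u_{1}).

```python
import math

# Clayton generator pieces (Section 5), repeated here so the sketch is self-contained
def phi_inv(u, theta):          # Phi^{<-}(u)
    return theta * (u ** (-1.0 / theta) - 1.0)

def phi_inv_d1(u, theta):       # (Phi^{<-})^{(1)}(u)
    return -u ** (-1.0 / theta - 1.0)

def phi_deriv(t, theta, k):     # Phi^{(k)}(t)
    c = 1.0
    for j in range(1, k):
        c *= 1.0 + j / theta
    return (-1.0) ** k * c * (1.0 + t / theta) ** (-theta - k)

def F_Z_given_U1(z, u1, n, theta):
    # F_{Z|U1}(z|u1) = (Phi^{<-})'(u1) * sum_{j=0}^{n-2} (-1)^j/j!
    #                  * (Phi^{<-}(z) - Phi^{<-}(u1))^j * Phi^{(j+1)}(Phi^{<-}(z)),  0 < z < u1 < 1
    t = phi_inv(z, theta)
    d = t - phi_inv(u1, theta)
    total = sum((-1.0) ** j / math.factorial(j) * d ** j * phi_deriv(t, theta, j + 1)
                for j in range(n - 1))
    return phi_inv_d1(u1, theta) * total
```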

7.2 Proof of Proposition 11

𝕍ar(ZNR1X(s))\displaystyle\mathbb{V}ar(Z_{NR1}^{X}(s)) =\displaystyle= 𝕍ar(i=1n(F¯i(s/n)F¯i(s))𝕀{SnXi>s,Xii=MnXi})\displaystyle\mathbb{V}ar\left(\sum_{i=1}^{n}\left(\overline{F}_{i}(s/n)-\overline{F}_{i}(s)\right)\mathbb{I}_{\{S_{n}^{X_{i}}>s,X_{i}^{i}=M_{n}^{X_{i}}\}}\right)
=\displaystyle= i=1n(F¯i(s/n)F¯i(s))2Var(𝕀{SnXi>s,Xii=MnXi})\displaystyle\sum_{i=1}^{n}\left(\overline{F}_{i}(s/n)-\overline{F}_{i}(s)\right)^{2}Var\left(\mathbb{I}_{\{S_{n}^{X_{i}}>s,X_{i}^{i}=M_{n}^{X_{i}}\}}\right)
\displaystyle\leq i=1n(F¯i(s/n))2\displaystyle\sum_{i=1}^{n}\left(\overline{F}_{i}(s/n)\right)^{2}
\displaystyle\sim i=1nn2αi(F¯i(s))2\displaystyle\sum_{i=1}^{n}n^{2\alpha_{i}}\left(\overline{F}_{i}(s)\right)^{2}
\displaystyle\leq (i=1nn2αi)(z(s))2\displaystyle\left(\sum_{i=1}^{n}n^{2\alpha_{i}}\right)\left(z(s)\right)^{2}

The variance of ZNR1Y(s)Z_{NR1}^{Y}(s) can be verified similarly.

7.3 Proof of Proposition 14

Because Φ(n2)\Phi^{(n-2)} is differentiable, the survival distribution function of the radius RR becomes

F¯R(x)=j=0n1(1)jxjj!Φ(j)(x).\overline{F}_{R}(x)=\sum_{j=0}^{n-1}(-1)^{j}\ \frac{x^{j}}{j!}\ \Phi^{(j)}(x).

Following the property of the regularly varying function Φ(x)=xβlΦ(x)\Phi(x)=x^{-\beta}l_{\Phi}\left(x\right), we have, for j=1,,(n1)j=1,\ldots,(n-1),

limx(1)jxjΦ(j)(x)Φ(x)=β(β+1)(β+j1)\lim\limits_{x\rightarrow\infty}\frac{(-1)^{j}\ x^{j}\ \Phi^{(j)}(x)}{\Phi(x)}=\beta(\beta+1)\ldots(\beta+j-1)

and we can deduce

\lim\limits_{x\rightarrow\infty}\frac{\overline{F}_{R}(x)}{\Phi(x)}=\lim\limits_{x\rightarrow\infty}\frac{\sum_{j=0}^{n-1}(-1)^{j}\ \frac{x^{j}}{j!}\ \Phi^{(j)}(x)}{\Phi(x)}=\sum_{j=0}^{n-1}\frac{\beta(\beta+1)\ldots(\beta+j-1)}{j!}.

We define g(r)=\sum_{i=1}^{n}\overline{F}_{i}^{\leftarrow}(\Phi(r)) and L_{0}^{Y}(s)=\inf\{r\in\mathbb{R}^{+}:g(r)\geq s\}. Because \overline{F}^{\leftarrow} and \Phi are both non-increasing functions, for all \mathbf{W}\in\mathfrak{s}_{n} we have

g(r)=i=1nF¯i(Φ(r×1))i=1nF¯i(Φ(rWi))g(r)=\sum_{i=1}^{n}\overline{F}_{i}^{\leftarrow}(\Phi(r\times 1))\geq\sum_{i=1}^{n}\overline{F}_{i}^{\leftarrow}(\Phi(rW_{i}))

and then L_{0}^{Y}(s)\leq L^{Y}(\mathbf{W},s) for all \mathbf{W}\in\mathfrak{s}_{n}. Moreover, from the definition of L_{0}^{Y}(s), we have

maxi=1,2,,nF¯i(Φ(L0Y(s)))s/n\max\limits_{i=1,2,\ldots,n}\overline{F}_{i}^{\leftarrow}(\Phi(L_{0}^{Y}(s)))\geq s/n

and

Φ(L0Y(s))maxi=1,2,,nF¯i(s/n)nαnmaxi=1,2,,nF¯i(s)nαnz(s).\Phi(L_{0}^{Y}(s))\leq\max\limits_{i=1,2,\ldots,n}\overline{F}_{i}(s/n)\leq n^{\alpha_{n}}\max\limits_{i=1,2,\ldots,n}\overline{F}_{i}(s)\leq n^{\alpha_{n}}z(s).

Thus, with limsL0Y(s)=\lim\limits_{s\rightarrow\infty}L_{0}^{Y}(s)=\infty, the second moment of ZNR2Y(s)Z_{NR2}^{Y}(s) is bounded in the following way

𝔼[(ZNR2Y(s))2]\displaystyle\mathbb{E}\left[\left(Z_{NR2}^{Y}(s)\right)^{2}\right] \displaystyle\leq 2((Pr(MnY>s))2+E[(F¯R(LY(𝐖,s))2)])\displaystyle 2\left(\left(\Pr(M_{n}^{Y}>s)\right)^{2}+E\left[\left(\overline{F}_{R}(L^{Y}(\mathbf{W},s))^{2}\right)\right]\right)
\displaystyle\leq 2((Pr(MnY>s))2+(F¯R(L0Y(s)))2)\displaystyle 2\left(\left(\Pr(M_{n}^{Y}>s)\right)^{2}+\left(\overline{F}_{R}(L_{0}^{Y}(s))\right)^{2}\right)
\sim 2\left(\left(\Pr(M_{n}^{Y}>s)\right)^{2}+\left(\sum_{j=0}^{n-1}\frac{\beta(\beta+1)\ldots(\beta+j-1)}{j!}\right)^{2}\left(\Phi(L_{0}^{Y}(s))\right)^{2}\right)
\leq 2\left(\left(z(s)\right)^{2}+\left[\sum_{j=0}^{n-1}\frac{\beta(\beta+1)\ldots(\beta+j-1)}{j!}\right]^{2}\times n^{2\alpha_{n}}\left(z(s)\right)^{2}\right)
\leq 2\left(1+n^{2\alpha_{n}}\left(\sum_{j=0}^{n-1}\frac{\beta(\beta+1)\ldots(\beta+j-1)}{j!}\right)^{2}\right)\left(z(s)\right)^{2}

and the result follows.
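The survival function \overline{F}_{R} and the limiting constant used in this proof are straightforward to evaluate numerically; a minimal Python sketch for the Clayton generator of Section 5, for which \beta=\theta, is given below (all names are ours and purely illustrative).

```python
import math

def clayton_phi_deriv(t, theta, k):
    # k-th derivative of Phi(t) = (1 + t/theta)^(-theta); k = 0 returns Phi itself
    c = 1.0
    for j in range(1, k):
        c *= 1.0 + j / theta
    return (-1.0) ** k * c * (1.0 + t / theta) ** (-theta - k)

def radial_survival(x, n, theta):
    # F_bar_R(x) = sum_{j=0}^{n-1} (-1)^j x^j / j! * Phi^{(j)}(x)
    return sum((-1.0) ** j * x ** j / math.factorial(j) * clayton_phi_deriv(x, theta, j)
               for j in range(n))

def tail_constant(n, beta):
    # sum_{j=0}^{n-1} beta (beta+1) ... (beta+j-1) / j!  (the j = 0 term equals 1)
    total, prod = 0.0, 1.0
    for j in range(n):
        total += prod / math.factorial(j)
        prod *= beta + j
    return total

# For large x, radial_survival(x, n, theta) / clayton_phi_deriv(x, theta, 0)
# should be close to tail_constant(n, theta), since beta = theta for this generator.
```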

7.4 Proof of Proposition 18

We start with an inequality between \Phi and F_{R}. If \Phi is an n-monotone function that is (n-1)-times differentiable and the random variable R has a distribution function satisfying (7), then we have

Φ(ax)(1a)(n1)F¯R(x),x+and a(0,1).\frac{\Phi(ax)}{(1-a)^{(n-1)}}\geq\bar{F}_{R}(x),\ \forall x\in\mathbb{R}^{+}\mathit{\ }\text{and }a\in(0,1).

Indeed, because \Phi is (n-1)-times differentiable, a Taylor expansion of \Phi at x with the Lagrange form of the remainder gives, for some \mu\in(ax,x),

Φ(ax)=k=0n2(1a)kxkk!(1)kΦ(k)(x)+(1a)(n1)x(n1)(n1)!(1)(n1)Φ(n1)(μ).\Phi(ax)=\sum_{k=0}^{n-2}(1-a)^{k}\ \frac{x^{k}}{k!}\ (-1)^{k}\Phi^{(k)}(x)+(1-a)^{(n-1)}\ \frac{x^{(n-1)}}{(n-1)!}\ (-1)^{(n-1)}\ \Phi^{{(n-1)}}(\mu).

By the properties of n-monotone functions, (-1)^{(n-2)}\Phi^{(n-2)}(x) is a convex function, so (-1)^{(n-1)}\Phi^{(n-1)}(x) is non-increasing; hence (-1)^{(n-1)}\Phi^{(n-1)}(\mu)\geq(-1)^{(n-1)}\Phi^{(n-1)}(x) because \mu\leq x. Thus we have

Φ(ax)k=0n1(1a)k(1)kxkk!Φ(k)(x)\Phi(ax)\geq\sum_{k=0}^{n-1}(1-a)^{k}(-1)^{k}\ \frac{x^{k}}{k!}\ \Phi^{(k)}(x)

and

Φ(ax)(1a)(n1)k=0n1(1a)(kn+1)(1)kxkk!Φ(k)(x)F¯R(x).\frac{\Phi(ax)}{(1-a)^{(n-1)}}\geq\sum_{k=0}^{n-1}(1-a)^{(k-n+1)}(-1)^{k}\frac{x^{k}}{k!}\Phi^{(k)}(x)\geq\bar{F}_{R}(x). (17)

To prove Proposition 18, first note that ZNR3,2Y(s)Z_{NR3,2}^{Y}(s) is bounded by F¯R(LλY(𝐖,s))\bar{F}_{R}\left(L_{\lambda}^{Y}(\mathbf{W},s)\right) since

ZNR3,2Y(s)=FR(UY(𝐖,s))FR(LλY(𝐖,s))F¯R(LλY(𝐖,s)).Z_{NR3,2}^{Y}(s)=F_{R}\left(U^{Y}(\mathbf{W},s)\right)-F_{R}\left(L_{\lambda}^{Y}(\mathbf{W},s)\right)\leq\bar{F}_{R}\left(L_{\lambda}^{Y}(\mathbf{W},s)\right).

Moreover, from the definitions of L_{\lambda}^{Y}(\mathbf{W},s) and M_{2}\{\Phi^{\leftarrow}(\overline{F}_{i}(\lambda s))/W_{i}\}, there exist two indices i_{1},i_{2}\in\{1,2,\ldots,n\} such that

LλY(𝐖,s)M2{Φ(F¯i(λs))Wi}=Φ(F¯i1(λs))Wi1Φ(F¯i2(λs))Wi2.L_{\lambda}^{Y}(\mathbf{W},s)\geq M_{2}\left\{\frac{\Phi^{\leftarrow}(\overline{F}_{i}(\lambda s))}{W_{i}}\right\}=\frac{\Phi^{\leftarrow}(\overline{F}_{i_{1}}(\lambda s))}{W_{i_{1}}}\vee\frac{\Phi^{\leftarrow}(\overline{F}_{i_{2}}(\lambda s))}{W_{i_{2}}}.

Therefore,

{Wi1LλY(𝐖,s)Φ(F¯i1(λs))Wi2LλY(𝐖,s)Φ(F¯i2(λs))\left\{\begin{array}[]{c@{ \geq}c}W_{i_{1}}L_{\lambda}^{Y}(\mathbf{W},s)&\Phi^{\leftarrow}(\overline{F}_{i_{1}}(\lambda s))\\ W_{i_{2}}L_{\lambda}^{Y}(\mathbf{W},s)&\Phi^{\leftarrow}(\overline{F}_{i_{2}}(\lambda s))\end{array}\right.

implies

{Φ(Wi1LλY(𝐖,s))F¯i1(λs)Φ(Wi2LλY(𝐖,s))F¯i2(λs).\left\{\begin{array}[]{c@{ \leq}c}\Phi\left(W_{i_{1}}L_{\lambda}^{Y}(\mathbf{W},s)\right)&\overline{F}_{i_{1}}(\lambda s)\\ \Phi\left(W_{i_{2}}L_{\lambda}^{Y}(\mathbf{W},s)\right)&\overline{F}_{i_{2}}(\lambda s)\end{array}\right..

Applying (17) with a=W_{i_{j}}, j=1,2, and x=L_{\lambda}^{Y}(\mathbf{W},s), we have

{(1Wi1)n1F¯R(LλY(𝐖,s))Φ(Wi1LλY(𝐖,s))(1Wi2)n1F¯R(LλY(𝐖,s))Φ(Wi2LλY(𝐖,s))\left\{\begin{array}[]{c@{ \leq}c}(1-W_{i_{1}})^{n-1}\bar{F}_{R}\left(L_{\lambda}^{Y}(\mathbf{W},s)\right)&\Phi\left(W_{i_{1}}L_{\lambda}^{Y}(\mathbf{W},s)\right)\\ (1-W_{i_{2}})^{n-1}\bar{F}_{R}\left(L_{\lambda}^{Y}(\mathbf{W},s)\right)&\Phi\left(W_{i_{2}}L_{\lambda}^{Y}(\mathbf{W},s)\right)\end{array}\right.

and for j=1,2j=1,2 we have F¯ij(λs)λαijF¯ij(s)λαnz(s)\overline{F}_{i_{j}}(\lambda s)\sim\lambda^{-\alpha_{i_{j}}}\overline{F}_{i_{j}}(s)\leq\lambda^{-\alpha_{n}}z(s). Finally,

𝔼[(ZNR3,2Y(s))2]𝔼[F¯R(LλY(𝐖,s))2]\displaystyle\mathbb{E}\left[\left(Z_{NR3,2}^{Y}(s)\right)^{2}\right]\leq\mathbb{E}\left[\overline{F}_{R}\left(L_{\lambda}^{Y}(\mathbf{W},s)\right)^{2}\right] \displaystyle\leq 𝔼[((1Wi1)(n1)(1Wi2)(n1))2]λ2αn(z(s))2\displaystyle\mathbb{E}\left[\left((1-W_{i_{1}})^{-(n-1)}\wedge(1-W_{i_{2}})^{-(n-1)}\right)^{2}\right]\lambda^{-2\alpha_{n}}\left(z(s)\right)^{2}
\displaystyle\leq 22n2λ2αn(z(s))2.\displaystyle 2^{2n-2}\lambda^{-2\alpha_{n}}\left(z(s)\right)^{2}.

7.5 Proof of Proposition 15

The multivariate survival distribution function of Y=(Y1,,Yn)Y=(Y_{1},...,Y_{n}) is given by

\Pr(Y_{1}>y_{1},...,Y_{n}>y_{n})=C\left(\overline{F}_{1}(y_{1}),...,\overline{F}_{n}(y_{n})\right)

and it follows that

F_{Y}\left(y_{1},...,y_{n}\right)=\sum_{j=0}^{n}(-1)^{j}\sum_{1\leq i_{1}<\ldots<i_{j}\leq n}C(\overline{F}_{i_{1}}(y_{i_{1}}),...,\overline{F}_{i_{j}}(y_{i_{j}})).

In this sum, C applied to j arguments denotes the corresponding j-dimensional margin of the copula, and the term for j=0 equals 1. We can differentiate F_{Y}(y_{1},\ldots,y_{n}) with respect to \mathbf{y}_{-i}; among the 2^{n} terms of the sum, only two remain different from 0 after taking the (n-1) derivatives.

(n1)FY(y1,,yn)y1yi1yi+1yn\displaystyle\frac{\partial^{(n-1)}F_{Y}\left(y_{1},...,y_{n}\right)}{\partial y_{1}...\partial y_{i-1}\partial y_{i+1}...\partial y_{n}} =\displaystyle= Φ(n1)(jiΦ(F¯j(yj)))ji[(Φ)(1)(F¯j(yj))fj(yj)]\displaystyle\Phi^{(n-1)}\left(\sum_{j\neq i}\Phi^{\leftarrow}(\overline{F}_{j}(y_{j}))\right)\prod_{j\neq i}\left[\left(\Phi^{\leftarrow}\right)^{(1)}(\overline{F}_{j}(y_{j}))f_{j}(y_{j})\right]
\displaystyle- Φ(n1)(j=1nΦ(F¯j(yj)))ji[(Φ)(1)(F¯j(yj))fj(yj)]\displaystyle\Phi^{(n-1)}\left(\sum_{j=1}^{n}\Phi^{\leftarrow}(\overline{F}_{j}(y_{j}))\right)\prod_{j\neq i}\left[\left(\Phi^{\leftarrow}\right)^{(1)}(\overline{F}_{j}(y_{j}))f_{j}(y_{j})\right]

and note that the density of 𝐘i\mathbf{Y}_{-i} is

f(𝐲i)=Φ(n1)(j=1,jinΦ(F¯j(yj)))j=1,jin[(Φ)(1)(F¯j(yj))fj(yj)].f(\mathbf{y}_{-i})=\Phi^{(n-1)}\left(\sum_{j=1,j\neq i}^{n}\Phi^{\leftarrow}(\overline{F}_{j}(y_{j}))\right)\prod_{j=1,j\neq i}^{n}\left[\left(\Phi^{\leftarrow}\right)^{(1)}(\overline{F}_{j}(y_{j}))f_{j}(y_{j})\right].

The conditional distribution of Yi=(Yi|𝐘i=𝐲i)Y_{i}^{\ast}=\left(Y_{i}|\mathbf{Y}_{-i}=\mathbf{y}_{-i}\right) is then

Pr(Yi<yi|𝐘i=𝐲i)\displaystyle\Pr(Y_{i}<y_{i}|\mathbf{Y}_{-i}=\mathbf{y}_{-i}) =\displaystyle= 1(1)n1Φ(n1)(j=1nΦ(F¯j(yj)))ji[(Φ)(1)(F¯j(yj))fj(yj)](1)n1Φ(n1)(jiΦ(F¯j(yj)))ji[(Φ)(1)(F¯j(yj))fj(yj)]\displaystyle 1-\frac{(-1)^{n-1}\Phi^{(n-1)}(\sum_{j=1}^{n}\Phi^{\leftarrow}(\overline{F}_{j}(y_{j})))\prod_{j\neq i}\left[\left(\Phi^{\leftarrow}\right)^{(1)}(\overline{F}_{j}(y_{j}))f_{j}(y_{j})\right]}{(-1)^{n-1}\Phi^{(n-1)}(\sum_{j\neq i}\Phi^{\leftarrow}(\overline{F}_{j}(y_{j})))\prod_{j\neq i}\left[\left(\Phi^{\leftarrow}\right)^{(1)}(\overline{F}_{j}(y_{j}))f_{j}(y_{j})\right]}
=\displaystyle= 1Φ(n1)(j=1nΦ(F¯j(yj)))Φ(n1)(jiΦ(F¯j(yj))).\displaystyle 1-\frac{\Phi^{(n-1)}\left(\sum_{j=1}^{n}\Phi^{\leftarrow}(\overline{F}_{j}(y_{j}))\right)}{\Phi^{(n-1)}\left(\sum_{j\neq i}\Phi^{\leftarrow}(\overline{F}_{j}(y_{j}))\right)}.
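A minimal Python sketch that evaluates this conditional distribution function for the Clayton generator with Pareto margins (names and parameterization are ours and purely illustrative) reads as follows.

```python
def clayton_phi_inv(u, theta):
    # Phi^{<-}(u) for the Clayton generator Phi(t) = (1 + t/theta)^(-theta)
    return theta * (u ** (-1.0 / theta) - 1.0)

def clayton_phi_deriv(t, theta, k):
    # k-th derivative of the Clayton generator
    c = 1.0
    for j in range(1, k):
        c *= 1.0 + j / theta
    return (-1.0) ** k * c * (1.0 + t / theta) ** (-theta - k)

def cond_cdf_Yi(y_i, y_minus_i, i, alphas, theta):
    # Pr(Y_i <= y_i | Y_{-i} = y_{-i})
    #   = 1 - Phi^{(n-1)}(sum_j Phi^{<-}(Fbar_j(y_j))) / Phi^{(n-1)}(sum_{j != i} Phi^{<-}(Fbar_j(y_j)))
    # with Pareto(alpha_j, 1) margins, Fbar_j(y) = (1 + y)^(-alpha_j); y_minus_i lists the
    # components in increasing order of index, skipping index i
    n = len(alphas)
    others = [j for j in range(n) if j != i]
    t_minus_i = sum(clayton_phi_inv((1.0 + y) ** (-alphas[j]), theta)
                    for j, y in zip(others, y_minus_i))
    t_all = t_minus_i + clayton_phi_inv((1.0 + y_i) ** (-alphas[i]), theta)
    return 1.0 - clayton_phi_deriv(t_all, theta, n - 1) / clayton_phi_deriv(t_minus_i, theta, n - 1)
```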

7.6 Proof of Proposition 21

We have

Pr(SnY>s,MnYκs)=Pr(SnY>s,MnYκs,Mn1Y>1κn1s).\Pr(S_{n}^{Y}>s,M_{n}^{Y}\leq\kappa s)=\Pr(S_{n}^{Y}>s,M_{n}^{Y}\leq\kappa s,M_{n-1}^{Y}>\frac{1-\kappa}{n-1}\ s).

If we estimate this probability conditionally on \mathbf{W}\in\mathfrak{s}_{n} by the same method as for Z_{NR3,2}^{Y}(s), the value of \lambda in this case is \frac{1-\kappa}{n-1}\in(0,1/n), and the second moment of this estimator is bounded above by 2^{2n-2}\left(\frac{1-\kappa}{n-1}\right)^{-2\alpha_{n}}\times[z^{Y}(s)]^{2}. Thus, the variance of Z_{NR4}^{Y}(s) is bounded by

𝕍ar(ZNR4Y(s))\displaystyle\mathbb{V}ar(Z_{NR4}^{Y}(s)) \displaystyle\leq 2i=1n[(F¯i(κs)F¯i(s))]2𝕍ar(𝕀{SnYκi>s,Yiκi=MnYκi})+22n1(1κn1)2αn[zY(s)]2\displaystyle 2\sum_{i=1}^{n}[\big{(}\bar{F}_{i}(\kappa s)-\bar{F}_{i}(s)\big{)}]^{2}\mathbb{V}ar\left(\mathbb{I}_{\{S_{n}^{Y\kappa i}>s,Y_{i}^{\kappa i}=M_{n}^{Y\kappa i}\}}\right)+2^{2n-1}\left(\frac{1-\kappa}{n-1}\right)^{-2\alpha_{n}}[z^{Y}(s)]^{2}
\displaystyle\leq 2i=1n(F¯i(κs))2+22n1(1κn1)2αn(zY(s))2\displaystyle 2\sum_{i=1}^{n}\left(\overline{F}_{i}(\kappa s)\right)^{2}+2^{2n-1}\left(\frac{1-\kappa}{n-1}\right)^{-2\alpha_{n}}\left(z^{Y}(s)\right)^{2}
\leq \left(2n\kappa^{-2\alpha_{n}}+2^{2n-1}\left(\frac{1-\kappa}{n-1}\right)^{-2\alpha_{n}}\right)\left(z^{Y}(s)\right)^{2}.

References

  • Arbenz, P., Embrechts, P., and Puccetti, G. (2011). The AEP algorithm for the fast computation of the distribution of the sum of dependent random variables. Bernoulli, 17:562–591.
  • Asmussen, S. and Binswanger, K. (1997). Simulation of ruin probabilities for subexponential claims. ASTIN Bulletin, 27(2):297–318.
  • Asmussen, S., Blanchet, J., Juneja, S., and Rojas-Nandayapa, L. (2011). Efficient simulation of tail probabilities of sums of correlated lognormals. Annals of Operations Research, 189:5–23.
  • Asmussen, S. and Glynn, P. W. (2007). Stochastic simulation: algorithms and analysis, volume 57. Springer Science & Business Media.
  • Asmussen, S. and Kroese, D. P. (2006). Improved algorithms for rare event simulation with heavy tails. Advances in Applied Probability, 38:545–558.
  • Barbe, P., Genest, C., Ghoudi, K., and Rémillard, B. (1996). On Kendall’s process. Journal of Multivariate Analysis, 58(2):197–229.
  • Blanchet, J., Juneja, S., and Rojas-Nandayapa, L. (2008). Efficient tail estimation for sums of correlated lognormals. In Simulation Conference, 2008. WSC 2008. Winter, pages 607–614. IEEE.
  • Blanchet, J. and Rojas-Nandayapa, L. (2011). Efficient simulation of tail probability of sums of dependent random variables. Journal of Applied Probability, 48A:147–164.
  • Boots, N. K. and Shahabuddin, P. (2001). Simulating ruin probabilities in insurance risk processes with subexponential claims. In Mason, S. J., Hill, R. R., Mönch, L., Rose, O., Jefferson, T., and Fowler, J. W., editors, Proceedings of the Winter Simulation Conference, pages 468–476.
  • Brechmann, E. C. (2014). Hierarchical kendall copulas: Properties and inference. The Canadian Journal of Statistics, 42(1):78–108.
  • Brechmann, E. C., Hendrich, K., and Czado, C. (2013). Conditional copula simulation for systemic risk stress testing. Insurance: Mathematics and Economics, 53:722–732.
  • Chan, J. C. C. and Kroese, D. P. (2010). Efficient estimation of large portfolio loss probabilities in t-copula models. European Journal of Operational Research, 205(2):361–367.
  • Chan, J. C. C. and Kroese, D. P. (2011). Rare-event probability estimation with conditional Monte Carlo. Annals of Operations Research, 189(1):43–61.
  • Charpentier, A. and Segers, J. (2009). Tails of multivariate Archimedean copulas. Journal of Multivariate Analysis, 100:1521–1537.
  • Cossette, H., Côté, M.-P., Mailhot, M., and Marceau, E. (2014). A note on the computation of sharp numerical bounds for the distribution of the sum, product or ratio of dependent risks. Journal of Multivariate Analysis, 130:1–20.
  • Fang, K. T. and Fang, B. Q. (1988). Some families of multivariate symmetric distributions related to exponential distribution. Journal of Multivariate Analysis, 24:109–122.
  • Hofert, M. (2008). Sampling Archimedean copulas. Computational Statistics and Data Analysis, 52:5163–5174.
  • Jessen, A. H. and Mikosch, T. (2006). Regularly varying functions. Publications de l’Institut Mathématique.
  • Juneja, S. and Shahabuddin, P. (2002). Simulating heavy tailed processes using delayed hazard rate twisting. ACM Transactions on Modeling and Computer Simulation, 12(2):94–118.
  • Kimberling, C. H. (1974). A probabilistic interpretation of complete monotonicity. Aequationes Mathematicae, 10:152–164.
  • Kortschak, D. and Hashorva, E. (2013). Efficient simulation of tail probabilities for sums of log-elliptical risks. Journal of Computational and Applied Mathematics, 247:53–67.
  • Ling, C. H. (1965). Representation of associative functions. Publ. Math. Debrecen, 12:189–212.
  • Marshall, A. W. and Olkin, I. (1988). Families of multivariate distributions. Journal of the American Statistical Association, 83(403):834–841.
  • McNeil, A. J. and Nešlehová, J. (2009). Multivariate Archimedean copulas, d-monotone functions and ℓ1-norm symmetric distributions. The Annals of Statistics, 37(5B):3059–3097.
  • McNeil, A. J. (2008). Sampling nested Archimedean copulas. Journal of Statistical Computation and Simulation, 78:567–581.
  • Sun, Y. and Li, H. (2010). Tail approximation of value-at-risk under multivariate regular variation. Unpublished manuscript.
  • Wüthrich, M. V. (2003). Asymptotic value-at-risk estimates for sums of dependent random variables. ASTIN Bulletin: The Journal of the IAA, 33(1):75–92.
  • Yuen, K. C. and Yin, C. (2012). Asymptotic results for tail probabilities of sums of dependent and heavy-tailed random variables. Chinese Annals of Mathematics, Series B, 33(4):557–568.