
Regional Pooling in Extreme Event Attribution Studies: an Approach Based on Multiple Statistical Testing

Leandra Zanger (Heinrich-Heine-Universität Düsseldorf, Mathematisches Institut, Düsseldorf, Germany)
Axel Bücher (corresponding author: axel.buecher@hhu.de; Heinrich-Heine-Universität Düsseldorf, Mathematisches Institut, Düsseldorf, Germany)
Frank Kreienkamp (Deutscher Wetterdienst, Regionales Klimabüro Potsdam, Potsdam, Germany)
Philip Lorenz (Deutscher Wetterdienst, Regionales Klimabüro Potsdam, Potsdam, Germany)
Jordis Tradowsky (Deutscher Wetterdienst, Regionales Klimabüro Potsdam, Potsdam, Germany)
Abstract

Statistical methods are proposed to select homogeneous locations when analyzing spatial block maxima data, such as in extreme event attribution studies. The methods are based on classical hypothesis testing using Wald-type test statistics, with critical values obtained from suitable parametric bootstrap procedures and corrected for multiplicity. A large-scale Monte Carlo simulation study finds that the methods are able to accurately identify homogeneous locations, and that pooling the selected locations improves the accuracy of subsequent statistical analyses. The approach is illustrated with a case study on precipitation extremes in Western Europe. The methods are implemented in an R package that allows easy application in future extreme event attribution studies.

Key words: Extreme event attribution; Extreme Value Statistics; Homogeneity Tests; Multiple Comparison Problem; Parametric Bootstrap; Max-Stable Processes.

1 Introduction

Extreme event attribution studies on precipitation extremes are typically motivated by the occurrence of an extreme event that causes major impacts such as damage to infrastructure and agriculture, or even fatalities; see, for instance, Van16; Van17; Ott18; Kre21. A key task in attributing the event to anthropogenic climate change is a statistical analysis of available observational data products at the location or region of interest (Phi20). Typically, the observed time period is short, often less than 100 years, which ultimately leads to large statistical uncertainties. One possibility to reduce those uncertainties is to incorporate observations from nearby locations/regions, provided that their meteorological characteristics are sufficiently similar to, and governed by the same underlying processes as, those of the region affected by the extreme event. The selection of surrounding areas for which these criteria are met can be based on expert knowledge of the meteorological characteristics and dynamics, for instance provided by experts from the national meteorological and hydrological service of the affected country, such as the Deutscher Wetterdienst in Germany. The expert-based suggestion may then be assessed statistically, which, to the best of our knowledge, has been done with ad hoc methods in the past. In this paper, we propose rigorous statistical methods that can complement the expert's knowledge and that are based on statistically evaluating observational data from the past. Once regions with sufficiently similar characteristics of the analysed variable, e.g., the yearly maximum of daily rainfall, have been identified, the time series of all identified regions can be combined, thereby extending the available time series for the benefit of a more efficient statistical analysis.

The building blocks for the new approach are classical Wald-type test statistics (Leh21) for testing the null hypothesis that the temporal dynamics at multiple locations of interest are the same. Unlike in the classical text-book case, and motivated by the fact that standard likelihood-based inference for extreme value distributions requires unreasonably large sample sizes for sufficient finite-sample accuracy, we employ a parametric bootstrap device to approximate the distribution of the test statistics under the null hypothesis. This approach is motivated by results in Lil22 for the corresponding stationary extreme value models. Based on suitable decompositions of a global null hypothesis, we then propose to test for carefully selected sub-hypotheses, possibly after correcting the individual tests' level for multiple comparisons.

The new methods are illustrated by a large-scale Monte Carlo simulation study and by an application to the severe flooding event in Western Europe during July 2021 for which spatial pooling was applied in an attribution study following the event (Kre21; Tra22). For the benefit of researchers who would like to use this spatial pooling approach, an implementation of the method in the statistical programming environment R (R) is publicly available as an R package called findpoolreg on GitHub (findpoolreg).

Attribution analysis of precipitation extremes is especially challenging due to short observational time series as well as their often limited spatial extent, which further complicates the detection of a trend and the estimation of return periods based on the limited time series (see Tra22 for a discussion). We will therefore present the suggested approach for a heavy rainfall event in the following; the method could, however, equally be applied to other variables.

The remaining parts of this paper are organized as follows. Section 2 explains the mathematical concept of the proposed methods, starting with a detailed description of the underlying model assumptions and a strategy for the detection of a possible pooling region in Section 2.1; this part is recommended to all readers. In Sections 2.2 and 2.3, mathematical details on the applied estimators and test statistics are given; these may be skipped by non-expert readers. Next, Section 2.4 explains the ideas of the bootstrap procedures that allow one to draw samples from the distribution under the null hypothesis; again, this may be skipped by non-statisticians. Section 2.5 goes into detail about the detection strategy for possible pooling regions and the treatment of the related multiple testing problem. This section is of interest to all readers who want to apply the methods, since it provides details on how to process a set of p-values obtained from testing multiple hypotheses. Next, Section 3 presents the results of the simulation study that was performed in order to evaluate the performance of the proposed methods. These results are of interest to all readers, and they serve as a basis for the case study conducted in Section 4. Section 5 then discusses several extensions of the proposed methods. In Section 5.1, we provide a method for estimating region-wise return periods once a pooling region has been chosen; here, the region-wise return period of a given event is defined as the number of years that one has to wait on average until an event of the same or even greater magnitude is observed anywhere in the pooling region. Extensions to different model assumptions that suit other variables, e.g., temperature, are discussed in Section 5.2. Finally, we conclude in Section 6. Some mathematical details and further illustrations of the simulation study and the case study are deferred to a supplement.

2 Assessing spatial homogeneities for precipitation extremes

2.1 A homogeneous model for precipitation extremes

The observational data of interest consists of annual or seasonal maximal precipitation amounts (over some fixed time duration, e.g., a day) collected over various years and at various locations (in practice, each location may correspond to a spatial region; we separate these two terms from the outset to avoid misunderstandings: subsequently, a region shall be a set of locations). More precisely, we denote by $m^{(t)}_{d}$ the observed maximal precipitation amount in season $t$ and at location $d$, with $t=1,\dots,n$ and $d=1,\dots,D$. The location of primary interest shall be the one with index $d=1$. Note that the choice of $d=1$ is made for illustrative purposes only and can be replaced by any index $d\in\{1,\ldots,D\}$.

In view of the stochastic nature, we assume that $m^{(t)}_{d}$ is an observed value of some random variable $M_{d}^{(t)}$. Since $M^{(t)}_{d}$ is generated by a maxima operation, standard extreme value theory (Col01) suggests assuming that $M_{d}^{(t)}$ follows the generalized extreme value (GEV) distribution, i.e.,

\[ M_{d}^{(t)}\sim\mathrm{GEV}(\mu_{d}(t),\sigma_{d}(t),\gamma_{d}(t)) \]

for some $\mu_{d}(t),\sigma_{d}(t)>0$, $\gamma_{d}(t)\in\mathbb{R}$, where the $\mathrm{GEV}(\mu,\sigma,\gamma)$ distribution with location parameter $\mu>0$, scale parameter $\sigma>0$ and shape parameter $\gamma\in\mathbb{R}$ is defined by its cumulative distribution function

\[ G_{(\mu,\sigma,\gamma)}(x)=\exp\Big\{-\Big(1+\gamma\frac{x-\mu}{\sigma}\Big)^{-1/\gamma}\Big\} \tag{1} \]

for $x$ such that $1+\gamma\frac{x-\mu}{\sigma}>0$. Due to climate change, the temporal dynamics at location $d$, which are primarily driven by the function $t\mapsto(\mu_{d}(t),\sigma_{d}(t),\gamma_{d}(t))$, are typically non-constant. Any proxy for climate change qualifies as a suitable temporal covariate, and a standard assumption in extreme event attribution studies, motivated by the Clausius–Clapeyron relation, postulates that

\[ \mu_{d}(t)=\mu_{d}\exp\left(\frac{\alpha_{d}\,\mathrm{GMST^{\prime}}(t)}{\mu_{d}}\right),\quad \sigma_{d}(t)=\sigma_{d}\exp\left(\frac{\alpha_{d}\,\mathrm{GMST^{\prime}}(t)}{\mu_{d}}\right),\quad \gamma_{d}(t)=\gamma_{d} \tag{2} \]

for certain parameters $\alpha_{d},\gamma_{d}\in\mathbb{R}$ and $\mu_{d},\sigma_{d}>0$. Here, $\mathrm{GMST^{\prime}}(t)$ denotes the smoothed global mean surface temperature anomaly, see Phi20. Note that (2) implies

\[ \mathrm{GEV}(\mu_{d}(t),\sigma_{d}(t),\gamma_{d}(t))=\exp\left(\frac{\alpha_{d}\,\mathrm{GMST^{\prime}}(t)}{\mu_{d}}\right)\mathrm{GEV}(\mu_{d},\sigma_{d},\gamma_{d}), \]

hence the model may be identified as a temporal scaling model. It is further assumed that any temporal dependence at location $d$ is completely due to $\mathrm{GMST^{\prime}}(t)$, which we treat as deterministic and which implies that $M_{d}^{(1)},\dots,M_{d}^{(n)}$ are stochastically independent, for each $d=1,\dots,D$. For the moment, the spatial dependence will be left unspecified.
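To make the model concrete, the following R sketch evaluates the negative log-likelihood implied by Equations (1) and (2); the function nll_scale_gev() and the restriction to a non-zero shape parameter are our own illustrative choices and not part of the findpoolreg package.

```r
# Negative log-likelihood of the scale-GEV model (2) at a single location:
# a minimal sketch, assuming gamma != 0. theta = (mu, sigma, gamma, alpha),
# x = vector of block maxima, cc = smoothed GMST anomaly per year.
nll_scale_gev <- function(theta, x, cc) {
  mu <- theta[1]; sigma <- theta[2]; gamma <- theta[3]; alpha <- theta[4]
  if (mu <= 0 || sigma <= 0) return(Inf)   # stay inside the parameter space
  s <- exp(alpha * cc / mu)                # temporal scaling factor
  z <- 1 + gamma * (x - mu * s) / (sigma * s)
  if (any(z <= 0)) return(Inf)             # outside the GEV support
  sum(log(sigma * s) + (1 / gamma + 1) * log(z) + z^(-1 / gamma))
}
```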

Recall that the location of interest is the one with $d=1$, which is characterised by the four parameters $\mu_{1},\sigma_{1},\gamma_{1},\alpha_{1}$. As described before, estimating those parameters based on the observations from location $d=1$ only may be unpleasantly inaccurate, which is why one commonly assumes that the $D$ locations have been carefully selected by experts to meet the following space-time homogeneity assumption:

\[ H_{0}^{\mathrm{ED}}:\quad \exists\,\bm{\vartheta}\in\Theta\ \forall\,d\in\{1,\dots,D\}:\quad \bm{\vartheta}_{d}=\bm{\vartheta}, \tag{3} \]

where $\Theta:=(0,\infty)^{2}\times\mathbb{R}^{2}$ and $\bm{\vartheta}=(\mu,\sigma,\gamma,\alpha)^{\top}$, $\bm{\vartheta}_{d}=(\mu_{d},\sigma_{d},\gamma_{d},\alpha_{d})^{\top}$, and where the upper index $\mathrm{ED}$ stands for 'equal distribution', since, in short, Equation (3) states that the location-wise GEV parameters coincide for the $D$ locations.

In the subsequent sections, we aim at testing the validity of the expert's hypothesis $H_{0}^{\mathrm{ED}}$. Here, it is not only of interest to test the hypothesis for the whole set $\{1,\ldots,D\}$, but also to find a (maximal) subset $A\subset\{1,\ldots,D\}$ with $1\in A$ and $|A|=k\geq 2$ on which the space-time homogeneity assumption holds. Here, for an arbitrary index set $A$, the latter assumption may be expressed through

\[ H_{0}^{\mathrm{ED}}(A):\quad \exists\,\bm{\vartheta}_{A}\in\Theta\ \forall\,d\in A:\quad \bm{\vartheta}_{d}=\bm{\vartheta}_{A}, \tag{4} \]

with $\Theta$ as in Equation (3) and $\bm{\vartheta}_{A}=(\mu_{A},\sigma_{A},\gamma_{A},\alpha_{A})^{\top}$, meaning that the location-wise GEV parameters coincide for all locations with index in the set $A$, making the respective locations a possible pooling region.

Now, a maximal subset $A$ for which Equation (4) holds may be determined with the following strategy: since we are interested in finding all locations that 'match' the location of primary interest with index $d=1$, we test for each pair $A_{d}=\{1,d\}$, $d=2,\ldots,D$, whether the null hypothesis $H_{0}^{\mathrm{ED}}(A_{d})$ holds. This provides a set of p-values based on which we can decide which hypotheses to reject and which not to reject. Locations whose hypotheses are not rejected can then be assumed to be sufficiently homogeneous and are thus included in the suggested pooling region of maximal extent. For further details on this strategy and the impact of the induced multiple testing problem, see Section 2.5.
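The following sketch illustrates this screening loop; pairwise_p_value() is a hypothetical stand-in for either of the bootstrap tests developed in Section 2.4, and data, D and alpha are assumed to be given.

```r
# Pairwise screening against the location of primary interest (index 1):
# a sketch with the hypothetical helper pairwise_p_value().
p_raw <- vapply(2:D, function(d) pairwise_p_value(data, pair = c(1, d)),
                numeric(1))
# Locations whose hypothesis is not rejected form the pooling suggestion
# (here without any multiplicity adjustment; see Section 2.5).
pool <- c(1, (2:D)[p_raw > alpha])
```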

2.2 Coordinate-wise maximum likelihood estimation

The starting point for the subsequent test statistics are the coordinate-wise maximum likelihood estimators for the model specified in (2). Writing $c^{(t)}=\mathrm{GMST^{\prime}}(t)$ for brevity, the log-likelihood contribution of observation $(M_{d}^{(t)},c^{(t)})$ is given by $\ell_{\bm{\vartheta}_{d}}(M_{d}^{(t)},c^{(t)})$, where

\[ \ell_{\bm{\vartheta}_{d}}(x,c)=\log g_{(\mu_{d}\exp(\alpha_{d}c/\mu_{d}),\,\sigma_{d}\exp(\alpha_{d}c/\mu_{d}),\,\gamma_{d})}(x) \tag{5} \]

with $g_{(\mu,\sigma,\gamma)}(x)=\frac{\partial}{\partial x}G_{(\mu,\sigma,\gamma)}(x)$ the probability density function of the $\mathrm{GEV}(\mu,\sigma,\gamma)$ distribution. The maximum likelihood estimator for $\bm{\vartheta}_{d}$ at location $d$ is then defined as

\[ \hat{\bm{\vartheta}}_{d}\in\operatorname*{argmax}_{\bm{\vartheta}_{d}\in\Theta}\sum_{t=1}^{n}\ell_{\bm{\vartheta}_{d}}(M_{d}^{(t)},c^{(t)}). \tag{6} \]

The arg-maximum cannot be calculated explicitly, but may be found by suitable numerical optimization routines. We denote the gradient and the Hessian matrix of $\bm{\vartheta}\mapsto\ell_{\bm{\vartheta}}(x,c)$ by $\dot{\ell}_{\bm{\vartheta}}(x,c)\in\mathbb{R}^{4}$ and $\ddot{\ell}_{\bm{\vartheta}}(x,c)\in\mathbb{R}^{4\times 4}$, respectively. Under appropriate regularity assumptions, standard asymptotic expansions (Van98, see also BucSeg17 for the stationary GEV family) imply that $\hat{\bm{\theta}}=(\hat{\bm{\vartheta}}_{1}^{\top},\dots,\hat{\bm{\vartheta}}_{D}^{\top})^{\top}\in\Theta^{D}$ is approximately Gaussian with mean $\bm{\theta}=(\bm{\vartheta}_{1}^{\top},\dots,\bm{\vartheta}_{D}^{\top})^{\top}$ and covariance $n^{-1}\bm{\Sigma}_{n}$, where $\bm{\Sigma}_{n}=(\bm{\Sigma}_{n;j,k})_{j,k=1}^{D}\in\mathbb{R}^{4D\times 4D}$ is defined as

\[ \bm{\Sigma}_{n;j,k}=J_{n,j,\bm{\vartheta}_{j}}^{-1}\Big(\frac{1}{n}\sum_{t=1}^{n}\operatorname{Cov}\big[\dot{\ell}_{\bm{\vartheta}_{j}}(M_{j}^{(t)},c^{(t)}),\dot{\ell}_{\bm{\vartheta}_{k}}(M_{k}^{(t)},c^{(t)})\big]\Big)J_{n,k,\bm{\vartheta}_{k}}^{-1}\in\mathbb{R}^{4\times 4} \tag{7} \]

with $J_{n,j,\bm{\vartheta}}=\frac{1}{n}\sum_{t=1}^{n}\mathbb{E}[\ddot{\ell}_{\bm{\vartheta}}(M_{j}^{(t)},c^{(t)})]\in\mathbb{R}^{4\times 4}$. See Appendix A.1 for details and Appendix A.2 for a suitable estimator $\hat{\bm{\Sigma}}_{n}$ of $\bm{\Sigma}_{n}$.
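In practice, the maximization in (6) can be carried out with a general-purpose optimizer. The following sketch uses stats::optim() together with nll_scale_gev() from the sketch above; the moment-based starting values are our own heuristic and not necessarily those used in findpoolreg.

```r
# Coordinate-wise ML fit (6) at one location: a minimal sketch.
# x = block maxima at that location, cc = smoothed GMST anomaly per year.
fit_location <- function(x, cc) {
  start <- c(mu = mean(x), sigma = sd(x), gamma = 0.1, alpha = 0)
  optim(start, nll_scale_gev, x = x, cc = cc, hessian = TRUE)
}
# est <- fit_location(x, cc)$par   # (mu.hat, sigma.hat, gamma.hat, alpha.hat)
```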

2.3 Wald-type test statistics

We define test statistics which allow testing the sub-hypotheses $H_{0}^{\mathrm{ED}}(A)$ of $H_{0}^{\mathrm{ED}}$ from Equation (4), where $A\subset\{1,\ldots,D\}$. For that purpose, we propose to use classical Wald-type test statistics; see Section 14.4.2 in Leh21 for a general discussion and Lil22 for a similar approach in temporally stationary GEV models, i.e., with $\alpha_{d}$ fixed to $\alpha_{d}=0$.

Write $A=\{d_{1},\dots,d_{k}\}$ with $1\leq d_{1}<\dots<d_{k}\leq D$ and let $h_{A}:\mathbb{R}^{4D}\to\mathbb{R}^{4(k-1)}$ be defined by

\begin{align*}
h_{A}(\bm{\theta})=h_{A}(\bm{\vartheta}_{1},\ldots,\bm{\vartheta}_{D})
&=(\bm{\vartheta}_{d_{1}}^{\top}-\bm{\vartheta}_{d_{2}}^{\top},\ \bm{\vartheta}_{d_{2}}^{\top}-\bm{\vartheta}_{d_{3}}^{\top},\ \ldots,\ \bm{\vartheta}_{d_{k-1}}^{\top}-\bm{\vartheta}_{d_{k}}^{\top})^{\top}\\
&=(\mu_{d_{1}}-\mu_{d_{2}},\ \sigma_{d_{1}}-\sigma_{d_{2}},\ \gamma_{d_{1}}-\gamma_{d_{2}},\ \alpha_{d_{1}}-\alpha_{d_{2}},\ \ldots,\ \gamma_{d_{k-1}}-\gamma_{d_{k}},\ \alpha_{d_{k-1}}-\alpha_{d_{k}})^{\top}.
\end{align*}

We may then write $H_{0}^{\mathrm{ED}}(A)$ equivalently as

\[ H_{0}^{\mathrm{ED}}(A):\ h_{A}(\bm{\theta})=0. \]

Hence, significant deviations of $h_{A}(\hat{\bm{\theta}})$ from $0$, with $\hat{\bm{\theta}}$ from Section 2.2, provide evidence against $H_{0}^{\mathrm{ED}}(A)$. Such deviations may be measured by the Wald-type test statistic

\[ T_{n}(A)=n\,(h_{A}(\hat{\bm{\theta}}))^{\top}\left(\bm{H}_{A}\,\hat{\bm{\Sigma}}_{n}\,\bm{H}_{A}^{\top}\right)^{-1}h_{A}(\hat{\bm{\theta}}), \tag{8} \]

where $\bm{H}_{A}=\dot{h}_{A}(\bm{\theta})\in\mathbb{R}^{4(k-1)\times 4D}$ denotes the Jacobian matrix of $\bm{\theta}\mapsto h_{A}(\bm{\theta})$, which is a matrix with entries in $\{-1,0,1\}$ that does not depend on $\bm{\theta}$. In view of the asymptotic normality of $\hat{\bm{\theta}}$, see Section 2.2, the asymptotic distribution of $T_{n}(A)$ under the null hypothesis $H_{0}^{\mathrm{ED}}(A)$ is the chi-square distribution $\chi_{4(k-1)}^{2}$ with $4(k-1)$ degrees of freedom; see also Section 4 in Lil22. Hence, rejecting $H_{0}^{\mathrm{ED}}(A)$ if $T_{n}(A)$ exceeds the $(1-\alpha)$-quantile of the $\chi_{4(k-1)}^{2}$ distribution provides a statistical test of asymptotic level $\alpha\in(0,1)$. The finite-sample performance of the related test in the stationary setting was found to be quite inaccurate (see Lil22). To overcome this issue, we propose a suitable bootstrap scheme in the next section.
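For a pair $A=\{j,k\}$, the statistic (8) is a quadratic form in the difference of the two location-wise estimate vectors. The following sketch computes it from the stacked estimates and an estimate of $\bm{\Sigma}_{n}$; the function name wald_pair() is ours.

```r
# Wald-type statistic (8) for a pair A = {j, k}: a minimal sketch.
# theta_hat: stacked 4D-dimensional vector of location-wise estimates,
# Sigma_hat: 4D x 4D covariance estimate, n: number of years.
wald_pair <- function(theta_hat, Sigma_hat, n, j, k) {
  H <- matrix(0, nrow = 4, ncol = length(theta_hat))
  H[, (4 * (j - 1) + 1):(4 * j)] <-  diag(4)  # h_A(theta) = theta_j - theta_k
  H[, (4 * (k - 1) + 1):(4 * k)] <- -diag(4)
  h <- drop(H %*% theta_hat)
  n * drop(t(h) %*% solve(H %*% Sigma_hat %*% t(H)) %*% h)
}
```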

2.4 Parametric bootstrap devices for deriving p-values

Throughout this section, we propose two bootstrap devices that allow simulating approximate samples from the $H_{0}^{\mathrm{ED}}(A)$-distribution of the test statistic $T_{n}(A)$ from Equation (8). Based on a suitably large set of such samples, one can compute a reliable p-value for testing $H_{0}^{\mathrm{ED}}(A)$, even for small sample sizes.

The first method is based on a global fit of a max-stable process model to the entire region under consideration, while the second one is based on fitting multiple pairwise models. The main difference between the two approaches is that the first one can test the hypothesis $H_{0}^{\mathrm{ED}}(A)$ for arbitrary subsets $A\subset\{1,\ldots,D\}$, while the second approach is restricted to testing the null hypothesis on subsets of cardinality two, i.e., it can only test whether a pair of locations is homogeneous. Depending on the question that is asked, one or the other method may be advantageous.

2.4.1 Global bootstrap based on max-stable process models

The subsequent bootstrap device is a modification of the parametric bootstrap procedure described in Section 5.3 of Lil22. Fix some large number $B$, say $B=200$, noting that larger numbers are typically better, but going beyond $B=1000$ is usually not worth the extra computational effort.

The basic idea is as follows: for each $b=1,\dots,B$, simulate artificial bootstrap samples

\[ \mathcal{D}^{*}_{b}=\big\{M_{d,b}^{(t),*}:t\in\{1,\dots,n\},\ d\in\{1,\dots,D\}\big\} \]

that have a spatial dependence structure sufficiently similar to that of the observed data $\mathcal{D}=\{M_{d}^{(t)}:t\in\{1,\dots,n\},\ d\in\{1,\dots,D\}\}$ and that satisfy the null hypothesis $H_{0}^{\mathrm{ED}}$. For each fixed $A\subset\{1,\dots,D\}$ with $k=|A|\geq 2$, the test statistics computed on all bootstrap samples, say $(T_{n,b}^{*}(A))_{b=1,\dots,B}$, are then compared to the observed test statistic $T_{n}(A)$. Since the bootstrap samples do satisfy $H_{0}^{\mathrm{ED}}(A)$, the observed test statistic $T_{n}(A)$ should differ significantly from the bootstrapped test statistics in case $H_{0}^{\mathrm{ED}}(A)$ is not satisfied for the observed data.

Here, for simulating the bootstrap samples, we assume that the spatial dependence structure of the observed data can be sufficiently captured by a max-stable process model. Max-stable processes provide a natural choice here, since they are the only processes that can arise, after proper affine transformation, as the limit of maxima of independent and identically distributed random fields $\{Y_{i}(x):x\in\mathbb{R}^{p}\}$ (Col01, Section 9.3). Parametric models for max-stable processes are usually stated for unit Fréchet (i.e., $\mathrm{GEV}(1,1,1)$) margins. Therefore, the first steps in our algorithm below aim at transforming the margins of our observed data to be approximately unit Fréchet.

More precisely, the parametric bootstrap algorithm is defined as follows:

Algorithm 1 (Bootstrap based on max-stable processes).
(1) For each $d\in\{1,\ldots,D\}$, calculate $\hat{\bm{\vartheta}}_{d}$ from Section 2.2.

(2) For each $d\in\{1,\ldots,D\}$, transform the observations to approximately i.i.d. unit Fréchet-distributed data, by letting
\[ Y_{d}^{(t)}=\left\{1+\hat{\gamma}_{d}\frac{M_{d}^{(t)}-\hat{\mu}_{d}\exp\left(\frac{\hat{\alpha}_{d}\mathrm{GMST^{\prime}}(t)}{\hat{\mu}_{d}}\right)}{\hat{\sigma}_{d}\exp\left(\frac{\hat{\alpha}_{d}\mathrm{GMST^{\prime}}(t)}{\hat{\mu}_{d}}\right)}\right\}_{+}^{1/\hat{\gamma}_{d}}\quad(t\in\{1,\dots,n\}). \tag{9} \]

(3) Fit a set of candidate max-stable process models with standard Fréchet margins to the observations $(Y_{1}^{(t)},\ldots,Y_{D}^{(t)})_{t=1,\dots,n}$ and choose the best fit according to the composite likelihood information criterion (CLIC), a model selection criterion that is commonly applied when fitting max-stable process models. Throughout, we chose the following three models:
(a) Smith's model (3 parameters);
(b) Schlather's model with a powered exponential correlation function (3 parameters);
(c) the Brown-Resnick process (2 parameters).
For further details on max-stable processes, the mentioned models and the CLIC, see davisonModelSpatExt and Dav12. Respective functions are implemented in the R package SpatialExtremes (SpatialExtremes).

(4) For $b\in\{1,\ldots,B\}$ and $t\in\{1,\dots,n\}$, simulate spatial data with unit Fréchet margins from the chosen max-stable process model, denoted by
\[ (Y_{1,b}^{(t),\ast},Y_{2,b}^{(t),\ast},\ldots,Y_{D,b}^{(t),\ast}). \]

Note that until now we have not used the particular hypothesis $H_{0}^{\mathrm{ED}}(A)$. Subsequently, fix $A=\{d_{1},\dots,d_{k}\}$ with $1\leq d_{1}<\dots<d_{k}\leq D$.

(5) Assume that $H_{0}^{\mathrm{ED}}(A)$ from Equation (4) is true, and estimate the four-dimensional model parameter $\bm{\vartheta}_{A}=(\mu_{A},\sigma_{A},\gamma_{A},\alpha_{A})^{\top}\in\Theta$ by (pseudo) maximum likelihood based on the pooled sample
\[ (M_{d_{1}}^{(1)},c^{(1)}),\dots,(M_{d_{1}}^{(n)},c^{(n)}),\ (M_{d_{2}}^{(1)},c^{(1)}),\dots,(M_{d_{2}}^{(n)},c^{(n)}),\ \dots,\ (M_{d_{k}}^{(1)},c^{(1)}),\dots,(M_{d_{k}}^{(n)},c^{(n)}). \]
Denote the resulting parameter vector as $\hat{\bm{\vartheta}}_{A}=(\hat{\mu}_{A},\hat{\sigma}_{A},\hat{\gamma}_{A},\hat{\alpha}_{A})^{\top}$, and note that $\hat{\bm{\vartheta}}_{A}$ should be close to $\hat{\bm{\vartheta}}_{d}$ for each $d\in A$ if $H_{0}^{\mathrm{ED}}(A)$ is met.

(6) Transform the margins of the bootstrap samples to those of a GEV model satisfying $H_{0}^{\mathrm{ED}}(A)$, by letting
\[ M_{d,b}^{(t),\ast}=\hat{\mu}_{A}\exp\left(\frac{\hat{\alpha}_{A}\mathrm{GMST^{\prime}}(t)}{\hat{\mu}_{A}}\right)+\hat{\sigma}_{A}\exp\left(\frac{\hat{\alpha}_{A}\mathrm{GMST^{\prime}}(t)}{\hat{\mu}_{A}}\right)\frac{(Y_{d,b}^{(t),\ast})^{\hat{\gamma}_{A}}-1}{\hat{\gamma}_{A}} \tag{10} \]
for $t\in\{1,\ldots,n\}$, $d\in A$ and $b\in\{1,\ldots,B\}$. For each resulting bootstrap sample $\mathcal{D}_{b}^{*}(A)=\{M_{d,b}^{(t),\ast}:t\in\{1,\dots,n\},\ d\in A\}$, compute the value $t_{n,b}^{\ast}(A)$ of the test statistic $T_{n}(A)$ from Equation (8). Note that $T_{n}(A)$ only depends on the coordinates with $d\in A$.

(7) Compute the value $t_{n}(A)$ of the test statistic $T_{n}(A)$ from Equation (8) on the observed sample.

(8) Compute the bootstrapped p-value as
\[ p(A)=\frac{1}{B+1}\sum_{b=1}^{B}\mathbf{1}\big(t_{n}(A)\leq t_{n,b}^{\ast}(A)\big). \]

In a classical test situation, one may now reject $H_{0}^{\mathrm{ED}}(A)$ for a fixed set $A$ at significance level $\alpha\in(0,1)$ if $p(A)\leq\alpha$. In the current pooling situation, we would need to apply the test to multiple pooling regions $A$, which hence constitutes a multiple testing problem where standard approaches yield inflated levels. We discuss possible remedies in Section 2.5.
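Steps (6)-(8) condense into a few lines of R. The sketch below assumes that step (4) produced a list Y_boot of $B$ simulated $n\times|A|$ matrices with unit Fréchet margins, that theta_A holds the pooled fit from step (5), and that wald_stat() is a hypothetical stand-in for evaluating (8) on a bootstrap sample.

```r
# Steps (6)-(8) of Algorithm 1: a condensed sketch (gamma_A assumed non-zero).
boot_p_value <- function(Y_boot, theta_A, gmst, tn_obs, wald_stat) {
  mu <- theta_A[1]; sigma <- theta_A[2]
  gamma <- theta_A[3]; alpha <- theta_A[4]
  s <- exp(alpha * gmst / mu)                        # one factor per year t
  tn_star <- vapply(Y_boot, function(Y) {
    M_star <- mu * s + sigma * s * (Y^gamma - 1) / gamma   # Equation (10)
    wald_stat(M_star)                                # statistic on bootstrap sample
  }, numeric(1))
  sum(tn_obs <= tn_star) / (length(tn_star) + 1)     # bootstrapped p-value
}
```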

2.4.2 Pairwise bootstrap based on bivariate extreme value distributions

Recall that the location of primary interest is the one with index $d=1$.

As stated in Section 2.1, it is of interest to test all bivariate hypotheses $H_{0}^{\mathrm{ED}}(\{1,d\})$ with $d=2,\dots,D$. For that purpose, we may apply a modification of the bootstrap procedure from the previous section that makes use of bivariate extreme value models only. By doing so, we decrease the model risk implied by imposing a possibly restrictive global max-stable process model.

The modification only affects steps (3) and (4) of Algorithm 1. More precisely, for testing the hypothesis $H_{0}^{\mathrm{ED}}(A_{d})$ with $A_{d}=\{1,d\}$ for some fixed value $d=2,\dots,D$, we make the following modifications:

Algorithm 2 (Pairwise bootstrap based on bivariate extreme value distributions).

Perform steps (1) and (2) from Algorithm 1 with the set $\{1,\ldots,D\}$ replaced by $A_{d}$.

(3a) Fit a set of bivariate extreme value distributions to the bivariate sample $(Y_{1}^{(t)},Y_{d}^{(t)})_{t=1,\dots,n}$, assuming the marginal distributions to be unit Fréchet. Choose the best fit according to the Akaike information criterion (AIC), a model selection criterion that rewards a good fit of a model and penalises the model's complexity at the same time (Aka73). Possible models are:
(a) the Hüsler-Reiss model (1 parameter);
(b) the logistic model (1 parameter);
(c) the asymmetric logistic model (2 parameters).
Note that all models are implemented in the R package evd (evd).

(4a) For $b\in\{1,\ldots,B\}$ and $t\in\{1,\dots,n\}$, simulate bivariate data with unit Fréchet margins from the chosen bivariate extreme value model, denoted by $(Y_{1,b}^{(t),\ast},Y_{d,b}^{(t),\ast})$.

Perform steps (5)-(8) from Algorithm 1 with $A=A_{d}$.

Note that Algorithm 2 is computationally more expensive than Algorithm 1, since model selection, fitting of the dependence models and the subsequent simulation must be performed separately for each hypothesis $H_{0}^{\mathrm{ED}}(A_{d})$ of interest.
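For steps (3a) and (4a), fitting and simulation routines are available in evd. The sketch below illustrates the logistic case only (the Hüsler-Reiss and asymmetric logistic fits are analogous), with the marginal parameters held fixed at unit Fréchet, i.e., GEV(1, 1, 1); this way of fixing the margins is our reading of the algorithm, not necessarily the implementation in findpoolreg.

```r
library(evd)

# Fit the logistic bivariate extreme value model to an n x 2 matrix Y of
# (approximately) unit Frechet data for the pair {1, d}; margins held fixed.
fit <- fbvevd(Y, model = "log",
              loc1 = 1, scale1 = 1, shape1 = 1,
              loc2 = 1, scale2 = 1, shape2 = 1)
aic <- fit$deviance + 2 * length(fit$estimate)  # deviance = -2 * log-likelihood

# One bootstrap replicate of length n with unit Frechet margins, step (4a):
Y_star <- rbvevd(nrow(Y), dep = fit$estimate[["dep"]], model = "log",
                 mar1 = c(1, 1, 1))
```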

2.5 Combining test statistics

As already addressed at the end of Section 2.1, it is not only of interest to test the global hypothesis $H_{0}^{\mathrm{ED}}$, since a possible rejection of $H_{0}^{\mathrm{ED}}$ gives no indication about which locations deviate from the one of primary interest. Instead, one might want to test hypotheses on several subsets and then pool those subsets for which no signal of heterogeneity was found. In this subsection, we provide the mathematical framework for testing sub-hypotheses and discuss how to deal with the induced multiple testing problem.

Mathematically, we propose to regard $H_{0}^{\mathrm{ED}}$ as a global hypothesis that is built up from elementary hypotheses of smaller dimension. A particularly useful decomposition is based on pairwise elementary hypotheses: recalling the notation $H_{0}^{\mathrm{ED}}(A)$ from Equation (4), we clearly have

\[ H_{0}^{\mathrm{ED}}=\bigcap_{d=2}^{D}H_{0}^{\mathrm{ED}}(\{1,d\}), \tag{11} \]

i.e., $H_{0}^{\mathrm{ED}}$ holds globally when it holds locally for all pairs $\{1,d\}$ with $d\in\{2,\ldots,D\}$. We may now apply either Algorithm 1 or Algorithm 2 to obtain a p-value, say $p^{\text{raw}}_{d}=p(\{1,d\})$, for testing $H_{0}^{\mathrm{ED}}(\{1,d\})$, for any $d\in\{2,\dots,D\}$. Each p-value may be interpreted as a signal for heterogeneity between locations $1$ and $d$, with smaller values indicating stronger heterogeneity. The obtained raw list of p-values may hence be regarded as an exploratory tool for identifying possible heterogeneities.

Since we are now dealing with a multiple testing problem, it might be advisable to adjust for multiple comparisons in order to control error rates. This can be done by interpreting the raw list based on classical statistical testing routines, in which p-values are compared with suitable critical values to declare a hypothesis significant. Several methods appear to be meaningful, and we discuss three of them in the following. For this, let $\alpha\in(0,1)$ denote a significance level, e.g., $\alpha=0.1$.

IM (Ignore multiplicity): reject homogeneity for all pairs $\{1,d\}$ for which $p_{d}^{\text{raw}}\leq\alpha$. In doing so, we do not have any control over false rejections. In particular, in case $D$ is large, false rejections of some null hypotheses will be very likely. On the other hand, the procedure will have decent power properties, and will likely detect most alternatives. Hence, in a subsequent analysis based on the pooled sample of homogeneous locations, we can expect estimators to exhibit comparably little bias and large variance.

Holm (Control the family-wise error rate): apply Holm's stepdown procedure (Hol79). For that purpose, sort the p-values $p_{j}=p^{\text{raw}}_{1+j}=p(\{1,1+j\})$ with $j=1,\dots,D-1$; denote the ordered values by $p_{(1)}\leq\dots\leq p_{(D-1)}$. Starting from $j=1$, determine the smallest index $j$ such that

\[ p_{(j)}>\alpha_{j}:=\alpha/(D-j). \]

If $j=1$, then reject no hypotheses. If no such index exists, then reject all hypotheses. Otherwise, if $j\in\{2,\dots,D-1\}$, reject the hypotheses that belong to the p-values $p_{(1)},\dots,p_{(j-1)}$.

The procedure can be equivalently expressed by adjusted p-values. Recursively defining $\tilde{p}_{(1)}=\min\{1,(D-1)p_{(1)}\}$ and

\[ \tilde{p}_{(j)}=\min\left\{1,\max\{\tilde{p}_{(j-1)},(D-j)p_{(j)}\}\right\} \]

for $j=2,\ldots,D-1$, we simply reject those hypotheses that belong to the adjusted p-values with $\tilde{p}_{(j)}\leq\alpha$.

Holm's stepdown procedure is known to asymptotically control the family-wise error rate (FWER) at level $\alpha$, i.e.,

\[ \mathrm{FWER}:=\Pr\big(\text{reject any true null hypothesis }H_{0}^{\mathrm{ED}}(\{1,d\})\big)\leq\alpha, \]

see Theorem 9.1.2 in Leh21.

In general, controlling the family-wise error rate will result in comparably little power, i.e., we might falsely identify some pairs of locations as homogeneous. Hence, in a subsequent analysis based on the pooled sample of homogeneous locations, we can expect estimators to exhibit comparably large bias and little variance.

BH (Control the false discovery rate): apply the Benjamini-Hochberg stepup procedure (Ben95). For that purpose, sort the p-values $p_{j}=p^{\text{raw}}_{1+j}=p(\{1,1+j\})$ with $j=1,\dots,D-1$; denote the ordered values by $p_{(1)}\leq\dots\leq p_{(D-1)}$. Starting from $j=D-1$, determine the largest index $j$ such that

\[ p_{(j)}\leq\alpha_{j}:=\frac{j\alpha}{D-1}. \]

If no such index exists, then reject no hypotheses. Otherwise, if $j\in\{1,\dots,D-1\}$, reject the hypotheses that belong to the p-values $p_{(1)},\dots,p_{(j)}$.

Again, one can compute adjusted p-values $\tilde{p}_{(j)}$ such that the procedure is equivalent to rejecting those hypotheses for which $\tilde{p}_{(j)}\leq\alpha$. For that purpose, let $\tilde{p}_{(D-1)}=\min\{1,(D-1)p_{(D-1)}\}$ and recursively define, for $j=D-2,\ldots,1$,

\[ \tilde{p}_{(j)}=\min\left\{1,\min\left\{(D-1)\frac{p_{(j)}}{j},\,\tilde{p}_{(j+1)}\right\}\right\}. \]

Under an additional assumption on the p-values that belong to the true null hypotheses (they must exhibit some positive dependence), the BH procedure is known to asymptotically control the false discovery rate (FDR) at level $\alpha$, i.e.,

\[ \mathrm{FDR}:=\mathbb{E}\Big[\frac{\text{number of false rejections}}{\text{number of all rejections}}\,\mathbf{1}(\text{at least one rejection})\Big]\leq\alpha, \]

see Theorem 9.3.3 in Leh21. Control of the FDR will be confirmed by the simulation experiments in Section 3.

If one were interested in guaranteed theoretical control of the FDR, one might alternatively apply the Benjamini-Yekutieli (BY) stepup procedure, see Ben01 and Theorem 9.3.3 in Leh21. In view of the fact that this procedure is much more conservative than BH, we do not recommend its application in the current setting.

Concerning a subsequent analysis, estimators based on a pooled sample obtained from the BH procedure can be expected to exhibit bias and variance somewhere between those of the IM and Holm procedures.
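All three routes from raw pairwise p-values to rejection decisions are available through base R's p.adjust(); the helper below is a minimal sketch (the BY adjustment is obtained analogously with method = "BY").

```r
# From raw pairwise p-values to rejection decisions at level alpha.
adjust_and_reject <- function(p_raw, alpha = 0.1) {
  list(
    IM   = p_raw <= alpha,                             # ignore multiplicity
    Holm = p.adjust(p_raw, method = "holm") <= alpha,  # controls the FWER
    BH   = p.adjust(p_raw, method = "BH")   <= alpha   # controls the FDR
  )
}
```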

Remark 1.

The decomposition of $H_{0}^{\mathrm{ED}}$ into hypotheses of smaller dimensionality is not unique. For instance, we may alternatively write

\[ H_{0}^{\mathrm{ED}}=\bigcap_{k=1}^{K}H_{0}^{\mathrm{ED}}(B_{k}), \tag{12} \]

where $\{1\}\subset B_{1}\subset B_{2}\subset\dots\subset B_{K}=\{1,\dots,D\}$ denotes an increasing sequence of regions with $2\leq|B_{1}|<|B_{2}|<\dots<|B_{K}|=D$ (for instance, $B_{k}=\{1,2,\dots,1+k\}$ with $k=1,\dots,D-1$). In practice, the sequence is supposed to be derived from expert knowledge of the region of interest; it shall represent a sequence of possible pooling regions where $B_{k}$ is constructed from $B_{k-1}$ by adding the locations which are a priori 'most likely' homogeneous to the locations in $B_{k-1}$. Note that, necessarily, $K\leq D-1$, which provides an upper bound on the number of hypotheses to be tested.

The derivation of respective testing methods is straightforward. In view of the facts that the choice of the sequence is fairly subjective and that the eventual results crucially depend on that choice, we do not pursue the method any further.

3 Simulation Study

A large-scale Monte Carlo simulation study was conducted to assess the performance of the proposed bootstrap procedures in finite sample situations. We aim at answering the following questions:

(a) Regarding the test's power: what percentage of locations that are heterogeneous w.r.t. the location of primary interest can be expected to be identified correctly?

(b) Regarding the test's error rates: what percentage of locations that are homogeneous w.r.t. the location of primary interest can be expected to be wrongly identified as heterogeneous (FDR)? What is the probability of wrongly identifying at least one location that is homogeneous w.r.t. the location of interest as heterogeneous (FWER)?

(c) Regarding the chosen pooling regions: how does return level (RL) estimation based on the pooling regions proposed by the bootstrap procedures compare to RL estimation based on the location of interest only or on the whole (heterogeneous) region?

The data was generated in such a way that the temporal and spatial dynamics from the case study in Section 4 are mimicked. To achieve this, we started by fitting the scale-GEV model from Equation (2) to annual block maxima of observations from 1950–2021 at 16 spatial locations in Western Europe (i.e., $n=72$ and $D=16$) that are arranged in a $4\times 4$ grid; see Figure 1 and the additional explanations in Section 4. The locations correspond to the center points of the grid cells; the distance between the center points of two neighbouring grid cells is approximately 140 km. The location-wise GEV parameter estimates $\hat{\bm{\vartheta}}_{d}$ exhibit the following approximate ranges over $d\in\{1,\dots,16\}$: $\hat{\mu}_{d}\in(18.1,30.8)$ with a mean of 20.85, $\hat{\sigma}_{d}\in(4.185,7.92)$ with a mean of 5.3, $\hat{\gamma}_{d}\in(-0.13,0.36)$ with a mean of 0.08 and $\hat{\alpha}_{d}\in(-2.3,5.08)$ with a mean of 1.5. Fitting the scale-GEV model to the full pooled sample of size $n\cdot D=1152$, we obtained parameter estimates that were close to the means of the location-wise parameter estimates, namely 20.37, 5.8, 0.1 and 1.5 for the location, scale, shape and trend parameters, respectively. Next, we transformed the margins to (approximate) unit Fréchet by applying the transformation from Equation (9), such that we can fit several max-stable process models to the transformed data. The best fit was Smith's model with approximate dependence parameters $\sigma_{11}=0.4,\sigma_{12}=0.2,\sigma_{22}=0.9$; see davisonModelSpatExt for details on the model.

Figure 1: Illustration of the grid used for the simulation. The regions contained in $A_{\text{dev}}$ are shaded in blue, with $|A_{\text{dev}}|=2$ shown in the left plot and $|A_{\text{dev}}|=7$ shown on the right. The region of interest is the one labelled 10.

Based on these model fits, we chose to generate data with the following specifications: first, the sample size was fixed to $n=75$ and the regional $4\times 4$ grid was chosen as described before, i.e., $D=16$. The grid cell/location labelled '10' is chosen as the one of primary interest. Further, the dependence structure is fixed to that of Smith's model with (approximately) those parameters that gave the best fit on the observed data, i.e., $\sigma_{11}=0.4,\sigma_{12}=0.2,\sigma_{22}=0.9$. For simulating data, we first simulate from this max-stable process model (SpatialExtremes) and then transform the margins to scale-GEV distributions, either in a homogeneous or in a heterogeneous manner. Here, the globally homogeneous model is defined by fixing the marginal scale-GEV parameters to approximately the mean values of the location-wise GEV parameters obtained for the real observations, i.e.,

\[ \mu_{d}=20,\quad \sigma_{d}=5.5,\quad \gamma_{d}=0.1,\quad \alpha_{d}=1.5 \tag{13} \]

for each $d\in\{1,\dots,16\}$.

Starting from this homogeneous model, we consider two different heterogeneous scenarios. In the first scenario, we fix $\bm{\vartheta}_{d}=(\mu_{d},\sigma_{d},\gamma_{d},\alpha_{d})^{\top}$ as in Equation (13) for all $d\in A_{\text{hom}}=\{1,\dots,16\}\setminus\{4,8\}$, while

\[ \begin{aligned} \mu_{d}&=20+c_{\mu}, && c_{\mu}\in\{-3,-1.5,0,1.5,3\},\\ \sigma_{d}&=5.5\cdot c_{\sigma}, && c_{\sigma}\in\{0.7,0.85,1,1.15,1.3\},\\ \gamma_{d}&=0.1+c_{\gamma}, && c_{\gamma}\in\{-0.1,0,0.1\},\\ \alpha_{d}&=1.5+c_{\alpha}, && c_{\alpha}\in\{-1,0,1\}, \end{aligned} \tag{14} \]

for $d\in A_{\text{dev}}=\{4,8\}$ with $(c_{\mu},c_{\sigma},c_{\gamma},c_{\alpha})\neq(0,0,0,0)$. Note that this defines $5\cdot 5\cdot 3\cdot 3-1=224$ different heterogeneous models. In the second scenario, we consider the same construction with $A_{\text{hom}}=\{5,6,7,9,10,11,13,14,15\}$ and $A_{\text{dev}}=\{1,2,3,4,8,12,16\}$. An illustration of the grid cells and their partition into homogeneous and non-homogeneous areas can be found in Figure 1. Overall, we obtain 448 different heterogeneous models and one homogeneous model.
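For reference, one run of this data-generating mechanism might look as follows, with SpatialExtremes::rmaxstab() supplying the Smith-model dependence (cov.mod = "gauss"); the objects coord (the 16 grid-cell centres) and gmst (the covariate values for the $n=75$ years) are assumed to be given.

```r
library(SpatialExtremes)

n <- 75                                     # location-wise sample size
Y <- rmaxstab(n, coord, cov.mod = "gauss",  # Smith model, unit Frechet margins
              cov11 = 0.4, cov12 = 0.2, cov22 = 0.9)
# Homogeneous margins as in Equation (13); the heterogeneous scenarios modify
# the four parameters in the columns d of A_dev according to Equation (14).
mu <- 20; sigma <- 5.5; gamma <- 0.1; alpha <- 1.5
s <- exp(alpha * gmst / mu)                 # length-n scaling factor
M <- mu * s + sigma * s * (Y^gamma - 1) / gamma
```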

For each of the 449 models, we now apply the following three bootstrap procedures, each carried out with $B=300$ bootstrap replications (recall that the grid cell of interest is the one labelled 10):

(B1) The bootstrap procedure from Algorithm 1 with $A=\{1,\dots,16\}$.

(B2) The bootstrap procedure from Algorithm 1 for all sets $A_{d}=\{10,d\}$ with $d\in\{1,\ldots,16\}\setminus\{10\}$.

(B3) The bootstrap procedure from Algorithm 2 for all sets $A_{d}=\{10,d\}$ with $d\in\{1,\ldots,16\}\setminus\{10\}$.

Note that the second and third method both yield 15 raw p-values. Each procedure was applied to 500 simulated samples from all models under consideration.

Regarding (B1), we compute the percentage of rejections among the 500 replications, which represents the empirical type I error of the test under the homogeneous model and the empirical power under the heterogeneous models. The results can be found in Figure 2. The null hypothesis is met in the central square only, and we observe that the nominal level of $\alpha=0.1$ is perfectly matched. All non-central squares correspond to different alternatives, and we observe decent power properties in both scenarios. Note that a rejection only implies that the entire region $\{1,\dots,16\}$ is not homogeneous; there is no information on possible smaller subgroups that are homogeneous to the location of interest.


Figure 2: Rejection rates in % obtained for (B1), in the setting where either 2 (left plot) or 7 (right plot) regions deviate from the others. Each coloured square contains the rejection rate for one of the 225 different models, with the central square with $c_{\mu}=c_{\sigma}=0$ corresponding to the null hypothesis. The $x$- and $y$-axis and the facets determine the values of the scale-GEV parameter vector of the deviating locations through Equation (14).

Regarding (B2) and (B3), rejection decisions were obtained for each hypothesis $H_{0}^{\mathrm{ED}}(\{10,d\})$ by one of the three methods from Section 2.5. The empirical family-wise error rate is then the percentage of cases (over 500 replications) in which at least one true null hypothesis was rejected. Likewise, for the false discovery rate, we calculate, for each replication, the number of false rejections divided by the total number of rejections (when the total number of rejections is 0, this ratio is set to 0); the empirical false discovery rate is obtained by taking the mean over all 500 replications. Similarly, for assessing the power properties, we calculate the empirical proportion of correct rejections (i.e., among the 2 or 7 locations that deviate, the proportion of detected heterogeneous locations) over all 500 replications.
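In R, these empirical rates can be computed directly from the simulation output; the sketch below assumes a 500 x 15 logical matrix rej of rejection decisions and a logical vector true_null flagging the pairs for which the null hypothesis actually holds.

```r
# Empirical error rates and power from the rejection decisions.
fwer  <- mean(apply(rej[, true_null, drop = FALSE], 1, any))
fdr   <- mean(apply(rej, 1, function(r) {
  if (!any(r)) 0 else sum(r & true_null) / sum(r)
}))
power <- mean(rej[, !true_null])  # proportion of correct rejections
```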

Results for the false discovery and family-wise error rates are given in Table 1. We find that the p-value combination methods from Section 2.5 are sufficiently accurate: the BH method controls the false discovery rate, while Holm's method controls the family-wise error rate. This holds exactly for procedure (B3), where the maximal FDR (FWER) of the BH (Holm) method is 9.4% (8.7%), and approximately for (B2), where the maximal FDR (FWER) is 12.2% (12.6%). Further, we see that the IM procedure controls neither the FWER nor the FDR.

                min FDR      max FDR      mean FDR     min FWER     max FWER     mean FWER
Method          (B2)  (B3)   (B2)  (B3)   (B2)  (B3)   (B2)  (B3)   (B2)  (B3)   (B2)  (B3)

Scenario 1: $|A_{\mathrm{dev}}|=2$
BH               7.3   5.6   12.2   9.4    9.4   7.5    9.1   6.9   21.2  19.0   14.4  11.8
Holm             3.0   2.3   11.7   8.3    7.1   5.1    6.9   5.0   12.6   8.7    9.5   6.6
IM              25.8  25.3   61.7  60.1   37.7  37.2   53.4  53.4   64.5  62.8   59.1  58.3

Scenario 2: $|A_{\mathrm{dev}}|=7$
BH               3.6   2.4   12.1   8.9    5.6   4.9    6.2   4.2   32.0  29.8   18.6  16.8
Holm             1.0   0.9   11.3   7.9    3.4   2.6    3.8   2.4   11.3   7.9    7.1   5.1
IM               7.9   7.8   61.5  60.1   16.0  15.8   40.9  40.9   61.5  60.1   47.3  46.4

Table 1: False discovery rate (FDR) and family-wise error rate (FWER) in % for the three p-value combination methods from Section 2.5 and the two bootstrap methods (B2) and (B3). The stated values are the minimum, maximum and mean across the 224 alternative models from each scenario.

The power properties for procedure (B2) combined with the BH method are shown in Figure 3. We see that the procedure is able to detect some of the deviations from the null hypothesis, with more correct rejections the stronger the deviation is. The method is particularly powerful when the location and scale parameters deviate in opposite directions, i.e., when $c_{\mu}>0$ and $c_{\sigma}<1$ or $c_{\mu}<0$ and $c_{\sigma}>1$. There is no obvious pattern regarding the deviations of the shape and trend parameters. Further, we analogously show the power properties of the IM method with bootstrap (B2) in Figure 3. As expected, this method has more power against all alternatives under consideration. However, this comes at the cost of more false discoveries, as can be seen in Table 1.


Figure 3: Proportion of correct rejections in % obtained with the BH procedure (upper row) and the IM procedure (lower row) at a level of 0.1, in the setting where 2 locations deviate from the rest (left column) or 7 locations deviate from the rest (right column), with the bootstrap procedure based on max-stable processes. The axes and facets are as described in Figure 2.

The results for bootstrap scheme (B3) were very similar and are therefore not shown here; they can be found in Section B of the supplementary material. Likewise, we omit the results for the more conservative Holm procedure, which exhibits, as expected, less power against all alternatives. Further, we repeated the simulation study with an increased location-wise sample size of $n=100$. As one would expect, the tests have more power in this case.

The results presented so far show that the proposed pooling methods work ‘as intended’, since the theoretical test characteristics are well approximated in finite sample situations, and since we observe decent power properties. In practical applications however, spatial pooling of locations is usually the starting point for subsequent analyses. For instance, one may be interested in estimating return levels at the location of interest based on the data from all locations that were identified as homogeneous. Moreover, the analysis of alternative data sets like climate model data may be based on the homogeneous locations identified within the analysis of observations.

This suggests that the methods should be examined with regard to their quality in subsequent analyses. For that purpose, we consider, as an example, the problem of return level estimation at the location of interest. The state-of-the-art method would consist of GEV fitting at the location of interest only, which results in (asymptotically) unbiased estimators that suffer from large variance. Basing the estimator on pooled regions will decrease the variance, but at the same time increase its bias if some heterogeneous locations have been wrongly identified as homogeneous.

In particular, pooling based on a conservative testing approach like the BH procedure leads to the acceptance of many locations and thus to a large pooling area and low estimation variance. Most likely, some of the chosen locations will violate the null hypothesis though, which yields a rather large estimation bias. For pooling based on a more liberal rejection approach like the IM procedure, the estimation bias and variance behave exactly opposite: since the null hypotheses are more likely to be rejected, the resulting pooling sample is smaller (i.e., larger estimation variance) but 'more accurate' (i.e., smaller estimation bias).

For our comparison, we consider fitting the scale-GEV model based on pooled locations that have been obtained from one of the following eight methods:

\[ m\in\{\mathrm{LOI},\ \mathrm{full},\ \mathrm{MS\ IM},\ \mathrm{MS\ Holm},\ \mathrm{MS\ BH},\ \mathrm{biv.\ IM},\ \mathrm{biv.\ Holm},\ \mathrm{biv.\ BH}\}. \]

Here, LOI refers to considering the location of interest only (no pooling), full refers to pooling all available locations, and the last six methods encode pooling based on any combination of the proposed p-value combination methods and bootstrap approaches.

For each method, we compute the maximum likelihood estimate $\hat{\bm{\vartheta}}=(\hat{\mu},\hat{\sigma},\hat{\gamma},\hat{\alpha})^{\top}$ of the scale-GEV model parameters and transform this into an estimate of the $T$-year return level (RL) in the reference climate of year $t$ by

\[ \widehat{\mathrm{RL}}_{t}(T)=G^{-1}_{(\hat{\mu}(t),\hat{\sigma}(t),\hat{\gamma})}(1-1/T), \]

where $\hat{\mu}(t)=\hat{\mu}\exp(\hat{\alpha}\,\mathrm{GMST^{\prime}}(t)/\hat{\mu})$ and $\hat{\sigma}(t)=\hat{\sigma}\exp(\hat{\alpha}\,\mathrm{GMST^{\prime}}(t)/\hat{\mu})$, and where $G$ is the cumulative distribution function of the GEV distribution, see Equation (1). Now, in our simulation study, we know that the true value of the target RL is given by $\mathrm{RL}_{t}(T)=G^{-1}_{(\mu_{0}(t),\sigma_{0}(t),\gamma_{0})}(1-1/T)$ with

\[ \mu_{0}(t)=20\exp\left(\frac{1.5\,\mathrm{GMST^{\prime}}(t)}{20}\right),\quad \sigma_{0}(t)=5.5\exp\left(\frac{1.5\,\mathrm{GMST^{\prime}}(t)}{20}\right),\quad \gamma_{0}=0.1. \]

From the 500 replications we can therefore compute the empirical mean squared error (MSE) of method $m$ as

\[ \mathrm{MSE}(m)=\frac{1}{500}\sum_{j=1}^{500}\left(\widehat{\mathrm{RL}}^{(m,j)}_{t}(T)-\mathrm{RL}_{t}(T)\right)^{2}, \]

where $\widehat{\mathrm{RL}}^{(m,j)}_{t}(T)$ denotes the estimated RL obtained in the $j$-th replication with method $m$. Note that we have suppressed the MSE's dependence on $T$ and $t$ in the notation.
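These quantities translate directly into R; the sketch below uses qgev() from the evd package to invert the GEV distribution function, with theta_hat the fitted parameter vector of a given method.

```r
library(evd)

# T-year return level in the reference climate of year t, given a fitted
# parameter vector theta_hat = (mu, sigma, gamma, alpha) and gmst_t = GMST'(t).
rl_hat <- function(theta_hat, gmst_t, T = 100) {
  mu <- theta_hat[1]; sigma <- theta_hat[2]
  gamma <- theta_hat[3]; alpha <- theta_hat[4]
  s <- exp(alpha * gmst_t / mu)
  qgev(1 - 1 / T, loc = mu * s, scale = sigma * s, shape = gamma)
}
# mse <- mean((rl_hats - rl_true)^2)   # empirical MSE over the replications
```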

In Figure 4 we compare MSEs of the 100-year RL with reference climate as in year 2021, which is given by $\mathrm{RL}_{2021}(100)=55.87$, by plotting the difference $\mathrm{MSE}(m_{1})-\mathrm{MSE}(m_{2})$ with $m_{1}\in\{\mathrm{MS\ BH},\mathrm{MS\ IM}\}$ and $m_{2}\in\{\mathrm{full},\mathrm{LOI}\}$ as obtained for the setting where $|A_{\mathrm{dev}}|=7$. The plots reveal that both the MS BH and the MS IM method are superior to the LOI fit for almost all scenarios. Comparing the two methods to the full fit reveals that there are certain scenarios for which the full fit performs substantially worse, mostly when the shape and scale parameters deviate in the same direction for the alternatives. For those scenarios where the full fit outperforms the two methods, the discrepancy is not very large, with the BH method performing slightly better than the IM method.


Figure 4: Comparison of MSEs of $\mathrm{RL}_{2021}(100)$ obtained for different choices of the method $m$, in the setting where $|A_{\mathrm{dev}}|=7$. Shown are the differences $\mathrm{MSE}(m_{1})-\mathrm{MSE}(m_{2})$ with $m_{1}$ and $m_{2}$ as indicated in the plot title. Negative values (red) therefore indicate a lower MSE for the method mentioned first, and vice versa for positive values. The axes and facets are as described in Figure 2.


Figure 5: Comparison of MSEs of $\mathrm{RL}_{2021}(100)$ in the setting where $|A_{\mathrm{dev}}|=2$ (left) and $|A_{\mathrm{dev}}|=7$ (right). Shown are the differences MSE(MS BH) - MSE(MS IM). Negative values (red) therefore indicate a lower MSE for the BH method, while positive values (blue) indicate a lower MSE for the IM method. The axes and facets are as described in Figure 2.

A comparison between MS BH and MS IM is shown in Figure 5 for $|A_{\mathrm{dev}}|\in\{2,7\}$. The results reveal that the BH method slightly outperforms the IM method in the case $|A_{\mathrm{dev}}|=2$ for almost all alternative scenarios. In the case $|A_{\mathrm{dev}}|=7$, the results are quite mixed, with the IM method becoming clearly superior when the shape, scale and location parameters jointly deviate upwards. In all other scenarios, the differences are only moderate, sometimes favoring one method and sometimes the other. Corresponding results for the bootstrap methods based on bivariate extreme value distributions are very similar and therefore not shown. Further, the results were found to be robust against the choices of $t=2021$ and $T=100$ made here for the return level estimation.

Overall, the results suggest the following practical recommendation: if the full sample is expected to be quite homogeneous a priori (for instance, because it was built based on expert knowledge), then estimation based on BH-based pooling is preferable over the other options (the LOI, the full and the IM-based fit). If one expects a larger number of heterogeneous locations, it is advisable to apply the IM procedure (or another liberal procedure), which likely rejects most of the heterogeneous locations and hence reduces the bias. In general, the liberal behavior of IM-based pooling suggests its use when it is of highest practical interest to obtain a pooled region that is as homogeneous as possible (as a trade-off, one has to accept that the region will probably be much smaller than the initial full region).

4 Severe flooding in Western Europe during July 2021 revisited

We illustrate the new pooling methods in a case study by revisiting the extreme event attribution study for the heavy precipitation event that led to severe flooding in Western Europe during July 2021, see Kre21; Tra22. In that study, observational data were pooled together based on expert knowledge and on ad hoc tests, with the ultimate goal of assessing the influence of human-made climate change on the likelihood and severity of similar events in Western and Central Europe.

The full region under investigation in Kre21; Tra22 consists of sixteen $(2.0^{\circ}\times 1.25^{\circ})$ (i.e., about $140\,\mathrm{km}\times 140\,\mathrm{km}$) grid cells reaching from the northern Alps to the Netherlands, see Figure 5 in Kre21 or the right-hand side of Figure 6. Two of the 16 locations were rejected in that study due to expert knowledge and too large deviations in the fitted GEV parameters (regions 17 and 11 in Figure 6). Among other things, our illustrative application of the methods explained above will reveal that grid cell 11 has been rightfully dismissed, while grid cell 17 might have been considered homogeneous. Further, there is no clear evidence that any other grid cell that has been declared homogeneous should rather be considered non-homogeneous.

Figure 6: Regions analysed within this case study and the respective numbering used here. The data consists of April-September block maxima of tile-wise averaged daily precipitation sums (RX1day) from 1950-2021.

For illustrative purposes, we apply our methods to two different initial areas:

  (A) An area consisting of 6\times 6 grid cells covering a large part of Western/Central Europe, as shown in Figure 6 on the left.

  (B) The original 4\times 4 grid cells from Kre21, as shown in Figure 6 on the right.

Note that homogeneity for the 20 grid cells at the boundary of the larger area in (A) was dismissed based on expert knowledge in Kre21; the larger area is included here for illustrative purposes only.

The data used throughout the study consists of April-September block maxima of tile-wise averaged 1-day accumulated precipitation amounts from the E-OBS data set (cornes2018ensemble, Version 23.1e). In both cases, the grid cell with label 21 is of primary interest, since it contains the target location of the study, i.e., the region that accumulated the highest precipitation sum and experienced the largest impacts during the flooding of July 2021. The time series are shown in Figure C.1 in the supplementary material. There, we also plot values of \hat{\mu}(t)=\hat{\mu}\exp(\hat{\alpha}\,\mathrm{GMST^{\prime}}(t)/\hat{\mu}) obtained from different data sets: once from data of location 21 only, once from data of the respective location only, and once from the pooled data of the respective pair (21,d) for d\in\{1,\ldots,36\}\setminus\{21\}.

We apply the two proposed bootstrap procedures to areas (A) and (B). Note that the raw p-values obtained with the bootstrap based on bivariate extreme value distributions should be very similar (or even identical when using the same seed for random number generation) for those grid cells that appear in both areas, while they may differ to a greater extent for the MS bootstrap. This is because the p-value for a given pair obtained with the bivariate bootstrap procedure only depends on the observations of the pair, while the respective p-value obtained with the MS bootstrap also depends on the spatial model that was fitted to the whole area. However, even if the raw p-value of a given pair obtained for setting (B) coincides with the raw p-value obtained for setting (A), the adjustment for multiple testing can lead to slightly different rejection decisions for the pair at a given level \alpha. The bootstrap procedures are applied with B=2000 bootstrap replications.

We start by discussing the results of the application to the larger grid in (A). Recall that, for a given significance level \alpha, one rejects the null hypothesis for all grid cells whose p-value is smaller than \alpha. To visualise the results, we therefore shade the grid cells according to the magnitude of their (adjusted) p-value. Here, we divide the colour scale into three groups, [0,0.05], (0.05,0.1] and (0.1,1], with a dark red tone assigned to the first group, a brighter red tone to the second group and an almost transparent shade to the third group. This allows us to read off the test decisions for significance levels \alpha\in\{0.05,0.1\}: at a significance level of \alpha=0.1, all tiles with a reddish shade are rejected, while at a level of \alpha=0.05 only tiles in the dark shade are rejected.
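The mapping from raw bootstrap p-values to adjusted p-values and colour groups is a one-liner in R via the base function p.adjust; the following is a minimal sketch with placeholder p-values (not the case-study values):

## Minimal sketch: from raw bootstrap p-values to multiplicity-adjusted
## decisions and the three colour groups; the p-values are placeholders.
p_raw <- c(0.001, 0.016, 0.025, 0.034, 0.053, 0.072, 0.101, 0.46)

p_bh   <- p.adjust(p_raw, method = "BH")    # Benjamini-Hochberg adjustment
p_holm <- p.adjust(p_raw, method = "holm")  # Holm adjustment (more conservative)
p_im   <- p_raw                             # IM: raw p-values, multiplicity ignored

## Decisions at levels 0.05 and 0.1, plus the colour group of each tile
data.frame(
  p_bh   = round(p_bh, 3),
  rej_5  = p_bh < 0.05,
  rej_10 = p_bh < 0.10,
  shade  = cut(p_bh, breaks = c(0, 0.05, 0.1, 1), include.lowest = TRUE,
               labels = c("dark red", "bright red", "transparent"))
)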

Results for both the conservative BH procedure and the liberal IM procedure are shown in Figure 7. For completeness, results on Holm’s method, which is even more conservative than BH, as well as the BH and IM p-values themselves can be found in the supplementary material, Tables C.2 and C.3. One can see that, for a given rejection method (i.e. BH or IM), the MS and bivariate procedures mostly agree on the rejection decisions that would be made at a level of 10% (compare the rows of Figure 7 to see this). The same holds when working with a significance level of 5%.

Further, as expected, the IM method rejects more hypotheses than the BH method. However, according to the results of the simulation study, it is quite likely that at least one of these rejections is a false discovery.

Figure 7: (Adjusted) p-values obtained with the BH (left) and the IM (right) method on the 6\times 6 grid, with the bootstrap based on max-stable processes (top row) and the bootstrap based on bivariate extreme value distributions (bottom row).

Analogous results for the 4\times 4 grid in (B) are shown in Figure 8. As discussed above, except for the MS BH method, the results are consistent with those obtained for the 6\times 6 grid in the sense that, for locations contained in both grids, the locations with p-values of critical magnitude (<10\%) coincide (compare the plot in the upper right corner of Figure 8 to the plot in the upper right corner of Figure 7 to see this for the MS IM method, and similarly for the other methods). For the MS BH method, grid cells 10, 14, 15, and 16 are no longer significant at a level of 10%, but we recorded an adjusted p-value of 0.106 for each of those four grid cells, so this is a rather tight decision. The p-values obtained for the 4\times 4 grid can be found in Table C.1 in the supplementary material.

Figure 8: (Adjusted) p-values obtained with the BH (left) and the IM (right) method on the 4\times 4 grid, obtained with the bootstrap based on max-stable processes (top row) and the bootstrap based on bivariate extreme value distributions (bottom row).

Let us now move on to the interpretation. Considering the larger grid first, some grid cells for which the characteristics of extreme precipitation differ (according to expert opinion) from those of the grid cell of the target location are detected as heterogeneous; these rejected grid cells are located along the coast and in mountainous terrain. Comparing the results with Kre21; Tra22, we observe that grid cell 11 had also been rejected in their study based on expert knowledge. For grid cell 17, however, we do not detect any statistical evidence that the probabilistic behavior of extreme precipitation differs from that of the grid cell of the target location, even when applying the liberal IM procedure. We would like to stress, though, that non-rejection of a null hypothesis does not provide any evidence in favour of the null hypothesis, even when ignoring the multiple testing issue. Hence, expert knowledge that leads to rejection should, in general, outweigh any statistical non-rejection. This particularly applies to the eastern (continental) grid cells in the larger 6\times 6 grid, which can be influenced by heavy precipitation caused by different synoptic situations compared to the target region.

Moreover, as the results for locations 10, 14, 15, and 16 showed some discrepancy across the different testing procedures, we suggest that the final decision on the exclusion or inclusion of these locations in a spatial pooling approach be based on expert knowledge of the meteorological characteristics and on the willingness to trade possible bias for variance (with a possibly larger bias when including the locations; note that the statistical evidence against homogeneity in the bootstrap based on bivariate extreme value distributions is only weak, so wrongly declaring these regions homogeneous is possibly not too harmful). The same holds for locations 9, 20, 23 and 27, for which only the IM method yielded p-values between 5% and 10%. These rather small p-values could be false discoveries, and since the heterogeneity signal is not too strong either, there is no clear evidence that these locations need to be excluded from pooling.

As a final evaluation of the results from the pairwise tests, we estimated the 100-year RLs in the reference climate of the year 2021, i.e., with reference value \mathrm{GMST^{\prime}}(2021)=0.925^{\circ}\mathrm{C}, on five different data sets obtained from the 4\times 4 grid. Here, we use the data sets consisting of data from

  • the location of interest only

  • pooling those grid cells suggested by the results of the case study (i.e., all cells but 11, or all cells but 10, 11, 14, 15, 16) or expert opinion (i.e., all cells but 11, 17)

    • pooling all grid cells of the 4\times 4 grid.

The results can be found in Table 2 and reveal that excluding cell 11 has a clear effect on the estimated RL. Excluding or including grid cell 17 once cell 11 is excluded does not have a large effect, while additionally excluding cells 10, 14, 15 and 16 has a moderate effect.

cells excluded       \hat{\mu}  \hat{\sigma}  \hat{\gamma}  \hat{\alpha}  \widehat{\mathrm{RL}}_{2021}(100)
none                 20.37      5.80          0.1039        1.50          58.43
11                   20.01      5.44          0.0676        1.45          52.74
11, 17               20.01      5.40          0.0760        1.29          52.82
10, 11, 14, 15, 16   19.90      5.41          0.0484        1.79          51.93
all but 21           21.92      6.08          0.0634        -0.00         54.37
Table 2: Estimated parameters and estimate of \mathrm{RL}_{2021}(100) obtained when pooling all grid cells except the ones given in the first column.
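For transparency, the return levels in Table 2 can be reproduced directly from the tabulated scale-GEV parameters; the following R sketch does so for the row in which only cell 11 is excluded (small deviations are due to the rounding of the tabulated parameters):

## Sketch: recompute RL_2021(100) from the fitted scale-GEV parameters
## (row "11" of Table 2; deviations in the second decimal are rounding effects).
mu <- 20.01; sigma <- 5.44; gamma <- 0.0676; alpha <- 1.45
gmst <- 0.925   # GMST'(2021)
Tret <- 100

mu_t    <- mu    * exp(alpha * gmst / mu)   # location in the 2021 reference climate
sigma_t <- sigma * exp(alpha * gmst / mu)   # scale in the 2021 reference climate

## GEV quantile at probability 1 - 1/T (gamma != 0)
rl <- mu_t + sigma_t / gamma * ((-log(1 - 1 / Tret))^(-gamma) - 1)
rl  # approximately 52.8, matching the 52.74 reported in Table 2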

Finally, we would like to mention that similar results were obtained when applying the BH test procedures to all triples containing the pair of grid cells (20,21), i.e., the extended target region considered in the study of Kre21; Tra22, consisting of those regions in Germany and Belgium affected worst by the July 2021 flooding.

5 Extensions

In this section, we discuss how to estimate region-wise return levels under homogeneity assumptions (Section 5.1). We also propose two possible extensions of the pooling approach from the previous sections to other hypotheses (Section 5.2) or other underlying model assumptions (Section 5.3).

5.1 Estimation of regional return levels and return periods

As pointed out in Kre21; Tra22, among others, an estimated return period (RP) of T years for a given event and in a fixed reference climate (e.g., the preindustrial climate), obtained from a fit of the GEV distribution to a pooled homogeneous sample, has the following interpretation: for each fixed location/tile within the region, one can expect one event of the same or larger magnitude within T (imaginary) years of observing the reference climate. We refer to this quantity as the local return period. Obviously, one would expect more than one event of similar magnitude to happen at at least one of the locations of the pooling region. Likewise, for a given T, one would expect a higher T-year return level for the whole region; the latter corresponds to the value that is expected to be exceeded only once in T years at at least one of the locations.

Mathematically, using the notation from Section 2.1, the exceedance probability of value r at at least one among D\geq 2 locations in the reference climate corresponding to year t is given by

p_{t}(r)=P\Big(\exists\,j\in\{1,\dots,D\}:M_{j}^{(t)}\geq r\Big)=P\Big(\max_{d=1,\ldots,D}M_{d}^{(t)}\geq r\Big),

such that the return period for event r of the region is \mathrm{RP}_{t}^{\mathrm{reg}}(r)=\frac{1}{p_{t}(r)}. Further, the T-year return level of the region in the climate corresponding to year t is the minimal value \mathrm{RL}_{t}^{\mathrm{reg}}(T) for which

P\Big(\max_{d=1,\ldots,D}M_{d}^{(t)}\geq\mathrm{RL}_{t}^{\mathrm{reg}}(T)\Big)\leq\frac{1}{T}

holds. Both quantities could be computed (exactly) if one had access to the distribution of \max_{d=1,\ldots,D}M_{d}^{(t)}. For example, if the random variables M_{d}^{(t)},\ d=1,\ldots,D, were independent, p_{t}(r) could be further simplified to

p_{t}(r)=1-P\Big(\max_{d=1,\ldots,D}M_{d}^{(t)}\leq r\Big)=1-\big(G_{(\mu(t),\sigma(t),\gamma)}(r)\big)^{D},

where G_{(\mu(t),\sigma(t),\gamma)} is the distribution function of the GEV distribution and where \mu(t),\sigma(t) and \gamma denote the parameters in the reference climate of year t from Equation (2) under the homogeneity assumption from Equation (3).
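As a quick numerical illustration of this independence benchmark, consider the following R sketch with placeholder parameter values (not fitted values from the case study):

## Sketch: exceedance probability at >= 1 of D locations under independence,
## using homogeneous scale-GEV parameters (placeholder values).
pgev <- function(x, mu, sigma, gamma) {   # GEV cdf for gamma != 0
  z <- pmax(1 + gamma * (x - mu) / sigma, 0)
  exp(-z^(-1 / gamma))
}
mu_t <- 21.4; sigma_t <- 5.8; gamma <- 0.07   # illustrative parameters at year t
D <- 15                                       # number of pooled locations
r <- 52.7                                     # event value of interest
p_t <- 1 - pgev(r, mu_t, sigma_t, gamma)^D    # P(max over D locations >= r)
c(p_t = p_t, RP_reg = 1 / p_t)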

The locations are, however, usually not independent in applications. In the following, we propose a simulation-based estimation method that involves max-stable process models to account for the spatial dependence. As before, the R package SpatialExtremes (SpatialExtremes) allows for fitting and simulating max-stable process models.

Algorithm 3. (Simulation-based estimation of the region-wise RL and RP)

  (1) Fit the scale-GEV parameters to the pooled homogeneous sample, resulting in the parameter vector \hat{\bm{\vartheta}}=(\hat{\mu},\hat{\sigma},\hat{\gamma},\hat{\alpha})^{\top}.

  (2) Transform the margins of the pooled data to approximately unit Fréchet by applying the transformation from Equation (9) with the parameter estimate from Step (1). Then fit several max-stable process models to the obtained data and choose the best fit according to the information criterion CLIC.

  (3) Replicate for b=1,\ldots,B the following steps:

    (i) Generate one random observation (y_{1,b}^{(t),\ast},\ldots,y_{D,b}^{(t),\ast}) from the chosen max-stable process model.

    (ii) Transform the margins to GEV margins by applying the transformation in Equation (10) with parameters as estimated in Step (1), resulting in the observation (m_{1,b}^{(t),\ast},\ldots,m_{D,b}^{(t),\ast}).

    (iii) Compute the maximum m_{\max,b}^{(t),\ast}=\max_{d=1,\ldots,D}m_{d,b}^{(t),\ast}.

  (4) The region-wise T-year return level \mathrm{RL}_{t}^{\mathrm{reg}}(T) and the return period \mathrm{RP}_{t}^{\mathrm{reg}}(r) of an event with value r can now be estimated from the empirical cumulative distribution function \hat{F}_{t}^{\ast} of the sample (m_{\max,b}^{(t),\ast})_{b=1,\dots,B} through

    \widehat{\mathrm{RL}}_{t}^{\mathrm{reg}}(T)=(\hat{F}_{t}^{\ast})^{-1}(1-1/T),\qquad\widehat{\mathrm{RP}}_{t}^{\mathrm{reg}}(r)=\frac{1}{1-\hat{F}_{t}^{\ast}(r)}.

In particular, once the local 100-year RL has been estimated, we can estimate the return period this event has for the whole region. Likewise, given an estimate of the local return period of an event with value r, we can estimate the corresponding event value for the whole region at that return period.
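A condensed R sketch of Steps (3)-(4) might look as follows, assuming that a Brown-Resnick model was selected in Step (2); the max-stable parameters (range, smooth) and the scale-GEV parameters at the reference year are placeholders, not fitted values:

## Sketch of Steps (3)-(4) of Algorithm 3 (placeholder parameter values).
library(SpatialExtremes)

coord   <- as.matrix(expand.grid(x = 1:4, y = 1:4))  # grid cell coordinates
mu_t <- 21.4; sigma_t <- 5.8; gamma <- 0.07          # scale-GEV fit at reference year t
B <- 1000                                            # increase towards the 100000 used below

## Step (3)(i): B replications of the fitted max-stable process (unit Frechet margins)
y <- rmaxstab(B, coord, cov.mod = "brown", range = 2, smooth = 1)

## Steps (3)(ii)-(iii): transform the margins to the fitted GEV, take spatial maxima
m     <- mu_t + sigma_t * (y^gamma - 1) / gamma
m_max <- apply(m, 1, max)

## Step (4): region-wise return level and return period from the empirical cdf
Fhat   <- ecdf(m_max)
RL_reg <- quantile(m_max, 1 - 1 / 100)  # region-wise 100-year return level
RP_reg <- 1 / (1 - Fhat(52.7))          # region-wise RP of a local event of size 52.7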

We illustrate the estimators for the pooled data sets from Section 4. The estimates are based on B=100\,000 simulation replications and are shown in Table 3. We see that the local 100-year return levels have substantially shorter region-wise return periods. In the region with 15 tiles (only cell 11 excluded), the estimated local 100-year RL in the reference climate of 2021 can be expected to be exceeded once in approximately 19 years in at least one of the 15 tiles. We find a similar region-wise return period for the pooling region consisting of 14 tiles. In the pooling region consisting of 11 tiles, the local 100-year return level becomes a region-wise 33-year event. This comparably larger value arises from the smaller region under consideration: the smaller the region, the less likely it is that at least one of the locations exceeds a high threshold. Further, as expected, we find that the region-wise 100-year return levels in the reference climate of 2021 are larger than their local counterparts. For the regions consisting of 15 and 14 tiles, this increase is approximately 26%, while it is approximately 17.3% for the region consisting of 11 tiles.

cells excluded       \mathrm{RL}_{2021}(100)  \mathrm{RP}_{2021}^{\mathrm{reg}}(\mathrm{RL}_{2021}(100))  \mathrm{RL}_{2021}^{\mathrm{reg}}(100)
11                   52.74                    18.90                                                       66.40
11, 17               52.82                    18.32                                                       67.08
10, 11, 14, 15, 16   51.93                    32.76                                                       60.93
Table 3: Estimated local (second column) and regional (fourth column) 100-year RLs for the reference climate of 2021, for three possible choices of pooling regions as indicated in the first column. The third column shows the regional return periods of the local 100-year events.

5.2 A homogeneous scaling model with location-wise scaling factor

In this section, we maintain the temporal dynamics of the scale-GEV model from Equation (2). However, instead of testing the homogeneity assumption from Equation (3), we additionally allow for a location-wise scaling factor under the null hypothesis. Such an approach can be useful when it is known that observations from different locations occur on different scales but otherwise show a common probabilistic behaviour. In fact, a stationary version of the following model is commonly used in hydrology, where it is known as the Index Flood approach (dalrymple1960flood).

More precisely, suppose that

M_{t,d}\sim c_{d}\exp\left(\frac{\alpha\,\mathrm{GMST^{\prime}}(t)}{\mu}\right)\mathrm{GEV}(\mu,\sigma,\gamma)\quad\forall\, t,d, \qquad (15)

where c_{d}>0 is a location-specific scaling factor that we may fix to 1 at the location of primary interest (say d=1, i.e., c_{1}=1). Writing \mu_{d}=c_{d}\mu,\ \sigma_{d}=c_{d}\sigma,\ \alpha_{d}=c_{d}\alpha, the model in Equation (15) can be rewritten as

M_{t,d}\sim\mathrm{GEV}(\mu_{d}(t),\sigma_{d}(t),\gamma)\quad\forall\, t,d,

where

\mu_{d}(t)=\mu_{d}\exp\left(\frac{\alpha_{d}\,\mathrm{GMST^{\prime}}(t)}{\mu_{d}}\right),\quad\sigma_{d}(t)=\sigma_{d}\exp\left(\frac{\alpha_{d}\,\mathrm{GMST^{\prime}}(t)}{\mu_{d}}\right). \qquad (16)

Note that the parameters \mu_{1},\dots,\mu_{D},\sigma_{1},\dots,\sigma_{D},\alpha_{1},\dots,\alpha_{D} satisfy the relationships

\frac{\mu_{d}}{\sigma_{d}}\equiv\delta,\quad\frac{\alpha_{d}}{\mu_{d}}\equiv\eta,\quad\frac{\alpha_{d}}{\sigma_{d}}\equiv\kappa

for certain parameters \delta,\eta,\kappa; in particular, \mu_{1},\dots,\mu_{D},\sigma_{1},\dots,\sigma_{D},\alpha_{1},\dots,\alpha_{D} can be retrieved from \mu_{1},\dots,\mu_{D},\delta,\eta (the constraint on \alpha_{d}/\sigma_{d} is not needed, but follows from the first two relations). Fitting this model instead of fitting the scale-GEV distribution to each location separately has the advantage of substantially reducing the number of parameters that need to be estimated (4+(D-1)=D+3 instead of 4D parameters). Once the local scaling factors are identified, we can bring all observations to the same scale by dividing the observations at each location by the location-specific scaling factor.
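In R, this rescaling step is a single sweep over the columns of the data matrix; a minimal sketch with placeholder estimates, assuming the location of primary interest is stored in the first column:

## Sketch: rescaling all locations to a common scale (placeholder values).
set.seed(1)
M      <- matrix(rlnorm(72 * 3, meanlog = 3), nrow = 72, ncol = 3)  # placeholder block maxima
mu_hat <- c(20.0, 24.3, 17.9)           # placeholder location-wise location parameters
c_hat  <- mu_hat / mu_hat[1]            # scaling factors; c_1 = 1 by convention
M_resc <- sweep(M, 2, c_hat, FUN = "/") # divide each column by its scaling factor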

Now one can test whether such a local scaling model holds on a subset A=\{d_{1},\ldots,d_{k}\}\subset\{1,\ldots,D\} with cardinality k=|A|\geq 2, by testing the hypothesis

H_{0}^{\mathrm{LS}}(A):\quad\exists\,\delta_{A},\eta_{A},\gamma_{A}\ \forall\, d\in A:\quad\frac{\mu_{d}}{\sigma_{d}}=\delta_{A},\quad\frac{\alpha_{d}}{\mu_{d}}=\eta_{A},\quad\gamma_{d}=\gamma_{A}, \qquad (17)

with a Wald-type test statistic. In this case, the latter is defined as

T_{n}^{\mathrm{LS}}(A)=n\,(g_{A}(\hat{\bm{\theta}}))^{\top}\left(\bm{G}_{A}(\hat{\bm{\theta}})\hat{\bm{\Sigma}}_{n}\bm{G}_{A}(\hat{\bm{\theta}})^{\top}\right)^{-1}g_{A}(\hat{\bm{\theta}}), \qquad (18)

where g_{A}:\mathbb{R}^{4D}\to\mathbb{R}^{3(k-1)} is given by

g_{A}(\bm{\theta})=\left(\frac{\mu_{d_{1}}}{\sigma_{d_{1}}}-\frac{\mu_{d_{2}}}{\sigma_{d_{2}}},\ \gamma_{d_{1}}-\gamma_{d_{2}},\ \frac{\alpha_{d_{1}}}{\mu_{d_{1}}}-\frac{\alpha_{d_{2}}}{\mu_{d_{2}}},\ \ldots,\ \gamma_{d_{k-1}}-\gamma_{d_{k}},\ \frac{\alpha_{d_{k-1}}}{\mu_{d_{k-1}}}-\frac{\alpha_{d_{k}}}{\mu_{d_{k}}}\right)^{\top},

with Jacobian matrix \bm{G}_{A}(\bm{\theta})\in\mathbb{R}^{3(k-1)\times 4D}, since the hypothesis in Equation (17) may be rewritten as H_{0}^{\mathrm{LS}}(A):g_{A}(\bm{\theta})=0.
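For a pair A=\{1,2\}, the statistic in Equation (18) can be evaluated with a few lines of R, using numerical differentiation for the Jacobian; the estimates and the covariance matrix below are placeholders:

## Sketch of T_n^LS(A) from Equation (18) for A = {1, 2}; theta is the stacked
## vector (mu_1, sigma_1, gamma_1, alpha_1, mu_2, sigma_2, gamma_2, alpha_2).
library(numDeriv)  # provides jacobian() for numerical differentiation

g_A <- function(theta) {
  c(theta[1] / theta[2] - theta[5] / theta[6],  # mu_1/sigma_1 - mu_2/sigma_2
    theta[3] - theta[7],                        # gamma_1 - gamma_2
    theta[4] / theta[1] - theta[8] / theta[5])  # alpha_1/mu_1 - alpha_2/mu_2
}

theta_hat <- c(20.0, 5.4, 0.07, 1.4, 21.0, 5.9, 0.10, 1.2)  # placeholder estimates
Sigma_hat <- diag(8)                                        # placeholder covariance
n <- 72

G    <- jacobian(g_A, theta_hat)  # 3 x 8 Jacobian G_A(theta_hat)
g    <- g_A(theta_hat)
T_LS <- n * drop(t(g) %*% solve(G %*% Sigma_hat %*% t(G)) %*% g)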

When considering this kind of modification, the bootstrap algorithms from Section 2.4, Steps (5)-(7), must be adapted accordingly. In Step (5), one has to estimate the parameters under the constraint of the considered null hypothesis by adapting the log-likelihood; the estimated parameters are then used in the transformation step (6). Further, the test statistic in Steps (6) and (7) is replaced by T_{n}^{\mathrm{LS}}(A) from Equation (18). Further details are omitted for the sake of brevity.

5.3 General homogeneous models with smooth parametrization

In this section, we consider general GEV models in which the location, scale and shape parameters are allowed to depend in a (fixed) differentiable way on some parameter vector \bm{\vartheta}\in\mathbb{R}^{q} and some temporal covariate c^{(t)}\in\mathbb{R}^{p} with p,q\in\mathbb{N}. More precisely, suppose that f_{\mu},f_{\sigma} and f_{\gamma} are (known) real-valued functions of \bm{\vartheta} and c that are differentiable with respect to their first argument \bm{\vartheta}. We assume that, for each d=1,\dots,D, there exists an unknown parameter \bm{\vartheta}_{d} such that M_{d}^{(t)}\sim\mathrm{GEV}(\mu_{d}(t),\sigma_{d}(t),\gamma_{d}(t)) with

\mu_{d}(t)=f_{\mu}(\bm{\vartheta}_{d};c^{(t)}),\quad\sigma_{d}(t)=f_{\sigma}(\bm{\vartheta}_{d};c^{(t)}),\quad\gamma_{d}(t)=f_{\gamma}(\bm{\vartheta}_{d};c^{(t)}).

The global null hypothesis of interest within this model is assumed to be expressible as h(\bm{\vartheta}_{1},\ldots,\bm{\vartheta}_{D})=0 for a differentiable function h:\mathbb{R}^{qD}\to\mathbb{R}^{s} with s\in\mathbb{N}.

An example is given by the linear shift model that is frequently considered when modelling temperature extremes in extreme event attribution studies (see Phi20), where

\mu_{d}(t)=\mu_{d}+\alpha_{d}\,\mathrm{GMST^{\prime}}(t),\quad\sigma_{d}(t)\equiv\sigma_{d},\quad\gamma_{d}(t)\equiv\gamma_{d}.

A possible global null hypothesis of interest could be

H_{0}:\ \exists\,\bm{\vartheta}\in\mathbb{R}\times(0,\infty)\times\mathbb{R}^{2}\ \forall\, d\in\{1,\ldots,D\}:\quad\bm{\vartheta}_{d}=\bm{\vartheta},

where \bm{\vartheta}=(\mu,\sigma,\gamma,\alpha)^{\top} and \bm{\vartheta}_{d}=(\mu_{d},\sigma_{d},\gamma_{d},\alpha_{d})^{\top}.
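A minimal R sketch of how the linear shift model and this global null hypothesis fit into the (f_{\mu},f_{\sigma},f_{\gamma},h) framework:

## Sketch: the linear shift model in the general framework of this section,
## with vartheta = (mu, sigma, gamma, alpha) and covariate c = GMST'(t).
f_mu    <- function(vartheta, c) vartheta[1] + vartheta[4] * c
f_sigma <- function(vartheta, c) vartheta[2]   # constant scale
f_gamma <- function(vartheta, c) vartheta[3]   # constant shape

## h stacks the successive differences vartheta_{d+1} - vartheta_d, so that
## h(theta) = 0 encodes the global null hypothesis (here s = 4(D - 1)).
h <- function(theta, D) {
  V <- matrix(theta, nrow = 4)   # column d holds vartheta_d
  as.vector(V[, -1, drop = FALSE] - V[, -D, drop = FALSE])
}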

When considering this kind of extension, one has to adapt the maximum likelihood estimator as well as the estimator of its covariance matrix, hence steps (1)-(2) and (5)-(7) in the bootstrap algorithms are affected. Further details are omitted for the sake of brevity.

6 Conclusion

Extreme event attribution studies can build upon a GEV scaling model. Depending on the analysed variable, it may be useful to apply spatial pooling and fit the GEV distribution to a pooled sample of observations collected at sufficiently homogeneous spatial locations, as has been done in Kre21; Tra22; Vau15, among others. Here, we propose several statistical methods that enable the selection of a homogeneous pooling region from a larger initial region. The BH approach was found to be quite conservative; hence, some heterogeneous locations are likely to be declared homogeneous. The IM approach is a more liberal alternative with a higher proportion of rejected locations, which may contain some homogeneous ones. In subsequent analyses, the selected pooling region typically results in a classical bias-variance trade-off: the larger the pooling region, the smaller the variance; at the same time, the bias may be larger, given that some heterogeneous regions may have been declared homogeneous. In practice, the tests should always be complemented by expert knowledge on the driving meteorological/climatological background processes.

To make the statistical approach for selecting homogeneous pooling regions described here usable for the extreme event attribution community, we have developed a software package that can be freely downloaded and used by applied researchers (findpoolreg). The selection of spatial pooling regions for attribution studies may hence be based on a combination of expert knowledge and thorough statistical tests. The experts applying the methods can thereby decide between a conservative approach, which tends to reject more locations, and a liberal approach, which tends to accept more locations as homogeneous. This decision depends on the a priori knowledge about the meteorology of the analysed area and the specific requirements of the study.

If the ultimate interest is estimation of, for example, return levels, one may, as an alternative to the classical approach based on pooling selected locations, consider penalized maximum likelihood estimators with a penalty on large heterogeneities (Buc21). A detailed investigation of the resulting bias-variance trade-off would be a worthwhile topic for future research.

Declaration

Ethical Approval

Not applicable.

Availability of supporting data

All methods are implemented in the R package findpoolreg (findpoolreg) that is publicly available at https://github.com/leandrazan/findpoolreg. The data used throughout the case study is derived from the E-OBS gridded data set, publicly available at https://www.ecad.eu/download/ensembles/download.php.

Competing interests

The authors declare that they have no conflict of interest.

Funding

This work has been supported by the integrated project “Climate Change and Extreme Events - ClimXtreme Module B - Statistics (subprojects B1.2, grant number: 01LP1902B and B3.3, grant number: 01LP1902L)” funded by the German Federal Ministry of Education and Research (BMBF).

Authors’ contributions

LZ and AB wrote the main manuscript and worked out the mathematical details. LZ implemented all methods and carried out the Monte Carlo simulation study and the case study. FK, PL and JT contributed by extensive discussions, provided the data for the case study and improved the text.

Acknowledgements

Computational infrastructure and support were provided by the Centre for Information and Media Technology at Heinrich Heine University Düsseldorf, which is gratefully acknowledged.

References

SUPPLEMENT TO THE PAPER:
“Regional Pooling in Extreme Event Attribution Studies: an Approach Based on Multiple Statistical Testing”

Leandra Zanger, Axel Bücher, Frank Kreienkamp,
Philip Lorenz, Jordis Tradowsky

This supplement contains mathematical details on the maximum likelihood estimator and the estimation of its covariance matrix, as well as additional simulation results and further details on the case study. References like (1) refer to equations from the main paper, while references like (A.1) or Figure B.2 refer to equations or Figures within this appendix.

Appendix A Mathematical Details

A.1 Maximum Likelihood estimation

Throughout this section, we provide mathematical details on the coordinate-wise maximum likelihood estimator from Equation (6). In particular, we motivate the approximate normality of \hat{\bm{\theta}}=(\hat{\bm{\vartheta}}_{1}^{\top},\dots,\hat{\bm{\vartheta}}_{D}^{\top})^{\top}\in\Theta^{D} with mean \bm{\theta}=(\bm{\vartheta}_{1}^{\top},\dots,\bm{\vartheta}_{D}^{\top})^{\top} and covariance n^{-1}\bm{\Sigma}_{n} with \bm{\Sigma}_{n}=(\bm{\Sigma}_{n;j,k})_{j,k=1}^{D}\in\mathbb{R}^{4D\times 4D} as defined in Equation (7). As in the stationary GEV model, the derivations require \gamma>-1/2, see BucSeg17.

We start with some explicit formulas for the functions appearing in Equations (6) and (7). For that purpose, let l_{(\mu,\sigma,\gamma)}(x) denote the log density of the plain \mathrm{GEV}(\mu,\sigma,\gamma) distribution (see Appendix B in BucSeg17), i.e.,

l_{(\mu,\sigma,\gamma)}(x)=-\log(\sigma)-u_{\gamma}\Big(\frac{x-\mu}{\sigma}\Big)+(\gamma+1)\log\Big(u_{\gamma}\Big(\frac{x-\mu}{\sigma}\Big)\Big) \qquad (A.1)

for x such that 1+\gamma\frac{x-\mu}{\sigma}>0; here

u_{\gamma}(z)=\begin{cases}(1+\gamma z)^{-\frac{1}{\gamma}},&\gamma\neq 0,\\ \exp(-z),&\gamma=0.\end{cases}

Then, writing \bm{\vartheta}=(\mu,\sigma,\gamma,\alpha)^{\top}, the log-density \ell_{\bm{\vartheta}}(x,c) from Equation (5) may be written as

\ell_{\bm{\vartheta}}(x,c)=l_{(\mu(c),\sigma(c),\gamma)}(x), \qquad (A.2)

where

\mu(c)=\mu(c\mid\mu,\alpha)=\mu\exp(\alpha c/\mu),\qquad\sigma(c)=\sigma(c\mid\mu,\sigma,\alpha)=\sigma\exp(\alpha c/\mu).

We next derive formulas for the gradient and the Hessian of \bm{\vartheta}\mapsto\ell_{\bm{\vartheta}}(x,c). For that purpose, let \dot{l}_{(\mu,\sigma,\gamma)}(x) and \ddot{l}_{(\mu,\sigma,\gamma)}(x) denote the respective gradient and Hessian of the standard GEV log density (see Appendix B in BucSeg17 for precise formulas). Note that, in view of the fact that the GEV distribution is a location-scale family,

\dot{l}_{(\mu,\sigma,\gamma)}(x)=T_{\sigma}^{-1}\dot{l}_{(0,1,\gamma)}\Big(\frac{x-\mu}{\sigma}\Big),\qquad T_{\sigma}=\operatorname{diag}(\sigma,\sigma,1)\in\mathbb{R}^{3\times 3}, \qquad (A.3)
\ddot{l}_{(\mu,\sigma,\gamma)}(x)=T_{\sigma}^{-1}\ddot{l}_{(0,1,\gamma)}\Big(\frac{x-\mu}{\sigma}\Big)T_{\sigma}^{-1}. \qquad (A.4)

Next, consider the function p_{c}:\Theta\to(0,\infty)^{2}\times\mathbb{R} defined by p_{c}(\mu,\sigma,\gamma,\alpha)=(\mu\exp(\alpha c/\mu),\sigma\exp(\alpha c/\mu),\gamma)^{\top}, whose Jacobian is given by B_{c}(\mu,\sigma,\alpha)^{\top}, where

B_{c}(\mu,\sigma,\alpha)=\begin{pmatrix}\left(1-\frac{\alpha c}{\mu}\right)\exp\left(\frac{\alpha}{\mu}c\right)&-\frac{\sigma\alpha c}{\mu^{2}}\exp\left(\frac{\alpha}{\mu}c\right)&0\\ 0&\exp\left(\frac{\alpha}{\mu}c\right)&0\\ 0&0&1\\ c\exp\left(\frac{\alpha}{\mu}c\right)&\frac{\sigma c}{\mu}\exp\left(\frac{\alpha}{\mu}c\right)&0\end{pmatrix}.

Then, in view of Equations (A.2) and (A.3), the multivariate chain rule yields

\dot{\ell}_{\bm{\vartheta}}(x,c)=B_{c}(\mu,\sigma,\alpha)\cdot\dot{l}_{(\mu(c),\sigma(c),\gamma)}(x)=B_{c}(\mu,\sigma,\alpha)\cdot T_{\sigma(c)}^{-1}\cdot\dot{l}_{(0,1,\gamma)}\left(\frac{x-\mu(c)}{\sigma(c)}\right). \qquad (A.5)

In view of the multivariate product rule, this equation further implies

\ddot{\ell}_{\bm{\vartheta}}(x,c)=\begin{pmatrix}\dot{l}_{(\mu(c),\sigma(c),\gamma)}(x)^{\top}\dot{B}_{c,1}(\mu,\sigma,\alpha)\\ \dot{l}_{(\mu(c),\sigma(c),\gamma)}(x)^{\top}\dot{B}_{c,2}(\mu,\sigma,\alpha)\\ \dot{l}_{(\mu(c),\sigma(c),\gamma)}(x)^{\top}\dot{B}_{c,3}(\mu,\sigma,\alpha)\\ \dot{l}_{(\mu(c),\sigma(c),\gamma)}(x)^{\top}\dot{B}_{c,4}(\mu,\sigma,\alpha)\end{pmatrix}+B_{c}(\mu,\sigma,\alpha)\,\ddot{l}_{(\mu(c),\sigma(c),\gamma)}(x)\,B_{c}(\mu,\sigma,\alpha)^{\top}
=\begin{pmatrix}\dot{l}_{(0,1,\gamma)}(\tfrac{x-\mu(c)}{\sigma(c)})^{\top}T_{\sigma(c)}^{-1}\dot{B}_{c,1}(\mu,\sigma,\alpha)\\ \dot{l}_{(0,1,\gamma)}(\tfrac{x-\mu(c)}{\sigma(c)})^{\top}T_{\sigma(c)}^{-1}\dot{B}_{c,2}(\mu,\sigma,\alpha)\\ \dot{l}_{(0,1,\gamma)}(\tfrac{x-\mu(c)}{\sigma(c)})^{\top}T_{\sigma(c)}^{-1}\dot{B}_{c,3}(\mu,\sigma,\alpha)\\ \dot{l}_{(0,1,\gamma)}(\tfrac{x-\mu(c)}{\sigma(c)})^{\top}T_{\sigma(c)}^{-1}\dot{B}_{c,4}(\mu,\sigma,\alpha)\end{pmatrix}+B_{c}(\mu,\sigma,\alpha)\,T_{\sigma(c)}^{-1}\,\ddot{l}_{(0,1,\gamma)}\left(\frac{x-\mu(c)}{\sigma(c)}\right)T_{\sigma(c)}^{-1}\,B_{c}(\mu,\sigma,\alpha)^{\top}, \qquad (A.6)

where we used Equations (A.3) and (A.4) for the second equality and where \dot{B}_{c,j}(\mu,\sigma,\alpha) denotes the Jacobian in \mathbb{R}^{3\times 4} of the jth row of B_{c}(\mu,\sigma,\alpha) (derivative with respect to (\mu,\sigma,\gamma,\alpha)). The latter can be derived explicitly by a tedious but straightforward calculation; we omit precise formulas for the sake of brevity.

We next motivate the claimed normal approximation. First of all, in view of the differentiability of \bm{\vartheta}\mapsto\ell_{\bm{\vartheta}}, the vector of maximum likelihood estimators is necessarily a zero of the gradient of the log-likelihood function, i.e., we have

0=\frac{1}{n}\sum_{t=1}^{n}\begin{pmatrix}\dot{\ell}_{\hat{\bm{\vartheta}}_{1}}(M_{1}^{(t)},c^{(t)})\\ \vdots\\ \dot{\ell}_{\hat{\bm{\vartheta}}_{D}}(M_{D}^{(t)},c^{(t)})\end{pmatrix}.

Denoting the true parameter vector by \bm{\theta}, a Taylor expansion implies that

0=\frac{1}{n}\sum_{t=1}^{n}\begin{pmatrix}\dot{\ell}_{\bm{\vartheta}_{1}}(M_{1}^{(t)},c^{(t)})\\ \vdots\\ \dot{\ell}_{\bm{\vartheta}_{D}}(M_{D}^{(t)},c^{(t)})\end{pmatrix}+\left\{\frac{1}{n}\sum_{t=1}^{n}\begin{pmatrix}\ddot{\ell}_{\bm{\vartheta}_{1}}(M_{1}^{(t)},c^{(t)})&0&\ldots&0\\ 0&\ddot{\ell}_{\bm{\vartheta}_{2}}(M_{2}^{(t)},c^{(t)})&\ldots&0\\ \vdots&\vdots&&\vdots\\ 0&0&\ldots&\ddot{\ell}_{\bm{\vartheta}_{D}}(M_{D}^{(t)},c^{(t)})\end{pmatrix}\right\}(\hat{\bm{\theta}}-\bm{\theta})+R_{n}\equiv L_{n,\bm{\theta}}+I_{n,\bm{\theta}}\cdot(\hat{\bm{\theta}}-\bm{\theta})+R_{n},

where R_{n} denotes higher-order terms which are negligible. Solving for \sqrt{n}(\hat{\bm{\theta}}-\bm{\theta}), we obtain that

\sqrt{n}(\hat{\bm{\theta}}-\bm{\theta})\approx-I_{n,\bm{\theta}}^{-1}\cdot\sqrt{n}\,L_{n,\bm{\theta}}.

By Equation (A.6), each (4\times 4) block matrix I_{n,d,\bm{\vartheta}_{d}}=\frac{1}{n}\sum_{t=1}^{n}\ddot{\ell}_{\bm{\vartheta}_{d}}(M_{d}^{(t)},c^{(t)}) on the block diagonal of I_{n,\bm{\theta}} is of the form

\frac{1}{n}\sum_{t=1}^{n}f(c^{(t)})\,g(Z_{d}^{(t)}) \qquad (A.7)

for suitable functions f and g, where

Z_{d}^{(t)}=\{M_{d}^{(t)}-\mu_{d}(c^{(t)})\}/\sigma_{d}(c^{(t)}) \qquad (A.8)

with \mu_{d}(c^{(t)})=\mu_{d}\exp(\alpha_{d}c^{(t)}/\mu_{d}) and \sigma_{d}(c^{(t)})=\sigma_{d}\exp(\alpha_{d}c^{(t)}/\mu_{d}); the variable Z_{d}^{(t)} is \mathrm{GEV}(0,1,\gamma_{d})-distributed and independent over t. Under suitable assumptions on t\mapsto f(c^{(t)}) (and hence on c^{(t)}), we obtain that the variance of I_{n,d,\bm{\vartheta}_{d}} is of the order 1/n. As a consequence, I_{n,d,\bm{\vartheta}_{d}} is close to its expectation, that is, I_{n,d,\bm{\vartheta}_{d}}=J_{n,d,\bm{\vartheta}_{d}}+o(1) with J_{n,d,\bm{\vartheta}_{d}} defined just after Equation (7). More precisely, in an asymptotic framework where one assumes that c^{(t)}=h(t/n) for some continuous function h:[0,1]\to\mathbb{R}, expressions as in Equation (A.7) converge to \int_{0}^{1}f(h(t))\,\mathrm{d}t\times\mathbb{E}[g(Z_{d})] with Z_{d}\sim\mathrm{GEV}(0,1,\gamma_{d}) (note that both the integral and the expectation exist).

Next, consider \sqrt{n}\,L_{n,\bm{\theta}}. It suffices to argue that we may apply a suitable version of the Central Limit Theorem, under suitable assumptions on t\mapsto c^{(t)}. Similarly as for I_{n,\bm{\theta}}, by Equation (A.5), each entry of \sqrt{n}\,L_{n,\bm{\theta}} is of the form

\frac{1}{\sqrt{n}}\sum_{t=1}^{n}f(c^{(t)})\,g(Z_{d}^{(t)})

for certain functions f and g and for some d\in\{1,\dots,D\}. In view of the independence over t and the fact that Z_{d}^{(t)}\sim\mathrm{GEV}(0,1,\gamma_{d}), we may for instance apply the Lyapunov CLT for independent triangular arrays, see Theorem 27.3 in Bil95. Since \mathbb{E}[g(Z_{d}^{(t)})^{p}]<\infty for sufficiently small p>2 and for the functions g of interest, a sufficient condition for its applicability is

\lim_{n\to\infty}\frac{\sum_{t=1}^{n}\{f(c^{(t)})\}^{p}}{\big[\sum_{t=1}^{n}\{f(c^{(t)})\}^{2}\big]^{p/2}}=0,

which readily follows, for instance, if one assumes that c^{(t)}=h(t/n) for some continuous function h:[0,1]\to\mathbb{R}.

A.2 Covariance Estimation

Throughout this section, we provide an estimator for \bm{\Sigma}_{n}=(\bm{\Sigma}_{n;j,k})_{j,k=1}^{D} defined in Equation (7). First of all, we denote by \hat{J}_{n,d,\bm{\vartheta}_{d}} an (approximate) Hessian of the function

\bm{\vartheta}_{d}\mapsto\frac{1}{n}\sum_{t=1}^{n}\ell_{\bm{\vartheta}_{d}}(M_{d}^{(t)},c^{(t)})

evaluated at its maximizer \hat{\bm{\vartheta}}_{d}, possibly obtained by numerical differentiation. Note that this matrix is routinely returned by standard implementations for numerical optimization; for instance, the optim function in R returns an output value hessian when called with hessian = TRUE.
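A self-contained sketch of this step, using a stationary GEV log-likelihood and simulated placeholder data (the scale-GEV case is analogous):

## Sketch: obtaining J_hat from optim's hessian output (placeholder data).
nll <- function(p, x) {                    # negative stationary GEV log-likelihood
  mu <- p[1]; sigma <- p[2]; gamma <- p[3]
  if (sigma <= 0) return(Inf)
  z <- 1 + gamma * (x - mu) / sigma
  if (any(z <= 0)) return(Inf)
  sum(log(sigma) + z^(-1 / gamma) + (1 / gamma + 1) * log(z))
}
set.seed(1)
x <- 20 + 5 * (rexp(72)^(-0.1) - 1) / 0.1  # placeholder GEV(20, 5, 0.1) sample
fit   <- optim(c(20, 5, 0.1), nll, x = x, hessian = TRUE)
J_hat <- -fit$hessian / length(x)          # Hessian of the average log-likelihood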

It remains to estimate the matrix

\bm{C}_{n,j,k}:=\frac{1}{n}\sum_{t=1}^{n}\operatorname{Cov}\big[\dot{\ell}_{\bm{\vartheta}_{j}}(M_{j}^{(t)},c^{(t)}),\,\dot{\ell}_{\bm{\vartheta}_{k}}(M_{k}^{(t)},c^{(t)})\big]\in\mathbb{R}^{4\times 4}

for all 1\leq j<k\leq D. By Equation (A.5), we may write

\bm{C}_{n,j,k}=\frac{1}{n}\sum_{t=1}^{n}B_{c^{(t)}}(\mu_{j},\sigma_{j},\alpha_{j})\,T_{\sigma_{j}(c^{(t)})}^{-1}\operatorname{Cov}\left(\dot{l}_{(0,1,\gamma_{j})}(Z_{j}^{(t)}),\,\dot{l}_{(0,1,\gamma_{k})}(Z_{k}^{(t)})\right)T_{\sigma_{k}(c^{(t)})}^{-1}\,(B_{c^{(t)}}(\mu_{k},\sigma_{k},\alpha_{k}))^{\top},

with \dot{l}_{(0,1,\gamma)} the gradient of l_{(0,1,\gamma)} from Equation (A.1) and with Z_{j}^{(t)} as defined in Equation (A.8), which is \mathrm{GEV}(0,1,\gamma_{j})-distributed for any j=1,\dots,D. Note that the cross covariance \bm{\Gamma}_{j,k}=\operatorname{Cov}(\dot{l}_{(0,1,\gamma_{j})}(Z_{j}^{(t)}),\dot{l}_{(0,1,\gamma_{k})}(Z_{k}^{(t)})) does not depend on t, and may hence be estimated empirically after replacing the true parameters by their estimators. More precisely, we obtain the estimator

\hat{\bm{C}}_{n,j,k}=\frac{1}{n}\sum_{t=1}^{n}B_{c^{(t)}}(\hat{\mu}_{j},\hat{\sigma}_{j},\hat{\alpha}_{j})\,T_{\hat{\sigma}_{j}(c^{(t)})}^{-1}\,\hat{\bm{\Gamma}}_{n,j,k}\,T_{\hat{\sigma}_{k}(c^{(t)})}^{-1}\,(B_{c^{(t)}}(\hat{\mu}_{k},\hat{\sigma}_{k},\hat{\alpha}_{k}))^{\top},

where \hat{\sigma}_{j}(c)=\hat{\sigma}_{j}\exp(\hat{\alpha}_{j}c/\hat{\mu}_{j}) and where \hat{\bm{\Gamma}}_{n,j,k} denotes the empirical cross covariance matrix of the two samples (\dot{l}_{(0,1,\hat{\gamma}_{j})}(\hat{Z}_{j}^{(t)}))_{t=1}^{n} and (\dot{l}_{(0,1,\hat{\gamma}_{k})}(\hat{Z}_{k}^{(t)}))_{t=1}^{n} with

\hat{Z}_{j}^{(t)}=\frac{M_{j}^{(t)}-\hat{\mu}_{j}(c^{(t)})}{\hat{\sigma}_{j}(c^{(t)})}

and \hat{\mu}_{j}(c)=\hat{\mu}_{j}\exp(\hat{\alpha}_{j}c/\hat{\mu}_{j}). The final estimator for \bm{\Sigma}_{n} is \hat{\bm{\Sigma}}_{n}=(\hat{\bm{\Sigma}}_{n;j,k})_{j,k=1}^{D} with

\hat{\bm{\Sigma}}_{n;j,k}=\hat{J}_{n,j,\bm{\vartheta}_{j}}^{-1}\,\hat{\bm{C}}_{n,j,k}\,\hat{J}_{n,k,\bm{\vartheta}_{k}}^{-1}.
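The key empirical ingredient, \hat{\bm{\Gamma}}_{n,j,k}, is a plain cross covariance of two score samples, which R computes directly; a minimal sketch with placeholder scores (in practice, the rows would be the standardized score vectors evaluated at \hat{Z}_{j}^{(t)} and \hat{Z}_{k}^{(t)}):

## Sketch: the empirical cross covariance Gamma_hat; the n x 3 score matrices
## are placeholders for the samples of standardized score vectors at j and k.
set.seed(1)
n   <- 72
S_j <- matrix(rnorm(n * 3), n, 3)  # placeholder scores, location j
S_k <- matrix(rnorm(n * 3), n, 3)  # placeholder scores, location k
Gamma_hat <- cov(S_j, S_k)         # 3 x 3 empirical cross covariance matrix
## C_hat_{n,j,k} then follows by averaging B T^{-1} Gamma_hat T^{-1} B^T over t.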

Appendix B Additional results of the simulation study

B.1 Additional results for record length n=75

We report the power properties obtained with the BH and IM method for procedure (B3) in Figures B.1 and B.2, respectively. Power properties obtained with the Holm method are shown in Figures B.3 (for (B2)) and B.4 (for (B3)).


Figure B.1: Proportion of correct rejections in % obtained with the Benjamini-Hochberg procedure at a level of 0.1, in the setting where two stations deviate from the rest (left column) or 7 stations deviate from the rest (right column), with the bootstrap procedure based on bivariate extreme value distributions. The axis and facets are as described in Figure 2.


Figure B.2: Proportion of correct rejections in % obtained when ignoring the multiple testing problem, at a level of 0.1, in the setting where two stations deviate from the rest (left column) or 7 stations deviate from the rest (right column), with the bootstrap procedure based on bivariate extreme value distributions. The axis and facets are as described in Figure 2.


Figure B.3: Proportion of correct rejections in %\% obtained with the Holm procedure at a level of 0.1, in the setting where two stations deviate from the rest (left column) or 7 stations deviate from the rest (right column), with the bootstrap procedure based on max-stable processes. The axis and facets are as described in Figure 2.


Figure B.4: Proportion of correct rejections in %\% obtained with the Holm procedure at a level of 0.1, in the setting where two stations deviate from the rest (left column) or 7 stations deviate from the rest (right column), with the bootstrap procedure based on bivariate extreme value distributions. The axis and facets are as described in Figure 2.

B.2 Additional results for record length n=100

Since the bootstrap procedures implicitly depend on the asymptotic distribution of the test statistic, we repeated the simulation study with a larger sample size in order to investigate the sample size's impact on the performance of the bootstrap procedures. The location-wise sample size is increased to n=100. Again, the FDR and FWER are reported (Table B.1), as well as the power plots for BH and (B2) in Figure B.5, and for IM and (B2) in Figure B.6. As expected, the error rates are again sufficiently controlled by those methods that claim to do so, while the power has increased substantially (on average by 50% for the BH method and by 18% for the IM method). The results for the other methods and for (B3) were again similar.

                min FDR      max FDR      mean FDR     min FWER     max FWER     mean FWER
Method          (B2)  (B3)   (B2)  (B3)   (B2)  (B3)   (B2)  (B3)   (B2)  (B3)   (B2)  (B3)
Scenario 1: |A_{\mathrm{dev}}|=2
BH               6.9   5.2   12.4  11.4    9.4   7.8    8.7   6.8   23.9  20.4   15.8  13.4
Holm             2.7   1.4   10.3   8.0    6.4   4.6    6.8   3.9   12.9   9.4    9.5   6.7
IM              26.4  25.7   60.3  59.9   35.2  34.6   53.4  52.7   64.4  63.8   59.2  58.3
Scenario 2: |A_{\mathrm{dev}}|=7
BH               4.0   2.7   10.7   8.3    5.6   5.0    6.3   5.1   32.4  32.6   21.6  19.9
Holm             0.8   0.5   10.3   6.5    2.9   2.1    4.2   3.0   11.2   8.0    7.1   5.2
IM               8.2   7.8   60.1  59.7   14.2  13.8   41.8  40.9   60.1  59.7   47.9  46.6

Table B.1: False Discovery Rate (FDR) and Familywise Error Rate (FWER) for the three p-value combination methods from Section 2.5 and the two bootstrap methods (B2) and (B3), obtained in the simulations with record length n=100. The stated values are the minimum, maximum and mean across the 224 models from each scenario.


Figure B.5: Proportion of correct rejections in % obtained with the Benjamini-Hochberg procedure at a level of 0.1, in the setting where two stations deviate from the rest (left column) or 7 stations deviate from the rest (right column), with the bootstrap procedure based on max-stable processes and record length n=100. The axis and facets are as described in Figure 2.


Figure B.6: Proportion of correct rejections in % obtained when ignoring the multiple testing problem, at a level of 0.1, in the setting where two stations deviate from the rest (left column) or 7 stations deviate from the rest (right column), with the bootstrap procedure based on max-stable processes and record length n=100. The axis and facets are as described in Figure 2.

Appendix C Additional results for the case study

The complete results of the bootstrap procedures applied to the 4\times 4 grid can be found in Table C.1. For the 6\times 6 grid, the complete results of the bootstrap based on bivariate extreme value distributions can be found in Table C.2, and the results for the bootstrap procedure based on max-stable processes in Table C.3.

The time series used throughout the case study are shown in Figure C.1. Along with the April-September maxima of 1950-2021, we plot values of

\hat{\mu}(t)=\hat{\mu}\exp\left(\frac{\hat{\alpha}\,\mathrm{GMST^{\prime}}(t)}{\hat{\mu}}\right),

where \hat{\mu} and \hat{\alpha} are estimated on the data of location 21 only (blue line), the respective location d (red line) or the pooled data of the pair (21,d) (green line), for d\in\{1,\ldots,36\}\setminus\{21\}. Note that these three lines should not differ much when the homogeneity assumption holds. On the other hand, perfectly matching lines do not imply that the homogeneity assumption is true, since they do not give any information about the scale and shape parameters of the distributions.

                  MS bootstrap (B2)                          bivariate bootstrap (B3)
Pair      t_{n}   p_{\text{raw}}  p_{\text{BH}}  p_{\text{Holm}}   p_{\text{raw}}  p_{\text{BH}}  p_{\text{Holm}}
(21, 11) 78.8 0.00 0.00 0.00 0.05 0.75 0.75
(21, 16) 15.6 1.60 10.64 22.39 1.45 9.90 20.29
(21, 15) 15.2 2.50 10.64 32.48 2.50 9.90 32.48
(21, 10) 13.6 3.40 10.64 40.78 3.30 9.90 36.58
(21, 14) 13.7 3.55 10.64 40.78 3.05 9.90 36.58
(21, 23) 12.1 5.30 13.24 52.97 6.30 15.74 62.97
(21, 20) 10.4 7.15 15.09 64.32 9.85 16.36 78.76
(21, 27) 10.6 8.05 15.09 64.37 8.10 16.36 72.86
(21, 9) 9.8 10.00 15.59 69.97 10.44 16.36 78.76
(21, 22) 9.4 10.39 15.59 69.97 11.69 16.36 78.76
(21, 8) 9.0 13.04 17.79 69.97 11.99 16.36 78.76
(21, 26) 7.2 20.34 25.42 81.36 22.04 27.55 88.16
(21, 17) 4.4 46.08 53.17 100.00 45.73 52.76 100.00
(21, 28) 3.9 52.17 55.90 100.00 50.92 54.56 100.00
(21, 29) 2.8 66.92 66.92 100.00 65.77 65.77 100.00
Table C.1: (Adjusted) p-values for all three methods from Section 2.5 obtained with both bootstrap methods applied to the 4\times 4 grid. Values that are significant at the 10%-level are in boldface. Results are based on B=2000 bootstrap replications.


Figure C.1: Observations and fitted trend as estimated for each location d, d=1,\ldots,36 (red line), as well as the trend obtained from the GEV fit for location 21 (blue line) and the GEV fit obtained from the pooled sample consisting of the pair (21,d), d=1,\ldots,36 (green line). Location labels are given in the top right corner.
Pair      t_{n}   p_{\text{raw}}  p_{\text{BH}}  p_{\text{Holm}}
(21, 3) 41.5 0.10 0.50 3.50
(21, 4) 87.3 0.10 0.50 3.50
(21, 6) 46.0 0.10 0.50 3.50
(21, 11) 78.8 0.10 0.50 3.50
(21, 12) 75.2 0.10 0.50 3.50
(21, 13) 33.7 0.10 0.50 3.50
(21, 25) 38.5 0.10 0.50 3.50
(21, 19) 28.6 0.20 0.87 5.59
(21, 31) 33.3 0.30 1.17 8.09
(21, 7) 16.1 1.50 5.24 38.96
(21, 5) 15.7 1.70 5.40 42.46
(21, 16) 15.6 2.00 5.83 47.95
(21, 10) 13.6 2.70 6.99 62.04
(21, 15) 15.2 2.80 6.99 62.04
(21, 14) 13.8 4.10 9.56 86.01
(21, 33) 12.5 4.70 10.27 93.91
(21, 23) 12.1 6.89 14.19 100.00
(21, 20) 10.4 8.39 16.19 100.00
(21, 27) 10.6 8.79 16.19 100.00
(21, 36) 10.5 9.89 16.98 100.00
(21, 34) 9.9 10.19 16.98 100.00
(21, 2) 9.6 11.19 17.03 100.00
(21, 9) 9.8 11.19 17.03 100.00
(21, 22) 9.4 13.69 19.44 100.00
(21, 8) 9.0 13.89 19.44 100.00
(21, 35) 8.2 15.88 21.38 100.00
(21, 26) 7.2 19.78 25.64 100.00
(21, 30) 6.1 29.17 36.46 100.00
(21, 32) 5.8 33.07 39.91 100.00
(21, 17) 4.4 46.75 54.55 100.00
(21, 28) 3.9 52.75 59.55 100.00
(21, 29) 2.8 66.13 72.33 100.00
(21, 24) 2.7 70.23 74.49 100.00
(21, 1) 2.4 73.13 75.28 100.00
(21, 18) 1.8 83.82 83.82 100.00
Table C.2: (Adjusted) p-values for all three methods from Section 2.5 obtained with the bootstrap based on bivariate extreme value distributions. Values that are significant at the 10%-level are in boldface. Results are based on B=2000 bootstrap replications.
Pair      t_{n}   p_{\text{raw}}  p_{\text{BH}}  p_{\text{Holm}}
(21, 3) 41.5 0.00 0.00 0.00
(21, 4) 87.2 0.00 0.00 0.00
(21, 11) 78.8 0.00 0.00 0.00
(21, 12) 75.2 0.00 0.00 0.00
(21, 25) 38.5 0.00 0.00 0.00
(21, 6) 46.0 0.10 0.50 3.00
(21, 13) 33.4 0.10 0.50 3.00
(21, 31) 33.3 0.20 0.87 5.59
(21, 19) 28.7 0.30 1.17 8.09
(21, 7) 16.0 1.80 6.29 46.75
(21, 5) 15.7 2.60 7.53 64.94
(21, 15) 15.2 2.70 7.53 64.94
(21, 16) 15.6 2.80 7.53 64.94
(21, 14) 13.7 3.70 9.24 81.32
(21, 10) 13.6 4.40 10.26 92.31
(21, 23) 12.1 5.09 11.15 100.00
(21, 33) 12.5 5.89 12.13 100.00
(21, 27) 10.6 8.59 15.96 100.00
(21, 20) 10.4 8.89 15.96 100.00
(21, 36) 10.5 9.19 15.96 100.00
(21, 9) 9.8 9.59 15.96 100.00
(21, 2) 9.6 10.49 15.96 100.00
(21, 34) 9.9 10.49 15.96 100.00
(21, 22) 9.4 10.99 16.03 100.00
(21, 8) 9.0 12.39 17.34 100.00
(21, 35) 8.2 16.38 22.05 100.00
(21, 26) 7.2 21.88 28.36 100.00
(21, 30) 6.1 28.77 35.96 100.00
(21, 32) 5.8 31.87 38.46 100.00
(21, 17) 4.4 46.25 53.96 100.00
(21, 28) 3.9 50.95 57.52 100.00
(21, 29) 2.8 66.33 70.99 100.00
(21, 24) 2.7 66.93 70.99 100.00
(21, 1) 2.4 71.03 73.12 100.00
(21, 18) 1.8 81.92 81.92 100.00
Table C.3: (Adjusted) p-values for all three methods from Section 2.5 obtained with the bootstrap based on max-stable processes. Values that are significant at the 10%-level are in boldface. Results are based on B=2000 bootstrap replications.

References