
Improving the Hosmer-Lemeshow Goodness-of-Fit Test in Large Models with Replicated Trials

Nikola Surjanovic and Thomas M. Loughin
(Department of Statistics and Actuarial Science, Simon Fraser University, Burnaby, British Columbia V5A 1S6, Canada)

Summary

The Hosmer-Lemeshow (HL) test is a commonly used global goodness-of-fit (GOF) test that assesses the quality of the overall fit of a logistic regression model. In this paper, we give results from simulations showing that the type 1 error rate (and hence power) of the HL test decreases as model complexity grows, provided that the sample size remains fixed and binary replicates are present in the data. We demonstrate that the generalized version of the HL test by Surjanovic et al. (2020) can offer some protection against this power loss. We conclude with a brief discussion explaining the behaviour of the HL test, along with some guidance on how to choose between the two tests.

Key Words: chi-squared test, generalized linear model, goodness-of-fit test, Hosmer-Lemeshow test, logistic regression


Note: The latest version of this article is available at https://doi.org/10.1080/02664763.2023.2272223.

1 Introduction

Logistic regression models have gained a considerable amount of attention as a tool for estimating the probability of success of a binary response variable, conditioning on several explanatory variables. Researchers in health and medicine have used these models in a wide range of applications. One of many examples includes estimating the probability of hospital mortality for patients in intensive care units as a function of various covariates (Lemeshow et al., 1988).

Regardless of the application, it is always desirable to construct a model that fits the observed data well. One of several ways of assessing the quality of the fit of a model is with goodness-of-fit (GOF) tests (Bilder and Loughin, 2014). In general, GOF tests examine a null hypothesis that the structure of the fitted model is correct. They may additionally identify specific alternative models or deviations to test against, but this is not required. Global (omnibus) GOF tests are useful tools that allow one to assess the validity of a model without restricting the alternative hypothesis to a specific type of deviation.

An example of a well-known global GOF test for logistic regression is the Hosmer-Lemeshow (HL) test, introduced by Hosmer and Lemeshow (1980). The test statistic is a Pearson statistic that compares observed and expected event counts from data grouped according to ordered fitted values from the model. The decision rule for the test is based on comparing the test statistic to a chi-squared distribution with degrees of freedom that depend on the number of groups used to create the test statistic. The HL test is relatively easy to implement in statistical software, and since its creation, the HL test has become quite popular, particularly in fields such as biostatistics and the health sciences.

Despite its popularity, the HL test is known to have some drawbacks. In both experimental and observational studies, it is possible to have data for which binary observations have the same explanatory variable patterns (EVPs). In this case, responses can be aggregated into binomial counts and trials. When there are observations with the same EVP present in the data, it is possible to obtain many different p-values depending on how the data are sorted (Bertolini et al., 2000). A related disadvantage of HL-type tests is that the test statistic depends on the way in which groups are formed, as remarked upon by Hosmer et al. (1997). In this paper we will highlight a related but different problem with the HL test that does not appear to be well known.

Models for logistic regression with binary responses normally assume a Bernoulli model where the probability parameter is related to explanatory variables through a logit link. As mentioned, when several observations have the same EVP, responses can be summed into binomial counts. Other times, the joint distribution of covariates may cause observed values to be clustered into near-replicates, so that the responses might be viewed as being approximately binomially distributed. These cases present no problem for model fitting and estimating probabilities. However, it turns out that this clustering in the covariate space may materially impact the validity of the HL test applied to the fitted models.

For such data structures, a chi-squared distribution does not represent the null distribution of the HL test statistic well in finite samples, as suggested by simulation results in Section 4. This, in turn, adversely affects both the type 1 error rate and the power of the HL test. Bertolini et al. (2000) demonstrated that it is possible to obtain a wide variety of p-values and test statistics when there are replicates in the data, simply by reordering the data. Our analysis also deals with replicates in the data. However, we find a different phenomenon: as the model size grows for a fixed sample size, the type 1 error rate tends to decrease.

In this paper we show that the HL test can be improved upon by using another existing global GOF test, the generalized HL (GHL) test from Surjanovic et al. (2020). Empirical results suggest that the GHL test performs reasonably well even when applied to moderately large models fit on data with exact replicates or clusters in the covariate space. We offer a brief discussion as to why one might expect clustering in the covariate space to affect the regular HL test. A simple decision tree is offered to summarize when each test is most appropriate.

An overview of the HL and GHL tests is given in Section 2. The design of the simulation study comparing the performance of these two tests is outlined in Section 3, with the results given in Section 4. We end with a discussion of the implications of these results and offer some guidance on how to choose between the two tests in Section 5.

2 Methods

In what follows, we use the notation of Surjanovic et al. (2020). We let $Y \in \{0,1\}$ denote a binary response variable associated with a $d$-dimensional covariate vector, $X \in \mathbb{R}^{d}$, where the first element of $X$ is equal to one. The pairs $(X_{i}, Y_{i})$, $i = 1, \ldots, n$, denote a random sample, with each pair being distributed according to the joint distribution of $(X, Y)$. The observed values of $(X_{i}, Y_{i})$ are denoted using lowercase letters as $(x_{i}, y_{i})$.

In a logistic regression model with binary responses, one assumes that

\operatorname{E}(Y \mid X = x) = \pi(\beta^{\top} x) = \frac{\exp(\beta^{\top} x)}{1 + \exp(\beta^{\top} x)},

for some $\beta \in \mathbb{R}^{d}$. The likelihood function is

\mathcal{L}(\beta) = \prod_{i=1}^{n} \pi(\beta^{\top} x_{i})^{y_{i}} \left(1 - \pi(\beta^{\top} x_{i})\right)^{1 - y_{i}}.

From this likelihood, a maximum likelihood estimate (MLE), $\beta_{n}$, of $\beta$ is obtained.


The HL Test Statistic
To compute the HL test statistic, one partitions the observed data, $(x_{i}, y_{i})$, into $G$ groups. Typically, the groups are created so that fitted values are similar within each group and the groups are approximately of equal size. To achieve this, a partition is defined by a collection of $G+1$ interval endpoints, $-\infty = k_{0} < k_{1} < \cdots < k_{G-1} < k_{G} = \infty$. The $k_{g}$ often depend on the data, usually being set equal to the logits of equally-spaced quantiles of the fitted values, $\hat{\pi}_{i} = \pi(\beta_{n}^{\top} x_{i})$. We define $I_{i}^{(g)} = \mathbbm{1}(k_{g-1} < \beta_{n}^{\top} x_{i} \leq k_{g})$, $O_{g} = \sum_{i=1}^{n} y_{i} I_{i}^{(g)}$, $E_{g} = \sum_{i=1}^{n} \hat{\pi}_{i} I_{i}^{(g)}$, $n_{g} = \sum_{i=1}^{n} I_{i}^{(g)}$, and $\bar{\pi}_{g} = E_{g}/n_{g}$, where $\mathbbm{1}(A)$ is the indicator function on a set $A$. With this notation, $n_{g}$ represents the number of observations in the $g$th group, and $\bar{\pi}_{g}$ denotes the mean of the fitted values in this group. The HL test statistic is a quadratic form that is commonly written in summation form as

\widehat{C}_{G} = \sum_{g=1}^{G} \frac{(O_{g} - E_{g})^{2}}{n_{g} \bar{\pi}_{g} (1 - \bar{\pi}_{g})}.
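
As a concrete illustration, the grouping and summation above can be sketched in a few lines of Python. This is a hypothetical helper, not the authors' implementation: in practice the $\hat{\pi}_{i}$ come from a fitted logistic regression, and ties among fitted values require the care discussed in Section 1. The p-value uses the $\chi^{2}_{G-2}$ reference distribution, whose survival function has a closed form when the degrees of freedom are even.

```python
import math
import random

def hosmer_lemeshow(y, pi_hat, G=10):
    """HL statistic C_G: sort observations by fitted value, split into
    G roughly equal-sized groups ("deciles of risk" when G = 10), and
    compare observed and expected event counts within each group."""
    n = len(y)
    order = sorted(range(n), key=lambda i: pi_hat[i])
    stat = 0.0
    for g in range(G):
        idx = order[g * n // G:(g + 1) * n // G]   # g-th group
        n_g = len(idx)
        O_g = sum(y[i] for i in idx)               # observed events
        E_g = sum(pi_hat[i] for i in idx)          # expected events
        pi_bar = E_g / n_g                         # mean fitted value
        stat += (O_g - E_g) ** 2 / (n_g * pi_bar * (1.0 - pi_bar))
    return stat

def chi2_sf_even_df(x, df):
    """Survival function of a chi-squared variable with EVEN df:
    P(X > x) = exp(-x/2) * sum_{j < df/2} (x/2)^j / j!."""
    t = x / 2.0
    term, s = 1.0, 0.0
    for j in range(df // 2):
        if j > 0:
            term *= t / j
        s += term
    return math.exp(-t) * s

# Toy illustration: the true probabilities stand in for fitted values,
# so the statistic should be modest relative to a chi-squared(8).
random.seed(1)
n = 500
pi_true = [1.0 / (1.0 + math.exp(-(0.1 + 0.535 * random.gauss(0.0, 1.0))))
           for _ in range(n)]
y = [1 if random.random() < p else 0 for p in pi_true]
C = hosmer_lemeshow(y, pi_true, G=10)
p_value = chi2_sf_even_df(C, df=8)   # compare to chi-squared(G - 2)
```

The closed-form survival function avoids any dependence on a statistics library; for odd degrees of freedom one would instead need the regularized incomplete gamma function.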

When $G > d$, Hosmer and Lemeshow (1980) find that the HL test statistic is asymptotically distributed as a weighted sum of chi-squared random variables under the null hypothesis, after checking certain conditions of Theorem 5.1 in Moore and Spruill (1975). Precisely,

\widehat{C}_{G} \xrightarrow{d} \chi^{2}_{G-d} + \sum_{j=1}^{d} \lambda_{j} \chi_{1j}^{2}, \qquad (1)

with each $\chi_{1j}^{2}$ being a chi-squared random variable with 1 degree of freedom, and each $\lambda_{j}$ an eigenvalue of a certain matrix that depends on both the distribution of $X$ and the vector $\beta_{0}$, the true value of $\beta$ under the null hypothesis. Hosmer and Lemeshow (1980) conclude through simulations that the right side of (1) is well approximated by a $\chi^{2}_{G-2}$ distribution in various settings.

The HL test statistic and the corresponding p-value both depend on the chosen number of groups, $G$. Typically, $G = 10$ groups are used, so that observations are partitioned into groups that are associated with “deciles of risk”. Throughout this paper we use 10 groups, and therefore compare the HL test statistic to a chi-squared distribution with 8 degrees of freedom, but the results hold for more general choices of $G$.


The GHL Test Statistic
The GHL test introduced by Surjanovic et al. (2020) generalizes several GOF tests, allowing them to be applied to other generalized linear models. Tests that are generalized by the GHL test include the HL test (Hosmer and Lemeshow, 1980), the Tsiatis (Tsiatis, 1980) and generalized Tsiatis tests (Canary et al., 2016), and a version of the “full grouped chi-square” from Hosmer and Hjort (2002) with all weights equal to one. The test statistic is a quadratic form like $\widehat{C}_{G}$, but with important changes to the central matrix. The theory behind this test depends on the residual process, $R_{n}^{1}(u)$, $u \in \mathbb{R}$, defined in Stute and Zhu (2002). In the case of logistic regression,

R_{n}^{1}(u) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \left[ Y_{i} - \pi(\beta_{n}^{\top} X_{i}) \right] \mathbbm{1}(\beta_{n}^{\top} X_{i} \leq u),

a cumulative sum of residuals that are ordered according to the size of their corresponding fitted values. This process is transformed into a $G$-dimensional vector, $S_{n}^{1}$, which forms the basis of the HL and GHL test statistics, with

S_{n}^{1} = \left( R_{n}^{1}(k_{1}) - R_{n}^{1}(k_{0}), \ldots, R_{n}^{1}(k_{G}) - R_{n}^{1}(k_{G-1}) \right)^{\top}.

In order to approximate the variance of $S_{n}^{1}$, we need to define several matrices. Let

\left(G_{n}^{*}\right)_{gi} = I_{i}^{(g)}, \qquad V^{*1/2} = \operatorname{diag}\left( \left[ \pi(\beta_{0}^{\top} x_{i}) (1 - \pi(\beta_{0}^{\top} x_{i})) \right]^{1/2} \right),

for $i = 1, \ldots, n$, and $g = 1, \ldots, G$. Also, define $X^{*}$ to be the $n \times d$ matrix with $i$th row given by $x_{i}^{\top}$, and let $V_{n}^{*1/2}$ be the same as $V^{*1/2}$, but evaluated at the estimate $\beta_{n}$ of $\beta_{0}$. Finally, define

\Sigma_{n} = \frac{1}{n} G_{n}^{*} \left( V_{n}^{*} - V_{n}^{*} X^{*} (X^{*\top} V_{n}^{*} X^{*})^{-1} X^{*\top} V_{n}^{*} \right) G_{n}^{*\top} = \frac{1}{n} G_{n}^{*} V_{n}^{*1/2} \left( I_{n} - V_{n}^{*1/2} X^{*} (X^{*\top} V_{n}^{*} X^{*})^{-1} X^{*\top} V_{n}^{*1/2} \right) V_{n}^{*1/2} G_{n}^{*\top}, \qquad (2)

where $I_{n}$ is the $n \times n$ identity matrix.

For logistic regression models, the GHL test statistic is then

X^{2}_{\text{GHL}} = S_{n}^{1\top} \Sigma_{n}^{+} S_{n}^{1},

where $\Sigma_{n}^{+}$ is the Moore-Penrose pseudoinverse of $\Sigma_{n}$. Under certain conditions given by Surjanovic et al. (2020),

S_{n}^{1\top} \Sigma_{n}^{+} S_{n}^{1} \xrightarrow{d} \chi^{2}_{\nu},

where $\nu = \operatorname{rank}(\Sigma)$, with $\Sigma$ a matrix defined in their paper. Since the rank of $\Sigma$ might be unknown, they use the rank of $\Sigma_{n}$ as an estimate. We use the same approach to estimating $\nu$, empirically finding that the estimated rank of $\Sigma_{n}$ is often equal to $G - 1$ for logistic regression models.
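
The pieces above can be assembled into a compact sketch, assuming NumPy is available. For brevity this toy version plugs in the true $\beta$ in place of the MLE and uses equal-count groups formed from the sorted linear predictor rather than the paper's grouping rule, so it is illustrative only; the function name is hypothetical.

```python
import numpy as np

def ghl_statistic(y, X, beta, G=10):
    """Sketch of the GHL statistic: grouped residual vector S_n^1,
    covariance estimate Sigma_n as in equation (2), and the quadratic
    form with a Moore-Penrose pseudoinverse."""
    n, d = X.shape
    eta = X @ beta
    pi = 1.0 / (1.0 + np.exp(-eta))
    v = pi * (1.0 - pi)                      # Bernoulli variances
    # Group membership matrix G_n^* (G x n): equal-count groups by
    # sorted linear predictor (a stand-in for the paper's grouping).
    order = np.argsort(eta)
    Gmat = np.zeros((G, n))
    for g in range(G):
        Gmat[g, order[g * n // G:(g + 1) * n // G]] = 1.0
    # Grouped residual vector S_n^1 = G_n^* (y - pi) / sqrt(n)
    S = Gmat @ (y - pi) / np.sqrt(n)
    # Sigma_n = (1/n) G (V - V X (X' V X)^{-1} X' V) G'
    VX = X * v[:, None]                      # V X
    M = np.diag(v) - VX @ np.linalg.solve(X.T @ VX, VX.T)
    Sigma = Gmat @ M @ Gmat.T / n
    nu = int(np.linalg.matrix_rank(Sigma))   # estimated df, often G-1
    stat = float(S @ np.linalg.pinv(Sigma) @ S)
    return stat, nu

# Small null-model example; compare stat to a chi-squared(nu).
rng = np.random.default_rng(0)
n, d = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, d - 1))])
beta = np.array([0.1] + [0.535 / np.sqrt(d - 1)] * (d - 1))
pi = 1.0 / (1.0 + np.exp(-(X @ beta)))
y = (rng.random(n) < pi).astype(float)
stat, nu = ghl_statistic(y, X, beta, G=10)
```

Because the intercept column lies in the span of $X^{*}$, the rows of $\Sigma_{n}$ sum to zero, which is why the estimated rank typically comes out as $G - 1$ rather than $G$.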

The GHL test statistic for logistic regression models is equivalent to the Tsiatis GOF test statistic (Tsiatis, 1980) and the $X^{2}_{w}$ statistic from Hosmer and Hjort (2002) with all weights set equal to 1, when a particular grouping method is used, that is, when $G_{n}^{*}$ is the same for all methods. However, use of the GHL test is justified for a wide variety of GLMs and grouping procedures with a proper modification of $\Sigma_{n}$, as described by Surjanovic et al. (2020). For both the HL and GHL tests, we use the grouping method proposed by Surjanovic et al. (2020), which uses random interval endpoints, $k_{g}$, so that $\sum_{i=1}^{n} \hat{\pi}_{i} (1 - \hat{\pi}_{i}) I_{i}^{(g)}$ is roughly equal between groups. Further details of the implementation are provided in the supplementary material of their paper.

It is important to note that $\Sigma_{n}$ is a non-diagonal matrix that standardizes and accounts for correlations between the grouped residuals in the vector $S_{n}^{1}$. This can be seen from (2), which shows that $\Sigma_{n}$ contains a generalized hat matrix for logistic regression. In contrast, when written as a quadratic form, the central matrix of the HL test statistic is diagonal and does not account for the number of parameters in the model, $d$, when standardizing the grouped residuals. We expect this standardization to be very important when exact replicates are present, as the binomial responses might be more influential than sparse, individual binary responses.

It is extremely common to fit logistic regression models to data where multiple Bernoulli trials are observed at some or all EVPs, even when the underlying explanatory variables are continuous. As with any fitted model, a test of model fit would be appropriate, and the HL test would likely be a candidate in a typical problem. It is therefore important to explore how the HL and GHL tests behave with large models when exact or near-replicates are present in the data.

3 Simulation Study Design

We compare the performance of the HL and GHL tests by performing a simulation study. Of particular interest is the rejection rate under the null, when the tests are applied to moderately large models that are fit to data with clusters or exact replicates in the covariate space.

In all settings, the true regression model is

\operatorname{E}(Y \mid X = x) = \operatorname{logit}^{-1}(\beta_{0} + \beta_{1} x_{1} + \cdots + \beta_{d-1} x_{d-1}), \qquad (3)

with $d \in \{2, 3, \ldots, 25\}$. Here, $\beta_{0}$ represents the intercept term. To produce replicates in the covariate space, $m \leq n$ unique EVPs are drawn randomly from a $(d-1)$-dimensional spherical normal distribution with marginal mean 0 and marginal variance $\sigma^{2} = 1$ for each simulation realization. At each EVP, $n/m$ replicate Bernoulli trials are then created, with probabilities determined by (3). In our simulation study, we fix $n = 500$ and select $m \in \{50, 100, 500\}$ so that the number of replicates at each EVP, $n/m$, is 10, 5, or 1, respectively.

We set $\beta_{0} = 0.1$ and $\beta_{1} = \cdots = \beta_{d-1} = 0.535/\sqrt{d-1}$. This results in fitted values that rarely fall outside the interval $[0.1, 0.9]$, regardless of the number of parameters in the model, so that the expected counts in each group are sufficiently large for the use of the Pearson-based test statistics.
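
As a rough sketch, this data-generating mechanism can be written in a few lines of plain Python (the helper name is hypothetical, and the study itself was run in R):

```python
import math
import random

def make_replicated_data(n=500, m=100, d=6, seed=42):
    """Draw m unique EVPs from a (d-1)-dimensional spherical normal,
    replicate each n/m times, and generate Bernoulli responses from
    the true logistic model with beta_0 = 0.1 and equal slopes
    0.535 / sqrt(d - 1)."""
    assert n % m == 0, "n/m replicates per EVP must be an integer"
    rng = random.Random(seed)
    b0 = 0.1
    slope = 0.535 / math.sqrt(d - 1)
    X, y = [], []
    for _ in range(m):
        x = [rng.gauss(0.0, 1.0) for _ in range(d - 1)]   # one EVP
        eta = b0 + slope * sum(x)
        p = 1.0 / (1.0 + math.exp(-eta))                  # true prob.
        for _ in range(n // m):                           # replicates
            X.append(x)
            y.append(1 if rng.random() < p else 0)
    return X, y

X, y = make_replicated_data(n=500, m=100, d=6)   # 5 replicates per EVP
```

Adding independent noise with variance $\sigma^{2}_{e}$ to each replicated row of `X` would give the near-replicate variant examined in Section 4.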

We also perform some simulations with $n = 100$, using smaller values of $d$ and $m$ than for $n = 500$. However, we focus on results for $n = 500$ because we are then able to increase the number of replicates per EVP, $n/m$, while still maintaining a large enough $m$ that it is possible to create ten viable groupings. In each simulation setting, 10,000 realizations are produced. All simulations are performed using R.

4 Simulation Results

Figure 1 presents plots of the sample mean and variance of the HL and GHL test statistics against the number of variables in the model, separately for each $m$. An analogous plot of the estimated type 1 error rate of the tests against the number of variables is also presented. For the HL test, all three statistics show a clear decreasing pattern with increasing model size when replicates are present, with a sharper decrease when the number of replicates per EVP is larger. Since the estimated variance is not always twice the size of the mean, we can infer that the chi-squared approximation to the null distribution of the HL test statistic is not adequate in finite samples for these data structures. Simulation results with a sample size of $n = 100$ are not displayed, but are quite similar.

From the same figure, we see that the GHL test performs well in the settings considered. The estimated mean and variance of the test statistic stay close to the desired values of $G - 1 = 9$ and $2(G - 1) = 18$. We note that the GHL test can have an inflated type 1 error rate, particularly when it is applied to highly complex models. The models considered here are only moderately large, with $d \leq \min\{n/20, m/2\}$. If one wishes to use the GHL test to assess the quality of fit of larger models with only a moderate sample size, one should be wary of an inflated type 1 error rate that can become considerably large for complex models. A possible explanation is that estimating the off-diagonal elements of the matrix $\Sigma_{n}$ can introduce a considerable amount of variance into the test statistic in small samples.

Recall from (1) that the asymptotic $\chi^{2}_{G-2}$ distribution for the HL test proposed by Hosmer and Lemeshow (1980) is based on a sum of chi-squares, where one has $G - d$ degrees of freedom. We investigated whether maintaining $G = 10$ while increasing $d$ contributes to the phenomena we have observed. We set $G = 26$ and performed a similar simulation study. The adverse behaviour of the HL statistic persists despite this modification.

We also investigated the effect of near-replicate clustering in the covariate space. We fixed $n$ and $m$ as in Section 3, but added a small amount of random noise with marginal variance $\sigma^{2}_{e}$ to each replicate within the $m$ sampled vectors. The amount of clustering was controlled by varying $\sigma^{2}_{e}$, as shown in Figure 2. As expected, increasing $\sigma^{2}_{e}$ reduces the severity of the decreasing mean, variance, and type 1 error rate for the HL test statistic. However, the pattern remains evident while $\sigma^{2}_{e}/\sigma^{2}$ remains small.

5 Discussion

The original HL test, developed by Hosmer and Lemeshow (1980), is a commonly used test for logistic regression among researchers in biostatistics and the health sciences. Although its performance is well documented (Lemeshow and Hosmer, 1982; Hosmer et al., 1997; Hosmer and Hjort, 2002), we have identified an issue that does not seem to be well known. For moderately large logistic regression models fit to data with clusters or exact replicates in the covariate space, the null distribution of the HL test statistic can fail to be adequately represented by a chi-squared distribution in finite samples. Using the original chi-squared distribution with $G - 2$ degrees of freedom can result in a reduced type 1 error rate, and hence lower power to detect model misspecification. Based on the results of the simulation study, the GHL test can perform noticeably better in such settings, albeit with a potentially inflated type 1 error rate.

Similar behaviour of the HL test was observed in Surjanovic et al. (2020), where the regular HL test was naively generalized to allow it to be used with Poisson regression models. In their setup, even without clusters or exact replicates in the covariate space, the estimated type 1 error rate decreased as the number of model parameters increased for a fixed sample size. The central matrix in the GHL test statistic, $\Sigma_{n}$, makes a form of correction to the HL test statistic by standardizing and by accounting for correlations between the grouped residuals that comprise the quadratic form in both the HL and GHL tests. This is evident from (2), which shows that $\Sigma_{n}$ contains the generalized hat matrix subtracted from an identity matrix. To empirically assess the behaviour of $\Sigma_{n}$, we varied $\sigma^{2}_{e}$ in the setup with replicates and added noise, described at the end of Section 4. For large $d$ and moderate $n$, both fixed, we found that the diagonal elements of $\Sigma_{n}$ tend to shrink, on average, as $\sigma^{2}_{e}$ decreases. In contrast, the elements of the HL central matrix remain roughly constant. The GHL statistic therefore seems to adapt to clustering or replicates in $X$, whereas the HL test statistic does not.

In logistic regression with exact replicates, grouped binary responses can be viewed as binomial responses that can be more influential. In this scenario, as $d$ increases for a fixed sample size $n$, the distribution of the regular HL test statistic diverges from a single chi-squared distribution, suggesting that the standardization offered by the central GHL matrix becomes increasingly important.

The practical implication of the reduced type 1 error rate and power of the regular HL test is that, in models with a considerable number of variables fit to data containing clusters or exact replicates, the HL test has limited ability to detect model misspecification. Failure to detect model misspecification can result in retaining an inadequate model, which is arguably worse than rejecting an adequate model due to an inflated type 1 error rate, particularly when logistic regression models are used to estimate probabilities from which life-and-death decisions might be made.

Our advice for choosing between the two GOF tests is displayed as a simple decision tree in Figure 3. The advice should be interpreted for $G = 10$ groups, the most commonly used number of groups. With large samples, provided that $m$ is sufficiently large compared to $d$, it should generally be safe to use the GHL test. Our simulations explored models with $d \leq 25$, so some caution should be exercised if the GHL test is to be used with larger models. For small or moderate samples, such as when $n = 100$ or $500$, it is important to identify whether there are clusters or exact replicates in the covariate space. One can compute the number of unique EVPs, $m$, and compare this number to the sample size, $n$. If $n/m \geq 5$, say, then there is a considerable amount of “clustering”. For data without exact replicates, clusters can still be detected using one of many existing clustering algorithms, and the average distances between and within clusters can be compared. Informal plots of the $x_{i}$ projected onto a two- or three-dimensional space can also be used as an aid in this process.
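
The $n/m$ screening rule for exact replicates takes only a line or two; the helper below is a hypothetical illustration of the computation, not a formal diagnostic.

```python
def replication_ratio(evps):
    """Ratio n/m of sample size to the number of unique EVPs.
    By the rule of thumb in the text, a ratio of roughly 5 or more
    suggests a considerable amount of replication ("clustering")."""
    n = len(evps)
    m = len({tuple(x) for x in evps})   # count unique EVPs
    return n / m

# Ten copies each of three EVPs: n = 30, m = 3, so the ratio is 10,
# well past the n/m >= 5 threshold.
evps = [(0.5, 1.2), (1.1, -0.3), (-0.7, 0.9)] * 10
ratio = replication_ratio(evps)
```

For near-replicates this exact-match count understates the clustering, which is why the text recommends clustering algorithms or low-dimensional plots in that case.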

If there is no evidence of clustering or replicates, the HL test should not be disturbed by this phenomenon. On the other hand, if there is a noticeable amount of clustering, and the regression model is not too large, say $d \leq \min\{n/20, m/2\}$, where $m$ also represents the number of estimated clusters, then one can use the GHL test. In the worst-case scenario with a small sample size, clustering, and a large regression model, one can use both tests as an informal aid in assessing the quality of the fit of the model, recognizing that GHL may overstate the lack of fit, while HL may understate it. If the two tests agree, then this suggests that the decision is not influenced by the properties of the tests. When they disagree, conclusions should be drawn more tentatively.

Acknowledgements

We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), [funding reference number RGPIN-2018-04868]. Cette recherche a été financée par le Conseil de recherches en sciences naturelles et en génie du Canada (CRSNG), [numéro de référence RGPIN-2018-04868].

References

  • Bertolini et al. (2000) G Bertolini, Roberto D’amico, D Nardi, A Tinazzi, and G Apolone. One model, several results: the paradox of the Hosmer-Lemeshow goodness-of-fit test for the logistic regression model. Journal of Epidemiology and Biostatistics, 5(4):251–253, 2000.
  • Bilder and Loughin (2014) Christopher R. Bilder and Thomas M. Loughin. Analysis of categorical data with R. Chapman and Hall/CRC, 2014.
  • Canary et al. (2016) Jana D. Canary, Leigh Blizzard, Ronald P. Barry, David W. Hosmer, and Stephen J. Quinn. Summary goodness-of-fit statistics for binary generalized linear models with noncanonical link functions. Biometrical Journal, 58(3):674–690, 2016.
  • Hosmer and Hjort (2002) David W. Hosmer and Nils L. Hjort. Goodness-of-fit processes for logistic regression: simulation results. Statistics in Medicine, 21(18):2723–2738, 2002.
  • Hosmer and Lemeshow (1980) David W. Hosmer and Stanley Lemeshow. Goodness of fit tests for the multiple logistic regression model. Communications in Statistics - Theory and Methods, 9(10):1043–1069, 1980.
  • Hosmer et al. (1997) David W. Hosmer, Trina Hosmer, Saskia Le Cessie, and Stanley Lemeshow. A comparison of goodness-of-fit tests for the logistic regression model. Statistics in Medicine, 16(9):965–980, 1997.
  • Lemeshow and Hosmer (1982) Stanley Lemeshow and David W. Hosmer. A review of goodness of fit statistics for use in the development of logistic regression models. American Journal of Epidemiology, 115(1):92–106, 1982.
  • Lemeshow et al. (1988) Stanley Lemeshow, Daniel Teres, Jill Spitz Avrunin, and Harris Pastides. Predicting the outcome of intensive care unit patients. Journal of the American Statistical Association, 83(402):348–356, 1988.
  • Moore and Spruill (1975) David S. Moore and Marcus C. Spruill. Unified large-sample theory of general chi-squared statistics for tests of fit. The Annals of Statistics, pages 599–616, 1975.
  • Stute and Zhu (2002) Winfried Stute and Li-Xing Zhu. Model checks for generalized linear models. Scandinavian Journal of Statistics, 29(3):535–545, 2002.
  • Surjanovic et al. (2020) Nikola Surjanovic, Richard Lockhart, and Thomas M. Loughin. A generalized Hosmer-Lemeshow goodness-of-fit test for a family of generalized linear models. arXiv preprint arXiv:2007.11049, 2020.
  • Tsiatis (1980) Anastasios A. Tsiatis. A note on a goodness-of-fit test for the logistic regression model. Biometrika, 67(1):250–251, 1980.
Figure 1: Null simulation results. Solid red lines are approximate 95% CIs. Intervals are omitted for the type 1 error rate plot, but can be approximated by adding and subtracting 0.005 from the estimated rejection rate.
Figure 2: Example of clustering in the covariate space with two predictor variables, $X_{1}$ and $X_{2}$. Top left to bottom right: $\sigma^{2}_{e} = 0$, $0.001$, $0.01$, and $0.1$, with $\sigma^{2} = 1$.
[Figure 3, rendered as text:]
- Is the model small or moderate in size ($d \leq \min\{n/20, m/2\}$)?
  - No: Are there replicates or clusters of observations?
    - No: use HL.
    - Yes: try both tests, but proceed with caution.
  - Yes: Very large $n$?
    - No: Are there replicates or clusters of observations?
      - No: use HL.
      - Yes: use GHL or both tests.
    - Yes: use GHL.
Figure 3: Decision tree offering guidance on how to choose between the two GOF tests when $G = 10$ and $d \lesssim 25$. In each decision, left = no and right = yes.