
Generalised Linear Mixed Model Analysis, Maximum Likelihood Fitting, Prediction, Simulation, and Optimal Design in \proglangR

Samuel Watson
University of Birmingham
S.I.Watson@bham.ac.uk

Abstract: We describe the \proglangR package \pkgglmmrBase and an extension \pkgglmmrOptim. \pkgglmmrBase provides a flexible approach to specifying, fitting, and analysing generalised linear mixed models. We use an object-orientated class system within \proglangR to provide methods for a wide range of covariance and mean functions relevant to multiple applications, including cluster randomised trials, cohort studies, spatial and spatio-temporal modelling, and split-plot designs. The class generates relevant matrices and statistics and provides a wide range of methods, including full maximum likelihood estimation of generalised linear mixed models using stochastic maximum likelihood, Laplace approximation, power calculation, standard error corrections, random effect and broader data simulation, and access to relevant calculations. The package includes Markov Chain Monte Carlo simulation of random effects, sparse matrix methods, and other functionality to support efficient estimation. The \pkgglmmrOptim package implements a set of algorithms to identify c-optimal experimental designs where observations are correlated and can be specified using the generalised linear mixed model classes. Several examples and comparisons to existing packages are provided to illustrate use of the packages.

Keywords: Markov Chain Monte Carlo Maximum Likelihood, Stochastic Approximation Expectation Maximisation, Generalised Linear Mixed Model, Non-linear model, \proglangR, \proglangC++, c-Optimal Experimental Design

Address: Samuel Watson
Institute for Applied Health Research
University of Birmingham
Birmingham, UK
E-mail: S.I.Watson@bham.ac.uk

1 Introduction

Generalised linear mixed models (GLMMs) are a highly flexible class of statistical models that incorporate both ‘fixed’ and ‘random’ effects. GLMMs permit the incorporation of latent effects and parameters and allow for complex covariance structures. For example, they are widely used in the analysis of clustered data, such as from cluster randomised trials, to capture latent cluster means; in cohort studies, to incorporate temporal correlation between observations on individuals; and in geospatial statistical models, where the random effects are the realisation of a Gaussian process used to model a latent spatial or spatio-temporal surface. Their use in such a wide variety of statistical applications means there exist several packages and libraries for software including \proglangR, Stata, and SAS that provide relevant calculations for study design, model fitting, and other analyses.

For many types of analysis, such as a power calculation for a cluster trial, there exist multiple different packages each implementing a set of specific models. Users may therefore be required to use multiple packages with different interfaces for a single analysis. A more general and flexible system that provides a wide range of functionality for this model type and that permits users to add and extend functionality may therefore simplify statistical workflows and facilitate more complex analyses.

In this article, we describe the \pkgglmmrBase package and an extension \pkgglmmrOptim for the \proglangR programming language. These packages provide a general framework for GLMM specification, including non-linear functions of parameters and data, with calculation of relevant matrices, statistics, and other functions designed to provide useful analyses for a large range of model specifications and to support the implementation of other GLMM-related software in \proglangR. The aim of this \proglangR package (and its underlying \proglangC++ library) is to provide a combination of features not available together in other software packages: Markov Chain Monte Carlo (MCMC) maximum likelihood model fitting, run-time specification of non-linear fixed effect forms for GLMMs, GLMM model fitting with flexible, easy-to-specify covariance functions, data simulation and model summary features, easy access to relevant matrices and calculations, and support for extensions and algorithms including optimal design algorithms. We summarise and compare existing software that provides functionality in these areas where relevant in each section.

1.1 Generalised linear mixed models

A generalised linear mixed model (GLMM) has the linear predictor for observation $i$

$$\eta_i = g(\mathbf{x}_i, \boldsymbol{\beta}) + \mathbf{z}_i\mathbf{u}$$

where $\mathbf{x}_i$ is the $i$th row of matrix $X$, which is an $n \times P$ matrix of covariates, $\boldsymbol{\beta}$ is a vector of parameters, $\mathbf{z}_i$ is the $i$th row of matrix $Z$, which is the $n \times Q$ “design matrix” for the random effects, and $\mathbf{u} \sim N(0, D)$, where $D$ is the $Q \times Q$ covariance matrix of the random effects terms, which depends on parameters $\boldsymbol{\theta}$. In this article we use bold lower case characters, e.g. $\mathbf{x}_i$, to represent vectors, normal script lower case, e.g. $\beta_1$, to represent scalars, and upper case letters, e.g. $X$, to represent matrices.

The model is then

$$y_i \sim G(h(\eta_i); \phi)$$

where $\mathbf{y}$ is an $n$-length vector of outcomes with elements $y_i$, $G$ is a distribution, $h(.)$ is the link function such that $\mu_i = h(\eta_i)$ where $\mu_i$ is the mean value, and $\phi$ is an additional scale parameter to complete the specification.

When $G$ is a distribution in the exponential family, we have:

$$\begin{aligned} f_{y|\mathbf{u}}(y_i|\mathbf{u},\beta,\phi) &= \exp\left\{(y_i\eta_i - c(\eta_i))/a(\phi) + d(y,\phi)\right\} \\ \mathbf{u} &\sim f_{\mathbf{u}}(\mathbf{u}|\theta) \end{aligned} \qquad (1)$$

The likelihood of this model is given by:

$$L(\beta,\phi,\theta|\mathbf{y}) = \int \prod_{i=1}^{n} f_{y|\mathbf{u}}(y_i|\mathbf{u},\beta,\phi) f_{\mathbf{u}}(\mathbf{u}|\theta) \, d\mathbf{u} \qquad (2)$$

The likelihood (2) generally has no closed form solution, and so different algorithms and approximations have been proposed to estimate the model parameters. We discuss model fitting in Section 3. Where relevant we represent the set of all parameters as $\boldsymbol{\Theta} = (\boldsymbol{\beta},\phi,\boldsymbol{\theta})$.

2 A GLMM Model in glmmrBase

The \pkgglmmrBase package defines a \codeModel class using the \pkgR6 class system (Chang, 2022), which provides an encapsulated object orientated programming system for \proglangR. Most of the functionality of the package revolves around the \codeModel class, which interfaces with underlying \proglangC++ classes. A non-exhaustive list of the functions and objects calculated by, or members of, the \codeModel class is shown in Table 1. The class also contains two subclasses, the \codecovariance and \codemean classes, which handle the random effects and linear predictor, respectively. \pkgR6 classes allow encapsulating classes to ‘share’ class objects, so that a single \codecovariance object could be shared by multiple \codeModel objects. \pkgR6 also provides familiar object-orientated functionality, including class inheritance, so that new classes can be created that inherit from the \codeModel class to make use of its range of functions. Within \proglangR, we use the \pkgMatrix package for matrix storage and linear algebra operations, and the Eigen \proglangC++ library for linear algebra functionality in \proglangC++. Linkage between \proglangR and \proglangC++ is provided through the \pkgRcpp and \pkgRcppEigen packages (Eddelbuettel and François, 2011). We describe the base functionality and model specification in this section, and then describe higher-level functionality, including model fitting, in subsequent sections. The underlying \proglangC++ library is header-only and so can be imported into other projects.

Method Description Sec.
\codecovariance \codeD The matrix $D$ 2.4
\codeZ The matrix $Z$ 2.7
\codechol_D() Cholesky decomposition of $D$ 2.4
\codelog_likelihood() Multivariate Gaussian log-likelihood with zero mean and covariance $D$ 3.5
\codesimulate_re() Simulates a vector $\mathbf{u}$ 2.13
\codesparse() Choose whether to use sparse matrix methods 2.4
\codeparameters The parameters $\theta$
\codeformula Random effects formula 1
\codeupdate_parameters() Updates $\theta$ and related matrices
\codehsgp() HSGP approximation parameters 1
\codenngp() NNGP approximation parameters 1
\codemean \codeX The matrix $X$
\codeparameters The parameters $\beta$
\codeoffset The optional model offset
\codeformula The fixed effects formula used to create $X$ 2.2
\codelinear_predictor() Generates $X\beta$ plus offset
\codeupdate_parameters() Updates $\beta$ and related matrices
\codefamily An \proglangR family object
\codevar_par An optional scale parameter
\codefitted() Full linear predictor
\codepredict() Predictions from the model at new data values 3.14
\codesim_data() Simulates data from the model 2.13
\codeSigma() Generates $\Sigma$ (or an approximation) 2.9
\codeinformation_matrix() The information matrix 2.9, 3.9
\codesandwich() Robust sandwich matrix 3.10
\codesmall_sample_correction() Bias-corrected variance-covariance matrix of $\hat{\beta}$ 3.10
\codebox() Inferential statistics for the modified Box correction 3.10
\codemarginal() Marginal effects of covariates 3.10
\codepartial_sigma() Matrices $\partial\Sigma/\partial\theta$ and $\partial^2\Sigma/\partial\theta_i\partial\theta_j$ 3.10
\codeuse_attenuation() Option for improving approximation of $\Sigma$ 2.9
\codepower() Estimates the power 2.12
\codeMCML() Markov Chain Monte Carlo Maximum Likelihood model fitting 3
\codeLA() Maximum Likelihood model fitting with Laplace approximation 3.9
\codemcmc_sample() Sample $\mathbf{u}$ using MCMC 3.3
\codew_matrix() Returns $\text{diag}(W)$ 2.9
\codedh_deta() Returns $\partial h^{-1}(\eta)/\partial\eta$ 2.9
\codecalculator_instructions() Prints the calculation instructions for the linear predictor 2.2
\codelog_gradient() Returns either $\nabla_{\mathbf{u}}L(\beta,\phi,\theta|\mathbf{y})$ or $\nabla_{\beta}L(\beta,\phi,\theta|\mathbf{y})$ 3.3
Table 1: A non-exhaustive list of the publicly available methods and objects in the \codeModel class. \codeModel also contains two sub-classes \codecovariance and \codemean whose respective methods are also shown. The column Sec. shows the relevant section of the article detailing the methods.

An example call to generate a new \codeModel is: {CodeChunk} {CodeInput}
model <- Model$new(formula = ~ factor(t) - 1 + (1|gr(j)*ar1(t)),
                   data = data,
                   covariance = c(0.25, 0.8),
                   mean = c(rep(0, 5)),
                   family = gaussian())

We discuss each of the elements of this call in turn.

2.1 Data Generation Tools

We introduce methods to generate data for hierarchical models and blocked designs. As we show in subsequent examples, ‘fake’ data generation is typically needed to specify a particular data structure at the design stage of a study, but can be useful in other circumstances. Nelder (1965) suggested a simple notation that could express a large variety of different blocked designs. The notation was proposed in the context of split-plot experiments for agricultural research, where researchers often split areas of land into blocks, sub-blocks, and other smaller divisions, and apply different combinations of treatments. However, the notation is useful for expressing a large variety of experimental designs with correlation and clustering, including cluster trials, cohort studies, and spatial and temporal prevalence surveys. We have included the function \codenelder() in the package that generates a data frame of a design using Nelder’s notation.

There are two operations:

  1. \code> (or $\to$ in Nelder’s notation) indicates “clustered in”.

  2. \code* (or $\times$ in Nelder’s notation) indicates a crossing that generates all combinations of two factors.

The function takes a formula input indicating the name of the variable and a number for the number of levels, such as \codeabc(12). So, for example, \codecl(4) > ind(5) means that in each of the four levels of \codecl there are five levels of \codeind, and the individuals are different between clusters. Brackets are used to indicate the order of evaluation. Some specific examples are illustrated in Table 2.

Formula Meaning
\code person(5) * time(10) A cohort study with five people, all observed in each of ten periods \codetime
\code (cl(4) * t(3)) > ind(5) A repeated-measures cluster study with four clusters (labelled \codecl), each observed in each time period \codet with cross-sectional sampling and five individuals (labelled \codeind) in each cluster-period.
\code (cl(4) > ind(5)) * t(3) A repeated-measures cluster cohort study with four clusters (labelled \codecl) with five individuals per cluster, and each cluster-individual combination is observed in each time period \codet.
\code ((x(100) * y(100)) > hh(4)) * t(2) A spatial-temporal grid of 100x100 and two time points, with 4 households per spatial grid cell.
Table 2: Examples of formulae for the \codenelder() function

Use of this function produces a data frame: {CodeChunk} {CodeInput}
data <- nelder(~ (j(4) * t(5)) > i(5))
head(data)
{CodeOutput}
  j t i
1 1 1 1
2 1 1 2
3 1 1 3
4 1 1 4
5 1 1 5
6 1 2 6

The data frame shown above may represent, for example, a cluster randomised study with cross-sectional sampling. This approach to study design assumes the same number of levels of each factor within each level of every other factor, which may not be adequate for certain study designs. We may expect unequal cluster sizes, staggered inclusion or drop-out, and so forth, in which case a user-generated data set would be required instead. Certain treatment conditions may be specified with this approach, including parallel trial designs, stepped-wedge implementation, or factorial designs, by specifying a treatment arm as part of the block structure.

2.2 Linear Predictor

The package allows for flexible specification of $g(\mathbf{x}_i,\boldsymbol{\beta})$. Standard model formulae in \proglangR can be used, for example \codeformula = ~factor(t) - 1. The package also allows a wide range of specifications, including non-linear functions and named parameters, with the same parameter permitted to appear in multiple places. For example, the function $\beta_0 + \beta_{int}\texttt{int}_i + \beta_1\exp(\beta_2\texttt{x}_i)$ can be specified as \codeformula = ~int + b_1*exp(b_2*x). Names of data columns are assumed to be multiplied by a parameter, as in the standard linear predictor, so \code~x would produce $\beta_x x$. To include data without a parameter, it can be wrapped in brackets, such as \code~(x), which also provides a way of specifying offsets, although an offset can also be specified directly.
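
To illustrate, a minimal sketch of a model with a non-linear fixed effect specification is shown below. The data frame, parameter values, and the length and ordering of the \codemean argument are illustrative assumptions only and are not taken from the package documentation. {CodeChunk} {CodeInput}
# hypothetical data: a group identifier j, a treatment indicator int, and a covariate x
df <- nelder(~ j(10) > i(10))
df$int <- as.numeric(df$j > 5)
df$x <- rnorm(nrow(df))
# linear predictor beta_int*int + b_1*exp(b_2*x) with a group-level random intercept
mod_nl <- Model$new(formula = ~ int + b_1*exp(b_2*x) + (1|gr(j)),
                    data = df,
                    covariance = c(0.25),
                    mean = c(0.5, 1.0, -0.5), # assumed order: beta_int, b_1, b_2
                    family = gaussian())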

We use a recursive, operator-precedence algorithm to parse the formula and generate a reverse Polish notation representation of it. The algorithm first splits the formula at the first ‘+’ or ‘-’ symbol outside of a bracket; if there are none, it splits at a multiply (‘*’) or divide (‘/’) symbol outside of a bracket, then at a power (‘^’) symbol, and finally at brackets, until a token is produced (e.g. \codeb_2 or \codeexp). The resulting token is first checked against available functions, then, if it is not a function, against the column names of the data, and if it is not in the data it is taken to be the name of a parameter. The resulting tree structure can then be parsed up each of its branches to produce an instruction set. Figure 1 shows an example of the formula parsing algorithm and calculation of the value of the formula. This scheme also allows for an auto-differentiation approach to obtain the first and second derivatives where required.
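
To give a flavour of how an instruction set is evaluated, the short sketch below evaluates a reverse Polish representation of \codeb_1*exp(b_2*x) using a simple stack; the token ordering and values are illustrative assumptions and do not reflect the package’s internal encoding. {CodeChunk} {CodeInput}
# reverse Polish instructions for b_1*exp(b_2*x) (illustrative ordering)
rpn  <- c("b_1", "b_2", "x", "*", "exp", "*")
vals <- list(b_1 = 0.5, b_2 = -0.2, x = 1.5)
stack <- c()
for (tok in rpn) {
  if (tok == "*") {
    # pop two values, push their product
    stack <- c(head(stack, -2), prod(tail(stack, 2)))
  } else if (tok == "exp") {
    # pop one value, push its exponential
    stack <- c(head(stack, -1), exp(tail(stack, 1)))
  } else {
    # push the value of a parameter or data column
    stack <- c(stack, vals[[tok]])
  }
}
stack  # 0.5 * exp(-0.2 * 1.5)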

Figure 1: Illustrative example of formula parsing and calculation, including auto-differentiation with respect to parameter \codeb_1

2.3 Covariance specification

We adapt and extend the random effects specification approach of the popular \proglangR package \pkglme4 (Bates et al., 2015) and other related packages, building on the flexible formula parsing and calculation described above, to allow for relatively complex structures through the specification of a covariance function (see Section 3 for a summary of related packages). A covariance function is specified as an additive formula made up of components with structure \code(z|f(j)). The left side of the vertical bar specifies the covariates in the model that have a random effects structure; using \code(1|f(j)) specifies a ‘random intercept’ model. The right side of the vertical bar specifies the covariance function \codef for that term using variable \codej named in the data. Multiple covariance functions on the right side of the vertical bar are multiplied together, i.e., \code(1|f(j)*g(t)). The currently implemented functions (as of version 0.6.1) are listed in Table 3.

Function $Cov(x_i, x_{i'})$ $\theta$ Code
Group membership $\theta_1^2\mathbf{1}(x_i = x_{i'})$ $\theta_1 > 0$ \codegr(x)
Exponential $\theta_1\exp(-|x_i - x_{i'}|/\theta_2)$ $\theta_1,\theta_2 > 0$ \codefexp(x)
 $\exp(-|x_i - x_{i'}|/\theta_1)$ $\theta_1 > 0$ \codefexp0(x)
Squared Exponential $\theta_1\exp(-(|x_i - x_{i'}|/\theta_2)^2)$ $\theta_1,\theta_2 > 0$ \codesqexp(x)
 $\exp(-(|x_i - x_{i'}|/\theta_1)^2)$ $\theta_1 > 0$ \codesqexp0(x)
Autoregressive order 1 $\theta_1^{|x_i - x_{i'}|}$ $0 < \theta_1 < 1$ \codear(x)
Bessel $K_{\theta_1}(x)$ $\theta_1 > 0$ \codebessel(x)
Matern $\frac{2^{1-\theta_1}}{\Gamma(\theta_1)}\left(\sqrt{2\theta_1}\frac{x}{\theta_2}\right)^{\theta_1}K_{\theta_1}\left(\sqrt{2\theta_1}\frac{x}{\theta_2}\right)$ $\theta_1,\theta_2 > 0$ \codematern(x)
Compactly supported*
Truncated Power 2 $\theta_1(1 - |x_i - x_{i'}|^{\theta_2})^2$ $\theta_1 > 0$, $0 < \theta_2 \leq 2$ \codetruncpow2(x)
Truncated Power 3 $\theta_1(1 - |x_i - x_{i'}|^{\theta_2})^3$ $\theta_1 > 0$, $0 < \theta_2 \leq 2$ \codetruncpow3(x)
Truncated Power 4 $\theta_1(1 - |x_i - x_{i'}|^{\theta_2})^4$ $\theta_1 > 0$, $0 < \theta_2 \leq 2$ \codetruncpow4(x)
Cauchy $\theta_1(1 + |x_i - x_{i'}|^{\theta_2})^{-\theta_3/\theta_2}$ $\theta_1,\theta_3 > 0$, $0 < \theta_2 \leq 2$ \codecauchy(x)
Cauchy 3 $\theta_1(1 + |x_i - x_{i'}|^{\theta_2})^{-3}$ $\theta_1 > 0$, $0 < \theta_2 \leq 2$ \codecauchy3(x)
Gaussian process approximations
Nearest Neighbour See Section 2.6 \codenngp_*(x)
Hilbert Space See Section 2.6 \codehsgp_*(x)
Table 3: Supported covariance functions. $|.|$ is the Euclidean distance. $K_a$ is the modified Bessel function of the second kind. *Truncated power and Cauchy functions are defined for $|x_i - x_{i'}| < 1$ and are otherwise equal to zero; they are used for sparse matrix generation, see Section 2.5.

One combines smaller functions to provide the desired overall covariance function. For example, for a stepped-wedge cluster randomised trial we could consider the standard specification (see, for example, Li et al. (2021)) with an exchangeable random effect for the cluster level (\codej) and a separate exchangeable random effect for the cluster-period (with \codet representing the discrete time period), which would be \code(1|gr(j)) + (1|gr(j,t)). Alternatively, we could consider an autoregressive cluster-level random effect that decays exponentially over time so that, in a linear mixed model, for person $i$ in cluster $j$ at time $t$, $Cov(y_{ijt}, y_{i'jt}) = \theta_1^2$ for $i \neq i'$, $Cov(y_{ijt}, y_{i'jt'}) = \theta_1^2\theta_2^{|t - t'|}$ for $t \neq t'$, and $Cov(y_{ijt}, y_{i'j't}) = 0$ for $j \neq j'$. This function would be specified as \code(1|gr(j)*ar1(t)). We could also supplement the autoregressive cluster-level effect with an individual-level random intercept, where \codeind specifies the identifier of the individual, as \code(1|gr(j)*ar1(t)) + (1|gr(ind)), and so forth. To add a further random parameter on some covariate \codex we add an additional term, for example \code(1|gr(j)*ar1(t)) + (1|gr(ind)) + (x|gr(ind)). As another example, to implement a spatial Gaussian process model with an exponential covariance function, two Cartesian coordinates \codex and \codey, and autoregressive temporal decay, we can specify \code(1|ar(t)*fexp0(x,y)).

2.3.1 Covariance function parameters

The \codecovariance argument of the call to \codeModel is optional and, if used, typically specifies the values of the parameters in the model. It can also receive a list, which can contain covariance function parameters, a random effects specification, and a particular data frame. Where parameter values are provided, the elements of the vector correspond to each of the functions in the covariance formula in the order they are written. For example,

  • Formula: \code(1|gr(j))+(1|gr(j,t)); parameters: \codec(0.05,0.01) describes the covariance function, for $i \neq i'$,

    $$Cov(y_{ijt}, y_{i'j't'}) = \begin{cases} 0.05 + 0.01 & \text{if } j = j', t = t' \\ 0.05 & \text{if } j = j', t \neq t' \\ 0 & \text{otherwise} \end{cases}$$

  • Formula: \code(1|gr(j)*fexp0(t)); parameters: \codec(0.05,0.8) describes the covariance function

    $$Cov(y_{ijt}, y_{i'j't'}) = \begin{cases} 0.05\exp\left(-\frac{|t - t'|}{0.8}\right) & \text{if } j = j' \\ 0 & \text{otherwise} \end{cases}$$

The Euclidean distance is used for all of these functions. For example, \code(1|fexp0(x,y)) will generate a covariance matrix where the covariance between two observations at positions $s = (x,y)$ and $s' = (x',y')$ is $Cov(s,s') = \exp(-\|s-s'\|/\theta)$, where $\|.\|$ is the Euclidean distance. For some of the covariance functions we provide two parameterisations (for example, \codefexp and \codefexp0). The version without a leading covariance parameter is provided so that it can be combined with functions that do have one: for example, a model specified as \codegr(j)*fexp(t) would not be identifiable, as there would be two free parameters multiplying the exponential function.
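
As a short illustration, the sketch below builds a \codeCovariance object for the second example above and extracts the resulting matrix $D$; the data frame is generated purely for illustration. {CodeChunk} {CodeInput}
df <- nelder(~ j(6) * t(4))
cov <- Covariance$new(formula = ~ (1|gr(j)*fexp0(t)),
                      parameters = c(0.05, 0.8),
                      data = df)
D <- cov$D            # covariance matrix of the random effects
round(D[1:4, 1:4], 3) # inspect the first block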

2.4 Covariance Calculation and Storage

The formula specifying the random effects is translated into a form that can be used to efficiently generate the value at any position in the matrix $D$, so that it can be quickly updated when the parameter values change. This facilitates functionality such as model fitting when using an arbitrary combination of covariance functions.

The matrices $D$ and $L$ are stored using sparse compressed row storage (CRS) format since, in a large number of cases, $D$ and its Cholesky factorisation are sparse. For example, a large number of covariance function specifications and their related study designs lead to a block diagonal structure for $D$:

$$D = \begin{bmatrix} D_1 & \mathbf{0} & \ldots & \mathbf{0} \\ \mathbf{0} & D_2 & \ldots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{0} & \mathbf{0} & \ldots & D_B \end{bmatrix} \qquad (3)$$

Internally, the formula is parsed into blocks, determined both by the presence of a \codegr function in any particular random effect specification and by each additive component of the specification. These blocks may be of differing dimensions and have different formulae and parameters. The data defining each element is a Euclidean distance, and within each (likely dense) block only the data for the strictly lower triangular portion need to be stored. These distances are calculated once and stored in a flat data structure. We use a reverse Polish notation to specify the instructions for a given block, as with the linear predictor, along with parameter indices and other necessary information.

By default, the Cholesky decomposition $L$, such that $D = LL^T$, is calculated using a sparse matrix LDL algorithm (Davis, 2005), which also provides an efficient forward substitution algorithm for computing quadratic forms in the multivariate Gaussian likelihood (see Section 3). Additionally, an efficient permutation is calculated on instantiation of the class using the approximate minimum degree algorithm (Furrer et al., 2006; Amestoy et al., 2004), which may provide a performance gain when factorising the matrix $PDP^T$ by ensuring the resulting factorisation is sparse. We provide a basic \proglangC++ sparse matrix class, the LDL decomposition, and basic operations in the \pkgSparseChol package for \proglangR.

As an alternative, we can directly exploit the block diagonal structure of $D$ and generate the decomposition of each block $L_b$ separately, using the Cholesky–Banachiewicz algorithm. Many calculations can also be broken down by block, for example $X^T\Sigma^{-1}X = \sum_b X_b^T L_b^{-T} L_b^{-1} X_b$, which can be further simplified using a forward substitution algorithm (see Section 3.5). To switch between sparse and dense matrix methods, one can use \codemodel$sparse(TRUE) and \codemodel$sparse(FALSE), respectively. The default is to use sparse methods.
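
The blockwise identity above can be verified directly with a small sketch in base \proglangR (using the \pkgMatrix package only to build a block diagonal matrix); the matrices here are arbitrary and purely illustrative. {CodeChunk} {CodeInput}
# sketch: X' Sigma^{-1} X computed block-by-block when Sigma is block diagonal
set.seed(1)
Xb <- lapply(1:3, function(b) matrix(rnorm(5 * 2), 5, 2))   # blocks of X
Sb <- lapply(1:3, function(b) {
  A <- matrix(rnorm(25), 5, 5)
  crossprod(A) + diag(5)                                    # positive definite block
})
X <- do.call(rbind, Xb)
S <- as.matrix(Matrix::bdiag(Sb))                           # block diagonal Sigma
direct <- t(X) %*% solve(S) %*% X
# blockwise: forward substitution with each block's Cholesky factor L_b
blockwise <- Reduce(`+`, Map(function(Xk, Sk) {
  L <- t(chol(Sk))               # lower triangular factor, S_k = L L'
  W <- forwardsolve(L, Xk)       # W = L^{-1} X_k
  crossprod(W)                   # X_k' L^{-T} L^{-1} X_k
}, Xb, Sb))
all.equal(direct, blockwise)     # TRUE up to numerical error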

2.5 Compactly Supported Covariance Functions

For many types of GLMM, the matrix $D$ is dense and neither blocked nor sparse. For example, geospatial statistical models often specify spatial or spatio-temporal Gaussian process models for the random effects (Diggle et al., 1998). For even moderately sized data sets, this can result in very slow model fitting for the multivariate Gaussian likelihood. There are several approaches to reducing model fitting time. One approach is to use a compactly supported covariance function that leads to a sparse matrix $D$ while still approximating a Gaussian process. This method can be viewed as ‘covariance tapering’ (Kaufman et al., 2008), although the approach implemented here is what they describe as the ‘one taper’ method, which may be biased in some circumstances. As shown in Table 3, we include several compactly supported and parameterised covariance functions, which are described in Gneiting (2002). A covariance function with compact support has value zero beyond some ‘effective range’. Using these functions can result in a sparse matrix $D$, which is then amenable to the sparse matrix methods in this package. To implement these functions with an effective range of $r$, beyond which the covariance is zero, one must divide each variable by $r$. For example, if there are two spatial dimensions, \codex and \codey, in the data frame \codedata, then one would create \codedata$xr <- data$x/r, and equivalently for \codey. Then one could use the covariance function, for example, \codetruncpow3(xr,yr).
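
For example, the following sketch scales two coordinates by an assumed effective range and uses the truncated power function; the data and parameter values are illustrative. {CodeChunk} {CodeInput}
df <- data.frame(x = runif(200), y = runif(200))
r <- 0.5                      # assumed effective range
df$xr <- df$x / r
df$yr <- df$y / r
cov_cs <- Covariance$new(formula = ~ (1|truncpow3(xr, yr)),
                         parameters = c(0.25, 1),
                         data = df)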

2.6 Gaussian Process Approximations

As an alternative to compactly supported covariance functions, one can instead use a Gaussian process approximation for large, dense covariance matrices. We provide two such approximations in this package.

2.6.1 Nearest Neighbour Gaussian Process

The multivariate Gaussian likelihood can be rewritten as the product of the conditional densities:

$$f(\mathbf{u}) = f(u_1)\prod_{i=2}^{n} f(u_i|u_1,\ldots,u_{i-1})$$

Vecchia (1988) proposed that one can approximate $f(\mathbf{u})$ by limiting the conditioning sets for each $u_j$ to a maximum size of $m$. For geospatial applications, Datta et al. (2016a, b) proposed the nearest neighbour Gaussian process (NNGP), in which the conditioning sets are limited to the $m$ ‘nearest neighbours’ of each observation.

Let $\mathcal{N}_j$ be the set of up to $m$ nearest neighbours of $j$ with index less than $j$. The approximation is:

$$f(\mathbf{u}) \approx f(u_1)\prod_{i=2}^{n} f(u_i|\mathcal{N}_i)$$

which leads to:

$$u_1 = \eta_1, \qquad u_j = \sum_{i=1}^{p} a_{ji} u_{\mathcal{N}_{ji}} + \eta_j \qquad (4)$$

where $\mathcal{N}_{ji}$ is the $i$th nearest neighbour of $j$. Equation (4) can be written more compactly as $\mathbf{u} = A\mathbf{u} + \boldsymbol{\eta}$, where $A$ is a sparse, strictly lower triangular matrix and $\boldsymbol{\eta} \sim N(0,\Omega)$, with $\Omega$ a diagonal matrix with entries $\Omega_{11} = \text{Var}(u_1)$ and $\Omega_{ii} = \text{Var}(u_i|\mathcal{N}_i)$ (Finley et al., 2019). The approximate covariance matrix can then be written as $D \approx (I-A)^{-1}\Omega(I-A)^{-T}$. We implement the algorithms described by Finley et al. (2019) to generate the matrices $A$ and $D$. The cost of generating these matrices is $O(nm^3)$ owing to the need to solve $n-1$ linear systems of size up to $m$. An efficient algorithm for calculating the quadratic form in the multivariate Gaussian likelihood is also used.

A nearest neighbour approximation can be used with most of the functions in the package by prefixing \codenngp_ to the covariance function name. For example, a nearest neighbour Gaussian process approximation using an exponential covariance function in two dimensions can be specified in the formula as \code(1|nngp_fexp(x,y)). The approximation parameters, including the number of nearest neighbours, can be set (or returned) using the function \codemodel$covariance$nngp().
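
A hedged sketch of this specification is shown below; the coordinates, parameter values, and use of an intercept-only fixed effect are illustrative assumptions, and calling \codenngp() without arguments to return the current approximation parameters follows the description above. {CodeChunk} {CodeInput}
df <- data.frame(x = runif(500), y = runif(500))
mod_nngp <- Model$new(formula = ~ 1 + (1|nngp_fexp(x, y)),
                      data = df,
                      covariance = c(0.5, 0.2),
                      mean = c(0),
                      family = gaussian())
mod_nngp$covariance$nngp()    # returns the approximation parameters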

2.6.2 Hilbert Space Gaussian Process

Low, or reduced, rank approximations aim to approximate the matrix $D$ with a matrix $\tilde{D}$ of rank $m < n$. The optimal low-rank approximation is $\tilde{D} = \Phi\Lambda\Phi^T$, where $\Lambda$ is a diagonal matrix of the $m$ leading eigenvalues of $D$ and $\Phi$ is the matrix of the corresponding eigenvectors. However, the computational complexity of generating the eigendecomposition scales in the same way as matrix inversion. Solin and Särkkä (2020) propose an efficient method to approximate the eigenvalues and eigenvectors using Hilbert space methods, so we refer to this approach as a Hilbert Space Gaussian Process (HSGP). Riutort-Mayol et al. (2023) provide further discussion of these methods.

Stationary covariance functions, including those in the Matern class like the exponential and squared exponential, can be represented in terms of their spectral densities. For example, the spectral density function of the squared exponential function in $D$ dimensions is:

$$S(\omega) = \sigma^2(\sqrt{2\pi})^D\phi^D\exp(-\phi^2\omega^2/2)$$

Consider first a unidimensional space with support on $[-L,L]$. The eigenvalues $\lambda_j$ (which are the diagonal elements of $\Lambda$) and eigenvectors $\phi_j$ (which form the columns of $\Phi$) of the Laplacian operator in this domain are:

$$\lambda_j = \left(\frac{j\pi}{2L}\right)^2$$

and

$$\phi_j(x) = \sqrt{\frac{1}{L}}\sin\left(\sqrt{\lambda_j}(x + L)\right)$$

Then the approximation in one dimension is

$$\mathcal{Z}(x) \approx \sum_{j=1}^{m} S\left(\sqrt{\lambda_j}\right)^{1/2}\phi_j(x)\beta_j$$

where $\beta_j \sim N(0,1)$. This result can be generalised to multiple dimensions: the total number of eigenvalues and eigenfunctions in multiple dimensions is the combination of all univariate eigenvalues and eigenfunctions over all dimensions. The matrix $\Phi$ does not depend on the covariance parameters and can be pre-computed, so only the product $\Phi\Lambda^{\frac{1}{2}}$ needs to be re-calculated during model fitting, which scales as $O(nm^2)$. Riutort-Mayol et al. (2023) provide a detailed analysis of the reduced rank GP for unidimensional linear Gaussian models, examining in particular how the choices of $c$ and $p$ affect the performance of posterior inferences and model predictions.

The HSGP is currently only available with the exponential or squared exponential covariance functions, as either \codehsgp_fexp(x) or \codehsgp_sqexp(x). The approximation parameters, including the number of basis functions and the boundary condition, can be set (or returned) using the function \codemodel$covariance$hsgp().
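
A similar hedged sketch for the HSGP approximation is given below, again with illustrative data and parameter values. {CodeChunk} {CodeInput}
df <- data.frame(x = runif(500), y = runif(500))
mod_hsgp <- Model$new(formula = ~ 1 + (1|hsgp_sqexp(x, y)),
                      data = df,
                      covariance = c(0.5, 0.2),
                      mean = c(0),
                      family = gaussian())
mod_hsgp$covariance$hsgp()    # returns the approximation parameters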

2.7 Computation of matrix Z

The matrix $Z$ is constructed in a similar way to other packages, such as described for \pkglme4 (Bates et al., 2015). $Z$ is comprised of the matrices corresponding to each block, $Z = [Z_1, Z_2, \ldots, Z_B]$. For a model formula \code(1|f(x1,x2,…)), the corresponding matrix $Z_b$ has $n$ rows and a number of columns equal to the number of unique rows of the combination of the variables \codex1,x2,…. The $i$th row and $j$th column of $Z_b$ is then equal to 1 if the $i$th individual has the value corresponding to the $j$th unique combination of the variables, and zero otherwise. For formulae specifying random effects on covariates (“variable slopes models”), e.g. \code(z|f(x1,x2,…)), the $i$th row and $j$th column of $Z_b$ is further multiplied by $z_i$. $Z$ is also stored in CRS format, and sparse matrix methods are used for multiplication and related calculations.
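
As an illustration of this construction, the following base \proglangR sketch builds the indicator matrix corresponding to \code(1|gr(j)) for a small data set, and the version multiplied by a covariate for \code(z|gr(j)); this mirrors the description above rather than the package’s internal storage. {CodeChunk} {CodeInput}
df <- data.frame(j = rep(1:3, each = 2), z = c(1, 2, 3, 4, 5, 6))
Zb <- model.matrix(~ 0 + factor(j), data = df)   # n x Q indicator of group membership
Zb_slope <- Zb * df$z                            # each row multiplied by the covariate z
Zb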

2.8 Additional Arguments

For Gaussian models, and other distributions requiring an additional scale parameter $\phi$, one can also specify the option \codevar_par, which is the conditional variance $\phi = \sigma$ at the individual level. The default value is 1. Currently (version 0.6.1), the package supports the following families (link functions): Gaussian (identity, log), Poisson (log, identity), Binomial (logit, log, probit, identity), Gamma (log, inverse, identity), and Beta (logit). For the Beta distribution one must provide \codeBeta() to the \codefamily argument.

2.9 Approximation of the Covariance Matrix

The \codeModel class can provide an approximation to the covariance matrix $\Sigma$ with the member function \code$Sigma(). This approximation is also used to calculate the expected information matrix with \code$information_matrix() and to estimate power in the member function \code$power() (see Section 2.12). We use the first-order approximation based on the marginal quasi-likelihood proposed by Breslow and Clayton (1993):

$$\Sigma = W^{-1} + ZDZ^T \qquad (5)$$

where $W^{-1} = \text{diag}\left(\left(\frac{\partial h^{-1}(\boldsymbol{\eta})}{\partial\boldsymbol{\eta}}\right)^2\text{Var}(\mathbf{y}|\mathbf{u})\right)$, the elements of which are recognisable as the GLM iterated weights (Breslow and Clayton, 1993; McCullagh and Nelder, 1989). For Gaussian-identity mixed models this approximation is exact. The diagonal of the matrix $W$ can be obtained using the member function \code$w_matrix(). The information matrix for $\beta$ is:

$$M_\beta = (X^T\Sigma^{-1}X)^{-1} \qquad (6)$$

Zeger et al. (1988) suggest that, when using the marginal quasi-likelihood, one can improve the approximation to the marginal mean by “attenuating” the linear predictor for non-linear models. For example, with the log link the “attenuated” mean is $E(y_i) \approx h^{-1}(x_i\beta + z_iDz_i^T/2)$, and for the logit link $E(y_i) \approx h^{-1}(x_i\beta\,\text{det}(aDz_i^Tz_i + I)^{-1/2})$ with $a = 16\sqrt{3}/(15\pi)$. To use “attenuation” one can set \codemod$use_attenuation(TRUE); the default is not to use attenuation.
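
For example, using the \codemodel object created earlier, the relevant matrices can be obtained as in the sketch below (attenuation is optional and relevant only for non-linear links). {CodeChunk} {CodeInput}
model$use_attenuation(TRUE)        # optionally attenuate the linear predictor
S <- model$Sigma()                 # approximate marginal covariance matrix
M <- model$information_matrix()    # information matrix used for power and standard errors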

2.10 Updating Parameters

The parameters of the covariance function and linear predictor can be updated using the function \codeupdate_parameters(), which triggers re-generation of the matrices and updates the data in the underlying \proglangC++ classes, for example \codemodel$update_parameters(cov.pars = c(0.1,0.9)).

2.11 Class Inheritance

The \pkgR6 class system provides many of the standard features of other object orientated systems, including class inheritance. The classes specified in the \pkgglmmrBase package can be inherited from to provide the full range of calculations to other applications. As an example, we can define a new class that has a member function that returns the log determinant of the matrix $D$: {CodeChunk} {CodeInput}
CovDet <- R6::R6Class("CovDet",
  inherit = Covariance,
  public = list(
    det = function(){
      return(Matrix::determinant(self$D)$modulus)
    }
  ))
cov <- CovDet$new(formula = ~ (1|gr(j)*ar1(t)),
                  parameters = c(0.05, 0.8),
                  data = data)
cov$det()
{CodeOutput}
[1] -72.26107

More complex applications may include classes to implement simulation-based analyses that simulate new data and fit a GLMM using one of the package’s model fitting methods.

2.12 Power Calculation

Power and sample size calculations are an important component of the design of many studies, particularly randomised trials, which we use as an example. Cluster randomised trials frequently use GLMMs for data analysis, and hence GLMMs are the basis for estimating the statistical power of a trial design (Hooper et al., 2016). Different approaches are used to estimate power or calculate sample size given an assumed correlation structure, most frequently using “design effects” (Hemming et al., 2020). However, given the large range of possible models and covariance structures, many software packages implement a narrow range of specific models and provide wrappers to other functions. For example, we identified eight different \proglangR packages available on CRAN or via an \proglangR Shiny web interface for calculating cluster trial power or sample size. These packages and their features are listed in Tables 4 and 5. As is evident, the range of models is relatively limited. Beyond this specific study type, the \proglangR package \pkgsimr provides functions to simulate data from GLMMs and estimate statistics like power using Monte Carlo simulation for specific designs.

Package Custom Designs Data simulation Power by simulation Power by approx. Non-simple randomisation Optimal designs
\pkgSWSamp $\times$ $\times$
\pkgsamplingDataCRT $\times$ $\times$ $\times$
\pkgShinycRCT $\times$ $\times$ $\times$ $\times$
\pkgswCRTdesign $\times$ $\times$ $\times$ $\times$
\pkgclusterPower $\sim^1$ $\times$ $\sim^3$ $\times$
\pkgCRTdistpower $\times$ $\times$ $\times$ $\times$
\pkgswdpwr $\times$ $\times$ $\times$ $\times$
\pkgSteppedPower $\times$ $\times$ $\times$ $\times$
\pkgglmmrBase
Table 4: Features of packages for \proglangR to calculate power and sample size for cluster randomised trial designs
Package Non-canonical link Binom./Poisson Other dist. Compound symmetry Temporal decay Random slopes Covariates Other functions
\pkgSWSamp $\times$ $\times$ $\times$ $\times$ $\times$ $\times$
\pkgsamplingDataCRT $\times$ $\sim$ $\times$ $\times$ $\times$ $\times$ $\times$
\pkgShinycRCT $\times$ $\times$ $\times$ $\times$ $\times$
\pkgswCRTdesign $\times$ $\times$ $\times$ $\times$ $\times$ $\times$
\pkgclusterPower $\times$ $\times$ $\times$ $\times$ $\times$ $\times$
\pkgCRTdistpower $\times$ $\times$ $\sim^2$ $\times$ $\times$ $\times$ $\times$
\pkgswdpwr $\times$ $\times$ $\times$ $\times$ $\times$
\pkgSteppedPower $\times$ $\sim$ $\times$ $\times$ $\sim^3$ $\times$
\pkgglmmrBase
Table 5: Supported models of packages for \proglangR to calculate power and sample size for cluster randomised trial designs

As an alternative, the flexible model specification and range of functionality of \pkgglmmrBase can be used to quickly estimate the power for a particular design and model. The \codeModel class member function \code$power() estimates power by calculating the information matrix $M$ described above. The square root of the diagonal of this matrix provides the (approximate) standard errors of $\beta$. The power of a two-sided test for the $i$th parameter at a type I error level of $\alpha$ is given by $\Phi(|\beta_i|/\sqrt{M_{ii}} - \Phi^{-1}(1-\alpha/2))$, where $\Phi$ is the Gaussian cumulative distribution function. As an example, to estimate power for a stepped-wedge parallel cluster randomised trial (see Hooper et al. (2016)) with 10 clusters, 11 time periods, and ten individuals per cluster-period, where we use a Binomial-logit model with time period fixed effects set to zero and an autoregressive covariance function with parameters 0.25 and 0.7: {CodeChunk} {CodeInput}
data <- nelder(~ (cl(10) * t(11)) > i(10))
data$int <- 0 # int is the intervention effect
data[data$t > data$cl, 'int'] <- 1
model <- Model$new(formula = ~ factor(t) + int - 1 + (1|gr(cl)*ar1(t)),
                   data = data,
                   covariance = c(0.05, 0.7),
                   mean = c(rep(0, 12), 0.5),
                   family = binomial())
model$power()
{CodeOutput}
   Parameter Value        SE     Power
...
12       int   0.5 0.1816136 0.7861501

One can use the functionality of the \codeModel class to investigate power for a range of model parameters. For example, to estimate power for covariance parameters in the ranges (0.05, 0.5) and (0.2, 0.9), we can create a data frame holding the relevant combinations of values and iteratively update the model: {CodeChunk} {CodeInput}
param_data <- expand.grid(par1 = seq(0.05, 0.5, by = 0.05),
                          par2 = seq(0.2, 0.9, by = 0.1),
                          power = NA)
for(i in 1:nrow(param_data)){
  model$update_parameters(cov.pars = c(param_data$par1[i], param_data$par2[i]))
  param_data$power[i] <- model$power()[12,3]
}
head(param_data)
{CodeOutput}
  par1 par2     power
1 0.05  0.2 0.8348863
2 0.10  0.2 0.7852298
3 0.15  0.2 0.7381658
4 0.20  0.2 0.6945137
5 0.25  0.2 0.6544970
6 0.30  0.2 0.6180314

2.13 Data Simulation

The \pkgglmmrBase package also simplifies data simulation for GLMMs. Data simulation is a commonly used approach to evaluate the properties of estimators and statistical models. The \codeModel class has the member function \codesim_data(), which will generate either a vector of simulated outcomes $y$ (argument \codetype="y"), a data frame combining the linked dataset with the newly simulated data (argument \codetype="data"), or a list containing the simulated data, random effects, and matrices $Z$ and $X$ (argument \codetype="all"). To quickly simulate data, new random effects are drawn as $\mathbf{v} \sim N(0,1)$ and the random effects component of the model is then simulated as $ZL\mathbf{v}$. Simulation of only the random effects terms can be done with the relevant function in the \codeCovariance class, \code$covariance$simulate_re().
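
For example, using the \codemodel object defined earlier (a sketch based on the argument values described above): {CodeChunk} {CodeInput}
y   <- model$sim_data(type = "y")       # vector of simulated outcomes
dat <- model$sim_data(type = "data")    # data frame including the simulated outcomes
u   <- model$covariance$simulate_re()   # simulate only the random effects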

3 Model Fitting

The parameters and standard errors of GLMMs are difficult to estimate in a maximum likelihood context given the integral in the likelihood in Equation (2). Several approximations and associated software are available for GLMMs; however, there are few options for fitting more complex models. The \codeModel class includes two model fitting functions: \codeMCML, for MCML model fitting, and \codeLA, which uses a Laplace approximation, both described below.
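
A hedged sketch of their use is shown below; the simulated outcome and the argument name \codey are assumptions for illustration, and further options controlling the algorithms are described in the following sections. {CodeChunk} {CodeInput}
y <- model$sim_data(type = "y")   # simulate an outcome from the model defined earlier
fit_mcml <- model$MCML(y = y)     # stochastic (full) maximum likelihood
fit_la   <- model$LA(y = y)       # maximum likelihood with Laplace approximation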

3.1 Existing software

Software for fitting generalised linear mixed models has been available in the \proglangR environment (R Core Team, 2021) for many years. The widely-used package \pkglme4 provides maximum likelihood and restricted maximum likelihood (REML) estimators (Bates et al., 2015). \pkglme4 builds on similar methods proposed by, e.g., Bates and DebRoy (2004) and Henderson (1982). For non-linear models it uses Laplacian approximation or adaptive Gaussian quadrature. However, \pkglme4 only allows for compound symmetric (or exchangeable) covariance structures, i.e. group-membership type random effects, and linear, additive linear predictors. The function \codeglmmPQL() in package \pkgMASS (Venables and Ripley, 2002) implements penalised quasi-likelihood (PQL) methods for GLMMs, but is again limited to the same covariance structures and linear, additive model structures. The package \pkgnlme (Pinheiro et al., 2022) also uses REML approaches for linear mixed models, and approximations for models whose mean specification involves non-linear functions of covariates, as discussed in Lindstrom and Bates (1990). \pkgnlme provides a wider range of covariance structures, such as autoregressive functions, which are also available nested within a broader group structure, for linear, additive linear predictors. The package providing Frequentist estimation of GLMMs with the broadest support for different specifications, including many different covariance functions, is \pkgglmmTMB, which uses the \pkgTMB (Template Model Builder) package to support model specification and implement a Laplace approximation to estimate models, although it too only permits linear, additive linear predictors.

Outside of \proglangR, GLMM model fitting is also provided in other popular statistical software packages. Stata offers a range of functions for fitting mixed models. The \codextmixed function offers both maximum likelihood and restricted maximum likelihood methods to estimate linear mixed models with multi-level group membership random effect structures and heteroscedasticity. For GLMMs, there is a wide range of models that allow for random effects, many of which are accessed through the \codemeglm suite of commands. Estimation can use Gauss-Hermite quadrature or Laplace approximation, but random effect structures are generally limited to group membership type effects. For SAS, \codePROC MIXED and \codePROC GLIMMIX provide mixed model fitting using either quadrature or Laplace approximation methods. These commands provide a relatively broad range of random effect covariance structures, including autoregressive, Toeplitz, and exponential structures. \codePROC NLMIXED allows for non-linear functions of fixed and random effects; it uses a Gaussian quadrature method to approximate the integral in the log-likelihood, and a Newton-Raphson step for empirical Bayes estimation of the random effects. However, \codePROC NLMIXED only allows for group membership random effects. Both Stata and SAS also provide a range of options for standard error estimation and correction. Both software packages are proprietary.

We also provide in-built robust and bias-corrected standard error estimates where required. In \proglangR, the packages \pkglmerTest and \pkgpbkrtest provide a range of corrections to standard errors for models fit with \pkglme4. The package architecture means that \pkgglmmrBase can provide these standard errors for the range of models available in the package, and generally in less time, as they do not require refitting of the model. Stata and SAS can calculate bias-corrected and robust standard error estimates with the commands described above.

We also note that there are several packages for Bayesian model fitting in \proglangR, including \pkgbrms (Bürkner, 2017) and \pkgrstanarm, which interface with the Stan statistical software (described below), and \pkgMCMCglmm. Stan is a fully-fledged probabilistic programming language that implements MCMC and allows for a completely flexible approach to model specification (Carpenter et al., 2017). Stan can also provide limited maximum likelihood fitting functionality and related tools for calculating gradients and other model operations.

Evidently, there already exists a good range of support for GLMM model fitting in \proglangR. \pkgglmmrBase may be seen as a complement to existing resources rather than another substitute. As we discuss below, in a maximum likelihood context the existing software all provide approximations rather than full likelihood model fitting. These approximations are typically accurate and fast, but they can also frequently fail to converge, and in more complex models the quality of the approximation may degrade, which has led to methods for bias correction and improvements to these approximations (e.g. Breslow and Lin (1995); Lin and Breslow (1996); Shun and McCullagh (1995); Capanu et al. (2013)). \pkgglmmrBase provides an alternative that can be used to support more complex analyses and flexible covariance specification, and to provide a comparison to ensure approximations are reliable, while also providing an array of other functionality.

3.2 Stochastic Maximum Likelihood Estimation

Markov Chain Monte Carlo Maximum Likelihood (MCML) algorithms are a family of methods for estimating GLMMs using the full likelihood that treat the random effect terms as missing data, which are simulated on each iteration of the algorithm. Estimation can then be achieved to an arbitrary level of precision depending on the number of samples used. Approximate methods such as Laplace approximation or quadrature provide significant computational advantages over such approaches; however, they may trade off the quality of parameter and variance estimates against those from full likelihood approaches. Indeed, the quality of the Laplace approximation for GLMMs can deteriorate if there are smaller numbers of clustering groups, smaller numbers of observations with which to estimate the variance components, or if the random effect variance is relatively large.

The package provides full likelihood model fitting for the range of GLMMs that can be specified using the package, via the MCML algorithms described by McCulloch (1997) or stochastic approximation expectation maximisation (SAEM) (Jank, 2006). Each algorithm has three main steps per iteration: samples of the random effects are drawn using Markov Chain Monte Carlo (MCMC); the mean function parameters are estimated conditional on the simulated random effects; and the parameters of the multivariate Gaussian likelihood of the random effects are estimated. The process is repeated until convergence. We describe each step in turn, along with relevant optimisations.

3.3 MCMC sampling of random effects

Reliable convergence of the algorithm requires a set of $m_i$ independent samples of the random effects on the $i$th iteration, i.e. realisations from the distribution of the random effects conditional on the observed data and with fixed parameters. MCMC methods can be used to sample from the ‘posterior density’ $f_{\mathbf{u}|\mathbf{y}}(\mathbf{u}|\mathbf{y},\beta^{(i)},\phi^{(i)},\theta^{(i)}) \propto f_{\mathbf{y}|\mathbf{u}}(\mathbf{y}|\mathbf{u},\beta^{(i)},\phi^{(i)})f_{\mathbf{u}}(\mathbf{u}|\theta^{(i)})$. To improve sampling, instead of generating samples of $\mathbf{u}$ directly, we instead sample $\mathbf{v}$ from the model:

$$\begin{aligned}\mathbf{y} &\sim G(h(g(X,\hat{\beta}) + \tilde{Z}\mathbf{v});\hat{\phi}) \\ \mathbf{v} &\sim N(0,I)\end{aligned} \qquad (7)$$

where $\tilde{Z} = ZL$ and $I$ is the identity matrix. Once $m_i$ samples of $\mathbf{v}$ are returned, we then transform the samples as $\mathbf{u} = L\mathbf{v}$.

We use Stan for MCMC sampling. Stan implements Hamiltonian Monte Carlo (HMC) with advances such as the No-U-Turn Sampler (NUTS) (Carpenter et al., 2017; Homan and Gelman, 2014), which automatically ‘tunes’ the HMC parameters. Stan can generally achieve a good effective sample size per unit time and so lead to better convergence of the MCML algorithm than standard MCMC algorithms.

Each iteration of the MCMC sampling must evaluate the log-likelihood (and its gradient). Since the elements of both $\mathbf{y}$ and $\mathbf{v}$ are all independent in this version of the model, evaluating the log-likelihood is parallelisable within a single chain, which significantly improves running time.

3.4 Estimation of mean function parameters

There are several methods to generate new estimates of $\beta$ and $\phi$ conditional on $\mathbf{y}$, $\mathbf{u}^{(i)}$, and $\phi^{(i)}$. We aim to minimise the negative log-likelihood:

$$\mathcal{L}(\beta|\mathbf{u}) = E_u\left[-\log f_{\mathbf{y}|\mathbf{u}}(\mathbf{y}|\mathbf{u},\beta,\phi)\right] \approx \frac{1}{m_i}\sum_{k=1}^{m_i}-\log f_{\mathbf{y}|\mathbf{u}}(\mathbf{y}|\mathbf{u}^{(i,k)},\beta,\hat{\phi}) \qquad (8)$$

There are three different approaches:

  • Monte Carlo Newton Raphson (MCNR): If the linear predictor is linear and additive in the data and parameters, then we can use the explicit formulation:

    $$\beta^{(i+1)} = \beta^{(i)} + E_u\left[X^TW(\theta^{(i)},\mathbf{u})X\right]^{-1}X^T\left(E_u\left[W(\theta^{(i)},\mathbf{u})\frac{\partial h^{-1}(\boldsymbol{\eta})}{\partial\boldsymbol{\eta}}(\mathbf{y} - \boldsymbol{\mu}(\beta^{(i)},\mathbf{u}))\,\middle|\,\mathbf{y}\right]\right)$$

    where the expectations are with respect to the random effects. We estimate the expectations using the realised samples from the MCMC step of the algorithm. For many GLMMs the likelihood can be multimodal (Searle et al., 1992), which means the MCNR algorithm may fail to converge at all.

  • Monte Carlo Quasi-Newton (MCQN): As the gradient is relatively inexpensive to calculate, one can make use of quasi-Newton algorithms, including L-BFGS.

  • Monte Carlo Expectation Maximisation (MCEM): We use the BOBYQA algorithm, a numerical, derivative-free optimiser that allows bounds, for the expectation maximisation step (Powell, 2009).

3.5 Estimation of covariance parameters

The final step of each iteration is to generate new estimates of $\theta$ given the samples of the random effects. We aim to minimise

$$\mathcal{L}(\theta|\mathbf{u}) = E_{\mathbf{u}}\left[-\log(f_{\mathbf{u}}(\mathbf{u}|\theta))\right] \approx \frac{1}{m_i}\sum_{k=1}^{m_i}-\log(f_{\mathbf{u}}(\mathbf{u}^{(i,k)}|\theta)) \qquad (9)$$

The multivariate Gaussian density is:

$$\begin{aligned}\log f_{\mathbf{u}}(\mathbf{u}|\theta) &= -\frac{m}{2}\log(2\pi) - \frac{1}{2}\log(|D|) - \frac{1}{2}\mathbf{u}^TD^{-1}\mathbf{u} \\ &= -\frac{m}{2}\log(2\pi) - \frac{1}{2}\sum_{j=1}^{n}\log(L_{jj}) - \frac{1}{2}\mathbf{u}^TL^{-T}L^{-1}\mathbf{u}\end{aligned}$$

where $L_{jj}$ is the $j$th diagonal element of $L$.

As described in Section 2.4, we can take two approaches to the Cholesky factorisation of the matrix $D$: either exploiting the block diagonal structure or using sparse matrix methods. In both cases the quadratic form can be rewritten as $\mathbf{u}^TD^{-1}\mathbf{u} = \mathbf{u}^TL^{-T}L^{-1}\mathbf{u} = \mathbf{z}^T\mathbf{z}$, so we can obtain $\mathbf{z}$ by solving the linear system $L\mathbf{z} = \mathbf{u}$ using forward substitution, which, if $L$ is also sparse, provides efficiency gains. The log determinant is $\log(|D|) = \sum\log(\text{diag}(L))$. When using the block diagonal formulation, we calculate the form:

$$\log f_{\mathbf{u}}(\mathbf{u}|\theta) = \sum_{b=0}^{B}\left(-\frac{m}{2}\log(2\pi) - \frac{1}{2}\sum_{i=r_b}^{s_b}\log(\text{diag}(L)_i) - \frac{1}{2}\mathbf{z}^T_{r_b:s_b}\mathbf{z}_{r_b:s_b}\right)$$

where $r_b$ and $s_b$ indicate the start and end indices of the vector, respectively, corresponding to block $b$.

The gradient of the log-likelihood with respect to $\theta$ is:

$$\begin{aligned}\nabla_\theta\log f_{\mathbf{u}}(\mathbf{u}|\theta) &= -\frac{1}{2}\text{trace}\left(D^{-1}\frac{\partial D}{\partial\theta}\right) - \frac{1}{2}\mathbf{u}^TD^{-1}\frac{\partial D}{\partial\theta}D^{-1}\mathbf{u} \\ &= -\frac{1}{2}\text{trace}\left(L^{-T}L^{-1}\frac{\partial D}{\partial\theta}\right) - \frac{1}{2}\mathbf{u}^TL^{-T}L^{-1}\frac{\partial D}{\partial\theta}L^{-T}L^{-1}\mathbf{u}\end{aligned}$$

The quadratic form can similarly be evaluated by applying forward and backward substitution. The L-BFGS-B algorithm can then be used to minimise the negative log-likelihood. However, the inversion of $D$ and the calculation of its partial derivatives may make the gradient computationally demanding. A derivative-free algorithm, BOBYQA, while requiring more function evaluations, may be more efficient at solving for the parameter values. Both methods are available.

3.6 Stochastic Approximation Expectation Maximisation

The efficiency of the MCML algorithms can be improved in two ways: first, one can dynamically alter the number of MCMC samples per iteration, which is discussed in the next section, and second, one can make use of MCMC sample information from previous iterations, reducing the number of samples required per iteration. Stochastic Approximation Expectation Maximisation (SAEM) is a Robbins-Monro-type algorithm that can be used to approximate the log-likelihoods (8) and (9) (Jank, 2006). For $\beta$ the log-likelihood on the $(i+1)$th iteration is:

^(i+1)=^(i)+γ(1mik=1milogf𝐲|𝐮(𝐲|𝐮(i,k),β,ϕ^)^(i))\hat{\mathcal{L}}^{(i+1)}=\hat{\mathcal{L}}^{(i)}+\gamma\left(\frac{1}{m_{i}}\sum_{k=1}^{m_{i}}-\log f_{\mathbf{y}|\mathbf{u}}(\mathbf{y}|\mathbf{u}^{(i,k)},\beta,\hat{\phi})-\hat{\mathcal{L}}^{(i)}\right)

and equivalently for $\theta$, where $\gamma\propto\left(\frac{1}{i}\right)^{\alpha}$ for $\alpha\in[0.5,1)$. This approach makes use of all the MCMC samples from every iteration, where the contribution of previous values diminishes over the iterations at a rate depending on the parameter $\alpha$; larger values of $\alpha$ result in faster ‘forgetting’. For some applications, Ruppert-Polyak averaging can further improve convergence of the algorithms (Polyak and Juditsky, 1992), where the estimate of the log-likelihood on iteration $i$ is:

\bar{\hat{\mathcal{L}}}^{(i)}=\frac{1}{i}\sum_{j=1}^{i}\hat{\mathcal{L}}^{(j)}
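As a minimal sketch of this update in R, with illustrative names and where ll_samples holds the values of $-\log f_{\mathbf{y}|\mathbf{u}}(\mathbf{y}|\mathbf{u}^{(i,k)},\beta,\hat{\phi})$ for iteration $i$:

{CodeChunk} {CodeInput}
# Sketch of the stochastic approximation (SAEM) update of the log-likelihood estimate
saem_update <- function(ll_hat, ll_samples, i, alpha = 0.8) {
  gamma <- (1 / i)^alpha   # step size; larger alpha gives faster 'forgetting'
  ll_hat + gamma * (mean(ll_samples) - ll_hat)
}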

3.7 Stopping Criteria and MCMC Sample Sizes

We provide two types of rule for terminating the algorithm. First, one can monitor the differences in the parameter estimates between successive iterations and terminate the algorithm when the largest difference falls below some tolerance. However, this method may lead to premature termination due to Monte Carlo error. A running mean of the estimates could mitigate this issue; a perhaps preferable approach is to assess the change in the estimate of the log-likelihood:

Δ^(i)=^(i+1)^(i)\Delta\hat{\mathcal{L}}^{(i)}=\hat{\mathcal{L}}^{(i+1)}-\hat{\mathcal{L}}^{(i)}

An expectation maximisation algorithm is guaranteed to improve the objective function value over its iterations. However, the estimates of the log-likelihood are subject to Monte Carlo error, and so $\Delta\hat{\mathcal{L}}^{(i)}$ may be positive or negative near convergence. Caffo et al. (2005) proposed a stopping criterion that monitors the probability of convergence. The estimated upper bound for the difference is:

\hat{U}^{(i)}=\Delta\hat{\mathcal{L}}^{(i)}+z_{\gamma}\hat{\sigma}_{\Delta}

where zγz_{\gamma} is the γ\gamma quantile of a standard Gaussian distribution and σ^Δ\hat{\sigma}_{\Delta} is the estimated standard deviation of Δ^(i)\Delta\hat{\mathcal{L}}^{(i)}. This upper bound will be negative with probability γ\gamma at convergence.
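A sketch of this check in R, with illustrative names, is:

{CodeChunk} {CodeInput}
# Declare convergence when the estimated upper bound on the change in
# log-likelihood is negative with probability gamma
ll_converged <- function(delta_ll, sd_delta, gamma = 0.95) {
  upper <- delta_ll + qnorm(gamma) * sd_delta
  upper < 0
}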

Caffo et al. (2005) use a similar argument to dynamically alter the number of MCMC samples on each iteration to achieve convergence with probability $\gamma$. They propose:

m_{i+1}=\max\left(m_{i},\;\left(\mathcal{L}(\beta)+\mathcal{L}(\theta)\right)\frac{z_{\gamma}+z_{\epsilon}}{\Delta\hat{\mathcal{L}}^{(i)}}\right)

We provide the MCML algorithms both with and without dynamic MCMC sample sizes.

3.8 Model Estimation

The member function \codeMCML fits the model. There are multiple options to control the algorithm and set its parameters; most options are determined by the relevant parameters of the \codeModel object. The default fitting method is SAEM without Ruppert-Polyak averaging, and with the log-likelihood stopping rule with probability 0.95, as this has been found to provide the most efficient and reliable model fitting in many tests.
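For example, with a \codeModel object and outcome vector as in the examples of Section 4, a fit with the defaults and a fit using MCEM might look as follows (the option values are illustrative; Section 4.1 gives a full worked example):

{CodeChunk} {CodeInput}
fit <- model$MCML(y)                       # default: SAEM with the log-likelihood stopping rule
fit_mcem <- model$MCML(y, method = "mcem") # switch to MCEM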

3.9 Laplace Approximation

We also provide model fitting using a Laplace approximation to the likelihood with the \codeModel member function \codeLA(). We include this approximation as it often provides a faster means of fitting the models in this package. These estimates may be of interest in their own right, but can also serve as useful starting values for the MCML algorithm. We approximate the log-likelihood of the re-parameterised model given in Equation (7) as:

\log L(\beta,\phi,\theta,\mathbf{v}|\mathbf{y})\approx-\frac{1}{2}\log|I+L^{T}Z^{T}WZL|+\sum_{i=1}^{n}\log(f_{y|\mathbf{u}}(y_{i}|\mathbf{v},\beta,\phi))-\frac{1}{2}\mathbf{v}^{T}\mathbf{v} \qquad (10)

As before, we use the first-order approximation (with or without attenuation), $W^{-1}(\theta,\mathbf{u})=\text{diag}\left(\left(\frac{\partial h^{-1}(\boldsymbol{\eta})}{\partial\boldsymbol{\eta}}\right)^{-2}\text{Var}(\mathbf{y}|\mathbf{u})\right)$. We iterate between obtaining estimates of $\beta$, $\mathbf{v}$, and $\phi$ and obtaining estimates of $\theta$ until the values converge. We provide two iterative algorithms for model fitting, using a similar pattern to the MCNR and MCEM algorithms. First, the values of $\beta$ and $\mathbf{v}$ are updated using either a scoring algorithm or numerical optimisation. For the former method the score system we use is:

\begin{bmatrix}X^{T}WX&X^{T}WZL\\ L^{T}Z^{T}WX&L^{T}Z^{T}WZL+I\end{bmatrix}\begin{bmatrix}\beta\\ \mathbf{v}\end{bmatrix}=\begin{bmatrix}X^{T}W\frac{\partial\boldsymbol{\eta}}{\partial\boldsymbol{\mu}}(\mathbf{y}-\boldsymbol{\mu})\\ L^{T}Z^{T}W\frac{\partial\boldsymbol{\eta}}{\partial\boldsymbol{\mu}}(\mathbf{y}-\boldsymbol{\mu})\end{bmatrix} \qquad (11)

where $\frac{\partial\boldsymbol{\eta}}{\partial\boldsymbol{\mu}}$ is a diagonal matrix with elements $\frac{\partial\eta_{i}}{\partial\mu_{i}}$. Where there are non-linear functions of parameters in the linear predictor, a first-order approximation is used in which $X$ is replaced by an $n\times P$ matrix with row $i$ equal to $J_{\beta}$ evaluated at the values of $x_{i}$. This system of score equations is very similar to those used in other software packages and articles on approximate inference for GLMMs (e.g. Bates and DebRoy (2004); Bates et al. (2015); Breslow and Clayton (1993)). The alternative approach fits $\beta$ and $\mathbf{v}$ by directly maximising the approximate log-likelihood in Equation (10) using the BOBYQA algorithm. In the second step, the variance parameters $\theta$ are updated by again maximising (10), fixing $\beta$ and $\mathbf{v}$ at their current estimates.
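As a sketch of a typical workflow (object names as in the examples of Section 4), one might fit with the Laplace approximation first and then refine with MCML, since the estimated parameters stored in the model are reused as starting values for subsequent fits:

{CodeChunk} {CodeInput}
fit_la <- model$LA(y)    # fast Laplace approximation fit
fit_ml <- model$MCML(y)  # full MCML fit starting from the Laplace estimates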

3.10 Standard errors

There are several options for the standard errors of $\hat{\beta}$ and $\hat{\theta}$ returned by either \codeLA or \codeMCML, which are also accessible through a \codeModel object. The currently implemented methods are described here.

3.10.1 GLS standard errors

The default GLS standard errors are given by the square root of the diagonal of (XTΣ1X)1(X^{T}\Sigma^{-1}X)^{-1}. The standard errors for the covariance parameter estimates are estimated from the inverse expected information matrix for θ\theta, MθM_{\theta}, the elements of which are given by:

[Mθ]i,j=tr(Σ1θiΣΣ1θjΣ)[M_{\theta}]_{i,j}=\text{tr}\left(\frac{\partial\Sigma^{-1}}{\partial\theta_{i}}\Sigma\frac{\partial\Sigma^{-1}}{\partial\theta_{j}}\Sigma\right)

Using the expression for Σ\Sigma given in Section 2.9, we can write

Ai=Σθi=ZDθiZTA_{i}=\frac{\partial\Sigma}{\partial\theta_{i}}=Z\frac{\partial D}{\partial\theta_{i}}Z^{T}

for parameters of the covariance function, and Ai=IA_{i}=I for the variance parameter of a Gaussian distribution. Then, the information matrix for the covariance parameters has elements:

[Mθ]i,j=tr(Σ1AiΣ1Aj)[M_{\theta}]_{i,j}=\text{tr}\left(\Sigma^{-1}A_{i}\Sigma^{-1}A_{j}\right)

and equivalently for the scale parameter such as σ2\sigma^{2}. The partial derivatives of matrix DD can then readily be obtained using the auto-differentiation scheme implemented in the calculation of the elements of the matrix.
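As a simple illustration of the default standard errors, they can be computed directly from the design matrix and marginal covariance; the sketch below uses illustrative stand-ins, as the package computes these quantities internally.

{CodeChunk} {CodeInput}
# Sketch: GLS standard errors for beta given X and Sigma
gls_se <- function(X, Sigma) {
  M_beta <- t(X) %*% solve(Sigma, X)   # information matrix X^T Sigma^{-1} X
  sqrt(diag(solve(M_beta)))
}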

3.11 Small sample corrections

The GLS standard errors evaluated at the maximum likelihood estimates of the model parameters are known to exhibit a small-sample bias and to underestimate the standard error when, for example, the number of higher-level groups or clusters is small. The small sample bias exists for two reasons: the standard errors fail to take into account the variability arising from estimating the variance parameters $\theta$ and $\phi$; and the GLS estimator is itself a biased estimator of the variance of the fixed effect parameters. The standard GLS standard error can therefore understate the sampling variability of $\hat{\beta}$, resulting in overly narrow confidence intervals. A range of corrections have been proposed in the literature, typically with regards to the degrees of freedom of reference distributions for related test statistics (e.g. Satterthwaite (1946); Kenward and Roger (1997)). We can also use the tools in this package to generate a bias-corrected variance-covariance matrix for $\hat{\beta}$ using the approach proposed by Kenward and Roger (1997). The correction is valid for restricted maximum likelihood (REML) estimates of $\theta$ and $\phi$, as the standard maximum likelihood estimates of these parameters are biased downwards due to not accounting for the loss of degrees of freedom from estimating $\beta$ (Harville, 1977). At the time of writing, we have not implemented REML with the Laplace approximation methods described in the preceding section. However, one can view the estimates of the variance components from the MCML algorithms as representing REML estimates, since the estimated sufficient statistics for $\theta$ do not depend on $\beta$ (Laird and Ware, 1982). Thus, using these estimates, we apply the small sample bias correction by modifying $M_{\theta}$ as:

[Mθ]i,jcorr=[Mθ]i,jtr(MβXTΣ1AiΣ1[2IXMβXTΣ1]AjΣ1X)[M_{\theta}]_{i,j}^{corr}=[M_{\theta}]_{i,j}-\text{tr}\left(M_{\beta}X^{T}\Sigma^{-1}A_{i}\Sigma^{-1}[2I-XM_{\beta}X^{T}\Sigma^{-1}]A_{j}\Sigma^{-1}X\right)

Then, letting K=(Mθcorr)1K=(M_{\theta}^{corr})^{-1} and

Pi\displaystyle P_{i} =XTΣ1AiΣ1X\displaystyle=-X^{T}\Sigma^{-1}A_{i}\Sigma^{-1}X
Qij\displaystyle Q_{ij} =XTΣ1AiΣ1AjΣ1X\displaystyle=X^{T}\Sigma^{-1}A_{i}\Sigma^{-1}A_{j}\Sigma^{-1}X
Rij\displaystyle R_{ij} =XTΣ12ΣθiθjΣ1X\displaystyle=X^{T}\Sigma^{-1}\frac{\partial^{2}\Sigma}{\partial\theta_{i}\partial\theta_{j}}\Sigma^{-1}X

the bias corrected variance covariance matrix for β\beta is

V_{\beta}^{corr}=M_{\beta}^{-1}+2M_{\beta}^{-1}\left(\sum_{i=1}^{r}\sum_{j=1}^{r}K_{ij}(Q_{ij}-P_{i}M_{\beta}^{-1}P_{j}-\frac{1}{4}R_{ij})\right)M_{\beta}^{-1}

The degrees of freedom correction is also given in Kenward and Roger (1997) and is the same degrees of freedom correction originally proposed by Satterthwaite (1946). The Kenward-Roger correction, though, can over-estimate the standard errors in very small samples in some cases. Kenward and Roger (2009) proposed an improved correction for covariance functions that are non-linear in their parameters (such as the autoregressive or exponential functions), which adds an additional adjustment factor. We have implemented these corrections in the package; they can be returned directly from a \codeModel object using the \codesmall_sample_correction() member function, or they can be returned during model fitting by selecting the appropriate option for the \codese argument of the \codeMCML and \codeLA functions. For use in other calculations one can also retrieve the first- and second-order partial derivatives of $\Sigma$ with respect to the covariance function parameters using \codepartial_sigma().
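A sketch of the two routes to the corrected standard errors described above, with object names as in the examples of Section 4 (the call to \codesmall_sample_correction() here assumes no arguments are required):

{CodeChunk} {CodeInput}
fit_kr <- model$MCML(y, se = "kr")         # Kenward-Roger corrected standard errors at fit time
vcov_kr <- model$small_sample_correction() # corrected variance-covariance matrix after fitting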

3.12 Modified Box correction

Finally, we also provide the “modified Box correction” for Gaussian-identity GLMMs given by Skene and Kenward (2010).

3.13 Additional inputs and outputs

One can specify weights for the model when generating a new \codeModel object. For binomial models the number of trials can also be passed to the model via the \codetrials argument, and offsets can be given in the same way via the \codeoffset argument.

The \codeMCML and \codeLA functions return an object of (S3) class \codemcml for which print and summary methods are provided. The print function returns standard regression output and basic diagnostics. We implement the conditional Akaike Information Criterion described by Vaida and Blanchard (2005) for GLMMs, and approximate conditional and marginal R-squared values using the method described by Nakagawa and Schielzeth (2013) for GLMMs. The object returned by \codeMCML also contains other potentially useful output. In particular, there is often interest in the random effects themselves; as these are simulated during the MCML algorithm, the final set of $m$ samples is returned as a $Q\times m$ matrix to facilitate further analysis.
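For example, with a fitted object called fit for illustration:

{CodeChunk} {CodeInput}
print(fit)    # standard regression output and basic diagnostics
summary(fit)  # includes the conditional AIC and approximate R-squared values
str(fit)      # inspect the returned list, including the Q x m matrix of random effect samples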

3.14 Prediction

One can generate predictions from the model at new values of XX, ZZ, and the variables that define the covariance function. Conditional on the estimated or simulated values of 𝐮\mathbf{u}, the distribution of the random effects at new values is:

𝐮newN(D01D001𝐮,D11D01D001D01T)\mathbf{u}_{new}\sim N\left(D_{01}D_{00}^{-1}\mathbf{u},D_{11}-D_{01}D_{00}^{-1}D_{01}^{T}\right)

where $D_{00}$ is the covariance matrix of the random effects at the observed values, $D_{11}$ is the covariance matrix at the new values, and $D_{01}$ is the covariance between the random effects at the observed and new values. Where there are multiple samples of $\mathbf{u}$, the conditional mean is averaged across them. Following a model fit, we can generate the linear predictor, mean, and covariance at new locations using the member function \code$predict(newdata). This differs from the functionality of the function \code$fitted(), which generates the linear predictor (with or without random effects) for the existing model at the observed data locations, and \code$sim_data(), which simulates outcome data at the existing data values.
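A brief sketch of these three member functions, where new_df is an illustrative data frame containing the same variables as the fitted model (a fuller example is given in Section 4.2):

{CodeChunk} {CodeInput}
preds <- model$predict(newdata = new_df) # linear predictor and random effects at new locations
lp <- model$fitted()                     # linear predictor at the observed data
y_new <- model$sim_data()                # simulate outcome data at the observed data values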

4 Examples

We provide two examples illustrating model fitting and related functionality to demonstrate how \pkgglmmrBase may be useful in certain statistical workflows.

4.1 Cluster randomised trial

We consider a stepped-wedge cluster randomised trial (see Hemming et al. (2020), for example) with six cluster sequences and seven time periods, and ten individuals per cluster period cross-sectionally sampled. All clusters start in the control state and in each period one cluster switches to the intervention state. We simulate data from a Poisson-log GLMM with cluster and cluster-period exchangeable covariance function for the example. The Poisson-log model is, for individual ii in cluster jj at time tt:

y_{ijt} \sim \text{Poisson}(\lambda_{ijt})
\lambda_{ijt} = \exp(\beta_{0}d_{jt}+\beta_{1}I(t=1)+\dots+\beta_{7}I(t=7)+\gamma_{jt})

where $d_{jt}$ is an indicator for treatment status in cluster $j$ in period $t$, $I(.)$ is the indicator function, and $\gamma_{jt}$ is a random effect. For the data generating process we use $\beta_{0}=0.5$ and $\beta_{1},...,\beta_{7}$ ranging from -1.5 to -0.3. We specify the exchangeable correlation structure $\gamma_{jt}=\gamma_{1,j}+\gamma_{2,jt}$ where $\gamma_{1,j}\sim N(0,0.3^{2})$ and $\gamma_{2,jt}\sim N(0,0.15^{2})$. See Li et al. (2021) for a discussion of such models in the context of cluster randomised trials.

We can specify the model as follows, where we have included the parameter values: {CodeChunk} {CodeInput}
data <- nelder(~ (cl(6) * t(7)) > i(10))
data$int <- 0
data[data$t > data$cl, 'int'] <- 1

model <- Model$new(
  formula = ~ int + factor(t) - 1 + (1|gr(cl)) + (1|gr(cl,t)),
  covariance = c(0.05, 0.01),
  mean = c(0.5, seq(-1.5, -0.3, by = 0.2)),
  data = data,
  family = poisson()
)
y <- model$sim_data()
The final line generates random simulated data.

The code to fit these models in the popular \pkglme4 and \pkgglmmTMB packages is: {CodeChunk} {CodeInput}
lme4::glmer(y ~ int + factor(t) - 1 + (1|cl/t), family = poisson(), data = data)

glmmTMB::glmmTMB(y ~ int + factor(t) - 1 + diag(1|cl/t), data = data, family = poisson())
We fit the models in \pkgglmmrBase specified above using the following code for each of the fits. Note that after fitting the model, the estimated parameters are stored in the model and will be used for subsequent model fits. {CodeChunk} {CodeInput}
model$mcmc_options$samps <- 50  # set the number of samples to 50
fit3 <- model$MCML(y)
model$mcmc_options$samps <- 500
# MCEM, L-BFGS, convergence when difference between parameter estimates < 1e-3
fit4 <- model$MCML(y, method = "mcem", algo = 1, conv.criterion = 1, tol = 1e-3)
Note that the number of MCMC samples is relatively small for the SAEM algorithm, as samples are stored from all previous iterations; we increase the number of samples for the MCEM algorithm.

One can obtain bias-corrected standard errors for these models either by fitting the models with the \codese="kr" option, or by retrieving the corrected standard errors after fitting using \codemodel$small_sample_correction().

4.2 Geospatial data analysis with approximate covariance

As described in Section 2.5 and elsewhere, fitting models with large, dense covariance matrices is computationally intensive. We provide several approximations in this package to reduce fitting times. Here, we provide an example using the nearest neighbour Gaussian process, and generating predictions across an area of interest.

We generate 600 random points in the square $[-1,1]\times[-1,1]$. We then simulate data from a Gaussian-identity model with a spatial Gaussian process with exponential covariance function: {CodeChunk} {CodeInput}
n <- 600
df <- data.frame(x = runif(n,-1,1), y = runif(n,-1,1))
sim_model <- Model$new(
  formula = ~ (1|fexp(x,y)),
  data = df,
  covariance = c(0.25, 0.3),
  mean = c(0),
  family = gaussian()
)
y_sim <- sim_model$sim_data()
We then generate a new model for the analysis with a nearest neighbour Gaussian process covariance: {CodeChunk} {CodeInput}
analysis_model <- Model$new(
  formula = ~ (1|nngp_fexp(x,y)),
  data = df,
  family = gaussian()
)
analysis_model$set_trace(1)  # for verbose output
analysis_model$mcmc_options$samps <- 50
fit_sp <- analysis_model$MCML(y_sim, algo = 2)
We can use the model with the fitted parameters to then generate predictions across the whole area of interest, which is useful for plotting and other applications: {CodeChunk} {CodeInput}
# a regular lattice across the area of interest
plot_data <- expand.grid(x = seq(-1, 1, by = 0.05), y = seq(-1, 1, by = 0.05))
# this will return a list with the linear predictor and latent effects
preds <- analysis_model$predict(newdata = plot_data)
plot_data$value <- preds$re_parameters$vec
require(ggplot2)
ggplot(data = plot_data, aes(x = x, y = y, fill = value)) +
  geom_tile() +
  scale_fill_viridis_c()

5 C-Optimal Experimental Designs with glmmrOptim

The \pkgglmmrOptim package provides a set of algorithms and methods to solve the c-optimal experimental design problem for GLMMs. More detail on the methods in this package and a comparison of their performance can be found in Watson and Pan (2022).

We assume there are $n$ total possible observations indexed by $i\in[1,...,n]$ whose data generating process is described by a GLMM as in Section 1.1. The observations (indexes) are grouped into $J$ ‘units’, for example clusters or individual people, which are labelled as $e_{j}\subset[1,...,n]$. These units may contain just a single observation/index. The set of all units is the design space $\mathcal{D}:=\{e_{1},...,e_{J}\}$ and a ‘design’ is $d\subset\mathcal{D}$. The optimal experimental design problem is then to find the design $d$ of size $J^{\prime}<J$ that minimises the c-optimality function:

g(d)=cTMd1cg(d)=c^{T}M_{d}^{-1}c (12)

where $M_{d}$ is the information matrix, described in Equation (6), associated with design $d$. There is no exact solution to this problem, in part because of the complexity resulting from the non-zero correlation between observations and experimental units.

There are several algorithms that can be used to generate approximate solutions. A full discussion of this problem in the context of GLMMs and evaluation of relevant algorithms is given in (Watson and Pan, 2022) (see also Fedorov (1972); Wynn (1970); Nemhauser and Wolsey (1978); Fisher et al. (1978)). Here, we briefly describe the relevant algorithms and their implementation in the \pkgglmmrOptim package.

5.1 Existing software

The package \pkgglmmrOptim provides algorithms for identifying approximate c-optimal experimental designs when the observations and experimental units may be correlated. In \proglangR, there are several packages that provide related functionality. The \pkgskpr package provides D-, I-, Alias-, A-, E-, T-, and G-optimal, but not c-optimal, designs for models including those with correlated observations, but only where the experimental units are uncorrelated. The \proglangR package \pkgAlgDesign also provides algorithms for estimating D-, A-, and I-optimal experimental designs, including for models with correlated observations, but again without correlation between experimental units. \pkgAlgDesign implements a version of the greedy algorithm (described below) for D-optimality. Both packages utilise, among other approaches, the approximation described below in Section 5.3. Other relevant packages include \pkgOptimalDesign, which uses the commercial software \pkgGurobi, and the packages \pkgFrF2 and \pkgconf.design, which consider factorial designs specifically. Only \pkgskpr provides both optimal experimental design algorithms and associated methods like Monte Carlo simulation and power calculation. \pkgglmmrBase can provide power calculations using the functions discussed in Section 2.12, as well as the other suite of methods to interrogate and fit relevant models once an optimal design has been identified, which emphasises the usefulness of the linked set of packages we present.

5.2 Combinatorial Optimisation Algorithms

There are three basic combinatorial algorithms relevant to the c-optimal design problem. ‘Combinatorial’ here means choosing JJ^{\prime} discrete elements (experimental units) from the set of JJ to minimise Equation (12).

Algorithm 1 describes the ‘local search algorithm’. This algorithm starts with a random design $d_{0}$ of size $J^{\prime}$ and then makes the optimal swap between an experimental unit in the design and one not currently in the design. It proceeds until no improving swap can be made. Algorithm 2 shows the ‘reverse greedy search algorithm’, which instead starts from the complete design space and iteratively removes the worst experimental unit until a design of size $J^{\prime}$ is achieved. A third algorithm, the ‘greedy search’, is also included in the package; it starts from a small number of units and iteratively adds the best unit. However, this algorithm performs worse than the other two in empirical comparisons and so is not discussed further here. We offer these three algorithms in the \pkgglmmrOptim package and allow the user to run any combination of them in succession. While such combinations do not necessarily have any theoretical guarantees, they may be of interest to users of the package. A more complete discussion of the methods provided by this package can be found in Watson and Pan (2022), with further examples in Watson et al. (2023).

Algorithm 1 Local search algorithm
Let $d_{0}$ be a design of size $J^{\prime}$
Set δ=1\delta=1 and dd0d\leftarrow d_{0}
while δ>0\delta>0 do
  for all elements $e_{j}\in d$ and $e_{j^{\prime}}\in\mathcal{D}/d$ do Calculate $g(d/\{e_{j}\}\cup\{e_{j^{\prime}}\})$
  end for
  Set dargminj,jg(d/{ej}{ej})d^{\prime}\leftarrow\operatorname*{argmin}_{j,j^{\prime}}g(d/\{e_{j}\}\cup\{e_{j^{\prime}}\})
  $\delta=g(d)-g(d^{\prime})$
  if δ>0\delta>0  then ddd\leftarrow d^{\prime}
  end if
end while
Algorithm 2 Reverse greedy search algorithm
Set dDd\leftarrow D
Set k=|D|k=|D|;
while k>Jk>J^{\prime} do
  for all elements $e_{j}\in d$ do Calculate $g(d/\{e_{j}\})$
  end for
  Set dd/argminejg(d/{ej})d\leftarrow d/\operatorname*{argmin}_{e_{j}}g(d/\{e_{j}\})
  kk1k\leftarrow k-1
end while

5.2.1 Computation

We use several methods to improve computational time to execute these algorithms. The most computationally expensive step is the calculation of g(d)g(d) after adding or swapping an experimental unit, since the calculation of the information matrix requires inversion of the covariance matrix Σ\Sigma (see Equation (6)). However, we can avoid inverting Σ\Sigma on each step by using rank-1 up- and down-dating.

For a design $d$ with $J^{\prime}$ observations and inverse covariance matrix $\Sigma^{-1}_{d}$, we can obtain the inverse covariance matrix $\Sigma^{-1}_{d^{\prime}}$ of the design with one observation removed, $d^{\prime}=d/\{i\}$, as follows. Without loss of generality we assume that the observation to be removed corresponds to the last row/column of $\Sigma^{-1}_{d}$. We can write $\Sigma^{-1}_{d}$ as

Σd1=B=(CffTe)\Sigma^{-1}_{d}=B=\begin{pmatrix}C&f\\ f^{T}&e\\ \end{pmatrix}

where CC is the (J1)×(J1)(J^{\prime}-1)\times(J^{\prime}-1) principal submatrix of BB, ff is a column vector of length (J1)(J^{\prime}-1) and ee is a scalar. Then,

Σd/{i}1=G=CffT/e\Sigma^{-1}_{d/\{i\}}=G=C-ff^{T}/e

For a design $d$ with $J^{\prime}$ observations and inverse covariance matrix $\Sigma^{-1}_{d}$, we now aim to obtain the inverse covariance matrix of the design $d^{\prime}=d\cup\{i^{\prime}\}$. Recall that $Z$ is an $n\times Q$ design effect matrix with each row corresponding to an observation. We want to generate $H^{-1}=\Sigma_{d^{\prime}}^{-1}$. Note that:

H=Σd=(G1kkTh)H=\Sigma_{d^{\prime}}=\begin{pmatrix}G^{-1}&k\\ k^{T}&h\\ \end{pmatrix}

where $k=Z_{i\in d}DZ_{i^{\prime}}^{T}$ is the column vector corresponding to the elements of $\Sigma=W^{-1}+ZDZ^{T}$ with rows in the current design and column corresponding to $i^{\prime}$, and $h$ is the scalar $W^{-1}_{i^{\prime},i^{\prime}}+Z_{i^{\prime}}DZ_{i^{\prime}}^{T}$. Also now define:

H=(Σd00h)H^{*}=\begin{pmatrix}\Sigma_{d}&0\\ 0&h\end{pmatrix}

so that

H1=(Σd1001/h)H^{*-1}=\begin{pmatrix}\Sigma^{-1}_{d}&0\\ 0&1/h\end{pmatrix}

and

H=(Σdk0h)H^{**}=\begin{pmatrix}\Sigma_{d}&k\\ 0&h\end{pmatrix}

and $u=(k^{T},0)^{T}$ and $v=(0,...,0,1)^{T}$, both of which are column vectors of length $J^{\prime}+1$. So we can get $H^{**}$ from $H^{*}$ using a rank-1 update as $H^{**}=H^{*}+uv^{T}$ and similarly $H=H^{**}+vu^{T}$. Using the Sherman-Morrison formula:

H1=H1H1uvTH11+vTH1uH^{**-1}=H^{*-1}-\frac{H^{*-1}uv^{T}H^{*-1}}{1+v^{T}H^{*-1}u}

and

H1=H1H1vuTH11+uTH1vH^{-1}=H^{**-1}-\frac{H^{**-1}vu^{T}H^{**-1}}{1+u^{T}H^{**-1}v}

So we have calculated the updated inverse with only matrix-vector multiplication, which has complexity O(n2)O(n^{2}) rather than the O(n3)O(n^{3}) required if we just took a new inverse of matrix Σd\Sigma_{d^{\prime}}.
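To make the up- and down-dating steps concrete, the following R sketch mirrors the formulae above for a generic symmetric positive definite matrix; it is illustrative only, as the package performs these updates internally in C++.

{CodeChunk} {CodeInput}
# Down-date: inverse after removing the last observation, given B = Sigma_d^{-1}
downdate_inverse <- function(B) {
  n <- nrow(B)
  C <- B[-n, -n, drop = FALSE]
  f <- B[-n, n]
  e <- B[n, n]
  C - tcrossprod(f) / e
}

# Up-date: inverse after adding an observation with covariance vector k and variance h,
# using two applications of the Sherman-Morrison formula
sherman_morrison <- function(Ainv, u, v) {
  Ainv - (Ainv %*% u) %*% (t(v) %*% Ainv) / c(1 + t(v) %*% Ainv %*% u)
}
update_inverse <- function(Sigma_d_inv, k, h) {
  n <- nrow(Sigma_d_inv) + 1
  Hstar_inv <- rbind(cbind(Sigma_d_inv, 0), c(rep(0, n - 1), 1 / h))
  u <- c(k, 0)
  v <- c(rep(0, n - 1), 1)
  Hss_inv <- sherman_morrison(Hstar_inv, u, v)  # H**^{-1}
  sherman_morrison(Hss_inv, v, u)               # H^{-1} = Sigma_{d'}^{-1}
}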

We include other steps to improve efficiency. We check whether any experimental units are repeated; if so, we avoid evaluating a swap of an experimental unit for an identical copy of itself. Internally, the program only stores the unique combinations of rows of $X$ and $Z$ and tracks the counts of each in or out of the design. Finally, when the function is executed (see Section 5.7), a check is performed to determine whether the experimental units are all uncorrelated with one another. If they are uncorrelated then we can implement an alternative approach to calculating $g(d)$, since we can write the information matrix as:

Md=j=1JMej=j=1JXiejTΣiej1XiejM_{d}=\sum_{j=1}^{J^{\prime}}M_{e_{j}}=\sum_{j=1}^{J^{\prime}}X_{i\in e_{j}}^{T}\Sigma^{-1}_{i\in e_{j}}X_{i\in e_{j}} (13)

where we use XiejX_{i\in e_{j}} to indicate the rows of XX in condition eje_{j}, and Σiej1\Sigma^{-1}_{i\in e_{j}} the submatrix of Σ1\Sigma^{-1} in eje_{j}. Thus, rather than using a rank-1 up- or down-date procedure iterated over the observations in each experimental unit, we can calculate the ‘marginal’ information matrix associated with each experimental unit and add or subtract it from MdM_{d} as required. This method is generally faster as the number of observations per experimental unit is often much smaller than JJ^{\prime}.

5.3 Approximate Unit Weights

If the experimental units are uncorrelated, we can use an alternative to combinatorial optimisation. We assume that all the experimental units in $\mathcal{D}$ are unique (the software automatically removes duplicates when using this method), and we place a probability measure over them, $\phi=\{(\phi_{j},X_{i\in e_{j}},Z_{i\in e_{j}}):j=1,...,J,\phi_{j}\in[0,1]\}$ where $\sum_{j}\phi_{j}=1$; $\phi$ is then an approximate design. The weights $\phi_{j}$ can be interpreted as the proportion of ‘effort’ placed on each experimental unit. We can rewrite Equation (13) for the information matrix of this design as:

Mϕ=j=1JXiejTΣiej1XiejϕjM_{\phi}=\sum_{j=1}^{J}X_{i\in e_{j}}^{T}\Sigma^{-1}_{i\in e_{j}}X_{i\in e_{j}}\phi_{j} (14)

where the subscript ieji\in e_{j} indicates the submatrix relating to the indexes in eje_{j}. The problem is then to find the optimal design ϕ=argminϕcTMϕc\phi^{*}=\operatorname*{argmin}_{\phi}c^{T}M_{\phi}c.

Holland-Letz et al. (2011) and Sagnol (2011) generalise Elfving’s Theorem (Elfving, 1952), which provides a geometric characterisation of the c-optimal design problem in these terms, to the case of correlation within experimental units. Sagnol (2011) shows that this problem can be written as a second-order cone program and, as such, solved with interior point methods for conic optimisation problems. We modify this program to solve the Holland-Letz et al. (2011) version of the theorem and include it in the \pkgglmmrOptim package, implemented using the \pkgCVXR package for conic optimisation.

5.4 Optimal Mixed Model Weights

The final alternative algorithm identifies the optimal mixed model weights. The best linear unbiased estimator for the linear combination b=cTβb=c^{T}\beta can be written as

b^\displaystyle\hat{b} =𝐚T𝐲\displaystyle=\mathbf{a}^{T}\mathbf{y}
=𝐚TL𝐲\displaystyle=\mathbf{a^{\prime}}^{T}L\mathbf{y}

where $\mathbf{a}=L^{T}\mathbf{a^{\prime}}$ is a vector of weights. It can be shown (see Watson, Girling, and Hemming, REF) that the experimental unit weights $\boldsymbol{\Phi}$ that minimise the bias are generated by Algorithm 3; a short R sketch is given after the algorithm. As before, these weights can then be used to generate optimal counts for each experimental unit.

Algorithm 3 Optimal mixed model weights for $J$ experimental units with a target total number of observations $J^{\prime}$, where $\epsilon$ is the tolerance of the algorithm.
procedure Optimal mixed model weights
  Let ϕ=[ϕ1,,ϕJ]\boldsymbol{\phi}=[\phi_{1},...,\phi_{J}] with ϕj=1/J\phi_{j}=1/J^{\prime} for all jj
  Set δ=1\delta=1
  while δ>ϵ\delta>\epsilon do
   aΣ1X(XTΣ1X)1ca\leftarrow\Sigma^{-1}X(X^{T}\Sigma^{-1}X)^{-1}c
   for all j{1,,J}j\in\{1,...,J\} do ϕj|aj|j|aj|\phi^{\prime}_{j}\leftarrow\frac{|a_{j}|}{\sum_{j}|a_{j}|}
   end for
   $\delta\leftarrow\max_{j}|\phi_{j}-\phi^{\prime}_{j}|$
   for all j{1,,J}j\in\{1,...,J\} do ϕjϕj\phi_{j}\leftarrow\phi^{\prime}_{j}
   end for
   ΣW1diag(ϕ1)+ZDZT\Sigma\leftarrow W^{-1}\text{diag}(\boldsymbol{\phi}^{-1})+ZDZ^{T}
  end while
end procedure
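A minimal R sketch of this fixed-point iteration, assuming for simplicity that each experimental unit contains a single observation; all names are illustrative and the package implements the algorithm internally.

{CodeChunk} {CodeInput}
# Sketch of Algorithm 3: alternate between the GLS weight vector a and the unit weights phi
optimal_mm_weights <- function(X, Z, D, w_inv, c_vec, J_prime, eps = 1e-6) {
  J <- nrow(X)
  phi <- rep(1 / J_prime, J)
  delta <- 1
  while (delta > eps) {
    Sigma <- diag(w_inv / phi) + Z %*% D %*% t(Z)
    Sinv_X <- solve(Sigma, X)
    a <- Sinv_X %*% solve(t(X) %*% Sinv_X, c_vec)
    phi_new <- c(abs(a)) / sum(abs(a))
    delta <- max(abs(phi - phi_new))
    phi <- phi_new
  }
  phi
}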

5.5 Rounding

The approximate weighting approaches return the set of weights $\phi_{1},...,\phi_{J}$ corresponding to each unique experimental unit. These weights need to be converted into an exact design with integer counts of each experimental unit $n_{1},...,n_{J}$. The problem of allocating a fixed number of $J^{\prime}$ items to $J$ categories according to a set of proportions is known as the apportionment problem. There are several rounding methods for the apportionment problem, proposed by the founding fathers of the United States for determining the number of representatives from each state. Pukelsheim and Rieder (1992) show that a modified version of Adams’ method is most efficient when there needs to be at least one experimental unit of each type. In other cases the methods of Hamilton, Webster, or Jefferson may be preferred. We provide the function \codeapportion(), which generates exact designs from all these methods for a desired sample size and set of weights. The output of this function is automatically provided when the approximate weighting method is used.
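For example, given a vector of weights one might call the following; the argument order is an assumption based on the description above.

{CodeChunk} {CodeInput}
phi <- c(0.24, 0.13, 0.13, 0.13, 0.13, 0.24)  # illustrative weights summing to one
designs <- apportion(phi, 20)                 # exact designs of 20 units under each rounding method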

5.6 Robust c-Optimal Designs

The preceding discussion has assumed that the correct model specification is known. However, in many scenarios there may be multiple plausible models, and a design that is optimal for one model or set of parameters may perform poorly for another. Robust optimal design methods aim to produce an (approximately) optimal design over multiple candidate models. We implement a method amenable to the combinatorial algorithms described in Section 5.2 following Dette (1993). Let $\mathcal{M}$ represent a GLMM. We assume, following Dette (1993), that the true model belongs to a class of GLMMs $\boldsymbol{\Xi}=\{\mathcal{M}_{1},...,\mathcal{M}_{R}\}$ and we define a vector $\rho=\{\rho_{1},...,\rho_{R}\}$, where $\rho_{r}\in[0,1]$ and $\sum_{r=1}^{R}\rho_{r}=1$, which represents the prior weights or prior probabilities of each model. There are two robust c-optimality criteria we can use. The first:

h(d)=r=1Rρrlog(crTMd,r1cr)h(d)=\sum_{r=1}^{R}\rho_{r}\log(c^{T}_{r}M^{-1}_{d,r}c_{r})

was proposed by Dette (1993) and Läuter (1974). The second is the weighted mean

h(d)=r=1RρrcrTMd,r1crh(d)=\sum_{r=1}^{R}\rho_{r}c^{T}_{r}M^{-1}_{d,r}c_{r}

Both criteria result in functions with the appropriate properties to ensure the local search algorithm maintains its theoretical guarantees. Dette (1993) generalises Elfving’s theorem to this robust criterion; however, further work is required to ‘doubly’ generalise it to both correlated observations and the robust criterion. While there may be a straightforward combination of the work of Dette (1993) with that of Holland-Letz et al. (2011) and Sagnol (2011), translating this into a program for conic optimisation methods is a topic for future research.

5.7 Implementation

The \pkgglmmrOptim package adds an additional \codeDesignSpace class, which references one or more \codeModel objects. The \codeModel objects must represent the same number of observations, with each row of $X$ and $Z$ representing an observation. The other, optional, arguments used to initialise the class are the weights on each design and the experimental unit to which each observation belongs. The observations are assumed to be separate experimental units unless otherwise specified.

The main member function of the \codeDesignSpace class is \codeoptimal, which runs one of the algorithms described above and returns the rows in the optimal design, or the weights for each experimental unit, along with other relevant information. If the experimental units are uncorrelated with one another then the approximate weighting method is used by default; the combinatorial algorithms can instead be used with the option \codeuse_combin=TRUE. The user can run one or more of the combinatorial algorithms sequentially. The algorithms are numbered 1 for the local search, 2 for the greedy search, and 3 for the reverse greedy search. Specifying \codealgo=1 will run the local search, while specifying \codealgo=c(3,1) will first run a reverse greedy search and then run a local search on the resulting design. We note that some combinations will be redundant; for example, running a greedy search after a reverse greedy search will have no effect since the resulting design will already be of size $J^{\prime}$. However, some users may have an interest in combining the approaches. A list (one per model in the design space) containing the vectors $c$ in Equation (12) must be provided.

For certain problems, an optimal design may include only a single value of one or more dichotomous covariates, which would result in a non-positive definite information matrix. For example, some models include adjustment for discrete time periods, but not all time periods may feature in the subset of observations in the optimal design. The program checks that the information matrix is positive definite at each step and, if it is not, it reports which columns may be causing the failure. These columns can then be removed using the \coderm_cols argument of the \codeoptimal function.

5.8 Examples

We present a set of examples relating to identifying an optimal cluster trial design within a design space consisting of six possible sequences over five time periods shown in the top row of Figure 2. There are six clusters and five time periods, and we may observe a maximum of ten individuals per cluster-period. Girling and Hemming (2016) and Watson et al. (2023) consider similar design problems.

The function that executes the algorithms in the \codeDesignSpace class is \codeoptimal. The argument \codealgo specifies the algorithm with: 1 = local search, 2 = greedy search, 3 = reverse greedy search, and "girling" for the optimal mixed model weights (named after the proposer). One can also string together algorithms to run in sequence, such as \codealgo = c(3,1).

Figure 2: Top row: the cluster-trial design space. Middle: an optimal design of 100 individual observations across cluster-periods in the cluster-randomised trial design space. Bottom left: optimal design of ten complete cluster-periods comprising ten observations. Bottom right: optimal design of two cluster sequences (rows).

5.8.1 Correlated experimental units with a Single Observation

First, we consider the case when each observation is a separate experimental unit, so we may observe any number of individuals in a time period. We will identify an approximate optimal design with 100 observations, with a maximum of ten observations in a single cluster-period. As the experimental units are correlated we will use the local search algorithm (option \codealgo=1). Given that the algorithm is not guaranteed to produce the optimal design, we run it ten times and select the design with the lowest value of the objective function. We will specify a Gaussian model with identity link function and exchangeable covariance function: {CodeChunk} {CodeInput}
# simulate data
df <- nelder(formula(~ (cl(6) * t(5)) > ind(10)))
df$int <- 0
df[df$t >= df$cl, 'int'] <- 1
des <- Model$new(
  formula = ~ factor(t) + int - 1 + (1|gr(cl)) + (1|gr(cl,t)),
  covariance = c(0.05, 0.01),
  mean = rep(0, 6),
  data = df,
  family = gaussian()
)
ds <- DesignSpace$new(des)  # create design space
opt <- ds$optimal(m = 100, C = list(c(rep(0, 5), 1)), algo = 1)  # run the algorithm
The middle panel of Figure 2 shows the approximate optimal design produced by the algorithm. Code to reproduce the plots is provided in the replication materials.

5.8.2 Correlated experimental units with Multiple Observations

We secondly consider the case where each cluster-period is an experimental unit containing ten observations, and we aim to select a design of size ten cluster-periods. {CodeChunk} {CodeInput}
# update the experimental units
ds <- DesignSpace$new(des, experimental_condition = rep(1:30, each = 10))
opt2 <- ds$optimal(m = 10, C = list(c(rep(0, 5), 1)), algo = 1)
The bottom left panel of Figure 2 again shows the approximate optimal design produced by the algorithm, reflecting the ‘staircase’ design from the previous example.

5.8.3 Uncorrelated experimental units

Finally, we consider the case where each whole cluster represents an experimental unit and we aim to pick two of the six clusters. In this example, the experimental units are uncorrelated. By default the \codeoptimal function will use the second-order cone program and return optimal weights for each experimental unit. We can force the function to instead use the combinatorial search algorithms with the option \codeuse_combin=TRUE: {CodeChunk} {CodeInput}
# update the experimental units
ds$experimental_condition <- df$cl
opt3 <- ds$optimal(m = 2, C = list(c(rep(0, 5), 1)), algo = 1, use_combin = TRUE)
The bottom right panel of Figure 2 again shows the approximate optimal design produced by the algorithm. We note that a design containing rows 1 and 5 achieves the same variance. We can compare these results to the approximate optimal weights: {CodeChunk} {CodeInput}
w <- ds$optimal(m = 2, C = list(c(rep(0, 5), 1)), algo = 1)
w$weights
{CodeOutput}
0.2419337 0.1290334 0.1290329 0.1290329 0.1290334 0.2419337

6 Discussion

In this article we describe the \pkgglmmrBase package for R that provides functionality for model specification, analysis, fitting, and other functions for GLMMs. We also describe the \pkgglmmrOptim package that extends its functionality. The number of packages in R alone that provide similar functionality attests to the growing popularity of GLMMs in statistical analyses. For example, we identified eight R packages for calculating power for cluster-randomised trials alone using similar models. Our intention with \pkgglmmrBase is to provide a broad set of tools and functions to support a wide range of GLMM related analyses, while maintaining a relatively simple interface through the \pkgR6 class system. For an analysis like a power calculation, the package can provide a direct estimate, any of the intermediary steps for similar analyses, or related functionality like data simulation. The power analysis can also be applied to a design identified through the \pkgglmmrOptim package as classes in this package inherit the functions of the \pkgglmmrBase classes. This support is intended to reduce the requirement for multiple different packages for different model structures where comparisons may be required.

The \pkgglmmrBase package provides MCML model fitting, along with a Laplace approximation. The package provides a wider range of covariance functions than is available in other packages in \proglangR, it allows for non-linear functions of parameters, and it can provide robust and bias-corrected standard errors. We plan to continue to expand this functionality and the available outputs, and we encourage users to suggest features. The package can therefore provide a complementary approach to GLMM model fitting alongside the existing software in \proglangR and elsewhere. MCML may be especially useful when the approximations provided by packages like \pkglme4 or \pkgglmmTMB fail to produce reliable estimates or their algorithms do not converge. For simpler models, such as those with exchangeable covariance functions or large numbers of observations, methods like PQL and the Laplace approximation can produce highly reliable estimates in a fraction of the time that MCML requires; however, in more complex cases there can be differences. More complex covariance structures are of growing interest to more accurately represent data generating processes. We have used a running example of a cluster randomised trial over multiple time periods: early efforts at modelling used a single group-membership random effect, whereas more contemporary approaches focus on temporal decay models, like autoregressive random effects (Li et al., 2021). However, \pkgglmmTMB was one of very few packages, prior to \pkgglmmrBase, to offer model fitting for this covariance structure in \proglangR; SAS provides similar functionality but not in conjunction with other features like non-linear fixed effect specifications.

We also discussed the \pkgglmmrOptim package, which provides algorithms for identifying approximate optimal experimental designs for studies with correlated observations. Again, this package is designed to complement existing resources. The package was built to implement a range of relevant algorithms, and so provides the only software for c-optimal designs with correlated observations. However, for other types of optimality, including D-, A-, or I-optimality, other packages will be required, particularly \pkgskpr, although we are not aware of approaches that can yet handle these types of optimality for designs with correlated experimental units. One of the advantages of the integrated approach offered by our packages is that one can quickly generate the design produced by the algorithm and interrogate it to calculate statistics like power, to generate relevant matrices for subsequent analyses, and to simulate data.

Planned future developments of the package include allowing for heteroskedastic models with specification of functions to describe the individual-level variance and adding more complex models such as hurdle models, mixture models, and multivariate models.

References

  • Amestoy et al. (2004) Amestoy PR, Davis TA, Duff IS (2004). “Algorithm 837: AMD, an Approximate Minimum Degree Ordering Algorithm.” ACM Trans. Math. Softw., 30(3), 381–388. ISSN 0098-3500. 10.1145/1024074.1024081. URL https://doi.org/10.1145/1024074.1024081.
  • Bates et al. (2015) Bates D, Mächler M, Bolker B, Walker S (2015). “Fitting Linear Mixed-Effects Models Using lme4.” Journal of Statistical Software, 67(1), 1–48. 10.18637/jss.v067.i01.
  • Bates and DebRoy (2004) Bates DM, DebRoy S (2004). “Linear mixed models and penalized least squares.” Journal of Multivariate Analysis, 91(1), 1–17. ISSN 0047259X. 10.1016/j.jmva.2004.04.013. URL https://linkinghub.elsevier.com/retrieve/pii/S0047259X04000867.
  • Breslow and Clayton (1993) Breslow NE, Clayton DG (1993). “Approximate inference in generalized linear mixed models.” Journal of the American statistical Association, 88(421), 9–25.
  • Breslow and Lin (1995) Breslow NE, Lin X (1995). “Bias Correction in Generalised Linear Mixed Models with a Single Component of Dispersion.” Biometrika, 82(1), 81. ISSN 00063444. 10.2307/2337629. URL https://www.jstor.org/stable/2337629?origin=crossref.
  • Bürkner (2017) Bürkner PC (2017). “brms : An R Package for Bayesian Multilevel Models Using Stan.” Journal of Statistical Software, 80(1). ISSN 1548-7660. 10.18637/jss.v080.i01. URL http://www.jstatsoft.org/v80/i01/.
  • Caffo et al. (2005) Caffo BS, Jank W, Jones GL (2005). “Ascent-Based Monte Carlo Expectation– Maximization.” Journal of the Royal Statistical Society Series B: Statistical Methodology, 67(2), 235–251. ISSN 1369-7412. 10.1111/j.1467-9868.2005.00499.x.
  • Capanu et al. (2013) Capanu M, Gönen M, Begg CB (2013). “An assessment of estimation methods for generalized linear mixed models with binary outcomes.” Statistics in Medicine, 32(26), 4550–4566. ISSN 02776715. 10.1002/sim.5866. URL https://onlinelibrary.wiley.com/doi/10.1002/sim.5866.
  • Carpenter et al. (2017) Carpenter B, Gelman A, Hoffman MD, Lee D, Goodrich B, Betancourt M, Brubaker M, Guo J, Li P, Riddell A (2017). “Stan: A probabilistic programming language.” Journal of statistical software, 76(1).
  • Chang (2022) Chang W (2022). R6: Encapsulated Classes with Reference Semantics. Https://r6.r-lib.org, https://github.com/r-lib/R6/.
  • Datta et al. (2016a) Datta A, Banerjee S, Finley AO, Gelfand AE (2016a). “Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets.” Journal of the American Statistical Association, 111, 800–812. ISSN 0162-1459. 10.1080/01621459.2015.1044091.
  • Datta et al. (2016b) Datta A, Banerjee S, Finley AO, Gelfand AE (2016b). “On nearest-neighbor Gaussian process models for massive spatial data.” WIREs Computational Statistics, 8, 162–171. ISSN 1939-5108. 10.1002/wics.1383.
  • Davis (2005) Davis TA (2005). “Algorithm 849: A Concise Sparse Cholesky Factorization Package.” ACM Trans. Math. Softw., 31(4), 587–591. ISSN 0098-3500. 10.1145/1114268.1114277. URL https://doi.org/10.1145/1114268.1114277.
  • Dette (1993) Dette H (1993). “Elfving’s Theorem for D-Optimality.” The Annals of Statistics, 21(2). ISSN 0090-5364. 10.1214/aos/1176349149. URL https://projecteuclid.org/journals/annals-of-statistics/volume-21/issue-2/Elfvings-Theorem-for-D-Optimality/10.1214/aos/1176349149.full.
  • Diggle et al. (1998) Diggle PJ, Tawn JA, Moyeed RA (1998). “Model-based geostatistics (with discussion).” Journal of the Royal Statistical Society, Series C, 47, 299–350.
  • Eddelbuettel and François (2011) Eddelbuettel D, François R (2011). “Rcpp : Seamless R and C++ Integration.” Journal of Statistical Software, 40. ISSN 1548-7660. 10.18637/jss.v040.i08.
  • Elfving (1952) Elfving G (1952). “Optimum Allocation in Linear Regression Theory.” The Annals of Mathematical Statistics, 23(2), 255–262. ISSN 0003-4851. 10.1214/aoms/1177729442. URL http://projecteuclid.org/euclid.aoms/1177729442.
  • Fedorov (1972) Fedorov V (1972). Theory of Optimal Experiments. Academic Press, New York.
  • Finley et al. (2019) Finley AO, Datta A, Cook BD, Morton DC, Andersen HE, Banerjee S (2019). “Efficient Algorithms for Bayesian Nearest Neighbor Gaussian Processes.” Journal of Computational and Graphical Statistics, 28, 401–414. ISSN 1061-8600. 10.1080/10618600.2018.1537924.
  • Fisher et al. (1978) Fisher ML, Nemhauser GL, Wolsey LA (1978). “An analysis of approximations for maximizing submodular set functions—II.” 10.1007/bfb0121195.
  • Furrer et al. (2006) Furrer R, Genton MG, Nychka D (2006). “Covariance Tapering for Interpolation of Large Spatial Datasets.” Journal of Computational and Graphical Statistics, 15, 502–523. ISSN 1061-8600. 10.1198/106186006X132178.
  • Girling and Hemming (2016) Girling AJ, Hemming K (2016). “Statistical efficiency and optimal design for stepped cluster studies under linear mixed effects models.” Statistics in Medicine, 35(13), 2149–2166. ISSN 02776715. 10.1002/sim.6850. URL https://onlinelibrary.wiley.com/doi/10.1002/sim.6850.
  • Gneiting (2002) Gneiting T (2002). “Compactly Supported Correlation Functions.” Journal of Multivariate Analysis, 83, 493–508. ISSN 0047259X. 10.1006/jmva.2001.2056.
  • Harville (1977) Harville DA (1977). “Maximum Likelihood Approaches to Variance Component Estimation and to Related Problems.” Journal of the American Statistical Association, 72, 320. ISSN 01621459. 10.2307/2286796.
  • Hemming et al. (2020) Hemming K, Kasza J, Hooper R, Forbes A, Taljaard M (2020). “A tutorial on sample size calculation for multiple-period cluster randomized parallel, cross-over and stepped-wedge trials using the Shiny CRT Calculator.” International Journal of Epidemiology, dyz237. ISSN 0300-5771. 10.1093/ije/dyz237. URL https://academic.oup.com/ije/advance-article/doi/10.1093/ije/dyz237/5748155.
  • Henderson (1982) Henderson CR (1982). “Analysis of Covariance in the Mixed Model: Higher-Level, Nonhomogeneous, and Random Regressions.” Biometrics, 38(3), 623. ISSN 0006341X. 10.2307/2530044. URL https://www.jstor.org/stable/2530044?origin=crossref.
  • Holland-Letz et al. (2011) Holland-Letz T, Dette H, Pepelyshev A (2011). “A geometric characterization of optimal designs for regression models with correlated observations.” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2), 239–252. ISSN 13697412. 10.1111/j.1467-9868.2010.00757.x. URL https://onlinelibrary.wiley.com/doi/10.1111/j.1467-9868.2010.00757.x.
  • Homan and Gelman (2014) Homan MD, Gelman A (2014). “The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo.” Journal of Machine Learning Research, 15(1), 1593–1623.
  • Hooper et al. (2016) Hooper R, Teerenstra S, de Hoop E, Eldridge S (2016). “Sample size calculation for stepped wedge and other longitudinal cluster randomised trials.” Statistics in Medicine, 35(26), 4718–4728. ISSN 02776715. 10.1002/sim.7028. URL http://doi.wiley.com/10.1002/sim.7028.
  • Jank (2006) Jank W (2006). “Implementing and Diagnosing the Stochastic Approximation EM Algorithm.” Journal of Computational and Graphical Statistics, 15(4), 803–829. ISSN 1061-8600. 10.1198/106186006X157469.
  • Kaufman et al. (2008) Kaufman CG, Schervish MJ, Nychka DW (2008). “Covariance Tapering for Likelihood-Based Estimation in Large Spatial Data Sets.” Journal of the American Statistical Association, 103, 1545–1555. ISSN 0162-1459. 10.1198/016214508000000959.
  • Kenward and Roger (1997) Kenward MG, Roger JH (1997). “Small Sample Inference for Fixed Effects from Restricted Maximum Likelihood.” Biometrics, 53, 983. ISSN 0006341X. 10.2307/2533558. URL https://www.jstor.org/stable/2533558?origin=crossref.
  • Kenward and Roger (2009) Kenward MG, Roger JH (2009). “An improved approximation to the precision of fixed effects from restricted maximum likelihood.” Computational Statistics & Data Analysis, 53, 2583–2595. ISSN 01679473. 10.1016/j.csda.2008.12.013.
  • Laird and Ware (1982) Laird NM, Ware JH (1982). “Random-Effects Models for Longitudinal Data.” Biometrics, 38, 963. ISSN 0006341X. 10.2307/2529876.
  • Läuter (1974) Läuter E (1974). “Experimental design in a class of models.” Mathematische Operationsforschung und Statistik, 5(4-5), 379–398. ISSN 0047-6277. 10.1080/02331887408801175. URL https://www.tandfonline.com/doi/full/10.1080/02331887408801175.
  • Li et al. (2021) Li F, Hughes JP, Hemming K, Taljaard M, Melnick ER, Heagerty PJ (2021). “Mixed-effects models for the design and analysis of stepped wedge cluster randomized trials: An overview.” Statistical Methods in Medical Research, 30(2), 612–639. ISSN 0962-2802. 10.1177/0962280220932962. URL http://journals.sagepub.com/doi/10.1177/0962280220932962.
  • Lin and Breslow (1996) Lin X, Breslow NE (1996). “Bias Correction in Generalized Linear Mixed Models With Multiple Components of Dispersion.” Journal of the American Statistical Association, 91(435), 1007. ISSN 01621459. 10.2307/2291720. URL https://www.jstor.org/stable/2291720?origin=crossref.
  • Lindstrom and Bates (1990) Lindstrom MJ, Bates DM (1990). “Nonlinear mixed effects models for repeated measures data.” Biometrics, pp. 673–687.
  • McCullagh and Nelder (1989) McCullagh P, Nelder JA (1989). Generalized linear models, 2nd Edition. Routledge.
  • McCulloch (1997) McCulloch CE (1997). “Maximum Likelihood Algorithms for Generalized Linear Mixed Models.” Journal of the American statistical Association, 92(437), 162–170. 10.2307/2291460.
  • Nakagawa and Schielzeth (2013) Nakagawa S, Schielzeth H (2013). “A general and simple method for obtaining R² from generalized linear mixed-effects models.” Methods in Ecology and Evolution, 4, 133–142. ISSN 2041210X. 10.1111/j.2041-210x.2012.00261.x.
  • Nelder (1965) Nelder J (1965). “The analysis of randomized experiments with orthogonal block structure. I. Block structure and the null analysis of variance.” Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences, 283(1393), 147–162. ISSN 0080-4630. 10.1098/rspa.1965.0012. URL https://royalsocietypublishing.org/doi/10.1098/rspa.1965.0012.
  • Nemhauser and Wolsey (1978) Nemhauser GL, Wolsey LA (1978). “Best Algorithms for Approximating the Maximum of a Submodular Set Function.” Mathematics of Operations Research, 3(3), 177–188. ISSN 0364-765X. 10.1287/moor.3.3.177. URL http://pubsonline.informs.org/doi/abs/10.1287/moor.3.3.177.
  • Pinheiro et al. (2022) Pinheiro J, Bates D, R Core Team (2022). nlme: Linear and Nonlinear Mixed Effects Models. R package version 3.1-157, URL https://CRAN.R-project.org/package=nlme.
  • Polyak and Juditsky (1992) Polyak BT, Juditsky AB (1992). “Acceleration of Stochastic Approximation by Averaging.” SIAM Journal on Control and Optimization, 30(4), 838–855. ISSN 0363-0129. 10.1137/0330046.
  • Powell (2009) Powell MJ (2009). “The BOBYQA Algorithm for Bound Constrained Optimization without Derivatives.” Cambridge NA Report NA2009/06, University of Cambridge, Cambridge, 26.
  • Pukelsheim and Rieder (1992) Pukelsheim F, Rieder S (1992). “Efficient rounding of approximate designs.” Biometrika, 79(4), 763–770. ISSN 0006-3444. 10.1093/biomet/79.4.763. URL https://academic.oup.com/biomet/article-lookup/doi/10.1093/biomet/79.4.763.
  • R Core Team (2021) R Core Team (2021). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/.
  • Riutort-Mayol et al. (2023) Riutort-Mayol G, Bürkner PC, Andersen MR, Solin A, Vehtari A (2023). “Practical Hilbert space approximate Bayesian Gaussian processes for probabilistic programming.” Statistics and Computing, 33, 17. ISSN 0960-3174. 10.1007/s11222-022-10167-2.
  • Sagnol (2011) Sagnol G (2011). “Computing optimal designs of multiresponse experiments reduces to second-order cone programming.” Journal of Statistical Planning and Inference, 141(5), 1684–1708. ISSN 03783758. 10.1016/j.jspi.2010.11.031. URL https://linkinghub.elsevier.com/retrieve/pii/S0378375810005318.
  • Satterthwaite (1946) Satterthwaite FE (1946). “An Approximate Distribution of Estimates of Variance Components.” Biometrics Bulletin, 2(6), 110–114. ISSN 00994987. 10.2307/3002019.
  • Searle et al. (1992) Searle SR, Casella G, McCulloch CE (1992). Variance Components. Wiley. ISBN 9780471621621. 10.1002/9780470316856.
  • Shun and McCullagh (1995) Shun Z, McCullagh P (1995). “Laplace Approximation of High Dimensional Integrals.” Journal of the Royal Statistical Society: Series B (Methodological), 57(4), 749–760. ISSN 00359246. 10.1111/j.2517-6161.1995.tb02060.x. URL https://onlinelibrary.wiley.com/doi/10.1111/j.2517-6161.1995.tb02060.x.
  • Skene and Kenward (2010) Skene SS, Kenward MG (2010). “The analysis of very small samples of repeated measurements II: A modified Box correction.” Statistics in Medicine, 29, 2838–2856. ISSN 0277-6715. 10.1002/sim.4072.
  • Solin and Särkkä (2020) Solin A, Särkkä S (2020). “Hilbert space methods for reduced-rank Gaussian process regression.” Statistics and Computing, 30, 419–446. ISSN 0960-3174. 10.1007/s11222-019-09886-w. URL http://link.springer.com/10.1007/s11222-019-09886-w.
  • Vaida and Blanchard (2005) Vaida F, Blanchard S (2005). “Conditional Akaike information for mixed-effects models.” Biometrika, 92, 351–370. ISSN 1464-3510. 10.1093/biomet/92.2.351.
  • Vecchia (1988) Vecchia AV (1988). “Estimation and Model Identification for Continuous Spatial Processes.” Journal of the Royal Statistical Society: Series B (Methodological), 50, 297–312. ISSN 00359246. 10.1111/j.2517-6161.1988.tb01729.x.
  • Venables and Ripley (2002) Venables WN, Ripley BD (2002). Modern Applied Statistics with S. Fourth edition. Springer, New York. ISBN 0-387-95457-0, URL https://www.stats.ox.ac.uk/pub/MASS4/.
  • Watson et al. (2023) Watson SI, Girling A, Hemming K (2023). “Optimal Study Designs for Cluster Randomised Trials: An Overview of Methods and Results.” arXiv:2303.07953, URL http://arxiv.org/abs/2303.07953.
  • Watson and Pan (2022) Watson SI, Pan Y (2022). “Approximate c-Optimal Experimental Designs with Correlated Observations using Combinatorial Optimisation.” arXiv:2207.09183, URL http://arxiv.org/abs/2207.09183.
  • Wynn (1970) Wynn HP (1970). “The Sequential Generation of D-Optimum Experimental Designs.” The Annals of Mathematical Statistics, 41(5), 1655–1664. ISSN 0003-4851. 10.1214/aoms/1177696809. URL http://projecteuclid.org/euclid.aoms/1177696809.
  • Zeger et al. (1988) Zeger SL, Liang KY, Albert PS (1988). “Models for Longitudinal Data: A Generalized Estimating Equation Approach.” Biometrics, 44(4), 1049–1060. ISSN 0006341X. 10.2307/2531734. URL https://www.jstor.org/stable/2531734?origin=crossref.