
Fast Bayesian estimation of brain activation with cortical surface fMRI data using EM

Daniel A. Spencer, David Bolin, Amanda F. Mejia
(July 28, 2025)
Abstract

Task functional magnetic resonance imaging (fMRI) is a type of neuroimaging data used to identify areas of the brain that activate during specific tasks or stimuli. These data are conventionally modeled using a massive univariate approach across all data locations, which ignores spatial dependence at the cost of model power. We previously developed and validated a spatial Bayesian model leveraging dependencies along the cortical surface of the brain in order to improve accuracy and power. This model utilizes stochastic partial differential equation spatial priors with sparse precision matrices to allow for appropriate modeling of spatially-dependent activations seen in the neuroimaging literature, resulting in substantial increases in model power. Our original implementation relies on the computational efficiencies of the integrated nested Laplace approximation (INLA) to overcome the computational challenges of analyzing high-dimensional fMRI data while avoiding issues associated with variational Bayes implementations. However, this requires significant memory resources, extra software, and software licenses to run. In this article, we develop an exact Bayesian analysis method for the general linear model, employing an efficient expectation-maximization algorithm to find maximum a posteriori estimates of task-based activation amplitudes from cortical surface fMRI data. Through an extensive simulation study of cortical surface-based fMRI data, we compare our proposed method to the existing INLA implementation, as well as a conventional massive univariate approach employing ad-hoc spatial smoothing. We also apply the method to task fMRI data from the Human Connectome Project and show that our proposed implementation produces similar results to the validated INLA implementation. Both the INLA and EM-based implementations are available through our open-source BayesfMRI R package.

Keywords— functional MRI, expectation maximization, neuroimaging, Bayesian, spatial model

1 Introduction

Task-based functional magnetic resonance imaging (fMRI) data measure the blood-oxygen-level-dependent (BOLD) response, an indirect measure of neural activity, while a subject performs tasks (Lindquist, 2008; Poldrack et al., 2011). These relatively high spatial and temporal resolution data are widely used, non-invasive experimental measures that allow researchers to study the way the human brain works in vivo. fMRI data exhibit a low signal-to-noise ratio, requiring analyses to be as statistically efficient and powerful as possible to produce accurate results that reflect the full extent of activation rather than detecting only the extreme values. Traditional massive univariate analysis methods treat locations within the brain as independent before any clustering analysis methods are applied to the results, which diminishes the ability of models to draw reliable conclusions about which areas of subjects' brains are associated with a task (Elliott et al., 2020). Sophisticated models that leverage the spatiotemporal dependence structure of the data would improve inferential power, but estimating such models is difficult due to the size and complexity of the data. Recent advancements in computer processing and storage technology have begun to enable more complex analyses of fMRI data, improving the ability of scientists to detect brain signals amidst measurement and physiological noise. Recent models attempt to account for either spatial or temporal dependence, though often at the cost of computational efficiency.

The BOLD response is measured as an image array comprised of volumetric pixels, or voxels, each typically having a volume of about $8\,\text{mm}^3$. A commonly-used modeling technique is known to the neuroscience community as the general linear model (GLM). (The meaning of GLM in the neuroimaging community is not to be confused with generalized linear models in the statistical sense; the term has expanded beyond task fMRI to refer to essentially any massive univariate regression approach.) The classical GLM regresses the time series of measurements for each voxel individually against task covariates to determine the effect of the task on neural activity around the brain (Chatfield and Collins, 2018; Friston et al., 1994). These regressions can be used to test whether a specific voxel has a statistically significant association with the task covariates and is thus "activated" by a task. The classical GLM requires multiple comparisons corrections due to the massive number of voxels being tested. Cluster-based methods help to incorporate spatial information in determining activation (Poline and Mazoyer, 1993; Poline et al., 1997; Smith and Nichols, 2009), but they are post hoc and do not benefit the statistical efficiency of the estimates, which limits power. Parametric assumptions about spatial dependence used in these methods have been shown to be unrealistic in volumetric fMRI data (Eklund et al., 2016, 2019).
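To make the massive univariate approach concrete, the classical GLM amounts to an independent least-squares regression at every voxel; a minimal R sketch (with illustrative variable names, not code from any package) is:

# Classical massive univariate GLM: Y is a T x V matrix of BOLD time series
# (T volumes, V voxels); X is a T x K matrix of expected BOLD task responses.
classical_glm <- function(Y, X) {
  fit <- lm(Y ~ X)              # fits all V voxel-wise regressions at once
  coef(fit)[-1, , drop = FALSE] # K x V matrix of estimated task effects
}
# Each voxel's coefficients are then tested separately, requiring a
# correction for the massive number of comparisons.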

An alternative is to avoid massive univariate analysis and instead perform analysis using spatial Bayesian methods, examining the posterior probability distribution of all voxels together to determine where activations occur. Such techniques explicitly state the model's assumptions about spatial dependence through the prior distributions, avoiding the pitfalls of null hypothesis testing. Bayesian models can leverage spatial information through a prior that allows for nonzero covariance between adjacent data locations in space and time. Several spatial Bayesian methods for volumetric fMRI have been proposed, typically relying on variational Bayes (VB) methods to decrease computational load (Penny et al., 2005) at the cost of underestimating posterior variance and poorly estimating the posterior mode (Wang and Titterington, 2005; Bishop and Nasrabadi, 2006; Rue et al., 2009; Sidén et al., 2017). Zhang et al. (2016) utilized a Bayesian nonparametric framework to detect activations using both Markov chain Monte Carlo (MCMC) and VB methods for both single and multiple subjects, though even the VB method is rather expensive and requires dimension reduction. Sidén et al. (2017) proposed fast MCMC and VB methods using a spatial prior on volumetric single-subject data that scale well but do not allow for group analyses. Guhaniyogi and Spencer (2021) and Spencer et al. (2020) used shrinkage priors and tensor decompositions to model volumetric task-based fMRI for both single- and multiple-subject studies; these rely on MCMC methods, which are time-consuming. However, all of these studies apply spatial priors on volumetric data using Euclidean distance to determine covariance, which does not represent the folded nature of the cerebral cortex. This results in signal being averaged across volumetric pixels that are close together in space though they may be neurologically distal due to their location on different folds of the cortex, often as part of functionally or anatomically distinct cortical regions. In addition, spatial methods used with volumetric data implicitly blur activation signal with noise from white matter and cerebrospinal fluid.

Recently, Mejia et al. (2020) proposed a surface-based spatial Bayesian (SBSB) GLM for cortical surface fMRI (cs-fMRI) data. Such data use a triangular mesh to represent the spatial configuration of the cortical surface. This mesh can be used to find the geodesic distances between points on the surface. These distances appropriately account for the cortex’s folded structure, unlike Euclidean distances in the volume. The SBSB GLM uses a stochastic partial differential equation (SPDE) prior, which is built on a triangular mesh and does not require a regular lattice structure. The SPDE prior is advantageous because it approximates a continuous Markov random field, it is invariant to finite resamplings, which allows for different resolutions in fMRI data, and it allows for dependence in neighboring data locations while keeping computational cost low through sparse precision matrices. In addition, results from single-subject analyses can be combined in a principled manner for group-level inference. Spencer et al. (2022) performed a comprehensive validation study of the SBSB GLM based on test-retest task fMRI data from the Human Connectome Project, showing gains in accuracy and power over the common practice of spatially smoothing data before model-fitting.

For Bayesian computation, Mejia et al. (2020) fit the SBSB GLM using an empirical Bayes version of the integrated nested Laplace approximation (INLA) through the R software package R-INLA. The package is a powerful computational resource for spatial modeling, offering efficient backend code written in C++ that utilizes fast matrix operations and allows for parallelization via the PARDISO sparse linear algebra library (Alappat et al., 2020; Bollhöfer et al., 2020, 2019). These features of R-INLA drastically reduce model fitting times. However, there are some limitations of INLA in this context. First, the numerical approach INLA uses to integrate out model hyperparameters requires approximating their joint posterior with a multivariate Gaussian, which is memory intensive for more than 5 to 10 hyperparameters, effectively restricting its application to relatively simple task fMRI experimental designs (Opitz, 2017). Previous work has reported memory requirements of 30 to 50 GB for a simple single-subject task analysis (Mejia et al., 2020), and in our experience it is not uncommon to exceed 75 GB of memory usage in many practical settings. Second, there are challenges in developing and maintaining software that depends on R-INLA, which is not available on the Comprehensive R Archive Network (CRAN) and does not install easily on many systems; carrying out model estimation can therefore be especially difficult for researchers who do not administer their own computing. Third, in order to achieve reasonable computing speeds, a license for the PARDISO library is required, which is not always freely available. Finally, at the data scale of functional MRI, INLA requires significant system memory to run, which is prohibitive for some research teams.

In order to address these issues, in this paper we develop an efficient expectation-maximization (EM) algorithm (Dempster et al., 1977; Gelman et al., 2013) to produce posterior estimates and inference of subject-level task activation in the SBSB GLM. We also show how results from these analyses can be combined across subjects in a principled way to identify group-level effects in multi-subject analyses. We compare this EM algorithm to the INLA-based implementation of the SBSB GLM from Mejia et al. (2020) using both simulated data and motor task fMRI data from the Human Connectome Project (HCP) (Barch et al., 2013). Using simulated data, we find that the EM algorithm achieves similar accuracy while cutting computation time dramatically. All model fitting and preprocessing steps have been implemented within the open-source R package BayesfMRI (https://github.com/mandymejia/BayesfMRI/tree/1.8.EM).

This article will proceed with a description of the SBSB GLM and details of our EM algorithm in Section 2. Next, we compare our proposed EM algorithm with INLA in a study of simulated cortical surface fMRI data in Section 3. We show results from a study of task fMRI data from the HCP in Section 4. We end with conclusions and discussion of future work in Section 5.

2 Methodology

We will use the following matrix notation throughout this section. Fixed matrix-valued quantities are represented in upper-case bold font, $\mathbf{A}$, vectors in lower-case bold font, $\mathbf{a}$, and scalars in non-bold font in both upper and lower case, $a$. Parameters in the model follow the same convention using Greek letters, i.e. $\boldsymbol{\Sigma}$ represents a matrix, $\boldsymbol{\sigma}$ represents a vector, and $\sigma$ represents a scalar. $|\mathbf{A}|$ and $\text{Tr}(\mathbf{A})$ represent the determinant and the trace of matrix $\mathbf{A}$, respectively. Following this convention, we now introduce the modeling methodology.

2.1 The surface-based spatial Bayesian general linear model

Consider cortical surface fMRI (cs-fMRI) data from a single subject's scan: a time series of length $T$, represented as $\mathbf{y}_v \in \mathbb{R}^T$, at each of $N$ locations $v = 1,\ldots,N$. These data are gathered while the subject completes $K$ different tasks at specific time intervals. The expected BOLD response to task $k$, $\mathbf{x}_k \in \mathbb{R}^T$, is generated by convolving the binary stimulus timing function (on/off) with a canonical haemodynamic response function (HRF) representing the time-lagged BOLD response that follows neuronal firing. After preprocessing the data to reduce noise, eliminate the baseline signal, and induce residual independence, as outlined in Appendix A, the general linear model can be written in the form

$$\mathbf{y}_{v} = \sum_{k=1}^{K}\beta_{v,k}\mathbf{x}_{v,k} + \mathbf{e}_{v}, \qquad \mathbf{e}_{v} \sim \text{Normal}(\mathbf{0},\sigma^{2}\mathbf{I}), \qquad v = 1,\ldots,N, \qquad (1)$$

where $\beta_{v,k}$ is the activation effect in terms of percent signal change at location $v$ for task $k$. Note that the task-based regressors $\mathbf{x}_{v,k}$ vary across locations $v$ because our preprocessing includes location-specific (or spatially variable) prewhitening to account for differences in residual autocorrelation across the cortex (Parlak et al., 2022). It is typical to fit this model in a massive univariate manner, which we refer to as the classical GLM; this can be fitted quickly, even to high-resolution data. However, this advantage comes at the cost of ignoring the spatial dependence in brain activation, which reduces model power significantly compared to the Bayesian GLM, as shown in Spencer et al. (2022).

In order to facilitate spatial Bayesian modeling, we rewrite the model in Equation (1) to represent data across all locations and times at once in Equation (2). Here, the preprocessed response $\mathbf{y} = (\mathbf{y}_{1}^{\prime},\ldots,\mathbf{y}_{N}^{\prime})^{\prime} \in \mathbb{R}^{TN}$ at times $t = 1,\ldots,T$ and locations $v = 1,\ldots,N$ is explained by linear effects $\boldsymbol{\beta}_{k} \in \mathbb{R}^{N}$ for each of $K$ tasks, with design matrices $\mathbf{X}_{k} = \text{block-diagonal}(\mathbf{x}_{1,k},\ldots,\mathbf{x}_{N,k}) \in \mathbb{R}^{TN\times N}$, and an error term $\mathbf{e} \in \mathbb{R}^{TN}$.

$$\mathbf{y} = \sum_{k=1}^{K}\mathbf{X}_{k}\boldsymbol{\beta}_{k} + \mathbf{e}, \qquad \mathbf{e} \sim \text{Normal}(\mathbf{0},\sigma^{2}\mathbf{I}) \qquad (2)$$

This linear model facilitates incorporating spatial dependence, in contrast to the $N$ separate linear models constituting the classical GLM, which includes no spatial dependence at the model-fitting stage. We fit this model across an entire cortical hemisphere; the two hemispheres are analyzed separately for computational reasons, which does not affect the results because they are represented by separate, physically distinct surface meshes.

The SBSB GLM incorporates spatial dependence in the activation amplitudes $\boldsymbol{\beta}_{k}$ using a special class of Gaussian Markov random field (GMRF) processes known as stochastic partial differential equation (SPDE) priors (Mejia et al., 2020; Spencer et al., 2022; Lindgren et al., 2011; Bolin and Lindgren, 2013). The specific construction of the prior is

$$\boldsymbol{\beta}_{k} = \boldsymbol{\Psi}_{k}\mathbf{w}_{k}, \qquad \mathbf{w}_{k} \sim \text{Normal}(\mathbf{0},\mathbf{Q}_{k}^{-1}), \qquad \mathbf{Q}_{k} = \tau_{k}^{2}(\kappa_{k}^{4}\mathbf{C} + 2\kappa_{k}^{2}\mathbf{G} + \mathbf{G}\mathbf{C}^{-1}\mathbf{G}), \qquad (3)$$
$$\kappa_{k} \sim \text{log-Normal}(\mu_{\kappa},\sigma_{\kappa}^{2}), \qquad \tau_{k} \sim \text{log-Normal}(\mu_{\tau},\sigma_{\tau}^{2}),$$

where $\boldsymbol{\Psi}_{k} \in \mathbb{R}^{N\times n}$ maps the original $N$ data locations to a triangular mesh consisting of $n$ vertices. $\mathbf{C}$ is a fixed diagonal matrix describing the relative mesh vertex precisions, and $\mathbf{G}$ is a fixed sparse matrix describing the neighborhood structure of the mesh vertices, in which entries corresponding to neighboring locations take nonzero values. The mesh is usually created to maximize the minimum interior angle across the triangles, and extra vertices may be added along the boundaries to avoid adverse boundary effects. An example of this mesh structure can be seen in Figure 7. The default setting for the INLA implementation used in Mejia et al. (2020) sets the priors for $\tau_{k}$ and $\kappa_{k}$ to be log-Normal with mean 2 and variance 2. Under the SPDE representation in $\mathbb{R}^{d}$, the variance of the random field can be represented as $\phi_{k} = \Gamma(\nu)\left(\Gamma(\alpha)(4\pi)^{d/2}\kappa_{k}^{2\nu}\tau_{k}^{2}\right)^{-1}$, where $\nu = \alpha - d/2$. Spectral theory shows that $\alpha$ must be an integer to obtain a Markov field. The form of the prior in Equation (3) assumes $\alpha = 2$, which makes $\nu = 1$ for a surface, which has dimension $d = 2$. Therefore, the variance of the random field simplifies to

$$\phi_{k} = (4\pi\kappa_{k}^{2}\tau_{k}^{2})^{-1}. \qquad (4)$$

The product $\kappa_{k}^{2}\tau_{k}^{2}$ in the variance remains the same if $\kappa_{k}^{2\prime} = c\kappa_{k}^{2}$ and $\tau_{k}^{2\prime} = \tau_{k}^{2}/c$ for any constant $c$, and early experiments showed that this can cause the EM algorithm to produce values for $\kappa_{k}^{2}$ and $\tau_{k}^{2}$ in which one approaches infinity while the other approaches zero. Therefore, we rewrite the precision in the SPDE prior in terms of the variance of the random field as

$$\mathbf{Q}_{k} = \frac{4\pi}{\phi_{k}}\left(\kappa_{k}^{2}\mathbf{C} + 2\mathbf{G} + \kappa_{k}^{-2}\mathbf{G}\mathbf{C}^{-1}\mathbf{G}\right) = \frac{4\pi}{\phi_{k}}\tilde{\mathbf{Q}}_{k}. \qquad (5)$$

In this form, $\phi_{k}$ controls the variance and $\kappa_{k}^{2}$ controls the spatial correlation of the random field.
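To make the construction concrete, the reparameterized precision in Equation (5) can be assembled directly from the sparse mesh matrices. Below is a minimal R sketch using the Matrix package; the function name and arguments are illustrative rather than taken from BayesfMRI.

library(Matrix)
# Q_k = (4*pi/phi) * (kappa2 * C + 2 * G + kappa2^{-1} * G C^{-1} G), Equation (5).
# C: sparse diagonal matrix of relative vertex precisions; G: sparse neighborhood matrix.
spde_precision <- function(kappa2, phi, C, G) {
  GCinvG <- G %*% solve(C, G)  # G C^{-1} G stays sparse because C is diagonal
  (4 * pi / phi) * (kappa2 * C + 2 * G + GCinvG / kappa2)
}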

Conveniently, cortical surface data are already represented on a triangular mesh, so the SPDE prior can be constructed directly on this mesh, avoiding blurring across structural boundaries.

For mathematical convenience, the model in Equation (2) can be rewritten using the notation in Equation (3) to include all $K$ task covariates in matrix form:

$$\mathbf{y} = \mathbf{X}\boldsymbol{\Psi}\mathbf{w} + \mathbf{e}, \qquad \mathbf{e} \sim \text{Normal}(\mathbf{0},\sigma^{2}\mathbf{I}), \qquad (6)$$

where $\mathbf{X} \in \mathbb{R}^{TN\times NK}$ is a column-bound matrix of the covariates, $\boldsymbol{\Psi} \in \mathbb{R}^{NK\times nK}$ is a block diagonal matrix projecting the covariate values onto the triangular mesh, and $\mathbf{w} \in \mathbb{R}^{nK\times 1}$ is the vector of coefficients corresponding to the covariates on the mesh.

2.2 Expectation-Maximization Procedure

In order to provide a computational alternative to INLA for the SBSB GLM, here we derive an EM algorithm (Dempster et al., 1977; Gelman et al., 2013) to find the mode of the posterior distribution of $\mathbf{w}$, and by extension $\boldsymbol{\beta}$, along with maximum likelihood parameter estimates for $\boldsymbol{\theta} = (\kappa_{1}^{2},\ldots,\kappa_{K}^{2},\phi_{1},\ldots,\phi_{K},\sigma^{2})^{\prime}$. This algorithm could easily be extended to incorporate priors on these hyperparameters to account for uncertainty in their values and avoid underestimating the posterior variance of the latent fields. In addition, we obtain an estimate of the posterior precision of $\boldsymbol{\beta}_{k}$, which can be used in conjunction with the excursions method developed by Bolin and Lindgren (2015, 2017, 2018) to determine areas of activation in the latent fields.

2.2.1 Expectation of the Log-Likelihood Density (E-step)

Beginning with the joint likelihood $\mathcal{L}(\boldsymbol{\theta}|\mathbf{y},\mathbf{w}) = p(\mathbf{y}|\mathbf{w},\sigma^{2})\,p(\mathbf{w}|\kappa^{2},\phi)$, the expectation of the log-likelihood density with respect to $\mathbf{w}$ given $\hat{\boldsymbol{\theta}}^{(s)} = (\kappa_{1}^{2(s)},\ldots,\kappa_{K}^{2(s)},\phi_{1}^{(s)},\ldots,\phi_{K}^{(s)},\sigma^{2(s)})^{\prime}$ at algorithm step $s$ is found to be

$$\begin{aligned}
R(\boldsymbol{\theta}|\hat{\boldsymbol{\theta}}^{(s)}) = E_{\mathbf{w}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}}(\log p(\mathbf{y},\mathbf{w}|\boldsymbol{\theta})) &= \int \log p(\mathbf{y}|\mathbf{w},\boldsymbol{\theta})\,p\left(\mathbf{w}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)d\mathbf{w} + \int \log p(\mathbf{w}|\boldsymbol{\theta})\,p\left(\mathbf{w}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)d\mathbf{w} \qquad (7) \\
&\propto R_{1}\left(\boldsymbol{\theta}|\hat{\boldsymbol{\theta}}^{(s)}\right) + R_{2}\left(\boldsymbol{\theta}|\hat{\boldsymbol{\theta}}^{(s)}\right),
\end{aligned}$$

where $\mathbf{w}|\mathbf{y},\boldsymbol{\theta} \sim \text{Normal}\left(\boldsymbol{\mu},\boldsymbol{\Sigma}\right)$, such that

$$\mathbf{Q} = \text{block-diagonal}(\mathbf{Q}_{1},\ldots,\mathbf{Q}_{K}), \qquad \boldsymbol{\Sigma} = \left(\mathbf{Q} + \frac{1}{\sigma^{2}}\boldsymbol{\Psi}^{\prime}\mathbf{X}^{\prime}\mathbf{X}\boldsymbol{\Psi}\right)^{-1}, \qquad \boldsymbol{\mu} = \frac{1}{\sigma^{2}}\boldsymbol{\Sigma}\boldsymbol{\Psi}^{\prime}\mathbf{X}^{\prime}\mathbf{y}.$$

In the interest of computational efficiency, we use several identities to avoid expensive matrix inversions (see Section 2.2.3). In particular, we avoid calculating $\boldsymbol{\Sigma}$ explicitly, which would be computationally prohibitive, as it requires the inversion of an $nK\times nK$ dense matrix, with $n$ between 1,000 and 30,000 in most practical applications. In the next section, we derive the MLE of $\sigma^{2}$ and outline the optimization strategy for $\kappa_{k}^{2}$ and $\phi_{k}$.

2.2.2 Maximization of the Log Likelihood Density (M-step)

Next, we expand the expected joint log-likelihood in Equation (7), which we maximize in order to find the MLEs of the elements of $\boldsymbol{\theta}$.

$$\begin{aligned}
R(\boldsymbol{\theta}|\hat{\boldsymbol{\theta}}^{(s)}) ={}& E_{\mathbf{w}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}}(\log p(\mathbf{y},\mathbf{w}|\boldsymbol{\theta})) = \int \log p(\mathbf{y}|\mathbf{w},\boldsymbol{\theta})\,p\left(\mathbf{w}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)d\mathbf{w} + \int \log p(\mathbf{w}|\boldsymbol{\theta})\,p\left(\mathbf{w}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)d\mathbf{w} \\
\propto{}& \int \log\left[\sigma^{-TN}\exp\left\{-\frac{1}{2\sigma^{2}}(\mathbf{y}-\mathbf{X}\boldsymbol{\Psi}\mathbf{w})^{\prime}(\mathbf{y}-\mathbf{X}\boldsymbol{\Psi}\mathbf{w})\right\}\right]p\left(\mathbf{w}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)d\mathbf{w} + \int \log\left[|\mathbf{Q}|^{1/2}\exp\left\{-\frac{1}{2}\mathbf{w}^{\prime}\mathbf{Q}\mathbf{w}\right\}\right]p\left(\mathbf{w}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)d\mathbf{w} \\
\propto{}& -\frac{TN}{2}\log(\sigma^{2}) - \frac{1}{2\sigma^{2}}\mathbf{y}^{\prime}\mathbf{y} + \int\left[\frac{1}{2\sigma^{2}}\mathbf{y}^{\prime}\mathbf{X}\boldsymbol{\Psi}\mathbf{w} + \frac{1}{2\sigma^{2}}\mathbf{w}^{\prime}\boldsymbol{\Psi}^{\prime}\mathbf{X}^{\prime}\mathbf{y} - \frac{1}{2\sigma^{2}}\mathbf{w}^{\prime}\boldsymbol{\Psi}^{\prime}\mathbf{X}^{\prime}\mathbf{X}\boldsymbol{\Psi}\mathbf{w}\right]p\left(\mathbf{w}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)d\mathbf{w} \\
& + \frac{1}{2}\log|\mathbf{Q}| - \frac{1}{2}\int \mathbf{w}^{\prime}\mathbf{Q}\mathbf{w}\,p\left(\mathbf{w}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)d\mathbf{w} \\
\propto{}& -\frac{TN}{2}\log(\sigma^{2}) - \frac{1}{2\sigma^{2}}\mathbf{y}^{\prime}\mathbf{y} + \frac{1}{\sigma^{2}}\mathbf{y}^{\prime}\mathbf{X}\boldsymbol{\Psi}\,\text{E}\left(\mathbf{w}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right) - \frac{1}{2\sigma^{2}}\text{E}\left(\mathbf{w}^{\prime}\boldsymbol{\Psi}^{\prime}\mathbf{X}^{\prime}\mathbf{X}\boldsymbol{\Psi}\mathbf{w}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right) \\
& + \frac{1}{2}\log|\mathbf{Q}| - \frac{1}{2}\text{E}\left(\mathbf{w}^{\prime}\mathbf{Q}\mathbf{w}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)
\end{aligned}$$

The two quantities $\text{E}\left(\mathbf{w}^{\prime}\boldsymbol{\Psi}^{\prime}\mathbf{X}^{\prime}\mathbf{X}\boldsymbol{\Psi}\mathbf{w}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)$ and $\text{E}\left(\mathbf{w}^{\prime}\mathbf{Q}\mathbf{w}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)$ are scalars, and are therefore equal to their traces. Because the trace is invariant to cyclic permutations, we can rewrite these two expectations as

$$\begin{aligned}
\text{E}\left(\mathbf{w}^{\prime}\boldsymbol{\Psi}^{\prime}\mathbf{X}^{\prime}\mathbf{X}\boldsymbol{\Psi}\mathbf{w}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right) &= \text{Tr}\left(\text{E}\left(\boldsymbol{\Psi}^{\prime}\mathbf{X}^{\prime}\mathbf{X}\boldsymbol{\Psi}\mathbf{w}\mathbf{w}^{\prime}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)\right) = \text{Tr}\left(\boldsymbol{\Psi}^{\prime}\mathbf{X}^{\prime}\mathbf{X}\boldsymbol{\Psi}\,\text{E}\left(\mathbf{w}\mathbf{w}^{\prime}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)\right) \\
\text{E}\left(\mathbf{w}^{\prime}\mathbf{Q}\mathbf{w}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right) &= \text{Tr}\left(\text{E}\left(\mathbf{Q}\mathbf{w}\mathbf{w}^{\prime}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)\right) = \text{Tr}\left(\mathbf{Q}\,\text{E}\left(\mathbf{w}\mathbf{w}^{\prime}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)\right)
\end{aligned}$$

We can now write the expected log-likelihood as

$$\begin{aligned}
R(\boldsymbol{\theta}|\hat{\boldsymbol{\theta}}^{(s)}) ={}& E_{\mathbf{w}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}}(\log p(\mathbf{y},\mathbf{w}|\boldsymbol{\theta})) \\
\propto{}& \underbrace{-\frac{TN}{2}\log(\sigma^{2}) - \frac{1}{2\sigma^{2}}\mathbf{y}^{\prime}\mathbf{y} + \frac{1}{\sigma^{2}}\mathbf{y}^{\prime}\mathbf{X}\boldsymbol{\Psi}\,\text{E}\left(\mathbf{w}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right) - \frac{1}{2\sigma^{2}}\text{Tr}\left(\boldsymbol{\Psi}^{\prime}\mathbf{X}^{\prime}\mathbf{X}\boldsymbol{\Psi}\,\text{E}\left(\mathbf{w}\mathbf{w}^{\prime}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)\right)}_{R_{1}\left(\boldsymbol{\theta}|\hat{\boldsymbol{\theta}}^{(s)}\right)} \qquad (8) \\
& + \underbrace{\frac{1}{2}\log|\mathbf{Q}| - \frac{1}{2}\text{Tr}\left(\mathbf{Q}\,\text{E}\left(\mathbf{w}\mathbf{w}^{\prime}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)\right)}_{R_{2}\left(\boldsymbol{\theta}|\hat{\boldsymbol{\theta}}^{(s)}\right)}
\end{aligned}$$

The MLE for $\sigma^{2}$ can be found through the maximization of $R_{1}\left(\boldsymbol{\theta}|\hat{\boldsymbol{\theta}}^{(s)}\right)$ with respect to $\sigma^{2}$:

$$\begin{aligned}
\frac{\partial R_{1}}{\partial\sigma^{2}} &= -\frac{TN}{2\sigma^{2}} + \frac{1}{2(\sigma^{2})^{2}}\mathbf{y}^{\prime}\mathbf{y} - \frac{1}{(\sigma^{2})^{2}}\mathbf{y}^{\prime}\mathbf{X}\boldsymbol{\Psi}\,\text{E}(\mathbf{w}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}) + \frac{1}{2(\sigma^{2})^{2}}\text{Tr}\left(\boldsymbol{\Psi}^{\prime}\mathbf{X}^{\prime}\mathbf{X}\boldsymbol{\Psi}\,\text{E}(\mathbf{w}\mathbf{w}^{\prime}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)})\right), \\
\widehat{\sigma^{2}} &= \frac{1}{TN}\left[\mathbf{y}^{\prime}\mathbf{y} - 2\mathbf{y}^{\prime}\mathbf{X}\boldsymbol{\Psi}\,\text{E}(\mathbf{w}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}) + \text{Tr}\left(\boldsymbol{\Psi}^{\prime}\mathbf{X}^{\prime}\mathbf{X}\boldsymbol{\Psi}\,\text{E}(\mathbf{w}\mathbf{w}^{\prime}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)})\right)\right] \qquad (9)
\end{aligned}$$

Next, values for $\phi_{k}$ and $\kappa_{k}^{2}$ must be found that maximize the log-likelihood. This is equivalent to optimizing $R_{2}(\boldsymbol{\theta}|\hat{\boldsymbol{\theta}}^{(s)})$ with respect to $\kappa_{k}^{2}$ and $\phi_{k}$. Setting $\tilde{\mathbf{Q}}_{k} = \left(\kappa_{k}^{2}\mathbf{C} + 2\mathbf{G} + \kappa_{k}^{-2}\mathbf{G}\mathbf{C}^{-1}\mathbf{G}\right)$, we rewrite $R_{2}\left(\boldsymbol{\theta}|\hat{\boldsymbol{\theta}}^{(s)}\right)$ as

$$\begin{aligned}
R_{2}\left(\boldsymbol{\theta}|\hat{\boldsymbol{\theta}}^{(s)}\right) &= \frac{1}{2}\log|\mathbf{Q}| - \frac{1}{2}\text{Tr}\left(\mathbf{Q}\,\text{E}\left(\mathbf{w}\mathbf{w}^{\prime}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)\right) \\
&= \frac{1}{2}\log\left(\prod_{k=1}^{K}\left|\frac{1}{4\pi\phi_{k}}\tilde{\mathbf{Q}}_{k}\right|\right) - \frac{1}{2}\text{Tr}\left(\mathbf{Q}\,\text{E}\left(\mathbf{w}\mathbf{w}^{\prime}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)\right) \\
&= \frac{n}{2}\sum_{k=1}^{K}\log\left(\frac{1}{4\pi\phi_{k}}\right) + \frac{1}{2}\sum_{k=1}^{K}\log\left(|\tilde{\mathbf{Q}}_{k}|\right) - \frac{1}{2}\text{Tr}\left(\mathbf{Q}\,\text{E}\left(\mathbf{w}\mathbf{w}^{\prime}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)\right) \\
&= -\frac{nK}{2}\log(4\pi) - \frac{n}{2}\sum_{k=1}^{K}\log(\phi_{k}) + \frac{1}{2}\sum_{k=1}^{K}\log\left(|\tilde{\mathbf{Q}}_{k}|\right) - \frac{1}{2}\sum_{k=1}^{K}\text{Tr}\left(\frac{1}{4\pi\phi_{k}}\tilde{\mathbf{Q}}_{k}\,\text{E}\left(\mathbf{w}_{k}\mathbf{w}_{k}^{\prime}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)\right), \\
\mathbf{Q} &= \text{block-diagonal}\left(\frac{1}{4\pi\phi_{1}}\tilde{\mathbf{Q}}_{1},\ldots,\frac{1}{4\pi\phi_{K}}\tilde{\mathbf{Q}}_{K}\right) \qquad (10)
\end{aligned}$$

so that

$$\begin{aligned}
\frac{\partial R_{2}}{\partial\phi_{k}}\left(\boldsymbol{\theta}|\hat{\boldsymbol{\theta}}^{(s)}\right) &= -\frac{n}{2\phi_{k}} + \frac{1}{8\pi\phi_{k}^{2}}\text{Tr}\left(\tilde{\mathbf{Q}}_{k}\,\text{E}\left(\mathbf{w}_{k}\mathbf{w}_{k}^{\prime}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)\right), \\
\hat{\phi}_{k} &= \frac{1}{4\pi n}\text{Tr}\left(\tilde{\mathbf{Q}}_{k}\,\text{E}\left(\mathbf{w}_{k}\mathbf{w}_{k}^{\prime}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)\right).
\end{aligned}$$

The optimal value for $\kappa_{k}^{2}$ is found by maximizing

$$f(\kappa_{k}^{2}|\hat{\phi}_{k}) = \frac{1}{2}\log\left(|\tilde{\mathbf{Q}}_{k}|\right) - \frac{1}{8\pi\hat{\phi}_{k}}\text{Tr}\left(\tilde{\mathbf{Q}}_{k}\,\text{E}\left(\mathbf{w}_{k}\mathbf{w}_{k}^{\prime}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}\right)\right) \qquad (11)$$

with respect to $\kappa_{k}^{2}$. Convergence of the EM algorithm occurs when the average squared difference between $\boldsymbol{\theta}^{(s)}$ and $\boldsymbol{\theta}^{(s-1)}$ drops below a given stopping-rule tolerance, $\epsilon$, which we set to 0.001 in our analyses. This value is based on a study of different stopping-rule tolerances using simulated datasets (see Appendix B).
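Viewed abstractly, the iteration is a fixed-point update of the hyperparameters with the stopping rule above. The R sketch below shows this driver logic only; update_fun stands in for one full E- and M-step (Equations (9) through (11)) and is a hypothetical placeholder, not a function from BayesfMRI.

# Generic EM driver: update_fun maps theta^(s) to theta^(s+1).
em_driver <- function(theta0, update_fun, tol = 0.001, max_iter = 1000) {
  theta <- theta0
  for (s in seq_len(max_iter)) {
    theta_new <- update_fun(theta)
    if (mean((theta_new - theta)^2) < tol) break  # average squared difference
    theta <- theta_new
  }
  theta
}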

2.2.3 Identities for computational efficiency

Each step of the EM algorithm relies on computationally efficient identities to reduce time and memory requirements. To begin, to calculate $\boldsymbol{\mu}_{k} = \frac{1}{\sigma^{2}}\boldsymbol{\Sigma}_{k}\boldsymbol{\Psi}_{k}^{\prime}\mathbf{X}_{k}^{\prime}\mathbf{y}$, we solve the linear equation

$$\boldsymbol{\Sigma}_{k}^{-1}\boldsymbol{\mu}_{k} = \frac{1}{\sigma^{2}}\boldsymbol{\Psi}_{k}^{\prime}\mathbf{X}_{k}^{\prime}\mathbf{y}.$$

Here, $\boldsymbol{\Sigma}_{k}^{-1}$ is sparse and inexpensive to calculate, and the Eigen C++ library (Guennebaud, Jacob, et al., 2010) has efficient routines for solving linear equations of the form $\mathbf{A}\mathbf{x} = \mathbf{b}$ when $\mathbf{A}$ is sparse.
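In R, the same solve can be illustrated with the Matrix package (the package itself performs the equivalent computation in C++ via Eigen; the function and argument names below are illustrative):

library(Matrix)
# Posterior mean mu_k: solve Sigma_k^{-1} mu_k = (1/sigma2) Psi' X' y, where
# Sigma_k^{-1} = Q_k + (1/sigma2) Psi' X' X Psi is sparse. XPsi = X_k %*% Psi_k.
posterior_mean <- function(Q_k, XPsi, y, sigma2) {
  prec <- Q_k + crossprod(XPsi) / sigma2  # sparse posterior precision
  rhs  <- crossprod(XPsi, y) / sigma2     # right-hand side (1/sigma2) Psi' X' y
  as.numeric(solve(prec, rhs))            # sparse Cholesky solve for mu_k
}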

Finding the MLE values for $\phi_{k}$ and $\kappa_{k}^{2}$ does not require calculating the posterior precision in full, only approximating a trace of a matrix product involving the block-diagonal task component $\boldsymbol{\Sigma}_{k}^{-1}$ of the precision. This allows operations to be performed on smaller matrices and in parallel across tasks. Specifically, the EM algorithm requires a trace of the form $\text{Tr}(\mathbf{A}\,\text{E}(\mathbf{w}_{k}\mathbf{w}_{k}^{\prime}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)}))$ in the updates for the hyperparameters $\phi_{k}$ and $\kappa_{k}^{2}$. This trace can be simplified as

$$\begin{aligned}
\text{Tr}\left(\mathbf{A}\,\text{E}(\mathbf{w}_{k}\mathbf{w}_{k}^{\prime}|\mathbf{y},\hat{\boldsymbol{\theta}}^{(s)})\right) &= \text{Tr}\left(\mathbf{A}\left(\boldsymbol{\Sigma}_{k} + \boldsymbol{\mu}_{k}\boldsymbol{\mu}_{k}^{\prime}\right)\right) \\
&= \text{Tr}\left(\mathbf{A}\boldsymbol{\Sigma}_{k}\right) + \text{Tr}\left(\mathbf{A}\boldsymbol{\mu}_{k}\boldsymbol{\mu}_{k}^{\prime}\right) \\
&= \text{Tr}\left(\mathbf{A}\boldsymbol{\Sigma}_{k}\right) + \text{Tr}\left(\boldsymbol{\mu}_{k}^{\prime}\mathbf{A}\boldsymbol{\mu}_{k}\right) \\
&= \text{Tr}\left(\mathbf{A}\boldsymbol{\Sigma}_{k}\right) + \boldsymbol{\mu}_{k}^{\prime}\mathbf{A}\boldsymbol{\mu}_{k}
\end{aligned}$$

The calculation of $\text{Tr}(\mathbf{A}\boldsymbol{\Sigma}_{k})$ is done via approximation using the Hutchinson trace estimator (Hutchinson, 1989). To begin, consider the linear system $\boldsymbol{\Sigma}^{-1}\mathbf{P} = \mathbf{V}$, where $\mathbf{V} \in \mathbb{R}^{nK\times N_{s}}$ is a random matrix whose entries take the values 1 and -1 with equal probability, $N_{s}$ is the size of the estimator, and $\mathbf{P} = \boldsymbol{\Sigma}\mathbf{V}$ is a low-dimensional sketch of $\boldsymbol{\Sigma}$ obtained by solving against the sparse posterior precision $\boldsymbol{\Sigma}^{-1}$. We choose $N_{s} = 50$ in our analyses because it limits the relative error of the approximation to about 10% with 95% probability (Skorski, 2021). The symbolic representation of the Cholesky decomposition of $\boldsymbol{\Sigma}^{-1}$ is calculated before beginning the EM algorithm, reducing the memory and time requirements at each iteration when solving for $\mathbf{P}$. We update $\mathbf{V}$ with new random values at each EM iteration to reduce sampling bias in the approximation. Combining these strategies results in the trace approximation

$$\text{Tr}(\mathbf{A}\boldsymbol{\Sigma}) \approx \frac{1}{N_{s}}\text{Tr}\left(\mathbf{P}^{\prime}\mathbf{A}\mathbf{V}\right).$$

This method reduces the computation time of estimating the trace from $\mathcal{O}(n^{3})$ to $\mathcal{O}(N_{s}^{3})$, where $N_{s} \ll n$.
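The estimator itself is compact to express; a minimal R sketch using the Matrix package, assuming a symmetric sparse matrix A and the sparse posterior precision (all names illustrative), is:

library(Matrix)
# Hutchinson estimator of Tr(A Sigma), where prec = Sigma^{-1} is sparse.
hutchinson_trace <- function(A, prec, N_s = 50) {
  n <- nrow(prec)
  # Rademacher probe matrix V with entries +1/-1 with equal probability
  V <- Matrix(sample(c(-1, 1), n * N_s, replace = TRUE), nrow = n, ncol = N_s)
  P <- solve(prec, V)                     # P = Sigma V via sparse Cholesky solve
  sum(diag(crossprod(P, A %*% V))) / N_s  # (1/N_s) Tr(P' A V)
}

In the implementation described above, V is redrawn at each EM iteration and the symbolic Cholesky factorization of the precision is reused across iterations.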

The speed of the EM algorithm is improved through the use of the squared extrapolation (SQUAREM) method developed by Varadhan and Roland (2004), which reduces the total number of evaluations of the fixed-point function $\boldsymbol{\theta}^{(s+1)} = f(\boldsymbol{\theta}^{(s)})$ at any EM algorithm iteration $s$. Code for the SQUAREM method in C++ is adapted from the ashr R package (Stephens et al., 2022).

All computations are implemented in the BayesfMRI package in R (Mejia et al., 2022), which uses the Rcpp and RcppEigen packages (Eddelbuettel and François, 2011; Eddelbuettel, 2013; Bates and Eddelbuettel, 2013) to implement C++ code. We rely upon the Eigen template library for linear algebra in C++ (Guennebaud et al., 2010) to achieve significant speed improvements for linear algebra operations.

Time to convergence is substantially decreased by using the classical GLM estimates for $\mathbf{w}_{k}$ to inform the initial hyperparameter estimates $\hat{\boldsymbol{\theta}}^{(0)}$. Initial values $\hat{\phi}_{k}^{(0)}$ and $\hat{\kappa}_{k}^{2(0)}$ are found by iteratively solving the following two equations until a convergence threshold is met.

$$\hat{\phi}_{k}^{(0)} = \frac{1}{4\pi n}\hat{\mathbf{w}}_{k}^{\prime}\tilde{\mathbf{Q}}_{k}\hat{\mathbf{w}}_{k}, \qquad \hat{\kappa}_{k}^{2(0)} = \operatorname*{argmax}_{\kappa_{k}^{2}}\ \frac{1}{2}\log|\tilde{\mathbf{Q}}_{k}| - \frac{1}{8\pi\hat{\phi}_{k}^{(0)}}\hat{\mathbf{w}}_{k}^{\prime}\tilde{\mathbf{Q}}_{k}\hat{\mathbf{w}}_{k}$$

Finally, the variance of the residuals from the classical GLM is used as the initial value for the observation variance, $\hat{\sigma}^{2(0)}$. This computationally efficient algorithm allows for rapid convergence to Bayesian estimates of the hyperparameters at the subject level, which can then be combined to allow for group-level inference.
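A minimal R sketch of this initialization for one task, assuming the sparse mesh matrices C and G and a classical GLM estimate w_hat (all names illustrative, not from BayesfMRI):

library(Matrix)
# Iterate the two initialization equations until kappa2 stabilizes.
init_hyperparameters <- function(w_hat, C, G, tol = 1e-6) {
  n <- length(w_hat)
  Q_tilde <- function(kappa2) kappa2 * C + 2 * G + (G %*% solve(C, G)) / kappa2
  quad <- function(Qt) as.numeric(t(w_hat) %*% Qt %*% w_hat)  # w' Q_tilde w
  kappa2 <- 1
  repeat {
    phi <- quad(Q_tilde(kappa2)) / (4 * pi * n)
    obj <- function(log_k2) {  # f(kappa2 | phi), optimized on the log scale
      Qt <- Q_tilde(exp(log_k2))
      0.5 * as.numeric(determinant(Qt, logarithm = TRUE)$modulus) -
        quad(Qt) / (8 * pi * phi)
    }
    kappa2_new <- exp(optimize(obj, interval = log(c(1e-4, 1e4)), maximum = TRUE)$maximum)
    if (abs(kappa2_new - kappa2) < tol) break
    kappa2 <- kappa2_new
  }
  list(phi = phi, kappa2 = kappa2)
}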

2.3 Multi-subject analyses

Mejia et al. (2020) develop a joint multi-subject model that combines results across multiple single-subject models by taking a weighted average of their hyperparameters, which results in a multi-subject posterior distribution for the SPDE hyperparameters ($\boldsymbol{\theta}_{G}$). Our multi-subject model differs because each single-subject analysis returns a point estimate of the maximum a posteriori values of $\boldsymbol{\theta}_{m}$ for each subject $m$, rather than a distribution. For our multi-subject model, we assume that the values found in the single-subject analyses are noisy estimates of the fixed true values of $\boldsymbol{\theta}_{G}$. Therefore, in order to find the maximum a posteriori values for $\boldsymbol{\theta}_{G}$ across all $M$ subjects, the multi-subject estimate of the SPDE hyperparameters is found as

$$\hat{\boldsymbol{\theta}}_{G} = (\phi_{1,G},\ldots,\phi_{K,G},\kappa_{1,G}^{2},\ldots,\kappa_{K,G}^{2},\sigma_{G}^{2})^{\prime} = \sum_{m=1}^{M}\lambda_{m}\hat{\boldsymbol{\theta}}_{m},$$

where $\hat{\boldsymbol{\theta}}_{m}$ is the vector of maximum a posteriori values found for subject $m$ and $\lambda_{m}$ is a weight assigned to each subject's observation, which can be assigned based on the amount of data in each subject's scan and their respective noise levels. We then adjust the posterior distributions of $\mathbf{w}_{m}$ found for each of the $M$ subjects to reflect these updated hyperparameter values, calculating $\mathbf{Q}_{G}$ as in Equation (10) using $\hat{\boldsymbol{\theta}}_{G}$:

$$\mathbf{w}_{m}^{*}|\mathbf{y},\boldsymbol{\theta}_{G} \sim \text{Normal}\left(\boldsymbol{\mu}_{m},\boldsymbol{\Sigma}_{m}\right), \qquad \boldsymbol{\Sigma}_{m} = \left(\mathbf{Q}_{G} + \frac{1}{\sigma_{G}^{2}}\boldsymbol{\Psi}^{\prime}\mathbf{X}_{m}^{\prime}\mathbf{X}_{m}\boldsymbol{\Psi}\right)^{-1}, \qquad \boldsymbol{\mu}_{m} = \frac{1}{\sigma_{G}^{2}}\boldsymbol{\Sigma}_{m}\boldsymbol{\Psi}^{\prime}\mathbf{X}_{m}^{\prime}\mathbf{y}.$$

Next, $H$ samples are taken from these updated posterior distributions for the subjects, and a contrast vector $\mathbf{c} \in \mathbb{R}^{KM\times 1}$ is specified in order to find the multi-subject posterior distribution for a linear combination of tasks and subjects. For example, if an equally-weighted average of the task activation amplitudes across all subjects for the first task is desired, then the contrast vector takes the form

$$\mathbf{c} = (\mathbf{c}_{1}^{\prime},\mathbf{c}_{2}^{\prime},\ldots,\mathbf{c}_{M}^{\prime})^{\prime}, \qquad \mathbf{c}_{m} = (1/M,0,\ldots,0)^{\prime}\in\mathbb{R}^{K}.$$

A contrast matrix is then created using the Kronecker product, $\mathbf{C} = \mathbf{I}_{N} \otimes \mathbf{c}^{\prime} \in \mathbb{R}^{N\times NKM}$, and the multi-subject posterior draws for the average activation amplitude for task 1 are found as $\boldsymbol{\beta}^{*} = \mathbf{C}\boldsymbol{\beta}^{(1:H)}$, where $\boldsymbol{\beta}^{(1:H)} = (\boldsymbol{\beta}_{1}^{*}\cdots\boldsymbol{\beta}_{H}^{*}) \in \mathbb{R}^{NKM\times H}$ is a column-bound matrix, $\boldsymbol{\beta}_{h}^{*} = (\boldsymbol{\beta}_{1,h}^{*\prime}\cdots\boldsymbol{\beta}_{M,h}^{*\prime})^{\prime} \in \mathbb{R}^{NKM\times 1}$, $\boldsymbol{\beta}_{m,h}^{*} = \boldsymbol{\Psi}\mathbf{w}_{m,h}^{*} \in \mathbb{R}^{NK\times 1}$, and $\mathbf{w}_{m,h}^{*} \in \mathbb{R}^{nK\times 1}$ is the $h$th draw from the posterior distribution of $\mathbf{w}_{m}^{*}$. This principled combination of single-subject results provides the distribution of a multi-subject average activation amplitude effect, which can be used with the excursions method to determine population-level areas of activation, as discussed in the next section.
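As an illustration, the contrast $\mathbf{c}_{m} = (1/M, 0, \ldots, 0)^{\prime}$ amounts to averaging each subject's task-1 draws directly; a minimal R sketch, assuming the first $N$ rows of $\boldsymbol{\Psi}\mathbf{w}$ correspond to task 1 (all names illustrative):

# Population-average posterior draws for task 1 across M subjects.
# w_draws: list of M matrices (nK x H) of posterior draws of w_m* under theta_G;
# Psi: NK x nK projection matrix; N: number of data locations.
population_task1_draws <- function(w_draws, Psi, N) {
  M <- length(w_draws)
  beta <- lapply(w_draws, function(w) as.matrix(Psi %*% w)[1:N, , drop = FALSE])
  Reduce(`+`, beta) / M  # N x H draws of the equally-weighted average amplitude
}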

2.4 Determination of Activation

In addition to estimating the effect size of brain activation, it is also desirable to determine which regions of the brain are significantly activated above a given threshold. The Bayesian GLM proposed by Mejia et al. (2020) and validated by Spencer et al. (2022) takes advantage of the joint posterior distribution of the activation effects at all data locations on the cortical surface or subcortical volume through the use of the excursions set method proposed by Bolin and Lindgren (2015). The excursions method examines the joint distribution of the activations $\boldsymbol{\beta}_{k} = (\beta_{1,k},\ldots,\beta_{V,k})^{\prime}$ for task $k$ at locations $v = 1,\ldots,V$. Specifically, the excursions method finds the largest set $\mathcal{V} = (a_{1},a_{2},\ldots,a_{p})^{\prime}$ of data locations such that $P(\beta_{a_{1},k} > \gamma, \beta_{a_{2},k} > \gamma, \ldots, \beta_{a_{p},k} > \gamma) \geq 1-\alpha$ for a given threshold $\gamma$ and credible level $1-\alpha$. This is done using the posterior distributions, at either the single- or multi-subject level, for the activation amplitudes $\boldsymbol{\beta}_{k}$. The excursions method is implemented in the excursions package in R (Bolin and Lindgren, 2015, 2017, 2018), which is used by the BayesfMRI package.
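For example, given a posterior mean mu_k and sparse posterior precision prec_k for task $k$ (illustrative names; BayesfMRI handles this internally), identifying the excursion set can be sketched roughly as follows; consult the excursions package documentation for the exact interface:

library(excursions)
# Locations jointly above gamma = 0 with joint probability at least 1 - alpha = 0.99
res <- excursions(alpha = 0.01, u = 0, mu = mu_k, Q = prec_k, type = ">")
active <- res$F >= 0.99  # threshold the excursion function at 1 - alpha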

3 Simulated Data Analysis

Data were simulated on the cortical surface in R using our brainSim package (https://github.com/danieladamspencer/brainSim/tree/1.0). This package generates functional MRI data on the cortical surface using design matrices with spatially-dependent coefficients and temporally autoregressive error terms. The software can simulate data for a variable number of subjects, scanning sessions, and scanning runs, which allows for nested individual, session, and run variances around a "true" global coefficient.

3.1 Single-subject analysis

We begin with simulations for a single subject, generating data under nine different conditions: the combinations of resampling resolution $n = \{2{,}000;\ 5{,}000;\ 10{,}000\}$ cortical surface vertices per hemisphere and number of tasks $K = \{2, 5, 8\}$. In all conditions, the length of the scan is set to $T = 300$ and the maximum value of the spatial coefficient is set to 2; the average value of the nonzero true coefficients is approximately 0.5 due to spatial smoothing in the data generation. The error term was set to have a variance of 1 and was assumed to be temporally independent.

Each of the nine conditions was used to generate 10 datasets, all of which were analyzed using the classical GLM and the INLA and EM implementations of the SBSB GLM. The INLA implementation allows for the use of the PARDISO linear algebra library (Alappat et al., 2020; Bollhöfer et al., 2020, 2019), which allows multiple processor threads to be used in expensive matrix solve computations; in these trials, the INLA implementation was allowed to use 6 threads as an advantageous setting for analysis. We compare the results in terms of the time to perform the analysis after preprocessing (Figure 1(a)) and in terms of root mean squared error (RMSE) (Figure 1(b)). These results show that the EM implementation is faster than the INLA implementation when the number of tasks ($K$) is relatively small. The speeds are similar when $K = 5$; for larger $K$, the INLA implementation is currently faster due to faster solve times stemming from its use of the PARDISO library. Implementing the EM algorithm with a solver that can use multiple cores is likely to further increase its speed, potentially outstripping the INLA implementation across all settings; however, no parallelization libraries compatible with the RcppEigen library are currently available across both UNIX and Windows systems. In terms of RMSE, the EM implementation's accuracy is similar to that of the INLA implementation, though it is important to note that the INLA implementation failed to converge in the analysis of 13 of the 90 datasets and thus did not produce estimates in those cases. Twelve of these datasets had $K = 2$, and of those, 7 had $n = 10{,}000$. This suggests that the EM algorithm is more stable than the INLA implementation when evaluating experiments with few tasks as resolution increases. A visualization of an estimated and true coefficient surface for the condition with $n = 5{,}000$ and $K = 5$ can be seen in Figure 2(a).

Figure 1: Performance comparison for the single-subject simulated data. (a) Time to run a single-subject analysis after preprocessing. (b) The square root of the mean squared error in single-subject analyses. Each boxplot summarizes the distribution of times and errors for ten simulated datasets under each of nine scenarios analyzed using the three methods. RMSE performance of the INLA method is approximate, as the algorithm did not converge for 13 of the 90 datasets; twelve of these cases occurred when $K = 2$.

Under the simulation condition with resolution $n = 5{,}000$ and number of tasks $K = 5$, simulated data were generated for two runs in a single session to allow for sharing of the hyperparameters $\boldsymbol{\theta} = (\kappa_{1},\ldots,\kappa_{5},\phi_{1},\ldots,\phi_{5},\sigma^{2})^{\prime}$ across the runs. The INLA and EM implementations of the Bayesian GLM were then used to analyze the first run by itself and the two runs combined. The effect estimates and activations illustrating the difference between the 1- and 2-run analyses can be seen in Figure 2. Because a small run effect is simulated in the generation of the data, the true values of the activation amplitude are not identical between the two runs; therefore, the true nonzero coefficient region in the single run is not centered in exactly the same location as the true nonzero coefficient region across both runs. In terms of both coefficient estimation and activation, the EM implementation performs very similarly to the INLA implementation. The posterior standard deviations of the activation amplitudes for a single-run analysis are shown for the EM and INLA implementations in Figure 8; they are slightly lower in the INLA implementation due to the prior imposed on the range and variance hyperparameters.

Figure 2: Cortical surface estimates of (a) the amplitude of activation and (b) areas of activation, in percent signal change, from the EM and INLA implementations of the Bayesian GLM and the classical GLM, for a single run and the 2-run average, with the truth shown for comparison. Activation thresholds are $\gamma = 0\%$, $0.5\%$, and $1\%$. Different color scales are shown for the true values and the estimates of the amplitudes of activation because the estimates are attenuated, as one would expect for models assuming sparsity. For the Bayesian GLM implementations, the areas of activation are found using the excursions method as areas deemed jointly greater than threshold $\gamma$ with probability 0.99. For the classical GLM, the areas of activation are found by fixing the false discovery rate at 0.01.

3.2 Multi-subject analysis

In order to compare the performance of the multi-subject analysis between the EM and INLA implementations of the SBSB GLM and the classical GLM, we generated a simulated population of 100 subjects whose true amplitude maps were translated in location from a population amplitude map by an average of 5 mm. Each subject's data were simulated at a resolution of about 5,000 vertices per hemisphere, with a TR of 1 second and a total length of 300 seconds. Two tasks were included in the simulation, each with a maximum true signal-to-noise ratio of 2 before the BOLD response and design matrices were scaled. Each subject's data were analyzed separately following the single-subject analysis for the EM and INLA implementations of the SBSB GLM and the classical GLM, as outlined in Section 2.1. Next, 100 different subsets of size 10 were drawn from the population, and the multi-subject analysis was performed using the single-subject results from the subjects within each subset. For each multi-subject analysis, the square root of the mean squared error was computed with respect to the true simulated population amplitudes. The Dice coefficient (Dice, 1945) is a measure of overlap between sets $\mathcal{A}$ and $\mathcal{B}$, found as

$$\text{Dice}(\mathcal{A},\mathcal{B}) = \frac{2\times|\mathcal{A}\cap\mathcal{B}|}{|\mathcal{A}| + |\mathcal{B}|},$$

where $|\mathcal{X}|$ is the number of elements in set $\mathcal{X}$ and $\mathcal{X}\cap\mathcal{Y}$ is the intersection of sets $\mathcal{X}$ and $\mathcal{Y}$. The Dice coefficient takes values between 0 and 1, with higher values indicating more overlap between sets. We calculate the Dice coefficient between the set of locations with true positive activation amplitudes in the population and the set of locations determined to have amplitudes significantly greater than zero ($\gamma = 0\%$) in each subset for each modeling method, as outlined in Section 2.4.
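Computed over logical activation maps, the coefficient is a one-liner; a minimal R sketch:

# Dice coefficient for two logical vectors marking activated locations
dice <- function(a, b) {
  2 * sum(a & b) / (sum(a) + sum(b))
}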

Figure 3(a) shows the true value of the multi-subject activation amplitude for a single task, as well as the locations where the true amplitudes are greater than $\gamma = \{0\%, 0.5\%, 1\%\}$ signal change, alongside the results from the analysis of a single 10-subject subset. The EM and INLA implementations of the SBSB GLM produce similar amplitude estimates and areas of activation, showing attenuation between the true values and the estimates. In contrast, the point estimates from the classical GLM show many values appreciably distant from zero in the true zero-valued region, while finding only one location significantly different from zero within the true nonzero-valued region. None of the modeling methods found any locations in which the amplitude was significantly greater than 0.5%, due to the attenuation expected in the Bayesian model and the low power of the classical GLM. For the Bayesian GLM implementations, the areas of activation are found using the excursions method as areas deemed jointly greater than threshold $\gamma$ with probability 0.99. For the classical GLM, the areas of activation are found by hypothesis test after fixing the false discovery rate at 0.01.

Figure 3(b) shows the square root of the mean squared error (RMSE) and the Dice coefficient for each of the 100 subsets of ten subjects, comparing accuracy across the different methods. The true amplitude values for the whole population of 100 simulated subjects were used to compute these accuracy measures, rather than the true values for each subset, as population-level inference is the goal of the multi-subject analysis. The EM implementation of the SBSB GLM exhibits generally lower RMSE values than the INLA implementation and the classical GLM, while exhibiting only slightly lower values for the Dice coefficient. The EM estimates exhibit slightly less attenuation than the INLA estimates (shown in Figure 9), which is expected because no prior distributions are assumed on the hyperparameters in the EM implementation. The classical GLM performs poorly in terms of the Dice coefficient compared to both implementations of the SBSB GLM. These analyses suggest that the multi-subject analysis using the EM implementation of the SBSB GLM is likely to deliver powerful results in the analysis of real cs-fMRI data.

Figure 3: (a) Amplitude of activations and areas of activation from the multi-subject analysis of a single subset, shown for the truth and the EM, INLA, and classical GLM methods, with activation thresholds $\gamma = 0\%$, $0.5\%$, and $1\%$. (b) The RMSE values and the Dice coefficients for the 100 subsets. Different color scales are shown for the true values and the estimates of the amplitudes of activation because the estimates are attenuated, as one would expect for models assuming sparsity. For the Bayesian GLM implementations, the areas of activation are found using the excursions method as areas deemed jointly greater than threshold $\gamma$ with probability 0.99. For the classical GLM, the areas of activation are found by fixing the false discovery rate at 0.01.

4 Analysis of HCP Data

We selected 10 subjects from the Human Connectome Project (HCP) (Barch et al., 2013) 1200-subject release in order to simulate the small-sample conditions common in many neuroimaging experiments, as Spencer et al. (2022) show the INLA implementation of the SBSB GLM to be powerful in such small studies. Motor task data were analyzed using the INLA and EM implementations of the SBSB GLM in order to compare the methods and their performance. These data were collected with two fMRI runs per session, one with phase encoding from left to right and the other from right to left. We analyze the average across these two runs because the acquisition-related effects are not of interest. The motor tasks in the experiment included tapping the fingers of the left and right hands, moving the toes of the left and right feet, and pressing the tongue to the inside of the cheek. A visual cue was used to indicate which task to perform.

These data were obtained from the HCP after processing with the minimal preprocessing pipeline (Van Essen et al., 2013; Barch et al., 2013), which projects the volumetric blood oxygen level-dependent (BOLD) values to the cerebral cortex and subcortical regions and registers the resulting surfaces to a common surface template. The pipeline also constructs high-resolution surface meshes with 164,000 vertices using structural T1-weighted and T2-weighted MRI scans, which are resampled to 32,000 vertices per hemisphere to match the resolution of the fMRI scans. To regularize the mapping onto the cortical surface, the fMRI time series are smoothed with a Gaussian kernel with a 2mm full-width at half-maximum (FWHM).

After these steps from the HCP minimal preprocessing pipeline, the additional preprocessing of the fMRI time series outlined in Appendix A is applied in order to remove nuisance effects and temporal autocorrelation and to make the parameter estimates of \boldsymbol{\beta}^{k} interpretable in terms of percent signal change, given the unitless nature of fMRI BOLD measures. The data are resampled to around 5,000 vertices per hemisphere using the HCP Workbench (Marcus et al., 2011) via the R package ciftiTools (Pham et al., 2022), a resolution shown by Spencer et al. (2022) to be computationally efficient without sacrificing effective inferential resolution. See Figure 10 for a comparison of cortical surfaces across subjects and resolutions. The preprocessed data for both runs of the first scanning session were analyzed separately for each hemisphere and each subject using the INLA and EM implementations of the SBSB GLM.
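As an illustration, reading and resampling a cs-fMRI scan might look like the following sketch with ciftiTools; the file name and Workbench path are placeholders, and the exact options used in our pipeline may differ.

```r
library(ciftiTools)
# Point ciftiTools to a local Connectome Workbench installation (path is a placeholder)
ciftiTools.setOption("wb_path", "/path/to/workbench")

# Read the dense time series and resample to roughly 5,000 vertices per hemisphere
xii <- read_cifti(
  "tfMRI_MOTOR_LR_Atlas.dtseries.nii",   # hypothetical HCP file name
  brainstructures = c("left", "right"),
  resamp_res = 5000
)
```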

Figure 4 shows the estimates and areas of activation for the tongue task in three different subjects. The tongue task is chosen for display because of its easily-visible pattern of activation in the sensorimotor cortex, which highlights individual differences in the patterns of estimates and activations. The INLA and EM implementations perform very similarly in terms of the task coefficient estimates, and the EM implementation shows slightly more activation, particularly at the lowest threshold, \gamma=0\%. This is likely because the EM method treats the hyperparameters as fixed, which reduces its estimate of the posterior variance.

[Figure 4 image panels. (a) Estimates and (b) activations for Subjects A, B, and C under the INLA and EM implementations, with activation thresholds \gamma = 0%, 0.5%, and 1%.]
Figure 4: Coefficient estimates and activations from the INLA and EM implementations of the Bayesian GLM for the tongue task for three subjects. Estimates are shown with units in % signal change. Activations are regions determined to be above the threshold γ\gamma using the excursions method with joint probability of 0.99.

Figure 5 shows the multi-subject estimates and areas of activation for the tongue task, based on all 10 subjects. The amplitude estimates from the EM implementation of the SBSB GLM are very similar to those found using the INLA implementation, and the areas of activation show that the EM algorithm detects more activations, especially at the \gamma=0\% threshold, again due to the underestimation of posterior variance. However, the activations found at the \gamma=0.5\% and \gamma=1\% thresholds are very similar, suggesting high levels of agreement at higher, more neurologically meaningful thresholds.
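To illustrate how these activation maps are obtained, the sketch below computes a joint excursion set with the excursions R package (Bolin and Lindgren, 2018); the posterior mean and precision here are toy stand-ins for the quantities produced by the SBSB GLM.

```r
library(excursions)
library(Matrix)

# Toy posterior for one task: 100 vertices with independent marginals
n  <- 100
mu <- c(rep(1, 20), rep(0, 80))   # posterior mean amplitudes (% signal change)
Q  <- Diagonal(n, 10)             # posterior precision (sparse in the SBSB GLM)

# Vertices jointly above gamma = 0.5% with probability 0.99 (alpha = 0.01)
res    <- excursions(alpha = 0.01, u = 0.5, mu = mu, Q = Q, type = ">")
active <- res$E   # estimated excursion set (see the package docs for its format)
```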

[Figure 5 image panels. Group estimates and activations for the INLA and EM implementations, with activation thresholds \gamma = 0%, 0.5%, and 1%.]
Figure 5: Group coefficient estimates and activations from the INLA and EM implementations of the Bayesian GLM for the tongue task for 10 subjects. Estimates are shown with units in % signal change. Activations are regions determined to be above the threshold γ\gamma using the excursions method with joint probability of 0.99.

5 Conclusions

We develop and implement an EM algorithm to fit the surface-based spatial Bayesian GLM to cortical surface task fMRI data. This provides an exact method for fitting the Bayesian GLM that reduces both the memory consumption required by the INLA implementation and the dependence on the INLA package. The INLA package, while powerful, is not available on the Comprehensive R Archive Network (CRAN) and may be difficult or impossible to install on some computer systems. The memory requirements of the INLA package also put the SBSB GLM out of reach for researchers without significant computing resources at their disposal.

Through analysis of simulated data, we determine that the EM algorithm performs well compared to the INLA implementation, both of which strongly outperform the surface-based classical GLM in terms of accuracy. The group-level results for the INLA and EM implementations are highly consistent, both in terms of effect size estimation and activation detection.

Analysis of motor task data from the Human Connectome Project confirms that the EM implementation performs similarly to the INLA implementation: both methods find sparse, spatially-contiguous nonzero estimates of activation amplitude that display subject-specific characteristics. As in the simulated data, the EM implementation finds more activated locations than INLA at a threshold of \gamma=0\%, with similar numbers of activated locations at higher thresholds.

Future work on this method includes improving computational efficiency through parallelized linear algebra solvers and through conjugate gradient methods. With such an optimized parallel solver, we expect the EM algorithm to take less time than INLA in all scenarios. These speed improvements would also make the spatial Bayesian GLM applicable to subcortical data, which have denser neighborhood structures. A forthcoming software paper for the BayesfMRI R package will make the functions used to perform these analyses accessible to anyone performing inference on a relatively modest laptop computer.

6 Acknowledgements

Data were provided in part by the Human Connectome Project, WU- Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657) funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research; and by the McDonnell Center for Systems Neuroscience at Washington University.

The research of Daniel Spencer and Amanda Mejia is partially funded by the National Institute of Biomedical Imaging and Bioengineering (R01EB027119).

References

  • Alappat et al. (2020) Christie Alappat, Achim Basermann, Alan R. Bishop, Holger Fehske, Georg Hager, Olaf Schenk, Jonas Thies, and Gerhard Wellein. A recursive algebraic coloring technique for hardware-efficient symmetric sparse matrix-vector multiplication. ACM Trans. Parallel Comput., 7(3), June 2020. ISSN 2329-4949. doi: 10.1145/3399732. URL https://doi.org/10.1145/3399732.
  • Barch et al. (2013) Deanna M Barch, Gregory C Burgess, Michael P Harms, Steven E Petersen, Bradley L Schlaggar, Maurizio Corbetta, Matthew F Glasser, Sandra Curtiss, Sachin Dixit, Cindy Feldt, et al. Function in the human connectome: task-fMRI and individual differences in behavior. Neuroimage, 80:169–189, 2013.
  • Bates and Eddelbuettel (2013) Douglas Bates and Dirk Eddelbuettel. Fast and elegant numerical linear algebra using the RcppEigen package. Journal of Statistical Software, 52(5):1–24, 2013. doi: 10.18637/jss.v052.i05.
  • Bishop and Nasrabadi (2006) Christopher M Bishop and Nasser M Nasrabadi. Pattern recognition and machine learning, volume 4. Springer, 2006.
  • Bolin and Lindgren (2013) David Bolin and Finn Lindgren. A comparison between Markov approximations and other methods for large spatial data sets. Computational Statistics & Data Analysis, 61:7–21, 2013.
  • Bolin and Lindgren (2015) David Bolin and Finn Lindgren. Excursion and contour uncertainty regions for latent Gaussian models. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 77(1):85–106, 2015.
  • Bolin and Lindgren (2017) David Bolin and Finn Lindgren. Quantifying the uncertainty of contour maps. Journal of Computational and Graphical Statistics, 26(3):513–524, 2017.
  • Bolin and Lindgren (2018) David Bolin and Finn Lindgren. Calculating probabilistic excursion sets and related quantities using excursions. Journal of Statistical Software, 86(5):1–20, 2018. doi: 10.18637/jss.v086.i05.
  • Bollhöfer et al. (2019) Matthias Bollhöfer, Aryan Eftekhari, Simon Scheidegger, and Olaf Schenk. Large-scale sparse inverse covariance matrix estimation. SIAM Journal on Scientific Computing, 41(1):A380–A401, 2019. doi: 10.1137/17M1147615. URL https://doi.org/10.1137/17M1147615.
  • Bollhöfer et al. (2020) Matthias Bollhöfer, Olaf Schenk, Radim Janalik, Steve Hamm, and Kiran Gullapalli. State-of-the-Art Sparse Direct Solvers. Springer International Publishing, Cham, 2020. ISBN 978-3-030-43736-7. doi: 10.1007/978-3-030-43736-7_1. URL https://doi.org/10.1007/978-3-030-43736-7_1.
  • Chatfield and Collins (2018) Christopher Chatfield and Alexander J Collins. Introduction to multivariate analysis. Routledge, 2018.
  • Dempster et al. (1977) Arthur P Dempster, Nan M Laird, and Donald B Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B (Methodological), 39(1):1–22, 1977.
  • Dice (1945) Lee R Dice. Measures of the amount of ecologic association between species. Ecology, 26(3):297–302, 1945.
  • Eddelbuettel (2013) Dirk Eddelbuettel. Seamless R and C++ Integration with Rcpp. Springer, New York, 2013. doi: 10.1007/978-1-4614-6868-4. ISBN 978-1-4614-6867-7.
  • Eddelbuettel and François (2011) Dirk Eddelbuettel and Romain François. Rcpp: Seamless R and C++ integration. Journal of Statistical Software, 40(8):1–18, 2011. doi: 10.18637/jss.v040.i08.
  • Eklund et al. (2016) Anders Eklund, Thomas E Nichols, and Hans Knutsson. Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. Proceedings of the national academy of sciences, 113(28):7900–7905, 2016.
  • Eklund et al. (2019) Anders Eklund, Hans Knutsson, and Thomas E Nichols. Cluster failure revisited: Impact of first level design and physiological noise on cluster false positive rates. Human brain mapping, 40(7):2017–2032, 2019.
  • Elliott et al. (2020) Maxwell L Elliott, Annchen R Knodt, David Ireland, Meriwether L Morris, Richie Poulton, Sandhya Ramrakha, Maria L Sison, Terrie E Moffitt, Avshalom Caspi, and Ahmad R Hariri. What is the test-retest reliability of common task-functional MRI measures? New empirical evidence and a meta-analysis. Psychological Science, 31(7):792–806, 2020.
  • Eshel (2003) Gidon Eshel. The Yule-Walker equations for the AR coefficients. Internet resource, 2:68–73, 2003.
  • Friston et al. (1994) Karl J Friston, Andrew P Holmes, Keith J Worsley, J-P Poline, Chris D Frith, and Richard SJ Frackowiak. Statistical parametric maps in functional imaging: a general linear approach. Human brain mapping, 2(4):189–210, 1994.
  • Gelman et al. (2013) Andrew Gelman, John B Carlin, Hal S Stern, David B Dunson, Aki Vehtari, and Donald B Rubin. Bayesian data analysis. CRC press, 2013.
  • Glasser et al. (2013) Matthew F Glasser, Stamatios N Sotiropoulos, J Anthony Wilson, Timothy S Coalson, Bruce Fischl, Jesper L Andersson, Junqian Xu, Saad Jbabdi, Matthew Webster, Jonathan R Polimeni, et al. The minimal preprocessing pipelines for the human connectome project. Neuroimage, 80:105–124, 2013.
  • Guennebaud et al. (2010) Gaël Guennebaud, Benoît Jacob, et al. Eigen v3. http://eigen.tuxfamily.org, 2010.
  • Guhaniyogi and Spencer (2021) Rajarshi Guhaniyogi and Daniel Spencer. Bayesian tensor response regression with an application to brain activation studies. Bayesian Analysis, 16(4):1221–1249, 2021.
  • Hutchinson (1989) Michael F Hutchinson. A stochastic estimator of the trace of the influence matrix for Laplacian smoothing splines. Communications in Statistics-Simulation and Computation, 18(3):1059–1076, 1989.
  • Lindgren et al. (2011) Finn Lindgren, Håvard Rue, and Johan Lindström. An explicit link between Gaussian fields and Gaussian Markov random fields: The stochastic partial differential equation approach (with discussion). Journal of the Royal Statistical Society B, 73(4):423–498, 2011.
  • Lindquist (2008) Martin A Lindquist. The statistical analysis of fMRI data. Statistical science, 23(4):439–464, 2008.
  • Marcus et al. (2011) Daniel S Marcus, John Harwell, Timothy Olsen, Michael Hodge, Matthew F Glasser, Fred Prior, Mark Jenkinson, Timothy Laumann, Sandra W Curtiss, and David C Van Essen. Informatics and data mining tools and strategies for the human connectome project. Frontiers in neuroinformatics, 5:4, 2011.
  • Mejia et al. (2022) Amanda Mejia, Daniel A. Spencer, Damon Pham, David Bolin, Sarah Ryan, and Yu (Ryan) Yue. BayesfMRI, April 2022. URL https://github.com/mandymejia/BayesfMRI.
  • Mejia et al. (2020) Amanda F Mejia, Yu Yue, David Bolin, Finn Lindgren, and Martin A Lindquist. A Bayesian general linear modeling approach to cortical surface fMRI data analysis. Journal of the American Statistical Association, 115(530):501–520, 2020.
  • Opitz (2017) Thomas Opitz. Latent Gaussian modeling and INLA: A review with focus on space-time applications. Journal de la société française de statistique, 158(3):62–85, 2017.
  • Parlak et al. (2022) Fatma Parlak, Damon Pham, Daniel Spencer, Robert Welsh, and Amanda Mejia. Sources of residual autocorrelation in multiband task fMRI and strategies for effective mitigation. arXiv preprint arXiv:2209.06783, 2022.
  • Penny et al. (2005) William D Penny, Nelson J Trujillo-Barreto, and Karl J Friston. Bayesian fMRI time series analysis with spatial priors. NeuroImage, 24(2):350–362, 2005.
  • Pham et al. (2022) Damon D Pham, John Muschelli, and Amanda F Mejia. ciftiTools: A package for reading, writing, visualizing, and manipulating CIFTI files in R. NeuroImage, 250:118877, 2022.
  • Poldrack et al. (2011) Russell A Poldrack, Jeanette A Mumford, and Thomas E Nichols. Handbook of functional MRI data analysis. Cambridge University Press, 2011.
  • Poline and Mazoyer (1993) Jean-Baptiste Poline and Bernard M Mazoyer. Analysis of individual positron emission tomography activation maps by detection of high signal-to-noise-ratio pixel clusters. Journal of Cerebral Blood Flow & Metabolism, 13(3):425–437, 1993.
  • Poline et al. (1997) Jean-Baptiste Poline, Keith J Worsley, Alan C Evans, and Karl J Friston. Combining spatial extent and peak intensity to test for activations in functional imaging. Neuroimage, 5(2):83–96, 1997.
  • Rue et al. (2009) Håvard Rue, Sara Martino, and Nicholas Chopin. Approximate Bayesian inference for latent Gaussian models using integrated nested Laplace approximations (with discussion). Journal of the Royal Statistical Society B, 71:319–392, 2009.
  • Sidén et al. (2017) Per Sidén, Anders Eklund, David Bolin, and Mattias Villani. Fast Bayesian whole-brain fMRI analysis with spatial 3D priors. NeuroImage, 146:211–225, 2017.
  • Skorski (2021) Maciej Skorski. Modern analysis of Hutchinson’s trace estimator. In 2021 55th Annual Conference on Information Sciences and Systems (CISS), pages 1–5. IEEE, 2021.
  • Smith and Nichols (2009) Stephen M Smith and Thomas E Nichols. Threshold-free cluster enhancement: addressing problems of smoothing, threshold dependence and localisation in cluster inference. Neuroimage, 44(1):83–98, 2009.
  • Spencer et al. (2020) Daniel Spencer, Rajarshi Guhaniyogi, and Raquel Prado. Joint Bayesian estimation of voxel activation and inter-regional connectivity in fMRI experiments. Psychometrika, 85(4):845–869, 2020.
  • Spencer et al. (2022) Daniel Spencer, Yu Ryan Yue, David Bolin, Sarah Ryan, and Amanda F Mejia. Spatial Bayesian GLM on the cortical surface produces reliable task activations in individuals and groups. NeuroImage, 249:118908, 2022.
  • Stephens et al. (2022) Matthew Stephens, Peter Carbonetto, Jason Willwerscheid, Chaoxing Dai, et al. ashr: An R package for adaptive shrinkage. https://github.com/stephens999/ashr, 2022.
  • Van Essen et al. (2013) David C Van Essen, Stephen M Smith, Deanna M Barch, Timothy EJ Behrens, Essa Yacoub, Kamil Ugurbil, Wu-Minn HCP Consortium, et al. The WU-Minn human connectome project: an overview. Neuroimage, 80:62–79, 2013.
  • Varadhan and Roland (2004) Ravi Varadhan and Ch Roland. Squared extrapolation methods (SQUAREM): A new class of simple and efficient numerical schemes for accelerating the convergence of the EM algorithm. Johns Hopkins University, Dept. of Biostatistics Working Papers, 2004.
  • Wang and Titterington (2005) Bo Wang and D Michael Titterington. Inadequacy of interval estimates corresponding to variational Bayesian approximations. In International Workshop on Artificial Intelligence and Statistics, pages 373–380. PMLR, 2005.
  • Welvaert et al. (2011) Marijke Welvaert, Joke Durnez, Beatrijs Moerkerke, Geert Verdoolaege, and Yves Rosseel. neuRosim: An R package for generating fMRI data. Journal of Statistical Software, 44(10):1–18, 2011. URL http://www.jstatsoft.org/v44/i10/.
  • Zhang et al. (2016) Linlin Zhang, Michele Guindani, Francesco Versace, Jeffrey M Engelmann, and Marina Vannucci. A spatiotemporal nonparametric Bayesian model of multi-subject fMRI data. The Annals of Applied Statistics, 10(2):638–666, 2016.

Appendix A Preprocessing steps

Consider cs-fMRI or subcortical fMRI data from a scan, represented as \mathbf{y}_{t}\in\mathbb{R}^{N} for times t=1,\ldots,T at N locations. These data are gathered while a subject completes K different tasks as part of an experimental design. The design is represented as \mathbf{x}_{t}^{k}\in\mathbb{R}^{N}, which may not be identical across all locations after preprocessing (see below for details). The J nuisance regressors, accounting for unwanted effects such as motion and scanner drift, are denoted z_{t}^{j}. Brain activation for data in this form is modeled through the general linear model

\mathbf{y}_{t}=\boldsymbol{\mu}+\sum_{k=1}^{K}\mathbf{x}_{t}^{k}\boldsymbol{\beta}^{k}+\sum_{j=1}^{J}z_{t}^{j}b^{j}+\mathbf{e}_{t}\qquad(12)

where \boldsymbol{\mu} is the mean value, \boldsymbol{\beta}^{k} is the regression coefficient for task k, b^{j} is the regression coefficient for nuisance covariate j, and \mathbf{e}_{t} is the error term. In our treatment of the model, the data are preprocessed to remove the mean effect \boldsymbol{\mu}, as described below. The data are then fit with either the classical or the Bayesian general linear model, with the main objective of performing inference on the parameters \boldsymbol{\beta}^{k}.
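For reference, once the mean and nuisance terms have been removed by preprocessing, the classical version of this model reduces to ordinary least squares at each location; a minimal R sketch with illustrative names:

```r
# Y: T x V matrix of preprocessed time series (time by vertices)
# X: T x K HRF-convolved design matrix
fit_classical_glm <- function(Y, X) {
  fit <- lm.fit(x = X, y = Y)
  fit$coefficients   # K x V matrix of amplitude estimates, one beta^k per vertex
}
```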

Data preprocessing is a common practice in the modeling of neuroimaging data, done with the intention of quickly removing variance stemming from known sources of error, such as movement and temporal autocorrelation. We preprocess the data using functions within the BayesfMRI R software package, following the steps outlined below.

The design matrix is preprocessed to account for the delay between the task stimulus and the physiological response through convolution of the raw stimulus function x_{k}(\tau) for task k with a haemodynamic response function (HRF) h(t),

x_{t,k}=\int_{0}^{t}x_{k}(\tau)\,h(t-\tau)\,d\tau.

We use the canonical (double-gamma) HRF characterized as

h(t)=\left(\frac{t}{a_{1}b_{1}}\right)^{a_{1}}e^{-(t-a_{1}b_{1})/b_{1}}-c\left(\frac{t}{a_{2}b_{2}}\right)^{a_{2}}e^{-(t-a_{2}b_{2})/b_{2}},\qquad(13)

with parameters set to the default values given in the neuRosim package in R [Welvaert et al., 2011]: a_{1}=6, a_{2}=12, b_{1}=b_{2}=0.9, and c=0.35. The columns of the HRF-convolved design matrix \mathbf{X}\in\mathbb{R}^{T\times K} are then scaled by dividing each column by its maximum value and centering the resulting values around 0. This keeps estimates of \boldsymbol{\beta}^{k} comparable across tasks as measures of percent signal change.
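A minimal R sketch of this construction, using the parameter values above; the repetition time and boxcar stimulus are illustrative:

```r
# Canonical double-gamma HRF of Equation (13), neuRosim default parameters
canonical_hrf <- function(t, a1 = 6, a2 = 12, b1 = 0.9, b2 = 0.9, c = 0.35) {
  (t / (a1 * b1))^a1 * exp(-(t - a1 * b1) / b1) -
    c * (t / (a2 * b2))^a2 * exp(-(t - a2 * b2) / b2)
}

TR  <- 0.72                             # illustrative repetition time (seconds)
hrf <- canonical_hrf(seq(0, 30, by = TR))
stimulus <- rep(c(0, 1), each = 50)     # toy boxcar task indicator

# Convolve the stimulus with the HRF (rev() gives the usual convolution in R),
# then divide by the maximum and center so estimates read as % signal change
x <- convolve(stimulus, rev(hrf), type = "open")[seq_along(stimulus)]
x <- x / max(x)
x <- x - mean(x)
```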

To facilitate spatial modeling given current memory capacities, the first preprocessing step applied to the response data is resampling to a lower resolution, done for the HCP data using an interpolation method outlined in Glasser et al. [2013]. This resampling presents a tradeoff between computational efficiency and the spatial resolution of the inference and should be considered carefully. Spencer et al. [2022] examine this tradeoff in detail and show that resampling to a resolution of around 5,000 vertices per hemisphere does not significantly alter inference on cortical surfaces. After resampling, the values of the response \mathbf{y}_{v} at each data location v are centered and scaled using the function

f(\mathbf{y}_{v})=100\times\frac{\mathbf{y}_{v}-\bar{y}_{v}}{\bar{y}_{v}},\qquad(14)

where \mathbf{y}_{v} is the cs-fMRI time series at location v, and \bar{y}_{v} is the average value at that location across time. This transformation makes the fMRI time series interpretable as percent signal change, while also removing the need for the mean value parameter \boldsymbol{\mu} in Equation (12).
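In R, the transformation in Equation (14) is a one-line operation on a locations-by-time data matrix; a minimal sketch with illustrative names:

```r
# Y: V x T matrix of BOLD time series (locations by time)
scale_psc <- function(Y) {
  ybar <- rowMeans(Y)        # average over time at each location
  100 * (Y - ybar) / ybar    # percent signal change, centered at zero
}
```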

Next, nuisance regression is performed to remove the effects of known confounding variables z_{t}^{j} from the response data. The centered and scaled response data are regressed against the nuisance variables, and the estimated nuisance effects are subtracted:

\tilde{\mathbf{y}}_{v}=f(\mathbf{y}_{v})-\sum_{j=1}^{J}\mathbf{z}^{j}\hat{b}^{j},

where f(\mathbf{y}_{v}) is as defined in Equation (14), \mathbf{z}^{j}\in\mathbb{R}^{T} is the value of nuisance covariate j across time, and \hat{b}^{j} is the estimate of the nuisance parameter found by regressing f(\mathbf{y}_{v}) on \mathbf{Z}=(\mathbf{z}^{1},\cdots,\mathbf{z}^{J}).
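A minimal R sketch of this step, assuming the scaled data are stored as a locations-by-time matrix and the nuisance covariates as a time-by-J matrix (names illustrative):

```r
# Y_psc: V x T matrix of scaled data; Z: T x J matrix of nuisance covariates
residualize <- function(Y_psc, Z) {
  # lm.fit regresses all locations at once when the response is a T x V matrix
  fit <- lm.fit(x = cbind(1, Z), y = t(Y_psc))
  t(fit$residuals)   # V x T matrix with nuisance effects removed
}
```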

To remove temporal autocorrelation within the data, prewhitening is performed next. This process first finds the residuals from a regression of the response \tilde{\mathbf{y}}_{v} on the design matrix \mathbf{X}_{v}\in\mathbb{R}^{T\times K}, and then fits an AR(6) autoregressive model to these residuals by solving the Yule-Walker equations [Eshel, 2003]. The coefficients and residual variance of the AR(6) model are spatially smoothed using a Gaussian kernel with a full-width at half-maximum (FWHM) of 6 mm, and are used to construct a temporal covariance matrix \mathbf{S} for the response at each location. The inverse square root of this covariance matrix, \mathbf{D}=\mathbf{S}^{-1/2}, is found using singular value decomposition. Finally, both the response data and the design matrix are premultiplied by \mathbf{D} to produce the preprocessed response and task covariate data at each location.
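The sketch below illustrates the prewhitening computation at a single location, omitting the spatial smoothing of the AR coefficients; it uses an eigendecomposition, which for a symmetric positive-definite matrix yields the same inverse square root as an SVD (names illustrative):

```r
# resid_v: length-T residual series at one location after the initial GLM fit
prewhitening_matrix <- function(resid_v, T_len = length(resid_v)) {
  # AR(6) coefficients via the Yule-Walker equations
  fit <- stats::ar.yw(resid_v, order.max = 6, aic = FALSE)
  # Implied autocorrelations out to lag T - 1, scaled to a covariance matrix
  rho <- stats::ARMAacf(ar = fit$ar, lag.max = T_len - 1)
  S   <- toeplitz(rho) * var(resid_v)
  # D = S^(-1/2); both y_v and X_v are then premultiplied by D
  e <- eigen(S, symmetric = TRUE)
  e$vectors %*% diag(1 / sqrt(e$values)) %*% t(e$vectors)
}
```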

After all of these preprocessing steps are applied, the data can be fit via a general linear model of the form shown in Equation (1).

Appendix B Choice of stopping rule tolerance

A simulation study was performed in which the stopping rule tolerance varied from 1 down to 0.001 by powers of 10 to examine the effect of the tolerance on the speed and accuracy, measured through the square root of the mean squared error (RMSE), of the EM algorithm. In all cases, the model was fitted to simulated cortical surface data on the left hemisphere for four simulated tasks at a spatial resolution of 5,000 vertices per hemisphere. This data-generation setting was used to produce 9 different datasets in order to characterize the variability in time and accuracy; Figure 6 shows the differences in accuracy and speed across tolerance levels. Based on this analysis, the stopping rule tolerance was set to \epsilon=0.001 for all analyses in this paper.
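For concreteness, a stopping rule of this kind could take the following form; the use of the relative change in the hyperparameter vector is an assumption made for illustration:

```r
# Stop when the relative change in the hyperparameter vector theta
# falls below the tolerance epsilon (set to 0.001 in this paper)
em_converged <- function(theta_new, theta_old, eps = 0.001) {
  sqrt(sum((theta_new - theta_old)^2)) / sqrt(sum(theta_old^2)) < eps
}
```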

[Figure 6 image.]
Figure 6: A comparison of computation time and inferential accuracy for different stopping rule tolerances in the EM algorithm.

Appendix C Additional figures

[Figure 7 image.]
Figure 7: An example of the mesh structure on the cortical surface.
[Figure 8 image panels. Posterior standard deviation maps for the EM and INLA implementations.]
Figure 8: The posterior standard deviations for the activation amplitudes of a single task in a single run of simulated data, found using the posterior mode estimates of \boldsymbol{\theta} for the EM implementation and the posterior samples for the INLA implementation. The EM implementation shows a slightly higher standard deviation than the INLA method across much of the field due to the lack of a prior distribution on the hyperparameters, which influences the location of the posterior mode.
[Figure 9 image.]
Figure 9: A comparison of the multi-subject point estimates between the EM and INLA implementations of the SBSB GLM and the classical GLM for a single subset of 10 subjects. The EM implementation exhibits less attenuation of estimates for higher values while still maintaining similar regularization for smaller values. For reference, the actual highest value in the population is about 1.4.
[Figure 10 image panels. Surfaces for Subjects A and B at 32k and 5k vertices.]
Figure 10: Full-resolution (32k vertices) and resampled (5k vertices) surfaces for two subjects, showing the differences between subject-specific surfaces that are taken into account by the SBSB GLM.