

Bayesian outcome selection modelling

Khue-Dung Dang$^{1,*}$, Louise M. Ryan$^{2,3}$, Richard J. Cook$^{4}$, Tugba Akkaya-Hocagil$^{4}$, Sandra W. Jacobson$^{5}$ and Joseph L. Jacobson$^{5}$
Abstract.

Psychiatric and social epidemiology often involves assessing the effects of environmental exposure on outcomes that are difficult to measure directly. To address this problem, it is common to measure outcomes using a comprehensive battery of different tests thought to be related to a common, underlying construct of interest. In the application that motivates our work, for example, researchers wanted to assess the impact of in utero alcohol exposure on child cognition and neuropsychological development, which were evaluated using a range of different tests. Statistical analysis of the resulting multiple outcomes data can be challenging, not only because of the need to account for the correlation between outcomes measured on the same individual, but because it is often unclear, a priori, which outcomes are impacted by the exposure under study. While researchers will generally have some hypotheses about which outcomes are important, a framework is needed to help identify outcomes that are sensitive to the exposure and to quantify the associated treatment or exposure effects of interest. We propose such a framework using a modification of stochastic search variable selection (SSVS), a popular Bayesian variable selection model and use it to quantify an overall effect of the exposure on the affected outcomes. We investigate the performance of the method via simulation and illustrate its application to data from a study involving the effects of prenatal alcohol exposure on child cognition.

Keywords: Bayesian inference; fetal alcohol spectrum disorders; multiple outcomes; prenatal alcohol exposure; variable selection.

1 School of Mathematics and Statistics, University of Melbourne, Melbourne, Australia
2 School of Mathematical and Physical Sciences, University of Technology Sydney, Sydney, Australia
3 Australian Research Council Centre of Excellence for Mathematical and Statistical Frontiers, Australia
4 Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, Canada
5 Department of Psychiatry and Behavioral Neurosciences, Wayne State University School of Medicine, Detroit, Michigan, USA
Corresponding author: kd.dang@unimelb.edu.au

1. Introduction

Multiple outcomes are routinely collected in psychological and social epidemiology studies in which participants are assessed on a comprehensive battery of tests or tasks designed to measure psychological, neurological or cognitive outcomes that are difficult to measure directly. A large number of outcomes are often collected, so it can be challenging to decide which outcomes to include in the analysis. Scientists typically rely on previous studies, in combination with expert knowledge, to select the outcomes on which to focus. To date, however, no statistical framework has been developed for identifying outcomes that are sensitive to an exposure, nor for quantifying the magnitude of the associated effects.

Our work is motivated by a cohort study aimed at providing insights into the precise impact of prenatal alcohol exposure (PAE) on child development. Numerous studies have shown that high levels of PAE can result in a distinct pattern of craniofacial anomalies, growth restriction, and cognitive and behavioral deficits, a condition known as fetal alcohol syndrome (FAS) [1, 2], the most severe of a continuum of fetal alcohol spectrum disorders (FASD) [3, 4, 5, 6]. Alternatively, some individuals with PAE exhibit cognitive and/or behavioural impairment without the characteristic craniofacial dysmorphology and/or growth restriction, a disorder known as alcohol-related neurodevelopmental disorder (ARND). It is also critical to understand the more subtle effects of low-dose exposures. In this longitudinal study, funded by the US National Institutes of Health, expectant mothers were interviewed about their drinking habits during the pregnancy, and their children were followed throughout childhood and adolescence, many of them until they were 20 years of age [4, 7]. At every follow-up visit, children were assessed using a broad range of cognitive and neuro-developmental tests. Each of the administered tests could be classified as relevant to one of several different domains including cognition, executive function, and behaviour, among others. Previous neurocognitive studies have suggested that the impact of PAE will not be the same across all of these domains, given that alcohol may have a stronger effect on certain parts of the brain, while other areas may be relatively unaffected or spared, depending on the timing of the exposure, genetic vulnerability, and ethnic or racial group [4, 8, 9]. Recent analyses by our group made use of expert knowledge to select outcomes for analysis and simply assumed that each had been affected by PAE to some extent [10].
However, a more sophisticated modelling framework is needed to discriminate between outcomes that are strongly or weakly affected by PAE. In this paper we develop and evaluate a method which addresses this need.

There is a rich literature on statistical methods for the analysis of multiple outcomes data. A simple approach is to analyze each outcome separately, but this can result in reduced power due to the need to adjust for multiple comparisons [11]. Structural equation models (SEM) can also be used to model correlated outcomes by introducing latent variables and treating the outcomes as manifestations of the latent variables [12, 13, 14]. The estimated factor loadings can provide some insight regarding whether or not the various outcomes are representative of the hidden factors. However, interpreting regression coefficients characterizing the relationship between the exposure and the latent factor can be problematic, and inferences can be very sensitive to model misspecification [15]. Meta-analysis is another popular approach to the synthesis of multiple outcomes data, but relatively little work has been carried out on dealing with highly correlated outcomes in observational settings [16, 17, 18]. Meta-analysis, adjusted for between-outcome correlations, was the method used in the previous paper that we published on this topic [10]. Generalized estimating equations [19] have also been used to analyze multiple outcomes data, with working covariance matrices specified to accommodate correlations across outcomes, since the repeated observations on each individual can be viewed as a special type of clustered data. Generalized linear mixed models offer another framework to model the effect of exposure on multiple outcomes [20, 21].

A limitation of the available statistical methods for analyzing multiple outcomes data is that researchers must specify the outcomes to be included in the analysis. This is usually done using expert knowledge or following some gatekeeping procedure to select the subset of affected outcomes (see, for example, [22]), but this is challenging when outcomes are high dimensional or when expert knowledge does not provide strong a priori guidance. Moreover, using exploratory data analysis to guide the decision-making increases the risk of distorting inferences due to multiple comparisons. There is a clear need for a principled statistical approach for identification of relevant outcomes on which to model the exposure effects, whilst accounting for the correlation among the outcomes.

As we will show presently, the challenge of identifying which of many observed outcomes are sensitive to an exposure can be reframed as a variable selection problem. Variable selection is typically carried out to choose a subset of candidate predictors that together explain most of the variation in a single response variable. The variable selection literature has a long history, from earlier frequentist approaches such as “best subset” regression, model selection based on the Akaike/Bayesian information criterion [23, 24], and backward and forward stepwise regression, to the more recent Bayesian methods that involve a wide range of “slab-and-spike” or shrinkage priors (see Hastie et al. [25], O’Hara and Sillanpää [26] and Van Erp et al. [27] for reviews and discussions of some of these popular approaches). Despite this large literature, there is no method that incorporates selection of key outcomes from a number of potentially correlated outcomes, with a view to identifying those that are sensitive to exposure.

In this paper, we first show how the multiple outcomes modelling problem can be reframed as one of variable selection. We adopt a Bayesian approach to analyze outcomes and identify those that are strongly affected by the exposure. The model is motivated by the popular stochastic search variable selection (SSVS) method for variable selection, though we extend the SSVS prior to allow estimation of a mean effect among the sensitive outcomes. We propose a random effects model to account for the correlation among the outcomes of each individual.

The paper is organized as follows: following this introduction, in Section 2 we present the basic model and discuss the associated computing approach. In two simulation studies in Section 3 we assess the performance of our method in comparison to other variable selection models. In Section 4 we use the model to analyze the data from our motivating application related to the effect of in utero alcohol exposure on different measures of child cognition. In Section 5 we present the conclusions of the paper.

2. Methodology

2.1. Addressing the outcome selection problem

Suppose we observe $K$ continuous outcomes for each of $n$ individuals. The outcomes will typically be correlated because they are measured on the same individual, though they may differ in scale and nature; for example, the outcomes may measure different domains of a person’s cognitive function. Exposure effects are therefore not expected to be exactly the same across affected outcomes, but rather to vary around a mean level $\mu$, which we identify as the parameter of interest. For each individual, we observe the exposure variable $x$ and use $z$ to denote other observed predictor variables that can be added to the model. In the application to our motivating study in Section 4, $z$ is a propensity score computed for each individual to adjust for confounders.

We now show how to express our multiple outcomes data as panel data in long format. Consider a sample of $n$ independent individuals labeled $j=1,\dots,n$, where each individual provides $K$ outcomes labeled $p=1,\dots,K$. Stacking the observations of all $n$ individuals gives a data set with $nK$ observations in total. Row $i$ of this data set records the observed outcome corresponding to individual $j[i]$ and outcome $p[i]$.

To represent the multiple outcomes problem as a multiple predictors problem, we first define a new set of covariates $x_{1},\dots,x_{K}$,

x_{ik}=\text{exposure}_{j[i]}\bm{1}(p[i]=k),

for $i=1,\dots,nK$, individual $j=1,\dots,n$ and outcome $p=1,\dots,K$. These are the interaction terms between the exposure level and the dummy variables indicating the outcome variables. An example of the data frame format, and how to map from the multiple outcome format to the stacked format for $n$ individuals and $K=3$ outcome variables, is presented in Figure 2.1.

Figure 2.1. Illustration of a data table with $n$ individuals and $K=3$ outcome variables.
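As a concrete illustration of the stacking described above, the following minimal NumPy sketch (the function name and array layout are our own, not from the paper) converts an $n \times K$ wide-format outcome matrix into the long format and builds the interaction covariates $x_{ik}$:

```python
import numpy as np

def stack_outcomes(Y, exposure):
    """Reshape wide multiple-outcome data (n x K) into long format and
    build the interaction covariates x_{ik} = exposure_{j[i]} * 1(p[i] = k)."""
    n, K = Y.shape
    y = Y.reshape(-1)                      # nK stacked observations
    j = np.repeat(np.arange(n), K)         # individual index j[i]
    p = np.tile(np.arange(K), n)           # outcome index p[i]
    # each row of X has at most one non-zero entry: the exposure of
    # individual j[i], placed in column p[i]
    X = exposure[j][:, None] * (p[:, None] == np.arange(K))
    return y, X, j, p

rng = np.random.default_rng(0)
Y = rng.normal(size=(4, 3))                # n = 4 individuals, K = 3 outcomes
y, X, j, p = stack_outcomes(Y, rng.normal(size=4))
```

Fitting the mixed model then reduces to a standard long-format regression with `X` among the predictors.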

We then model the $y_{i}$ using a linear mixed model:

(2.1) y_{i}=\nu_{p[i]}+\alpha_{j[i]}+\sum_{k=1}^{K}\beta_{k}x_{ik}+\gamma_{p[i]}z_{j[i]}+\epsilon_{i},

for $i=1,\dots,nK$, individual $j=1,\dots,n$ and outcome $p=1,\dots,K$. The error terms $\epsilon_{i}$ are independent and normally distributed, $\epsilon_{i}\sim N(0,\sigma^{2}_{p[i]})$, and the random effect $\alpha_{j[i]}\sim N(0,\sigma_{r}^{2})$ accounts for the within-individual correlation, with $\alpha_{j}\perp\alpha_{j^{\prime}}$ for $j\neq j^{\prime}$.

The parameters $\nu_{p}$ and $\gamma_{p}$ are outcome-specific intercepts and coefficients for $z$, and the coefficient $\beta_{k}$ represents the exposure effect on outcome $k$, assumed to vary around a common value $\mu$:

\beta_{k}\sim N(\mu,\tau^{2}),

for $k=1,\dots,K$.

This “trick” of expressing the multiple outcomes problem as a repeated measures problem has been widely used in the literature, making it straightforward to then analyze multiple outcomes using standard mixed modelling or GEE software [28]. We now have a mixed model with both varying intercepts and varying coefficients, and we will perform variable selection on the $K$ covariates $x_{1},\dots,x_{K}$.

If we assume that all outcomes are associated in a similar way with the exposure or treatment of interest, with effects varying around a mean level $\mu$, then we can assign appropriate priors for $\mu$ and $\tau$ and estimate the model using a Bayesian estimation procedure. In this new outcome selection framework, we assume that some of the outcomes are not relevant to the analysis. This implies $\beta_{k}=0$ for some $k$. We then use variable selection priors to analyze the $\beta_{k}$.

Over the past decade or so, variable selection methodologies have been extended to the random effect context; see for example [29, 30] and [31]. For our model, because we do not need to perform variable selection for the random effects, it is straightforward to use existing techniques for the independent predictors $x_{k}$. This means that we can potentially use any of a variety of sparsity priors, such as stochastic search variable selection [32], the Bayesian LASSO [33, 34] and the horseshoe prior [35], for outcome selection. However, because we are interested in selecting the subset of affected outcome variables and quantifying the mean exposure effect on these variables, in this study we focus on the use of the stochastic search variable selection method. We discuss this further in the next section.

2.2. Stochastic search variable selection

There is a large literature on Bayesian variable selection methods; however, in this paper we only discuss the “slab and spike” type of priors, as they are suitable for our problem of identifying sensitive outcomes from a large number of outcomes. Methods that compare models by Bayes factors [36] or criteria such as DIC [37] or WAIC [38] require fitting all candidate models and hence are only applicable when comparing a small number of models. Therefore, they are not suitable for the outcome selection problem.

Methods involving a “slab and spike” prior can be divided into two categories: methods that specify a prior that approximates the “slab and spike” shape for the coefficients $\beta_{k}$, and methods that use latent indicator variables to indicate whether a covariate is included in the model. Shrinkage priors such as the Bayesian LASSO [33, 34] and the horseshoe prior [35] belong to the first category. These methods are straightforward to implement and have been used extensively in recent years; however, it is not possible to modify these priors to incorporate a common mean for the non-zero coefficients.

The second category defines a latent variable $I_{k}$ that indicates whether a coefficient $\beta_{k}$ is non-zero. In the approaches proposed by [39] and [40], a coefficient $\beta_{k}$ is set to 0 if $I_{k}=0$. These approaches are harder to tune so as to ensure the iterates of $I_{k}$ do not get stuck at 0 or 1. Our method is motivated by the stochastic search variable selection (SSVS) method [32], which instead defines a mixture prior for $\beta_{k}$: let $I_{k}$ be a latent indicator variable, where $I_{k}=1$ means covariate $k$ is included in the model and $I_{k}=0$ means it is not. The indicator affects the prior of $\beta_{k}$, so we can define a joint prior for $(I_{k},\beta_{k})$ as

p(I_{k},\beta_{k})=p(\beta_{k}|I_{k})\,p(I_{k}).

Conditioning on $I_{k}$, the prior of $\beta_{k}$ is

p(\beta_{k}|I_{k})=(1-I_{k})N(0,g_{1})+I_{k}N(0,\tau^{2}).

The tuning parameters $g_{1}$ and $\tau^{2}$ specify the scale of the conditional prior of $\beta_{k}$ given $I_{k}$ and are typically data dependent: they should take into account the posterior distribution of the $\beta_{k}$ and hence the size of the data [32]. Ideally, $g_{1}$ should be small enough that if $\beta_{k}\sim N(0,g_{1})$, then $\beta_{k}$ can safely be replaced by 0. On the other hand, $\tau$ must be large enough that non-zero $\beta_{k}$ can be included in the model. These parameters can be fixed or estimated as model parameters [32]. Another variation of SSVS takes $g_{1}$ to be $\tau^{2}$ multiplied by a constant [41].

For our outcome selection framework, we propose to modify the SSVS prior to incorporate the mean exposure effect $\mu$ on the sensitive outcomes. Conditioning on $I_{k}$, we now have a mixture prior for $\beta_{k}$:

(2.2) p(\beta_{k}|I_{k})=(1-I_{k})N(0,g_{1})+I_{k}N(\mu,\tau^{2}).

To improve the performance of the model, we follow [41] and modify the prior in (2.2) to

(2.3) p(\beta_{k}|I_{k})=(1-I_{k})N(0,\tau^{2}/c)+I_{k}N(\mu,\tau^{2}).

The constant $c$ can be chosen by trial and error to ensure good separation between the ‘in’ and ‘out’ variables. The standard deviations $\tau$, $\sigma_{k}$ and $\sigma_{r}$ are assigned log-normal priors in our simulation examples and real data application.

Note that the posterior mean of $I_{k}$ is the posterior probability that outcome $k$ is included in the model, so it is important for interpreting the results of our model fit. The prior on $I_{k}$ can simply be a Bernoulli distribution with a fixed probability parameter $p(I_{k}=1)$. This prior probability may differ across outcomes, based on expert knowledge, or be fixed at 0.5 so that the prior is non-informative. We can also treat $p(I_{k}=1)$ as a parameter to be estimated [26].
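To make the behaviour of the modified prior in (2.3) concrete, the following sketch draws from its two mixture components. The values of $\mu$, $\tau$ and $c$ here are illustrative only; this is not the paper's implementation, which is fitted in JAGS.

```python
import numpy as np

def sample_beta(I_k, mu, tau, c, rng):
    """Draw beta_k from the modified SSVS prior (2.3): a narrow spike
    N(0, tau^2/c) when I_k = 0, and a slab N(mu, tau^2) centred at the
    mean exposure effect when I_k = 1."""
    if I_k == 1:
        return rng.normal(mu, tau)               # outcome k is affected
    return rng.normal(0.0, tau / np.sqrt(c))     # beta_k effectively zero

rng = np.random.default_rng(1)
mu, tau, c = -3.0, 0.5, 100.0                    # illustrative values
spike = np.array([sample_beta(0, mu, tau, c, rng) for _ in range(5000)])
slab = np.array([sample_beta(1, mu, tau, c, rng) for _ in range(5000)])
# spike draws concentrate near 0; slab draws scatter around mu
```

The factor $\sqrt{c}$ controls how sharply the spike concentrates relative to the slab, which is what drives the separation between ‘in’ and ‘out’ variables.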

3. Simulation examples

We now demonstrate the performance of the method in two scenarios. In the first scenario, we consider the performance of the model when the data are generated from the correct model in (2.1). Then in Section 3.2 we assess the method’s performance in a more challenging scenario.

The aim of the simulation studies is to assess the performance of our prior for outcome selection for different effect sizes. In both exercises, we set the number of outcomes to $K=20$ and use a moderate sample size $n=100$. We examine the performance of the proposed model with different numbers of relevant outcomes $K_{1}=5$, $10$, $15$.

For each setting we simulated 10 data sets, and for each data set we fit our model in (2.1) using the prior in (2.3) with $c=100$. We examine both versions of SSVS: our proposed model in which $\mu$ is a parameter, and the standard SSVS prior where $\mu=0$. The prior probability of $I_{k}$ is $p(I_{k}=1)=0.5$ for all outcomes $k$. For comparison, we also fit the model in which the $\beta_{k}$ are assigned a hierarchical prior $\beta_{k}\sim N(\mu,\tau^{2})$ with unknown $\mu$ and $\tau^{2}$. We also fit the model that uses only the correct relevant outcomes, assuming $\beta_{k}\sim N(\mu,\tau^{2})$; we call this the “subset model”. The result of the subset model is treated as the benchmark because it is the model that uses the correct set of outcomes.

The remaining parameters are assigned fairly flat priors. We use a normal $N(0,100)$ prior for $\mu$, and log-normal$(0,10)$ priors for $\sigma_{r}$ and $\sigma_{k}$, $k=1,\dots,K$. The priors for $\nu_{k}$ and $\gamma_{k}$ are normal $N(0,1000)$ and normal $N(0,100)$, respectively. In all models, $\tau$ is assigned a log-normal$(0,1)$ prior.

For each simulation we record the number of outcomes identified as relevant, the number of correctly identified outcomes, the number of false positives, and the estimate of $\mu$. An outcome is classified as “relevant” if the posterior mean of the corresponding $I_{k}$ is greater than 0.5. The results presented here are averages over the 10 simulations for each setting. In each simulation, the exogenous variable is $z_{i}\sim N(0,1)$ and $x_{i}|z_{i}\sim\bm{1}(z_{i}<0)\,N(0,0.5^{2})+(1-\bm{1}(z_{i}<0))\,N(1,1)$.

The SSVS models are fitted using the software JAGS [42] and the other models are implemented with STAN [43].

3.1. Simulation study 1: Performance under correct model specification

In the first simulation example, we generated data from the model (2.1):

y_{ik}=\nu_{k}+\beta_{k}x_{i}+\alpha_{i}+\gamma_{k}z_{i}+\epsilon_{ik};\quad\epsilon_{ik}\sim N(0,\sigma_{k}^{2}).

This is the same model that forms the basis of our proposed method in Section 2.1. The data were generated as follows. The intercepts were generated from $N(0,1)$ and the error variances $\sigma_{k}^{2}$ from $N(1.5,0.3)$. The coefficients $\beta_{k}$ corresponding to the relevant outcomes were generated from $N(\mu,0.01\mu^{2})$ to ensure that the $\beta_{k}$ are scattered closely enough around $\mu$. We used two different values for the common mean effect $\mu$: $\mu=-0.1$ and $\mu=-3$. The first setting represents a situation where the effect is weak and the data set small, so that the posterior standard deviation is large; in this case it is difficult for the model to decide whether a $\beta_{k}$ is 0 or not. The second setting mimics a situation in which the effect is stronger, and the selection method is expected to work better.
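The data-generating process above can be sketched as follows. Quantities the text does not specify, such as $\sigma_r$ and the distribution of the $\gamma_k$, are set to illustrative values here.

```python
import numpy as np

def simulate_study1(n=100, K=20, K1=10, mu=-3.0, seed=0):
    """Generate one data set as in Simulation Study 1 (model (2.1)):
    K1 of the K outcomes are affected, with beta_k ~ N(mu, 0.01*mu^2);
    the remaining coefficients are exactly 0."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=n)                              # exogenous covariate
    # exposure mixture depending on the sign of z, as in the text
    x = np.where(z < 0, rng.normal(0.0, 0.5, n), rng.normal(1.0, 1.0, n))
    nu = rng.normal(0.0, 1.0, K)                        # intercepts
    gamma = rng.normal(0.0, 1.0, K)                     # z coefficients (assumed)
    var = np.abs(rng.normal(1.5, 0.3, K))               # error variances ~ N(1.5, 0.3)
    beta = np.zeros(K)
    beta[:K1] = rng.normal(mu, 0.1 * abs(mu), K1)       # sd = sqrt(0.01 * mu^2)
    alpha = rng.normal(0.0, 1.0, n)                     # random effect (sigma_r assumed 1)
    eps = rng.normal(0.0, np.sqrt(var), size=(n, K))
    Y = nu + beta * x[:, None] + alpha[:, None] + gamma * z[:, None] + eps
    return Y, x, z, beta

Y, x, z, beta = simulate_study1()
```

Stacking `Y` with the trick of Section 2.1 then yields the long-format data set to which the SSVS model is fitted.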

The performance of the SSVS algorithm in detecting the affected outcomes for different $\mu$ and $K_{1}$ is presented in the top 6 rows of Table 1. The results show that the original and our modified SSVS algorithms have very similar performance, but neither does well in detecting the affected outcomes when $\mu$ is small. This is expected: because the overall effect $\mu$ is small, some of the relevant $\beta_{k}$ are close to 0. Because we used uninformative priors for $I_{k}$ here, the algorithm keeps switching between the states $I_{k}=0$ and $I_{k}=1$ for these outcomes. This is similar to the phenomenon observed by O’Hara and Sillanpää [26], where the posterior probabilities of $I_{k}$ are close to 0.5 and some outcomes are classified incorrectly by chance.

Table 2 shows the mean squared error in estimating the individual coefficients $\beta_{k}$:

(3.1) \text{MSE}=\frac{1}{K}\sum_{k=1}^{K}(\widehat{\beta}_{k}-\beta_{k})^{2},

where we take the estimate $\widehat{\beta}_{k}$ to be the posterior mean of $\beta_{k}|I_{k}=1$ if the posterior mean of $I_{k}$ is greater than 0.5; otherwise we set $\widehat{\beta}_{k}=0$. For both large and small values of $\mu$, the SSVS priors provide more accurate estimates of $\beta_{k}$ in terms of MSE, compared to the model without variable selection.
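In code, this estimation rule and the criterion (3.1) amount to the following (the helper name is hypothetical):

```python
import numpy as np

def mse_with_selection(beta_true, post_mean_beta, post_incl_prob):
    """MSE criterion (3.1): beta_k is estimated by its (conditional)
    posterior mean when the posterior inclusion probability exceeds 0.5,
    and set to 0 otherwise."""
    beta_hat = np.where(post_incl_prob > 0.5, post_mean_beta, 0.0)
    return np.mean((beta_hat - beta_true) ** 2)

# toy example: the second coefficient is (correctly) zeroed out
mse = mse_with_selection(np.array([-3.0, 0.0]),
                         np.array([-2.9, 0.4]),
                         np.array([0.9, 0.2]))
```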

Lastly, Table 3 shows the estimates of $\mu$ under the different priors, averaged over 10 simulations. Our modified SSVS provides estimates of $\mu$ that are closer to those from the subset model, especially for large $\mu$. However, when the effect is weak, the model is not able to estimate $\mu$ accurately because it fails to identify the correct set of sensitive outcomes.

This simulation example shows that SSVS priors can provide accurate estimates of the coefficients, and can accurately identify the affected outcomes and estimate the mean effect when $\mu$ is far from 0. However, more informative priors for $I_{k}$ and better tuning may be required to capture small effects accurately.

3.2. Simulation study 2: Performance under model misspecification

In this example, we attempt to fit the model to less ideal data, which have a more complicated correlation structure. The data were generated according to a confirmatory factor analysis as follows:

(3.2) \bm{y}_{i}=\bm{\nu}+\bm{\Lambda}\begin{bmatrix}\eta_{1,i}\\ \eta_{2,i}\end{bmatrix}+\bm{e}_{i},\quad\bm{e}_{i}\sim N(0,\Psi)
(3.3) \eta_{1,i}=\omega x_{i}+0.1z_{i}+u_{i},\quad u_{i}\sim N(0,1)
(3.4) \eta_{2,i}\sim N(0,1).

Here only the first factor $\eta_{1}$ is linked to the exposure $x$, while the second factor is not. Although we are not interested in fitting factor analysis models in this experiment, we find them a useful strategy for generating data with correlated outcomes of which only some are influenced by the exposure $x$. In this example $\bm{\Lambda}$ is a $K\times 2$ matrix with $\Lambda[k,1]=0$ for $k>K_{1}$, where $K_{1}$ is the number of relevant outcomes that load onto $\eta_{1,i}$. When generating the data, the elements of $\bm{\Lambda}$ corresponding to $\eta_{1}$ and $\eta_{2}$ were generated from $N(1,0.1)$ and $N(1,0.2)$, respectively. The intercepts were randomly sampled from $N(0,100)$. The covariance matrix $\Psi$ of the error term $\bm{e}_{i}$ is diagonal.
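A sketch of this factor-model data-generating process follows. Unit error variances in $\Psi$ and the exposure distribution from the start of Section 3 are assumptions of this sketch, not statements from the paper.

```python
import numpy as np

def simulate_study2(n=100, K=20, K1=10, omega=-3.0, seed=0):
    """Generate correlated outcomes from the two-factor model (3.2)-(3.4).
    Only the first K1 loadings on eta_1 are non-zero, so only those
    outcomes respond to the exposure x."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=n)
    x = np.where(z < 0, rng.normal(0.0, 0.5, n), rng.normal(1.0, 1.0, n))
    Lambda = np.zeros((K, 2))
    Lambda[:K1, 0] = rng.normal(1.0, 0.1, K1)   # loadings on eta_1
    Lambda[:, 1] = rng.normal(1.0, 0.2, K)      # loadings on eta_2
    nu = rng.normal(0.0, 10.0, K)               # intercepts ~ N(0, 100)
    eta1 = omega * x + 0.1 * z + rng.normal(size=n)   # (3.3)
    eta2 = rng.normal(size=n)                          # (3.4)
    E = rng.normal(size=(n, K))                 # diagonal Psi, unit scales assumed
    Y = nu + np.column_stack([eta1, eta2]) @ Lambda.T + E
    # implied true mean effect: omega times the average non-zero loading on eta_1
    mu_implied = omega * Lambda[:K1, 0].mean()
    return Y, x, z, mu_implied

Y, x, z, mu_implied = simulate_study2()
```

Because all $K$ outcomes share the factor $\eta_2$, they remain correlated even though only the first $K_1$ are linked to the exposure, which is what makes this scenario harder for the selection model.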

Here we again examine the ability of SSVS to correctly capture the relevant outcomes in various settings and compare it to other approaches. We generated data with $n=100$ individuals and a total number of outcomes $K=20$ from the two-factor model outlined at the beginning of this section. We test our outcome selection priors for $\omega=-0.5$ and $\omega=-3$ while varying the number of relevant outcomes $K_{1}=5,10,15$.

The last 6 rows of Table 1 record the performance of the different priors in identifying the relevant outcomes in the two scenarios. An outcome is classified as “relevant” by the fitted model if the posterior mean of the corresponding $I_{k}$ is greater than 0.5. Table 1 shows that, in this non-ideal situation, the proposed model may classify an affected outcome as irrelevant, or vice versa, when the effect is weak. On the other hand, when the effect is strong, the model correctly identifies the relevant outcomes most of the time. Our proposed prior, in which $\mu$ is treated as a parameter, appears to perform slightly better than the original SSVS with $\mu=0$.

The bottom part of Table 2 presents the MSE in estimating the coefficients, where we take $\beta_{k}=\bm{\Lambda}_{k,1}\times\omega$. As in the previous experiment, the SSVS priors give more accurate estimates of $\beta_{k}$ for $\omega=-3$, and have accuracy similar to that of the model without variable selection when the overall effect is small.

Table 4 shows the mean effect $\mu$ estimated by the different methods. Note that the $\omega$ used to generate the data is not equal to the true mean effect $\mu$; the implied true value of $\mu$ can be computed by multiplying $\omega$ by the average of the non-zero loadings corresponding to the first factor. We provide these values in the third column of Table 4. As in the previous experiment, our proposed model does not perform well in the first scenario, where $\omega=-0.5$. In both cases, inclusion of the irrelevant outcomes results in a smaller estimate of $\mu$. Not surprisingly, our modified SSVS prior gives results similar to those of the subset model, and their estimates are close to the true values when $\omega=-3$. On the other hand, none of the priors considered gives accurate estimates of the mean effect for small $\omega$. This is consistent with our observations in Section 3.1.

4. Application: Effect of prenatal alcohol exposure on children in Detroit, Michigan

In this section we return to the application that motivated our work. Specifically, we apply our model to select outcomes from a longitudinal cohort study conducted in Detroit, Michigan, whose aim was to investigate the long-term effects of PAE on a child’s cognitive and behavioral function. In this longitudinal study, the mothers were interviewed prenatally about their alcohol consumption during pregnancy, and the children were followed throughout childhood, many of them up until they were 20 years of age. At each follow-up visit, each child was assessed using a large number of cognitive tests. PAE is computed based on the mother’s average daily dose of absolute alcohol consumed (in ounces) during pregnancy (AA/day). Because the distribution of alcohol exposure is positively skewed with a minimum level of 0, we compute log(AA/day + 1) and use this as the measure of PAE in the analysis.

The Detroit study collected a large number of variables reflecting responses on various neuro-cognitive tests and behavioral outcomes assessed on the children throughout childhood. To illustrate our methodology, we focus on a set of 14 outcomes collected when the children were approximately 7 years of age. The first 8 outcomes come from the Achenbach Child Behavior Checklist (CBCL) and Teacher’s Report Form (TRF) at age 7 [44]. The CBCL is a checklist completed by the parents and designed to detect emotional and behavioral problems in children and adolescents, while the TRF represents the report of similar problems by the child’s principal teacher. These assessments include the child’s internalizing and externalizing behaviors, and social and attention problems. The remaining 6 outcomes correspond to the results of various cognitive and neuro-developmental tests related to IQ, assessed on the Wechsler Intelligence Scale for Children–III [45], academic achievement in reading and arithmetic, learning and memory abilities, and executive function. We have recently reported that, of these 14 outcomes, the first 8 are relatively less affected by PAE, whereas the last 6 are more sensitive to alcohol exposure [10].

4.1. Model and setup

We fit the model (2.1) with the prior in (2.3) to this data set. To adjust for confounders associated with both alcohol exposure and cognitive function, we include a propensity score $z$, which was computed beforehand. For details on the covariates included in the propensity score and how it was constructed, we refer readers to [46]. Before running the analysis, we rescaled all outcomes to mean 0 and variance 1.

We fit our proposed model under a few different settings. We start with an uninformative prior for the indicator $I_{k}$, setting $p(I_{k}=1)=0.5$ for all $k$. We use the prior in (2.3) with $c=100$ for $\beta_{k}$, where $\tau$ is assigned a log-normal$(0,1)$ prior, and a normal $N(0,1)$ prior for $\mu$. As a comparison, we also try the prior in (2.2) with $g_{1}=0.2^{2}$ fixed, as well as a shrinkage prior: following [33], we assign a $\text{Laplace}(0,1)$ prior to the $\beta_{k}$. We also attempted the horseshoe prior [35] for $\beta_{k}$, but the MCMC had convergence issues and hence those results are not presented here.

To assess how sensitive the results are to the prior probability $p(I_{k}=1)$, we also fit the model with a more informative set of prior probabilities,

p(I_{k}=1)=(0.5,0.5,0.2,0.5,0.8,0.8,0.2,0.8,0.5,0.5,0.8,0.8,0.8,0.5).

These prior probabilities were chosen using expert knowledge, setting the probabilities of the outcomes known to be relevant closer to 1. In practice, more informative priors may help the MCMC mix better.

We also fit the model (2.1) to only those outcomes chosen by our SSVS model, with a hierarchical prior $\beta_k \sim N(\mu,\tau^2)$. We call this model the “subset” model. As in the simulation studies, we compare the estimates of $\beta_k$ and $\mu$ from this reduced model with those from our approach.

For the remaining parameters, we choose relatively non-informative priors. We assign a normal $N(0,1)$ prior for $\mu$ and log-normal$(0,10)$ priors for $\sigma_r$ and $\sigma_k$, $k=1,\dots,14$. The priors for $\nu_k$ and $\gamma_k$ are normal $N(0,1000)$ and normal $N(0,100)$, respectively. The SSVS models are fitted using the software JAGS [42], running 3 chains, each with 200,000 burn-in iterations and 200,000 sampling iterations with a thinning interval of 10. The other models are implemented in STAN [43].
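For concreteness, the amount of posterior output these settings retain can be worked out directly. This is simple arithmetic on the reported sampler settings, not code from the actual analysis.

```python
# Sampler settings reported above: 3 chains, 200,000 burn-in iterations,
# 200,000 sampling iterations per chain, and a thinning interval of 10.
n_chains, n_burn, n_iter, thin = 3, 200_000, 200_000, 10

kept_per_chain = n_iter // thin      # draws retained per chain
total_kept = n_chains * kept_per_chain
print(kept_per_chain, total_kept)    # 20000 60000
```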

4.2. Results

The results are presented in Tables 5 and 6. Table 5 shows the posterior mean of the probability that $I_k=1$. For the Laplace shrinkage prior, we report whether 0 lies outside the 95% credible interval of the corresponding parameter. All SSVS models choose the same set of relevant outcomes across the different settings. The informative prior on $p(I_k)$ results in different posterior means of $I_k$; however, it does not affect the inference on the outcomes' relevance for most outcome variables. The only exception is CBCL Externalizing at age 7, whose posterior probability of being affected is slightly below 0.5 (0.44 and 0.49) under the non-informative prior and slightly above 0.5 (0.53) under the informative prior. These results are also similar to those of the Laplace shrinkage prior; however, that prior shrinks more of the $\beta_k$ towards 0 than the SSVS priors do.
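Reading Table 5 amounts to thresholding posterior inclusion probabilities at 0.5. A minimal sketch with hypothetical indicator draws (the array values below are invented for illustration):

```python
import numpy as np

# Hypothetical MCMC draws of the indicators I_k
# (rows = iterations, columns = outcomes).
draws = np.array([
    [1, 0, 1],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 1],
])

# Posterior inclusion probability: the posterior mean of each I_k.
pip = draws.mean(axis=0)

# Declare an outcome "relevant" when its posterior mean exceeds 0.5,
# mirroring how Table 5 is read.
relevant = pip > 0.5
print(pip, relevant)  # [0.75 0.25 0.75] [ True False  True]
```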

Table 6 presents the estimates of $\beta_k$ and the overall effect $\mu$. The SSVS model with informative $p(I_k=1)$ and the subset model suggest a strong negative effect of PAE on the cognitive outcomes. These findings are consistent with those in [10]. The estimate of $\tau$ is similar for both models (0.19 vs 0.17). On the other hand, the SSVS models with the non-informative prior suggest a weaker effect (-0.33 and -0.32 versus -0.41). The non-informative prior also results in a larger estimate of $\tau$ (0.22 and 0.24 versus 0.19).

Table 6 shows that all SSVS models produce smaller estimates of the coefficients of the affected outcomes, and hence of $\mu$, compared with the subset model. This is consistent with our observations in the simulation studies in Section 3. However, as shown in Tables 5 and 6, our proposed model produces estimates of $\beta_k$ very similar to those under the Laplace prior in all settings. Table 6 also indicates that the informative prior for the indicator $I_k$ produces estimates of $\beta_k$ and $\mu$ that are closer to those of the subset model, which includes only the affected outcomes.

5. Conclusion

In this paper we propose a statistical method for identifying, from a large number of observed variables, the outcomes that are directly affected by an exposure variable. Our method extends standard Bayesian variable selection models to multiple-outcomes data and also provides an estimate of the overall effect of the exposure variable. We demonstrate the performance and limitations of our method in two simulation exercises and a real-data application.

Our application modelling the effect of PAE on cognition identified a set of neurodevelopmental tests that are significantly affected by fetal alcohol exposure. In addition, the model indicates a negative overall effect of PAE on the sensitive outcomes. A limitation of the current model is that we use only an individual random intercept to capture the correlations among the outcomes. This approach may not be ideal, and we may consider a more sophisticated correlation structure in future work.

Finally, the proposed framework is shown to be effective in identifying sensitive outcomes in various scenarios. However, it may underestimate the outcome-specific effect sizes and the mean effect when the effects are mild. This is a common issue with variable selection priors; we expect that the results can be improved by using more informative priors for the indicators $I_k$.

Acknowledgments

This research was funded by grants from the National Institutes of Health/ National Institute on Alcohol Abuse and Alcoholism (NIH/NIAAA; R01-AA025905) and the Lycaki-Young Fund from the State of Michigan to Sandra W. Jacobson and Joseph L. Jacobson. Much of the work was accomplished while Khue-Dung Dang was a postdoctoral research fellow at UTS, supported by the Australian Research Council Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS) grant CE140100049. Richard J. Cook was supported by the Natural Sciences and Engineering Research Council of Canada through grant RGPIN-2017-04207.

Data Availability Statement

The original data used in the application are available on request due to privacy and ethical restrictions. Standardized, de-identified data to replicate the results in the paper will be made available.

References

  • [1] Hoyme HE, May PA, Kalberg WO, et al. A practical clinical approach to diagnosis of fetal alcohol spectrum disorders: clarification of the 1996 institute of medicine criteria. Pediatrics 2005; 115(1): 39–47.
  • [2] Hoyme HE, Kalberg WO, Elliott AJ, et al. Updated clinical guidelines for diagnosing fetal alcohol spectrum disorders. Pediatrics 2016; 138(2).
  • [3] Carter RC, Jacobson JL, Molteno CD, Dodge NC, Meintjes EM, Jacobson SW. Fetal alcohol growth restriction and cognitive impairment. Pediatrics 2016; 138(2).
  • [4] Jacobson SW, Jacobson JL, Sokol RJ, Chiodo LM, Corobana R. Maternal age, alcohol abuse history, and quality of parenting as moderators of the effects of prenatal alcohol exposure on 7.5-year intellectual function. Alcoholism: Clinical and Experimental Research 2004; 28(11): 1732–1745.
  • [5] Jacobson SW, Stanton ME, Molteno CD, et al. Impaired eyeblink conditioning in children with fetal alcohol syndrome. Alcoholism: Clinical and Experimental Research 2008; 32(2): 365–372.
  • [6] Mattson SN, Bernes GA, Doyle LR. Fetal alcohol spectrum disorders: A review of the neurobehavioral deficits associated with prenatal alcohol exposure. Alcoholism: Clinical and Experimental Research 2019; 43(6): 1046–1062.
  • [7] Jacobson JL, Jacobson SW, Sokol RJ, Martier SS, Ager JW, Kaplan-Estrin MG. Teratogenic effects of alcohol on infant development. Alcoholism: Clinical and Experimental Research 1993; 17(1): 174–183.
  • [8] Jacobson JL, Jacobson SW. Drinking moderately and pregnancy: effects on child development. Alcohol research & health 1999; 23(1): 25-30.
  • [9] Jacobson JL, Jacobson SW. Effects of prenatal alcohol exposure on child development. Alcohol Research & Health 2002; 26(4): 282-286.
  • [10] Jacobson JL, Akkaya-Hocagil T, Ryan LM, et al. Effects of prenatal alcohol exposure on cognitive and behavioral development: Findings from a hierarchical meta-analysis of data from six prospective longitudinal US cohorts. Alcoholism: Clinical and Experimental Research 2021.
  • [11] Lefkopoulou M, Ryan L. Global tests for multiple binary outcomes. Biometrics 1993: 975–988.
  • [12] Budtz-Jørgensen E, Keiding N, Grandjean P, Weihe P. Estimation of health effects of prenatal methylmercury exposure using structural equation models. Environmental Health 2002; 1(1): 1–22.
  • [13] Dunson DB. Bayesian latent variable models for clustered mixed outcomes. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 2000; 62(2): 355–366.
  • [14] Sánchez BN, Budtz-Jørgensen E, Ryan LM, Hu H. Structural equation models: a review with applications to environmental epidemiology. Journal of the American Statistical Association 2005; 100(472): 1443–1455.
  • [15] Sammel MD, Ryan LM. Effects of covariance misspecification in a latent variable model for multiple outcomes. Statistica Sinica 2002: 1207–1222.
  • [16] Berkey C, Hoaglin D, Antczak-Bouckoms A, Mosteller F, Colditz G. Meta-analysis of multiple outcomes by regression with random effects. Statistics in Medicine 1998; 17(22): 2537–2550.
  • [17] Van den Noortgate W, López-López JA, Marín-Martínez F, Sánchez-Meca J. Meta-analysis of multiple outcomes: A multilevel approach. Behavior Research Methods 2015; 47(4): 1274–1294.
  • [18] Ryan L. Combining data from multiple sources, with applications to environmental risk assessment. Statistics in Medicine 2008; 27(5): 698–710.
  • [19] Liang KY, Zeger SL. Longitudinal data analysis using generalized linear models. Biometrika 1986; 73(1): 13–22.
  • [20] Sammel M, Lin X, Ryan L. Multivariate linear mixed models for multiple outcomes. Statistics in Medicine 1999; 18(17-18): 2479–2492.
  • [21] Thurston SW, Ruppert D, Davidson PW. Bayesian models for multiple outcomes nested in domains. Biometrics 2009; 65(4): 1078–1086.
  • [22] Turk DC, Dworkin RH, McDermott MP, et al. Analyzing multiple endpoints in clinical trials of pain treatments: IMMPACT recommendations. Pain 2008; 139(3): 485–493.
  • [23] Akaike H. Information theory and an extension of the maximum likelihood principle. In: Selected papers of Hirotugu Akaike Springer. 1998 (pp. 199–213).
  • [24] Schwarz G. Estimating the dimension of a model. The Annals of Statistics 1978: 461–464.
  • [25] Hastie T, Tibshirani R, Tibshirani R. Best Subset, Forward Stepwise or Lasso? Analysis and recommendations based on extensive comparisons. Statistical Science 2020; 35(4): 579–592.
  • [26] O’Hara RB, Sillanpää MJ. A review of Bayesian variable selection methods: what, how and which. Bayesian Analysis 2009; 4(1): 85–117.
  • [27] Van Erp S, Oberski DL, Mulder J. Shrinkage priors for Bayesian penalized regression. Journal of Mathematical Psychology 2019; 89: 31–50.
  • [28] Lefkopoulou M, Moore D, Ryan L. The analysis of multiple correlated binary outcomes: Application to rodent teratology experiments. Journal of the American Statistical Association 1989; 84(407): 810–815.
  • [29] Bondell HD, Krishna A, Ghosh SK. Joint variable selection for fixed and random effects in linear mixed-effects models. Biometrics 2010; 66(4): 1069–1077.
  • [30] Fan Y, Li R. Variable selection in linear mixed effects models. Annals of statistics 2012; 40(4): 2043.
  • [31] Yang M, Wang M, Dong G. Bayesian variable selection for mixed effects model with shrinkage prior. Computational Statistics 2020; 35(1): 227–243.
  • [32] George EI, McCulloch RE. Variable selection via Gibbs sampling. Journal of the American Statistical Association 1993; 88(423): 881–889.
  • [33] Figueiredo MA. Adaptive sparseness for supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence 2003; 25(9): 1150–1159.
  • [34] Park T, Casella G. The Bayesian Lasso. Journal of the American Statistical Association 2008; 103(482): 681–686.
  • [35] Carvalho CM, Polson NG, Scott JG. Handling Sparsity via the Horseshoe. In: Proceedings of the 12th International Conference on Artificial Intelligence and Statistics PMLR; 2009: 73–80.
  • [36] Kass RE, Raftery AE. Bayes factors. Journal of the American Statistical Association 1995; 90(430): 773–795.
  • [37] Spiegelhalter DJ, Best NG, Carlin BP, Linde AVD. Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 2002; 64(4): 583–639.
  • [38] Watanabe S. Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory. Journal of Machine Learning Research 2010; 11(12).
  • [39] Kuo L, Mallick B. Variable selection for regression models. Sankhyā: The Indian Journal of Statistics, Series B 1998: 65–81.
  • [40] Dellaportas P, Forster JJ, Ntzoufras I. On Bayesian model and variable selection using MCMC. Statistics and Computing 2002; 12(1): 27–36.
  • [41] Meuwissen TH, Goddard ME. Mapping multiple QTL using linkage disequilibrium and linkage analysis information and multitrait data. Genetics Selection Evolution 2004; 36(3): 261–279.
  • [42] Plummer M. JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. In: Proceedings of the 3rd International Workshop on Distributed Statistical Computing ; 2003: 1–10.
  • [43] Carpenter B, Gelman A, Hoffman MD, et al. STAN: A probabilistic programming language. Journal of Statistical Software 2017; 76(1).
  • [44] Achenbach TM. Manual for the Child Behavior Checklist/4-18 and 1991 profile. University of Vermont, Department of Psychiatry 1991.
  • [45] Wechsler D. Manual for the Wechsler Intelligence Scale for Children. Psychological Corporation. 3 ed. 1991.
  • [46] Akkaya Hocagil T, Cook RJ, Jacobson SW, Jacobson JL, Ryan LM. Propensity score analysis for a semi-continuous exposure variable: a study of gestational alcohol exposure and childhood cognition. Journal of the Royal Statistical Society: Series A (Statistics in Society) 2021; 184(4): 1390–1413.
Table 1. The average number of outcomes correctly identified as relevant and incorrectly chosen as relevant in different settings, with data generated from (2.1) (Study 1) and the SEM (Study 2). The table shows the results from the original SSVS prior with $\mu=0$ and our proposed prior where $\mu$ is unknown. $K_1$ is the true number of relevant outcomes. The fourth and fifth columns show the number of outcomes that each model detects as relevant. The next two columns show the number of relevant outcomes correctly identified by each model. The last two columns show the number of irrelevant outcomes that were detected as relevant. All numbers are averaged over 10 simulations.
Study $\mu$ $K_1$ No. identified as relevant No. correctly identified No. incorrectly identified
$\mu$ unknown $\mu=0$ $\mu$ unknown $\mu=0$ $\mu$ unknown $\mu=0$
1 Small 5 8.5 5.5 1.9 1.3 6.6 4.2
10 6.6 5.9 3.2 3.3 3.4 2.6
15 6.6 5.8 5.4 4.2 1.2 1.6
Large 5 5 5 5 5 0 0
10 10 10 10 10 0 0
15 15 15 15 15 0 0
2 Small 5 7.9 7.1 3.6 3.2 4.3 3.9
10 9.3 8 8.2 5.4 1.1 2.6
15 10.1 7.3 9.6 4.6 0.5 2.7
Large 5 5 5 5 5 0 0
10 10 10 10 10 0 0
15 15 15 15 15 0 0
Table 2. The mean squared errors of the models with different effect sizes in simulated data studies 1 and 2. The results are averaged over 10 simulations.
Study $K_1$ SSVS - $\mu$ unknown SSVS - $\mu=0$ No variable selection
Small $\mu$ Large $\mu$ Small $\mu$ Large $\mu$ Small $\mu$ Large $\mu$
1 5 0.008 0.008 0.002 0.008 0.027 0.058
10 0.009 0.012 0.006 0.018 0.018 0.044
15 0.009 0.021 0.008 0.030 0.014 0.043
2 5 0.049 0.023 0.047 0.020 0.048 0.060
10 0.049 0.031 0.091 0.046 0.043 0.068
15 0.088 0.035 0.179 0.053 0.056 0.066
Table 3. Estimates of $\mu$ in different settings of the simulated data study in Section 3.1. The table shows the posterior mean of $\mu$, averaged over 10 data sets, for different $\mu$ and $K_1$. The standard errors are in brackets. The first column is the true $\mu$. The subset model is the model that uses only the correct set of relevant outcomes.
$\mu$ $K_1$ SSVS No selection Subset model
-0.1 5 -0.021 -0.003 -0.073
(0.082) (0.161) (0.172)
10 -0.036 -0.039 -0.082
(0.068) (0.116) (0.116)
15 -0.061 -0.114 -0.142
(0.085) (0.094) (0.107)
-3 5 -2.906 -0.703 -2.903
(0.070) (0.175) (0.170)
10 -3.088 -1.520 -3.081
(0.047) (0.124) (0.122)
15 -2.948 -2.237 -2.962
(0.057) (0.110) (0.110)
Table 4. Estimates of $\mu$ in different settings of the simulated data study in Section 3.2. The table shows the posterior mean of $\mu$, averaged over 10 data sets, for different $\omega$ and $K_1$. The standard errors are in brackets. The third column is the mean of the true loadings times $\omega$, which is considered the true $\mu$. The subset model is the model that uses only the correct set of relevant outcomes.
$\omega$ $K_1$ Mean($\bm{\Lambda}_{1:K_1,1}\times\omega$) SSVS No selection Subset model
-0.5 5 -0.498 -0.178 -0.098 -0.470
(0.217) (0.136) (0.144)
10 -0.5145 -0.417 -0.279 -0.548
(0.228) (0.110) (0.165)
15 -0.509 -0.268 -0.296 -0.399
(0.123) (0.134) (0.138)
-3 5 -3.132 -3.231 -0.835 -3.263
(0.224) (0.089) (0.201)
10 -3.009 -2.971 -1.471 -2.965
(0.187) (0.126) (0.200)
15 -3.144 -3.160 -2.318 -3.105
(0.144) (0.154) (0.180)
Table 5. Summary of $I_k$ for the different models for the Detroit data. For SSVS, the table shows the posterior mean of $I_k$. The table highlights the variables selected by the SSVS prior. For the other method, we report whether 0 lies outside the 95% credible interval of the corresponding $\beta_k$. The CBCL and TRF tests come from [44]; the IQ tests are based on the Wechsler Intelligence Scale for Children-III [45].
Laplace prior $p(I_k=1)=0.5$ Informative $p(I_k)$
$g_1=\tau^2/100$ $g_1=0.2^2$ $g_1=0.2^2$
CBCL Social Problem 0 0.285 0.378 0.370
CBCL Attention Problem 0 0.509 0.535 0.580
CBCL Internalizing 0 0.193 0.240 0.056
CBCL Externalizing 0 0.438 0.493 0.532
TRF Social Problem 1 0.860 0.741 0.948
TRF Attention Problem 1 0.871 0.742 0.948
TRF Internalizing 0 0.204 0.259 0.065
TRF Externalizing 1 0.861 0.747 0.949
Verbal IQ 0 0.293 0.391 0.386
Performance IQ 0 0.340 0.431 0.444
Freedom from distractibility 1 0.968 0.813 0.970
Verbal fluency 0 0.671 0.631 0.900
Digit span backwards 1 0.875 0.738 0.948
Story memory 0 0.263 0.353 0.337
Table 6. Posterior mean of $\beta_k$ and $\mu$ for the Detroit data. For SSVS, we report the mean of $\beta_k \mid I_k=1$ if the posterior mean of $I_k$ is greater than 0.5, and 0 otherwise.
Laplace prior $p(I_k=1)=0.5$ Informative $p(I_k)$ Subset
$g_1=\tau^2/100$ $g_1=0.2^2$ $g_1=0.2^2$
CBCL Social Problem -0.089 0.000 0.000 0.000
CBCL Attention Problem -0.226 -0.274 -0.275 -0.346 -0.493
CBCL Internalizing 0.086 0.000 0.000 0.000
CBCL Externalizing -0.194 0.000 0.000 -0.330
TRF Social Problem -0.482 -0.406 -0.419 -0.464 -0.629
TRF Attention Problem -0.459 -0.396 -0.411 -0.456 -0.621
TRF Internalizing 0.065 0.000 0.000 0.000
TRF Externalizing -0.498 -0.409 -0.423 -0.469 -0.636
Verbal IQ -0.111 0.000 0.000 0.000
Performance IQ -0.137 0.000 0.000 0.000
Freedom from distractibility -0.560 -0.447 -0.471 -0.509 -0.640
Verbal fluency -0.325 -0.332 -0.336 -0.393 -0.543
Digit span backwards -0.452 -0.391 -0.405 -0.452 -0.591
Story memory -0.057 0.000 0.000 0.000
$\mu$ -0.326 -0.312 -0.405 -0.594
$\tau$ 0.216 0.241 0.187 0.171