
Modeling the Complex Dynamics and Changing Correlations of Epileptic Events

Drausin F. Wulsin, Emily B. Fox, Brian Litt
Department of Bioengineering, University of Pennsylvania, Philadelphia, PA; Department of Neurology, University of Pennsylvania, Philadelphia, PA; Department of Statistics, University of Washington, Seattle, WA
Abstract

Patients with epilepsy can manifest short, sub-clinical epileptic “bursts” in addition to full-blown clinical seizures. We believe the relationship between these two classes of events—something not previously studied quantitatively—could yield important insights into the nature and intrinsic dynamics of seizures. A goal of our work is to parse these complex epileptic events into distinct dynamic regimes. A challenge posed by the intracranial EEG (iEEG) data we study is the fact that the number and placement of electrodes can vary between patients. We develop a Bayesian nonparametric Markov switching process that allows for (i) shared dynamic regimes between a variable number of channels, (ii) asynchronous regime-switching, and (iii) an unknown dictionary of dynamic regimes. We encode a sparse and changing set of dependencies between the channels using a Markov-switching Gaussian graphical model for the innovations process driving the channel dynamics and demonstrate the importance of this model in parsing and out-of-sample predictions of iEEG data. We show that our model produces intuitive state assignments that can help automate clinical analysis of seizures and enable the comparison of sub-clinical bursts and full clinical seizures.

keywords:
Bayesian nonparametric, EEG, factorial hidden Markov model, graphical model, time series
journal: Artificial Intelligence Journal

1 Introduction

Despite over three decades of research, we still have very little idea of what defines a seizure. This ignorance stems both from the complexity of epilepsy as a disease and from a paucity of quantitative tools that are flexible enough to describe epileptic events but restrictive enough to distill intelligible information from them. Much of the recent machine learning work in electroencephalogram (EEG) analysis has focused on seizure prediction [cf. 1, 2], an important area of study but one that generally does not parse the EEG directly, as a human EEG reader would. Such parsings are central for diagnosis and for relating various types of abnormal activity. Recent evidence shows that the range of epileptic events extends beyond clinical seizures to include shorter, sub-clinical “bursts” lasting fewer than 10 seconds [3]. What is the relationship between these shorter bursts and the longer seizures? In this work, we demonstrate that machine learning techniques can have substantial impact in this domain by unpacking how seizures begin, progress, and end.

In particular, we build a Bayesian nonparametric time series model to analyze intracranial EEG (iEEG) data. We take a modeling approach similar to a physician’s in analyzing EEG events: look directly at the evolution of the raw EEG voltage traces. EEG signals exhibit nonstationary behavior during a variety of neurological events, and time-varying autoregressive (AR) processes have been proposed to model single channel data [4]. Here we aim to parse the recordings into interpretable regions of activity and thus propose to use autoregressive hidden Markov models (AR-HMMs) to define locally stationary processes. In the presence of multiple channels of simultaneous recordings, as is almost always the case in EEG, we wish to share AR states between the channels while allowing for asynchronous switches. The recent beta process (BP) AR-HMM of Fox et al. [5] provides a flexible model of such dynamics: a shared library of infinitely many possible AR states is defined and each time series uses a finite subset of the states. The process encourages sharing of AR states, while allowing for time-series-specific variability.

Conditioned on the selected AR dynamics, the BP-AR-HMM assumes independence between time series. In the case of iEEG, this assumption is almost assuredly false. Figure 1 shows an example of a 4x8 intracranial electrode grid and the residual EEG traces of 16 channels after subtracting the predicted value in each channel using a conventional BP-AR-HMM. While the error term in some channels remains low throughout the recording, other channels—especially those spatially adjacent in the electrode grid—have very correlated error traces. We propose to capture correlations between channels by modeling a multivariate innovations process that drives independently evolving channel dynamics. We demonstrate the importance of accounting for this error trace in predicting heldout seizure recordings, making this a crucial modeling step before undertaking large-scale EEG analysis.

Figure 1: An iEEG grid electrode and (bottom left) corresponding graphical model. (middle) Residual EEG values after subtracting predictions from a BP-AR-HMM assuming independent channels. All EEG scale bars indicate 1 mV vertically and 1 second horizontally.

To aid in scaling to large electrode grids, we exploit a sparse dependency structure for the multivariate innovations process. In particular, we assume a graph with known vertex structure that encodes the conditional independencies of the innovations. The graph structure is based on the spatial adjacencies of the iEEG channels, with a few exceptions to make the graphical model fully decomposable. Figure 1 (bottom left) shows an example of such a graphical model over the channels. Although the relative position of channels in the electrode grid is clear, determining the precise 3D location of each channel is extremely difficult. Furthermore, unlike in scalp EEG or magnetoencephalography (MEG), which have generally consistent channel positions from patient to patient, iEEG channels vary in number and position for each patient. These issues impede the use of alternative spatial and multivariate time series modeling techniques.

It is well-known that the correlations between EEG channels usually vary during the beginning, middle, and end of a seizure [6, 7]. Prado et al. [8] employ a mixture-of-expert vector autoregressive (VAR) model to describe the different dynamics present in seven channels of scalp EEG. We take a similar approach by allowing for a Markov evolution to an underlying innovations covariance state.

An alternative modeling approach is to treat the channel recordings as a single multivariate time series, perhaps using a switching VAR process as in Prado et al. [8]. However, such an approach (i) assumes synchronous switches in dynamics between channels, (ii) scales poorly with the number of channels, and (iii) requires an identical number of channels between patients to share dynamics between event recordings.

Other work has explored nonparametric modeling of multiple time series. The infinite factorial HMM of Van Gael et al. [9] considers an infinite collection of chains each with a binary state space. The infinite hierarchical HMM [10] also involves infinitely many chains with finite state spaces, but with constrained transitions between the chains in a top down fashion. The infinite DBN of Doshi-Velez et al. [11] considers more general connection structures and arbitrary state spaces. Alternatively, the graph-coupled HMM of Dong et al. [12] allows graph-structured dependencies in the underlying states of some NN Markov chains. Here, we consider a finite set of chains with infinite state spaces that evolve independently. The factorial structure combines the chain-specific AR dynamic states and the graph-structured innovations to generate the multivariate observations with sparse dependencies.

Expanding upon previous work [13], we show that our model for correlated time series has better out-of-sample predictions of iEEG data than standard AR- and BP-AR-HMMs and demonstrate the utility of our model in comparing short, sub-clinical epileptic bursts with longer, clinical seizures. Our inferred parsings of iEEG data concur with key features hand-annotated by clinicians but provide additional insight beyond what can be extracted from a visual read of the data. The importance of our methodology is multifold: (i) the output is interpretable to a practitioner and (ii) the parsings can be used to relate seizure types both within and between patients even with different electrode setups. Enabling such broad-scale automatic analysis, and identifying dynamics unique to sub-clinical seizures, can lead to new insights in epilepsy treatments.

Although we are motivated by the study of seizures from iEEG data, our work is much more broadly applicable in time series analysis. For example, perhaps one has a collection of stocks and wants to model shared dynamics between them while capturing changing correlations. The BP-AR-HMM was applied to the analysis of a collection of motion capture data assuming independence between individuals; our modeling extension could account for coordinated motion with a sparse dependency structure between individuals. Regardless, we find the impact in the neuroscience domain to be quite significant.

2 A Structured Bayesian Nonparametric Factorial AR-HMM

2.1 Dynamic Model

Consider an event with NN univariate time series of length TT. This event could be a seizure, where each time series is one of the iEEG voltage-recording channels. For clarity of exposition, we refer to the individual univariate time series as channels and the resulting NN-dimensional multivariate time series (stacking up the channel series) as the event. We denote the scalar value for each channel ii at each (discrete) time point tt as yt(i)y^{(i)}_{t} and model it using an rr-order AR-HMM [5]. That is, each channel is modeled via Markov switches between a set of AR dynamics. Denoting the latent state at time tt by zt(i)z_{t}^{(i)}, we have:

\begin{aligned} z_{t}^{(i)} &\sim \boldsymbol{\pi}^{(i)}_{z_{t-1}^{(i)}}, \\ y_{t}^{(i)} &= \sum_{j=1}^{r} a_{z_{t}^{(i)},j}\, y_{t-j}^{(i)} + \epsilon_{t}^{(i)} = \boldsymbol{a}^{\mathrm{T}}_{z_{t}^{(i)}}\, \widetilde{\boldsymbol{y}}_{t}^{(i)} + \epsilon_{t}^{(i)}. \end{aligned} \qquad (1)

Here, 𝐚k=(ak,1,,ak,r)T\boldsymbol{\rm a}_{k}=\left(a_{k,1},\dots,a_{k,r}\right)^{\rm T} are the AR parameters for state kk and πk\pi_{k} is the transition distribution from state kk to any other state. We also introduce the notation 𝐲~t(i)\widetilde{\boldsymbol{\rm y}}_{t}^{(i)} as the vector of rr previous observations (yt1(i),,ytr(i))T\left(y_{t-1}^{(i)},\dots,y_{t-r}^{(i)}\right)^{\rm T}.

In contrast to a vector AR (VAR) HMM specification of the event, our modeling of channel dynamics separately as in Eq. (1) allows for (i) asynchronous switches and (ii) sharing of dynamic parameters between recordings with a potentially different number of channels. However, a key aspect of our data is the fact that the channels are correlated. Likewise, these correlations change as the patient progresses through various seizure event states (e.g., “resting”, “onset”, “offset”, …). That is, the channels may display one innovation covariance before a seizure (e.g., relatively independent and low-magnitude) but quite a different covariance during a seizure (e.g., correlated, higher magnitude). To capture this, we jointly model the innovations ϵt=(ϵt(1),,ϵt(N))T\boldsymbol{\rm\epsilon}_{t}=\left(\epsilon_{t}^{(1)},\dots,\epsilon_{t}^{(N)}\right)^{\rm T} driving the AR-HMMs of Eq. (1) as

\begin{aligned} Z_{t} &\sim \boldsymbol{\phi}_{Z_{t-1}}, \\ \boldsymbol{\epsilon}_{t} &\sim \mathcal{N}(\boldsymbol{0}, \Delta_{Z_{t}}), \end{aligned} \qquad (2)

where ZtZ_{t} denotes a Markov-evolving event state distinct from the individual channel states {zt(i)}\{z_{t}^{(i)}\}, ϕl\boldsymbol{\rm\phi}_{l} the transition distributions, and Δl\Delta_{l} the event-state-specific channel covariance. That is, each Δl\Delta_{l} describes a different set of channel relationships.

For compactness, we sometimes alternately write

\boldsymbol{y}_{t} = \boldsymbol{A}_{\boldsymbol{z}_{t}} \widetilde{\boldsymbol{Y}}_{t} + \boldsymbol{\epsilon}_{t}(Z_{t}), \qquad (3)

where 𝐲t\boldsymbol{\rm y}_{t} is the concatenation of NN channel observations at time tt and 𝐳t\boldsymbol{\rm z}_{t} is the vector of concatenated channel states. The overall dynamic model is represented graphically in Figure 2.
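To make the generative structure of Eqs. (1)–(3) concrete, the following minimal Python sketch simulates an event from fixed parameters. All sizes, AR coefficients, transition matrices, and covariances below are hypothetical illustrations rather than values from our experiments, and the beta process feature constraints of Section 2.2 are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: channels, time points, AR order, channel states, event states.
N, T, r = 3, 500, 1
K, L = 2, 2

# Per-channel AR coefficients a_k and a shared sticky channel transition matrix.
a = np.array([[0.9], [-0.5]])                 # a[k] has length r
pi = np.full((K, K), 0.1) + 0.8 * np.eye(K)
pi /= pi.sum(axis=1, keepdims=True)

# Event-state innovation covariances Delta_l and sticky event transitions phi.
Delta = np.stack([0.10 * np.eye(N),
                  0.02 * np.eye(N) + 0.08 * np.ones((N, N))])
phi = np.full((L, L), 0.02) + 0.96 * np.eye(L)
phi /= phi.sum(axis=1, keepdims=True)

z = np.zeros((T, N), dtype=int)   # channel states z_t^{(i)}
Z = np.zeros(T, dtype=int)        # event states Z_t
y = np.zeros((T, N))

for t in range(1, T):
    Z[t] = rng.choice(L, p=phi[Z[t - 1]])
    eps = rng.multivariate_normal(np.zeros(N), Delta[Z[t]])   # Eq. (2)
    for i in range(N):
        z[t, i] = rng.choice(K, p=pi[z[t - 1, i]])            # per-channel Markov chain
        y_prev = y[t - r:t, i][::-1]                          # r previous observations
        y[t, i] = a[z[t, i]] @ y_prev + eps[i]                # Eq. (1) with correlated eps
```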

Figure 2: Graphical model of our factorial AR-HMM. The NN channel states zt(i)z_{t}^{(i)} evolve according to independent Markov processes (transition distributions omitted for simplicity) and index the AR dynamic parameters 𝐚k\boldsymbol{\rm a}_{k} used in generating observation yt(i)y_{t}^{(i)}. The Markov-evolving event state ZtZ_{t} indexes the graph-structured covariance Δl\Delta_{l} of the correlated AR innovations resulting in multivariate observations 𝐲t=[yt(1),,yt(N)]T\boldsymbol{\rm y}_{t}=[y_{t}^{(1)},\ldots,y_{t}^{(N)}]^{\rm T} sharing the same conditional independencies.

Scaling to large electrode grids

To scale our model to a large number of channels, we consider a Gaussian graphical model (GGM) for ϵt\boldsymbol{\rm\epsilon}_{t} capturing a sparse dependency structure amongst the channels. Let G=(V,E)G=(V,E) be an undirected graph with VV the set of channel nodes ii and EE the set of edges with (i,j)E(i,j)\in E if ii and jj are connected by an edge in the graph. Then, [Δl1]ij=0\left[\Delta_{l}^{-1}\right]_{ij}=0 for all (i,j)E(i,j)\not\in E, implying ϵt(i)\epsilon_{t}^{(i)} is conditionally independent of ϵt(j)\epsilon_{t}^{(j)} given ϵt(k)\epsilon_{t}^{(k)} for all channels ki,jk\neq i,j. In our dynamic model of Eq. (1), statements of conditional independence of ϵt\boldsymbol{\rm\epsilon}_{t} translate directly to statements of the observations 𝐲t\boldsymbol{\rm y}_{t}.

In our application, we choose GG based on the spatial adjacencies of channels in the electrode grid, as depicted in Figure 1 (bottom left). In addition to encoding the spatial proximities of iEEG electrodes, the graphical model also yields a sparse precision matrix Δl1\Delta_{l}^{-1}, allowing for more efficient scaling to the large number of channels commonly present in iEEG. These computational efficiencies are made clear in Section 3.
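As a concrete sketch, the adjacency structure for a rows-by-columns grid can be built as below, connecting only horizontally and vertically neighboring channels; the few additional edges we include in practice to make the graph decomposable are omitted here, and the function name and sizes are illustrative.

```python
import numpy as np

def grid_adjacency(rows, cols):
    """Boolean adjacency matrix for an electrode grid: an edge joins
    horizontally or vertically neighboring channels."""
    n = rows * cols
    A = np.zeros((n, n), dtype=bool)
    for rr in range(rows):
        for cc in range(cols):
            i = rr * cols + cc
            if cc + 1 < cols:              # right neighbor
                A[i, i + 1] = A[i + 1, i] = True
            if rr + 1 < rows:              # lower neighbor
                A[i, i + cols] = A[i + cols, i] = True
    return A

A = grid_adjacency(4, 8)                   # a 4x8 grid as in Figure 1
# GGM constraint: [Delta_l^{-1}]_{ij} = 0 whenever (i, j) is not an edge of G.
nonedges = ~A & ~np.eye(A.shape[0], dtype=bool)
print(A.sum() // 2, "edges;", nonedges.sum() // 2, "precision entries constrained to zero")
```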

Interpretation as a sparse factorial HMM

Recall that our formulation involves N+1N+1 independently evolving Markov chains: NN chains for the channel states zt(i)z_{t}^{(i)} plus one for the event state sequence ZtZ_{t}. As indicated by the observation model of Eq. (3), the N+1N+1 Markov chains jointly generate our observation vector 𝐲t\boldsymbol{\rm y}_{t} leading to an interpretation of our formulation as a factorial HMM [14]. However, here we have a sparse dependency structure in how the Markov chains influence a given observation 𝐲t\boldsymbol{\rm y}_{t}, as induced by the conditional independencies in ϵt\boldsymbol{\rm\epsilon}_{t} encoded in the graph GG. That is, yt(i)y_{t}^{(i)} only depends on ZtZ_{t} and the set of zt(j)z_{t}^{(j)} for which jj is a neighbor of ii in GG.

2.2 Prior Specification

Emission parameters

As in the AR-HMM, we place a multivariate normal prior on the AR coefficients,

\boldsymbol{a}_{k} \sim \mathcal{N}(\boldsymbol{m}_{0}, \Sigma_{0}), \qquad (4)

with mean 𝐦0\boldsymbol{\rm m}_{0} and covariance Σ0\Sigma_{0}. Throughout this work, we let 𝐦0=𝟎\boldsymbol{\rm m}_{0}=\boldsymbol{\rm 0}.

For the channel covariances Δl\Delta_{l} with sparse precisions Δl1\Delta_{l}^{-1} determined by the graph GG, we specify a hyper-inverse Wishart (HIW) prior,

\Delta_{l} \sim \mbox{HIW}_{G}(b_{0}, D_{0}), \qquad (5)

where b0b_{0} denotes the degrees of freedom and D0D_{0} the scale. The HIW prior [15] enforces hyper-Markov conditions specified by GG.

Feature constrained channel transition distributions

A natural question is how many AR states are the channels switching between? Likewise, which are shared between the channels and which are unique? We expect to see similar dynamics present in the channels (sharing of AR processes), but also some differences. For example, maybe only some of the channels ever get excited into a certain state. To capture this structure, we take a Bayesian nonparametric approach building on the beta process (BP) AR-HMM of Fox et al. [16]. Through the beta process prior [17], the BP-AR-HMM defines a shared library of infinitely many AR coefficients {𝐚k}\{\boldsymbol{\rm a}_{k}\}, but encourages each channel to only use a sparse subset of them.

The BP-AR-HMM specifically defines a featural model. Let 𝐟(i)\boldsymbol{\rm f}^{(i)} be a binary feature vector associated with channel ii with fk(i)=1f_{k}^{(i)}=1 indicating that channel ii uses the dynamic described by 𝐚k\boldsymbol{\rm a}_{k}. Formally, the feature assignments fk(i)f_{k}^{(i)} and their corresponding parameters 𝐚k\boldsymbol{\rm a}_{k} are generated by a beta process random measure and the conjugate Bernoulli process (BeP),

\begin{aligned} B &\sim \mbox{BP}(1, B_{0}), \\ X^{(i)} &\sim \mbox{BeP}(B), \end{aligned} \qquad (6)

with base measure B0B_{0} over the parameter space Θ=r\Theta=\mathbb{R}^{r} for our rr-order autoregressive parameters 𝐚k\boldsymbol{\rm a}_{k}. As specified in Eq. (4), we take the normalized measure B0/B0(Θ)B_{0}/B_{0}(\Theta) to be 𝒩(𝐦0,Σ0){\cal N}(\boldsymbol{\rm m}_{0},\Sigma_{0}). The discrete measures BB and X(i)X^{(i)} can be represented as

B = \sum_{k=1}^{\infty} \omega_{k}\, \delta_{\boldsymbol{a}_{k}}, \qquad X^{(i)} = \sum_{k=1}^{\infty} f^{(i)}_{k}\, \delta_{\boldsymbol{a}_{k}}, \qquad (7)

with fk(i)Ber(ωk)f^{(i)}_{k}\sim\mbox{Ber}(\omega_{k}).
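The following sketch draws binary feature matrices from the Indian buffet process predictive representation implied by this beta-Bernoulli construction (marginalizing the weights ω_k); the number of channels and the mass parameter are hypothetical choices.

```python
import numpy as np

def sample_ibp(num_channels, alpha, rng):
    """Indian buffet process predictive sampling: F[i, k] = f_k^{(i)} indicates
    that channel i uses the AR dynamic a_k."""
    F = np.zeros((num_channels, 0), dtype=int)
    for i in range(num_channels):
        if F.shape[1] > 0:
            m = F[:i].sum(axis=0)                      # popularity of existing features
            F[i] = rng.random(F.shape[1]) < m / (i + 1)
        k_new = rng.poisson(alpha / (i + 1))           # features unique to channel i
        if k_new > 0:
            new_cols = np.zeros((num_channels, k_new), dtype=int)
            new_cols[i] = 1
            F = np.hstack([F, new_cols])
    return F

rng = np.random.default_rng(1)
print(sample_ibp(num_channels=6, alpha=10.0, rng=rng))
```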

The resulting feature vectors 𝐟(i)\boldsymbol{\rm f}^{(i)} constrain the set of available states zt(i)z_{t}^{(i)} can take by constraining the kth component of each transition distribution, 𝝅j(i)\boldsymbol{\rm\pi}^{(i)}_{j}, to be 0 when fk(i)=0f_{k}^{(i)}=0. Specifically, the BP-AR-HMM defines 𝝅j(i)\boldsymbol{\rm\pi}^{(i)}_{j} by introducing a set of gamma random variables, ηjk(i)\eta_{jk}^{(i)}, and setting

\eta^{(i)}_{jk} \sim \mbox{Gamma}(\gamma_{\rm c} + \kappa_{\rm c}\,\delta(j,k)), \qquad (8)
\boldsymbol{\pi}^{(i)}_{j} = \frac{\boldsymbol{\eta}^{(i)}_{j} \circ \boldsymbol{f}^{(i)}}{\sum_{k \mid f_{k}^{(i)}=1} \eta^{(i)}_{jk}}. \qquad (9)

The positive elements of 𝝅j(i)\boldsymbol{\rm\pi}^{(i)}_{j} can also be thought of as a sample from a finite Dirichlet distribution with only K(i)K^{(i)} dimensions, where K(i)=kfk(i)K^{(i)}=\sum_{k}f^{(i)}_{k} represents the number of states channel ii uses. For convenience, we sometimes denote the set of transition variables {ηjk(i)}j\{\eta_{jk}^{(i)}\}_{j} as 𝜼(i)\boldsymbol{\rm\eta}^{(i)}. As in the sticky HDP-HMM of Fox et al. [18], the parameter κc\kappa_{c} encourages self-transitions (i.e., state jj at time t1t-1 to state jj at time tt).
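A small sketch of the construction in Eqs. (8)–(9), assuming a hypothetical feature vector f^{(i)}, unit-scale gamma draws, and illustrative hyperparameter values:

```python
import numpy as np

def channel_transitions(f, gamma_c, kappa_c, rng):
    """Feature-constrained transition distributions (Eqs. 8-9): each row pi_j
    is the f-masked, renormalized vector of Gamma draws, with an extra
    kappa_c boost on the self-transition j -> j."""
    K = f.shape[0]
    eta = rng.gamma(shape=gamma_c + kappa_c * np.eye(K), scale=1.0)   # eta_jk
    pi = eta * f[None, :]                    # zero out states with f_k = 0
    pi /= pi.sum(axis=1, keepdims=True)      # normalize over the active states
    return pi

rng = np.random.default_rng(2)
f = np.array([1, 0, 1, 1, 0])                # hypothetical f^{(i)}
print(channel_transitions(f, gamma_c=1.0, kappa_c=5.0, rng=rng).round(3))
```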

Unconstrained event transition distributions

We again take a Bayesian nonparametric approach to define the event state HMM, building on the sticky HDP-HMM [18]. In particular, the transition distributions ϕl\boldsymbol{\rm\phi}_{l} are hierarchically defined as

\begin{aligned} \boldsymbol{\beta} &\sim \mbox{stick}(\alpha), \\ \boldsymbol{\phi}_{l} &\sim \mbox{DP}(\alpha_{\rm e}\boldsymbol{\beta} + \kappa_{\rm e}\boldsymbol{e}_{l}), \end{aligned} \qquad (10)

where stick(α)\mbox{stick}(\alpha) refers to a stick-breaking measure, also known as GEM(α)\mbox{GEM}(\alpha), with 𝜷\boldsymbol{\rm\beta} generated by

\begin{aligned} \beta_{k}^{\prime} &\sim \mbox{Beta}(1, \alpha), & k=1,2,\ldots, \\ \beta_{k} &= \beta_{k}^{\prime} \prod_{\ell=1}^{k-1} (1-\beta_{\ell}^{\prime}) = \beta_{k}^{\prime}\Bigl(1 - \sum_{\ell=1}^{k-1} \beta_{\ell}\Bigr), & k=1,2,\ldots. \end{aligned} \qquad (11)

Again, the sticky parameter κe\kappa_{\rm e} promotes self-transitions, reducing state redundancy.
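For reference, a truncated sketch of the stick-breaking construction in Eq. (11); the truncation level and concentration are arbitrary illustrative values.

```python
import numpy as np

def stick_breaking(alpha, trunc, rng):
    """Truncated GEM(alpha) weights: beta_k = beta'_k * prod_{l<k} (1 - beta'_l)."""
    beta_prime = rng.beta(1.0, alpha, size=trunc)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - beta_prime)[:-1]])
    return beta_prime * remaining

rng = np.random.default_rng(3)
beta = stick_breaking(alpha=2.0, trunc=20, rng=rng)
print(beta.sum())   # close to, but below, 1 because of truncation
```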

We term this model the sparse factorial BP-AR-HMM. Although the graph GG can be arbitrarily structured, because of our motivating seizure modeling application with a focus on a spatial-based graph structure, we often describe the sparse factorial BP-AR-HMM as capturing spatial correlations. We depict this model in the directed acyclic graphs shown in Figure 3. Note that while we formally consider a model of only a single event for notational simplicity, our formulation scales straightforwardly to multiple independent events. In this case, everything except the library of AR states {𝐚k}\{\boldsymbol{\rm a}_{k}\} becomes event-specific. If all events share the same channel setup, we can assume the channel covariances {Δl}\{\Delta_{l}\} are shared as well.

Figure 3: Referencing the channel state and event state sequences of Figure 2, here we depict the graphical model associated with our Bayesian nonparametric prior specification of Section 2.2. (top) The channel ii feature indicators 𝐟(i)\boldsymbol{\rm f}^{(i)} are samples from a Bernoulli process with weights {ωk}\{\omega_{k}\} and constrain the channel transition distributions 𝝅(i)\boldsymbol{\rm\pi}^{(i)}. Channel states zt(i)z_{t}^{(i)} evolve independently for each channel according to these feature-constrained transition distributions 𝝅(i)\boldsymbol{\rm\pi}^{(i)}. (bottom) The event state ZtZ_{t} evolves independently of each channel ii’s state zt(i)z_{t}^{(i)} according to transition distributions {ϕl}\{\boldsymbol{\rm\phi}_{l}\}, which are coupled by global transition distribution 𝜷\boldsymbol{\rm\beta}.

3 Posterior Computations

Although the components of our model related to the individual channel dynamics are similar to those in the BP-AR-HMM, our posterior computations are significantly different due to the coupling of the Markov chains via the correlated innovations ϵt\boldsymbol{\rm\epsilon}_{t}. In the BP-AR-HMM, conditioned on the feature assignments, each time series is independent. Here, however, we are faced with a factorial HMM structure and the associated challenges. Yet the underlying graph structure of the channel dependencies mitigates the scale of these challenges.

Conditioned on channel sequences {𝐳1:T(𝐢)}\{\boldsymbol{\rm z}_{1:T}^{(\boldsymbol{\rm i}^{\prime})}\}, we can marginalize z1:T(i)z_{1:T}^{(i)}; because of the graph structure, we need only condition on a sparse set of other channels 𝐢\boldsymbol{\rm i}^{\prime} (i.e., neighbors of channel ii in the graph). This step is important for efficiently sampling the feature assignments 𝐟(i)\boldsymbol{\rm f}^{(i)}.

Algorithm 1 Sparse factorial BP-AR-HMM master MCMC sampler
1:for  each MCMC iteration  do
2:  get a random permutation 𝐡\boldsymbol{\rm h} of the channel indices,
3:  for each channel i𝐡i\in\boldsymbol{\rm h} do
4:    sample feature indicators 𝐟(i)\boldsymbol{\rm f}^{(i)} as in Eq. (12)
5:    sample state sequence z1:T(i)z_{1:T}^{(i)} as in Eq. (13)
6:    sample state transition parameters 𝜼(i)\boldsymbol{\rm\eta}^{(i)} as in Eq. (14)
7:  end for
8:  sample event state sequence Z1:TZ_{1:T}
9:  sample event state transition parameters ϕ\boldsymbol{\rm\phi} as in Eq. (17)
10:  sample channel AR parameters {𝐚k}\{\boldsymbol{\rm a}_{k}\} as in Eq. (19)
11:  sample channel covariances {Δl}\{\Delta_{l}\} as in Eq. (18)
12:  (sample hyperparameters γc\gamma_{c}, κc\kappa_{c}, αe\alpha_{e}, κe\kappa_{e}, γe\gamma_{e}, and αc=B0(Θ)\alpha_{c}=B_{0}(\Theta))
13:end for

At a high level, each MCMC iteration proceeds through sampling channel states, event states, dynamic model parameters, and hyperparameters. Algorithm 1 summarizes these steps, which we briefly describe below and more fully in Appendices B–D.

Individual channel variables

We harness the fact that we can compute the marginal likelihood of 𝐲1:T\boldsymbol{\rm y}_{1:T} given 𝐟(i)\boldsymbol{\rm f}^{(i)} and the neighborhood set of other channels 𝐳1:T(𝐢)\boldsymbol{\rm z}_{1:T}^{(\boldsymbol{\rm i}^{\prime})} in order to block sample {𝐟(i),z1:T(i)}\{\boldsymbol{\rm f}^{(i)},z_{1:T}^{(i)}\}. That is, we first sample 𝐟(i)\boldsymbol{\rm f}^{(i)} marginalizing z1:T(i)z_{1:T}^{(i)} and then sample z1:T(i)z_{1:T}^{(i)} given the sampled 𝐟(i)\boldsymbol{\rm f}^{(i)}. Sampling the active features 𝐟(i)\boldsymbol{\rm f}^{(i)} for channel ii follows as in Fox et al. [5], using the Indian buffet process (IBP) [19] predictive representation associated with the beta process, but using a likelihood term that conditions on neighboring channel state sequences 𝐳1:T(𝐢)\boldsymbol{\rm z}_{1:T}^{(\boldsymbol{\rm i}^{\prime})} and observations 𝐲1:T(𝐢)\boldsymbol{\rm y}_{1:T}^{(\boldsymbol{\rm i}^{\prime})}. We additionally condition on the event state sequence Z1:TZ_{1:T} to define the sequence of distributions on the innovations. Generically, this yields

p\bigl(f^{(i)}_{k} \mid y^{(i)}_{1:T}, \boldsymbol{y}^{(\boldsymbol{i}^{\prime})}_{1:T}, \boldsymbol{z}^{(\boldsymbol{i}^{\prime})}_{1:T}, Z_{1:T}, \boldsymbol{F}^{-ik}, \boldsymbol{\eta}^{(i)}, \{\boldsymbol{a}_{k}\}, \{\Delta_{l}\}\bigr) \propto p\bigl(f^{(i)}_{k} \mid \boldsymbol{F}^{-ik}\bigr)\, p\bigl(y^{(i)}_{1:T} \mid \boldsymbol{y}^{(\boldsymbol{i}^{\prime})}_{1:T}, \boldsymbol{z}^{(\boldsymbol{i}^{\prime})}_{1:T}, Z_{1:T}, \boldsymbol{F}^{-ik}, f^{(i)}_{k}, \boldsymbol{\eta}^{(i)}, \{\boldsymbol{a}_{k}\}, \{\Delta_{l}\}\bigr). \qquad (12)

Here, 𝐅ik\boldsymbol{\rm F}^{-ik} denotes the set of feature assignments not including fk(i)f^{(i)}_{k}. The first term is given by the IBP prior and the second term is the marginal conditional likelihood (marginalizing z1:T(i)z_{1:T}^{(i)}). Based on the derived marginal conditional likelihood, feature sampling follows similarly to that of Fox et al. [5].

Conditioned on 𝐟(i)\boldsymbol{\rm f}^{(i)}, we block sample the state sequence z1:T(i)z_{1:T}^{(i)} using a backward-filtering, forward-sampling algorithm (see Appendix A) based on a decomposition of the full conditional as

\begin{aligned} p\bigl(z^{(i)}_{1:T} \mid y^{(i)}_{1:T}, \boldsymbol{y}^{(\boldsymbol{i}^{\prime})}_{1:T}, \boldsymbol{z}^{(\boldsymbol{i}^{\prime})}_{1:T}, \boldsymbol{f}^{(i)}, \boldsymbol{\eta}^{(i)}, \{\boldsymbol{a}_{k}\}, \{\Delta_{l}\}\bigr) = {}& p\bigl(z^{(i)}_{1} \mid y^{(i)}_{1}, \boldsymbol{y}^{(\boldsymbol{i}^{\prime})}_{1}, \boldsymbol{z}^{(\boldsymbol{i}^{\prime})}_{1}, \boldsymbol{f}^{(i)}, \boldsymbol{\eta}^{(i)}, \{\boldsymbol{a}_{k}\}, \{\Delta_{l}\}\bigr) \\ & \cdot \prod_{t=2}^{T} p\bigl(z^{(i)}_{t} \mid y^{(i)}_{t:T}, \boldsymbol{y}^{(\boldsymbol{i}^{\prime})}_{t:T}, z^{(i)}_{t-1}, \boldsymbol{z}^{(\boldsymbol{i}^{\prime})}_{t:T}, \boldsymbol{f}^{(i)}, \boldsymbol{\eta}^{(i)}, \{\boldsymbol{a}_{k}\}, \{\Delta_{l}\}\bigr). \end{aligned} \qquad (13)

For sampling the transition parameters 𝜼(i)\boldsymbol{\rm\eta}^{(i)}, we follow Hughes et al. [20, Supplement] and sample from the full conditional

p(\eta^{(i)}_{jk} \mid z^{(i)}_{1:T}, f^{(i)}_{k}) \propto \frac{(\eta_{jk}^{(i)})^{\,n^{(i)}_{jk} + \gamma_{\rm c} + \kappa_{\rm c}\delta(j,k) - 1}\, e^{-\eta^{(i)}_{jk}}}{\sum_{k^{\prime} \mid f^{(i)}_{k^{\prime}}=1} \eta^{(i)}_{jk^{\prime}}}, \qquad (14)

where njk(i)n^{(i)}_{jk} denotes the number of times channel ii transitions from state jj to state kk. We sample 𝜼j(i)=Cj(i)𝜼¯j(i)\boldsymbol{\rm\eta}^{(i)}_{j}=C^{(i)}_{j}\bar{\boldsymbol{\rm\eta}}^{(i)}_{j} from its posterior via two auxiliary variables,

\begin{aligned} \bar{\boldsymbol{\eta}}^{(i)}_{j} &\sim \mbox{Dir}(\gamma_{\rm c} + \boldsymbol{e}_{j}\kappa_{\rm c} + \boldsymbol{n}_{j}^{(i)}), \\ C_{j}^{(i)} &\sim \mbox{Gamma}(K\gamma_{\rm c} + \kappa_{\rm c}, 1), \end{aligned} \qquad (15)

where 𝐧j(i)\boldsymbol{\rm n}_{j}^{(i)} gives the transition counts from state jj in channel ii.
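A sketch of this auxiliary-variable update for a single row of 𝜼(i), with hypothetical transition counts and hyperparameters (K here denotes the number of active states for the channel):

```python
import numpy as np

def resample_eta_row(n_j, gamma_c, kappa_c, j, rng):
    """Eq. (15): eta_j^{(i)} = C_j * eta_bar_j, with a Dirichlet draw for the
    direction and a Gamma draw for the total mass."""
    K = n_j.shape[0]
    e_j = np.zeros(K)
    e_j[j] = 1.0
    eta_bar = rng.dirichlet(gamma_c + kappa_c * e_j + n_j)
    C = rng.gamma(K * gamma_c + kappa_c, 1.0)
    return C * eta_bar

rng = np.random.default_rng(4)
n_j = np.array([40, 2, 1, 0, 3])    # hypothetical transition counts out of state j
print(resample_eta_row(n_j, gamma_c=1.0, kappa_c=5.0, j=0, rng=rng).round(2))
```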

Event variables {ϕl,Δl,Z1:T}\{\boldsymbol{\rm\phi}_{l},\Delta_{l},Z_{1:T}\}

Conditioned on the channel state sequences 𝐳1:T\boldsymbol{\rm z}_{1:T} and AR coefficients {𝐚k}\{\boldsymbol{\rm a}_{k}\}, we can compute an innovations sequence as ϵt=𝐲t𝐀𝐳t𝐘~t\boldsymbol{\rm\epsilon}_{t}=\boldsymbol{\rm y}_{t}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}\boldsymbol{\rm\widetilde{Y}}_{t}, where we recall the definition of 𝐀k\boldsymbol{\rm A}_{k} and 𝐘~t\boldsymbol{\rm\widetilde{Y}}_{t} from Eq. (3). These innovations are the observations of the sticky HDP-HMM of Eq. (2). For simplicity and to allow block-sampling of Z1:TZ_{1:T}, we consider a weak limit approximation of the sticky HDP-HMM as in [18]. The top-level Dirichlet process is approximated by an LL-dimensional Dirichlet distribution [21], inducing a finite Dirichlet for ϕl\boldsymbol{\rm\phi}_{l}:

\begin{aligned} \boldsymbol{\beta} &\sim \mbox{Dir}(\gamma_{\rm e}/L, \ldots, \gamma_{\rm e}/L), \\ \boldsymbol{\phi}_{l} &\sim \mbox{Dir}(\alpha_{\rm e}\boldsymbol{\beta} + \kappa_{\rm e}\boldsymbol{e}_{l}). \end{aligned} \qquad (16)

Here, LL provides an upper bound on the number of states in the HDP-HMM. The weak limit approximation still encourages using a subset of these LL states.

Based on the weak limit approximation, we first sample the parent transition distribution 𝜷\boldsymbol{\rm\beta} as in [22, 18], followed by sampling each ϕl\boldsymbol{\rm\phi}_{l} from its Dirichlet posterior,

p\left(\boldsymbol{\phi}_{l} \mid Z_{1:T}, \boldsymbol{\beta}\right) \propto \mbox{Dir}(\alpha_{\rm e}\boldsymbol{\beta} + \boldsymbol{e}_{l}\kappa_{\rm e} + \boldsymbol{n}_{l}), \qquad (17)

where 𝐧l\boldsymbol{\rm n}_{l} is a vector of transition counts of Z1:TZ_{1:T} from state ll to the LL different states.
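A sketch of the resulting Dirichlet draw for each ϕl under the weak limit approximation, given 𝜷 and the transition counts; the values of L, 𝜷, αe, κe, and the counts are hypothetical, and the auxiliary-variable update of 𝜷 itself is omitted.

```python
import numpy as np

def resample_event_transitions(beta, counts, alpha_e, kappa_e, rng):
    """Eq. (17): phi_l ~ Dir(alpha_e * beta + kappa_e * e_l + n_l), where n_l
    are the transition counts out of event state l in Z_{1:T}."""
    L = beta.shape[0]
    phi = np.empty((L, L))
    for l in range(L):
        e_l = np.zeros(L)
        e_l[l] = 1.0
        phi[l] = rng.dirichlet(alpha_e * beta + kappa_e * e_l + counts[l])
    return phi

rng = np.random.default_rng(5)
L = 4
beta = np.full(L, 1.0 / L)                                          # illustrative global distribution
counts = rng.poisson(5, size=(L, L)) + 20 * np.eye(L, dtype=int)    # hypothetical sticky counts
print(resample_event_transitions(beta, counts, alpha_e=1.0, kappa_e=10.0, rng=rng).round(2))
```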

Using standard conjugacy results, based on “observations” ϵt=𝐲t𝐀𝐳t𝐘~t\boldsymbol{\rm\epsilon}_{t}=\boldsymbol{\rm y}_{t}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}\boldsymbol{\rm\widetilde{Y}}_{t} for tt such that Zt=lZ_{t}=l, the full conditional for Δl\Delta_{l} is given by

p(\Delta_{l} \mid \boldsymbol{y}_{1:T}, \boldsymbol{z}_{1:T}, Z_{1:T}, \{\boldsymbol{a}_{k}\}) \propto \mbox{HIW}_{G}(b_{l}, D_{l}), \qquad (18)

where

\begin{aligned} b_{l} &= b_{0} + |\{t \mid Z_{t}=l,\; t=1,\ldots,T\}|, \\ D_{l} &= D_{0} + \sum_{t \mid Z_{t}=l} \boldsymbol{\epsilon}_{t}\boldsymbol{\epsilon}_{t}^{\rm T}. \end{aligned}

Details on how to efficiently sample from a HIW distribution are provided in [23].
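The sufficient statistics (b_l, D_l) are simple to accumulate; the sketch below computes them from an innovations sequence and an event state sequence (all inputs are synthetic placeholders), leaving the actual HIW_G draw to the method of [23].

```python
import numpy as np

def hiw_posterior_params(eps, Z, b0, D0, l):
    """Eq. (18) sufficient statistics: eps is the T x N innovations sequence
    y_t - A_{z_t} Ytilde_t and Z the event state sequence."""
    mask = (Z == l)
    eps_l = eps[mask]                     # innovations assigned to event state l
    b_l = b0 + mask.sum()
    D_l = D0 + eps_l.T @ eps_l            # adds sum_t eps_t eps_t^T
    return b_l, D_l

rng = np.random.default_rng(6)
T, N = 200, 4
eps = rng.standard_normal((T, N))         # placeholder innovations
Z = rng.integers(0, 2, size=T)            # placeholder event states
b_l, D_l = hiw_posterior_params(eps, Z, b0=N + 2, D0=np.eye(N), l=0)
print(b_l, D_l.shape)
```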

Conditioned on the truncated HDP-HMM event transition distributions {ϕl}\{\boldsymbol{\rm\phi}_{l}\} and emission parameters {Δl}\{\Delta_{l}\}, we use a standard backward filtering forward sampling scheme to block sample Z1:TZ_{1:T}.

AR coefficients, {𝐚k}\{\boldsymbol{\rm a}_{k}\}

Each observation 𝐲t\boldsymbol{\rm y}_{t} is generated based on a matrix of AR parameters 𝐀𝐳t=[𝐚zt(1)𝐚zt(N)]\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}=[\boldsymbol{\rm a}_{z_{t}^{(1)}}\mid\cdots\mid\boldsymbol{\rm a}_{z_{t}^{(N)}}]. Thus, sampling 𝐚k\boldsymbol{\rm a}_{k} involves conditioning on {𝐚k}kk\{\boldsymbol{\rm a}_{k^{\prime}}\}_{k^{\prime}\neq k} and disentangling the contribution of 𝐚k\boldsymbol{\rm a}_{k} to each 𝐲t\boldsymbol{\rm y}_{t}. As derived in Appendix E, the full conditional for 𝐚k\boldsymbol{\rm a}_{k} is a multivariate normal

p(\boldsymbol{a}_{k} \mid \boldsymbol{y}_{1:T}, \boldsymbol{z}_{1:T}, Z_{1:T}, \{\boldsymbol{a}_{k^{\prime}}\}_{k^{\prime}\neq k}, \{\Delta_{l}\}) \propto \mathcal{N}(\boldsymbol{\mu}_{k}, \Sigma_{k}), \qquad (19)

where

\begin{aligned} \Sigma_{k}^{-1} &= \Sigma_{0}^{-1} + \sum_{t=1}^{T} \bar{\boldsymbol{Y}}_{t}^{(\boldsymbol{k}^{+})}\, \Delta_{Z_{t}}^{-1(\boldsymbol{k}^{+},\boldsymbol{k}^{+})}\, \bigl(\bar{\boldsymbol{Y}}_{t}^{(\boldsymbol{k}^{+})}\bigr)^{\rm T}, \\ \Sigma_{k}^{-1}\boldsymbol{\mu}_{k} &= \sum_{t=1}^{T} \bar{\boldsymbol{Y}}_{t}^{(\boldsymbol{k}^{+})} \Bigl(\Delta_{Z_{t}}^{-1(\boldsymbol{k}^{+},\boldsymbol{k}^{+})}\, \boldsymbol{y}_{t}^{(\boldsymbol{k}^{+})} + \Delta_{Z_{t}}^{-1(\boldsymbol{k}^{+},\boldsymbol{k}^{-})}\, \boldsymbol{\epsilon}_{t}^{(\boldsymbol{k}^{-})}\Bigr). \end{aligned}

The vectors 𝐤+\boldsymbol{\rm k}^{+} and 𝐤\boldsymbol{\rm k}^{-} denote the indices of channels assigned and not assigned to state kk at time tt, respectively. We use these to index into the rows and columns of the vectors ϵt\boldsymbol{\rm\epsilon}_{t}, 𝐲t\boldsymbol{\rm y}_{t}, and matrix ΔZt\Delta_{Z_{t}}. Each column of matrix 𝐘¯t(𝐤+)\bar{\boldsymbol{\rm Y}}_{t}^{(\boldsymbol{\rm k}^{+})} is the previous rr observations for one of the channels assigned to state kk at time tt.
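A sketch of assembling Σk⁻¹ and Σk⁻¹μk as in Eq. (19) from the current states and parameters. Array layouts and the toy driver values are hypothetical, and the generic prior term Σ0⁻¹m0 is included even though we set m0 = 0 in this work.

```python
import numpy as np

def ar_full_conditional(k, y, y_lags, z, Z, A, Delta_inv, Sigma0_inv, m0):
    """Posterior mean and covariance of a_k (Eq. 19). y: T x N observations;
    y_lags[t, i]: the r previous observations of channel i; z, Z: channel and
    event states; A[k']: current AR coefficients; Delta_inv[l]: innovation
    precision for event state l."""
    T, N = y.shape
    prec = Sigma0_inv.copy()
    lin = Sigma0_inv @ m0
    for t in range(T):
        kp = np.where(z[t] == k)[0]                  # channels in state k at time t
        if kp.size == 0:
            continue
        km = np.where(z[t] != k)[0]
        Ybar = y_lags[t, kp].T                       # r x |k+| lag matrix
        P_pp = Delta_inv[Z[t]][np.ix_(kp, kp)]
        P_pm = Delta_inv[Z[t]][np.ix_(kp, km)]
        eps_m = y[t, km] - np.einsum('ij,ij->i', A[z[t, km]], y_lags[t, km])
        prec += Ybar @ P_pp @ Ybar.T
        lin += Ybar @ (P_pp @ y[t, kp] + P_pm @ eps_m)
    Sigma_k = np.linalg.inv(prec)
    return Sigma_k @ lin, Sigma_k                    # (mu_k, Sigma_k)

# Toy driver with placeholder data.
rng = np.random.default_rng(7)
T, N, r, K = 50, 3, 1, 2
y, y_lags = rng.standard_normal((T, N)), rng.standard_normal((T, N, r))
z, Z = rng.integers(0, K, (T, N)), np.zeros(T, dtype=int)
A, Delta_inv = rng.standard_normal((K, r)), np.stack([np.eye(N)])
mu_k, Sigma_k = ar_full_conditional(0, y, y_lags, z, Z, A, Delta_inv, np.eye(r), np.zeros(r))
```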

Hyperparameters

See Appendix D for the priors and full conditionals of the hyperparameters γc\gamma_{c}, κc\kappa_{c}, αe\alpha_{e}, κe\kappa_{e}, γe\gamma_{e}, and αc=B0(Θ)\alpha_{c}=B_{0}(\Theta).

4 Experiments

4.1 Simulation Experiments

To initially explore some characteristics of our sparse factorial BP-AR-HMM, we examined a small simulated dataset of six time series in a 2x3 spatial arrangement, with edges connecting all adjacent nodes (i.e., two cliques of 4 nodes each). We generated an event of length 2000 time points as follows. We defined five first-order AR channel states linearly spaced between -0.9 and 0.9 and three event states with covariances shown in the bottom left of Figure 4. Channel and event state self-transition probabilities were set to 0.99 and 0.9, respectively, with the remaining probability spread uniformly over the other states. Channel feature indicators fk(i)f^{(i)}_{k} were simulated from an IBP with αc=10\alpha_{c}=10 (no channel had indicators exceeding the five specified states). The sampled 𝐟(i)\boldsymbol{\rm f}^{(i)} were then used to modify the channel state transition distributions by setting to 0 the transitions to states with fk(i)=0f^{(i)}_{k}=0 and then renormalizing. Using these feature-constrained transition distributions, we simulated sequences z1:T(i)z_{1:T}^{(i)} for each channel i=1,,6i=1,\dots,6 and for T=2000T=2000. The event sequence Z1:TZ_{1:T} was likewise simulated. Based on these sampled state sequences, and using the defined state-specific AR coefficients and channel covariances, we generated observations 𝐲1:T\boldsymbol{\rm y}_{1:T} as in Eq. (3).

Figure 4: (top left) The six simulated channel time series overlaid on the five true channel states, denoted by different colors; the three true event states are shown in grayscale in the bar below. (top right) The true and estimated channel (color) and event (grayscale) states shown below for comparison after 6000 MCMC iterations. The true (bottom left) and estimated (bottom right) event state innovation covariances.
channel state | true a_k | posterior mean | posterior 95% interval
1 | -0.900 | -0.906 | [-0.917, -0.896]
2 | -0.450 | -0.456 | [-0.474, -0.436]
3 | 0.000 | -0.009 | [-0.038, 0.020]
4 | 0.450 | 0.445 | [0.425, 0.466]
5 | 0.900 | 0.902 | [0.890, 0.913]
Table 1: The true and estimated values for the channel state coefficients in the simulated dataset. We include the posterior mean and 95% credible interval.

We ran our MCMC sampler for 6000 iterations, discarding the first 1000 as burn-in and thinning the chain by 10. Figure 4 shows the generated data and its true states along with the inferred states and learned channel covariances for a representative posterior sample. The event state matching is almost perfect, and the channel state matching is quite good, though we see that the sampler added an additional (yellow) state in the middle of the first time series when it should have assigned that section to the cyan state. The scale and structure of the estimated event state covariances match the true covariances quite well. Furthermore, Table 1 shows how the posterior estimates of the channel state AR coefficients also center well around the true values.

4.2 Parsing a Seizure

We tested the sparse factorial BP-AR-HMM on two similar seizures (events) from a patient of the Children’s Hospital of Pennsylvania. These seizures were chosen because they qualitatively displayed a variety of dynamics throughout the beginning, middle, and end of the seizure and thus are ideal for exploring the extent to which our sparse factorial BP-AR-HMM can parse a set of rich neurophysiologic signals. We used the 90 seconds of data after the clinically determined start of each seizure from 16 channels, whose spatial layout in the electrode grid is shown in Figure 5 along with the graph encoding our conditional independence assumptions. The data were low-pass filtered and downsampled from 200 to 50 Hz, preserving the clinically important signals but reducing the computational burden of posterior inference. The data were also scaled to have 99% of values within [-10, 10] for numerical reasons. We examined a 5th-order sparse factorial BP-AR-HMM and ran 10 MCMC chains for 6000 iterations, discarding 1000 samples as burn-in and using 10-sample thinning.

The sparse factorial BP-AR-HMM inferred state sequences for the sample corresponding to a minimum expected Hamming distance criterion ([18]) are shown in Figure 5. The results were analyzed by a board-certified epileptologist who agreed with the model’s judgement in identifying the subtle changes from the background dynamic (cyan) initially present in all channels. The model’s grouping of spatially-proximate channels into similar state transition patterns (e.g., channels 03, 07, 11, 15) was clinically intuitive and consistent with his own reading of the raw EEG. Using only the raw EEG, and prior to disclosing our results, he independently identified roughly six points in the duration of the seizure where the dynamics fundamentally change. The three main event state transitions shown in Figure 5 occurred almost exactly at the same time as three of his own marked transitions. The fourth coincides with a major shift in the channel dynamics with most channels transitioning to the green dynamic. The other two transitions he marked that are not displayed occurred after this onset period. From this analysis, we see that our event states provide an important global summary of the dynamics of the seizure that augments the information conveyed from the channel state sequences.

Figure 5: The graph used for a 16 channel iEEG electrode and corresponding traces over 25 seconds of a seizure onset with colors indicating the inferred channel states. The event states are shown below along with the associated innovation covariances. Vertical dashed lines indicate the EEG transition times marked independently by an epileptologist. Vertical and horizontal scale bars denote 1 mV and 1 second, respectively.

Clinical relevance

While interpreting these state sequences and covariances from the model, it is important to keep in mind that they are ultimately estimates of a system whose parsing even highly-trained physicians disagree upon. Nevertheless, we believe that the event states directly describe the activity of particular clinical interest.

In modeling the correlations between channels, the event states give insight into how different physiologic areas of the brain interact over the course of a seizure. In the clinical workup for resective brain surgery, these event states could help define and specifically quantify the full range of ways in which neurophysiologic regions initiate seizures and how others are recruited over the numerous seizures of a patient. In addition, given fixed model parameters, our model can fit the channel and event state sequences of an hour’s worth of 64-channel EEG data in a matter of minutes on a single 8-core machine, possibly facilitating epileptologist EEG annotation of long-term monitoring records.

The ultimate clinical aim of this work, however, involves understanding the relationship between epileptic bursts and seizures. Because the event state aspect of our model involves a Markov assumption, the intrinsic length of the event has little bearing on the states assigned to particular time points. Thus, these event states allow us to straightforwardly compare the neurophysiologic relationship dynamics in short bursts (often less than two seconds long) to those in much longer seizures (on the order of two minutes long), as explored in Section 4.4. Prior to this analysis, we first examine the importance of our various model components by comparing to baseline alternatives.

4.3 Model Comparison

The advantages of a spatial model

We explored the extent to which the spatial information and sparse dependencies encoded in the HIW prior improve our predictions of heldout data relative to a number of baseline models. To assess the impact of the sparse dependencies induced by the Gaussian graphical model for ϵt\boldsymbol{\rm\epsilon}_{t}, we compare to a full-covariance model with an IW prior on Δl\Delta_{l} (dense factorial). For assessing the importance of spatial correlations, we additionally compare to two alternatives where channels evolve independently: the BP-AR-HMM of Fox et al. [5] and an AR-HMM without the feature-based modeling provided by the beta process [24]. Both of these models use inverse gamma (IG) priors on the individual channel innovation variances. We learned a set of AR coefficients {𝐚k}\{\boldsymbol{\rm a}_{k}\} and event covariances {Δl}\{\Delta_{l}\} on one seizure and then computed the heldout log-likelihood on a separate seizure, constraining it to use the learned model parameters from the training seizure.

For the training seizure, we collected 5000 MCMC samples across 10 chains, each with a 1000-sample burn-in and 10-sample thinning. To compute the predictive log-likelihood of the heldout seizure, we analytically marginalize the heldout event state sequence Z1:TZ_{1:T} but perform a Monte Carlo integration over the feature vectors 𝐟(i)\boldsymbol{\rm f}^{(i)} and channel states 𝐳1:T\boldsymbol{\rm z}_{1:T} using our MCMC sampler. For each original MCMC sample generated from the training seizure, a secondary chain is run fixing {𝐚k}\{\boldsymbol{\rm a}_{k}\} and {Δl}\{\Delta_{l}\} and sampling zt(i)z_{t}^{(i)}, ZtZ_{t}, 𝐟(i)\boldsymbol{\rm f}^{(i)}, 𝜼(i)\boldsymbol{\rm\eta}^{(i)}, and {ϕl}\{\boldsymbol{\rm\phi}_{l}\} for the heldout seizure. We approximate p(𝐲1:T{ϕl},{𝐚k},{Δl})p(\boldsymbol{\rm y}_{1:T}\mid\{\boldsymbol{\rm\phi}_{l}\},\{\boldsymbol{\rm a}_{k}\},\{\Delta_{l}\}) by averaging the secondary chain’s closed-form p(𝐲1:T𝐳1:T,{ϕl},{𝐚k},{Δl})p(\boldsymbol{\rm y}_{1:T}\mid\boldsymbol{\rm z}_{1:T},\{\boldsymbol{\rm\phi}_{l}\},\{\boldsymbol{\rm a}_{k}\},\{\Delta_{l}\}), described in Appendix B.

Figure 6: (left) An example 16-channel clip of iEEG with the middle section of one channel zoomed in and innovations from the original BP-AR-HMM and our sparse factorial BP-AR-HMM shown below. (right) Boxplots of the heldout event log-likelihoods from the two original and factorial models with mean and median posterior likelihood given in green and red lines. Boxes denote the middle 50% prediction interval.

Figure 6 (left) shows how conditioning on the innovations of neighboring channels in the sparse factorial model improves the prediction of an individual channel, as seen by its reduced innovation trace relative to the original BP-AR-HMM. The quantitative benefits of accounting for these correlations are seen in our predictions of heldout events, as depicted in Figure 6 (right), which compares the heldout log-likelihoods for the original and the factorial models listed above. As expected, the factorial models have significantly larger predictive power than the original models. Though hard to see due to the large factorial/original difference, the BP-based model also improves on the standard non-feature-based AR-HMM. Performance of the sparse factorial model is also at least as good as that of the full-covariance model (dense factorial). We would expect to see even larger gains for electrode grids with more channels due to the parsimonious representation presented by the graphical model. Regardless, these results demonstrate that the assumptions of sparsity in the channel dependencies do not adversely affect our performance.

Figure 7: The average time per MCMC iteration required to calculate all of the channel likelihoods under each AR model at each time point.

The advantages of sparse factorial dependencies

In addition to providing a parsimonious modeling tool, the sparse dependencies among channels induced by the HIW prior allow our computations to scale linearly to the large number of channels present in iEEG. We compared a dense factorial BP-AR-HMM (entailing a fully-connected spatial graph) and a sparse factorial BP-AR-HMM on five datasets of 8, 16, 32, 64, and 96 channels (from three 32-channel electrodes) from the same seizure used previously. We ran the two models on each of the five datasets for at least 1000 MCMC iterations, using a profiler to tabulate the time spent in each step of the MCMC iteration.

Figure 7 shows the average time required to calculate the channel likelihoods at each time point under each AR channel state. This computation is used both for calculating the marginal likelihood (averaging over all the state sequences z1:T(i)z_{1:T}^{(i)}) required in active feature sampling as well as in sampling the state sequences z1:T(i)z_{1:T}^{(i)}. In our sparse factorial model, each channel has a constant set of MM dependencies, assuming MM neighboring channels. As such, the channel likelihood computation at each time point has an O(M)O(M) complexity, implying an O(MN)O(MN) complexity for calculating the likelihoods of all NN channels at each time point. In contrast, the likelihood computation at each time point under the full covariance model had complexity O(N)O(N), implying O(N2)O(N^{2}) for calculating all the channel likelihoods. For MNM\ll N, as is typically the case, our sparse dependency model is significantly more computationally efficient.

Anecdotally, we also found that the IW prior experiments—especially those with larger numbers of channels—tended to occasionally have numerical underflow problems associated with the inverse term ΔZt1(𝐢,𝐢)\Delta^{-1(\boldsymbol{\rm i}^{\prime},\boldsymbol{\rm i}^{\prime})}_{Z_{t}} in the conditional channel likelihood calculation. This underflow in the IW prior model calculations is not surprising since the matrices inverted are of dimension N1N-1 (for NN channels), whereas in the HIW prior, the sparse spatial dependencies of the electrode grids make these matrices no larger than eight-by-eight.

4.4 Comparing Epileptic Events of Different Scales

We applied our sparse factorial BP-AR-HMM to six channels of iEEG over 15 events from a human patient with hippocampal depth electrodes. These events comprise 14 short sub-clinical epileptic bursts of roughly five to eight seconds and a final, 2-3 minute clinical seizure. Our hypothesis was that the sub-clinical bursts display initiation dynamics similar to those of a full, clinical seizure and thus contain information about the seizure-generation process.

Figure 8: Six iEEG traces from two sub-clinical bursts and the onset of the single seizure, with colors indicating inferred channel and event states. The dashed lines indicate the start of the red state in the three events. Vertical and horizontal scale bars denote 1 mV and 1 second, respectively.

The events were automatically extracted from the patient’s continuous iEEG record by taking sections of iEEG whose median line-length feature [25] crossed a preset threshold, also including 10 seconds before and after each event. The iEEG was preprocessed in the same way as in the previous section. The six channels studied came from a depth electrode implanted in the left temporal lobe of the patient’s brain. We ran our MCMC sampler jointly on the 15 events. In particular, the AR channel state and event state parameters, {𝐚k}\{\boldsymbol{\rm a}_{k}\} and {Δl}\{\Delta_{l}\}, were shared between the 15 events such that the parsings of each recording jointly informed the posteriors of these shared parameters. The hyperparameter settings, number of MCMC iterations, chains, and thinning were as in the experiment of Section 4.2.

Figure 8 compares two of the 14 sub-clinical bursts and the onset of the single seizure. We have aligned the three events relative to the beginnings of the red event state common to all three, which we treat as the start of the epileptic activity. The individual channel states of the four middle channels are also all green throughout most of the red event state. It is interesting to note that at this time the fifth channel’s activity in all three events is much lower than those of the three channels above it, yet it is still assigned to the green state and continues in that state along with the other three channels as the event state switches from the red to the lime green state in all three events. While clinical opinions can vary widely in EEG reading, a physician would most likely not consider this segment of the fifth channel similar to the other three, as our model consistently does. But on a relative voltage axis, the segments actually look quite similar. In a sense, the fifth channel has the same dynamics as the other three but just with smaller magnitude. This kind of relationship is difficult for the human EEG readers to identify and shows how models such as ours are capable of providing new perspectives not readily apparent to a human reader. Additionally, we note the similarities in event state transitions.

The similarities mentioned above, among others, suggest some relationship between these two different classes of epileptic events. However, all bursts make a notable departure from the seizure: a large one-second depolarization in the middle three channels, highlighted at the end by the magenta event state and followed shortly thereafter by the end of the event. Neither the states assigned by our model nor the iEEG itself indicates that this dynamic is present in the clinical seizure. This difference leads us to posit that perhaps these sub-clinical bursts are a kind of false-start seizure, with similar onset patterns but a disrupting discharge that prevents the event from escalating to a full-blown seizure.

5 Conclusion

In this work, we develop a sparse factorial BP-AR-HMM model that allows for shared dynamic regimes between a variable number of time series, asynchronous regime-switching, and an unknown dictionary of dynamic regimes. Key to our model is capturing the between-series correlations and their evolution with a Markov-switching multivariate innovations process. For scalability, we assume a sparse dependency structure between the time series using a Gaussian graphical model.

This model is inspired by challenges in modeling high-dimensional intracranial EEG time series of seizures, which contain a large variety of small- and large-scale dynamics that are of interest to clinicians. We demonstrate the value of this unsupervised model by showing its ability to parse seizures in a clinically intuitive manner and to produce state-of-the-art out-of-sample predictions on the iEEG data. Finally, we show how our model allows for flexible, large-scale analysis of different classes of epileptic events, opening new, valuable clinical research directions previously too challenging to pursue. Such analyses have direct relevance to clinical decision-making for patients with epilepsy.

References

  • D’Alessandro et al. [2005] M. D’Alessandro, G. Vachtsevanos, R. Esteller, J. Echauz, S. Cranstoun, G. Worrell, L. Parish, B. Litt, A multi-feature and multi-channel univariate selection process for seizure prediction., Clinical Neurophysiology 116 (2005) 506–516.
  • Mirowski et al. [2009] P. Mirowski, D. Madhavan, Y. Lecun, R. Kuzniecky, Classification of patterns of EEG synchronization for seizure prediction., Clinical Neurophysiology 120 (2009) 1927–1940.
  • Litt et al. [2001] B. Litt, R. Esteller, J. Echauz, M. D’Alessandro, R. Shor, T. Henry, P. Pennell, C. Epstein, R. Bakay, M. Dichter, G. Vachtsevanos, Epileptic seizures may begin hours in advance of clinical onset: a report of five patients., Neuron 30 (2001) 51–64.
  • Krystal et al. [1999] A. D. Krystal, R. Prado, M. West, New methods of time series analysis of non-stationary EEG data: eigenstructure decompositions of time varying autoregressions., Clinical Neurophysiology 110 (1999) 2197–2206.
  • Fox et al. [2009] E. B. Fox, E. B. Sudderth, M. I. Jordan, A. S. Willsky, Sharing features among dynamical systems with beta processes, Advances in Neural Information Processing Systems (2009).
  • Schindler et al. [2007] K. Schindler, H. Leung, C. E. Elger, K. Lehnertz, Assessing seizure dynamics by analysing the correlation structure of multichannel intracranial EEG., Brain 130 (2007) 65–77.
  • Schiff et al. [2005] S. J. Schiff, T. Sauer, R. Kumar, S. L. Weinstein, Neuronal spatiotemporal pattern discrimination: the dynamical evolution of seizures., NeuroImage 28 (2005) 1043–1055.
  • Prado et al. [2006] R. Prado, F. Molina, G. Huerta, Multivariate time series modeling and classification via hierarchical VAR mixtures, Computational Statistics & Data Analysis 51 (2006) 1445–1462.
  • Van Gael et al. [2008] J. Van Gael, Y. Saatci, Y. W. Teh, Z. Ghahramani, Beam sampling for the infinite hidden Markov model, in: Proceedings of the 25th International Conference on Machine Learning, 2008.
  • Heller et al. [2009] K. Heller, Y. W. Teh, D. Gorur, The Infinite Hierarchical Hidden Markov Model, in: Proceedings of the 12th International Conference on Artificial Intelligence and Statistics, 2009.
  • Doshi-Velez et al. [2011] F. Doshi-Velez, D. Wingate, J. Tenenbaum, N. Roy, Infinite dynamic bayesian networks, in: Proceedings of the 28th International Conference on Machine Learning, 2011.
  • Dong et al. [2012] W. Dong, A. Pentland, K. A. Heller, Graph-Coupled HMMs for Modeling the Spread of Infection, in: Proceedings of the 28th Conference on Uncertainty in Artificial Intelligence, 2012.
  • Wulsin et al. [2013] D. F. Wulsin, E. B. Fox, B. Litt, Parsing Epileptic Events Using a Markov Switching Process for Correlated Time Series, in: Proceedings of the 30th International Conference on Machine Learning, 2013.
  • Ghahramani and Jordan [1997] Z. Ghahramani, M. I. Jordan, Factorial hidden Markov models, Machine Learning 29 (1997) 245–273.
  • Dawid and Lauritzen [1993] A. P. Dawid, S. L. Lauritzen, Hyper Markov laws in the statistical analysis of decomposable graphical models, The Annals of Statistics 21 (1993) 1272–1317.
  • Fox et al. [2011] E. B. Fox, E. B. Sudderth, M. I. Jordan, A. S. Willsky, Joint Modeling of Multiple Related Time Series via the Beta Process, Arxiv preprint arXiv:1111.4226v1 (2011).
  • Thibaux and Jordan [2007] R. Thibaux, M. I. Jordan, Hierarchical beta processes and the Indian buffet process, in: Proceedings of the 10th Conference on Artificial Intelligence and Statistics, 2007.
  • Fox et al. [2011] E. B. Fox, E. B. Sudderth, M. I. Jordan, A. S. Willsky, A sticky HDP-HMM with application to speaker diarization, The Annals of Applied Statistics 5 (2011) 1020–1056.
  • Griffiths and Ghahramani [2005] T. L. Griffiths, Z. Ghahramani, Infinite latent feature models and the Indian buffet process, Gatsby Computational Neuroscience Unit, Technical Report #2005-001 (2005).
  • Hughes et al. [2012] M. C. Hughes, E. B. Fox, E. B. Sudderth, Effective split-merge Monte Carlo methods for nonparametric models of sequential data, in: Advances in Neural Information Processing Systems, 2012.
  • Ishwaran and Zarepour [2002] H. Ishwaran, M. Zarepour, Exact and approximate sum representations for the Dirichlet process, Canadian Journal of Statistics 30 (2002) 269–283.
  • Teh et al. [2006] Y. W. Teh, M. I. Jordan, M. J. Beal, D. M. Blei, Hierarchical Dirichlet processes, Journal of the American Statistical Association 101 (2006) 1566–1581.
  • Carvalho et al. [2007] C. M. Carvalho, H. Massam, M. West, Simulation of hyper-inverse Wishart distributions in graphical models, Biometrika 94 (2007) 647–659.
  • Fox et al. [2011] E. B. Fox, E. B. Sudderth, M. I. Jordan, A. S. Willsky, Bayesian nonparametric inference of switching dynamic linear models, IEEE Transactions on Signal Processing 59 (2011) 1569–1585.
  • Esteller et al. [2001] R. Esteller, J. Echauz, T. Tcheng, B. Litt, B. Pless, Line length: an efficient feature for seizure onset detection, in: Proceedings of the 23rd Engineering in Medicine and Biology Society Conference, 2001.
  • Wulsin [2013] D. F. Wulsin, Bayesian Nonparametric Modeling of Epileptic Events, Ph.D. thesis, University of Pennsylvania, 2013.
  • Gelman et al. [2004] A. Gelman, J. B. Carlin, H. S. Stern, D. S. Rubin, Bayesian Data Analysis, second ed., Chapman & Hall/CRC, Boca Raton, Florida, 2004.
  • Green [1995] P. J. Green, Reversible jump Markov chain Monte Carlo computation and Bayesian model determination, Biometrika 82 (1995) 711–732.
  • Richardson and Green [1997] S. Richardson, P. J. Green, On Bayesian analysis of mixtures with an unknown number of components, Journal of the Royal Statistical Society, Series B 59 (1997) 731–792.
  • Brooks et al. [2011] S. P. Brooks, A. Gelman, G. L. Jones, X. Meng (Eds.), Handbook of Markov Chain Monte Carlo, Chapman & Hall/CRC, Boca Raton, Florida, 2011.
  • Fox [2009] E. B. Fox, Bayesian Nonparametric Learning of Complex Dynamical Phenomena, Ph.D. thesis, Massachusetts Institute of Technology, 2009.

Appendix A HMM sum-product algorithm

Consider a hidden Markov model of a sequence y_{1:T} with corresponding discrete states z_{1:T}, each of which takes one of K values. The joint probability of y_{1:t} and z_{t}=\ell is

p(y_{1:t},z_{t}=\ell)=p(y_{t}\mid z_{t}=\ell)\sum_{k=1}^{K}p(y_{1:t-1},z_{t-1}=k)p(z_{t}=\ell\mid z_{t-1}=k), (20)

sometimes called a forward message, which depends on a recursive call for y_{1:t-1} and z_{t-1}=k with

p(y_{1},z_{1}=k)=p(y_{1}\mid z_{1}=k)p(z_{1}=k). (21)
Algorithm 2 HMM forward-filtering algorithm for calculating p(y1:T)p(y_{1:T})
1:let 𝝅0\boldsymbol{\rm\pi}_{0} and 𝝅=(𝝅1||𝝅K)T\boldsymbol{\rm\pi}=(\boldsymbol{\rm\pi}_{1}|\cdots|\boldsymbol{\rm\pi}_{K})^{\rm T} be the initial and transition distributions
2:let 𝐮t+K\boldsymbol{\rm u}_{t}\in\mathbb{R}_{+}^{K} be the likelihoods p(ytzt=k)p(y_{t}\mid z_{t}=k) for each kk
3:let 𝝃t+K\boldsymbol{\rm\xi}_{t}\in\mathbb{R}_{+}^{K} be the forward messages p(y1:t1,zt1=k)p(y_{1:t-1},z_{t-1}=k) at time tt for each kk
4:normalize each 𝐮t\boldsymbol{\rm u}_{t} to sum to 1, preventing underflow during computations
5:for t=1,,Tt=1,\ldots,T do
6:  store the marginal log probability over 𝐮t\boldsymbol{\rm u}_{t} in vtv_{t} : vtlog(𝟏T𝐮t)v_{t}\leftarrow\log\left(\boldsymbol{\rm 1}^{\rm T}\boldsymbol{\rm u}_{t}\right)
7:  normalize 𝐮t\boldsymbol{\rm u}_{t} : 𝐮~t𝐮t/exp(vt)\widetilde{\boldsymbol{\rm u}}_{t}\leftarrow\boldsymbol{\rm u}_{t}/\exp(v_{t})
8:end for
9:calculate and normalize initial forward messages
10:𝝃1𝐮~1𝝅0\boldsymbol{\rm\xi}_{1}\leftarrow\widetilde{\boldsymbol{\rm u}}_{1}\circ\boldsymbol{\rm\pi}_{0}
11:w1𝟏T𝝃1w_{1}\leftarrow\boldsymbol{\rm 1}^{\rm T}\boldsymbol{\rm\xi}_{1}, 𝝃~1𝝃1/w1\widetilde{\boldsymbol{\rm\xi}}_{1}\leftarrow\boldsymbol{\rm\xi}_{1}/w_{1}
12:v1v1+log(w1)v_{1}\leftarrow v_{1}+\log(w_{1})
13:propagate messages forward through each time point
14:for t=2,,Tt=2,\ldots,T do
15:  transmit messages forward : 𝝃t𝐮~t(𝝅T𝝃~t1)\boldsymbol{\rm\xi}_{t}\leftarrow\widetilde{\boldsymbol{\rm u}}_{t}\circ\left(\boldsymbol{\rm\pi}^{\rm T}\widetilde{\boldsymbol{\rm\xi}}_{t-1}\right)
16:  normalize : wt𝟏T𝝃tw_{t}\leftarrow\boldsymbol{\rm 1}^{\rm T}\boldsymbol{\rm\xi}_{t}, 𝝃~t𝝃t/wt\widetilde{\boldsymbol{\rm\xi}}_{t}\leftarrow\boldsymbol{\rm\xi}_{t}/w_{t}
17:  vtvt+log(wt)v_{t}\leftarrow v_{t}+\log(w_{t})
18:end for
19:calculate final marginal log likelihood : logp(y1:T)=t=1Tvt\log p(y_{1:T})=\sum_{t=1}^{T}v_{t}

Calculating the marginal likelihood p(y_{1:T}) simply involves one last marginalization over z_{T},

p(y_{1:T})=\sum_{k=1}^{K}p(y_{1:T},z_{T}=k). (22)

Algorithm 2 provides a numerically stable recipe for calculating this marginal likelihood.
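
As an illustration, the following minimal sketch (Python with NumPy; the function name and the log-likelihood input format are our own choices rather than part of the model) implements the scaled forward recursion of Algorithm 2 and returns log p(y_{1:T}). It scales each per-step likelihood vector by its maximum rather than its sum; either scaling yields the same marginal log likelihood.

import numpy as np

def hmm_log_marginal_likelihood(log_lik, pi0, Pi):
    # log_lik: (T, K) array of log p(y_t | z_t = k)
    # pi0:     (K,) initial state distribution
    # Pi:      (K, K) transition matrix, Pi[j, k] = p(z_t = k | z_{t-1} = j)
    T, K = log_lik.shape
    v = log_lik.max(axis=1)              # per-step log scaling factors
    u = np.exp(log_lik - v[:, None])     # scaled likelihoods, safe from underflow
    log_p = 0.0
    xi = u[0] * pi0                      # initial forward message
    w = xi.sum()
    xi /= w
    log_p += v[0] + np.log(w)
    for t in range(1, T):                # propagate messages forward
        xi = u[t] * (Pi.T @ xi)
        w = xi.sum()
        xi /= w
        log_p += v[t] + np.log(w)
    return log_p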

Algorithm 3 HMM backward-filtering forward-sampling algorithm for block-sampling z1:Tz_{1:T}
1:let 𝝅0\boldsymbol{\rm\pi}_{0} and 𝝅=(𝝅1||𝝅K)T\boldsymbol{\rm\pi}=(\boldsymbol{\rm\pi}_{1}|\cdots|\boldsymbol{\rm\pi}_{K})^{\rm T} be the initial and transition distributions
2:let 𝐮t+K\boldsymbol{\rm u}_{t}\in\mathbb{R}_{+}^{K} be the likelihoods p(ytzt=k)p(y_{t}\mid z_{t}=k) for each kk
3:let 𝜻t+K\boldsymbol{\rm\zeta}_{t}\in\mathbb{R}_{+}^{K} be the backward messages p(yt+1:Tzt=k)p(y_{t+1:T}\mid z_{t}=k) at tt for each kk
4:normalize each 𝐮t\boldsymbol{\rm u}_{t} to sum to 1, preventing underflow during computations
5:for t=1,,Tt=1,\ldots,T do
6:  𝐮~t𝐮t/(𝟏T𝐮t)\widetilde{\boldsymbol{\rm u}}_{t}\leftarrow\boldsymbol{\rm u}_{t}/\left(\boldsymbol{\rm 1}^{\rm T}\boldsymbol{\rm u}_{t}\right)
7:end for
8:calculate backward messages over all time points
9:𝜻~T=𝟏\widetilde{\boldsymbol{\rm\zeta}}_{T}=\boldsymbol{\rm 1}
10:for t=T1,,1t=T-1,\ldots,1 do
11:  transmit messages backward : 𝝉t+1𝐮~t+1𝜻~t+1\boldsymbol{\rm\tau}_{t+1}\leftarrow\widetilde{\boldsymbol{\rm u}}_{t+1}\circ\widetilde{\boldsymbol{\rm\zeta}}_{t+1}, 𝜻t𝝅𝝉t+1\boldsymbol{\rm\zeta}_{t}\leftarrow\boldsymbol{\rm\pi}\boldsymbol{\rm\tau}_{t+1}
12:  normalize : 𝜻~t𝜻t/(𝟏T𝜻t)\widetilde{\boldsymbol{\rm\zeta}}_{t}\leftarrow\boldsymbol{\rm\zeta}_{t}/(\boldsymbol{\rm 1}^{T}\boldsymbol{\rm\zeta}_{t})
13:end for
14:𝝉1𝐮~1𝜻~1\boldsymbol{\rm\tau}_{1}\leftarrow\widetilde{\boldsymbol{\rm u}}_{1}\circ\widetilde{\boldsymbol{\rm\zeta}}_{1}
15:sample first time point
16:𝐪1𝝅0𝝉1\boldsymbol{\rm q}_{1}\leftarrow\boldsymbol{\rm\pi}_{0}\circ\boldsymbol{\rm\tau}_{1}, 𝐪~1𝐪1/(𝟏T𝐪)\widetilde{\boldsymbol{\rm q}}_{1}\leftarrow\boldsymbol{\rm q}_{1}/(\boldsymbol{\rm 1}^{\rm T}\boldsymbol{\rm q})
17:z1𝐪~1z_{1}\sim\widetilde{\boldsymbol{\rm q}}_{1}
18:sample other time points
19:for t=2,,Tt=2,\ldots,T do
20:  𝐪t𝝅zt1𝝉t\boldsymbol{\rm q}_{t}\leftarrow\boldsymbol{\rm\pi}_{z_{t-1}}\circ\boldsymbol{\rm\tau}_{t}, 𝐪~t𝐪t/(𝟏T𝐪t)\widetilde{\boldsymbol{\rm q}}_{t}\leftarrow\boldsymbol{\rm q}_{t}/(\boldsymbol{\rm 1}^{\rm T}\boldsymbol{\rm q}_{t})
21:  zt𝐪~tz_{t}\sim\widetilde{\boldsymbol{\rm q}}_{t}
22:end for

We can sample the states z_{1:T} from their joint distribution, also known as block sampling, via a similar recursive formulation. The conditional likelihood of the last T-t observations given the state at time t is

p(y_{t+1:T}\mid z_{t}=\ell)=\sum_{k=1}^{K}p(y_{t+1}\mid z_{t+1}=k)p(y_{t+2:T}\mid z_{t+1}=k)p(z_{t+1}=k\mid z_{t}=\ell), (23)

which depends recursively on the backward messages p(y_{t+2:T}\mid z_{t+1}=k) for each k\in\{1,\ldots,K\}. To sample z_{1:T} jointly we use the posterior distribution of the entire state sequence, which factors as

p(z_{1:T}\mid y_{1:T})=p(z_{T}\mid z_{T-1},y_{1:T})p(z_{T-1}\mid z_{T-2},y_{1:T})\cdots p(z_{2}\mid z_{1},y_{1:T})p(z_{1}\mid y_{1:T}). (24)

If we first sample z_{1}, we can condition on it to sample z_{2}, and continue in this fashion until we finish with z_{T}. The posterior for z_{t} is proportional to the product of the backward message, the likelihood of y_{t} given z_{t}, and the probability of z_{t} given z_{t-1},

p(z_{t}\mid y_{1:T})\propto p(y_{t+1:T}\mid z_{t})p(y_{t}\mid z_{t})p(z_{t}\mid z_{t-1}), (25)

where p(z_{t}\mid z_{t-1})\triangleq p(z_{1}) for t=1. A numerically stable recipe for this backward-filtering forward-sampling procedure is given in Algorithm 3.
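
The corresponding backward-filtering forward-sampling step can be sketched in the same style (Python with NumPy; the interface is illustrative). It first computes normalized backward messages and then draws z_{1:T} sequentially from Eq. (25).

import numpy as np

def backward_filter_forward_sample(log_lik, pi0, Pi, rng=None):
    # log_lik: (T, K) log p(y_t | z_t = k); pi0: (K,); Pi: (K, K)
    rng = np.random.default_rng() if rng is None else rng
    T, K = log_lik.shape
    u = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
    zeta = np.ones((T, K))               # zeta[t, k] proportional to p(y_{t+1:T} | z_t = k)
    for t in range(T - 2, -1, -1):
        zeta[t] = Pi @ (u[t + 1] * zeta[t + 1])
        zeta[t] /= zeta[t].sum()
    z = np.empty(T, dtype=int)
    q = pi0 * u[0] * zeta[0]             # Eq. (25) at t = 1
    z[0] = rng.choice(K, p=q / q.sum())
    for t in range(1, T):
        q = Pi[z[t - 1]] * u[t] * zeta[t]  # Eq. (25) for t > 1
        z[t] = rng.choice(K, p=q / q.sum())
    return z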

Appendix B Individual channel variables posterior

Sampling the variables associated with an individual channel i involves first sampling its active features \boldsymbol{\rm f}^{(i)} (while marginalizing z_{1:T}^{(i)}), then conditioning on these feature assignments to block sample the state sequence z_{1:T}^{(i)}, and finally sampling the transition distribution \boldsymbol{\rm\pi}^{(i)} given the feature indicators \boldsymbol{\rm f}^{(i)} and state sequence z^{(i)}_{1:T}. Explicit algorithms for this sampling are given in Wulsin [26, Section 4.2.1].

Channel marginal likelihood

Let 𝐢{1,,N}\boldsymbol{\rm i}^{\prime}\subseteq\{1,\ldots,N\} index the neighboring channels in the graph upon which channel ii is conditioned. The conditional likelihood of observation yt(i)y_{t}^{(i)} under AR model kk given the neighboring observations 𝐲t(𝐢)\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm i}^{\prime})} at time tt is

p(yt(i)𝐲~t(i),𝐲t(𝐢),zt(i)=k,𝐳t(𝐢),Zt,{𝐚k},{Δl})𝒩(μ~t,σ~t2)p\left(y_{t}^{(i)}\mid\boldsymbol{\rm\widetilde{y}}^{(i)}_{t},\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm i}^{\prime})},z^{(i)}_{t}=k,\boldsymbol{\rm z}^{(\boldsymbol{\rm i}^{\prime})}_{t},Z_{t},\{\boldsymbol{\rm a}_{k}\},\{\Delta_{l}\}\right)\propto{\cal N}\left(\widetilde{\mu}_{t},\widetilde{\sigma}_{t}^{2}\right) (26)

for

μ~t=𝐚kT𝐲~t(i)+ΔZt(i,𝐢)ΔZt1(𝐢,𝐢)(𝐲t(𝐢)𝐀𝐳(𝐢)𝐘~t(𝐢)),σ~t2=ΔZt(i,i)ΔZt(i,𝐢)ΔZt1(𝐢,𝐢)ΔZt(𝐢,i),\displaystyle\begin{aligned} \widetilde{\mu}_{t}&=\boldsymbol{\rm a}^{T}_{k}\boldsymbol{\rm\widetilde{y}}^{(i)}_{t}+\Delta_{Z_{t}}^{(i,\boldsymbol{\rm i}^{\prime})}\Delta_{Z_{t}}^{-1(\boldsymbol{\rm i}^{\prime},\boldsymbol{\rm i}^{\prime})}\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm i}^{\prime})}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}^{(\boldsymbol{\rm i}^{\prime})}}\boldsymbol{\rm\widetilde{Y}}_{t}^{(\boldsymbol{\rm i}^{\prime})}\right),\\ \widetilde{\sigma}_{t}^{2}&=\Delta_{Z_{t}}^{(i,i)}-\Delta_{Z_{t}}^{(i,\boldsymbol{\rm i}^{\prime})}\Delta_{Z_{t}}^{-1(\boldsymbol{\rm i}^{\prime},\boldsymbol{\rm i}^{\prime})}\Delta_{Z_{t}}^{(\boldsymbol{\rm i}^{\prime},i)},\end{aligned} (27)

which follows from the conditional distribution of the multivariate normal [27, pg. 579]. Using the forward-filtering scheme (see Algorithm 2) to marginalize over the exponentially many state sequences z1:T(i)z_{1:T}^{(i)}, we can calculate the channel marginal likelihood,

p(y1:T(i)𝐲1:T(𝐢),𝐳1:T(𝐢),Z1:T,𝐟(i),𝜼(i),{𝐚k},{Δl}),p\left(y^{(i)}_{1:T}\mid\boldsymbol{\rm y}_{1:T}^{(\boldsymbol{\rm i}^{\prime})},\boldsymbol{\rm z}^{(\boldsymbol{\rm i}^{\prime})}_{1:T},Z_{1:T},\boldsymbol{\rm f}^{(i)},\boldsymbol{\rm\eta}^{(i)},\{\boldsymbol{\rm a}_{k}\},\{\Delta_{l}\}\right), (28)

of channel ii’s observations over all t=1,,Tt=1,\ldots,T given the observations 𝐲1:T(𝐢)\boldsymbol{\rm y}_{1:T}^{(\boldsymbol{\rm i}^{\prime})} and the assigned states 𝐳1:T(𝐢)\boldsymbol{\rm z}_{1:T}^{(\boldsymbol{\rm i}^{\prime})} of neighboring channels 𝐢\boldsymbol{\rm i}^{\prime} and given the event state sequence Z1:TZ_{1:T}. As previously discussed, taking the non-zero elements of the infinite-dimensional transition distributions 𝝅(i)\boldsymbol{\rm\pi}^{(i)}, derived from 𝐟(i)\boldsymbol{\rm f}^{(i)} and 𝜼(i)\boldsymbol{\rm\eta}^{(i)} as in Eq. (9), yields a set of K(i)K^{(i)}-dimensional active feature transition distributions 𝝅~={𝝅~j}\widetilde{\boldsymbol{\rm\pi}}=\{\widetilde{\boldsymbol{\rm\pi}}_{j}\}, reducing this marginalization to a series of matrix-vector products.
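
For concreteness, the per-time-point conditional likelihood of Eqs. (26)-(27) can be sketched as follows (Python with NumPy; the argument names, in particular the vector of the neighbors' AR predictions, are illustrative and would be supplied by the surrounding sampler).

import numpy as np

def channel_conditional_loglik(y_i, ytil_i, a_k, y_nb, pred_nb, Delta, i, nb):
    # y_i     : scalar observation y_t^(i)
    # ytil_i  : (r,) lagged observations for channel i
    # a_k     : (r,) AR coefficients of the assigned state k
    # y_nb    : (m,) neighboring observations y_t^(i')
    # pred_nb : (m,) neighbors' AR predictions A_z Ytilde_t^(i')
    # Delta   : (N, N) innovation covariance Delta_{Z_t}; i, nb: channel indices
    D_in = Delta[i, nb]                          # Delta^(i, i')
    D_nn = Delta[np.ix_(nb, nb)]                 # Delta^(i', i')
    gain = np.linalg.solve(D_nn, D_in)           # Delta^{-1(i',i')} Delta^(i',i)
    mu = a_k @ ytil_i + gain @ (y_nb - pred_nb)  # conditional mean, Eq. (27)
    var = Delta[i, i] - gain @ D_in              # conditional variance, Eq. (27)
    return -0.5 * (np.log(2 * np.pi * var) + (y_i - mu) ** 2 / var)

Evaluating this function for every candidate state k at time t yields the likelihood vector used by the forward-filtering recursion above.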

Sampling active features, 𝐟(i)\boldsymbol{\rm f}^{(i)}

We briefly describe the active feature sampling scheme given in detail by Fox et al. [5]. Recall that for our HIW-spatial BP-AR-HMM, we need to condition on neighboring channel state sequences 𝐳1:T(𝐢)\boldsymbol{\rm z}^{(\boldsymbol{\rm i}^{\prime})}_{1:T} and event state sequences Z1:TZ_{1:T}. Sampling the feature indicators 𝐟(i)\boldsymbol{\rm f}^{(i)} for channel ii via the Indian buffet process (IBP) involves considering those features shared by other channels and those unique to channel ii. Let K+=k=1K𝟏(fk(1)fk(N))K_{+}=\sum_{k=1}^{K}\boldsymbol{\rm 1}\left(f^{(1)}_{k}\vee\cdots\vee f^{(N)}_{k}\right) denote the total number of active features used by at least one of the channels. We consider the set of shared features across channels not including those specific to channel ii as 𝒮(i){1,,K+}{\cal S}^{(-i)}\subseteq\{1,\ldots,K_{+}\} and the set of unique features for channel ii as 𝒰(i){1,,K+}/𝒮(i){\cal U}^{(i)}\subseteq\{1,\ldots,K_{+}\}/{\cal S}^{(-i)}.

Shared features

The posterior for each shared feature k𝒮(i)k\in{\cal S}^{(-i)} for channel ii is given by

p(fk(i)y1:T(i),𝐲1:T(𝐢),𝐳1:T(𝐢),Z1:T,𝐅ik,𝐟kk(i),𝜼(i),{𝐚k},{Δl})p(fk(i)𝐅ik)p(y1:T(i)𝐲1:T(𝐢),𝐳1:T(𝐢),Z1:T,𝐟kk(i),fk(i),𝜼(i),{𝐚k},{Δl}),p\left(f^{(i)}_{k}\mid y^{(i)}_{1:T},\boldsymbol{\rm y}^{(\boldsymbol{\rm i}^{\prime})}_{1:T},\boldsymbol{\rm z}^{(\boldsymbol{\rm i}^{\prime})}_{1:T},Z_{1:T},\boldsymbol{\rm F}^{-ik},\boldsymbol{\rm f}^{(i)}_{k^{\prime}\neq k},\boldsymbol{\rm\eta}^{(i)},\{\boldsymbol{\rm a}_{k}\},\{\Delta_{l}\}\right)\propto\\ p\left(f^{(i)}_{k}\mid\boldsymbol{\rm F}^{-ik}\right)p\left(y^{(i)}_{1:T}\mid\boldsymbol{\rm y}^{(\boldsymbol{\rm i}^{\prime})}_{1:T},\boldsymbol{\rm z}^{(\boldsymbol{\rm i}^{\prime})}_{1:T},Z_{1:T},\boldsymbol{\rm f}^{(i)}_{k^{\prime}\neq k},f^{(i)}_{k},\boldsymbol{\rm\eta}^{(i)},\{\boldsymbol{\rm a}_{k}\},\{\Delta_{l}\}\right), (29)

where the marginal likelihood of y1:T(i)y^{(i)}_{1:T} term (see Eq. 28) follows from the sum-product algorithm. Recalling the form of the IBP posterior predictive distribution, we have p(fk(i)=1𝐅ik)=mk(i)/Np\left(f^{(i)}_{k}=1\mid\boldsymbol{\rm F}^{-ik}\right)=m^{(-i)}_{k}/N, where mk(i)m^{(-i)}_{k} denotes the number of other channels that use feature kk. We use this posterior to formulate a Metropolis-Hastings proposal that flips the current indicator value fk(i)f^{(i)}_{k} to its complement f¯k(i)\bar{f}^{(i)}_{k} with probability ρ(f¯k(i)fk(i))\rho(\bar{f}^{(i)}_{k}\mid f^{(i)}_{k}),

fk(i)={f¯k(i), w.p.ρ(f¯k(i)fk(i)),fk(i),otherwise,\displaystyle f^{(i)}_{k}=\begin{cases}\bar{f}^{(i)}_{k},&\mbox{ w.p.}\quad\rho(\bar{f}^{(i)}_{k}\mid f^{(i)}_{k}),\\ f^{(i)}_{k},&\mbox{otherwise},\\ \end{cases} (30)

where

ρ(f¯k(i)fk(i))=min(p(f¯k(i)y1:T(i),𝐲1:T(𝐢),𝐳1:T(𝐢),Z1:T,𝐅ik,𝐟kk(i),𝜼(i),{𝐚k},{Δl},)p(fk(i)y1:T(i),𝐲1:T(𝐢),𝐳1:T(𝐢),Z1:T,𝐅ik,𝐟kk(i),𝜼(i),{𝐚k},{Δl},),1).\rho(\bar{f}^{(i)}_{k}\mid f^{(i)}_{k})=\min\left(\frac{p\left(\bar{f}^{(i)}_{k}\mid y^{(i)}_{1:T},\boldsymbol{\rm y}^{(\boldsymbol{\rm i}^{\prime})}_{1:T},\boldsymbol{\rm z}^{(\boldsymbol{\rm i}^{\prime})}_{1:T},Z_{1:T},\boldsymbol{\rm F}^{-ik},\boldsymbol{\rm f}^{(i)}_{k^{\prime}\neq k},\boldsymbol{\rm\eta}^{(i)},\{\boldsymbol{\rm a}_{k}\},\{\Delta_{l}\},\right)}{p\left(f^{(i)}_{k}\mid y^{(i)}_{1:T},\boldsymbol{\rm y}^{(\boldsymbol{\rm i}^{\prime})}_{1:T},\boldsymbol{\rm z}^{(\boldsymbol{\rm i}^{\prime})}_{1:T},Z_{1:T},\boldsymbol{\rm F}^{-ik},\boldsymbol{\rm f}^{(i)}_{k^{\prime}\neq k},\boldsymbol{\rm\eta}^{(i)},\{\boldsymbol{\rm a}_{k}\},\{\Delta_{l}\},\right)},1\right). (31)
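
In code, the flip proposal of Eqs. (29)-(31) reduces to comparing two posterior log probabilities. The sketch below (Python with NumPy) assumes the caller supplies a function evaluating the channel marginal log likelihood of Eq. (28) for a given value of f_k^{(i)}; that callable, and the argument names, are ours. Since k is a shared feature, m_k^{(-i)} is at least 1, so both prior probabilities are strictly positive.

import numpy as np

def mh_flip_shared_feature(f_ik, m_k_other, N, channel_log_marg_lik, rng=None):
    # f_ik                 : current indicator f_k^(i), 0 or 1
    # m_k_other            : m_k^(-i), number of other channels using feature k
    # N                    : number of channels
    # channel_log_marg_lik : callable f -> log of Eq. (28) with f_k^(i) = f
    rng = np.random.default_rng() if rng is None else rng
    log_prior = {1: np.log(m_k_other / N), 0: np.log1p(-m_k_other / N)}
    f_bar = 1 - f_ik
    log_ratio = (log_prior[f_bar] + channel_log_marg_lik(f_bar)
                 - log_prior[f_ik] - channel_log_marg_lik(f_ik))
    accept = np.log(rng.random()) < min(0.0, log_ratio)
    return f_bar if accept else f_ik
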
Unique features

We either propose a new feature or remove a unique feature for channel ii using a birth and death reversible jump MCMC sampler [28, 29, 30] (see Fox et al. [16] for details relevant to the BP-AR-HMM). We denote the number of unique features for channel ii as n(i)=|𝒰(i)|n^{(i)}=|{\cal U}^{(i)}|. We define the vector of shared feature indicators as 𝐟(i)=𝐟kk𝒮(i)(i)\boldsymbol{\rm f}^{(i)}_{-}=\boldsymbol{\rm f}^{(i)}_{k^{\prime}\mid k^{\prime}\in{\cal S}^{(-i)}} and that for unique feature indicators as 𝐟+(i)=𝐟kk𝒰(i)(i)\boldsymbol{\rm f}^{(i)}_{+}=\boldsymbol{\rm f}^{(i)}_{k^{\prime}\mid k^{\prime}\in{\cal U}^{(i)}}, which together (𝐟(i),𝐟+(i))\left(\boldsymbol{\rm f}^{(i)}_{-},\boldsymbol{\rm f}^{(i)}_{+}\right) define the full feature indicator vector 𝐟(i)\boldsymbol{\rm f}^{(i)} for channel ii. Similarly, 𝐚+(i)\boldsymbol{\rm a}^{(i)}_{+} and 𝜼+(i)\boldsymbol{\rm\eta}^{(i)}_{+} describe the model dynamics and transition parameters associated with these unique features. We propose a new unique feature vector 𝐟+{\boldsymbol{\rm f}^{\prime}}_{+} and corresponding model dynamics 𝐚+{\boldsymbol{\rm a}^{\prime}}_{+} and transition parameters 𝜼+{\boldsymbol{\rm\eta}^{\prime}}_{+} (sampled from their priors in the case of feature birth) with a proposal distribution of

q(𝐟+,𝐚+,𝜼+𝐟+(i),{𝐚k}+(i),𝜼+(i))=q(𝐟+𝐟+(i))q(𝐚+𝐟+,𝐟+(i),{𝐚k}+(i))q(𝜼+𝐟+,𝐟+(i),𝜼+(i)).q\left(\boldsymbol{\rm f}^{\prime}_{+},\boldsymbol{\rm a}^{\prime}_{+},\boldsymbol{\rm\eta}^{\prime}_{+}\mid\boldsymbol{\rm f}^{(i)}_{+},\{\boldsymbol{\rm a}_{k}\}^{(i)}_{+},\boldsymbol{\rm\eta}^{(i)}_{+}\right)=\\ q\left(\boldsymbol{\rm f}^{\prime}_{+}\mid\boldsymbol{\rm f}^{(i)}_{+}\right)q\left({\boldsymbol{\rm a}^{\prime}}_{+}\mid\boldsymbol{\rm f}^{\prime}_{+},\boldsymbol{\rm f}^{(i)}_{+},\{\boldsymbol{\rm a}_{k}\}^{(i)}_{+}\right)q\left({\boldsymbol{\rm\eta}^{\prime}}_{+}\mid\boldsymbol{\rm f}^{\prime}_{+},\boldsymbol{\rm f}^{(i)}_{+},\boldsymbol{\rm\eta}^{(i)}_{+}\right). (32)

A new unique feature is proposed with probability 0.50.5 and each existing unique feature is removed with probability 0.5/n(i)0.5/n^{(i)}. This proposal is accepted with probability

ρ(𝐟+,𝐚+,𝜼+𝐟+(i),{𝐚k}+(i),𝜼+(i))=min(p(y1:T(i)𝐲1:T(𝐢),𝐳1:T(𝐢),[𝐟(i)𝐟+],𝜼(i),𝜼+,{𝐚k},{Δl})p(y1:T(i)𝐲1:T(𝐢),𝐳1:T(𝐢),[𝐟(i)𝐟+(i)],𝜼(i),{𝐚k},{Δl})Poisson(niαc/N)Poisson(niαc/N)q(𝐟+(i)𝐟+)q(𝐟+𝐟+(i)),1).\rho\left(\boldsymbol{\rm f}_{+}^{\prime},\boldsymbol{\rm a}_{+}^{\prime},\boldsymbol{\rm\eta}_{+}^{\prime}\mid\boldsymbol{\rm f}^{(i)}_{+},\{\boldsymbol{\rm a}_{k}\}^{(i)}_{+},\boldsymbol{\rm\eta}^{(i)}_{+}\right)=\\ \min\left(\frac{p\left(y^{(i)}_{1:T}\mid\boldsymbol{\rm y}^{(\boldsymbol{\rm i}^{\prime})}_{1:T},\boldsymbol{\rm z}^{(\boldsymbol{\rm i}^{\prime})}_{1:T},[\boldsymbol{\rm f}^{(i)}_{-}\ \boldsymbol{\rm f}^{\prime}_{+}],\boldsymbol{\rm\eta}^{(i)},\boldsymbol{\rm\eta}_{+}^{\prime},\{\boldsymbol{\rm a}_{k}\},\{\Delta_{l}\}\right)}{p\left(y^{(i)}_{1:T}\mid\boldsymbol{\rm y}^{(\boldsymbol{\rm i}^{\prime})}_{1:T},\boldsymbol{\rm z}^{(\boldsymbol{\rm i}^{\prime})}_{1:T},[\boldsymbol{\rm f}^{(i)}_{-}\ \boldsymbol{\rm f}^{(i)}_{+}],\boldsymbol{\rm\eta}^{(i)},\{\boldsymbol{\rm a}_{k}\},\{\Delta_{l}\}\right)}\cdot\right.\\ \left.\frac{\mbox{Poisson}\left(n_{i}^{\prime}\mid\alpha_{c}/N\right)}{\mbox{Poisson}\left(n_{i}\mid\alpha_{c}/N\right)}\frac{q\left(\boldsymbol{\rm f}^{(i)}_{+}\mid\boldsymbol{\rm f}^{\prime}_{+}\right)}{q\left(\boldsymbol{\rm f}^{\prime}_{+}\mid\boldsymbol{\rm f}^{(i)}_{+}\right)},1\right). (33)

Channel state sequence, z1:T(i)z_{1:T}^{(i)}

We block sample the state sequence for all the time points of channel ii, given that channel’s feature-constrained transition distributions 𝝅(i)\boldsymbol{\rm\pi}^{(i)}, the state parameters {𝐚k}\{\boldsymbol{\rm a}_{k}\}, the observations y1:T(i)y^{(i)}_{1:T}, and the neighboring observations 𝐲1:T(𝐢)\boldsymbol{\rm y}^{(\boldsymbol{\rm i}^{\prime})}_{1:T} and current states 𝐳1:T(𝐢)\boldsymbol{\rm z}^{(\boldsymbol{\rm i}^{\prime})}_{1:T}. The joint probability of the state sequence z1:T(i)z^{(i)}_{1:T} is given by

p(z1:T(i)y1:T(i),𝐲1:T(𝐢),𝐳1:T(𝐢),𝐟(i),𝜼(i),{𝐚k},{Δl})=p(z1(i)y1(i),𝐲1(𝐢),𝐳1(𝐢),𝐟(i),𝜼(i),{𝐚k},{Δl})t=2Tp(zt(i)yt:T(i),𝐲t:T(𝐢),zt1(i),𝐳t:T(𝐢),𝐟(i),𝜼(i),{𝐚k},{Δl}).p\left(z^{(i)}_{1:T}\mid y^{(i)}_{1:T},\boldsymbol{\rm y}^{(\boldsymbol{\rm i}^{\prime})}_{1:T},\boldsymbol{\rm z}^{(\boldsymbol{\rm i}^{\prime})}_{1:T},\boldsymbol{\rm f}^{(i)},\boldsymbol{\rm\eta}^{(i)},\{\boldsymbol{\rm a}_{k}\},\{\Delta_{l}\}\right)=\\ p\left(z^{(i)}_{1}\mid y^{(i)}_{1},\boldsymbol{\rm y}^{(\boldsymbol{\rm i}^{\prime})}_{1},\boldsymbol{\rm z}^{(\boldsymbol{\rm i}^{\prime})}_{1},\boldsymbol{\rm f}^{(i)},\boldsymbol{\rm\eta}^{(i)},\{\boldsymbol{\rm a}_{k}\},\{\Delta_{l}\}\right)\cdot\\ \prod_{t=2}^{T}p\left(z^{(i)}_{t}\mid y^{(i)}_{t:T},\boldsymbol{\rm y}^{(\boldsymbol{\rm i}^{\prime})}_{t:T},z^{(i)}_{t-1},\boldsymbol{\rm z}^{(\boldsymbol{\rm i}^{\prime})}_{t:T},\boldsymbol{\rm f}^{(i)},\boldsymbol{\rm\eta}^{(i)},\{\boldsymbol{\rm a}_{k}\},\{\Delta_{l}\}\right). (34)

Again following the backward filtering forward sampling scheme (Algorithm 3), at each time point tt we sample state zt(i)z^{(i)}_{t} conditioned on zt1(i)z^{(i)}_{t-1} by marginalizing zt+1:T(i)z^{(i)}_{t+1:T}. The conditional probability of zt(i)z^{(i)}_{t} is given by

p(zt(i)yt:T(i),𝐲t:T(𝐢),zt1(i),Z1:T,𝐳t:T(𝐢),𝐟(i),𝜼(i),{𝐚k},{Δl})Multi(𝝅~zt1(i)(i)𝐮t(i)𝝍t),p\left(z^{(i)}_{t}\mid y^{(i)}_{t:T},\boldsymbol{\rm y}^{(\boldsymbol{\rm i}^{\prime})}_{t:T},z^{(i)}_{t-1},Z_{1:T},\boldsymbol{\rm z}^{(\boldsymbol{\rm i}^{\prime})}_{t:T},\boldsymbol{\rm f}^{(i)},\boldsymbol{\rm\eta}^{(i)},\{\boldsymbol{\rm a}_{k}\},\{\Delta_{l}\}\right)\propto\\ \mbox{Multi}\left(\boldsymbol{\rm\widetilde{\pi}}^{(i)}_{z^{(i)}_{t-1}}\circ\boldsymbol{\rm u}^{(i)}_{t}\circ\boldsymbol{\rm\psi}_{t}\right), (35)

where 𝝅~zt1(i)(i)\boldsymbol{\rm\widetilde{\pi}}^{(i)}_{z^{(i)}_{t-1}} is the transition distribution given the assigned state at t1t-1, 𝐮t(i)K(i)\boldsymbol{\rm u}^{(i)}_{t}\in\mathbb{R}^{K^{(i)}} is the vector of likelihoods under each possible state at time tt (as in Eq. (26)), and 𝝍tK(i)\boldsymbol{\rm\psi}_{t}\in\mathbb{R}^{K^{(i)}} is the vector of backwards messages (see Eq. (23)) from time point t+1t+1 to tt.

Channel transition parameters, 𝜼(i)\boldsymbol{\rm\eta}^{(i)}

Following [20, Supplement], the posterior for the transition variable ηjk(i)\eta^{(i)}_{jk} is given by

p(ηjk(i)z1:T(i),fk(i))(ηjk(i))njk(i)+γc+κcδ(j,k)1eηjk(i)kfk(i)=1ηjk(i),p(\eta^{(i)}_{jk}\mid z^{(i)}_{1:T},f^{(i)}_{k})\propto\frac{(\eta_{jk}^{(i)})^{n^{(i)}_{jk}+\gamma_{c}+\kappa_{c}\delta(j,k)-1}e^{\eta^{(i)}_{jk}}}{\sum_{k^{\prime}\mid f^{(i)}_{k}=1}\eta^{(i)}_{jk^{\prime}}}, (36)

where njk(i)n^{(i)}_{jk} denotes the number of times channel ii transitions from state jj to state kk. We can sample from this posterior via two auxiliary variables,

𝜼¯j(i)Dir(γc+κc𝐞j+𝐧j(i))Cj(i)Gamma(Kγc+κc,1)𝜼j(i)=Cj(i)𝜼¯j(i).\displaystyle\begin{aligned} \bar{\boldsymbol{\rm\eta}}^{(i)}_{j}&\sim\mbox{Dir}(\gamma_{c}+\kappa_{c}\boldsymbol{\rm e}_{j}+\boldsymbol{\rm n}^{(i)}_{j})\\ C_{j}^{(i)}&\sim\mbox{Gamma}(K\gamma_{c}+\kappa_{c},1)\\ \boldsymbol{\rm\eta}^{(i)}_{j}&=C^{(i)}_{j}\bar{\boldsymbol{\rm\eta}}^{(i)}_{j}.\end{aligned} (37)
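
A direct translation of this auxiliary-variable construction is a short sampler per row (Python with NumPy; the transition-count vector is assumed to come from the current state sequence z_{1:T}^{(i)}).

import numpy as np

def sample_eta_row(n_j, j, gamma_c, kappa_c, rng=None):
    # n_j : (K,) counts of transitions from state j to each state for this channel
    rng = np.random.default_rng() if rng is None else rng
    K = n_j.shape[0]
    alpha = gamma_c + n_j.astype(float)
    alpha[j] += kappa_c                        # sticky self-transition bias
    eta_bar = rng.dirichlet(alpha)             # normalized direction, Eq. (37)
    C = rng.gamma(K * gamma_c + kappa_c, 1.0)  # overall scale, Eq. (37)
    return C * eta_bar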

Appendix C Event state variables posterior

Since we model the event state process with a (truncated approximation to the) HDP-HMM, inference is more straightforward than for the channel states. We block sample the event state sequence Z_{1:T} and then sample the event state transition distributions \boldsymbol{\rm\phi}.

Event marginal likelihood

Let 𝐳t\boldsymbol{\rm z}_{t} denote the vector of NN channel states at time tt. Since the space of 𝐳t\boldsymbol{\rm z}_{t} is exponentially large, we cannot integrate it out to compute the marginal conditional likelihood of the data given the event state sequence Z1:TZ_{1:T} (and model parameters). Instead, we consider the conditional likelihood of an observation at time tt given channel states 𝐳t\boldsymbol{\rm z}_{t} and event state Zt=lZ_{t}=l,

p(𝐲t𝐘~t,𝐳t,Zt=l,{𝐚k},Δl)𝒩(𝐀𝐳t𝐘~t,Δl).p(\boldsymbol{\rm y}_{t}\mid\boldsymbol{\rm\widetilde{Y}}_{t},\boldsymbol{\rm z}_{t},Z_{t}=l,\{\boldsymbol{\rm a}_{k}\},\Delta_{l})\propto{\cal N}(\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}\boldsymbol{\rm\widetilde{Y}}_{t},\Delta_{l}). (38)

Recalling Eq. (3), we see that this conditional likelihood of 𝐲t\boldsymbol{\rm y}_{t} is equivalent to a zero-mean multivariate normal model on the channel innovations ϵt\boldsymbol{\rm\epsilon}_{t},

p(ϵtZt=l,Δl)𝒩(𝟎,Δl).p(\boldsymbol{\rm\epsilon}_{t}\mid Z_{t}=l,\Delta_{l})\propto{\cal N}(\boldsymbol{\rm 0},\Delta_{l}).

As with the channel marginal likelihoods, we use the forward-filtering algorithm (see Algorithm 2) to marginalize over the possible event state sequences Z1:TZ_{1:T}, yielding a likelihood conditional on the channel states 𝐳t\boldsymbol{\rm z}_{t} and autoregressive parameters {𝐚k}\{\boldsymbol{\rm a}_{k}\}, in addition to the event transition distribution ϕ\boldsymbol{\rm\phi} and event state covariances {Δl}\{\Delta_{l}\},

p(𝐲1:T𝐳1:T,ϕ,{𝐚k},{Δl}).p\left(\boldsymbol{\rm y}_{1:T}\mid\boldsymbol{\rm z}_{1:T},\boldsymbol{\rm\phi},\{\boldsymbol{\rm a}_{k}\},\{\Delta_{l}\}\right). (39)

Event state sequence, Z1:TZ_{1:T}

The mechanics of sampling the event state sequence Z1:TZ_{1:T} directly parallel those of sampling the individual channel state sequences z1:T(i)z_{1:T}^{(i)}. The joint probability of the event state sequence is given by

p(Z1:T𝐲1:T,𝐳1:T,ϕ,{𝐚k},{Δl})p(Z1𝐲1,𝐳1,ϕ,{𝐚k},{Δl})t=2Tp(Zt𝐲t:T,𝐳t:T,Zt1,ϕ,{𝐚k},{Δl}).p(Z_{1:T}\mid\boldsymbol{\rm y}_{1:T},\boldsymbol{\rm z}_{1:T},\boldsymbol{\rm\phi},\{\boldsymbol{\rm a}_{k}\},\{\Delta_{l}\})\propto\\ p(Z_{1}\mid\boldsymbol{\rm y}_{1},\boldsymbol{\rm z}_{1},\boldsymbol{\rm\phi},\{\boldsymbol{\rm a}_{k}\},\{\Delta_{l}\})\prod_{t=2}^{T}p(Z_{t}\mid\boldsymbol{\rm y}_{t:T},\boldsymbol{\rm z}_{t:T},Z_{t-1},\boldsymbol{\rm\phi},\{\boldsymbol{\rm a}_{k}\},\{\Delta_{l}\}). (40)

We again use the backward filtering forward sampling scheme of the sum-product algorithm to block sample each event state whose conditional probability distribution over the LL states is given by

p(Zt𝐘~t,𝐳t,ϕ,{𝐚k},{Δl})Multi(ϕZt1𝐯t𝝍t),p\left(Z_{t}\mid\boldsymbol{\rm\widetilde{Y}}_{t},\boldsymbol{\rm z}_{t},\boldsymbol{\rm\phi},\{\boldsymbol{\rm a}_{k}\},\{\Delta_{l}\}\right)\propto\mbox{Multi}\left(\boldsymbol{\rm\phi}_{Z_{t-1}}\circ\boldsymbol{\rm v}_{t}\circ\boldsymbol{\rm\psi}_{t}\right), (41)

where ϕZt1\boldsymbol{\rm\phi}_{Z_{t-1}} is the transition distribution given the assigned state at t1t-1, 𝐯tL\boldsymbol{\rm v}_{t}\in\mathbb{R}^{L} is the vector of likelihoods under each of the LL possible states at time tt (as in Eq. (38)), and 𝝍tL\boldsymbol{\rm\psi}_{t}\in\mathbb{R}^{L} is again the vector of backwards messages from time point t+1t+1 to tt. \circ denotes element-wise product.

Event transition parameters, ϕ\boldsymbol{\rm\phi}

The Dirichlet posterior for the event state ll’s transition distribution ϕl\boldsymbol{\rm\phi}_{l} simply involves the transition counts 𝐧l\boldsymbol{\rm n}_{l} from event state ll to all LL states,

p(ϕlZ1:T,𝜷)Dir(αe𝜷+𝐞lκe+𝐧l),p\left(\boldsymbol{\rm\phi}_{l}\mid Z_{1:T},\boldsymbol{\rm\beta}\right)\propto\mbox{Dir}(\alpha_{e}\boldsymbol{\rm\beta}+\boldsymbol{\rm\rm e}_{l}\kappa_{e}+\boldsymbol{\rm n}_{l}), (42)

for global weights 𝜷\boldsymbol{\rm\beta}, concentration parameter αe\alpha_{e}, and self-transition parameter κe\kappa_{e}.
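
Sampling \boldsymbol{\rm\phi}_{l} is then a single Dirichlet draw per event state; a minimal sketch (Python with NumPy, illustrative argument names) is

import numpy as np

def sample_event_transition_row(n_l, l, beta, alpha_e, kappa_e, rng=None):
    # n_l  : (L,) counts of transitions from event state l to each event state
    # beta : (L,) global transition weights
    rng = np.random.default_rng() if rng is None else rng
    conc = alpha_e * beta + n_l.astype(float)
    conc[l] += kappa_e                   # sticky self-transition mass, Eq. (42)
    return rng.dirichlet(conc)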

Global transition parameters, 𝜷\boldsymbol{\rm\beta}

The Dirichlet posterior of the global transition distribution 𝜷\boldsymbol{\rm\beta} involves the auxiliary variables (m¯1,,m¯L)(\bar{m}_{\cdot 1},\ldots,\bar{m}_{\cdot L}),

p(𝜷Z1:T)Dir(γe/L+m¯1,,γe/L+m¯L),p(\boldsymbol{\rm\beta}\mid Z_{1:T})\propto\mbox{Dir}(\gamma_{e}/L+\bar{m}_{\cdot 1},\ldots,\gamma_{e}/L+\bar{m}_{\cdot L}), (43)

where these auxiliary variables are defined as

m¯ll={mll,llmllwl,l=lmll=r=1nllθrθrBer(αeβl+κeδ(l,l)αeβl+δ(l,l)+r),wlBin(mll,ρeρe+βl(1ρe))\displaystyle\begin{aligned} \bar{m}_{ll^{\prime}}&=\begin{cases}m_{ll^{\prime}},&l\neq l^{\prime}\\ m_{ll}-w_{l},&l=l^{\prime}\end{cases}\\ m_{ll^{\prime}}&=\sum_{r=1}^{n_{ll^{\prime}}}\theta_{r}\\ \theta_{r}&\sim\mbox{Ber}\left(\frac{\alpha_{e}\beta_{l}+\kappa_{e}\delta(l,l^{\prime})}{\alpha_{e}\beta_{l}+\delta(l,l^{\prime})+r}\right),\\ w_{l}&\sim\mbox{Bin}\left(m_{ll^{\prime}},\frac{\rho_{e}}{\rho_{e}+\beta_{l}(1-\rho_{e})}\right)\end{aligned} (44)

and ml=lmllm_{\cdot l^{\prime}}=\sum_{l}m_{ll^{\prime}}. See Fox [31, Appendix A] for full derivations.

Appendix D Hyperparameters posterior

Below we give brief descriptions for the MCMC sampling of the hyperparameters in our model. Full derivations are given in Fox [31, Section 5.2.3, Appendix C].

Channel dynamics model hyperparameters, γc\gamma_{c}, κc\kappa_{c}

We use Metropolis-Hastings steps to propose a new value γc\gamma_{c}^{\prime} from gamma distributions with fixed variance σγc2\sigma^{2}_{\gamma_{c}} and accept with probability min(r(γcγc),1)\min(r(\gamma_{c}^{\prime}\mid\gamma_{c}),1),

r(γcγc)\displaystyle r(\gamma_{c}^{\prime}\mid\gamma_{c}) =p({𝝅(i)}γc,κ,{𝐟(i)})p(γcγc2/σγc2,γc/σγc2)p(γcγc,σγc2)p({𝝅(i)}γc,κ,{𝐟(i)})q(γcγc2/σγc2,γc/σγc2)q(γcγc,σγc2)\displaystyle=\frac{p(\{\boldsymbol{\rm\pi}^{(i)}\}\mid\gamma_{c}^{\prime},\kappa,\{\boldsymbol{\rm f}^{(i)}\})p(\gamma_{c}^{\prime}\mid\gamma_{c}^{2}/\sigma^{2}_{\gamma_{c}},\gamma_{c}/\sigma^{2}_{\gamma_{c}})p(\gamma_{c}\mid\gamma_{c}^{\prime},\sigma^{2}_{\gamma_{c}})}{p(\{\boldsymbol{\rm\pi}^{(i)}\}\mid\gamma_{c},\kappa,\{\boldsymbol{\rm f}^{(i)}\})q(\gamma_{c}\mid\gamma_{c}^{2}/\sigma^{2}_{\gamma_{c}},\gamma_{c}/\sigma^{2}_{\gamma_{c}})q(\gamma_{c}^{\prime}\mid\gamma_{c},\sigma^{2}_{\gamma_{c}})}
=p({𝝅(i)}γc,κ,{𝐟(i)})p({𝝅(i)}γc,κ,{𝐟(i)})Γ(ν)γcννaΓ(ν)γcννaexp(b(γcγc)σγc2(νν)),\displaystyle=\frac{p(\{\boldsymbol{\rm\pi}^{(i)}\}\mid\gamma_{c}^{\prime},\kappa,\{\boldsymbol{\rm f}^{(i)}\})}{p(\{\boldsymbol{\rm\pi}^{(i)}\}\mid\gamma_{c},\kappa,\{\boldsymbol{\rm f}^{(i)}\})}\frac{\Gamma(\nu)\gamma_{c}^{\nu^{\prime}-\nu-a}}{\Gamma(\nu^{\prime})\gamma_{c}^{\nu-\nu^{\prime}-a}}\exp\left(-b(\gamma_{c}^{\prime}-\gamma_{c})\sigma^{2(\nu-\nu^{\prime})}_{\gamma_{c}}\right), (45)

where ν=γc2/σγc2\nu=\gamma_{c}^{2}/\sigma^{2}_{\gamma_{c}}, ν=γc2/σγc2\nu^{\prime}={\gamma_{c}^{\prime}}^{2}/\sigma^{2}_{\gamma_{c}}, and we have a Gamma(a,b)\mbox{Gamma}(a,b) prior on γc\gamma_{c}. The likelihood term p({𝝅(i)}γc,κ,{𝐟(i)})p(\{\boldsymbol{\rm\pi}^{(i)}\}\mid\gamma_{c}^{\prime},\kappa,\{\boldsymbol{\rm f}^{(i)}\}) follows from the Dirichlet distribution and is given by

p({𝝅(i)}γc,κ,{𝐟(i)})=i=1Nk=iK(i)(Γ(γcK(i)+κc)(kK(i)1Γ(γc))Γ(γc+κc)j=1K(i)(π~kj(i))γc+κcδ(k,j)1),p(\{\boldsymbol{\rm\pi}^{(i)}\}\mid\gamma_{c}^{\prime},\kappa,\{\boldsymbol{\rm f}^{(i)}\})=\\ \prod_{i=1}^{N}\prod_{k=i}^{K^{(i)}}\left(\frac{\Gamma(\gamma_{c}^{\prime}K^{(i)}+\kappa_{c})}{\left(\prod_{k^{\prime}}^{K^{(i)}-1}\Gamma(\gamma_{c}^{\prime})\right)\Gamma(\gamma_{c}^{\prime}+\kappa_{c})}\prod_{j=1}^{K^{(i)}}\left(\widetilde{\pi}_{kj}^{(i)}\right)^{\gamma_{c}^{\prime}+\kappa_{c}\delta(k,j)-1}\right), (46)

where the corresponding expression for p(\{\boldsymbol{\rm\pi}^{(i)}\}\mid\gamma_{c},\kappa,\{\boldsymbol{\rm f}^{(i)}\}) is analogous. Recall that the transition parameters \{\boldsymbol{\rm\pi}^{(i)}\} are independent over i, so their likelihoods multiply. The proposal and acceptance ratio for \kappa_{c} are constructed in the same way.
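
The following sketch (Python with NumPy and SciPy) illustrates one such Metropolis-Hastings step with a mean-matched gamma proposal of fixed variance; the likelihood callable, which should evaluate Eq. (46) as a function of \gamma_{c}, is assumed to be provided by the caller.

import numpy as np
from scipy import stats

def mh_step_gamma_c(gamma_c, log_lik, a, b, sigma2, rng=None):
    # log_lik : callable gamma -> log p({pi^(i)} | gamma, kappa_c, {f^(i)})
    # a, b    : Gamma(a, b) prior on gamma_c; sigma2 : fixed proposal variance
    rng = np.random.default_rng() if rng is None else rng
    def shape_rate(mean):                          # gamma proposal with this mean
        return mean ** 2 / sigma2, mean / sigma2
    sh_f, ra_f = shape_rate(gamma_c)
    gamma_prop = rng.gamma(sh_f, 1.0 / ra_f)
    sh_r, ra_r = shape_rate(gamma_prop)
    log_ratio = (log_lik(gamma_prop) - log_lik(gamma_c)
                 + stats.gamma.logpdf(gamma_prop, a, scale=1.0 / b)
                 - stats.gamma.logpdf(gamma_c, a, scale=1.0 / b)
                 + stats.gamma.logpdf(gamma_c, sh_r, scale=1.0 / ra_r)
                 - stats.gamma.logpdf(gamma_prop, sh_f, scale=1.0 / ra_f))
    if np.log(rng.random()) < min(0.0, log_ratio):
        return gamma_prop
    return gamma_c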

Channel active features model hyperparameter αc\alpha_{c}

We place a Gamma(aαc,bαc)\mbox{Gamma}(a_{\alpha_{c}},b_{\alpha_{c}}) prior on αc\alpha_{c}, which implies a gamma posterior of the form

p(αc{𝐟(i)})Gamma(aαc+K+,bαc+i=1N(1/i)),p(\alpha_{c}\mid\{\boldsymbol{\rm f}^{(i)}\})\propto\mbox{Gamma}(a_{\alpha_{c}}+K_{+},b_{\alpha_{c}}+\sum_{i=1}^{N}(1/i)), (47)

where K+=k=1K𝟏(fk(1)fk(N))K_{+}=\sum_{k=1}^{K}\boldsymbol{\rm 1}\left(f^{(1)}_{k}\vee\cdots\vee f^{(N)}_{k}\right) denotes the number of channel states activated in at least one of the channels.
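
Since Eq. (47) is a standard gamma distribution, sampling \alpha_{c} is a single draw once K_{+} has been counted; a sketch (Python with NumPy) assuming the feature indicators are stored as a binary matrix F with rows \boldsymbol{\rm f}^{(i)}:

import numpy as np

def sample_alpha_c(F, a_alpha, b_alpha, rng=None):
    # F : (N, K) binary matrix of feature indicators f^(i)
    rng = np.random.default_rng() if rng is None else rng
    N = F.shape[0]
    K_plus = int(np.any(F, axis=0).sum())           # features used by >= 1 channel
    harmonic_N = np.sum(1.0 / np.arange(1, N + 1))  # sum_{i=1}^{N} 1/i
    return rng.gamma(a_alpha + K_plus, 1.0 / (b_alpha + harmonic_N))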

Event dynamics model hyperparameters, γe\gamma_{e}, αe\alpha_{e}, κe\kappa_{e}, ρe\rho_{e}

Instead of sampling \alpha_{e} and \kappa_{e} directly, we introduce the reparameterization \rho_{e}=\kappa_{e}/(\alpha_{e}+\kappa_{e}) and sample (\alpha_{e}+\kappa_{e}) and \rho_{e}, which yields simpler conditional distributions than sampling \alpha_{e} and \kappa_{e} independently.

(𝜶𝐞+𝜿𝐞)\boldsymbol{\rm(\alpha_{e}+\kappa_{e})}

With a Gamma(aαe+κe,bαe+κe)\mbox{Gamma}(a_{\alpha_{e}+\kappa_{e}},b_{\alpha_{e}+\kappa_{e}}) prior on (αe+κe)(\alpha_{e}+\kappa_{e}), we use auxiliary variables {rl}l=1L\{r_{l}\}_{l=1}^{L} and {sl}l=1L\{s_{l}\}_{l=1}^{L} to define the posterior,

p(αe+κeZ1:T)Gamma(aαe+κe+m¯l=1Lsl,bαe+κel=1Llogrl),p\left(\alpha_{e}+\kappa_{e}\mid Z_{1:T}\right)\propto\mbox{Gamma}\left(a_{\alpha_{e}+\kappa_{e}}+\bar{m}_{\cdot\cdot}-\sum_{l=1}^{L}s_{l},b_{\alpha_{e}+\kappa_{e}}-\sum_{l=1}^{L}\log r_{l}\right), (48)

where m¯=l,l=1Lm¯ll\bar{m}_{\cdot\cdot}=\sum_{l,l^{\prime}=1}^{L}\bar{m}_{ll^{\prime}} is the sum over auxiliary variables m¯ll\bar{m}_{ll^{\prime}} defined in Eq. (44), and the auxiliary variables rlr_{l} and sls_{l} are sampled as

r_{l}\sim\mbox{Beta}(\alpha_{e}+\kappa_{e}+1,n_{l\cdot}),
s_{l}\sim\mbox{Ber}(n_{l\cdot}/(n_{l\cdot}+\alpha_{e}+\kappa_{e})).
𝝆𝐞\boldsymbol{\rm\rho_{e}}

With a Beta(cρe,dρe)\mbox{Beta}(c_{\rho_{e}},d_{\rho_{e}}) prior on ρe\rho_{e}, we use auxiliary variables {wl}l=1L\{w_{l\cdot}\}_{l=1}^{L} to define the posterior,

p(ρe𝐦¯,𝜷)Beta(cρe+l=1Lwl,dρe+m¯l=1Lwl).p(\rho_{e}\mid\bar{\boldsymbol{\rm m}},\boldsymbol{\rm\beta})\propto\mbox{Beta}\left(c_{\rho_{e}}+\sum_{l=1}^{L}w_{l\cdot},d_{\rho_{e}}+\bar{m}_{\cdot\cdot}-\sum_{l=1}^{L}w_{l\cdot}\right). (49)

For wlsBer(ρ)w_{ls}\sim\mbox{Ber}(\rho) over s=1,,mlls=1,\ldots,m_{ll}, the posterior of the auxiliary variable wlw_{l\cdot} is

p(w_{l\cdot}\mid\bar{m}_{ll},\beta_{l})\propto\mbox{Bin}\left(\bar{m}_{ll},\frac{\rho_{e}}{\rho_{e}+\beta_{l}(1-\rho_{e})}\right). (50)
𝜸𝐞\boldsymbol{\rm\gamma_{e}}

With a Gamma(aγe,bγe)\mbox{Gamma}(a_{\gamma_{e}},b_{\gamma_{e}}) prior on γe\gamma_{e}, we use auxiliary variables vv and qq to define the posterior,

p(γe𝐦)Gamma(aγeq+l=1L𝟏(m¯l>0),bγelogv).p(\gamma_{e}\mid\boldsymbol{\rm m})\propto\mbox{Gamma}\left(a_{\gamma_{e}}-q+\sum_{l=1}^{L}\boldsymbol{\rm 1}(\bar{m}_{\cdot l}>0),b_{\gamma_{e}}-\log v\right). (51)

The auxiliary variables are sampled via

v\displaystyle v Beta(γe+1,m¯),\displaystyle\sim\mbox{Beta}(\gamma_{e}+1,\bar{m}_{\cdot\cdot}),
q\displaystyle q Ber(m¯/(γe+m¯)).\displaystyle\sim\mbox{Ber}(\bar{m}_{\cdot\cdot}/(\gamma_{e}+\bar{m}_{\cdot\cdot})).

Appendix E Autoregressive state coefficients posterior

Recall that our prior on the autoregressive coefficients \boldsymbol{\rm a}_{k} is a multivariate normal with zero mean and covariance \Sigma_{0},

p(𝐚kΣ0)\displaystyle p(\boldsymbol{\rm a}_{k}\mid\Sigma_{0}) 𝒩(𝟎,Σ0)\displaystyle\propto{\cal N}(\boldsymbol{\rm 0},\Sigma_{0})
logp(𝐚kΣ0)\displaystyle\log p(\boldsymbol{\rm a}_{k}\mid\Sigma_{0}) 12𝐚kTΣ01𝐚k.\displaystyle\propto-\frac{1}{2}\boldsymbol{\rm a}_{k}^{\rm T}\Sigma_{0}^{-1}\boldsymbol{\rm a}_{k}. (52)

The conditional event likelihood given the channel states 𝐳1:T\boldsymbol{\rm z}_{1:T} and the event states Z1:TZ_{1:T} is

p(𝐲1:T𝐳1:T,Z1:T,{𝐚k},{Δl})\displaystyle p(\boldsymbol{\rm y}_{1:T}\mid\boldsymbol{\rm z}_{1:T},Z_{1:T},\{\boldsymbol{\rm a}_{k}\},\{\Delta_{l}\}) t=1T𝒩(𝐲t;𝐀𝐳t𝐘~,ΔZt)\displaystyle\propto\prod_{t=1}^{T}{\cal N}(\boldsymbol{\rm y}_{t};\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}\boldsymbol{\rm\widetilde{Y}},\Delta_{Z_{t}})
logp(𝐲1:T𝐳1:T,Z1:T,{𝐚k},{Δl})\displaystyle\log p(\boldsymbol{\rm y}_{1:T}\mid\boldsymbol{\rm z}_{1:T},Z_{1:T},\{\boldsymbol{\rm a}_{k}\},\{\Delta_{l}\}) 12t=1T(𝐲t𝐀𝐳t𝐘~t)TΔZt1(𝐲t𝐀𝐳t𝐘~t).\displaystyle\propto-\frac{1}{2}\sum_{t=1}^{T}(\boldsymbol{\rm y}_{t}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}\boldsymbol{\rm\widetilde{Y}}_{t})^{\rm T}\Delta_{Z_{t}}^{-1}(\boldsymbol{\rm y}_{t}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}\boldsymbol{\rm\widetilde{Y}}_{t}). (53)

The product of these prior and likelihood terms is the joint distribution over 𝐚k\boldsymbol{\rm a}_{k} and 𝐲1:T\boldsymbol{\rm y}_{1:T},

logp(𝐚k,𝐲1:T𝐳1:T,Z1:T,{𝐚k}kk,{Δl})12𝐚kTΣ01𝐚k12t=1T(𝐲t𝐀𝐳t𝐘~t)TΔZt1(𝐲t𝐀𝐳t𝐘~t).\log p(\boldsymbol{\rm a}_{k},\boldsymbol{\rm y}_{1:T}\mid\boldsymbol{\rm z}_{1:T},Z_{1:T},\{\boldsymbol{\rm a}_{k^{\prime}}\}_{k^{\prime}\neq k},\{\Delta_{l}\})\propto\\ -\frac{1}{2}\boldsymbol{\rm a}_{k}^{\rm T}\Sigma_{0}^{-1}\boldsymbol{\rm a}_{k}-\frac{1}{2}\sum_{t=1}^{T}(\boldsymbol{\rm y}_{t}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}\boldsymbol{\rm\widetilde{Y}}_{t})^{\rm T}\Delta_{Z_{t}}^{-1}(\boldsymbol{\rm y}_{t}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}\boldsymbol{\rm\widetilde{Y}}_{t}). (54)

We take a brief tangent to prove a useful identity,

Lemma E.1.

Let the column vector 𝐱m\boldsymbol{\rm x}\in\mathbb{R}^{m} and the symmetric matrix A𝕊m×mA\in\mathbb{S}^{m\times m} be defined as

𝐱=[𝐲𝐳] and A=[BCCTD],\boldsymbol{\rm x}=\left[\begin{array}[]{c}\boldsymbol{\rm y}\\ \boldsymbol{\rm z}\end{array}\right]\quad\mbox{ and }\quad A=\left[\begin{array}[]{cc}B&C\\ C^{\rm T}&D\end{array}\right],

where 𝐲p\boldsymbol{\rm y}\in\mathbb{R}^{p}, 𝐳q\boldsymbol{\rm z}\in\mathbb{R}^{q}, B𝕊p×pB\in\mathbb{S}^{p\times p}, D𝕊q×qD\in\mathbb{S}^{q\times q}, Cp×qC\in\mathbb{R}^{p\times q}, and m=p+qm=p+q. Then

𝐱TA𝐱=𝐲TB𝐲+𝐳TD𝐳+2𝐲TC𝐳.\boldsymbol{\rm x}^{\rm T}A\boldsymbol{\rm x}=\boldsymbol{\rm y}^{\rm T}B\boldsymbol{\rm y}+\boldsymbol{\rm z}^{\rm T}D\boldsymbol{\rm z}+2\boldsymbol{\rm y}^{\rm T}C\boldsymbol{\rm z}. (55)
Proof.
𝐱TA𝐱\displaystyle\boldsymbol{\rm x}^{\rm T}A\boldsymbol{\rm x} =[𝐲T𝐳T][BCCTD][𝐲𝐳]\displaystyle=\left[\begin{array}[]{cc}\boldsymbol{\rm y}^{\rm T}&\boldsymbol{\rm z}^{\rm T}\end{array}\right]\left[\begin{array}[]{cc}B&C\\ C^{\rm T}&D\end{array}\right]\left[\begin{array}[]{c}\boldsymbol{\rm y}\\ \boldsymbol{\rm z}\end{array}\right]
=[𝐲T𝐳T][B𝐲+C𝐳CT𝐲+D𝐳]\displaystyle=\left[\begin{array}[]{cc}\boldsymbol{\rm y}^{\rm T}&\boldsymbol{\rm z}^{\rm T}\end{array}\right]\left[\begin{array}[]{c}B\boldsymbol{\rm y}+C\boldsymbol{\rm z}\\ C^{\rm T}\boldsymbol{\rm y}+D\boldsymbol{\rm z}\end{array}\right]
=𝐲TB𝐲+𝐲TC𝐳+𝐳TCT𝐲+𝐳TD𝐳\displaystyle=\boldsymbol{\rm y}^{\rm T}B\boldsymbol{\rm y}+\boldsymbol{\rm y}^{\rm T}C\boldsymbol{\rm z}+\boldsymbol{\rm z}^{\rm T}C^{\rm T}\boldsymbol{\rm y}+\boldsymbol{\rm z}^{\rm T}D\boldsymbol{\rm z}
=𝐲TB𝐲+𝐳TD𝐳+2𝐲TC𝐳\displaystyle=\boldsymbol{\rm y}^{\rm T}B\boldsymbol{\rm y}+\boldsymbol{\rm z}^{\rm T}D\boldsymbol{\rm z}+2\boldsymbol{\rm y}^{\rm T}C\boldsymbol{\rm z}
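
The chain of equalities above establishes the identity. As an optional numerical check, the following short Python/NumPy sketch verifies Lemma E.1 on randomly generated blocks.

import numpy as np

rng = np.random.default_rng(0)
p, q = 3, 2
y, z = rng.standard_normal(p), rng.standard_normal(q)
B = rng.standard_normal((p, p)); B = B + B.T      # symmetric p x p block
D = rng.standard_normal((q, q)); D = D + D.T      # symmetric q x q block
C = rng.standard_normal((p, q))

x = np.concatenate([y, z])
A = np.block([[B, C], [C.T, D]])
lhs = x @ A @ x
rhs = y @ B @ y + z @ D @ z + 2 * y @ C @ z
assert np.allclose(lhs, rhs)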

Note that this identity also holds for any permutation applied to the rows of \boldsymbol{\rm x} and, correspondingly, to the rows and columns of A. We can now manipulate the likelihood term of Eq. (54) into a form that separates \boldsymbol{\rm a}_{k} from \boldsymbol{\rm a}_{k^{\prime}\neq k}. Let \boldsymbol{\rm k}^{+} denote the indices of the N channels for which z_{t}^{(i)}=k and \boldsymbol{\rm k}^{-}=\{1,\ldots,N\}\setminus\boldsymbol{\rm k}^{+} denote those for which z_{t}^{(i)}\neq k. We use superscript indexing on these two sets to select the corresponding portions of the vector \boldsymbol{\rm y}_{t} and of the matrices \boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}, \boldsymbol{\rm\widetilde{Y}}_{t}, and \Delta_{Z_{t}}^{-1}. We start by decomposing the likelihood term into three parts,

(𝐲t𝐀𝐳t𝐘~t)TΔZt1(𝐲t𝐀𝐳t𝐘~t)(𝐲t(𝐤+)𝐀𝐳t(𝐤+,𝐤+)𝐘~t(𝐤+,𝐤+))TΔZt1(𝐤+,𝐤+)(𝐲t(𝐤+)𝐀𝐳t(𝐤+,𝐤+)𝐘~t(𝐤+,𝐤+))+2(𝐲t(𝐤+)𝐀𝐳t(𝐤+,𝐤+)𝐘~t(𝐤+,𝐤+))TΔZt1(𝐤+,𝐤)(𝐲t(𝐤)𝐀𝐳t(𝐤,𝐤)𝐘~t(𝐤,𝐤))+(𝐲t(𝐤)𝐀𝐳t(𝐤,𝐤)𝐘~t(𝐤,𝐤))TΔZt1(𝐤,𝐤)(𝐲t(𝐤)𝐀𝐳t(𝐤,𝐤)𝐘~t(𝐤,𝐤)),\begin{array}[]{l}(\boldsymbol{\rm y}_{t}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}\boldsymbol{\rm\widetilde{Y}}_{t})^{\rm T}\Delta_{Z_{t}}^{-1}(\boldsymbol{\rm y}_{t}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}\boldsymbol{\rm\widetilde{Y}}_{t})\propto\\ \qquad\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{+})}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}^{(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\boldsymbol{\rm\widetilde{Y}}_{t}^{(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\right)^{\rm T}\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{+})}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}^{(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\boldsymbol{\rm\widetilde{Y}}_{t}^{(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\right)+\\ \qquad 2\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{+})}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}^{(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\boldsymbol{\rm\widetilde{Y}}_{t}^{(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\right)^{\rm T}\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{-})}\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{-})}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\boldsymbol{\rm\widetilde{Y}}_{t}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\right)+\\ \qquad\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{-})}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\boldsymbol{\rm\widetilde{Y}}_{t}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\right)^{\rm T}\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{-})}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\boldsymbol{\rm\widetilde{Y}}_{t}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\right),\end{array} (56)

which we then insert into our previous expression for the joint distribution of 𝐚k\boldsymbol{\rm a}_{k} and 𝐲1:T\boldsymbol{\rm y}_{1:T} (Eq. (54)),

logp(𝐚k,𝐲1:T𝐳1:T,Z1:T,{𝐚k}kk,{Δl})12𝐚kTΣ01𝐚k12t=1T{(𝐲t(𝐤+)𝐀𝐳t(𝐤+,𝐤+)𝐘~t(𝐤+,𝐤+))TΔZt1(𝐤+,𝐤+)(𝐲t(𝐤+)𝐀𝐳t(𝐤+,𝐤+)𝐘~t(𝐤+,𝐤+))+2(𝐲t(𝐤+)𝐀𝐳t(𝐤+,𝐤+)𝐘~t(𝐤+,𝐤+))TΔZt1(𝐤+,𝐤)(𝐲t(𝐤)𝐀𝐳t(𝐤,𝐤)𝐘~t(𝐤,𝐤))+(𝐲t(𝐤)𝐀𝐳t(𝐤,𝐤)𝐘~t(𝐤,𝐤))TΔZt1(𝐤,𝐤)(𝐲t(𝐤)𝐀𝐳t(𝐤,𝐤)𝐘~t(𝐤,𝐤))}.\begin{array}[]{l}\log p(\boldsymbol{\rm a}_{k},\boldsymbol{\rm y}_{1:T}\mid\boldsymbol{\rm z}_{1:T},Z_{1:T},\{\boldsymbol{\rm a}_{k^{\prime}}\}_{k^{\prime}\neq k},\{\Delta_{l}\})\propto-\frac{1}{2}\boldsymbol{\rm a}_{k}^{\rm T}\Sigma_{0}^{-1}\boldsymbol{\rm a}_{k}-\\ \qquad\frac{1}{2}\sum_{t=1}^{T}\left\{\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{+})}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}^{(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\boldsymbol{\rm\widetilde{Y}}_{t}^{(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\right)^{\rm T}\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{+})}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}^{(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\boldsymbol{\rm\widetilde{Y}}_{t}^{(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\right)+\right.\\ \qquad 2\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{+})}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}^{(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\boldsymbol{\rm\widetilde{Y}}_{t}^{(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\right)^{\rm T}\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{-})}\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{-})}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\boldsymbol{\rm\widetilde{Y}}_{t}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\right)+\\ \qquad\left.\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{-})}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\boldsymbol{\rm\widetilde{Y}}_{t}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\right)^{\rm T}\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{-})}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\boldsymbol{\rm\widetilde{Y}}_{t}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\right)\right\}.\\ \end{array} (57)

Conditioning on 𝐲1:T\boldsymbol{\rm y}_{1:T} allows us to absorb the third term of the sum into the proportionality, and after replacing 𝐀𝐳t(𝐤+,𝐤+)𝐘~(𝐤+,𝐤+)\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}^{(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\boldsymbol{\rm\widetilde{Y}}^{(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})} with a more explicit expression, we have

logp(𝐚k𝐲1:T,𝐳1:T,Z1:T,{𝐚k}kk,{Δl})12𝐚kTΣ01𝐚k12t=1T{(𝐲t(𝐤+)[𝐲~t(k1+)𝐲~t(k|𝐤+|+)]T𝐚k)T(ΔZt1(𝐤+,𝐤+))(𝐲t(𝐤+)[𝐲~t(k1+)𝐲~t(k|𝐤+|+)]T𝐚k)+2(𝐲t(𝐤+)[𝐲~t(k1+)𝐲~t(k|𝐤+|+)]T𝐚k)T(ΔZt1(𝐤+,𝐤))(𝐲t(𝐤)𝐀𝐳t(𝐤,𝐤)𝐘~t(𝐤,𝐤))},\begin{array}[]{l}\log p(\boldsymbol{\rm a}_{k}\mid\boldsymbol{\rm y}_{1:T},\boldsymbol{\rm z}_{1:T},Z_{1:T},\{\boldsymbol{\rm a}_{k^{\prime}}\}_{k^{\prime}\neq k},\{\Delta_{l}\})\propto-\frac{1}{2}\boldsymbol{\rm a}_{k}^{\rm T}\Sigma_{0}^{-1}\boldsymbol{\rm a}_{k}-\\ \qquad\frac{1}{2}\sum_{t=1}^{\rm T}\left\{\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{+})}-\left[\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{1}^{+})}\mid\cdots\mid\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{|\boldsymbol{\rm k}^{+}|}^{+})}\right]^{\rm T}\boldsymbol{\rm a}_{k}\right)^{\rm T}\left(\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\right)\cdot\right.\\ \qquad\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{+})}-\left[\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{1}^{+})}\mid\cdots\mid\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{|\boldsymbol{\rm k}^{+}|}^{+})}\right]^{\rm T}\boldsymbol{\rm a}_{k}\right)+\\ \qquad 2\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{+})}-\left[\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{1}^{+})}\mid\cdots\mid\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{|\boldsymbol{\rm k}^{+}|}^{+})}\right]^{\rm T}\boldsymbol{\rm a}_{k}\right)^{\rm T}\left(\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{-})}\right)\cdot\\ \qquad\left.\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{-})}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\boldsymbol{\rm\widetilde{Y}}_{t}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\right)\right\},\end{array} (58)

which we can further expand to yield

logp(𝐚k𝐲1:T,𝐳1:T,Z1:T,{𝐚k}kk,{Δl})12𝐚kTΣ01𝐚k12t=1T{(𝐲t(𝐤+))T(ΔZt1(𝐤+,𝐤+))(𝐲t(𝐤+))+(𝐚kT[𝐲~t(k1+)𝐲~t(k|𝐤+|+)])(ΔZt1(𝐤+,𝐤+))([𝐲~t(k1+)𝐲~t(k|𝐤+|+)]T𝐚k)2(𝐲t(𝐤+))T(ΔZt1(𝐤+,𝐤+))([𝐲~t(k1+)𝐲~t(k|𝐤+|+)]T𝐚k)}t=1T{𝐲tT(𝐤+)ΔZt1(𝐤+,𝐤)(𝐲t(𝐤)𝐀𝐳t(𝐤,𝐤)𝐘~t(𝐤,𝐤))(𝐚kT[𝐲~t(k1+)𝐲~t(k|𝐤+|+)])(ΔZt1(𝐤+,𝐤))(𝐲t(𝐤)𝐀𝐳t(𝐤,𝐤)𝐘~t(𝐤,𝐤))}.\begin{array}[]{l}\log p(\boldsymbol{\rm a}_{k}\mid\boldsymbol{\rm y}_{1:T,}\boldsymbol{\rm z}_{1:T},Z_{1:T},\{\boldsymbol{\rm a}_{k^{\prime}}\}_{k^{\prime}\neq k},\{\Delta_{l}\})\propto-\frac{1}{2}\boldsymbol{\rm a}_{k}^{\rm T}\Sigma_{0}^{-1}\boldsymbol{\rm a}_{k}-\\ \qquad\frac{1}{2}\sum_{t=1}^{T}\left\{\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{+})}\right)^{\rm T}\left(\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\right)\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{+})}\right)+\right.\\ \qquad\left(\boldsymbol{\rm a}_{k}^{\rm T}\left[\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{1}^{+})}\mid\cdots\mid\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{|\boldsymbol{\rm k}^{+}|}^{+})}\right]\right)\left(\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\right)\left(\left[\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{1}^{+})}\mid\cdots\mid\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{|\boldsymbol{\rm k}^{+}|}^{+})}\right]^{\rm T}\boldsymbol{\rm a}_{k}\right)-\\ \qquad\left.2\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{+})}\right)^{\rm T}\left(\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\right)\left(\left[\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{1}^{+})}\mid\cdots\mid\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{|\boldsymbol{\rm k}^{+}|}^{+})}\right]^{\rm T}\boldsymbol{\rm a}_{k}\right)\right\}-\\ \qquad\sum_{t=1}^{T}\left\{\boldsymbol{\rm y}_{t}^{{\rm T}(\boldsymbol{\rm k}^{+})}\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{-})}\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{-})}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\boldsymbol{\rm\widetilde{Y}}_{t}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\right)-\right.\\ \qquad\left.\left(\boldsymbol{\rm a}_{k}^{\rm T}\left[\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{1}^{+})}\mid\cdots\mid\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{|\boldsymbol{\rm k}^{+}|}^{+})}\right]\right)\left(\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{-})}\right)\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{-})}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\boldsymbol{\rm\widetilde{Y}}_{t}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\right)\right\}.\end{array} (59)

Absorbing more terms unrelated to 𝐚k\boldsymbol{\rm a}_{k} into the proportionality, we have

logp(𝐚k𝐲1:T,𝐳1:T,Z1:T,{𝐚k}kk,{Δl})12𝐚kTΣ01𝐚k12t=1T{(𝐚kT[𝐲~t(k1+)𝐲~t(k|𝐤+|+)])(ΔZt1(𝐤+,𝐤+))([𝐲~t(k1+)𝐲~t(k|𝐤+|+)]T𝐚k)2(𝐲t(𝐤+))T(ΔZt1(𝐤+,𝐤+))([𝐲~t(k1+)𝐲~t(k|𝐤+|+)]T𝐚k)}t=1T{(𝐚kT[𝐲~t(k1+)𝐲~t(k|𝐤+|+)])(ΔZt1(𝐤+,𝐤))(𝐲t(𝐤)𝐀𝐳t(𝐤,𝐤)𝐘~t(𝐤,𝐤))},\begin{array}[]{l}\log p(\boldsymbol{\rm a}_{k}\mid\boldsymbol{\rm y}_{1:T,}\boldsymbol{\rm z}_{1:T},Z_{1:T},\{\boldsymbol{\rm a}_{k^{\prime}}\}_{k^{\prime}\neq k},\{\Delta_{l}\})\propto-\frac{1}{2}\boldsymbol{\rm a}_{k}^{\rm T}\Sigma_{0}^{-1}\boldsymbol{\rm a}_{k}-\\ \qquad\frac{1}{2}\sum_{t=1}^{T}\left\{\left(\boldsymbol{\rm a}_{k}^{\rm T}\left[\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{1}^{+})}\mid\cdots\mid\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{|\boldsymbol{\rm k}^{+}|}^{+})}\right]\right)\left(\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\right)\left(\left[\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{1}^{+})}\mid\cdots\mid\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{|\boldsymbol{\rm k}^{+}|}^{+})}\right]^{\rm T}\boldsymbol{\rm a}_{k}\right)-\right.\\ \qquad\left.2\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{+})}\right)^{\rm T}\left(\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\right)\left(\left[\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{1}^{+})}\mid\cdots\mid\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{|\boldsymbol{\rm k}^{+}|}^{+})}\right]^{\rm T}\boldsymbol{\rm a}_{k}\right)\right\}-\\ \qquad\sum_{t=1}^{T}\left\{-\left(\boldsymbol{\rm a}_{k}^{\rm T}\left[\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{1}^{+})}\mid\cdots\mid\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{|\boldsymbol{\rm k}^{+}|}^{+})}\right]\right)\left(\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{-})}\right)\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{-})}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\boldsymbol{\rm\widetilde{Y}}_{t}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\right)\right\},\end{array} (60)

which after some rearranging gives

logp(𝐚k𝐲1:T,𝐳1:T,Z1:T,{𝐚k}kk,{Δl})12𝐚kT{Σ01+t=1T[𝐲~t(k1+)𝐲~t(k|𝐤+|+)](ΔZt1(𝐤+,𝐤+))[𝐲~t(k1+)𝐲~t(k|𝐤+|+)]T}𝐚k+𝐚kTt=1T{[𝐲~t(k1+)𝐲~t(k|𝐤+|+)](ΔZt1(𝐤+,𝐤+))(𝐲t(𝐤+))+[𝐲~t(k1+)𝐲~t(k|𝐤+|+)](ΔZt1(𝐤+,𝐤))(𝐲t(𝐤)𝐀𝐳t(𝐤,𝐤)𝐘~t(𝐤,𝐤))}.\begin{array}[]{l}\log p(\boldsymbol{\rm a}_{k}\mid\boldsymbol{\rm y}_{1:T,}\boldsymbol{\rm z}_{1:T},Z_{1:T},\{\boldsymbol{\rm a}_{k^{\prime}}\}_{k^{\prime}\neq k},\{\Delta_{l}\})\propto\\ \qquad-\frac{1}{2}\boldsymbol{\rm a}_{k}^{\rm T}\left\{\Sigma_{0}^{-1}+\sum_{t=1}^{T}\left[\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{1}^{+})}\mid\cdots\mid\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{|\boldsymbol{\rm k}^{+}|}^{+})}\right]\left(\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\right)\left[\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{1}^{+})}\mid\cdots\mid\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{|\boldsymbol{\rm k}^{+}|}^{+})}\right]^{\rm T}\right\}\boldsymbol{\rm a}_{k}+\\ \qquad\boldsymbol{\rm a}_{k}^{\rm T}\sum_{t=1}^{T}\left\{\left[\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{1}^{+})}\mid\cdots\mid\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{|\boldsymbol{\rm k}^{+}|}^{+})}\right]\left(\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\right)\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{+})}\right)+\right.\\ \qquad\left.\left[\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{1}^{+})}\mid\cdots\mid\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{|\boldsymbol{\rm k}^{+}|}^{+})}\right]\left(\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{-})}\right)\left(\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{-})}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\boldsymbol{\rm\widetilde{Y}}_{t}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\right)\right\}.\end{array} (61)

Before completing the square, we will find it useful to introduce a bit more notation to simplify the expression,

𝐘¯t(𝐤+)=[𝐲~t(k1+)𝐲~t(k|𝐤+|+)],ϵt(𝐤)=𝐲t(𝐤)𝐀𝐳t(𝐤,𝐤)𝐘~t(𝐤,𝐤),\boldsymbol{\rm\bar{Y}}_{t}^{(\boldsymbol{\rm k}^{+})}=\left[\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{1}^{+})}\mid\cdots\mid\boldsymbol{\rm\widetilde{y}}_{t}^{(k_{|\boldsymbol{\rm k}^{+}|}^{+})}\right],\qquad\boldsymbol{\rm\epsilon}_{t}^{(\boldsymbol{\rm k}^{-})}=\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{-})}-\boldsymbol{\rm A}_{\boldsymbol{\rm z}_{t}}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})}\boldsymbol{\rm\widetilde{Y}}_{t}^{(\boldsymbol{\rm k}^{-},\boldsymbol{\rm k}^{-})},

yielding

logp(𝐚k𝐲1:T,𝐳1:T,Z1:T,{𝐚k}kk,{Δl})12𝐚kT{Σ01+t=1T𝐘¯t(𝐤+)ΔZt1(𝐤+,𝐤+)𝐘¯tT(𝐤+)}𝐚k+𝐚kT{t=1T𝐘¯t(𝐤+)(ΔZt1(𝐤+,𝐤+)𝐲t(𝐤+)+ΔZt1(𝐤+,𝐤)ϵt(𝐤))}.\begin{array}[]{l}\log p(\boldsymbol{\rm a}_{k}\mid\boldsymbol{\rm y}_{1:T,}\boldsymbol{\rm z}_{1:T},Z_{1:T},\{\boldsymbol{\rm a}_{k^{\prime}}\}_{k^{\prime}\neq k},\{\Delta_{l}\})\propto\\ \qquad-\frac{1}{2}\boldsymbol{\rm a}_{k}^{\rm T}\left\{\Sigma_{0}^{-1}+\sum_{t=1}^{T}\boldsymbol{\rm\bar{Y}}_{t}^{(\boldsymbol{\rm k}^{+})}\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\boldsymbol{\rm\bar{Y}}_{t}^{{\rm T}(\boldsymbol{\rm k}^{+})}\right\}\boldsymbol{\rm a}_{k}+\\ \qquad\boldsymbol{\rm a}_{k}^{\rm T}\left\{\sum_{t=1}^{T}\boldsymbol{\rm\bar{Y}}_{t}^{(\boldsymbol{\rm k}^{+})}\left(\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{+})}+\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{-})}\boldsymbol{\rm\epsilon}_{t}^{(\boldsymbol{\rm k}^{-})}\right)\right\}.\end{array} (62)

We desire an expression in the form 12(𝐚k𝝁k)TΣk1(𝐚k𝝁k)-\frac{1}{2}(\boldsymbol{\rm a}_{k}-\boldsymbol{\rm\mu}_{k})^{\rm T}\Sigma^{-1}_{k}(\boldsymbol{\rm a}_{k}-\boldsymbol{\rm\mu}_{k}) for unknown 𝝁k\boldsymbol{\rm\mu}_{k} and Σk1\Sigma^{-1}_{k} so that it conforms to the multivariate normal density with mean 𝝁k\boldsymbol{\rm\mu}_{k} and precision Σk1\Sigma^{-1}_{k}. We already have our Σk1\Sigma^{-1}_{k} value from the quadratic term above,

Σk1=Σ01+t=1T𝐘¯t(𝐤+)ΔZt1(𝐤+,𝐤+)𝐘¯tT(𝐤+),\Sigma^{-1}_{k}=\Sigma_{0}^{-1}+\sum_{t=1}^{T}\boldsymbol{\rm\bar{Y}}_{t}^{(\boldsymbol{\rm k}^{+})}\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\boldsymbol{\rm\bar{Y}}_{t}^{{\rm T}(\boldsymbol{\rm k}^{+})}, (63)

which allows us to solve the cross-term for 𝝁k\boldsymbol{\rm\mu}_{k},

12(2𝝁kTΣk1𝐚k)\displaystyle-\frac{1}{2}(-2\boldsymbol{\rm\mu}_{k}^{\rm T}\Sigma_{k}^{-1}\boldsymbol{\rm a}_{k}) =𝐚kT(t=1T𝐘¯t(𝐤+)(ΔZt1(𝐤+,𝐤+)𝐲t(𝐤+)+ΔZt1(𝐤+,𝐤)ϵt(𝐤)+)),\displaystyle=\boldsymbol{\rm a}_{k}^{\rm T}\left(\sum_{t=1}^{T}\boldsymbol{\rm\bar{Y}}_{t}^{(\boldsymbol{\rm k}^{+})}\left(\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{+})}+\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{-})}\boldsymbol{\rm\epsilon}_{t}^{(\boldsymbol{\rm k}^{-})}+\right)\right),
Σk1𝝁k\displaystyle\Sigma_{k}^{-1}\boldsymbol{\rm\mu}_{k} =t=1T𝐘¯t(𝐤+)(ΔZt1(𝐤+,𝐤+)𝐲t(𝐤+)+ΔZt1(𝐤+,𝐤)ϵt(𝐤)+).\displaystyle=\sum_{t=1}^{T}\boldsymbol{\rm\bar{Y}}_{t}^{(\boldsymbol{\rm k}^{+})}\left(\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{+})}+\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{-})}\boldsymbol{\rm\epsilon}_{t}^{(\boldsymbol{\rm k}^{-})}+\right). (64)

We can pull the final required 12𝝁kTΣk1𝝁k-\frac{1}{2}\boldsymbol{\rm\mu}_{k}^{\rm T}\Sigma_{k}^{-1}\boldsymbol{\rm\mu}_{k} term from the proportionality and complete the square. Thus, we have the form of the posterior for 𝐚k\boldsymbol{\rm a}_{k},

p(𝐚k𝐲1:T,𝐳1:T,Z1:T,{𝐚k}kk,{Δl})\displaystyle p(\boldsymbol{\rm a}_{k}\mid\boldsymbol{\rm y}_{1:T,}\boldsymbol{\rm z}_{1:T},Z_{1:T},\{\boldsymbol{\rm a}_{k^{\prime}}\}_{k^{\prime}\neq k},\{\Delta_{l}\}) exp(12(𝐚k𝝁k)TΣk1(𝐚k𝝁k))\displaystyle\propto\exp\left(-\frac{1}{2}(\boldsymbol{\rm a}_{k}-\boldsymbol{\rm\mu}_{k})^{\rm T}\Sigma^{-1}_{k}(\boldsymbol{\rm a}_{k}-\boldsymbol{\rm\mu}_{k})\right)
𝒩(𝝁k,Σk),\displaystyle\propto{\cal N}(\boldsymbol{\rm\mu}_{k},\Sigma_{k}), (65)

where

Σk1=Σ01+t=1T𝐘¯t(𝐤+)ΔZt1(𝐤+,𝐤+)𝐘¯tT(𝐤+)Σk1𝝁k=t=1T𝐘¯t(𝐤+)(ΔZt1(𝐤+,𝐤+)𝐲t(𝐤+)+ΔZt1(𝐤+,𝐤)ϵt(𝐤)).\displaystyle\begin{aligned} \Sigma_{k}^{-1}&=\Sigma_{0}^{-1}+\sum_{t=1}^{T}\boldsymbol{\rm\bar{Y}}_{t}^{(\boldsymbol{\rm k}^{+})}\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\boldsymbol{\rm\bar{Y}}_{t}^{{\rm T}(\boldsymbol{\rm k}^{+})}\\ \Sigma_{k}^{-1}\boldsymbol{\rm\mu}_{k}&=\sum_{t=1}^{T}\boldsymbol{\rm\bar{Y}}_{t}^{(\boldsymbol{\rm k}^{+})}\left(\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{+})}\boldsymbol{\rm y}_{t}^{(\boldsymbol{\rm k}^{+})}+\Delta_{Z_{t}}^{-1(\boldsymbol{\rm k}^{+},\boldsymbol{\rm k}^{-})}\boldsymbol{\rm\epsilon}_{t}^{(\boldsymbol{\rm k}^{-})}\right).\end{aligned} (66)
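
Assembling this posterior and drawing \boldsymbol{\rm a}_{k} amounts to accumulating the precision and linear terms over time and solving a single linear system. The sketch below (Python with NumPy) assumes the caller has already extracted, at every time point, the blocks corresponding to the channels currently assigned to state k (the k+ set) and the remaining channels (the k- set); all argument names are illustrative.

import numpy as np

def sample_ar_coefficients(Sigma0_inv, Ybar, Delta_inv_pp, Delta_inv_pm,
                           y_plus, eps_minus, rng=None):
    # Sigma0_inv   : (r, r) prior precision on a_k
    # Ybar         : list of (r, |k+|) lagged-observation matrices Ybar_t^(k+)
    # Delta_inv_pp : list of (|k+|, |k+|) blocks of Delta_{Z_t}^{-1}
    # Delta_inv_pm : list of (|k+|, |k-|) blocks of Delta_{Z_t}^{-1}
    # y_plus       : list of (|k+|,) observations y_t^(k+)
    # eps_minus    : list of (|k-|,) innovations of the remaining channels
    rng = np.random.default_rng() if rng is None else rng
    prec = Sigma0_inv.copy()
    lin = np.zeros(Sigma0_inv.shape[0])
    for Yb, Dpp, Dpm, yp, em in zip(Ybar, Delta_inv_pp, Delta_inv_pm,
                                    y_plus, eps_minus):
        prec += Yb @ Dpp @ Yb.T                  # Eq. (66), precision term
        lin += Yb @ (Dpp @ yp + Dpm @ em)        # Eq. (66), linear term
    cov = np.linalg.inv(prec)
    return rng.multivariate_normal(cov @ lin, cov)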

Appendix F Experiment Parameters

Below, we give explicit values for the various priors and parameters used in our experiments.

parameter description value
N number of time series per event 6
r AR model order 1
\boldsymbol{\rm m}_{0} \boldsymbol{\rm a}_{k} normal prior mean 0
\Sigma_{0} \boldsymbol{\rm a}_{k} normal prior covariance 0.1\cdot I_{1\times 1}
L truncated number of event states 20
b_{0} \Delta_{l} HIW prior degrees of freedom N+3
D_{0} \Delta_{l} HIW prior scale 0.05(b_{0}-N-1)(I_{N\times N}+1)
(a_{\alpha_{e}+\kappa_{e}},b_{\alpha_{e}+\kappa_{e}}) \alpha_{e}+\kappa_{e} gamma prior (1,1)
(a_{\gamma_{e}},b_{\gamma_{e}}) \gamma_{e} gamma prior (1,1)
(c_{\rho_{e}},d_{\rho_{e}}) \rho_{e} beta prior (1,1)
(a_{\gamma_{c}},b_{\gamma_{c}}) \gamma_{c} gamma prior (1,1)
(a_{\kappa_{c}},b_{\kappa_{c}}) \kappa_{c} gamma prior (1000,1)
\sigma^{2}_{\gamma_{c}} \gamma_{c} Metropolis-Hastings proposal variance 1
\sigma^{2}_{\kappa_{c}} \kappa_{c} Metropolis-Hastings proposal variance 100
(a_{\alpha_{c}},b_{\alpha_{c}}) \alpha_{c} gamma prior (1,1)
Table 2: Parameters used in the sparse factorial BP-AR-HMM simulation experiment.
parameter description value
N number of time series per event 16 and 6
r AR model order 5
\boldsymbol{\rm m}_{0} \boldsymbol{\rm a}_{k} normal prior mean \boldsymbol{\rm 0}
\Sigma_{0} \boldsymbol{\rm a}_{k} normal prior covariance \mbox{Cov}(\{y_{t}^{(i)}\}_{\forall t,i})
L truncated number of event states 30
b_{0} \Delta_{l} (H)IW prior degrees of freedom N+3
D_{0} \Delta_{l} (H)IW prior scale (b_{0}-N-1)\cdot\mbox{Cov}(\{\boldsymbol{\rm y}_{t+1}-\boldsymbol{\rm y}_{t}\}_{\forall t})
(a_{\alpha_{e}+\kappa_{e}},b_{\alpha_{e}+\kappa_{e}}) \alpha_{e}+\kappa_{e} gamma prior (1,1)
(a_{\gamma_{e}},b_{\gamma_{e}}) \gamma_{e} gamma prior (1,1)
(c_{\rho_{e}},d_{\rho_{e}}) \rho_{e} beta prior (1,1)
(a_{\gamma_{c}},b_{\gamma_{c}}) \gamma_{c} gamma prior (1,1)
(a_{\kappa_{c}},b_{\kappa_{c}}) \kappa_{c} gamma prior (1000,1)
\sigma^{2}_{\gamma_{c}} \gamma_{c} Metropolis-Hastings proposal variance 1
\sigma^{2}_{\kappa_{c}} \kappa_{c} Metropolis-Hastings proposal variance 100
(a_{\alpha_{c}},b_{\alpha_{c}}) \alpha_{c} gamma prior (1,1)
Table 3: Parameters used in the epileptic seizures and bursts experiments. Where applicable, the same parameters were used for the standard BP-AR-HMM as for the correlated BP-AR-HMMs. The analysis of the two seizures involved 16 iEEG channels, and the analysis of the 15 bursts and the single seizure involved 6 iEEG channels.

Acknowledgements

This work is supported in part by AFOSR Grant FA9550-12-1-0453 and DARPA Grant FA9550-12-1-0406 negotiated by AFOSR, ONR Grant N00014-10-1-0746, NINDS RO1-NS041811, RO1-NS48598, and U24-NS063930-03, and the Mirowski Discovery Fund for Epilepsy Research.