
Dynamic Graph Learning based on Graph Laplacian

Bo Jiang1, Ashkan Panahi2, Hamid Krim1, Yiyi Yu3, Spencer L. Smith3 1Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, NC, USA
2Department of Computer Science and Engineering, Chalmers University of Technology, Göteborg, Sweden
3Department of Electrical and Computer Engineering, University of California Santa Barbara, Santa Barbara, CA, USA
bjiang8@ncsu.edu
Abstract

The purpose of this paper is to infer a global (collective) model of time-varying responses of a set of nodes as a dynamic graph, where the individual time series are respectively observed at each of the nodes. The motivation of this work lies in the search for a connectome model which properly captures brain functionality upon observing activities in different regions of the brain and possibly of individual neurons. We formulate the problem as a quadratic objective functional of observed node signals over short time intervals, subject to regularization reflecting the smoothness of the signals with respect to the underlying graph's Laplacian, as well as the smoothness of the graph's evolution over time. The resulting joint optimization is solved by a continuous relaxation and a novel gradient-projection scheme. We apply our algorithm to a real-world dataset comprising recorded activities of individual brain cells. The resulting model is shown to be not only viable but also efficiently computable.

Index Terms:
Dynamic Graph Learning, Graph Signal Processing, Sparse Signal, Convex Optimization

I Introduction

The ubiquitous emergence of graphs has made them an excellent tool for quantifying interactions between elements in a great variety of network systems. Analyzing and discovering the underlying structure of a graph for a given data set has become central to a variety of signal processing research problems which may be interpreted as graph structure recovery. For example, in social media [1], such as Facebook and LinkedIn, the basic interaction/relation between two individuals, represented by a link, yields the notion of a graph known as the social network, which is used for inferring further characteristics among all involved individuals [2]. Similarly, in physics [3] and chemistry [4], graphs are widely used to capture the interactions among atoms/molecules to study the behavior of different materials. The rather recent connectome paradigm [5] in neuroscience is based on the hypothesis that the brain is a massively connected network, and that variations in its behavior and connectivity, particularly in response to controlled external stimuli, can be used to investigate the brain's structure and ideally its functionality [6, 7].
Existing approaches for analyzing the connectivity of neuronal signals and associated problems can be categorized into two main groups: (i) noise correlation analysis [8, 9], which is often applied by neuroscientists to uncover the connectivity among neurons, and (ii) static graph learning [10, 11], which, by way of an optimization procedure, tries to attain a fixed graph over time. Noise correlation is commonly used in neuroscience to establish connectivity between every pair of neurons whose noise correlation over a short time is significant. This method, however, requires many observations, making it hard to obtain an acceptable connectivity estimate over that interval. Additionally, the acquired connectivity does not lend itself to simple rationalization following an experiment with a specific stimulus. In the second track, research on estimating graph structures for a given data set has been active and includes graph learning. Research on the latter has primarily been based on graphs' topology and signals' smoothness, and the application of the graph Laplacian has been predominant. Other recent work includes deep neural network modeling, where training/testing is performed on graph datasets to ultimately generate a graph representing patterns in the given signals. These studies have primarily focused on a static graph, with non-sparse signals and the assumption that the graph is consistent over time. Such models require sufficiently many samples for training and testing, once again making them difficult to use on neuronal signals with a typically low sample size, in order to glean the desired variations over small time intervals. Graph dynamics, with clear potential impact on temporal data characterization, have also been of interest to many researchers [12, 13]. In this theme, models are used to predict links given previous graphs and associated signals. All these approaches require much prior information on known structures and plenty of data for training and for predicting future graphs.
Building on the wealth of prior research in neuroscience and graph learning [11], we propose a new model with a dynamic structure whose goal is to track neurons' dynamic connectivity over time. To that end, our proposed graph includes a vertex/node for each neuron, and their connectivity is reflected by the graph edges, whose attributes are determined by the probability/intensity of connection between every pair of neurons.
To proceed, the outline of our paper follows our contributions in order. Firstly, exploiting insights from prior research on graph learning with the graph Laplacian [11, 14], we propose an optimization formulation which yields a graph over each short time interval, which in turn reflects the evolving transformation of the connectivity. Secondly, we modify our model to accommodate sparse signals so that we can verify our optimized solution on a neuronal signal dataset. Thirdly, we apply three alternative methods to simplify the solution procedures of the optimization problems and help reach the optimal points. We finally proceed to test our proposed model on a neuronal dataset, to help improve our understanding of neuronal interactions and their process of transferring signals.

II Problem Setup and Background

For notational clarity throughout the paper, we adopt upper- and lower-case bold letters to respectively denote a matrix and a vector, and the superscripts $T$ and $-1$ to respectively denote its transpose and inverse. The operator $tr(\cdot)$ denotes the matrix trace. The identity, zero and all-one matrices are respectively denoted by $\mathbf{I}$, $\mathbf{0}$ and $\mathbf{1}$, while $x_{ij}$ represents the $i$-th row, $j$-th column element of $\mathbf{X}$.
Our neuronal-activity dataset consists of $N$ neurons/nodes and is characterized by a connectivity graph $G=\{V,E,W\}$, where $V=\{v_{1},v_{2},\dots,v_{N}\}$ denotes the vertex set and $E$ is the edge set with attributes reflecting the connectivity between each pair of vertices, quantified by a connectivity strength $0\leq w\leq 1,\ \forall w\in W$. A time series $y_{n}(t)$ of observations with $t=1,2,\ldots,T$ is associated with each node $v_{n}$. For simplicity in our derivations, we aggregate the nodes' finite-length time series into an $N\times T$ matrix $\mathbf{Y}$, where $(\mathbf{Y})_{n,t}=y_{n}(t)$. Our problem formulation seeks, for each observed $\mathbf{Y}$, either a static graph $G$ or a time-dependent series of graphs $G_{1},G_{2},\ldots$.
The well-known graph Laplacian of an undirected graph describes its topology and can serve as the second-derivative operator of the underlying graph. The corresponding Laplacian matrix $\mathcal{L}$ is commonly defined as [15]: $\mathcal{L}(i,j)=-w_{ij}$ for adjacent nodes $i$, $j$, $\mathcal{L}(i,j)=0$ otherwise, and $\mathcal{L}(i,i)=d_{i}$, where $d_{i}=\sum_{j}w_{ij}$ denotes the degree of node $i$. Its simple matrix expression is $\mathcal{L}=D-W$, where $D$ is the diagonal matrix of degrees.
The Laplacian matrix may also usefully adopt, in some contexts, a second-derivative interpretation on graphs: given an assignment $x=(x_{1},x_{2},\ldots,x_{N})$ of real numbers to the nodes, the second derivative of $x$ on the graph may be written as $\mathcal{L}(\mathbf{W},x)=\sum_{i}\sum_{j>i}w_{ij}\mathbf{a}_{ij}\mathbf{a}^{T}_{ij}x$, where $\mathbf{a}_{ij}$ denotes an $N$-dimensional vector whose elements are all 0, except the $i$-th element being 1 and the $j$-th element being $-1$. As may be seen, $\mathbf{a}_{ij}^{T}x$ represents the first derivative of $x$ along the edge between the $i$-th and $j$-th nodes. The notion of a Laplacian will be exploited in the sequel as a structural regularizer when an optimal graph is sought for a given data set.
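For concreteness, the following is a minimal sketch (in Python/NumPy, with a small arbitrary weight matrix chosen purely for illustration, not taken from the paper) of building the Laplacian $\mathcal{L}=D-W$ and numerically verifying that its quadratic form equals the weighted sum of squared edge differences, $x^{T}\mathcal{L}x=\sum_{i,j>i}w_{ij}(x_{i}-x_{j})^{2}$.

import numpy as np

# Small symmetric weight matrix, chosen arbitrarily for illustration.
W = np.array([[0.0, 0.8, 0.0],
              [0.8, 0.0, 0.3],
              [0.0, 0.3, 0.0]])
D = np.diag(W.sum(axis=1))   # diagonal matrix of node degrees d_i = sum_j w_ij
L = D - W                    # graph Laplacian

x = np.array([1.0, 2.0, -1.0])   # a signal assigned to the nodes

quad_form = x @ L @ x
pairwise = sum(W[i, j] * (x[i] - x[j]) ** 2
               for i in range(3) for j in range(i + 1, 3))
assert np.isclose(quad_form, pairwise)   # x^T L x equals the weighted edge differences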

III Dynamic Graph Learning

III-A Static Graph Learning

Prior to proposing the dynamic structure learning of a graph, we first recall the principles upon which static graph learning is based [11]. Using the Laplacian quadratic form $\mathbf{x}_{t}^{T}\mathcal{L}(\mathbf{W})\mathbf{x}_{t}$ as a smoothness regularizer of the signals $\mathbf{x}_{t}$, and the degree of connectivity $K$ as a tuning parameter, [11] discovers a $K$-sparse graph from noisy signals $\mathbf{y}_{t}=\mathbf{x}_{t}+\mathbf{n}_{t}$ by seeking the solution of the following,

\begin{split}\operatorname*{argmin}_{\mathbf{X},\mathbf{W}}\quad&\frac{1}{T}\sum_{t=0}^{T-1}\|\mathbf{y}_{t}-\mathbf{x}_{t}\|^{2}+\gamma\,\mathbf{x}_{t}^{T}\mathcal{L}(\mathbf{W})\mathbf{x}_{t}\\ \mathrm{s.t.}\quad&0\leq w_{ij}\leq 1,\quad\forall i,j,\quad\sum_{i,j>i}w_{ij}=K,\end{split}   (1)

where $\gamma$ and $K$ are tuning parameters, $\mathbf{X}=[\mathbf{x}_{1},\mathbf{x}_{2},\dots,\mathbf{x}_{T}]$ denotes the noiseless signals and $\mathbf{Y}=[\mathbf{y}_{1},\mathbf{y}_{2},\dots,\mathbf{y}_{T}]$ their noisy observations. $\mathbf{W}$ is the adjacency weight matrix of the undirected graph, with the additional relaxation of the individual weights to the interval $[0,\,1]$.
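As a rough illustration of how the objective in (1) can be evaluated for a candidate pair $(\mathbf{X},\mathbf{W})$, the following sketch (hypothetical function and variable names, not the authors' implementation) computes the data-fidelity and smoothness terms, using the fact that $\sum_{t}\mathbf{x}_{t}^{T}\mathcal{L}\mathbf{x}_{t}=tr(\mathbf{X}^{T}\mathcal{L}\mathbf{X})$.

import numpy as np

def laplacian(W):
    # Graph Laplacian L = D - W of a symmetric weight matrix.
    return np.diag(W.sum(axis=1)) - W

def static_objective(Y, X, W, gamma):
    # Y, X: N x T matrices whose columns are y_t and x_t; W: N x N weights in [0, 1].
    T = Y.shape[1]
    fidelity = np.sum((Y - X) ** 2)                        # sum_t ||y_t - x_t||^2
    smoothness = gamma * np.trace(X.T @ laplacian(W) @ X)  # sum_t gamma * x_t^T L x_t
    return (fidelity + smoothness) / T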

III-B Dynamic Graph Learning

Note that in [11] a single connectivity graph is inferred for the entire observation time interval, thus overlooking the practically varying connections between every two nodes over time. To account for these variations, and towards capturing the true underlying structure of the graph, we propose to learn the dynamics of the graph. In the setting of brain signals, which are of practical interest here, the graph in a given time interval should not only reflect the signals (as a response to the corresponding stimuli) in that interval, but also account for its dependence on the graph and signals of the previous interval. To that end, we can account for the similarity of temporally adjacent graphs in the overall functionality of the sequence of graphs consistent with the observed data. Selecting a 1-norm distance between connectivity weight matrices of consecutive time intervals, we can proceed with the graph sequence discovery, and hence the dynamics, by seeking the solution to the following,

\begin{split}\operatorname*{argmin}_{\mathbf{X},\mathbf{W}_{t}}\quad&\sum_{t=1}^{T}\big{[}\|\mathbf{y}_{t}-\mathbf{x}_{t}\|^{2}+tr(\gamma\,\mathbf{x}_{t}^{T}\mathcal{L}(\mathbf{W}_{t})\mathbf{x}_{t})\big{]}+\alpha\sum_{t=1}^{T-1}\|\mathbf{W}_{t}-\mathbf{W}_{t+1}\|_{1}\\ \mathrm{s.t.}\quad&0\leq w_{t,ij}\leq 1,\quad\forall i,j,\quad\sum_{i,j>i}w_{t,ij}=K\end{split}   (2)

where $\alpha$ is the penalty coefficient, $\mathbf{Y}$ is the observed data, $\mathbf{X}=[\mathbf{x}_{1},\mathbf{x}_{2},\dots,\mathbf{x}_{T}]$ is the noise-free data, $\mathbf{W}_{t}$ is the weight matrix in the $t$-th time interval, and $K$ is a tuning parameter.
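The following sketch (again with hypothetical names, under the assumption that the per-interval signals are stored as a list of matrices) spells out how objective (2) differs from (1): the per-interval fidelity and smoothness terms are summed, and an $l_{1}$ penalty ties consecutive weight matrices together.

import numpy as np

def laplacian(W):
    return np.diag(W.sum(axis=1)) - W

def dynamic_objective(Y_list, X_list, W_list, gamma, alpha):
    # Y_list, X_list: per-interval signal matrices; W_list: per-interval weight matrices.
    loss = 0.0
    for Y_t, X_t, W_t in zip(Y_list, X_list, W_list):
        loss += np.sum((Y_t - X_t) ** 2)                        # data fidelity
        loss += gamma * np.trace(X_t.T @ laplacian(W_t) @ X_t)  # graph smoothness
    for W_t, W_next in zip(W_list, W_list[1:]):
        loss += alpha * np.abs(W_t - W_next).sum()              # ||W_t - W_{t+1}||_1
    return loss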

III-C Dynamic Graph Learning from Sparse Signals

The solution given by [11] addresses static graph learning, but the observed signals $y_{n}(t)$ may often be sparse, which poses a problem. Substituting $\mathcal{L}(\mathbf{W},x)=\sum_{i}\sum_{j>i}w_{ij}\mathbf{a}_{ij}\mathbf{a}^{T}_{ij}x$ into the Laplacian quadratic form yields terms of the form $w_{ij}\|\mathbf{x}_{i}-\mathbf{x}_{j}\|^{2}$, which measure the distance between two node signals; minimizing such terms favors connecting nodes whose signal values are similar within a small time interval. This also implies that if $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$ are both close to 0, their distance will also be close to 0, thus introducing unexpected false edges when sparse signals are present. As a simple illustration, let us assume that the sparse signals are rewritten as $\mathbf{Y}=[\tilde{\mathbf{Y}},\mathbf{0}]^{T}$, where $\mathbf{Y}$ is $N\times t$, $\tilde{\mathbf{Y}}$ is an $n\times t$ matrix and $\mathbf{0}$ is $(N-n)\times t$. Given that the 2-norm is non-negative and the Laplacian matrix is positive semi-definite, we can find a trivial optimal solution $(\mathbf{X},\mathbf{W})$, where $\mathbf{W}$ is sparse, such that $\mathbf{X}=\mathbf{Y}$ and the weight matrix is the block matrix $\mathbf{W}=\begin{bmatrix}\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{\tilde{W}}\end{bmatrix}$.
Since problem (2) is convex with a non-negative objective, it can be shown that the optimal loss value is 0 by inserting the solution $\mathbf{X}=\mathbf{Y}$ and the above $\mathbf{W}$ into it. This shows that when sparse signals are present (which happens to be the case for brain firing patterns), the solution to the formulated optimization may not be unique; furthermore, some of these optimal points yield connections between zero-signal nodes (i.e., meaningless connections per our understanding).
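A tiny numerical illustration of this degeneracy (all sizes and values chosen arbitrarily): when the entire edge budget is placed between the zero-signal nodes and $\mathbf{X}=\mathbf{Y}$, the fidelity and smoothness terms both vanish.

import numpy as np

def laplacian(W):
    return np.diag(W.sum(axis=1)) - W

# 4 nodes, 3 time points; the last two nodes are silent (all-zero rows).
Y = np.array([[1.0, 0.5, 0.2],
              [0.9, 0.6, 0.1],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
X = Y.copy()                 # X = Y makes the fidelity term vanish

K = 1.0
W = np.zeros((4, 4))
W[2, 3] = W[3, 2] = K        # all edge weight between the silent nodes

gamma = 1.0
loss = np.sum((Y - X) ** 2) + gamma * np.trace(X.T @ laplacian(W) @ X)
print(loss)                  # 0.0 -- a feasible yet meaningless optimum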
Towards mitigating this difficulty, we introduce a constraint term that helps focus on the nodes with significant values; specifically, we favor edges whose endpoint nodes carry significant signal energy. This yields the following formulation,

\begin{split}\operatorname*{argmin}_{\mathbf{X},\mathbf{W}_{t}}\quad&\sum_{t=1}^{T}\bigg{[}\|\mathbf{y}_{t}-\mathbf{x}_{t}\|^{2}+tr(\gamma\,\mathbf{x}_{t}^{T}\mathcal{L}(\mathbf{W}_{t})\mathbf{x}_{t})-2\eta\sum_{i,j>i}w_{ij}(\|\mathbf{x}_{t,i}\|^{2}+\|\mathbf{x}_{t,j}\|^{2})\bigg{]}+\alpha\sum_{t=1}^{T-1}\|\mathbf{W}_{t}-\mathbf{W}_{t+1}\|_{1}\\ \mathrm{s.t.}\quad&0\leq w_{ij}\leq 1,\quad\forall i,j,\quad\sum_{i,j>i}w_{ij}=K\end{split}   (3)

where $\eta$ is a penalty coefficient, and $\mathbf{x}_{t,i}$ is the $i$-th node signal in the $t$-th time interval. Since the weight matrix of an undirected graph is symmetric, this additional part of the new optimization can be simplified as follows:

\begin{split}{\mathcal{P}}&=-2\eta\sum_{i,j>i}w_{ij}(\|\mathbf{x}_{t,i}\|^{2}+\|\mathbf{x}_{t,j}\|^{2})\\ &=-tr(\mathbf{x}_{t}^{T}\eta\mathcal{D}(\mathbf{W}_{t})\mathbf{x}_{t})\end{split}   (4)

where $\mathcal{D}(\mathbf{W}_{t})$ is a diagonal matrix determined by the weight matrix $\mathbf{W}_{t}$, its diagonal collecting the weighted degrees of the nodes. Combining the two $tr(\cdot)$ expressions from Eqs. (3) and (4) yields the simpler form appearing in Eq. (6). With a little more attention, one may note that this procedure naturally prefers nodes of higher energy by associating them with higher weights.
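The simplification in (4) rests on the identity $\sum_{i,j>i}w_{ij}(\|\mathbf{x}_{i}\|^{2}+\|\mathbf{x}_{j}\|^{2})=\sum_{i}d_{i}\|\mathbf{x}_{i}\|^{2}$, with $d_{i}$ the weighted degree of node $i$; a quick numerical check is sketched below (the exact scaling absorbed into $\mathcal{D}(\mathbf{W}_{t})$ versus $\eta$ is our assumption about the convention used here).

import numpy as np

rng = np.random.default_rng(0)
N, m = 5, 4                                  # 5 nodes, signals of length 4 (arbitrary)
W = rng.random((N, N)); W = np.triu(W, 1); W = W + W.T   # random symmetric weights
X = rng.standard_normal((N, m))              # row i holds the node signal x_i

lhs = sum(W[i, j] * (X[i] @ X[i] + X[j] @ X[j])
          for i in range(N) for j in range(i + 1, N))

degrees = W.sum(axis=1)                      # d_i = sum_j w_ij
rhs = np.trace(X.T @ np.diag(degrees) @ X)   # = sum_i d_i ||x_i||^2

assert np.isclose(lhs, rhs)                  # the pairwise energy term is a degree-weighted trace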

IV Algorithmic Solution

The conventional use of Lagrangian duality for solving the above optimization model is costly in time and memory, and thus begs for an alternative.

IV-A Projection method

To address this difficulty, we view the constraints as defining a subset of the space of graphs: let $\mathcal{W}$ be the whole space of weight matrices with $\mathcal{W}_{ij}\geq 0$, and $W\subset\mathcal{W}$ the subset with $0\leq W_{ij}\leq 1$. We then introduce a projection method for projecting an updated $W_{t}\in\mathcal{W}$ onto the subset $W$. Considering an updated weight matrix as a point in a high-dimensional space, we minimize the distance between that point and the subset by solving $\min_{\tilde{W}_{t}}\frac{1}{2}\sum_{i,j>i}(\tilde{W}_{t,ij}-W_{t,ij})^{2}$, s.t. $\sum_{i,j>i}\tilde{W}_{t,ij}=K$ and $\tilde{W}_{t}\in W$. Applying Lagrangian duality to this minimization problem yields,
Claim:

\begin{split}L(\tilde{W}_{t},\kappa)=&\frac{1}{2}\sum_{i,j>i}(\tilde{W}_{t,ij}-W_{t,ij})^{2}+\kappa\Big{(}\sum_{i,j>i}\tilde{W}_{t,ij}-K\Big{)},\\ &\mathrm{s.t.}\ \tilde{W}_{t}\in W.\end{split}   (5)
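One common way to carry out this projection, consistent with the Lagrangian in (5), is to note that the KKT conditions give $\tilde{W}_{t,ij}=\min(\max(W_{t,ij}-\kappa,0),1)$ for the upper-triangular entries, with the scalar $\kappa$ chosen so that the budget constraint is met; $\kappa$ can then be found by bisection. The sketch below uses our own function names and assumes $K$ does not exceed the number of possible edges.

import numpy as np

def project_weights(W, K, tol=1e-9, max_iter=100):
    # Project a symmetric weight matrix onto {0 <= w_ij <= 1, sum_{i<j} w_ij = K}
    # by bisection on the multiplier kappa.
    iu = np.triu_indices_from(W, k=1)
    w = W[iu]                                # free (upper-triangular) entries

    def budget(kappa):
        return np.clip(w - kappa, 0.0, 1.0).sum()

    lo, hi = w.min() - 1.0, w.max()          # budget(lo) = #edges >= K, budget(hi) = 0
    for _ in range(max_iter):
        kappa = 0.5 * (lo + hi)
        if budget(kappa) > K:
            lo = kappa                       # budget too large: increase kappa
        else:
            hi = kappa
        if hi - lo < tol:
            break
    w_proj = np.clip(w - 0.5 * (lo + hi), 0.0, 1.0)

    W_proj = np.zeros_like(W)
    W_proj[iu] = w_proj
    return W_proj + W_proj.T                 # symmetrize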

IV-B Proximal operator

In light of the non-smoothness of the $l_{1}$ norm, we call on the proximal operator to handle this part [16]. Firstly, the $l_{1}$ term $\|W_{t}-W_{t+1}\|_{1}$ in optimization (3) may be affected by the order in which the $W_{t}$'s are updated. Therefore, to minimize the influence of the update order, we introduce new variables $Z_{t}$ to replace this term, along with the new constraint $Z_{t}=W_{t}-W_{t+1}$. With these new variables, updating a given $W_{t}$ is not influenced by the other weight matrices, and $Z_{t}$ provides the relaxation between each pair of adjacent weight matrices. The result is equivalent to the previous optimization problem, with the advantage of decreasing the impact of the variable-update order on the optimization.
Claim: As a result, the Lagrangian duality form of the optimization yields the following,

\begin{split}&L(W_{t},X_{t},\gamma,\eta,\alpha,\beta,\lambda)=\sum_{t=1}^{T}\|\mathbf{y}_{t}-\mathbf{x}_{t}\|^{2}\\ &+tr(\mathbf{x}_{t}^{T}(\gamma\mathcal{L}(\mathbf{W}_{t})-\eta\mathcal{D}(\mathbf{W}_{t}))\mathbf{x}_{t})+\alpha\sum_{t=1}^{T-1}\|Z_{t}\|_{1}\\ &+\langle\beta_{t},Z_{t}-W_{t}+W_{t+1}\rangle.\end{split}   (6)

Now consider the objective as a function of $Z_{t}$, denoted $f(Z_{t})=\alpha\|Z_{t}\|_{1}+\langle\beta_{t},Z_{t}\rangle$, which is convex but not smooth in $Z_{t}$. To handle the non-smooth point, we apply the proximal operator to update $Z_{t}$, projecting the point into the defined convex domain and moving it closer to the optimal point. The operator is defined as $\mathbf{prox}_{\lambda f}(V_{t})=\operatorname*{argmin}_{Z_{t}}f(Z_{t})+\frac{1}{2\lambda}\|Z_{t}-V_{t}\|_{2}^{2}$, where $\lambda$ is a tuning parameter. Clearly, $Z_{t}^{*}$ is the optimal point if and only if $Z_{t}^{*}=\mathbf{prox}_{\lambda f}(Z^{*}_{t})$; therefore, in the $k$-th iteration we update the variable as $Z^{(k)}_{t}=\mathbf{prox}_{\lambda f}(Z^{(k-1)}_{t})$.
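For this particular $f$, the proximal operator admits a closed form: completing the square in the prox objective shows that $\mathbf{prox}_{\lambda f}(V_{t})=\mathcal{S}_{\lambda\alpha}(V_{t}-\lambda\beta_{t})$, where $\mathcal{S}_{\tau}$ denotes entrywise soft-thresholding. A minimal sketch (our naming) follows.

import numpy as np

def soft_threshold(A, tau):
    # Entrywise soft-thresholding: S_tau(a) = sign(a) * max(|a| - tau, 0).
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def prox_step(V, beta, alpha, lam):
    # prox_{lambda f}(V) for f(Z) = alpha * ||Z||_1 + <beta, Z>:
    # completing the square shifts V by lam * beta, reducing the problem to the l1 prox.
    return soft_threshold(V - lam * beta, lam * alpha)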

IV-C Algorithm

The objective, viewed as a function of $X_{t}$, is smooth and convex, which allows us to compute its derivative with respect to $X_{t}$ and set it to 0, yielding

X^{(k)}_{t}=\big{(}\mathbf{I}+\gamma\mathcal{L}(W^{(k-1)}_{t})-\eta\mathcal{D}(W^{(k-1)}_{t})\big{)}^{-1}Y_{t}   (7)
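In practice one would solve the linear system rather than form the inverse explicitly; a sketch of update (7) follows, with $\mathcal{D}$ taken, as assumed above, to be the diagonal matrix of weighted degrees.

import numpy as np

def update_X(Y_t, W_t, gamma, eta):
    # Closed-form X update of (7): X_t = (I + gamma*L(W_t) - eta*D(W_t))^{-1} Y_t.
    degrees = W_t.sum(axis=1)
    L = np.diag(degrees) - W_t               # graph Laplacian
    D = np.diag(degrees)                     # diagonal degree matrix
    A = np.eye(W_t.shape[0]) + gamma * L - eta * D
    return np.linalg.solve(A, Y_t)           # avoids computing the explicit inverse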

Since the objective is smooth in each $W_{t}$, we use gradient descent to update each $W_{t}$. The whole procedure is presented in Algorithm 1.

Input: $Y_{t}$
Output: $X_{t},W_{t}$
$\alpha$, $\gamma$, $\eta$, $\lambda$ and learning rates $\tau_{1}$, $\tau_{2}$ are pre-defined.
while not converged do
  Update $X_{t}$ by (7)
  $W^{(k)}_{t}=W^{(k-1)}_{t}-\tau_{1}\frac{\partial}{\partial W_{t}}L(W_{t},X_{t},\gamma,\eta,\alpha,\beta,\lambda)|_{W_{t}^{(k-1)}}$
  Project $W_{t}^{(k)}$ onto the constraint set by the projection method.
  Update $\mathcal{L}(W_{t}^{(k)})$ and $\mathcal{D}(W_{t}^{(k)})$ by definition.
  Update $Z_{t}$ by the proximal operator.
  $\beta_{t}^{(k)}=\beta_{t}^{(k-1)}-\tau_{2}\frac{\partial}{\partial\beta_{t}}L(W_{t},X_{t},\gamma,\eta,\alpha,\beta,\lambda)|_{\beta_{t}^{(k-1)}}$
end while
Algorithm 1: Dynamic graph learning
Figure 1: Visual stimuli presented to the mouse in a single trial.

V Experiments and results

The data in these experiments were recorded in S. L. Smith's laboratory at UCSB [8], using two-photon microscopy [17] to collect fluorescence signals. The whole experiment consists of 3 specific scenarios, each measured over 20 trials, and the stimuli in each trial are the same. The stimuli for each of the scenarios are shown in Figure 1 and consist of a "gray" movie, an artificial movie, natural movie 1 and natural movie 2. The dataset includes 590 neurons in the V1 area and 301 neurons in the AL area, and the sample rate is approximately 13.3 Hz. To select the 50 most consistent neurons in the V1 area, we calculate the correlation between the signals of every 2 trials for each neuron, and we choose the 50 neurons with the highest mean correlation.
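This selection step can be sketched as follows (array names and shapes are hypothetical, assuming the recordings are stored as a trials x neurons x time array): for each neuron, average the correlation of its trace over every pair of trials, then keep the 50 neurons with the highest mean.

import numpy as np
from itertools import combinations

def select_consistent_neurons(data, n_keep=50):
    # data: array of shape (n_trials, n_neurons, n_timepoints).
    n_trials, n_neurons, _ = data.shape
    mean_corr = np.zeros(n_neurons)
    for n in range(n_neurons):
        corrs = [np.corrcoef(data[a, n], data[b, n])[0, 1]
                 for a, b in combinations(range(n_trials), 2)]
        mean_corr[n] = np.mean(corrs)
    return np.argsort(mean_corr)[::-1][:n_keep]   # indices of the most consistent neurons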

In addition to the similarity under similar stimuli, there is memory across changes of stimuli. The brain's reaction time to a stimulus is approximately 100 ms, the delay of the device is around 50 to 100 ms, and the time difference between 2 time points is 75 ms; therefore, we choose T=213 in the optimization model to capture changes within 150 ms, and we obtain 25 to 26 graphs for each stimulus. We choose K=30 to enforce a sparse graph, about 5 percent of the size of the complete graph. Applying the same parameters to the signals of the 20 trials, we obtain 8 graphs for each trial, with great variations observed between different trials. Therefore, we transform each weight matrix into an adjacency matrix by interpreting the weights as connection probabilities: we remove the edges with probability less than 0.5 and set the remaining edges' weights to 1; we then add the adjacency matrices from different trials in the same time interval and keep the edges whose summed value is greater than 5.
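The aggregation across trials described above can be sketched as follows (thresholds as stated in the text; variable names are ours).

import numpy as np

def aggregate_graphs(W_trials, prob_thresh=0.5, count_thresh=5):
    # W_trials: list of per-trial weight matrices learned for the same time interval.
    # Weights are read as connection probabilities: edges below prob_thresh are dropped,
    # the rest set to 1; the binary adjacencies are summed over trials and only edges
    # whose count exceeds count_thresh are kept.
    summed = sum((W >= prob_thresh).astype(int) for W in W_trials)
    return (summed > count_thresh).astype(int)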
We transform the weight matrix of each graph into a vector and calculate the correlation coefficient between every two such vectors; the element in the $i$-th row and $j$-th column of the resulting (symmetric) matrix is thus the correlation between the $i$-th and $j$-th graphs' weight vectors. In Fig. 2, the red dashed lines divide the plot into blocks, each representing the exact time interval corresponding to the specific stimulus shown on the left of the plot. The figure gives an intuitive view of the memory carried between consecutive stimuli and of the similarities of graphs activated by similar stimuli.
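The correlation matrix underlying Fig. 2 can be computed by vectorizing each learned weight matrix and correlating the resulting vectors; a sketch (our naming) follows.

import numpy as np

def graph_correlation_matrix(W_list):
    # Vectorize the upper triangle of each weight matrix and compute
    # the pairwise correlation coefficients between graphs.
    iu = np.triu_indices_from(W_list[0], k=1)
    vectors = np.stack([W[iu] for W in W_list])   # one row per graph
    return np.corrcoef(vectors)                   # symmetric graph-to-graph correlation matrix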

Figure 2: Correlations between graphs.

From this neuronal signal dataset, we observe variations of the neurons' connectivity across trials, while similar patterns are preserved for similar stimuli in the V1 area. By examining different time scales, we also show memory-like patterns carrying over from one stimulus to the next. These observations can be seen as a basic step toward studying the brain's functional connectivity in response to specific stimuli.

VI Conclusion

This paper introduces an optimization model for learning the dynamics of sparse graphs with sparse signals, based on the graph Laplacian and its smoothness assumption, without prior knowledge of the signals. By applying three alternative solution methods, the model learns a single graph over each short time interval and a set of graphs over the whole signal, and it can capture small changes of the graph within a brief time interval. In the experiments, we address the difficulty posed by the low sample rate when detecting graphs, and we discover the functional connectivity associated with specific stimuli rather than revealing the physical connections of neurons. Future research should explore more datasets and measurement methods, which will support further discoveries on how neurons collaborate with each other and how brains work. A future plan is to extend the model to learn the transformation of graphs.

References

  • [1] S. Jagadish and J. Parikh, “Discovery of friends using social network graph properties,” Jun. 3 2014, US Patent 8,744,976.
  • [2] Y. Huang, A. Panahi, H. Krim, and L. Dai, “Fusion of community structures in multiplex networks by label constraints,” in 2018 26th European Signal Processing Conference (EUSIPCO). IEEE, 2018, pp. 887–891.
  • [3] G. Audi, A. Wapstra, and C. Thibault, “The AME2003 atomic mass evaluation: (II). Tables, graphs and references,” Nuclear Physics A, vol. 729, no. 1, pp. 337–676, 2003.
  • [4] A. A. Canutescu, A. A. Shelenkov, and R. L. Dunbrack Jr, “A graph-theory algorithm for rapid protein side-chain prediction,” Protein science, vol. 12, no. 9, pp. 2001–2014, 2003.
  • [5] O. Sporns, Discovering the human connectome. MIT press, 2012.
  • [6] R. Polanía, W. Paulus, A. Antal, and M. A. Nitsche, “Introducing graph theory to track for neuroplastic alterations in the resting human brain: a transcranial direct current stimulation study,” Neuroimage, vol. 54, no. 3, pp. 2287–2296, 2011.
  • [7] G. K. Ocker, Y. Hu, M. A. Buice, B. Doiron, K. Josić, R. Rosenbaum, and E. Shea-Brown, “From the statistics of connectivity to the statistics of spike times in neuronal networks,” Current opinion in neurobiology, vol. 46, pp. 109–119, 2017.
  • [8] Y. Yu, J. N. Stirman, C. R. Dorsett, and S. L. Smith, “Mesoscale correlation structure with single cell resolution during visual coding,” bioRxiv, p. 469114, 2018.
  • [9] H. Sompolinsky, H. Yoon, K. Kang, and M. Shamir, “Population coding in neuronal systems with correlated noise,” Physical Review E, vol. 64, no. 5, p. 051904, 2001.
  • [10] H. P. Maretic, D. Thanou, and P. Frossard, “Graph learning under sparsity priors,” in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017, pp. 6523–6527.
  • [11] S. P. Chepuri, S. Liu, G. Leus, and A. O. Hero, “Learning sparse graphs under smoothness prior,” in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017, pp. 6508–6512.
  • [12] P. Goyal, S. R. Chhetri, and A. Canedo, “dyngraph2vec: Capturing network dynamics using dynamic graph representation learning,” Knowledge-Based Systems, 2019.
  • [13] P. Goyal, N. Kamra, X. He, and Y. Liu, “Dyngem: Deep embedding method for dynamic graphs,” arXiv preprint arXiv:1805.11273, 2018.
  • [14] X. Dong, D. Thanou, P. Frossard, and P. Vandergheynst, “Learning laplacian matrix in smooth graph signal representations,” IEEE Transactions on Signal Processing, vol. 64, no. 23, pp. 6160–6173, 2016.
  • [15] F. R. Chung and F. C. Graham, Spectral graph theory. American Mathematical Soc., 1997, no. 92.
  • [16] N. Parikh, S. Boyd et al., “Proximal algorithms,” Foundations and Trends® in Optimization, vol. 1, no. 3, pp. 127–239, 2014.
  • [17] N. Ji, J. Freeman, and S. L. Smith, “Technologies for imaging neural activity in large volumes,” Nature neuroscience, vol. 19, no. 9, p. 1154, 2016.
  • [18] S. Boyd and L. Vandenberghe, Convex optimization. Cambridge university press, 2004.
  • [19] F. Crick and C. Koch, “Are we aware of neural activity in primary visual cortex?” Nature, vol. 375, no. 6527, p. 121, 1995.
  • [20] V. Kalofolias, “How to learn a graph from smooth signals,” in Artificial Intelligence and Statistics, 2016, pp. 920–929.
  • [21] D. Wang, P. Cui, and W. Zhu, “Structural deep network embedding,” in Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2016, pp. 1225–1234.
  • [22] S. Cao, W. Lu, and Q. Xu, “Deep neural networks for learning graph representations,” in Thirtieth AAAI Conference on Artificial Intelligence, 2016.
  • [23] C. M. Niell and M. P. Stryker, “Modulation of visual responses by behavioral state in mouse visual cortex,” Neuron, vol. 65, no. 4, pp. 472–479, 2010.