
Latent Evolution Model for Change Point Detection in Time-varying Networks

Yongshun Gong, Xue Dong, Jian Zhang, Meng Chen
School of Software, Shandong University, Jinan, Shandong, China; University of Technology Sydney, Australia
Abstract

Graph-based change point detection (CPD) plays an irreplaceable role in discovering anomalous graphs in time-varying networks. While several techniques have been proposed to detect change points by identifying whether there is a significant difference between the target network and successive previous ones, they neglect the natural evolution of the network. In practice, real-world graphs such as social networks, traffic networks, and rating networks constantly evolve over time. With this in mind, we treat CPD as a prediction task and propose a novel CPD method for dynamic graphs via a latent evolution model. Our method focuses on learning the low-dimensional representations of networks and simultaneously capturing the evolving patterns of these learned latent representations. Once the evolving patterns are obtained, a prediction of the target network can be made. We then detect change points by comparing the prediction with the actual network through a trade-off strategy, which balances the importance of the prediction network against the normal graph pattern extracted from previous networks. Extensive experiments conducted on both synthetic and real-world datasets show the effectiveness and superiority of our model.

keywords:
Change point detection, Time-varying networks, Network prediction

1 Introduction

A time-varying network sheds light on the spatio-temporal information of a network (graph), representing both the structure of its nodes and their dynamic changes [1]. A stable and gradual development of such networks is essential to many real-world applications such as city-wide traffic management, weather detection, and detection of epidemic outbreaks [2, 3, 4, 5, 6]. As one of the anomaly detection tasks, change point detection (CPD) concentrates on identifying anomalous timestamps in temporal networks, i.e., those that deviate from the normal network evolution [7]. For example, in the Canadian parliament voting network, there is an anomalous voting pattern in 2013, as shown in the right part of Figure 1 (a). The graph in the red box reveals different voting interactions compared with the previous two years (2011 and 2012). CPD can identify anomalies like this from time-varying network sequences.

Nowadays, existing methods mainly compare the target network with a learned normal graph pattern [7, 8]. This normal pattern is extracted from previous networks in a given time window via a mean or global representation learning method [7, 9, 10]. Even though these methods have achieved encouraging results on synthetic tests, most of them neglect the real-world dynamic evolution patterns. In other words, they are suitable for relatively stable environments [11], but not always able to capture anomalies in temporal networks (e.g., traffic networks), as they do not take into account the evolution trend across the temporal networks [12, 13, 14]. For example, the upper part of Figure 2 illustrates the temporal changes of crowd flow in the Sydney traffic network. If these traffic maps are embedded into a latent space, their latent representations could be presented as in the bottom part of this figure. There is a clear upward trend in the real-world traffic network. Considering change point detection for the next timestamp, we aim to capture the time-evolving trend and make an accurate prediction of the traffic map at $T_{next}$. Once a prediction (represented by the black dotted circle) is obtained, we can compare it with the real map representation (the green dotted circle) at $T_{next}$. However, if we use an average or global method to compute the normal graph pattern of previous networks, we can only capture the latent representation illustrated by the red dotted circle. Apparently, $gap_{2}$ is much larger than $gap_{1}$, suggesting that extracting only the previous normal graph pattern based on the average method leads to a distorted result. Moreover, a time-varying network is reflected not only by link connections but also by weight changes. Even though a few methods can capture changes in link connections, they do not consider dynamic weights [15].

Figure 1: Canadian parliament voting networks. In 2011 and 2012, most of the votes gathered at one node, while the voting network changed in 2013, showing two aggregation points. This indicates that 2013 is a change point of the Canadian parliament voting network.
Figure 2: CPD with time-varying prediction strategy.

Motivated by the above issues, we draw inspiration from several latent space prediction models [16, 17] and propose a new evolution method for change point detection. One purpose of this paper is to capture the evolution trend of a time-varying network. Once the trend is learned, we can make a prediction of the network with the evolving patterns. Accordingly, the difference between the prediction and the actual network can be used as a criterion to identify change points. After casting the problem as a prediction task, two challenges arise that need to be carefully handled.

  • 1.

The first is how to capture temporal dependencies and evolution trends.

  • 2.

    The second is how to identify the anomalous network by considering the prediction result.

To address these challenges, we develop a prediction model for the time-varying network and propose a trade-off strategy, based on the prediction result and the previous normal graph pattern, to detect change points. Specifically, we first embed the original network into latent spaces via a matrix factorization model. Our method then goes a step further to capture the dynamic movements of these latent spaces from the current timestamp to the next via a latent transition learning process. Once the latent spaces and transition matrices are learned, we can make a prediction of the target network. Second, we choose the Laplacian spectrum strategy to compare the differences between the prediction and the actual network. Finally, the normal graph pattern learned from previous networks is taken into consideration via a trade-off strategy to identify whether the target network is a change point. We summarize the main contributions and innovations of this paper as follows:

  • 1.

Unlike existing methods that detect change points directly [3, 7, 8], we formulate the CPD problem as a temporal network prediction task, and propose a data-driven forecasting method via a latent evolution model, called LEM-CPD, which learns network evolution trends over time.

  • 2.

    To improve the effectiveness and robustness of the model in various real-world scenarios, we further take long-term and periodic properties into consideration, and propose a novel long-term guidance method based on the weighted multi-view learning.

  • 3.

    We develop a trade-off strategy to identify change points based on the Laplacian spectrum method, which balances the importance between the prediction network and the normal graph pattern extracted from previous networks.

2 Related Work

2.1 Change Point Detection

In this section, we briefly introduce the related CPD literature, which can be used to discover outliers in networks and sequences [18, 19, 20, 21]. [22] provided the first attempt at transforming the CPD problem into detecting the times of fundamental changes. The proposed LetoChange method used Bayesian hypothesis testing to determine whether a model with parameter changes is acceptable. [3] proposed a Markov-process-based method, which treats the time-varying network as a first-order Markov process. This work used joint edge probabilities as the "feature vector". [10] proposed a tensor-based CPD model; tensor factorization is widely recognized as an effective method to capture a low-rank representation in the time dimension. In this method, after obtaining the latent attributes of each time point, clustering and outlier detection methods were used to identify change points. The activity vector method was proposed by [9]; it focuses on finding anomalous networks that deviate significantly from previous normal ones. The activity vector leverages the principal eigenvector of the adjacency matrix to represent each network, and it considers a short-term window to capture the normal graph pattern. Recently, [7] proposed a Laplacian-based detection method for CPD, named LAD. It utilised the Laplacian matrix and SVD to obtain an embedding vector for each network. In addition, two window sizes are involved to capture both short- and long-term properties. [15] proposed an efficient approach for network change point localization. However, this method only considers changes in link connections and cannot capture the dynamics of weighted graphs.

As discussed above, most of the existing relevant methods do not fully consider the dynamic trends in time-varying networks when detecting change points.

2.2 Dynamic Latent Space Model

Recent studies have indicated that latent space learning methods provide excellent solutions for dynamic network problems, e.g., flow prediction [17], community detection systems [23], social network evolution learning [24], sequential analysis [25, 26, 27, 28], and traffic speed prediction [16]. The benefit of a latent space model is to obtain low-rank approximations of the original features, thereby producing a more compact matrix by removing the redundant information of the large-scale matrix [29, 30]. [31] proposed a dynamic version of the static co-occurrence model CODE, where the coordinates can change with time to fit new observations. Several advanced investigations focus on traffic analysis under the latent space assumption. For example, to address the issue of finding abnormal activities in crowded scenes, [32] devised a spatio-temporal Laplacian eigenmap strategy to detect anomalies. [33] leveraged a tensor structure to resolve the high-order time-varying prediction problem; the key characteristics of the original time sequences can be represented by the learned latent core tensor. [16] developed a real-time traffic speed prediction model that learns temporal and topological features from the road network, and then used an online latent space learning method to solve the proposed constrained optimization problem. [34] took advantage of dynamic latent space learning to address the temporal analysis problem very efficiently on large-scale streaming data. In other applications, [4, 17] used online matrix factorization models to solve the crowd flow distribution problem.

In summary, latent space models can effectively capture dynamic evolving trends from real-world networks, as well as learn latent embeddings for such networks.

3 Problem Formulation

In the CPD problem, we use a directed graph $\bm{G}_{t}=(\bm{V}_{t},\bm{E}_{t})$ to define the network at timestamp $t$, where $\bm{E}_{t}$ is the set of edges and $\bm{V}_{t}$ denotes the set of vertices. An edge $e_{t}(v_{i},v_{j},w_{ij})$ denotes the connection between node $v_{i}$ and node $v_{j}$ with weight $w_{ij}$ at timestamp $t$. Following the same setting as in [7, 10], $w=1$ for all edges in unweighted graphs and $w\in\mathbb{R}^{+}$ for weighted graphs; the number of nodes in the graph is assumed to be constant across all timestamps.

Given a certain time window $T$, our work first aims to predict the next network $\bm{G}_{T+1}$ by using a series of previous networks, $\{\bm{G}_{t}\}_{t=1}^{T}$. Second, the next snapshot $\bm{G}_{T+1}$ is identified as a change point or not via a trade-off strategy that leverages the prediction $\hat{\bm{G}}_{T+1}$ and the previous normal graph pattern. Here, the normal graph pattern denotes the average network embedding learned from the previous $\{\bm{G}_{t}\}_{t=1}^{T}$ in a given time window. The detailed learning method is presented in Section 4.5.

4 Methodology

4.1 Temporal Topology Embedding

Considering the inherent complexities of time-varying networks, we leverage the latent space model to represent dynamic graphs. Latent space models are widely applied to network-wide problems, e.g., community detection [23], social and heterogeneous network construction [1], and network-wide traffic prediction [17, 35]. The benefit of the low-rank approximation in a latent space model is to obtain a more compact matrix by removing the redundant information of the large-scale matrix [36]. For a slice $\bm{G}\in\mathbb{R}^{n\times n}_{+}$ (the timestamp $t$ is omitted for brevity), we use tri-factorization that decomposes $\bm{G}$ into three latent representations in the form $\bm{U}\bm{C}\bm{V}$, where $\bm{U}\in\mathbb{R}^{n\times k}_{+}$ and $\bm{V}\in\mathbb{R}^{k\times n}_{+}$ denote the latent attributes of start and end nodes respectively, $\bm{C}\in\mathbb{R}^{k\times k}_{+}$ denotes the attribute interaction patterns, $n$ is the number of nodes, and $k$ is the dimension of the latent space. Therefore, we approximate $\bm{G}$ by the non-negative matrix tri-factorization form $\bm{U}\bm{C}\bm{V}$:

\underset{\bm{U},\bm{C},\bm{V}\geq 0}{{\rm min}}\;||\bm{G}-\bm{U}\bm{C}\bm{V}||_{F}^{2}. (1)

Note that some tri-factorization methods prefer the factorization form $\bm{G}\approx\bm{U}\bm{C}\bm{U}^{\top}$ because they usually focus on undirected graphs [23, 37]. However, in our problem, some real-world networks are formulated as directed graphs. In this case, $\bm{G}\approx\bm{U}\bm{C}\bm{V}$ provides a more flexible formulation to represent the graph $\bm{G}$ [38, 39]. Thus, our basic problem formulation differs from that of the methods in [16, 4, 17].

The reasons that we utilise the non-negativity constraint are: 1) the reasonable assumptions of latent attributes and the better interpretability of the results [40, 41]; 2) all weights in our networks are non-negative, thus the latent spaces $\bm{U}$, $\bm{V}$ and $\bm{C}$ should be non-negative as well.

To take the time-series pattern into consideration, we formulate this sequential problem as a time-dependent model. Given a certain time window $T$ (i.e., a set containing the $T$ previous networks, where $T$ also denotes the current time), for each timestamp $t\in[1,T]$, our method aims to learn the time-dependent latent representations $\bm{U}_{t}$, $\bm{V}_{t}$ and $\bm{C}$ from $\bm{G}_{t}$. Although the latent attribute matrix $\bm{U}_{t}$ is time-dependent, we assume that the latent attribute interaction matrix $\bm{C}$ is an inherent property, and thus $\bm{C}$ is fixed for all timestamps. This assumption is based on the fact that temporal networks possess invariant properties [16]. After considering the temporal pattern, our model can be formulated as:

\underset{\bm{U}_{t},\bm{C},\bm{V}_{t}\geq 0}{{\rm min}}\;\mathcal{J}=\sum_{t=1}^{T}||\bm{G}_{t}-\bm{U}_{t}\bm{C}\bm{V}_{t}||_{F}^{2}. (2)
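For concreteness, the following is a minimal NumPy sketch of the objective $\mathcal{J}$ in Eq. (2); the function and variable names are illustrative, not part of the original implementation.

```python
import numpy as np

def reconstruction_loss(G_seq, U_seq, C, V_seq):
    """J in Eq. (2): sum_t ||G_t - U_t C V_t||_F^2.

    G_seq: list of T adjacency matrices, each (n, n)
    U_seq: list of T start-node factors, each (n, k)
    C:     shared (k, k) attribute interaction matrix
    V_seq: list of T end-node factors, each (k, n)
    """
    return sum(np.linalg.norm(G - U @ C @ V, "fro") ** 2
               for G, U, V in zip(G_seq, U_seq, V_seq))
```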

4.2 Latent Space Evolution

Since our model is a time-varying forecasting system, the network is continually changing over time. We not only want to learn the time-dependent latent attributes, but also aim to learn the evolution patterns of the latent attributes $\bm{U}_{t}$ and $\bm{V}_{t}$ from previous timestamps to the next, i.e., how to learn evolution forms such as $\bm{U}_{t}\rightarrow\bm{U}_{t+1}$ and $\bm{V}_{t}\rightarrow\bm{V}_{t+1}$.

To this end, we define two transition matrices $\bm{A}\in\mathbb{R}_{+}^{k\times k}$ and $\bm{B}\in\mathbb{R}_{+}^{n\times n}$ to represent the smooth trends of $\bm{U}_{t}$ and $\bm{V}_{t}$ over the $T$ previous timestamps. The learned transition patterns $\bm{A}$ and $\bm{B}$ can be treated as a global evolution trend extracted from previous timestamps; thus, $\bm{A}$ and $\bm{B}$ can be recognized as the evolving patterns of the dynamic network. For example, $\bm{A}$ approximates the change of $\bm{U}$ between time $t-1$ and $t$, i.e., it optimizes $\bm{U}_{t}=\bm{U}_{t-1}\bm{A}$.

Then, involving all timestamps in a window of size $T$, our latent transition learning can be expressed as:

\underset{\bm{U}_{t},\bm{V}_{t},\bm{A},\bm{B},\bm{C}\geq 0}{{\rm min}}\;\mathcal{T}=\sum_{t=2}^{T}(||\bm{U}_{t}-\bm{U}_{t-1}\bm{A}||_{F}^{2}+||\bm{V}_{t}-\bm{V}_{t-1}\bm{B}||_{F}^{2}). (3)

When we consider the temporal topology problem jointly, we have:

\underset{\bm{U}_{t},\bm{V}_{t},\bm{A},\bm{B},\bm{C}\geq 0}{{\rm min}}\;\mathcal{L}_{1}=\mathcal{J}+\lambda_{1}\mathcal{T}, (4)

where $\lambda_{1}$ is the regularization parameter.
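As a companion to the sketch after Eq. (2), the following hedged NumPy fragment evaluates the transition loss $\mathcal{T}$ of Eq. (3) and the joint objective $\mathcal{L}_{1}$ of Eq. (4); it reuses `reconstruction_loss` from the earlier sketch, and all names are illustrative.

```python
import numpy as np

def transition_loss(U_seq, V_seq, A, B):
    """T in Eq. (3): how well the shared transitions A and B explain the
    one-step evolution U_{t-1} -> U_t and V_{t-1} -> V_t within the window."""
    return sum(np.linalg.norm(U_seq[t] - U_seq[t - 1] @ A, "fro") ** 2
               + np.linalg.norm(V_seq[t] - V_seq[t - 1] @ B, "fro") ** 2
               for t in range(1, len(U_seq)))

def joint_loss(G_seq, U_seq, C, V_seq, A, B, lam1):
    """L_1 = J + lambda_1 * T in Eq. (4)."""
    return (reconstruction_loss(G_seq, U_seq, C, V_seq)
            + lam1 * transition_loss(U_seq, V_seq, A, B))
```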

At this stage, the transition matrices $\bm{A}$ and $\bm{B}$ can learn short-term knowledge from previous networks within a window. Considering that long-term knowledge is helpful for the CPD and network evolution problems [3, 17], we also take the long-term pattern into consideration to improve the prediction performance in the next section.

4.3 Long-term Guidance

Dynamic networks, such as climate and travel networks, usually have a strong periodic property. Such a property is considered as long-term guidance, which exhibits more gradual trends. For this reason, we can leverage a long-term window to capture large-scale network evolution, and we propose a novel long-term guidance method to extract periodic and stable information. Given a long-term set of networks $\{\bm{G}_{t}\}_{t=1}^{4T}$ (the window size extends from $T$ to $4T$ for example, so the long-term guidance considers the states of the $4T$ previous networks), we learn two guide matrices, $\bm{U}^{\rm lt}$ and $\bm{V}^{\rm lt}$, to represent the common characteristics of $\{\bm{G}_{t}\}_{t=1}^{4T}$:

\underset{\bm{U}^{\rm lt},\bm{V}^{\rm lt}\geq 0}{{\rm min}}\;\sum_{t=1}^{4T}||r_{t}\odot(\bm{G}_{t}-\bm{U}^{\rm lt}\bm{V}^{\rm lt})||_{F}^{2}, (5)

where $r_{t}$ is the adaptive weight calculated by $r_{t}=\frac{e^{s_{t}}}{\sum^{4T}_{t=1}e^{s_{t}}}$, and $s_{t}$ is the similarity between $\bm{G}_{t}$ and the current network $\bm{G}_{4T}$. We use the matrix L2-norm to calculate the similarities among networks, and then normalize them such that $\sum_{t=1}^{4T}s_{t}=1$. The assumption behind this is that $\bm{G}_{t}$ can provide more useful information if it is more similar to $\bm{G}_{4T}$. $\odot$ is the Hadamard product operator.
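The sketch below shows one plausible reading of the adaptive weights $r_{t}$: since the paper does not spell out the exact similarity form, we assume $s_{t}$ is the negative matrix L2 (Frobenius) distance to the current network before the softmax, and treat this as an assumption rather than the authors' exact formula.

```python
import numpy as np

def adaptive_weights(G_seq):
    """r_t in Eq. (5). Assumption: s_t is the negative Frobenius distance
    between G_t and the current network G_{4T}; the softmax then makes the
    weights non-negative and sum to one."""
    G_cur = G_seq[-1]
    s = np.array([-np.linalg.norm(G - G_cur, "fro") for G in G_seq])
    e = np.exp(s - s.max())  # subtract the max for numerical stability
    return e / e.sum()
```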

The main idea of the above process is analogous to multi-view learning, which projects multiple views into shared common spaces [42, 21]. The adaptive weight $r_{t}$ controls how much information can be extracted from $\bm{G}_{t}$; this technique has been successfully used in common space learning [43, 44]. The update rules for Eq. (5) are:

\bm{U}^{\rm lt}=\bm{U}^{\rm lt}\odot\frac{\sum_{t=1}^{4T}(r_{t}\odot\bm{G}_{t})\bm{V}^{\rm lt\top}}{\sum_{t=1}^{4T}[r_{t}\odot(\bm{U}^{\rm lt}\bm{V}^{\rm lt})]\bm{V}^{\rm lt\top}}, (6)
\bm{V}^{\rm lt}=\bm{V}^{\rm lt}\odot\frac{\sum_{t=1}^{4T}\bm{U}^{\rm lt\top}(r_{t}\odot\bm{G}_{t})}{\sum_{t=1}^{4T}\bm{U}^{\rm lt\top}[r_{t}\odot(\bm{U}^{\rm lt}\bm{V}^{\rm lt})]}. (7)

After obtaining $\bm{U}^{\rm lt}$ and $\bm{V}^{\rm lt}$, the strategy of the long-term guidance is formulated as:

\underset{\bm{U}_{T},\bm{V}_{T},\bm{A},\bm{B}\geq 0}{{\rm min}}\;\mathcal{H}=||\bm{U}^{\rm lt}-\bm{U}_{T}\bm{A}||_{F}^{2}+||\bm{V}^{\rm lt}-\bm{V}_{T}\bm{B}||_{F}^{2}, (8)

where $\bm{U}_{T}$ and $\bm{V}_{T}$ denote the latent representations at the current timestamp $T$.

Taking all techniques jointly into consideration, the final optimization function is expressed as:

\underset{\bm{U}_{t},\bm{V}_{t},\bm{A},\bm{B},\bm{C}\geq 0}{{\rm min}}\;\mathcal{L}=\mathcal{J}+\lambda_{1}\mathcal{T}+\lambda_{2}\mathcal{H}, (9)

where $\lambda_{2}$ is the regularization factor.

Making a Prediction. The learned latent space matrices $\bm{U}_{T}$ and $\bm{V}_{T}$, together with the evolving patterns $\bm{A}$, $\bm{B}$ and $\bm{C}$, can be used to make the prediction after solving Equation (9). We then obtain the predicted network $\hat{\bm{G}}_{T+1}$ as:

\hat{\bm{G}}_{T+1}=(\bm{U}_{T}\bm{A})\bm{C}(\bm{V}_{T}\bm{B}). (10)
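Eq. (10) is a single matrix expression; a minimal sketch (illustrative names) is:

```python
import numpy as np

def predict_next(U_T, V_T, A, B, C):
    """Eq. (10): evolve the latest factors one step with the learned
    transitions A and B, then reassemble the predicted network."""
    return (U_T @ A) @ C @ (V_T @ B)
```

For an $n$-node network with latent dimension $k$, the result is an $(n,n)$ non-negative matrix directly comparable to the actual $\bm{G}_{T+1}$.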

4.4 Learning Process

Due to the non-convexity of Eq. (9), we use an effective gradient descent approach with the multiplicative update method [45] to find a local optimum.

Theorem 1

$\mathcal{L}$ is non-increasing when $\bm{U}_{t}$, $\bm{V}_{t}$, $\bm{A}$, $\bm{B}$ and $\bm{C}$ are optimized alternately.

\bm{U}_{t}=\bm{U}_{t}\odot\frac{\bm{G}_{t}(\bm{C}\bm{V}_{t})^{\top}+\lambda_{1}(\bm{U}_{t-1}\bm{A}+\bm{U}_{t+1}\bm{A}^{\top})+\hat{\lambda_{2}}\bm{U}^{\rm lt}\bm{A}^{\top}}{(\bm{U}_{t}\bm{C}\bm{V}_{t})(\bm{C}\bm{V}_{t})^{\top}+\lambda_{1}(\bm{U}_{t}+\bm{U}_{t}\bm{A}\bm{A}^{\top})+\hat{\lambda_{2}}\bm{U}_{T}\bm{A}\bm{A}^{\top}}, (11)
\bm{V}_{t}=\bm{V}_{t}\odot\frac{(\bm{U}_{t}\bm{C})^{\top}\bm{G}_{t}+\lambda_{1}(\bm{V}_{t-1}\bm{B}+\bm{V}_{t+1}\bm{B}^{\top})+\hat{\lambda_{2}}\bm{V}^{\rm lt}\bm{B}^{\top}}{(\bm{U}_{t}\bm{C})^{\top}(\bm{U}_{t}\bm{C}\bm{V}_{t})+\lambda_{1}(\bm{V}_{t}+\bm{V}_{t}\bm{B}\bm{B}^{\top})+\hat{\lambda_{2}}\bm{V}_{T}\bm{B}\bm{B}^{\top}}, (12)

where $\hat{\lambda_{2}}$ is given by:

\hat{\lambda_{2}}=\left\{\begin{matrix}\lambda_{2},&t=T\\ 0,&{\rm otherwise}\end{matrix}\right. (13)
\bm{A}=\bm{A}\odot\frac{\lambda_{1}\sum_{t=2}^{T}\bm{U}_{t-1}^{\top}\bm{U}_{t}+\lambda_{2}\bm{U}_{T}^{\top}\bm{U}^{\rm lt}}{\lambda_{1}\sum_{t=2}^{T}\bm{U}_{t-1}^{\top}(\bm{U}_{t-1}\bm{A})+\lambda_{2}\bm{U}_{T}^{\top}(\bm{U}_{T}\bm{A})}, (14)
\bm{B}=\bm{B}\odot\frac{\lambda_{1}\sum_{t=2}^{T}\bm{V}_{t-1}^{\top}\bm{V}_{t}+\lambda_{2}\bm{V}_{T}^{\top}\bm{V}^{\rm lt}}{\lambda_{1}\sum_{t=2}^{T}\bm{V}_{t-1}^{\top}(\bm{V}_{t-1}\bm{B})+\lambda_{2}\bm{V}_{T}^{\top}(\bm{V}_{T}\bm{B})}, (15)
\bm{C}=\bm{C}\odot\frac{\sum_{t=1}^{T}\bm{U}_{t}^{\top}\bm{G}_{t}\bm{V}_{t}^{\top}}{\sum_{t=1}^{T}\bm{U}_{t}^{\top}(\bm{U}_{t}\bm{C}\bm{V}_{t})\bm{V}_{t}^{\top}}, (16)

where $X^{\top}$ is the transpose of $X$. The proof of Theorem 1 and the detailed derivations are deferred to the Appendix.

Note that the update rules in Eqs. (11)-(12) describe a time-related learning process: the information from the previous and next timestamps ($\bm{U}_{t-1}$ and $\bm{U}_{t+1}$) affects each update. These chained update rules ensure that all variables are learned jointly.
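As an illustration of the multiplicative-update style used above, the following hedged NumPy sketch implements the update for the shared matrix $\bm{C}$ in Eq. (16); the small epsilon guarding the denominator is our addition, not part of the paper.

```python
import numpy as np

EPS = 1e-10  # our addition: keeps the denominator strictly positive

def update_C(G_seq, U_seq, C, V_seq):
    """Multiplicative update of Eq. (16). Because every term in the ratio is
    non-negative, the updated C remains non-negative."""
    num = sum(U.T @ G @ V.T for G, U, V in zip(G_seq, U_seq, V_seq))
    den = sum(U.T @ (U @ C @ V) @ V.T for U, V in zip(U_seq, V_seq))
    return C * (num / (den + EPS))
```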

4.5 Change Point Detection

By computing the predicted network $\hat{\bm{G}}_{T+1}$ for timestamp $T+1$, we can compare it with the actual network $\bm{G}_{T+1}$ to check whether $\bm{G}_{T+1}$ is a change point or not. Specifically, we use the Laplacian-spectrum-based method, which has shown state-of-the-art performance in graph comparisons [46]. The eigenvalues of the Laplacian matrix capture the structural properties of the corresponding graph. Note that real-world applications contain both undirected and directed graphs. For undirected graphs, we construct a normalized Laplacian matrix $\bm{L}_{t}$, defined as $\bm{L}_{t}=\bm{I}-\bm{D}_{t}^{-1/2}\bm{W}_{t}\bm{D}_{t}^{-1/2}$, where $\bm{I}$ is the identity matrix, $\bm{W}_{t}$ is the weight matrix of $\bm{G}_{t}$, and $\bm{D}_{t}$ is the diagonal degree matrix with $\bm{D}_{t,(i,i)}=\sum_{j}\bm{G}_{t,(i,j)}$. For directed graphs, we leverage the method proposed in [47] to build the Laplacian matrix:

\bm{P}_{i,j}=\frac{w_{i,j}}{\sum_{j}w_{i,j}}, (17)
\bm{L}=\bm{I}-(\bm{\phi}^{1/2}\bm{P}\bm{\phi}^{-1/2}+\bm{\phi}^{-1/2}\bm{P}^{T}\bm{\phi}^{1/2})/2, (18)

where $\bm{\phi}$ is the diagonal matrix containing the Perron vector [48] of $\bm{P}$, and $\bm{P}$ is a transition probability matrix [47].
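A minimal sketch of Eqs. (17)-(18) follows, assuming the graph is strongly connected (so the Perron vector is strictly positive) and using power iteration for the stationary vector; the function name and iteration parameters are our own.

```python
import numpy as np

def directed_laplacian(W, n_iter=1000, tol=1e-10):
    """Directed Laplacian of Eqs. (17)-(18) [47]. Assumes W is non-negative
    with no all-zero rows and that the graph is strongly connected."""
    P = W / W.sum(axis=1, keepdims=True)      # Eq. (17): row-stochastic P
    phi = np.full(len(W), 1.0 / len(W))
    for _ in range(n_iter):                   # Perron vector: phi^T P = phi^T
        nxt = phi @ P
        nxt /= nxt.sum()
        if np.abs(nxt - phi).max() < tol:
            phi = nxt
            break
        phi = nxt
    Phi_p = np.diag(np.sqrt(phi))             # phi^{1/2}
    Phi_m = np.diag(1.0 / np.sqrt(phi))       # phi^{-1/2}
    return np.eye(len(W)) - (Phi_p @ P @ Phi_m + Phi_m @ P.T @ Phi_p) / 2
```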

Given the Laplacian matrices for all timestamps, i.e., $\bm{L}_{1},\cdots,\bm{L}_{T+1}$ and $\hat{\bm{L}}_{T+1}$, where $\bm{L}_{T+1}$ is built from the actual network $\bm{G}_{T+1}$ and $\hat{\bm{L}}_{T+1}$ from the prediction $\hat{\bm{G}}_{T+1}$, we follow a well-performing metric that utilises the singular values of the Laplacian matrices $\bm{L}_{t}$ as network attributes [7]. The vector of singular values is denoted as $\overrightarrow{\bm{\sigma}}_{t}$, $t\in[1,T+1]$. Let $\overrightarrow{\bm{\sigma}}_{a}$ ($\overrightarrow{\bm{\sigma}}_{T+1}$) denote the vector of the actual network, and $\overrightarrow{\bm{\sigma}}_{p}$ (the singular values of $\hat{\bm{L}}_{T+1}$) the vector of the prediction network. The first anomaly score $\bm{Z}_{1}$, based on the cosine distance, is defined as:

\bm{Z}_{1}=1-\frac{\overrightarrow{\bm{\sigma}}^{\top}_{p}\overrightarrow{\bm{\sigma}}_{a}}{||\overrightarrow{\bm{\sigma}}_{p}||_{2}||\overrightarrow{\bm{\sigma}}_{a}||_{2}}. (19)

Evidently, the more dissimilar the prediction and the actual network are, the closer $\bm{Z}_{1}$ is to 1, and vice versa. (The range of $\bm{Z}_{1}$ is [0,1] because of the non-negativity of $\overrightarrow{\bm{\sigma}}$.)

Trade-off Strategy

The score $\bm{Z}_{1}$ reflects the difference between the prediction and the ground truth. This evaluation is more suitable for temporal networks that evolve gradually. However, the normal graph pattern extracted from the previous networks is also important since it reflects the average evolution representation. Thus we introduce a second anomaly score $\bm{Z}_{2}$ that considers the normal graph pattern. Let $\overrightarrow{\bm{\sigma}}_{nor}$ be the average vector of the previous snapshots, i.e., $\overrightarrow{\bm{\sigma}}_{nor}=(\sum_{t=1}^{T}\overrightarrow{\bm{\sigma}}_{t})/T$; $\bm{Z}_{2}$ is defined as:

\bm{Z}_{2}=1-\frac{\overrightarrow{\bm{\sigma}}^{\top}_{nor}\overrightarrow{\bm{\sigma}}_{a}}{||\overrightarrow{\bm{\sigma}}_{nor}||_{2}||\overrightarrow{\bm{\sigma}}_{a}||_{2}}. (20)

Finally, the combined anomaly score $\bm{Z}$ is a trade-off between $\bm{Z}_{1}$ and $\bm{Z}_{2}$:

\bm{Z}=\alpha\bm{Z}_{1}+(1-\alpha)\bm{Z}_{2}, (21)

where $\alpha$ is the trade-off factor. Algorithm 1 summarizes the learning and detection process of LEM-CPD.
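The whole detection step of Eqs. (19)-(21) fits in a short sketch; the helper names are illustrative, and the singular values are obtained with a plain SVD as in [7].

```python
import numpy as np

def anomaly_score(L_seq, L_actual, L_pred, alpha=0.2):
    """Trade-off anomaly score of Eqs. (19)-(21). Z_2 uses the average
    spectrum of the previous T Laplacians as the normal graph pattern."""
    def spectrum(L):
        return np.linalg.svd(L, compute_uv=False)

    def cos_dist(a, b):
        return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    sigma_a = spectrum(L_actual)                       # actual network
    sigma_p = spectrum(L_pred)                         # prediction
    sigma_nor = np.mean([spectrum(L) for L in L_seq], axis=0)
    Z1 = cos_dist(sigma_p, sigma_a)                    # Eq. (19)
    Z2 = cos_dist(sigma_nor, sigma_a)                  # Eq. (20)
    return alpha * Z1 + (1 - alpha) * Z2               # Eq. (21)
```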

4.6 Complexity Analysis

Here we analyse the time complexity of the LEM-CPD algorithm. After the prediction process, the time complexity of the change point detection is mainly affected by the SVD. Even though randomized SVD has $O(n^{2}k)$ complexity, it is not involved in the update loop of the variables (such as $\bm{U}_{t}$ and $\bm{V}_{t}$). In the prediction process, the time complexity of Eqs. (11)-(12) is dominated by the matrix multiplication operations in each iteration. Therefore, the computational complexity of LEM-CPD is $O(Tkn^{2})$ per iteration, which is higher than that of the latest CPD method LAD ($O(n^{2}k)$) [7].

Input: network matrices [$G_{1},\cdots,G_{T}$]; actual network $G_{T+1}$; long-term guidance $U^{\rm lt}$, $V^{\rm lt}$.
Output: $Z$ score.
1 initialize $U_{t}$, $V_{t}$, $A$, $B$ and $C$ with the previous detection values.
2 for Iter = 1 to Max do
3   if $|\mathcal{L}_{i}-\mathcal{L}_{i+1}|/\mathcal{L}_{i}\geq\varepsilon$ then
4     for t = 1 to T do
5       update $U_{t}$ and $V_{t}$ by Equations (11) and (12).
6     update $A$, $B$ and $C$ by Equations (14) - (16).
7   else
8     break
9 Prediction network $\hat{G}_{T+1}=(U_{T}A)C(V_{T}B)$.
10 Calculate $L_{1},\cdots,L_{T+1}$ and $\hat{L}_{T+1}$.
11 Obtain $\overrightarrow{\sigma}_{t}$, $\overrightarrow{\sigma}_{a}$, $\overrightarrow{\sigma}_{p}$ and $\overrightarrow{\sigma}_{nor}$.
12 Calculate $Z$ score by Equations (19) - (21).
Algorithm 1 LEM-CPD

5 Experiments and Results

Baselines

We compare the proposed method LEM-CPD with the following baselines. All parameters of the proposed method and the baselines are optimized by grid search on a prepared validation dataset. For example, in the real-world Flow dataset, five workdays in one week are chosen as the validation data, and we then test our method on the next week.

  • 1.

LT-A. Compares the next network with its long-term average network.

  • 2.

LAD. [7] proposed a Laplacian-based detection method for CPD. It uses the Laplacian matrix and SVD to obtain an embedding vector for each network. In addition, two window sizes are involved to capture both short- and long-term properties.

  • 3.

LADdoc. [8] devised a variant of LAD to identify time steps where the graph structure deviates significantly from the norm.

  • 4.

EdgeMonitoring. This method uses joint edge probabilities as the "feature vector", and formulates the time-varying network as a first-order Markov process [3].

  • 5.

    TensorSplat. [10] proposed a tensor-based CPD model, which utilised the CP decomposition to capture a low-rank space in the time dimension. Then an outlier detection method is used to identify the change points.

  • 6.

Activity vector. This algorithm uses the principal eigenvector of the adjacency matrix to represent each network and only considers short-term properties [9].

  • 7.

    LEM-CPD-lt. The variant of the proposed method, which removes the long-term guidance.

Metrics

The Hit Ratio (HR@K) metric is a popular evaluation method that reflects the proportion of correctly detected anomalies among the top K most anomalous points. Unlike in the synthetic experiments, ground truth is unavailable for the real-world datasets, so well-known anomalous timestamps are used as labels.
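Under the reading above, with K set to the number of labelled change points in each dataset, HR@K can be sketched as follows; the helper name and data layout are illustrative.

```python
def hit_ratio_at_k(scores, change_points, k):
    """HR@K: fraction of the top-k most anomalous timestamps (by Z score)
    that are labelled change points. `scores` maps timestamp -> Z."""
    top_k = sorted(scores, key=scores.get, reverse=True)[:k]
    return len(set(top_k) & set(change_points)) / k
```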

5.1 Synthetic Experiments

Setting: We chose the widely used Stochastic Block Model (SBM) [49] generator to create synthetic data, following the two data generation settings in [7]. 1) Pure datasets: four time-varying networks with 500 nodes and 151 time points are generated, containing 3, 7, 10, and 15 change points respectively. Therefore, we use HR@K, with K set to 3, 7, 10, and 15, to evaluate all competing methods. 2) Hybrid datasets: we generate four networks homogeneous with the Pure datasets, but containing both change points and event points, according to the hybrid setting in [7]. Thus, HR@K (K = 3, 7, 10, 15) is also suitable for this dataset. In the experiments, our short- and long-term window sizes are set to 3 and 12 respectively; the change point detection threshold is set to 0.5; the trade-off factor $\alpha$ is set to 0.2.
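For reference, an SBM snapshot of the kind described above can be generated with networkx as in the sketch below; the block sizes and probabilities shown are illustrative, not the exact settings of [7].

```python
import networkx as nx

def sbm_snapshot(sizes, p_in, p_out, seed=None):
    """One SBM snapshot as a dense adjacency matrix. A change point can be
    simulated by switching `sizes` or (p_in, p_out) at a chosen timestamp."""
    probs = [[p_in if i == j else p_out for j in range(len(sizes))]
             for i in range(len(sizes))]
    g = nx.stochastic_block_model(sizes, probs, seed=seed)
    return nx.to_numpy_array(g)

# e.g., 500 nodes in four equal communities (illustrative parameters):
# A_t = sbm_snapshot([125, 125, 125, 125], p_in=0.5, p_out=0.01, seed=t)
```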

Discussion: Table 1 presents the HR@K accuracy of all tested methods on the eight datasets. Apparently, several methods can effectively handle the CPD problem on the synthetic datasets, e.g., LEM-CPD (ours), LAD, and EdgeMonitoring. This is mainly because the normal evolution pattern in the synthetic datasets is very stable, so these methods can easily detect anomalies when dramatic changes appear. The TensorSplat method cannot capture all change points in the sparse datasets. Unlike LADdoc, LAD and LEM-CPD, Activity vector uses the principal eigenvectors of the adjacency matrix to represent the graph embedding directly, so the latent space may not be learned as well as with the Laplacian spectrum method. Furthermore, as shown in Table 1, without the long-term guidance LEM-CPD-lt performs worse than our final model, which demonstrates the effectiveness of the long-term guidance. Methods that use the prediction process, such as LEM-CPD-lt and LEM-CPD, achieve the best detection accuracy.

Figure 3 provides the detection results and their $\bm{Z}$ scores on the Pure dataset with seven change points. The seven most anomalous points are correctly detected. We can clearly see that the $\bm{Z}$ scores of change points are markedly different from the normal ones. This scenario supports our point that the normal evolution pattern of the synthetic network is stable; therefore, in our trade-off strategy, the score $\bm{Z}_{2}$ can handle this situation well, and we thus use a small $\alpha$ in these experiments.

Figure 3: LEM-CPD correctly identifies the change points in the Pure dataset.
Table 1: Performance on the synthetic datasets. Each HR@K column reports Pure / Hybrid.
Method HR@3 (Pure / Hybrid) HR@7 (Pure / Hybrid) HR@10 (Pure / Hybrid) HR@15 (Pure / Hybrid) Average
LT-A 66.7% 33.3% 42.9% 28.6% 40.0% 20.0% 46.7% 33.3% 38.9%
TensorSplat [10] 66.7% 66.7% 71.4% 42.9% 70.0% 60.0% 73.3% 66.7% 64.7%
Activity vector [9] 66.7% 33.3% 57.1% 14.2% 60.0% 30.0% 66.7% 33.3% 45.2%
EdgeMonitoring [3] 100.0% 66.7% 71.4% 57.1% 50.0% 60.0% 73.3% 80.0% 69.8%
LAD [7] 100.0% 66.7% 100.0% 85.7% 90.0% 80.0% 86.7% 86.7% 86.9%
LADdoc [8] 100.0% 66.7% 100.0% 85.7% 80.0% 90.0% 80.0% 86.7% 86.1%
LEM-CPD-lt (Ours) 100.0% 100.0% 100.0% 85.7% 90.0% 80.0% 80.0% 86.7% 90.3%
LEM-CPD (Ours) 100.0% 100.0% 100.0% 85.7% 90.0% 90.0% 86.7% 86.7% 92.4%
Table 2: Performance on the real-world datasets.
Method Flow (HR@3 / HR@6) Maintenance (HR@1 / HR@3) Senate (HR@1 / HR@2) Average
LT-A 23.3% 16.7% 100.0% 66.6% 50.0% 50.0% 51.1%
TensorSplat [10] 50.0% 43.3% 100.0% 66.6% 100.0% 50.0% 68.3%
Activity vector [9] 33.3% 26.7% 0.0% 33.3% 50.0% 50.0% 32.2%
EdgeMonitoring [3] 53.3% 40.0% 100.0% 66.6% 100.0% 100.0% 76.6%
LAD [7] 40.0% 36.7% 100.0% 33.3% 100.0% 100.0% 68.3%
LADdoc [8] 53.3% 40.0% 100.0% 33.3% 100.0% 100.0% 71.1%
LEM-CPD-lt (Ours) 76.6% 63.3% 100.0% 66.6% 100.0% 100.0% 84.4%
LEM-CPD (Ours) 90.0% 76.6% 100.0% 66.6% 100.0% 100.0% 88.9%

5.2 Real-world Experiments

Setting: To evaluate the performance of our method on real-world time-varying networks, we conduct experiments on three datasets. The choice of K for HR@K is based on the number of change points in the corresponding dataset. For example, there are two change points in the Senate dataset, so K is chosen from {1, 2}.

1) Flow dataset: a large-scale, real-world dataset collected from the city transport system, recording the crowd flows across the entire city train network. On a weekday, there exist six change points during the rush and non-rush transition periods. 2) Maintenance dataset: maintenance of metro lines changes the connections among stations. In this dataset, one line of the city train network was under maintenance on three separate days; therefore, three change points are included in the dataset (21 days in total). 3) Senate dataset: a social connection network between legislators during the 97th-108th Congress [50]. In this dataset, the 100th and 104th congress networks are recognized as change points in many references [3, 7]. The performances of the different methods are summarized in Table 2.

Discussion: As shown in Table 2, our model LEM-CPD significantly outperforms all other comparative methods, especially on the Flow dataset. In this test, we report the average accuracy of HR@3 and HR@6 over five weekdays. Compared with LAD and Activity vector, the prediction strategy captures many more dynamic trends as time evolves. As illustrated in the Introduction, LADdoc, LAD and Activity vector may find it difficult to identify the natural gaps between the previous normal graph pattern and the actual network. TensorSplat and EdgeMonitoring achieved the second and third positions because they consider part of the evolution trend during their learning process; e.g., tensor-based methods are able to capture evolution information. In the real-world experiments, we set a larger $\alpha=0.6$ to let the prediction score participate more in the final decision. Similar conclusions hold for the Maintenance dataset because the city train network has strong dynamic characteristics. However, on the Senate dataset, the LADdoc, LAD and EdgeMonitoring methods can also reach 100% accuracy, because the 100th and 104th congress networks deviate significantly from the others.

Table 3: The prediction comparisons on the Flow dataset.
Weekdays Weekends
GPR 3.283 2.333
LSM-RN-All 3.713 2.105
SARIMA 1.939 2.270
HA 1.652 2.176
OLS [17] 1.531 1.944
LEM-CPD (Ours) 1.615 2.037

5.3 Prediction Evaluation and Visualization

5.3.1 Baselines and Metric used in the Prediction Evaluation

\bullet HA: We predict the crowd flow distribution (CFD) by the historical average method on each timespan. For example, all historical time spans from 9:45 AM to 10:00 AM on Tuesdays are utilized to forecast the same time interval.

\bullet OLS: the latest Online latent space based model utilizing side information [4].

\bullet LSM-RN-All: A state-of-the-art matrix factorization based method to predict network-wide traffic speed problem [16].

\bullet SARIMA: A linear regression model with seasonal property to effectively predict future values in a time series.

\bullet GPR: Gaussian process regression (GPR) handles spatio-temporal pattern prediction as a stochastic process. It usually suffers from heavy computational cost [51].

Measures. The metric used in this paper is Mean Absolute Error (MAE), as it is generally employed in evaluating time series accuracy [52, 13].

MAE=\frac{\sum_{i=1}^{\Omega}|c_{i}-\hat{c}_{i}|}{\Omega},

where $\hat{c}_{i}$ is a forecast value and $c_{i}$ is the ground truth; $\Omega$ is the number of predictions.
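A one-line NumPy equivalent of the MAE above (names illustrative):

```python
import numpy as np

def mae(c_true, c_pred):
    """Mean Absolute Error over Omega predictions, as defined above."""
    return np.mean(np.abs(np.asarray(c_true) - np.asarray(c_pred)))
```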

(a) Timestamp: 8:00 AM - 8:15 AM.
(b) Timestamp: 5:00 PM - 5:15 PM.
Figure 4: The visualization of our prediction.

5.3.2 Results and Visualization

This part of the experiments is designed to assess the prediction ability of our model. Table 3 reports the average prediction error (mean absolute error) on the Flow dataset. We compared our model with several state-of-the-art methods. In the prediction process, LEM-CPD achieves the second-best result; the best model, OLS, involves much more external information.

Figure 4 gives an intuitive presentation of the flow prediction on the Flow dataset; we select 30 stations to keep the image clear. Figures 4 (a) and (b) illustrate the ground truth and the prediction of the crowd flow network. Each node represents a station, and the weight $w_{ij}>0$ if there exists passenger flow at the current timestamp. The depth of color indicates the volume of crowd flow. It is apparent that our prediction reconstructs the flow network precisely.

5.4 Sensitivity of Parameters

(a) Trade-off Factor α\alpha.
(b) Factor λ1\lambda_{1}.
(c) Factor λ2\lambda_{2}.
Figure 5: Effect of Parameters.

This section evaluates the performance of LEM-CPD by varying the critical parameters ($\alpha$, $\lambda_{1}$ and $\lambda_{2}$). We show the experimental results for the Flow dataset. We discuss the parameters separately but select them jointly by grid search, because they have high-dimensional correlations that are hard to visualize. This illustration approach, which discusses parameters separately, has been widely used in many other research papers [16, 4].

Fig. 5 (a) shows the performance under varying settings of $\alpha$, the balance factor that controls the importance of the prediction results. When we increase $\alpha$ from 0.1 to 0.6, the results improve significantly, and the performance tends to stay stable for $0.5\leq\alpha\leq 0.9$. In particular, LEM-CPD achieves the best result when $\alpha=0.6$, while it obtains good performance whenever $\alpha$ is set between 0.5 and 0.9. This indicates that the prediction result is important and useful for change point detection.

Fig. 5 (b) and (c) reveal the effect of varying $\lambda_{1}$ and $\lambda_{2}$. These two parameters determine the strength of the transition matrices and the long-term guidance, respectively. $\lambda_{1}=2^{-1}$ and $\lambda_{2}=2^{3}$ yield the best prediction results for LEM-CPD.

6 Conclusion

To date, despite achievements in graph-based CPD, most recent methods cannot perform well on real-world dynamic networks, mainly because real temporal networks usually evolve over time. In this paper, we first cast the CPD problem as a prediction task and develop a novel method to capture latent transitions from previous timestamps to the next. Once a prediction is made for the target network, the change point can be identified by a trade-off strategy. Compared with other approaches, the main innovation of this paper is providing an effective CPD method via a prediction process. All the experimental results demonstrate the superiority of LEM-CPD, which achieves up to 12 percent higher accuracy than other competitive methods.

References

  • [1] X. Wang, Y. Lu, C. Shi, R. Wang, P. Cui, S. Mou, Dynamic heterogeneous information network embedding with meta-path based proximity, IEEE Transactions on Knowledge and Data Engineering.
  • [2] J. Chen, K. Li, H. Rong, K. Bilal, K. Li, S. Y. Philip, A periodicity-based parallel time series prediction algorithm in cloud computing environments, Information Sciences 496 (2019) 506–537.
  • [3] Y. Wang, A. Chakrabarti, D. Sivakoff, S. Parthasarathy, Fast change point detection on dynamic social networks., in: Proceedings of 26th International Joint Conference on Artificial Intelligence, 2017, pp. 2992–2998.
  • [4] Y. Gong, Z. Li, J. Zhang, W. Liu, Y. Zheng, C. Kirsch, Network-wide crowd flow prediction of sydney trains via customized online non-negative matrix factorization, in: Proceedings of the 27th ACM International Conference on Information and Knowledge Management, 2018, pp. 1243–1252.
  • [5] B. Wang, J. Lu, Z. Yan, H. Luo, T. Li, Y. Zheng, G. Zhang, Deep uncertainty quantification: A machine learning approach for weather forecasting, in: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019, pp. 2087–2095.
  • [6] J. Chen, K. Li, K. Li, P. S. Yu, Z. Zeng, Dynamic planning of bicycle stations in dockless public bicycle-sharing system using gated graph neural network, ACM Transactions on Intelligent Systems and Technology (TIST) 12 (2) (2021) 1–22.
  • [7] S. Huang, Y. Hitti, G. Rabusseau, R. Rabbany, Laplacian change point detection for dynamic graphs, in: Proceedings of the 26th ACM SIGKDD, 2020, pp. 349–358.
  • [8] S. Huang, G. Rabusseau, R. Rabbany, Scalable change point detection for dynamic graphs (2021) 1–6.
  • [9] T. Idé, H. Kashima, Eigenspace-based anomaly detection in computer systems, in: Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, 2004, pp. 440–449.
  • [10] D. Koutra, E. E. Papalexakis, C. Faloutsos, Tensorsplat: Spotting latent anomalies in time, in: 2012 16th Panhellenic Conference on Informatics, IEEE, 2012, pp. 144–149.
  • [11] K. Li, W. Yang, K. Li, Performance analysis and optimization for spmv on gpu using probabilistic modeling, IEEE Transactions on Parallel and Distributed Systems 26 (1) (2014) 196–205.
  • [12] C. Chen, K. Li, S. G. Teo, X. Zou, K. Li, Z. Zeng, Citywide traffic flow prediction based on multiple gated spatio-temporal convolutional neural networks, ACM Transactions on Knowledge Discovery from Data (TKDD) 14 (4) (2020) 1–23.
  • [13] H. Qu, Y. Gong, M. Chen, J. Zhang, Y. Zheng, Y. Yin, Forecasting fine-grained urban flows via spatio-temporal contrastive self-supervision, IEEE Transactions on Knowledge and Data Engineering.
  • [14] J. Liu, H. Qu, M. Chen, Y. Gong, Traffic flow prediction based on spatio-temporal attention block, in: 2022 International Conference on Machine Learning, Cloud Computing and Intelligent Mining (MLCCIM), IEEE, 2022, pp. 32–39.
  • [15] D. Wang, Y. Yu, A. Rinaldo, Optimal change point detection and localization in sparse dynamic networks, The Annals of Statistics 49 (1) (2021) 203–232.
  • [16] D. Deng, C. Shahabi, U. Demiryurek, L. Zhu, R. Yu, Y. Liu, Latent space model for road networks to predict time-varying traffic, in: Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, ACM, 2016, pp. 1525–1534.
  • [17] Y. Gong, Z. Li, J. Zhang, W. Liu, Y. Zheng, Online spatio-temporal crowd flow distribution prediction for complex metro system, IEEE Transactions on Knowledge and Data Engineering.
  • [18] X. Dong, Y. Gong, L. Zhao, Comparisons of typical algorithms in negative sequential pattern mining, in: 2014 IEEE Workshop on Electronics, Computer and Applications, IEEE, 2014, pp. 387–390.
  • [19] Y. Gong, C. Liu, X. Dong, Research on typical algorithms in negative sequential pattern mining, The Open Automation and Control Systems Journal 7 (1).
  • [20] Y. Gong, T. Xu, X. Dong, G. Lv, e-nspfi: Efficient mining negative sequential pattern from both frequent and infrequent positive sequential patterns, International Journal of Pattern Recognition and Artificial Intelligence 31 (02) (2017) 1750002.
  • [21] J. Huang, L. Zhang, Y. Gong, J. Zhang, X. Nie, Y. Yin, Series photo selection via multi-view graph learning, arXiv preprint arXiv:2203.09736.
  • [22] L. Peel, A. Clauset, Detecting change points in the large-scale structure of evolving networks, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 29, 2015.
  • [23] Y. Zhang, D.-Y. Yeung, Overlapping community detection via bounded nonnegative matrix tri-factorization, in: Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, 2012, pp. 606–614.
  • [24] K. S. Xu, A. O. Hero, Dynamic stochastic blockmodels for time-evolving social networks, IEEE Journal of Selected Topics in Signal Processing 8 (4) (2014) 552–562.
  • [25] X. Dong, Y. Gong, L. Cao, F-nsp+: A fast negative sequential patterns mining method with self-adaptive data storage, Pattern Recognition 84 (2018) 13–27.
  • [26] X. Dong, Y. Gong, L. Cao, e-rnsp: An efficient method for mining repetition negative sequential patterns, IEEE transactions on cybernetics 50 (5) (2018) 2084–2096.
  • [27] P. Qiu, Y. Gong, Y. Zhao, L. Cao, C. Zhang, X. Dong, An efficient method for modeling nonoccurring behaviors by negative sequential patterns with loose constraints, IEEE Transactions on Neural Networks and Learning Systems.
  • [28] Y. Gong, J. Yi, D.-D. Chen, J. Zhang, J. Zhou, Z. Zhou, Inferring the importance of product appearance with semi-supervised multi-modal enhancement: A step towards the screenless retailing, in: Proceedings of the 29th ACM International Conference on Multimedia, 2021, pp. 1120–1128.
  • [29] Z. Li, J. Zhang, Q. Wu, Y. Gong, J. Yi, C. Kirsch, Sample adaptive multiple kernel learning for failure prediction of railway points, in: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019, pp. 2848–2856.
  • [30] Z. Li, J. Zhang, Y. Gong, Y. Yao, Q. Wu, Field-wise learning for multi-field categorical data, Advances in Neural Information Processing Systems 33 (2020) 9890–9899.
  • [31] P. Sarkar, S. M. Siddiqi, G. J. Gordon, A latent space approach to dynamic embedding of co-occurrence data, in: Artificial Intelligence and Statistics, PMLR, 2007, pp. 420–427.
  • [32] M. Thida, H.-L. Eng, P. Remagnino, Laplacian eigenmap with temporal constraints for local abnormality detection in crowded scenes, IEEE Transactions on Cybernetics 43 (6) (2013) 2147–2156.
  • [33] P. Jing, Y. Su, X. Jin, C. Zhang, High-order temporal correlation model learning for time-series prediction, IEEE transactions on cybernetics 49 (6) (2018) 2385–2397.
  • [34] F. Wang, C. Tan, P. Li, A. C. König, Efficient document clustering via online nonnegative matrix factorizations, in: Proceedings of the 2011 SIAM International Conference on Data Mining, SIAM, 2011, pp. 908–919.
  • [35] Y. Gong, Z. Li, J. Zhang, W. Liu, J. Yi, Potential passenger flow prediction: A novel study for urban transportation development, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34, 2020, pp. 4020–4027.
  • [36] Z. Zhang, X. Mei, B. Xiao, Abnormal event detection via compact low-rank sparse learning, IEEE Intelligent Systems 31 (2) (2015) 29–36.
  • [37] Y. Pei, N. Chakraborty, K. Sycara, Nonnegative matrix tri-factorization with graph regularization for community detection in social networks, in: Twenty-fourth international joint conference on artificial intelligence, 2015, pp. 2083–2089.
  • [38] T. Li, Y. Zhang, V. Sindhwani, A non-negative matrix tri-factorization approach to sentiment classification with lexical prior knowledge, in: Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, 2009, pp. 244–252.
  • [39] H. Wang, F. Nie, H. Huang, F. Makedon, Fast nonnegative matrix tri-factorization for large-scale data co-clustering, in: Twenty-Second International Joint Conference on Artificial Intelligence, 2011, pp. 1553–1558.
  • [40] X. Li, G. Cui, Y. Dong, Graph regularized non-negative low-rank matrix factorization for image clustering, IEEE transactions on cybernetics 47 (11) (2016) 3840–3853.
  • [41] X. Luo, H. Wu, H. Yuan, M. Zhou, Temporal pattern-aware qos prediction via biased non-negative latent factorization of tensors, IEEE transactions on cybernetics 50 (5) (2019) 1798–1809.
  • [42] J. Zhao, X. Xie, X. Xu, S. Sun, Multi-view learning overview: Recent progress and new challenges, Information Fusion 38 (2017) 43–54.
  • [43] Y. Gong, Z. Li, J. Zhang, W. Liu, B. Chen, X. Dong, A spatial missing value imputation method for multi-view urban statistical data., in: Proceedings of 29th International Joint Conference on Artificial Intelligence, 2020, pp. 1310–1316.
  • [44] Y. Gong, Z. Li, J. Zhang, W. Liu, Y. Yin, Y. Zheng, Missing value imputation for multi-view urban statistical data via spatial correlation learning, IEEE Transactions on Knowledge and Data Engineering.
  • [45] D. D. Lee, H. S. Seung, Algorithms for non-negative matrix factorization, in: Advances in neural information processing systems, 2001, pp. 556–562.
  • [46] R. C. Wilson, P. Zhu, A study of graph spectra for comparing graphs and trees, Pattern Recognition 41 (9) (2008) 2833–2841.
  • [47] F. Chung, Laplacians and the cheeger inequality for directed graphs, Annals of Combinatorics 9 (1) (2005) 1–19.
  • [48] S. U. Pillai, T. Suel, S. Cha, The perron-frobenius theorem: some of its applications, IEEE Signal Processing Magazine 22 (2) (2005) 62–75.
  • [49] P. W. Holland, K. B. Laskey, S. Leinhardt, Stochastic blockmodels: First steps, Social networks 5 (2) (1983) 109–137.
  • [50] J. H. Fowler, Legislative cosponsorship networks in the us house and senate, Social Networks 28 (4) (2006) 454–465.
  • [51] C. E. Rasmussen, Gaussian processes in machine learning, in: Summer School on Machine Learning, Springer, 2003, pp. 63–71.
  • [52] Y. Gong, Network-wide spatio-temporal predictive learning for the intelligent transportation system, Ph.D. thesis (2020).
  • [53] K. B. Petersen, M. S. Pedersen, et al., The matrix cookbook, Technical University of Denmark 7 (15) (2008) 510.
  • [54] M. D. Gupta, J. Xiao, Non-negative matrix factorization as a feature selection tool for maximum margin classifiers, in: CVPR 2011, IEEE, 2011, pp. 2841–2848.

Appendix A Derivation process of Update Rules

We here derive the update rule of $U_{t}$ as an example; the other variables can be solved by a similar process. The objective $\mathcal{L}$ can be rewritten as follows:

$\mathcal{L}=L_{0}+L_{1}+L_{2}$, where:

\begin{split}&L_{0}=\sum_{t=1}^{T}||\bm{G}_{t}-\bm{U}_{t}\bm{C}\bm{V}_{t}||_{F}^{2},\\ &L_{1}=\lambda_{1}\sum_{t=2}^{T}(||\bm{U}_{t}-\bm{U}_{t-1}\bm{A}||_{F}^{2}+||\bm{V}_{t}-\bm{V}_{t-1}\bm{B}||_{F}^{2}),\\ &L_{2}=\lambda_{2}(||\bm{U}^{\rm lt}-\bm{U}_{T}\bm{A}||_{F}^{2}+||\bm{V}^{\rm lt}-\bm{V}_{T}\bm{B}||_{F}^{2}).\end{split} (22)

We provide the derivative of $L_{0}$ with respect to $\bm{U}_{t}$ as an example; the other components can be derived in the same way. $L_{0}$ can also be rewritten as follows:

L_{0}=\sum_{t=1}^{T}tr((\bm{G}_{t}-\bm{U}_{t}\bm{C}\bm{V}_{t})(\bm{G}_{t}-\bm{U}_{t}\bm{C}\bm{V}_{t})^{T}), (23)

analogously, we can get:

L_{1}=\lambda_{1}\sum_{t=2}^{T}(tr((\bm{U}_{t}-\bm{U}_{t-1}\bm{A})(\bm{U}_{t}-\bm{U}_{t-1}\bm{A})^{T})+||\bm{V}_{t}-\bm{V}_{t-1}\bm{B}||_{F}^{2}), (24)
L_{2}=\lambda_{2}(tr((\bm{U}^{\rm lt}-\bm{U}_{T}\bm{A})(\bm{U}^{\rm lt}-\bm{U}_{T}\bm{A})^{T})+||\bm{V}^{\rm lt}-\bm{V}_{T}\bm{B}||_{F}^{2}). (25)

Taking the derivative of $\mathcal{L}$ with respect to $U_{t}$, we can obtain $g(U_{t})$ according to [53].

As introduced in [45], the traditional gradient descent method is expressed as $U_{t}=U_{t}-\gamma g(U_{t})=U_{t}-\gamma(P_{item}+N_{item})$, where $P_{item}$ and $N_{item}$ denote all positive and negative terms in $g(U_{t})$, respectively (e.g., $P_{item}=G_{t}(CV_{t})^{\top}+\lambda_{1}(U_{t-1}A+U_{t+1}A^{\top})+\hat{\lambda_{2}}U^{\rm lt}A^{\top}$). We can set the step $\gamma$ to:

\gamma=\frac{U_{t}}{P_{item}}, (26)

and we then obtain the update rule of $U_{t}$ as shown in Equation (11) of the main paper.

Appendix B Proof of Convergence

To prove Theorem 1, we will use an auxiliary function similar to that used in [45]. We here give the convergence proof for $V_{t}$; the other variables can be proved similarly.

Definition 1. $G(v,v^{\prime})$ is an auxiliary function for our final function $\mathcal{L}(v)$ if the following conditions are satisfied:

G\left(v,v^{\prime}\right)\geq\mathcal{L}(v)\quad\text{ and }\quad G(v,v)=\mathcal{L}(v). (27)

Lemma 1. If $G$ is an auxiliary function, then $\mathcal{L}$ is non-increasing under the update:

v^{t+1}=\arg\min_{v}G\left(v,v^{t}\right), (28)

consequently, we have:

\mathcal{L}\left(v^{t+1}\right)\leq G\left(v^{t+1},v^{t}\right)\leq G\left(v^{t},v^{t}\right)=\mathcal{L}\left(v^{t}\right). (29)

The proof of Lemma 1 is given in [45]. Lemma 1 shows that $\mathcal{L}(v^{t+1})\leq\mathcal{L}(v^{t})$ whenever such a $G(v,v^{t})$ exists; $v^{t}$ is a column vector of $V_{t}$.

Lemma 2. If $K(v^{t})$ is a diagonal matrix under the following definition,

K(v^{t})={\rm diag}\big(((UC)^{T}(UC))v./v\big), (30)

then,

\begin{split}G\left(v,v^{t}\right)=&\mathcal{L}\left(v^{t}\right)+\left(v-v^{t}\right)^{T}\nabla\mathcal{L}\left(v^{t}\right)\\ &+\frac{1}{2}\left(v-v^{t}\right)^{T}K\left(v^{t}\right)\left(v-v^{t}\right),\end{split} (31)

is an auxiliary function for $\mathcal{L}(v)$.

Proof 1

Since $G(v,v)=\mathcal{L}(v)$ is obvious, we need only show that $G(v,v^{t})\geq\mathcal{L}(v)$. To do this, we compare

\begin{split}\mathcal{L}(v)=&\mathcal{L}\left(v^{t}\right)+\left(v-v^{t}\right)^{T}\nabla\mathcal{L}\left(v^{t}\right)\\ &+\frac{1}{2}\left(v-v^{t}\right)^{T}\left((UC)^{T}(UC)\right)\left(v-v^{t}\right),\end{split} (32)

with Equation (31), finding that $G(v,v^{t})\geq\mathcal{L}(v)$ is equivalent to

0\leq\left(v-v^{t}\right)^{T}\left[K\left(v^{t}\right)-(UC)^{T}(UC)\right]\left(v-v^{t}\right). (33)

The next step is to prove that $\left[K\left(v^{t}\right)-(UC)^{T}(UC)\right]$ is positive semi-definite. Let $Q=(UC)^{T}(UC)$; then $\left[K\left(v^{t}\right)-(UC)^{T}(UC)\right]$ can be expressed as $[{\rm diag}(Qv./v)-Q]$. As shown by Lemma 1 in [54], if $Q$ is a symmetric non-negative matrix and $v$ is a positive vector, then the matrix $\hat{Q}={\rm diag}(Qv./v)-Q\succeq 0$.

Replacing $G(v,v^{t})$ in Equation (28) by Equation (31) results in the update rule:

v^{t+1}=v^{t}-K\left(v^{t}\right)^{-1}\nabla\mathcal{L}\left(v^{t}\right). (34)

Since Equation (31) is an auxiliary function, $\mathcal{L}$ is non-increasing under this update rule, according to Lemma 1. Writing the components of this equation explicitly, we obtain

v_{a}^{t+1}=v_{a}^{t}\frac{\left((UC)^{T}g\right)_{a}}{\left((UC)^{T}(UC)v\right)_{a}}, (35)

where $g$ denotes the corresponding column of $\bm{G}_{t}$.

By reversing the roles of $U$ and $V$ in Lemmas 1 and 2, $\mathcal{L}$ can similarly be shown to be non-increasing under the update rule for $U$.

Figure 6: Convergence rate in the Flow dataset.

As shown in Figure 6, our model LEM-CPD efficiently converges to a local optimum within a small number of iterations.