A Comprehensive Survey on
Graph Anomaly Detection with Deep Learning
Abstract
Anomalies are rare observations (e.g., data records or events) that deviate significantly from the others in the sample. Over the past few decades, research on anomaly mining has received increasing interest due to the implications of these occurrences in a wide range of disciplines - for instance, security, finance, and medicine. For this reason, anomaly detection, which aims to identify these rare observations, has become a vital task and has shown its power in preventing detrimental events, such as financial fraud, network intrusions, and social spam. The detection task is typically solved by identifying outlying data points in the feature space, which, inherently, overlooks the relational information in real-world data. At the same time, graphs have been prevalently used to represent the structural/relational information, which raises the graph anomaly detection problem - identifying anomalous graph objects (i.e., nodes, edges and sub-graphs) in a single graph, or anomalous graphs in a set/database of graphs. Conventional anomaly detection techniques cannot tackle this problem well because of the complexity of graph data (e.g., irregular structures, relational dependencies, node/edge types/attributes/directions/multiplicities/weights, large scale, etc.). However, thanks to the advent of deep learning in breaking these limitations, graph anomaly detection with deep learning has received growing attention recently. In this survey, we aim to provide a systematic and comprehensive review of the contemporary deep learning techniques for graph anomaly detection. Specifically, we provide a taxonomy that follows a task-driven strategy and categorizes existing work according to the anomalous graph objects that they can detect. We especially focus on the challenges in this research area and discuss the key intuitions, technical details as well as relative strengths and weaknesses of various techniques in each category. From the survey results, we highlight 12 future research directions spanning unsolved and emerging problems introduced by graph data, anomaly detection, deep learning and real-world applications. Additionally, to provide a wealth of useful resources for future studies, we have compiled a set of open-source implementations, public datasets, and commonly-used evaluation metrics. With this survey, our goal is to create a “one-stop-shop” that provides a unified understanding of the problem categories and existing approaches, publicly available hands-on resources, and high-impact open challenges for graph anomaly detection using deep learning.
Index Terms:
Anomaly detection, outlier detection, fraud detection, rumor detection, fake news detection, spammer detection, misinformation, graph anomaly detection, deep learning, graph embedding, graph representation, graph neural networks.

1 Introduction
Anomalies were first defined by Grubbs in 1969 [grubbs1969procedures] as “one that appears to deviate markedly from other members of the sample in which it occurs”, and studies on anomaly detection were initiated by the statistics community in the 19th century. To us, anomalies might appear as social spammers or misinformation in social media; fraudsters, bot users or sexual predators in social networks; network intruders or malware in computer networks; and broken devices or malfunctioning blocks in industrial systems, and they often cause huge damage to the real-world systems in which they appear. According to the FBI’s 2014 Internet Crime Report (https://www.fbi.gov/file-repository/2014_ic3report.pdf/view), the financial loss due to crime on social media reached more than $60 million in the second half of that year alone, and a more up-to-date report (https://www.zdnet.com/article/online-fake-news-costing-us-78-billion-globally-each-year/) indicates that the global economic cost of online fake news reached around $78 billion a year in 2020.
In computer science, the research on anomaly detection dates back to the 1980s, and detecting anomalies on graph data has been an important data mining paradigm since the beginning. However, the extensive presence of connections between real-world objects and advances in graph data mining in the last decade have revolutionized our understanding of the graph anomaly detection problems such that this research field has received a dramatic increase in interest over the past five years. One of the most significant changes is that graph anomaly detection has evolved from relying heavily on human experts’ domain knowledge into machine learning techniques that eliminate human intervention, and more recently, to various deep learning technologies. These deep learning techniques are not only capable of identifying potential anomalies in graphs far more accurately than ever before, but they can also do so in real-time.


Anomalies, which are also known as outliers, exceptions, peculiarities, rarities, novelties, etc., in different application fields, refer to abnormal objects that are significantly different from the standard, normal, or expected. Although these objects rarely occur in the real world, they contain critical information to support downstream applications. For example, the behaviors of fraudsters provide evidence for anti-fraud detection, and abnormal network traffic reveals signals for network intrusion protection. Anomalies, in many cases, may also have real and adverse impacts; for instance, fake news in social media can create panic and chaos with misleading beliefs [hooi2016birdnest, ahmed2019combining, nguyen2020fang, tam2019anomaly], untrustworthy reviews in online review systems can affect customers’ shopping choices [yu2016survey, benamira2019semi, kumar2018rev2], network intrusions might leak private personal information to hackers [mongiovi2013netspot, miller2013efficient, DBLP:journals/compsec/MiaoSZ20, perozzi2016scalable], and financial frauds can cause huge damage to economic systems [xuexiong2022, 10.1145/3394486.3403361, 10.1145/3394486.3403354, DBLP:conf/www/GuoLAHHZZ19].
Anomaly detection is the data mining process that aims to identify unusual patterns that deviate from the majority of the data [iglewicz1993detect, chandola2009anomaly, Fraud2020]. In order to detect anomalies, conventional techniques typically represent real-world objects as feature vectors (e.g., news in social media are represented as bags-of-words [sun2018detecting], and images in web pages are represented as color histograms [DBLP:conf/icdm/WuHPZCZ14]), and then detect outlying data points in the vector space [DBLP:conf/ijcai/WangL20, DBLP:conf/kdd/PangCCL18, DBLP:journals/corr/abs-2007-02500], as shown in Fig. 1(a). Although these techniques are effective at locating deviating data points in tabular data, they inherently discard the complex relationships between objects [akoglu2015graph].
Yet, in reality, many objects have rich relationships with each other, which can provide valuable complementary information for anomaly detection. Take online social networks as an example: fake users can be created using valid information from normal users, or they can camouflage themselves by mimicking benign users’ attributes [hooi2017graph, CARE-GNN]. In such situations, fake users and benign users have near-identical features, and conventional anomaly detection techniques might not be able to identify them using feature information only. Meanwhile, fake users often build relationships with a large number of benign users to increase their reputation and influence so they can gain unexpected benefits, whereas benign users rarely exhibit such activities [pandit2007netprobe, Densealert]. Hence, the dense and unexpected connections formed by fake users signal their deviation from benign users, and more comprehensive detection techniques should take this structural information into account to pinpoint the deviating patterns of anomalies.
To represent such structural information, graphs, in which nodes/vertices denote real objects and edges denote their relationships, have been prevalently used in a range of application fields [liu2008spotting, wu2014multi, gao2017collaborative, aggarwal2011outlier, wu2017multiple, NIPS20201], including social activities, e-commerce, biology, academia and communication. With the structural information contained in graphs, detecting anomalies in graphs raises a more complex anomaly detection problem in non-Euclidean space - graph anomaly detection (GAD), which aims to identify anomalous graph objects (i.e., nodes, edges or sub-graphs) in a single graph as well as anomalous graphs among a set/database of graphs [akoglu2015graph, DBLP:journals/jiis/ChenHS12, GBGP]. As the toy example in Fig. 1(b) shows, given an online social network, graph anomaly detection aims to identify anomalous nodes (i.e., malicious users), anomalous edges (i.e., abnormal relations) and anomalous sub-graphs (i.e., malicious user groups). However, because the copious types of graph anomalies cannot be directly represented in a Euclidean feature space, it is not feasible to directly apply traditional anomaly detection techniques to graph anomaly detection, and researchers have recently intensified their efforts on GAD.
Amongst earlier works in this area, the detection methods relied heavily on handcrafted feature engineering or statistical models built by domain experts [akoglu2010oddball, eswaran2018spotlight, li2014probabilistic]. This inherently limits these techniques’ capability to detect unknown anomalies, and the exercise tended to be very labor-intensive. Many machine learning techniques, such as matrix factorization [li2017radar, DBLP:journals/pnas/MahoneyD09] and SVM [DBLP:journals/pr/ErfaniRKL16], have also been applied to detect graph anomalies. However, real-world networks often contain millions of nodes and edges that result in extremely high dimensional and large-scale data, and these techniques do not easily scale up to such data efficiently. Practically, they exhibit high computational overhead in both the storage and execution time [DBLP:journals/jbd/ThudumuBJS20]. These general challenges associated with graph data are significant for the detection techniques, and we categorize them as data-specific challenges (Data-CHs) in this survey. A summary of them is provided in Appendix A.
TABLE I: Comparison between our survey and existing surveys.

Surveys | AD | DAD | GAD | GADL: Node | GADL: Edge | GADL: Sub-graph | GADL: Graph | Source Code | Real-world Dataset | Synthetic Dataset
---|---|---|---|---|---|---|---|---|---|---
Our Survey | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
Chandola et al. [chandola2009anomaly] | ✓ | - | - | - | - | - | - | - | - | -
Boukerche et al. [DBLP:journals/csur/BoukercheZA20] | ✓ | ✓ | - | - | - | - | - | - | - | -
Bulusu et al. [DBLP:journals/corr/abs-2003-06979] | ✓ | ✓ | - | - | - | - | - | - | - | -
Thudumu et al. [DBLP:journals/jbd/ThudumuBJS20] | ✓ | ✓ | ✓ | - | - | - | - | - | - | -
Pang et al. [DBLP:journals/corr/abs-2007-02500] | ✓ | ✓ | ✓ | ✓ | ✓ | - | - | ✓ | ✓ | -
Chalapathy and Chawla [chalapathy2019deep] | ✓ | ✓ | - | - | - | - | - | ✓ | ✓ | -
Akoglu et al. [akoglu2015graph] | ✓ | - | ✓ | - | - | - | - | - | - | -
Ranshous et al. [ranshous2015anomaly] | ✓ | - | ✓ | - | - | - | - | ✓ | - | -
Jennifer and Kumar [d2021anomaly] | ✓ | - | ✓ | - | - | - | - | - | - | -
Eltanbouly et al. [DBLP:conf/iciot3/EltanboulyBACE20] | ✓ | ✓ | ✓ | - | - | - | - | - | - | -
Fernandes et al. [DBLP:journals/telsys/FernandesRCAP19] | ✓ | ✓ | ✓ | - | - | - | - | - | - | -
Kwon et al. [kwon2019survey] | ✓ | ✓ | - | - | - | - | - | - | ✓ | -
Gogoi et al. [DBLP:journals/cj/GogoiBBK11] | ✓ | ✓ | - | - | - | - | - | - | - | -
Savage et al. [savage2014anomaly] | ✓ | - | ✓ | - | - | - | - | - | - | -
Yu et al. [yu2016survey] | ✓ | - | ✓ | - | - | - | - | - | - | -
Hunkelmann et al. [ahmed2019combining] | ✓ | - | ✓ | - | - | - | - | - | - | -
Pourhabibi et al. [Fraud2020] | ✓ | ✓ | ✓ | ✓ | - | - | - | - | - | -

* AD: Anomaly Detection, DAD: Anomaly Detection with Deep Learning, GAD: Graph Anomaly Detection.
* GADL: Graph Anomaly Detection with Deep Learning.
* -: not included, ✓: included.
Non-deep learning based techniques also lack the capability to capture the non-linear properties of real objects [DBLP:journals/corr/abs-2007-02500]. Hence, the representations of objects they learn are not expressive enough to fully support graph anomaly detection. To tackle these problems, more recent studies explore the potential of adopting deep learning techniques to identify anomalous graph objects. As a powerful tool for data mining, deep learning has achieved great success in data representation and pattern recognition [DBLP:conf/wsdm/WangNWYL20, zhang2019unsupervised, DBLP:conf/icdm/WangZ00ZX20]. Its deep architecture with layers of parameters and transformations appears to suit the aforementioned problems well. More recent advances, such as deep graph representation learning and graph neural networks (GNNs), further enrich the capability of deep learning for graph data mining [ijcai2020-693, wu2020comprehensive, cui2018survey, NIPS20202, su2021comprehensive]. By extracting expressive representations such that graph anomalies and normal objects can be easily separated, or by learning the deviating patterns of anomalies directly, graph anomaly detection with deep learning (GADL) has moved to the forefront of anomaly detection. As a frontier technology, GADL is therefore expected to produce more fruitful detection results and help protect society from the damage anomalies cause.
1.1 Challenges in GAD with Deep Learning
Due to the complexity of anomaly detection and graph data mining [noble2003graph, DBLP:conf/ijcai/TengYEL18, shah2016edgecentric, DBLP:conf/icdm/WangGF17, GAL], in addition to the prior mentioned data-specific challenges, adopting deep learning techniques for graph anomaly detection also faces a number of challenges from the technical side. These challenges associated with deep learning are categorized as technique-specific challenges (Tech-CHs), and they are summarized as follows.
Tech-CH1. Anomaly-aware training objectives. Deep learning models rely heavily on the training objectives to fine-tune all the trainable parameters. For graph anomaly detection, this necessitates appropriate training objectives or loss functions such that the GADL models can effectively capture the differences between benign and anomalous objects. Designing anomaly-aware objectives is very challenging because there is no prior knowledge about the ground-truth anomalies as well as their deviating patterns versus the majority. How to effectively separate anomalies from normal objects through training remains critical for deep learning-based models.
Tech-CH2. Anomaly interpretability. In real-world scenarios, the interpretability of detected anomalies is also vital because we need to provide convincing evidence to support the subsequent anomaly handling process. For example, the risk management department of a financial organization must provide lawful evidence before blocking the accounts of identified anomalous users. As deep learning has been limited for its interpretability [DBLP:journals/jzusc/ZhangZ18, DBLP:journals/corr/abs-2007-02500], how to justify the detected graph anomalies remains a big challenge for deep learning techniques.
Tech-CH3. High training cost. Although D(G)NNs are capable of digesting rich information (e.g., structural information and attributes) in graph data for anomaly detection, these GADL models are more complex than conventional deep neural networks or machine learning methods due to the anomaly-aware training objectives. Such complexity inherently leads to high training costs in both time and computing resources.
Tech-CH4. Hyperparameter tuning. D(G)NNs naturally exhibit a large set of hyperparameters, such as the number of neurons in each neural network layer, the learning rate, the weight decay and the number of training epochs. Their learning performance is significantly affected by the values of these hyperparameters. However, it remains a serious challenge to effectively select the optimal/sub-optimal settings for the detection models due to the lack of labeled data in real scenarios.
Because deep learning models are sensitive to their associated hyperparameters, setting well-performing values for the hyperparameters is vital to the success of a task. Tuning hyperparameters is relatively straightforward in supervised learning when labeled data are available. For instance, users can find an optimal/sub-optimal set of hyperparameters (e.g., through random search or grid search) by comparing the model’s outputs with the ground truth. However, unsupervised anomaly detection has no accessible labeled data to judge the model’s performance under different hyperparameter settings [akoglu2021anomaly, zhao2020automating]. Selecting suitable hyperparameter values for unsupervised detection models therefore remains a critical obstacle to applying them in a wide range of real scenarios.
1.2 Existing Anomaly Detection Surveys
Recognizing the significance of anomaly detection, many review works have been conducted in the last ten years covering a range of anomaly detection topics: anomaly detection with deep learning, graph anomaly detection, graph anomaly detection with deep learning, and particular applications of graph anomaly detection such as social media, social networks, fraud detection and network security, etc.
There are some representative surveys on generalized anomaly detection techniques - [chandola2009anomaly], [DBLP:journals/csur/BoukercheZA20] and [DBLP:journals/jbd/ThudumuBJS20] - but only the most up-to-date work, Thudumu et al. [DBLP:journals/jbd/ThudumuBJS20], covers the topic of graph anomaly detection. Recognizing the power of deep learning, three contemporary surveys, Ruff et al. [ruff2021unifying], Pang et al. [DBLP:journals/corr/abs-2007-02500] and Chalapathy and Chawla [chalapathy2019deep], specifically review deep learning based anomaly detection techniques.
As for graph anomaly detection, Akoglu et al. [akoglu2015graph], Ranshous et al. [ranshous2015anomaly], and Jennifer and Kumar [d2021anomaly] concentrate on graph anomaly detection, reviewing many conventional approaches in this area, including statistical models and machine learning techniques. Other surveys are dedicated to particular applications of graph anomaly detection, such as computer network intrusion detection and anomaly detection in online social networks, e.g., [yu2016survey, ahmed2019combining, Fraud2020] and [DBLP:conf/iciot3/EltanboulyBACE20, DBLP:journals/telsys/FernandesRCAP19, kwon2019survey, DBLP:journals/cj/GogoiBBK11, savage2014anomaly]. These works provide solid reviews of anomaly detection/graph anomaly detection techniques in these high-demand and vital domains. However, none of the mentioned surveys is dedicated to graph anomaly detection with deep learning, as shown in Table I, and hence none provides a systematic and comprehensive review of these techniques.
1.3 Contributions
Our contributions are summarized as follows:
- The first survey in graph anomaly detection with deep learning. To the best of our knowledge, our survey is the first to review the state-of-the-art deep learning techniques for graph anomaly detection. Most of the relevant surveys focus either on conventional graph anomaly detection methods using non-deep learning techniques or on generalized anomaly detection techniques (for tabular/point data, time series, etc.). Until now, there has been no dedicated and comprehensive survey on graph anomaly detection with deep learning. Our work bridges this gap, and we expect that an organized and systematic survey will help push forward research in this area.
- A systematic and comprehensive review. In this survey, we review the most up-to-date deep learning techniques for graph anomaly detection published in influential international conferences and journals in the areas of deep learning, data mining, web services, and artificial intelligence, including TKDE, TKDD, TPAMI, NeurIPS, SIGKDD, ICDM, WSDM, SDM, SIGMOD, IJCAI, AAAI, ICDE, CIKM, ICML, WWW, CVPR, and others. We first summarize seven data-specific and four technique-specific challenges in graph anomaly detection with deep learning. We then comprehensively review existing works from the perspectives of: 1) the motivations behind the deep methods; 2) the main ideas for identifying graph anomalies; 3) a brief introduction to conventional non-deep learning techniques; and 4) the technical details of deep learning algorithms. A brief timeline of graph anomaly detection and the reviewed works is given in Fig. 2.
- Future directions. From the survey results, we highlight 12 future research directions covering emerging problems introduced by graph data, anomaly detection, deep learning models, and real-world applications. These future opportunities indicate challenges that have not been adequately tackled, and so more effort is needed in the future.
- Affluent resources. Our survey also provides an extensive collection of open-sourced anomaly detection algorithms, public datasets, synthetic dataset generation techniques, as well as commonly used evaluation metrics to push forward the state-of-the-art in graph anomaly detection. These published resources offer benchmark datasets and baselines for future research.
- A new taxonomy. We have organized this survey with regard to different types of anomalies (i.e., nodes, edges, sub-graphs, and graphs) existing in graphs or graph databases. We also pinpoint the differences and similarities between different types of graph anomalies.
The rest of this survey is organized as follows. In Section 2, we provide preliminaries about the different types of graphs considered in this survey. From Section 3 to Section 9, we review existing techniques for detecting anomalous nodes, edges, sub-graphs and graphs, respectively. In Section 10, we first provide a collection of published graph anomaly detection algorithms and datasets and then summarize commonly used evaluation metrics and synthetic data generation strategies. We highlight 12 future directions concerning deep learning in graph anomaly detection in Section 11 and summarize our survey in Section 12. A concrete taxonomy of our survey is given in Appendix B.
2 Preliminaries
In this section, we provide definitions of the different types of graphs most commonly used in node/edge/sub-graph-level anomaly detection (Section 3 to Section 7). For consistency, we follow the conventional categorization of graphs in existing works [akoglu2015graph, ranshous2015anomaly, kwon2019survey] and categorize them as static graphs, dynamic graphs, and graph databases. Unless otherwise specified, all graphs mentioned in the following sections are static. Meanwhile, as graph-level anomaly detection is discussed much later in the survey, the definition of the graph database is deferred to Section 8 to keep it close to the relevant material and enhance readability.
Definition 1 (Plain Graph). A static plain graph $\mathcal{G} = \{\mathcal{V}, \mathcal{E}\}$ comprises a node set $\mathcal{V} = \{v_1, \ldots, v_n\}$ and an edge set $\mathcal{E}$, where $n$ is the number of nodes and $e_{i,j} \in \mathcal{E}$ denotes an edge between nodes $v_i$ and $v_j$. The adjacency matrix $A \in \{0, 1\}^{n \times n}$ stores the graph structure, where $A_{i,j} = 1$ if nodes $v_i$ and $v_j$ are connected, and $A_{i,j} = 0$ otherwise.
Definition 2 (Attributed Graph). A static attributed graph $\mathcal{G} = \{\mathcal{V}, \mathcal{E}, X\}$ comprises a node set $\mathcal{V}$, an edge set $\mathcal{E}$ and an attribute matrix $X$. In an attributed graph, the graph structure follows Definition 1. The attribute matrix $X \in \mathbb{R}^{n \times d}$ consists of the nodes’ attribute vectors, where $x_i \in \mathbb{R}^{d}$ is the attribute vector associated with node $v_i$ and $d$ is the vector’s dimension. Hereafter, the terms attribute and feature are used interchangeably.
Definition 3 (Dynamic Graph). A dynamic graph $\mathcal{G}(t) = \{\mathcal{V}(t), \mathcal{E}(t), X_v(t), X_e(t)\}$ comprises nodes and edges that change over time. $\mathcal{V}(t)$ is the node set in the graph at a specific time step $t$, $\mathcal{E}(t)$ is the corresponding edge set, and $X_v(t)$ and $X_e(t)$ are the node attribute matrix and edge attribute matrix at time step $t$, if they exist.
In reality, the nodes or edges might also be associated with numerical or categorical labels to indicate their classes (e.g., normal or abnormal). When label information is available/partially-available, supervised/semi-supervised detection models could be effectively trained.
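To make these definitions concrete, the short sketch below (our own illustration, not taken from any reviewed method) builds the adjacency matrix and attribute matrix of a small static attributed graph with NumPy; a dynamic graph would simply be a sequence of such snapshots indexed by time.

```python
import numpy as np

# Toy static attributed graph with n = 4 nodes and undirected edges (Definitions 1-2).
n, d = 4, 3
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]

# Adjacency matrix A: A[i, j] = 1 iff nodes v_i and v_j are connected.
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = A[j, i] = 1          # undirected graph, so A is symmetric

# Attribute matrix X: one d-dimensional attribute vector per node.
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))

print(A)
print(X.shape)                      # (4, 3)
```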
3 Anomalous node detection (ANOS ND)
Anomalous nodes are commonly recognized as individual nodes that are significantly different from the others. In real-world applications, these nodes often represent abnormal objects that appear individually, such as a single network intruder in computer networks, an independent fraudulent user in online social networks, or a specific piece of fake news on social media. In this section, we specifically focus on anomalous node detection in static graphs. The reviews on dynamic graphs can be found in Section 4. Table II at the end of Section 4 provides a summary of the techniques reviewed for ANOS ND.
When detecting anomalous nodes in static graphs, the differences between anomalies and regular nodes are mainly drawn from the graph structural information and nodes/edges’ attributes [li2017radar, bojchevski2018bayesian, zhu2020mixedad, perozzi2014focused]. Given prior knowledge (i.e., community structure, attributes) about a static graph, anomalous nodes can be further categorized into the following three types:
- Global anomalies only consider the node attributes. They are nodes that have attributes significantly different from all other nodes in the graph.
- Structural anomalies only consider the graph structural information. They are abnormal nodes that have different connection patterns (e.g., connecting different communities, forming dense links with others).
- Community anomalies consider both node attributes and graph structural information. They are defined as nodes that have different attribute values compared to other nodes in the same community.
In Fig. 3, node 14 is a global anomaly because its 4th feature value is 1 while all other nodes in the graph have the value of 0 for the corresponding feature. Nodes 5, 6, and 11 are identified as structural anomalies because they have links with other communities while other nodes in their community do not form cross-community links. Nodes 2 and 7 are community anomalies because their feature values are different from others in the communities they belong to.
3.1 ANOS ND on Plain Graphs
Plain graphs are dedicated to representing the structural information in real-world networks. To detect anomalous nodes in plain graphs, the graph structure has been extensively exploited from various angles. Here, we first summarize the representative traditional non-deep learning approaches, followed by a more recent, advanced detection technique based on representation learning.
3.1.1 Traditional Non-Deep Learning Techniques
Prior to the recent advances in deep learning and other state-of-the-art data mining technologies, traditional non-deep learning techniques were widely used in many real-world networks to identify anomalous entities. A key idea behind these techniques is to transform graph anomaly detection into a traditional anomaly detection problem, because graph data with rich structural information cannot be handled directly by traditional detection techniques designed for tabular data only. To bridge the gap, many approaches [akoglu2010oddball, DBLP:conf/kdd/DingKBKC12, hooi2016fraudar] used the statistical features associated with each node, such as in/out degree, to detect anomalous nodes.
For instance, OddBall [akoglu2010oddball] employs the statistical features (e.g., the number of 1-hop neighbors and edges, the total weight of edges) extracted from each node and its 1-hop neighbors to detect particular structural anomalies that: 1) form local structures in shape of near-cliques or stars; 2) have heavy links with neighbors such that the total weight is extremely large; or 3) have a single dominant heavy link with one of the neighbors.
With properly selected statistical features, anomalous nodes can be identified with respect to their deviating feature patterns. But, in real scenarios, it is very hard to choose the most suitable features from a large number of candidates, and domain experts can always design new statistics, e.g., the maximum/minimum weight of edges. As a result, these techniques often carry prohibitive cost for assessing the most significant features and do not effectively capture the structural information.
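As an illustration of this feature-engineering style, the following sketch computes a few ego-net statistics of the kind OddBall builds on (number of 1-hop neighbors, number of ego-net edges, total ego-net edge weight) using NetworkX; the toy graph and feature choices are ours and only approximate the features described in [akoglu2010oddball].

```python
import networkx as nx

# Toy weighted graph: a star-like hub (node 0) plus a small clique.
G = nx.Graph()
G.add_weighted_edges_from([
    (0, 1, 1.0), (0, 2, 1.0), (0, 3, 1.0), (0, 4, 1.0),   # hub node 0
    (5, 6, 2.0), (6, 7, 1.0), (5, 7, 3.0),                 # clique of nodes 5-7
])

def egonet_features(G, v):
    """OddBall-style statistics of node v's 1-hop ego-net."""
    ego = nx.ego_graph(G, v)                   # v together with its 1-hop neighbors
    n_neighbors = ego.number_of_nodes() - 1    # number of 1-hop neighbors
    n_edges = ego.number_of_edges()            # edges inside the ego-net
    total_weight = sum(w for _, _, w in ego.edges(data="weight"))
    return n_neighbors, n_edges, total_weight

for v in G.nodes:
    print(v, egonet_features(G, v))
# Nodes whose (edges vs. neighbors) or (weight vs. edges) statistics deviate from the
# typical pattern correspond to the near-clique/star or heavy-link anomalies above.
```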
3.1.2 Network Representation Based Techniques
To capture more valuable information from the graph structure for anomaly detection, network representation techniques have been widely exploited. Typically, these techniques encode the graph structure into an embedded vector space and identify anomalous nodes through further analysis. Hu et al. [hu2016embedding], for example, proposed an effective embedding method to detect structural anomalies that connect with many communities. It first adopts a graph partitioning algorithm (e.g., METIS [DBLP:journals/siamsc/KarypisK98]) to group nodes into $K$ communities ($K$ is a user-specified number). Then, the method employs a specially designed embedding procedure to learn node embeddings that capture the link information between each node and the communities. Denoting the embedding of node $v_i$ as $z_i \in \mathbb{R}^{K}$, the procedure initializes each $z_i$ with regard to the membership of node $v_i$ to each community (if node $v_i$ belongs to community $k$, then $z_{i,k} = 1$; otherwise, 0) and optimizes the node embeddings such that directly linked nodes have similar embeddings and unconnected nodes are dissimilar.
After generating the node embeddings, the link information between each node and the communities is quantified for further anomaly detection analysis. For a given node $v_i$, such information is represented as:

$\tilde{z}_i = \sum_{v_j \in \mathcal{N}(v_i)} z_j$   (1)

where $\mathcal{N}(v_i)$ comprises node $v_i$’s neighbors. If $v_i$ has many links with community $k$, then the value in the corresponding dimension $\tilde{z}_{i,k}$ will be large.
In the last step, Hu et al. [hu2016embedding] formulate a scoring function over $\tilde{z}_i$ to assign each node an anomalousness score. As expected, structural anomalies receive higher scores because their links spread across several different communities. Given a predefined threshold, nodes with above-threshold scores are identified as anomalies.
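Because the exact scoring function is not reproduced here, the sketch below gives one plausible instantiation of the overall pipeline under our own simplifying assumptions: community memberships serve directly as the initial embeddings, Eq. (1) aggregates them over each node's neighbors, and a node is scored by the fraction of its links that leave its own community. It is an illustrative simplification rather than Hu et al.'s actual model.

```python
import numpy as np

# Toy graph: nodes 0-2 form community 0, nodes 3-5 form community 1,
# and node 5 additionally links into community 0 (a structural anomaly).
A = np.array([
    [0, 1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0, 1],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [1, 1, 0, 1, 1, 0],
])
community = np.array([0, 0, 0, 1, 1, 1])
n, K = A.shape[0], community.max() + 1

# z_i initialized from community membership (one-hot), as described above.
Z = np.eye(K)[community]            # shape (n, K)

# Eq. (1): aggregate the neighbors' membership vectors.
Z_tilde = A @ Z                     # Z_tilde[i, k] = number of i's links into community k

# Hypothetical scoring rule: the fraction of a node's links that leave its own community.
own = Z_tilde[np.arange(n), community]
scores = 1.0 - own / Z_tilde.sum(axis=1)
print(np.round(scores, 3))          # node 5 receives the highest score
```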
To date, many plain network representation methods, such as DeepWalk [perozzi2014deepwalk], Node2Vec [grover2016node2vec] and LINE [tang2015line], have shown their effectiveness in generating node representations and have been used for anomaly detection performance validation [bandyopadhyay2020outlier, bandyopadhyay2019outlier, yu2018netwalk, cai2020structural]. By pairing conventional anomaly detection techniques, such as density-based techniques [breunig2000lof] and distance-based techniques [aggarwal2001outlier], with node embedding techniques, anomalous nodes can be identified with regard to their distinguishable locations (i.e., low-density areas or locations far away from the majority) in the embedding space.
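A minimal sketch of this two-stage recipe follows, assuming node embeddings have already been produced by any of the above methods (random vectors stand in for learned embeddings here), with scikit-learn's Local Outlier Factor playing the role of the density-based detector.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(42)

# Stand-in for learned node embeddings: 95 nodes in one region of the embedding
# space and 5 nodes placed far away from the majority.
normal = rng.normal(loc=0.0, scale=1.0, size=(95, 16))
deviating = rng.normal(loc=6.0, scale=1.0, size=(5, 16))
embeddings = np.vstack([normal, deviating])

# Density-based detection in the embedding space: LOF flags points that lie in
# regions of substantially lower density than their neighbors.
lof = LocalOutlierFactor(n_neighbors=20, contamination=0.05)
labels = lof.fit_predict(embeddings)          # -1 = anomalous, 1 = normal
scores = -lof.negative_outlier_factor_        # larger = more anomalous

print(np.where(labels == -1)[0])              # indices of the flagged nodes
```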
3.1.3 Reinforcement Learning Based Techniques
The success of reinforcement learning (RL) in tackling real-world decision making problems has attracted substantial interest from the anomaly detection community. Detecting anomalous nodes can be naturally regarded as a problem of deciding which class a node belongs to - anomalous or benign. As a special scenario of the general selective harvesting task, the anomalous node detection problem can be approached by a recent work [morales2021selective] that intuitively combines reinforcement learning and network embedding techniques for selective harvesting. The proposed model, NAC, is trained with labeled data without any human intervention. Specifically, it first selects a seed network consisting of partially observed nodes and edges. Then, starting from the seed network, NAC adopts reinforcement learning to learn a node selection plan such that anomalous nodes in the undiscovered area can be identified. This is achieved by rewarding selection plans that choose labeled anomalies with higher gains. Through offline training, NAC learns an optimal/sub-optimal anomalous node selection strategy and discovers potential anomalies in the undiscovered graph step by step.
3.2 ANOS ND on Attributed Graphs
In addition to the structural information, real-world networks also contain rich attribute information affiliated with nodes [hamilton2017inductive, hamilton2017representation]. These attributes provide complementary information about real objects, and together with the graph structure, they enable the detection of more hidden, non-trivial anomalies.
For clarity, we distinguish between deep neural networks and graph neural networks in this survey. We review deep neural network (Deep NN) based techniques, GCN based techniques, and reinforcement learning based techniques for ANOS ND as follows. Due to page limitations, other existing works including traditional non-deep learning techniques, GAT [velivckovic2017graph] based techniques, GAN based techniques, and network representation based techniques are surveyed in Appendix C.
3.2.1 Deep NN Based Techniques
Deep learning models such as autoencoders and deep neural networks provide a solid basis for learning data representations. Adopting these models for more effective anomalous node detection has drawn substantial interest recently.
For example, Bandyopadhyay et al. [bandyopadhyay2020outlier] developed an unsupervised deep model, DONE, to detect global anomalies, structural anomalies and community anomalies in attributed graphs. Specifically, this work measures three anomaly scores for each node $v_i$ that indicate the likelihood of the situations where 1) it has attributes similar to nodes in different communities (attribute anomaly score $o_i^{a}$); or 2) it connects with other communities (structural anomaly score $o_i^{s}$); or 3) it belongs to one community structurally but its attributes follow the pattern of another community (combined anomaly score $o_i^{com}$). If a particular node exhibits any of these characteristics, then it is assigned a higher score and is considered anomalous.
To acquire these scores, DONE adopts two separate autoencoders (AEs), i.e., a structure AE and an attribute AE, as shown in Fig. 4. Both are trained by minimizing the reconstruction errors and preserving homophily, which assumes that connected nodes have similar representations in the graph. When training the AEs, nodes exhibiting the predefined characteristics are hard to reconstruct and therefore introduce larger reconstruction errors because their structure or attribute patterns do not conform to the standard behavior. Hence, the adverse impact of anomalies should be alleviated to achieve the minimized error. Accordingly, DONE specially designs an anomaly-aware loss function with five terms: $\mathcal{L}^{Recs}_{str}$, $\mathcal{L}^{Recs}_{attr}$, $\mathcal{L}^{Hom}_{str}$, $\mathcal{L}^{Hom}_{attr}$, and $\mathcal{L}_{Com}$. $\mathcal{L}^{Recs}_{str}$ and $\mathcal{L}^{Recs}_{attr}$ are the structure reconstruction error and attribute reconstruction error that can be written as:

$\mathcal{L}^{Recs}_{str} = \frac{1}{n} \sum_{i=1}^{n} \log\left(\frac{1}{o_i^{s}}\right) \left\lVert t_i - \hat{t}_i \right\rVert_2^2$   (3)

and

$\mathcal{L}^{Recs}_{attr} = \frac{1}{n} \sum_{i=1}^{n} \log\left(\frac{1}{o_i^{a}}\right) \left\lVert x_i - \hat{x}_i \right\rVert_2^2$   (4)

where $n$ is the number of nodes, $t_i$ and $x_i$ store the structure information and attributes of node $v_i$, and $\hat{t}_i$ and $\hat{x}_i$ are the reconstructed vectors. $\mathcal{L}^{Hom}_{str}$ and $\mathcal{L}^{Hom}_{attr}$ are proposed to maintain the homophily and they are formulated as:

$\mathcal{L}^{Hom}_{str} = \frac{1}{n} \sum_{i=1}^{n} \log\left(\frac{1}{o_i^{s}}\right) \frac{1}{|\mathcal{N}(v_i)|} \sum_{v_j \in \mathcal{N}(v_i)} \left\lVert h_i^{s} - h_j^{s} \right\rVert_2^2$   (5)

and

$\mathcal{L}^{Hom}_{attr} = \frac{1}{n} \sum_{i=1}^{n} \log\left(\frac{1}{o_i^{a}}\right) \frac{1}{|\mathcal{N}(v_i)|} \sum_{v_j \in \mathcal{N}(v_i)} \left\lVert h_i^{a} - h_j^{a} \right\rVert_2^2$   (6)

where $h_i^{s}$ and $h_i^{a}$ are the latent representations of node $v_i$ learned by the structure AE and attribute AE, respectively. $\mathcal{L}_{Com}$ poses further restrictions on the representations generated for each node by the two AEs such that the graph structure and node attributes complement each other. It is formulated as:

$\mathcal{L}_{Com} = \frac{1}{n} \sum_{i=1}^{n} \log\left(\frac{1}{o_i^{com}}\right) \left\lVert h_i^{s} - h_i^{a} \right\rVert_2^2$   (7)
By minimizing the sum of these loss functions, the anomaly scores of each node are quantified, and the top-k nodes with higher scores are identified as anomalies.
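To make the five-term objective concrete, the following PyTorch sketch computes loss terms in the spirit of Eqs. (3)-(7) from the outputs of two hypothetical autoencoders; parameterizing the anomaly scores with a softmax (so that each score vector is positive and sums to one) is our simplification of DONE's constrained formulation, not a faithful reproduction of the original implementation.

```python
import torch

def done_style_loss(T, T_hat, X, X_hat, Hs, Ha, A, o_s_logit, o_a_logit, o_com_logit):
    """Anomaly-aware loss in the spirit of Eqs. (3)-(7).

    T, X          : structure / attribute inputs of the two AEs   (n x n, n x d)
    T_hat, X_hat  : their reconstructions
    Hs, Ha        : latent representations from the two AEs       (n x k)
    A             : binary adjacency matrix                       (n x n)
    o_*_logit     : learnable logits; a softmax keeps each anomaly-score
                    vector positive and summing to one (a simplification).
    """
    o_s, o_a, o_com = (torch.softmax(l, dim=0) for l in (o_s_logit, o_a_logit, o_com_logit))
    w_s, w_a, w_com = torch.log(1 / o_s), torch.log(1 / o_a), torch.log(1 / o_com)

    # Eqs. (3)-(4): reconstruction errors, down-weighted for high-score (anomalous) nodes.
    l_str_rec = (w_s * ((T - T_hat) ** 2).sum(dim=1)).mean()
    l_attr_rec = (w_a * ((X - X_hat) ** 2).sum(dim=1)).mean()

    # Eqs. (5)-(6): homophily - a node's latent vector should stay close to its neighbors'.
    deg = A.sum(dim=1).clamp(min=1)
    l_str_hom = (w_s * (A * torch.cdist(Hs, Hs) ** 2).sum(dim=1) / deg).mean()
    l_attr_hom = (w_a * (A * torch.cdist(Ha, Ha) ** 2).sum(dim=1) / deg).mean()

    # Eq. (7): the structure view and attribute view of the same node should agree.
    l_com = (w_com * ((Hs - Ha) ** 2).sum(dim=1)).mean()

    return l_str_rec + l_attr_rec + l_str_hom + l_attr_hom + l_com

# Toy usage with random tensors standing in for real autoencoder outputs.
n, d, k = 6, 8, 4
A = torch.bernoulli(torch.full((n, n), 0.4))
A = ((A + A.T) > 0).float().fill_diagonal_(0)
T, X = A.clone(), torch.randn(n, d)
Hs, Ha = torch.randn(n, k), torch.randn(n, k)
logits = [torch.zeros(n, requires_grad=True) for _ in range(3)]

loss = done_style_loss(T, T + 0.1, X, X + 0.1, Hs, Ha, A, *logits)
loss.backward()                       # gradients w.r.t. the anomaly-score logits
print(float(loss))
```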
3.2.2 GCN Based Techniques
Graph convolutional networks (GCNs) [kipf2016gcn] have achieved considerable success in many graph data mining tasks (e.g., link prediction, node classification, and recommendation) owing to their capability to capture comprehensive information from the graph structure and node attributes. Therefore, many anomalous node detection techniques have started to investigate GCNs. Fig. 5 illustrates a general framework of existing works in this line.
In [ding2019deep], Ding et al. measured an anomaly score for each node using the network reconstruction errors of both the structure and the attributes. The proposed method, DOMINANT, comprises three parts, namely, the graph convolutional encoder, the structure reconstruction decoder, and the attribute reconstruction decoder. The graph convolutional encoder generates node embeddings through multiple graph convolutional layers. The structure reconstruction decoder reconstructs the network structure from the learned node embeddings, while the attribute reconstruction decoder reconstructs the node attribute matrix. The whole neural network is trained to minimize the following loss function:

$\mathcal{L} = (1 - \alpha)\, \mathcal{R}_S + \alpha\, \mathcal{R}_A = (1 - \alpha) \left\lVert A - \hat{A} \right\rVert_F^2 + \alpha \left\lVert X - \hat{X} \right\rVert_F^2$   (8)

where $\alpha$ is a balancing coefficient, $A$ depicts the adjacency matrix of the graph, $X$ is the node attribute matrix, $\hat{A}$ and $\hat{X}$ are their reconstructions, and $\mathcal{R}_S$ and $\mathcal{R}_A$ quantify the reconstruction errors with regard to the graph structure and node attributes, respectively. When the training is finished, an anomaly score is then assigned to each node according to its contribution to the total reconstruction error, which is calculated by:

$score(v_i) = (1 - \alpha) \left\lVert a_i - \hat{a}_i \right\rVert_2 + \alpha \left\lVert x_i - \hat{x}_i \right\rVert_2$   (9)

where $a_i$ and $x_i$ are the structure vector and attribute vector of node $v_i$, and $\hat{a}_i$ and $\hat{x}_i$ are their corresponding reconstructed vectors. The nodes are then ranked according to their anomaly scores in descending order, and the top-k nodes are recognized as anomalies.
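The sketch below illustrates this encoder/double-decoder pattern and the per-node score of Eq. (9) in plain PyTorch, with a simple dense GCN layer standing in for DOMINANT's encoder; the layer sizes, training loop and α value are illustrative choices rather than the paper's reported configuration.

```python
import torch
import torch.nn as nn

class DominantStyle(nn.Module):
    """Graph convolutional encoder + structure and attribute reconstruction decoders."""

    def __init__(self, in_dim, hid_dim=32):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, hid_dim)
        self.attr_dec = nn.Linear(hid_dim, in_dim)

    @staticmethod
    def normalize(A):
        A_hat = A + torch.eye(A.shape[0])                  # add self-loops
        d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)
        return d_inv_sqrt.unsqueeze(1) * A_hat * d_inv_sqrt.unsqueeze(0)

    def forward(self, A, X):
        A_norm = self.normalize(A)
        Z = torch.relu(A_norm @ self.w1(X))                # graph convolutional encoder
        Z = torch.relu(A_norm @ self.w2(Z))
        A_rec = torch.sigmoid(Z @ Z.T)                     # structure decoder (inner product)
        X_rec = self.attr_dec(Z)                           # attribute decoder
        return A_rec, X_rec

def loss_and_scores(A, X, A_rec, X_rec, alpha=0.5):
    loss = (1 - alpha) * ((A - A_rec) ** 2).sum() + alpha * ((X - X_rec) ** 2).sum()  # Eq. (8)
    scores = (1 - alpha) * (A - A_rec).norm(dim=1) + alpha * (X - X_rec).norm(dim=1)  # Eq. (9)
    return loss, scores

# Toy usage: train briefly, then rank nodes by their reconstruction-based anomaly score.
n, d = 20, 8
A = torch.bernoulli(torch.full((n, n), 0.2))
A = ((A + A.T) > 0).float().fill_diagonal_(0)
X = torch.randn(n, d)
model = DominantStyle(d)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(50):
    A_rec, X_rec = model(A, X)
    loss, scores = loss_and_scores(A, X, A_rec, X_rec)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(scores.detach().argsort(descending=True)[:3])        # top-3 most anomalous nodes
```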
To enhance the performance of anomalous node detection, later work by Peng et al. [peng2020deep] further explores node attributes from multiple attributed views to detect anomalies. The multiple attributed views are employed to describe different perspectives of the objects [sheng2019multi, 6848779, wu2013multi]. For example, in online social networks, user’s demographic information and posted contents are two different attributed views, and they characterize the personal information and social activities, respectively. The underlying intuition of investigating different views is that anomalies might appear to be normal in one view but abnormal in another view.
For the purpose of capturing these signals, the proposed method, ALARM, applies multiple GCNs to encode information in different views and adopts a weighted aggregation of them to generate node representations. This model’s training strategy is similar to DOMINANT [ding2019deep] in that it aims to minimize the network reconstruction loss and attribute reconstruction loss and can be formulated as:
$\mathcal{L} = (1 - \alpha) \sum_{i,j} \left( A_{i,j} - \hat{A}_{i,j} \right)^2 + \alpha \left\lVert X - \hat{X} \right\rVert_F^2$   (10)

where $\alpha$ is a coefficient to balance the two errors, $A_{i,j}$ is the element at coordinate $(i, j)$ in the adjacency matrix $A$, $\hat{A}_{i,j}$ is the corresponding element in the reconstructed adjacency matrix $\hat{A}$, $X$ is the original node feature matrix and $\hat{X}$ is the reconstructed node feature matrix. Lastly, ALARM adopts the same scoring function as [ding2019deep], and the nodes with the top-k highest scores are reported as anomalous.
Instead of spotting unexpected nodes using their reconstruction errors, Li et al. [li2019specae] proposed SpecAE to detect global anomalies and community anomalies via a density estimation approach based on the Gaussian Mixture Model (GMM). Global anomalies can be identified by considering only the node attributes. For community anomalies, the structure and attributes need to be jointly considered because these anomalies have attributes that are distinct from those of their neighbors. Accordingly, SpecAE employs a graph convolutional encoder to learn node representations and reconstructs the nodal attributes through a deconvolution decoder. The parameters of the GMM are then estimated using the node representations. Due to the deviating attribute patterns of global and community anomalies, normal nodes are expected to lie in high-density regions of the estimated GMM, and the k nodes with the lowest probabilities are deemed to be anomalies.
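The density-estimation step can be illustrated independently of the encoder. Assuming node representations are already available (random stand-ins below), a Gaussian Mixture Model is fit to them and the k lowest-likelihood nodes are reported, as in the following scikit-learn sketch; SpecAE itself trains the encoder and the density model jointly, which is omitted here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)

# Stand-in for node representations produced by a graph convolutional encoder:
# two "communities" plus three nodes whose representations fit neither.
community_a = rng.normal(loc=-2.0, scale=0.5, size=(60, 8))
community_b = rng.normal(loc=+2.0, scale=0.5, size=(60, 8))
oddballs = rng.normal(loc=0.0, scale=4.0, size=(3, 8))
Z = np.vstack([community_a, community_b, oddballs])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(Z)
log_likelihood = gmm.score_samples(Z)        # per-node log density under the mixture

k = 3
anomalies = np.argsort(log_likelihood)[:k]   # the k lowest-probability nodes
print(anomalies)                             # typically the three oddball indices (120-122)
```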
In [wang2019fdgars], Wang et al. developed a novel detection model that identifies fraudsters using their relations and features. Their proposed method, Fdgars, first models online users’ reviews and visited items as their features, and then identifies a small portion of significant fraudsters based on these features. In the last step, a GCN is trained in a semi-supervised manner using the user-user network, the user features, and the labeled users. After training, the model can directly label unseen users.
A more recent work, GraphRfi [GraphRfi], also explores the potential of combining anomaly detection with other downstream graph analysis tasks. It leverages anomaly detection to identify malicious users and to provide more accurate recommendations to benign users by alleviating the impact of these untrustworthy users. Specifically, a GCN framework is deployed to encode users and items into a shared embedding space for recommendation, and users are classified as fraudsters or normal users by an additional neural random forest that takes their embeddings as input. For rating prediction between users and items, the framework reduces the impact of suspicious users by assigning lower weights to their training loss. At the same time, the rating behavior of users also provides auxiliary information for fraudster detection. The mutually beneficial relationship between these two applications (anomaly detection and recommendation) indicates the potential of information sharing among multiple graph learning tasks.
3.2.3 Reinforcement Learning Based Techniques
In contrast to NAC, Ding et al. [ding2019interactive] investigated the use of reinforcement learning for anomalous node detection in attributed graphs. Their proposed algorithm, GraphUCB, models both attribute information and structural information, and inherits the merits of the contextual multi-armed bandit technique [langford2008epoch] to output potential anomalies. By grouping nodes into clusters based on their features, GraphUCB forms a multi-armed bandit model, with one arm per cluster, and measures the payoff of selecting a specific node as a potential anomaly for expert evaluation. With experts’ feedback on the predicted anomalies, the decision-making strategy is continuously optimized. Eventually, the most likely anomalies can be selected.
4 ANOS ND on Dynamic Graphs
Real-world networks can be modeled as dynamic graphs to represent evolving objects and the relationships among them. In addition to structural information and node attributes, dynamic graphs also contain rich temporal signals [DBLP:conf/wsdm/RossiGNH13], e.g., the evolving patterns of the graph structure and node attributes. On the one hand, this information inherently makes anomalous node detection on dynamic graphs more challenging, because dynamic graphs usually introduce a large volume of data and the temporal signals must also be captured for anomaly detection. On the other hand, it can provide more details about anomalies [ranshous2015anomaly, akoglu2015graph, wang2019detecting]. In fact, some anomalies might appear to be normal in the graph snapshot at each time stamp, and only when the changes in the graph’s structure are considered do they become noticeable.
In this section, we review the network representation based techniques and GAN based techniques as follows. Relevant techniques from traditional non-deep learning approaches are reviewed in Appendix D.
Graph Type | Approach | Category | Objective Function | Measurement | Outputs |
Static Graph - Plain | [hu2016embedding] | NR | Anomaly Score | ||
DCI [wang2021decoupling] | NR | Anomaly Prediction | Predicted Label | ||
NAC [morales2021selective] | RL | Cumulative reward | - | Anomalies | |
Static Graph - Attributed | ALAD [liu2017accelerated] | Non-DP | Anomaly Score | ||
Radar [li2017radar] | Non-DP | Residual Analysis | Residual Value | ||
ANOMALOUS [peng2018anomalous] | Non-DP | Residual Analysis | Residual Value | ||
SGASD [wu2017adaptive] | Non-DP | Anomaly Prediction | Predicted Label | ||
DONE [bandyopadhyay2020outlier] | DNN | Anomaly Scores | |||
DOMINANT [ding2019deep] | GCN | Anomaly Score | |||
ALARM [peng2020deep] | GCN | Anomaly Score | |||
SpecAE [li2019specae] | GCN | Density Estimation | Anomalousness Rank | ||
Fdgars [wang2019fdgars] | GCN | Anomaly Prediction | Predicted Label | ||
GraphRfi [GraphRfi] | GCN | Anomaly Prediction | Predicted Label | ||
ResGCN [pei2021resgcn] | GCN | Anomaly Score | |||
GraphUCB [ding2019interactive] | RL | Expert Judgment | - | Anomalies | |
AnomalyDAE [fan2020anomalydae] | GAT | Reconstruction Loss | Anomalousness Rank | ||
SemiGNN [SemiGNN] | GAT | Anomaly Prediction | Predicted Label | ||
AEGIS [ding2020inductive] | GAN | Anomaly Score | |||
REMAD [zhang2019robust] | NR | Residual Analysis | Residual Value | ||
CARE-GNN [CARE-GNN] | NR | Anomaly Prediction | Predicted Label | ||
SEANO [liang2018semi] | NR | Anomaly Score | Discriminator’s Output | ||
OCGNN [wang2020ocgnn] | NR | Location in Embedding Space | Distance to Hypersphere Center | ||
GAL [GAL] | NR | Anomaly Prediction | Predicted Label | ||
CoLA [liu2021anomaly] | NR | Anomaly Score | |||
COMMANDER [ding2021cross] | NR | Anomaly Score | |||
FRAUDRE [zhangge1] | NR | Anomaly Prediction | Predicted Label | ||
Meta-GDN [ding2021few] | NR | Anomaly Score | |||
Dynamic Graph - Plain | NetWalk [yu2018netwalk] | DNN | Anomaly Score | Nearest Distance to Cluster Centers | |
Dynamic Graph - Attributed | MTHL [teng2017anomaly] | Non-DP | Anomaly Score | Distance to Hypersphere Centroid | |
OCAN [zheng2019one] | GAN | Anomaly Score | Discriminator’s Output | ||
* Non-DP: Non-Deep Learning Techniques, DNN: Deep NN Based Techniques, GCN: GCN Based Techniques, RL: Reinforcement Learning Based Techniques. | |||||
* GAT: GAT Based Techniques, NR: Network Representation Based Techniques, GAN: Generative Adversarial Network Based Techniques. |
4.1 Network Representation Based Techniques
Following the research line of encoding a graph into an embedding space and then performing anomaly detection, dynamic network representation techniques have been investigated in more recent works. Specifically, in [yu2018netwalk], Yu et al. presented a flexible deep representation technique, called NetWalk, for detecting anomalous nodes in dynamic (plain) graphs using only the structure information. It adopts an autoencoder to learn node representations on the initial graph and incrementally updates them when new edges are added or existing edges are deleted. To detect anomalies, NetWalk first executes the streaming k-means clustering algorithm [ailon2009streaming] to group the existing nodes at the current time stamp into different clusters. Then, each node’s anomaly score is measured with regard to its closest distance to the cluster centers. When the node representations are updated, the cluster centers and anomaly scores are recalculated accordingly.
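A batch simplification of this scoring step is sketched below: given the node embeddings at the current time stamp, k-means clusters are fit and each node is scored by its distance to the nearest cluster center; the streaming clustering and incremental embedding updates of NetWalk are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Stand-in for node embeddings at the current time stamp: two clusters of normal
# nodes plus two nodes that sit far from both clusters.
embeddings = np.vstack([
    rng.normal(loc=-3.0, scale=0.6, size=(50, 16)),
    rng.normal(loc=+3.0, scale=0.6, size=(50, 16)),
    rng.normal(loc=0.0, scale=0.3, size=(2, 16)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(embeddings)
dist_to_centers = kmeans.transform(embeddings)   # distance of each node to every center
scores = dist_to_centers.min(axis=1)             # anomaly score: nearest-center distance

k = 2
print(np.argsort(scores)[-k:])                   # the k most anomalous nodes (indices 100 and 101)
```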
4.2 GAN Based Techniques
In practice, anomaly detection faces great challenges from the shortage of ground-truth anomalies. Consequently, much research effort has been invested in modeling the features of anomalies or regular objects such that anomalies can be identified effectively. Among these techniques, generative adversarial networks (GANs) [goodfellow2014generative] have received extensive attention because of their impressive performance in capturing real data distributions and generating simulated data.
Motivated by the recent advances in “bad” GANs [dai2017good], Zheng et al. [zheng2019one] approached the fraudster detection problem using only the observed benign users’ attributes. The basic idea is to capture the normal activity patterns and detect anomalies that behave significantly differently. The proposed method, OCAN, starts by extracting the benign users’ content features from their historical social behaviors (e.g., historical posts, posts’ URLs), which is why this method is classified into the dynamic category. A long short-term memory (LSTM) based autoencoder [DBLP:conf/icml/SrivastavaMS15] is employed to achieve this and, as assumed, benign users and malicious users occupy separate regions in the feature space. Next, a novel one-class adversarial net comprising a generator and a discriminator is trained. Specifically, the generator produces complementary data points that lie in the relatively low-density areas of benign users. The discriminator, accordingly, aims to distinguish the generated samples from the benign users. After training, the benign users’ regions have been learned by the discriminator, and anomalies can hence be identified with regard to their locations.
Both NetWalk [yu2018netwalk] and OCAN [zheng2019one] approach the anomalous node detection problem promisingly; however, each of them considers only the structure or only the attributes. Given the success of static graph anomaly detection techniques that analyze both aspects, an enhanced detection performance can be foreseen when the structure and attribute information in dynamic graphs are jointly considered. We therefore highlight this unexplored area for future works in Section 11.
5 Anomalous edge detection (ANOS ED)
In contrast to anomalous node detection, which targets individual nodes, ANOS ED aims to identify abnormal links. These links often inform the unexpected or unusual relationships between real objects [chang2021f], such as the abnormal interactions between fraudsters and benign users shown in Fig. 1, or suspicious interactions between attacker nodes and benign user machines in computer networks. Following the previous taxonomy, in this section, we review the state-of-the-art ANOS ED methods for static graphs, and Section 6 summarizes the techniques for dynamic graphs. A summary is provided in Table III. This section includes methods based on deep NNs, GCNs and network representations. The non-deep learning techniques are reviewed in Appendix E.
5.1 Deep NN Based Techniques
Similar to deep NN based ANOS ND techniques, autoencoders and fully connected networks (FCNs) have also been used for anomalous edge detection. As an example, Ouyang et al. [DBLP:conf/ijcnn/Ouyang0020] approached the problem by modeling the distribution of edges through deep models to identify the existing edges that are least likely to appear as anomalies (as shown in Fig. 6). The probability of each edge $e_{i,j}$ is decided by $p_i(e_{i,j})$ and $p_j(e_{i,j})$, which measure the edge probability using node $v_i$ with its neighbors and node $v_j$ with its neighbors, respectively. To calculate $p_i(e_{i,j})$, the proposed method, UGED, first encodes each node into a lower-dimensional vector through an FCN layer and generates node $v_i$’s representation $h_i$ by a mean aggregation of its own vector and its neighbors’ vectors. Next, the node representations are fed into another FCN, parameterized by trainable weights, to estimate $p_i(e_{i,j})$ from $h_i$. UGED’s training scheme aims to maximize the predicted probability of existing edges via a cross-entropy based loss function. After training, an anomaly score is assigned to each edge using the average of $p_i(e_{i,j})$ and $p_j(e_{i,j})$. As such, existing edges that have a lower probability get higher scores, and the top-k edges are reported as anomalous.
5.2 GCN Based Techniques
Following the line of modeling edge distributions, some studies leverage GCNs to better capture the graph structure information. Duan et al. [AANE] demonstrated that the existence of anomalous edges in the training data prevents traditional GCN based models from capturing the real edge distribution, which leads to sub-optimal detection performance. This inherently raises a problem: to achieve better detection performance, the node embedding process should alleviate the negative impact of anomalous edges, but these edges are detected using the learned embeddings. To tackle this, the proposed method, AANE, jointly considers these two issues by iteratively updating the embeddings and the detection results during training.

In each training iteration, AANE generates node embeddings through GCN layers and learns an indicator matrix $S$ to spot potential anomalous edges. Given an input graph with adjacency matrix $A$, each entry $S_{i,j}$ in $S$ is set to 1 if $\hat{A}_{i,j} < \bar{A}_{i} - \epsilon$, and 0 otherwise. Here, $\hat{A}_{i,j}$ is the predicted link probability between nodes $v_i$ and $v_j$, which is calculated as the hyperbolic tangent of the product of $v_i$’s and $v_j$’s embeddings, $\bar{A}_{i}$ is the average predicted probability of all links associated with $v_i$, and $\epsilon$ is a predefined threshold. In other words, an edge is identified as a potential anomaly when its predicted probability falls below the average of all links associated with the node by more than the predefined threshold.
The total loss function of AANE contains two parts: an anomaly-aware loss $\mathcal{L}_{ano}$ and an adjusted fitting loss $\mathcal{L}_{fit}$. The anomaly-aware loss penalizes the link prediction results together with the indicator matrix over every node $v_i \in \mathcal{V}$ and its neighbors $\mathcal{N}(v_i)$, so that edges marked as 1 in $S$ are pushed towards lower prediction probabilities. The adjusted fitting loss quantifies the reconstruction loss with regard to the removal of the potential anomalous edges, i.e., it fits the predicted link probabilities to an adjusted adjacency matrix $\tilde{A}$ that removes all predicted anomalous edges from the input adjacency matrix $A$. By minimizing these two losses, AANE identifies the top-k edges with the lowest predicted probabilities as anomalies.
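The indicator-matrix step can be written compactly. The sketch below assumes the predicted link probabilities are obtained as the hyperbolic tangent of embedding inner products (our reading of the construction above) and flags an edge when its predicted probability falls below the mean over its endpoint's links by a margin ε; it is an illustration, not AANE's implementation.

```python
import numpy as np

rng = np.random.default_rng(11)

n, k = 8, 4
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T                                          # random undirected toy graph
Z = rng.normal(size=(n, k))                          # node embeddings from the GCN layers

A_pred = np.tanh(Z @ Z.T)                            # predicted link probabilities (assumed form)
eps = 0.1

S = np.zeros_like(A)
for i in range(n):
    neighbors = np.where(A[i] > 0)[0]
    if len(neighbors) == 0:
        continue
    mean_prob = A_pred[i, neighbors].mean()          # average probability of i's observed links
    S[i, neighbors] = (A_pred[i, neighbors] < mean_prob - eps).astype(float)

# Edges marked with 1 in S are treated as potential anomalies; the adjusted
# adjacency matrix used by the fitting loss simply drops them.
A_adjusted = A * (1 - S)
print(int(S.sum()), "candidate anomalous edge endpoints flagged")
```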
5.3 Network Representation Based Techniques
Instead of using node embeddings for ANOS ED, edge representations learned directly from the graph are also feasible for distinguishing anomalies. If the edge representations well-preserve the graph structure and interaction content (e.g., messages in online social networks, co-authored papers in citation networks) between pairs of nodes, an enhanced detection performance can then be expected. To date, several studies, such as Xu et al. [DBLP:journals/ijdsa/XuWCY20], have shown promising results in generating edge representations. Although they are not specifically designed for graph anomaly detection, they pinpoint a potential approach to ANOS ED. This is highlighted as a potential future direction in Section 11.1.
6 ANOS ED on Dynamic Graphs
Dynamic graphs are powerful in reflecting the appearance/disappearance of edges over time [ranshous2016scalable]. Anomalous edges can be distinguished by modeling the changes in graph structure and capturing the edge distributions at each time step. Recent approaches to ANOS ED on dynamic graphs are reviewed in this section.
6.1 Network Representation Based Techniques
The intuition behind network representation based techniques is to encode the dynamic graph structure information into edge representations and apply the aforementioned traditional anomaly detection techniques to spot irregular edges. This is quite straightforward, but there remain vital challenges in generating/updating informative edge representations when the graph structure evolves. To mitigate this challenge, the ANOS ND model NetWalk [yu2018netwalk] is also capable of detecting anomalous edges in dynamic graphs. Following the line of distance-based anomaly detection, NetWalk encodes edges into a shared latent space using node embeddings, and anomalies are identified based on their distances to the nearest edge-cluster centers in the latent space. Practically, NetWalk generates the representation of an edge $(v_i, v_j)$ as the Hadamard product of the source and destination nodes’ representations, denoted as $z_{(i,j)} = z_i \odot z_j$. When new edges arrive or existing edges disappear, the node and edge representations are updated from random walks on the temporary graphs at each time stamp, after which the edge-cluster centers and edge anomaly scores are recalculated. Finally, the top-k farthest edges from the edge-cluster centers are reported as anomalies.
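A compact sketch of this edge-scoring step follows, assuming node embeddings are already available: each observed edge is embedded as the Hadamard product of its endpoints' vectors, edge clusters are fit with (batch) k-means, and edges are ranked by their distance to the nearest edge-cluster center.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)

n, dim = 30, 8
Z = rng.normal(size=(n, dim))                               # stand-in for learned node embeddings
edges = [(i, (i + 1) % n) for i in range(n)] + [(0, 15)]    # a ring plus one extra edge

# Hadamard (element-wise) product of the two endpoints' embeddings.
edge_emb = np.array([Z[u] * Z[v] for u, v in edges])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(edge_emb)
scores = kmeans.transform(edge_emb).min(axis=1)             # distance to nearest edge-cluster center

k = 3
top_k = np.argsort(scores)[-k:]
print([edges[i] for i in top_k])                            # the k edges reported as most anomalous
```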
6.2 GCN Based Techniques
Although NetWalk is capable of detecting anomalies in dynamic graphs, it simply updates edge representations without considering the evolving patterns of long/short-term nodes and the graph’s structure. For more effective ANOS ED, Zheng et al. [zheng2019addgraph] intuitively combined temporal, structural and attribute information to measure the anomalousness of edges in dynamic graphs. They propose a semi-supervised model, AddGraph, which comprises a GCN and Gated Recurrent Units (GRU) with attention [cui2019hierarchical] to capture more representative structural information from the temporal graph in each time stamp and dependencies between them, respectively.
At each time stamp $t$, the GCN takes the hidden state $H^{t-1}$ output at time $t-1$ to generate node embeddings, after which the GRU learns the current hidden state $H^{t}$ from the node embeddings and attention over the previous hidden states (as shown in Fig. 7). After obtaining the hidden states of all nodes, AddGraph assigns an anomaly score to each edge in the temporal graph based on the nodes associated with it. The proposed anomaly scoring function is formulated as:

$f(i, j, w) = w \cdot \sigma\big(\beta \cdot (\|a \odot h_i + b \odot h_j\|_2^2 - \mu)\big)$ (13)

where $h_i$ and $h_j$ are the hidden states of the corresponding nodes, $w$ is the weight of the edge, $a$ and $b$ are trainable parameters, $\beta$ and $\mu$ are hyper-parameters, and $\sigma(\cdot)$ is the non-linear activation function. To learn the model parameters, Zheng et al. further assumed that all existing edges in the dynamic graph are normal in the training stage, and sampled non-existing edges as anomalies. Specifically, they form the loss function as:

$\min \sum_{(i,j,w)\in \mathcal{E}^{t}} \sum_{(i',j',w)\notin \mathcal{E}^{t}} \max\{0,\ \gamma + f(i,j,w) - f(i',j',w)\} + \mathcal{L}_{reg}$ (14)

where $\mathcal{E}^{t}$ is the edge set, $(i',j',w)$ are sampled non-existing edges at time stamp $t$, $\gamma$ is a margin hyper-parameter, and $\mathcal{L}_{reg}$ regularizes all trainable parameters in the model. After training, the scoring function identifies anomalous edges in the test data by assigning them higher anomaly scores based on Eq. 13.
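As a hedged illustration of how such an edge scoring function and margin-based loss can be implemented (a minimal sketch, assuming the hidden states h are already produced by the GCN-GRU backbone; the parameter names and hyper-parameter values are illustrative, not AddGraph's actual settings):

import torch
import torch.nn.functional as F

d = 32
a = torch.nn.Parameter(torch.rand(d))
b = torch.nn.Parameter(torch.rand(d))
beta, mu, gamma = 3.0, 0.5, 0.5   # assumed hyper-parameter values

def edge_score(h_i, h_j, w):
    # Eq. 13-style score: weighted sigmoid of the squared norm of the
    # combined end-node hidden states, shifted by mu and scaled by beta.
    return w * torch.sigmoid(beta * ((a * h_i + b * h_j).pow(2).sum(-1) - mu))

def margin_loss(h, pos_edges, neg_edges, w=1.0):
    # Eq. 14-style margin loss: existing (normal) edges should score at
    # least gamma lower than sampled non-existing (anomalous) edges.
    s_pos = edge_score(h[pos_edges[:, 0]], h[pos_edges[:, 1]], w)
    s_neg = edge_score(h[neg_edges[:, 0]], h[neg_edges[:, 1]], w)
    return F.relu(gamma + s_pos - s_neg).mean()

# Toy usage with random hidden states and edge index tensors.
h = torch.rand(10, d)
pos = torch.randint(0, 10, (20, 2))
neg = torch.randint(0, 10, (20, 2))
loss = margin_loss(h, pos, neg)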
7 Anomalous Sub-graph Detection (ANOS SGD)
In real life, anomalies might also collude and behave collectively with others to garner benefits. For instance, fraudulent user groups in an online review network, as shown in Fig. 1, may post misleading reviews to promote or besmirch certain merchandise. When these data are represented as graphs, anomalies and their interactions usually form suspicious sub-graphs, and ANOS SGD is proposed to distinguish them from the benign.
Unlike individual and independent graph anomalies, i.e., single nodes or edges, each node and edge in a suspicious sub-graph might be normal on its own; only when considered as a collection do they turn out to be anomalous. Moreover, these sub-graphs vary in size and inner structure, making anomalous sub-graph detection more challenging than ANOS ND/ED [dGraphScan]. Although extensive effort has been devoted to this problem, deep learning techniques have only begun to address it in the last five years. For reference, traditional non-deep learning based techniques are briefly introduced in Appendix F, and a summary of the techniques reviewed for ANOS SGD is provided in Table III at the end of Section 9.
Due to the flexibility of heterogeneous graphs in representing the complex relationships between different kinds of real objects, several recent works have taken advantage of deep network representation techniques to detect real-world anomalies through ANOS SGD. For instance, Wang et al. [wang2018deep] represented online shopping networks as bipartite graphs (a specific type of heterogeneous graph that has two types of nodes and one type of edge), in which users are source nodes and items are sink nodes. Fraudulent groups are then detected based on suspicious dense blocks that form in these graphs.
Wang et al. [wang2018deep] aimed to learn anomaly-aware representations of users such that suspicious users in the same group are located closely in the vector space, while benign users are far away (as shown in the embedding space in Fig. 8). Based on the observation that user nodes belonging to one fraudulent group are more likely to connect with the same item nodes, the developed model, DeepFD, measures the behavioral similarity of two users, $s_{ij}$, as the percentage of shared items among all the items they have reviewed. User representations are then generated through a traditional autoencoder, which follows the encoding-decoding process and is trained using three losses. The first loss is the reconstruction loss, which ensures the bipartite graph structure can be reconstructed properly from the learned user and item representations. The second term preserves the user similarity information in the learned user representations; that is, if two users have similar behaviors, their representations should also be similar. This loss is formulated as:
$\mathcal{L}_{sim} = \sum_{i,j=1}^{N} \big(s_{ij} - \hat{s}_{ij}\big)^2$ (15)

where $N$ is the number of user nodes and $\hat{s}_{ij}$ measures the similarity of user $u_i$'s and $u_j$'s representations using an RBF kernel or another alternative. The third loss regularizes all trainable parameters. Finally, the suspicious dense blocks, which are expected to form dense regions in the vector space, are detected using DBSCAN [DBSCAN].
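A rough sketch of such a similarity-preserving term is given below (assuming precomputed user representations H and a behavior-similarity matrix S; the RBF bandwidth is an illustrative choice, not DeepFD's actual setting):

import torch

def similarity_loss(H, S, sigma=1.0):
    """Penalize the gap between behavioral similarity S[i, j] and the
    RBF-kernel similarity of the learned user representations (Eq. 15 style)."""
    sq_dist = torch.cdist(H, H).pow(2)            # pairwise ||h_i - h_j||^2
    S_hat = torch.exp(-sq_dist / (2 * sigma**2))  # RBF-kernel similarity
    return ((S - S_hat) ** 2).sum()

# Toy usage: 10 users with 8-dim representations and a random similarity matrix.
H = torch.rand(10, 8)
S = torch.rand(10, 10)
loss = similarity_loss(H, S)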
Another work, FraudNE [FraudNE], also models online review networks as bipartite graphs and further detects both malicious users and the associated manipulated items following the dense block detection principle. Unlike DeepFD, FraudNE aims to encode both types of nodes into a shared latent space where suspicious users and items belonging to the same dense block are very close to each other while the others are distributed uniformly (as shown in Fig. 8). FraudNE adopts two traditional autoencoders, namely a source node autoencoder and a sink node autoencoder, to learn user representations and item representations, respectively. Both autoencoders are trained to jointly minimize their corresponding reconstruction losses and a shared loss function, and the total loss can be formulated as:
$\mathcal{L} = \mathcal{L}_{rec}^{src} + \mathcal{L}_{rec}^{sink} + \alpha \mathcal{L}_{shared} + \beta \mathcal{L}_{reg}$ (16)

where $\alpha$ and $\beta$ are hyperparameters, and $\mathcal{L}_{reg}$ regularizes all trainable parameters. Specifically, the reconstruction losses (i.e., $\mathcal{L}_{rec}^{src}$ and $\mathcal{L}_{rec}^{sink}$) measure the gap between the input user/item features (extracted from the graph structure) and their decoded features. The shared loss $\mathcal{L}_{shared}$ restricts the representation learning process such that each linked pair of users and items obtains similar representations. As the DBSCAN [DBSCAN] algorithm is convenient for dense region detection, FraudNE also uses it to distinguish the dense sub-graphs formed by suspicious users and items.
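Once users and items are embedded into a shared space, the dense-block detection step that both DeepFD and FraudNE rely on can be approximated with DBSCAN; in the sketch below the eps and min_samples values are illustrative rather than the papers' settings, and each non-noise cluster is treated as a candidate fraud block.

import numpy as np
from sklearn.cluster import DBSCAN

def dense_blocks(embeddings, eps=0.5, min_samples=5):
    """Cluster user/item embeddings; DBSCAN labels dense regions 0, 1, ...
    and noise points -1. Each non-noise cluster is a candidate dense block."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(embeddings)
    return {c: np.where(labels == c)[0] for c in set(labels) if c != -1}

# Toy usage: two tight groups (suspected blocks) plus scattered points.
rng = np.random.default_rng(1)
emb = np.vstack([rng.normal(0, 0.05, (20, 2)),
                 rng.normal(3, 0.05, (15, 2)),
                 rng.uniform(-5, 5, (30, 2))])
blocks = dense_blocks(emb)   # {cluster_id: member indices}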
To date, only a few works have put their efforts into using deep learning techniques for ANOS SGD. However, with intensifying research interest in sub-graph representation learning, we encourage more studies on ANOS SGD and highlight this as a potential future direction in Section 11.1.
8 Anomalous Graph Detection (ANOS GD)
Beyond anomalous nodes, edges, and sub-graphs, graph anomalies might also appear as abnormal graphs in a set/database of graphs. Typically, a graph database is defined as:
Definition 4 (Graph Database). A graph database $\mathcal{G} = \{G_1, \dots, G_n\}$ contains $n$ individual graphs. Here, each graph $G_i = (V_i, E_i)$ is comprised of a node set $V_i$ and an edge set $E_i$. $X_i$ and $X_i^{e}$ are the node attribute matrix and edge attribute matrix of $G_i$ if it is an attributed graph.
Graph-level ANOS GD aims to detect individual graphs that deviate significantly from the others. A concrete example of ANOS GD is unusual molecule detection. When chemical compounds are represented as molecular/chemical graphs, where atoms and bonds are represented as nodes and edges [6702420, sun2021sugar], unusual molecules can be identified because their corresponding graphs have structures and/or features that deviate from the others. Brain disorder detection is another example: a brain disorder can be diagnosed by analyzing the dynamics of brain graphs at different stages of aging in sequence and finding an inconsistent snapshot at a specific time stamp.
The previously reviewed techniques (i.e., ANOS ND/ED/SGD) are not compatible with ANOS GD because they are dedicated to detecting anomalies within a single graph, whereas ANOS GD is directed at detecting graph-level anomalies. This problem is commonly approached by: 1) measuring the pairwise proximities of graphs using graph kernels [manzoor2016fast]; 2) detecting the appearance of anomalous graph signals created by abnormal groups of nodes [hooi2018changedar]; or 3) encoding graphs using frequent motifs [noble2003graph]. However, none of these methods are deep learning-based. At the time of writing, very few studies of ANOS GD with deep learning have been undertaken. As such, this is highlighted as a potential future direction in Section 11.1.
8.1 GNN Based Techniques
Motivated by the success of GNNs in various graph classification tasks, the most recent works in ANOS GD employ GNNs to classify single graphs as normal/abnormal in the given graph database. Specifically, Dou et al. [dou2021user] transformed fake news detection into an ANOS GD problem by modeling news as tree-structured propagation graphs, where the root node denotes a piece of news and the child nodes denote users who interact with that news. Their end-to-end framework, UPFD, extracts two embeddings, one for the news piece and one for the users, via a text embedding model (e.g., word2vec, BERT) and a user engagement embedding process, respectively. For each news graph, its latent representation is a flattened concatenation of these two embeddings, which is used to train a neural classifier with the label of the news. Propagation graphs that are labeled as fake by the trained model are regarded as anomalous.
Another representative work by Zhao and Akoglu [zhao2020using] employed a GIN model and a one-class classification (i.e., DeepSVDD [ruff2018deep]) loss to train a graph-level anomaly detection framework in an end-to-end manner. For each individual graph in the graph database, its graph-level embedding is generated by applying mean-pooling over its nodes' node-level embeddings. A graph is eventually flagged as anomalous if it lies outside the learned hypersphere, as shown in Fig. 9.
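A minimal sketch of this idea follows (not the authors' OCGIN code): node embeddings are mean-pooled into a graph embedding and trained against a DeepSVDD-style one-class objective that pulls normal graphs toward a fixed center; a placeholder linear layer stands in for the GIN encoder.

import torch
import torch.nn as nn

class OneClassGraphScorer(nn.Module):
    """Placeholder encoder + mean pooling + distance-to-center scoring."""
    def __init__(self, in_dim, emb_dim=16):
        super().__init__()
        self.encoder = nn.Linear(in_dim, emb_dim, bias=False)  # stand-in for a GIN
        # In DeepSVDD-style training the center is usually fixed after an
        # initial forward pass; a constant non-zero center is used here.
        self.register_buffer("center", torch.full((emb_dim,), 0.5))

    def forward(self, node_feats):                        # (num_nodes, in_dim)
        graph_emb = self.encoder(node_feats).mean(dim=0)  # mean-pooled graph embedding
        return ((graph_emb - self.center) ** 2).sum()     # anomaly score

model = OneClassGraphScorer(in_dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for graph in [torch.rand(5, 8), torch.rand(7, 8)]:   # toy "normal" graphs
    loss = model(graph)                               # one-class loss
    opt.zero_grad(); loss.backward(); opt.step()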
8.2 Network Representation Based Techniques
It is also possible to apply general graph-level network representation techniques to ANOS GD. With these methods, the detection problem is transformed into a conventional outlier detection problem in the embedding space. In contrast to D(G)NN based techniques that detect graph anomalies in an end-to-end manner, adopting these representation techniques for anomaly detection is two-staged. First, graphs in the database are encoded into a shared latent space using graph-level representation techniques, such as Graph2Vec [narayanan2017graph2vec] and FGSD [verma2017hunt]. Then, the anomalousness of each single graph is measured by an off-the-shelf outlier detector. Essentially, this kind of approach pairs existing methods in the two stages; however, the stages are disconnected from each other, so the detection performance can be subpar because the embedding similarities are not necessarily designed for the sake of anomaly detection.
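The two-stage recipe can be sketched as follows, with a hypothetical embed_graphs() standing in for Graph2Vec/FGSD (their actual APIs are not assumed here) and an Isolation Forest as the off-the-shelf outlier detector:

import numpy as np
from sklearn.ensemble import IsolationForest

def embed_graphs(graphs, dim=32):
    """Hypothetical stage 1: replace with Graph2Vec, FGSD, or any graph-level
    representation method that returns one vector per graph."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(graphs), dim))

graphs = [object() for _ in range(100)]            # placeholder graph objects
X = embed_graphs(graphs)                           # stage 1: graph embeddings
detector = IsolationForest(contamination=0.05, random_state=0).fit(X)
scores = -detector.score_samples(X)                # higher = more anomalous
anomalous = np.argsort(scores)[-5:]                # stage 2: flag the outliers

Because the detector never sees the embedding objective, the two stages remain decoupled, which is exactly the limitation noted above.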
9 ANOS GD on Dynamic Graphs
In dynamic graph environments, graph-level anomaly detection endeavors to identify abnormal graph snapshots/temporal graphs. Similar to ANOS ND and ED on dynamic graphs, given a sequence of graphs, anomalous graphs can be distinguished based on their unusual evolving patterns, abnormal graph-level features, or other characteristics.
Graph Type | Approach | Category | Objective Function | Measurement | Outputs |
Anomalous Edge Detection Techniques | |||||
Static Graph - Plain | UGED [DBLP:conf/ijcnn/Ouyang0020] | DNN | Anomaly Score | ||
AANE [AANE] | GCN | Anomaly Ranking | Edge Existing Probability | ||
Static Graph - Attributed | eFraudCom [zhangge2] | NR | Anomaly Prediction | Predicted Label | |
Dynamic Graph - Plain | NetWalk [yu2018netwalk] | NR | Anomaly Score | Nearest Distance to Cluster Centers | |
Dynamic Graph - Attributed | AddGraph [zheng2019addgraph] | GCN | Anomaly Score | ||
Anomalous Sub-graph Detection Techniques | |||||
Static Graph - Plain | DeepFD [wang2018deep] | NR | Density-based Method (DBSCAN) | Dense sub-graphs | |
FraudNE [FraudNE] | NR | Density-based Method (DBSCAN) | Dense sub-graphs | ||
Anomalous Graph Detection Techniques | |||||
Graph Database - Attributed | UPFD [dou2021user] | NR | Anomaly Prediction | Predicted Label | |
OCGIN [zhao2020using] | GNN | Location in Embedding Space | Distance to Hypersphere Center | ||
Dynamic Graph - Plain | DeepSphere [teng2018deep] | DNN | Location in Embedding Space | Anomalous Label | |
GLAD-PAW [wan2021glad] | GNN | Anomaly Prediction | Predicted Label | ||
* DNN: Deep NN Based Techniques, GCN: GCN Based Techniques, NR: Network Representation Based Techniques, GNN: Graph Neural Network Based Techniques.
To derive each graph snapshot's/temporal graph's characteristics, commonly used models such as GNNs, LSTMs, and autoencoders are feasible to apply. For instance, Teng et al. [teng2018deep] applied an LSTM autoencoder to detect abnormal graph snapshots, as shown in Fig. 10. In their proposed model, DeepSphere, a dynamic graph is described as a collection of third-order tensors, where the slices of each tensor along the time dimension are the adjacency matrices of the graph snapshots. To identify abnormal tensors, DeepSphere first embeds each graph snapshot into a latent space using an LSTM autoencoder, and then leverages a one-class classification objective [ruff2018deep] that learns a hypersphere such that normal snapshots are covered and anomalous snapshots lie outside. The LSTM autoencoder takes the adjacency matrices as input sequentially and attempts to reconstruct them through training. The hypersphere is learned through a single neural network layer, and its objective function is formulated as:
$\min_{R,\, \mathbf{a},\, \boldsymbol{\xi}} \; R^{2} + \frac{1}{\nu n} \sum_{i=1}^{n} \xi_i \quad \text{s.t.} \quad \|\mathbf{z}_i - \mathbf{a}\|^{2} \le R^{2} + \xi_i, \;\; \xi_i \ge 0$ (17)

where $\mathbf{z}_i$ is the latent representation generated by the LSTM autoencoder, $\mathbf{a}$ is the centroid of the hypersphere, $R$ is the radius, $\xi_i$ is the outlier penalty ($\xi_i \ge 0$), $n$ is the number of training graph snapshots, and $\nu$ is a hyperparameter. The overall objective function of DeepSphere is represented as:

$\mathcal{L} = \mathcal{L}_{hypersphere} + \mathcal{L}_{rec}$ (18)

where $\mathcal{L}_{rec}$ is the reconstruction loss of the LSTM autoencoder. When training is finished, DeepSphere flags a given unseen data point as anomalous if its embedding lies outside the learned hypersphere of radius $R$.
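A hedged sketch of such a soft-boundary hypersphere term combined with a reconstruction loss is shown below (the latent codes z and the reconstructions are assumed to come from an LSTM autoencoder that is not shown; nu and the radius are illustrative values):

import torch

def hypersphere_loss(z, center, radius, nu=0.1):
    # Soft-boundary objective: shrink the radius while penalizing latent
    # codes that fall outside the sphere (their slack xi > 0), as in Eq. 17.
    xi = torch.clamp(((z - center) ** 2).sum(dim=1) - radius ** 2, min=0.0)
    return radius ** 2 + xi.mean() / nu

def deepsphere_style_loss(z, center, radius, x, x_rec, nu=0.1):
    # Overall objective (Eq. 18 style): hypersphere term plus the
    # autoencoder's reconstruction error on the input adjacency tensors.
    rec = ((x - x_rec) ** 2).mean()
    return hypersphere_loss(z, center, radius, nu) + rec

# Toy usage with random stand-ins for latent codes and reconstructions.
z = torch.rand(8, 16)                    # latent codes of 8 graph snapshots
center, radius = torch.zeros(16), torch.tensor(1.0)
x, x_rec = torch.rand(8, 10, 10), torch.rand(8, 10, 10)
loss = deepsphere_style_loss(z, center, radius, x, x_rec)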
In addition to all the ANOS ND, ED, SGD, and GD techniques reviewed above, it is worth mentioning that perturbed graphs, which adversarial models generate to attack graph classification algorithms or GNNs [zhang2021backdoor, dai2018adversarial, 9338329], can also be regarded as (intentional) anomalies. In a perturbed graph, the nodes and edges are modified deliberately to deviate from the others. We have not reviewed these works in this survey because their main purpose is to attack a GNN model. The key idea behind these methods is the attacking/perturbation strategy, and studies in this sphere seldom focus on a detection or reasoning module to identify the perturbed graph or its sub-structures, i.e., anomalous nodes, edges, sub-graphs, or graphs.
Model | Language | Platform | Graph | Code Repository |
AnomalyDAE [fan2020anomalydae] | Python | Tensorflow | Static Attributed Graph | https://github.com/haoyfan/AnomalyDAE |
MADAN [gutierrez2020multi] | Python | - | Static Attributed Graph | https://github.com/leoguti85/MADAN |
PAICAN [bojchevski2018bayesian] | Python | Tensorflow | Static Attributed Graph | http://www.kdd.in.tum.de/PAICAN/ |
ONE [bandyopadhyay2019outlier] | Python | - | Static Attributed Graph | https://github.com/sambaranban/ONE |
DONE&AdONE [bandyopadhyay2020outlier] | Python | Tensorflow | Static Attributed Graph | https://bit.ly/35A2xHs |
SLICENDICE [nilforoshan2019slicendice] | Python | - | Static Attributed Graph | http://github.com/hamedn/SliceNDice/ |
FRAUDRE [zhangge1] | Python | Pytorch | Static Attributed Graph | https://github.com/FraudDetection/FRAUDRE |
SemiGNN [SemiGNN] | Python | Tensorflow | Static Attributed Graph | https://github.com/safe-graph/DGFraud |
CARE-GNN [CARE-GNN] | Python | Pytorch | Static Attributed Graph | https://github.com/YingtongDou/CARE-GNN |
GraphConsis [GraphConsis] | Python | Tensorflow | Static Attributed Graph | https://github.com/safe-graph/DGFraud |
GLOD [zhao2020using] | Python | Pytorch | Static Attributed Graph | https://github.com/LingxiaoShawn/GLOD-Issues |
OCAN [zheng2019one] | Python | Tensorflow | Static Graph | https://github.com/PanpanZheng/OCAN |
DeFrauder [DBLP:conf/ijcai/DhawanGK019] | Python | - | Static Graph | https://github.com/LCS2-IIITD/DeFrauder |
GCAN [lu2020gcan] | Python | Keras | Heterogeneous Graph | https://github.com/l852888/GCAN |
HGATRD [huang2020heterogeneous] | Python | Pytorch | Heterogeneous Graph | https://github.com/201518018629031/HGATRD |
GLAN [yuan2019jointly] | Python | Pytorch | Heterogeneous Graph | https://github.com/chunyuanY/RumorDetection |
GEM [liu2018heterogeneous] | Python | - | Heterogeneous Graph | https://github.com/safe-graph/DGFraud/tree/master/algorithms/GEM |
eFraudCom [zhangge2] | Python | Pytorch | Heterogeneous Graph | https://github.com/GeZhangMQ/eFraudCom |
DeepFD [wang2018deep] | Python | Pytorch | Bipartite Graph | https://github.com/JiaWu-Repository/DeepFD-pyTorch |
ANOMRANK [yoon2019fast] | C++ | - | Dynamic Graph | https://github.com/minjiyoon/anomrank |
MIDAS [DBLP:conf/aaai/0001HYSF20] | C++ | - | Dynamic Graph | https://github.com/Stream-AD/MIDAS |
Sedanspot [eswaran2018sedanspot] | C++ | - | Dynamic Graph | https://www.github.com/dhivyaeswaran/sedanspot |
F-FADE [chang2021f] | Python | Pytorch | Dynamic Graph | http://snap.stanford.edu/f-fade/ |
DeepSphere [teng2018deep] | Python | Tensorflow | Dynamic Graph | https://github.com/picsolab/DeepSphere |
Changedar [hooi2018changedar] | Matlab | - | Dynamic Graph | https://bhooi.github.io/changedar/ |
UPFD [dou2021user] | Python | Pytorch | Graph Database | https://github.com/safe-graph/GNN-FakeNews |
OCGIN [zhao2020using] | Python | Pytorch | Graph Database | https://github.com/LingxiaoShawn/GLOD-Issues |
DAGMM [zong2018deep] | Python | Pytorch | Non Graph | https://github.com/danieltan07/dagmm |
DevNet [pang2019deep] | Python | Tensorflow | Non Graph | https://github.com/GuansongPang/deviation-network |
RDA [zhou2017anomaly] | Python | Tensorflow | Non Graph | https://github.com/zc8340311/RobustAutoencoder |
GAD [GAD] | Python | Tensorflow | Non Graph | https://github.com/raghavchalapathy/gad |
Deep SAD [ruff2019deep] | Python | Pytorch | Non Graph | https://github.com/lukasruff/Deep-SAD-PyTorch |
DATE [10.1145/3394486.3403339] | Python | Pytorch | Non Graph | https://github.com/Roytsai27/Dual-Attentive-Tree-aware-Embedding |
STS-NN [STS-NN] | Python | Pytorch | Non Graph | https://github.com/JiaWu-Repository/STS-NN |
* -: No Dedicated Platforms.
Category | Dataset | #G | #N | #E | #FT | #AN | REF | URL |
Citation Networks | ACM | 1 | 16K | 71K | 8.3K | - | [ding2019deep, ding2019interactive, fan2020anomalydae, ding2020inductive] | http://www.arnetminer.org/open-academic-graph |
Cora | 1 | 2.7K | 5.2K | 1.4K | - | [li2019specae, bandyopadhyay2020outlier, bandyopadhyay2019outlier, liang2018semi, bojchevski2018bayesian] | http://linqs.cs.umd.edu/projects/projects/lbc | |
Citeseer | 1 | 3.3K | 4.7K | 3.7K | - | [bandyopadhyay2020outlier, bandyopadhyay2019outlier, liang2018semi, perozzi2016scalable] | http://linqs.cs.umd.edu/projects/projects/lbc | |
Pubmed | 1 | 19K | 44K | 500 | - | [li2019specae, bandyopadhyay2020outlier, bandyopadhyay2019outlier, liang2018semi] | http://linqs.cs.umd.edu/projects/projects/lbc | |
DBLP | 1 | - | - | - | - | [yu2018netwalk, eswaran2018sedanspot, bojchevski2018bayesian, hu2016embedding, perozzi2016scalable] | http://www.informatik.uni-trier.de/˜ley/db/ | |
Social Networks | Enron | - | 80K | - | - | - | [li2017radar, peng2018anomalous, zhang2019robust, gutierrez2020multi, wang2019detecting, yoon2019fast, eswaran2018sedanspot, eswaran2018spotlight, rayana2016less] | http://odds.cs.stonybrook.edu/#table2 |
UCI Message | 1 | 5K | - | - | - | [yu2018netwalk, cai2020structural, zheng2019addgraph] | http://archive.ics.uci.edu/ml | |
Google+ | 4 | 75M | 11G | - | - | - | https://wangbinghui.net/dataset.html | |
Twitter Sybil | 3 | 41M | - | - | 100K | - | https://wangbinghui.net/dataset.html | |
Twitter WorldCup2014 | - | 54K | - | - | - | [rayana2016less] | http://shebuti.com/SelectiveAnomalyEnsemble/ | |
Twitter Security2014 | - | 130K | - | - | - | [rayana2016less] | http://shebuti.com/SelectiveAnomalyEnsemble/ | |
Reality Mining | - | 9.1K | - | - | - | [rayana2016less] | http://shebuti.com/SelectiveAnomalyEnsemble/ | |
NYTNews | - | 320K | - | - | - | [rayana2016less] | http://shebuti.com/SelectiveAnomalyEnsemble/ | |
Politifact | 314 | 41K | 40K | - | 157 | [dou2021user] | https://github.com/safe-graph/GNN-FakeNews | |
Gossipcop | 5.4K | 314K | 308K | - | 2.7K | [dou2021user] | https://github.com/safe-graph/GNN-FakeNews | |
Co-purchasing Networks | Disney | 1 | 124 | 334 | 30 | 6 | [li2017radar, peng2018anomalous, liu2017accelerated, zhang2019robust, gutierrez2020multi] | https://www.ipd.kit.edu/mitarbeiter/muellere/consub/ |
Amazon-v1 | 1 | 314K | 882K | 28 | 6.2K | [peng2018anomalous, hooi2017graph, zhu2020mixedad, kumar2018rev2, bojchevski2018bayesian, hu2016embedding, hooi2016fraudar] | https://www.ipd.kit.edu/mitarbeiter/muellere/consub/ | |
Amazon-v2 | 1 | 11K | - | 25 | 821 | - | https://github.com/dmlc/dgl/blob/master/python/dgl/data/fraud.py | |
Elliptic | 1 | 203K | 234K | 166 | 4.5K | - | https://www.kaggle.com/ellipticco/elliptic-data-set | |
Yelp | 1 | 45K | - | 32 | 6.6K | - | https://github.com/dmlc/dgl/blob/master/python/dgl/data/fraud.py | |
Transportation Networks | New York City Taxi | - | - | - | - | - | [teng2018deep, teng2017anomaly, eswaran2018spotlight] | http://www.nyc.gov/html/tlc/html/about/triprecorddata.shtml |
* -: Not Given, #G: Number of Graphs, #N: Number of Nodes, #E: Number of Edges, #FT: Number of Features, #AN: Number of Anomalies, REF: References. |
10 Published Algorithms and Datasets
Acquiring open-sourced implementations and real-world datasets with genuine anomalies is far from trivial in academic research on graph anomaly detection. Here, we first list the published algorithms with publicly available implementations, then we provide a collection of public benchmark datasets and summarize the commonly used evaluation metrics. Lastly, due to the shortage of labeled anomalies in real-world datasets, we review three synthetic dataset generation strategies used in existing works. All the resources are available at: https://github.com/XiaoxiaoMa-MQ/Awesome-Deep-Graph-Anomaly-Detection/.
10.1 Published Algorithms
The published implementations of algorithms and models facilitate baseline experiments. Table IV provides a summary of published implementations, outlining the language and platform, the graph types they can admit, and the URLs of their code repositories.
10.2 Published Datasets
Table V summarizes the most commonly used datasets, categorizing them into different groups with regard to their application fields. Notably, there is a lack of anomaly ground truth; labeled anomalies are only provided in the Enron, Twitter Sybil, Disney, Amazon, Elliptic, and Yelp datasets. Details of the DBLP, UCI Message, Digg, Wikipedia, and New York City Taxi datasets are not given because these public datasets only contain the raw data; in most existing works, they are further processed to build different graphs (e.g., homogeneous graphs, bipartite graphs). The well-known citation networks are often used to generate synthetic datasets by injecting anomalies into them - the number of anomalies varies from study to study.
10.3 Synthetic Dataset Generation
Given the rarity of ground-truth anomalies, many researchers have employed synthetic datasets to investigate the effectiveness of their proposed methods [bandyopadhyay2019outlier, DBLP:journals/aim/SenNBGGE08, DBLP:conf/icdm/SanchezMLKB13]. Typically, these datasets can be categorized as follows:
• Synthetic graphs with injected anomalies. Pursuing this strategy, graphs are created to simulate real-world networks. All nodes and edges are added manually following well-known benchmarks (e.g., Lancichinetti-Fortunato-Radicchi (LFR) [lancichinetti2009community], small-world [watts1998collective], and scale-free graphs [akoglu2009rtg]). Once built, ground-truth anomalies are planted into the network. Because networks of any expected scale can be generated, this strategy is mostly used by previous works to validate their underlying intuitions in anomaly detection.
• Real-world datasets with injected anomalies. These datasets are built on real-world networks. In particular, anomalies are created either by modifying the topological structure or the attributes of existing nodes/edges/sub-graphs, or by inserting non-existent graph objects (a code sketch of this strategy follows this list).
• Downsampled graph classification datasets. The widely used graph classification datasets (e.g., NCI1, IMDB, ENZYMES in [zhao2020using]) can be easily converted into sets suitable for anomaly detection in two steps. First, one particular class and its data records are chosen to represent normal objects. Then, the other data records are downsampled as anomalies at a specified downsampling rate. By this, the generated graph anomaly detection dataset is, in fact, a subset of the original dataset. The most significant strength of this strategy is that no single data record is modified.
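As referenced in the second strategy above, a common injection recipe plants a small clique as a structural anomaly and overwrites some nodes' attributes with those of other nodes as contextual anomalies; the exact sizes and donor-selection rules vary from paper to paper (many pick the most dissimilar donor among a sampled set), so the sketch below uses illustrative parameters and a purely random donor.

import numpy as np
import networkx as nx

def inject_anomalies(G, X, n_clique=5, n_context=5, seed=0):
    """Return (G, X, labels) with a planted clique (structural anomalies)
    and attribute-swapped nodes (contextual anomalies)."""
    rng = np.random.default_rng(seed)
    labels = np.zeros(G.number_of_nodes(), dtype=int)

    # Structural anomalies: pick n_clique nodes and fully connect them.
    clique = rng.choice(G.number_of_nodes(), n_clique, replace=False)
    G.add_edges_from((int(u), int(v)) for u in clique for v in clique if u < v)
    labels[clique] = 1

    # Contextual anomalies: replace each target node's attributes with
    # those of another (randomly chosen) node elsewhere in the graph.
    targets = rng.choice(G.number_of_nodes(), n_context, replace=False)
    donors = rng.choice(G.number_of_nodes(), n_context, replace=False)
    X[targets] = X[donors]
    labels[targets] = 1
    return G, X, labels

G = nx.erdos_renyi_graph(200, 0.02, seed=0)
X = np.random.default_rng(0).normal(size=(200, 10))
G, X, labels = inject_anomalies(G, X)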
10.4 Evaluation Metrics
To date, the most widely used metrics for evaluating anomaly detection performance include accuracy, precision, recall, F1-score, AUC-ROC, and AUC-AP (Average Precision). Their formulas/descriptions are given in Table VI. However, a more dedicated analysis with new evaluation metrics is needed for further performance examination because anomaly detection has different requirements in different applications [bozarth2020toward, DBLP:conf/cybersecpods/ElmrabitZL020, DBLP:conf/nss/EnglyLM20], e.g., regarding false negatives and false positives. For instance, network intrusion prevention systems are more sensitive to false negative errors, while false positive errors are considered relatively harmless, because any risky connection should be shut down to prevent information leaks. By contrast, other applications concentrate more on false positives; e.g., in the auditing domain, companies often set a budget for an auditor to look at flagged anomalies, and they want high precision/a small false positive rate so that the auditor's time can be best used. Hence, when evaluating detection performance, we suggest reviewing the specific requirements of the application domain for fair and suitable comparisons.
Evaluation Metric | Formula/Description
---|---
Accuracy | (TP + TN) / (TP + TN + FP + FN)
Precision | TP / (TP + FP)
Recall | TP / (TP + FN)
F1 Score | 2 · Precision · Recall / (Precision + Recall)
AUC-ROC | The Area Under the ROC Curve
AUC-AP | The Area Under the Precision-Recall Curve
* TP: True Positives, TN: True Negatives, FP: False Positives, FN: False Negatives.
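For completeness, all of these metrics can be computed directly with scikit-learn; the sketch below assumes binary ground-truth labels y_true and either hard predictions y_pred or continuous anomaly scores y_score (the toy values are illustrative only).

from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, average_precision_score)

y_true  = [0, 0, 1, 1, 0, 1, 0, 0]                    # 1 = anomaly
y_pred  = [0, 0, 1, 0, 0, 1, 1, 0]                    # hard predictions
y_score = [0.1, 0.2, 0.9, 0.4, 0.3, 0.8, 0.7, 0.1]    # anomaly scores

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
print("AUC-ROC  :", roc_auc_score(y_true, y_score))
print("AUC-AP   :", average_precision_score(y_true, y_score))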
11 Future Directions
So far, we have reviewed the contemporary deep learning techniques devoted to graph anomaly detection. An apparent observation from our survey is that many compound challenges remain, imposed by the complexity of anomaly detection, graph data, and the immaturity of deep learning techniques for graph data mining. Another observation is that deep learning techniques in graph anomaly detection are still confined to a relatively small number of studies, and most of these focus on anomalous node detection; as a gauge, simply compare the lengths of Tables II and III. Edge, sub-graph, and graph-level anomaly detection have clearly received much less attention. To bridge the gaps and push forward future work, we have identified 12 directions of future research for graph anomaly detection with deep learning.
11.1 Anomalous Edge, Sub-graph, and Graph Detection
In real-world graphs, anomalies also appear as unusual relationships between objects, sub-structures formed by abnormal groups, or abnormal graphs, which are known as anomalous edges, sub-graphs, and graphs, respectively. As indicated in our review, there is a huge gap between the existing anomalous edge/sub-graph/graph detection techniques and the emerging demand for more advanced solutions in various application domains (e.g., social networks, computer networks, financial networks). When detecting anomalous edges/sub-graphs/graphs, the proposed methods should be capable of leveraging the rich information contained in graphs to find clues and characteristics that distinguish normal objects from anomalies in specific applications. Typically, this involves extracting edge/sub-graph/graph-level features, modeling the patterns of these features, and measuring the abnormalities accordingly. However, current deep learning based graph anomaly detection techniques devote very little effort in this regard.
Opportunities: We believe more research effort can be devoted to anomalous edge, sub-graph, and graph detection given their significance in real-world applications. A possible path toward closing this gap might be to first consider the application domain and exploit domain knowledge to find complementary clues as a basis for these problems. Then, motivated by recent advances in deep learning for edge, sub-graph, and graph-level representation learning [DBLP:journals/ijdsa/XuWCY20, DBLP:conf/nips/AlsentzerFLZ20], extensive work can be done to learn an anomaly-aware embedding space in which the abnormal patterns of anomalies can be extracted. Although this direction seems quite straightforward, the true challenge lies in the specific application domains. Hence, domain knowledge, anomalous pattern recognition, and anomaly-aware deep learning techniques should be employed simultaneously.
11.2 Anomaly Detection in Dynamic Graphs
Dynamic graphs provide powerful machinery with which to capture the evolving relationships between real objects and their attributes. Their ever-changing structure and attribute information inherently make anomaly detection very challenging in these scenarios, leading to two primary concerns for the task. The first is to consider the spatial and temporal information contained in each graph snapshot at different time stamps, and the second is to explore the evolving patterns of nodes, edges, sub-graphs and graphs, as well as their interaction with the node/edge attributes over time. When these challenges have been tackled with mature solutions, detection techniques will achieve better results.
Opportunities: From our observations, most deep learning based dynamic graph anomaly detection techniques are built on DeepWalk [perozzi2014deepwalk], GCN [kipf2016gcn], or other deep models originally designed for static graphs. This means other information, such as evolving patterns in attributes [wang2019time, zhang2018salient], is not adequately used in the detection task. We therefore identify the following directions for future studies to target.
• Using dynamic graph mining tools. As a popular research topic, deep learning for dynamic graph data mining [DBLP:conf/wsdm/SankarWGZY20, DBLP:journals/jmlr/KazemiGJKSFP20] has shown its effectiveness in supporting dynamic graph analysis, such as node clustering and graph classification [cui2018survey]. More future works can be foreseen that adopt these techniques for anomaly detection.
• Deriving solid evidence for anomaly detection. The rich structural, attribute, and temporal information in dynamic graphs is a valuable resource for identifying anomalies. Apart from the indicators widely used in current works, such as bursts of connections between node pairs or suddenly vanishing connections, we suggest exploring structural and attribute changes in depth. From such studies, we may derive additional information to enhance detection performance, such as the appearance of abnormal attributes.
• Handling complex dynamics. Real-world networks always exhibit changes in both the network structure and node attributes, but only very few studies address this circumstance; most state-of-the-art methods only consider changes in one of these aspects. Although this 'double' scenario is extremely complex and detecting anomalies in this kind of dynamic graph is very challenging, it is worth studying because these graphs are highly reflective of real network data.
11.3 Anomaly Detection in Heterogeneous Graphs
Heterogeneous graphs are a specific type of graph that contain diverse types of nodes and edges. For instance, Twitter can be intuitively modeled as a heterogeneous graph comprised of tweets, users, words, etc.
Opportunities: To use the complex relationships between different types of nodes in heterogeneous graphs for anomaly detection, representative works, such as HGATRD [huang2020heterogeneous], GCAN [lu2020gcan] and GLAN [yuan2019jointly], typically decompose a heterogeneous graph into individual graphs according to meta-paths, e.g., one with tweets and users, and another with tweets and words. They then use D(G)NNs to learn the embeddings for graph anomaly detection. Such a decomposition inherently overlooks the direct inter-relations among diverse types of nodes/edges and degrades the effectiveness of the embeddings. A possible solution is to reveal the complex relations between different types of nodes and edges and encode them into a unified representation for boosted detection performance.
11.4 Anomaly Detection in Large-scale Graphs
The scalability of methods to high-dimensional and large-scale data is an ongoing and significant challenge for anomaly detection techniques. In the face of large-scale networks, such as Facebook and Twitter, which contain billions of users and friendship links, the size of the data in terms of both graph size and the number of node attributes is extremely large. However, most existing works lack the ability to detect anomalies in such large-scale data because they are transductive models and need to take the whole graph as input for further analysis. Computation time and memory cost increase dramatically as the network scales up, and this stops existing techniques from being used on large-scale networks.
Opportunities: Accordingly, there is a need for scalable graph anomaly detection techniques. One possible approach would be an inductive learning scheme that first trains a detection model on part of the whole graph and then applies the model to detect anomalies in the unseen data. As some inductive learning models, such as GraphSAGE [GraphSAGE], have shown their effectiveness on link prediction and node classification in large-scale graphs, this approach is expected to provide a basis for graph anomaly detection in large-scale graphs and similar techniques can be investigated in the future.
11.5 Multi-view Graph Anomaly Detection
In real-world networks, objects might form different kinds of relationships with others (e.g., a user's followership and friendship on Twitter). Moreover, their attribute information might be collected from different sources, such as a user's profile and historical posts. This results in two types of multi-view graphs: 1) multi-graphs that contain more than one type of edge between two nodes [DBLP:conf/aaai/KhanB19, fan2020one2multi]; and 2) multi-attributed-view graphs that store node attributes in different attributed views [DBLP:conf/ijcai/ChengWTXG20, peng2020deep, zhang2016identifying].
Opportunities: These multiple views allow us to analyze real objects' characteristics from different perspectives. Each view provides complementary information to the other views, and the views might have different significance for anomaly detection. For instance, anomalies might be indistinguishable in one view but obviously divergent from the majority in another. There is a variety of work in data mining on multi-view learning [xiao2015temporal, gujral2020beyond]. However, work that can accommodate multi-view graphs along with multi-view attributes on nodes for anomaly detection purposes is nascent. Moreover, the rich information contained in multiple views and the inconsistency among them are overlooked in these works. To this end, we believe more research effort in this direction is needed. Digesting the relationships between views will be vital to their success, as two views might provide contrary/supplementary information for anomaly detection.
11.6 Camouflaged/Adversarial Anomaly Detection
The easy accessibility of online platforms has made them convenient targets for fraudsters, attackers and other malevolent agents to carry out malicious activities. Although various anomaly detection systems have been deployed to protect benign objects, anomalies can still conceal themselves to evade detection [shah2014spotting]. Known as camouflaged anomalies, these entities typically disguise themselves as regular objects. If the detection techniques are not robust against such cases, i.e., if they cannot quickly and effectively adapt to the evolving behavior of evasion-seeking attackers, the anomalies are simply left to cause their damage.
Opportunities: In the face of camouflage, the boundary between anomalies and regular objects is blurred, making anomalies much harder to identify. We believe extensive effort should be placed on detecting these anomalies because, as yet, very few studies have looked at handling camouflaged anomalies in graphs [CARE-GNN, hooi2016fraudar, hooi2017graph]. To fill this gap, one major direction might be to jointly analyze the attributes, the co-relations, such as the triadic, tetradic, or higher-order relationships between objects in hypergraphs [chen2020multi, guzzo2017malevolent, sun2021heterogeneous, silva2008hypergraph], and the other information comprised in graphs. In this way, anomalies that only camouflage their local structures or attributes can be identified effectively. Enhancing existing techniques might be another direction. This involves incorporating additional detection mechanisms or function blocks particularly designed for distinguishing camouflaged anomalies into existing detection techniques. Consequently, such techniques would bridge most existing works and camouflaged anomaly detection.
11.7 Imbalanced Graph Anomaly Detection
Anomalies are rare, which means anomaly detection always coexists with class imbalance in the training data. As deep learning models rely heavily on the training data, such imbalance poses great challenges to graph anomaly detection, and this remains a significant obstacle to deep learning techniques. Typically, imbalanced class distributions will degrade a detection technique's ability to capture the differences between anomalies and non-anomalies. It might even cause over-fitting on the anomalous class because there are too few anomalies in the data. If the detection model overlooks this critical fact and is trained improperly, detection performance will be sub-optimal.
Opportunities: In fact, class imbalance has been widely explored in various research areas [zhangge1, ding2021cross]. Advances such as under-sampling the majority class or modifying the algorithms shed important light on solving training problems with imbalanced data. Yet, contemporary graph anomaly detection methods rarely incorporate these techniques. For more effective detection techniques, biased models that pay more attention to anomalies, for instance by imposing additional training losses on misclassified anomalies, would be a possible direction to circumvent the problem. Moreover, when adopting graph neural networks that aggregate neighboring information into the target node, like GCN or GraphSAGE, over-smoothing between the features of connected nodes should be prevented so that the distinguishable features of the anomalies can be preserved to support anomaly detection.
11.8 Multi-task Anomaly Detection
Graph anomaly detection has close relations with other graph mining tasks, including community detection [ijcai2020-693], node classification [tang2016node], and link prediction [gao2011temporal]. As a concrete example, when detecting community anomalies, community detection techniques are usually used to extract the community structures prior to anomaly detection. Meanwhile, the anomaly detection results can be used to optimize the community structure. Such mutually beneficial collaborations between anomaly detection and other tasks inherently suggest an opportunity for multi-task learning that handles diverse tasks simultaneously and shares information among tasks.
Opportunities: Multi-task learning provides effective machinery with which to incorporate associated tasks [DBLP:conf/aaai/SanhWR19, DBLP:conf/aaai/HesselSE0SH19]. Its utmost advantage is that the training signal from another task could yield complementary information for distinguishing anomalies from non-anomalies, resulting in enhanced detection performance. However, very few attempts focus on this at present. Beyond current works, such as [GraphRfi], which jointly performs anomalous node detection and personalized recommendation, explorations into combining other learning tasks with graph anomaly detection are likely to emerge as a fruitful future direction.
11.9 Graph Anomaly Interpretability
The interpretability of anomaly detection techniques is vital to the subsequent anomaly handling process. When applying these techniques to real applications, such as financial and insurance systems, it is essential to provide explainable and lawful evidence to support the detection results. However, most of the existing works lack the ability to provide such evidence. To identify the anomalies, the most commonly used metrics are top-k rankings and simple anomaly scoring functions. These metrics are flexible enough to label objects as being either an anomaly or not an anomaly, but they cannot derive solid explanations. Moreover, as deep learning techniques have also been criticized for their low interpretability, future works on graph anomaly detection with deep learning should pay much more attention to this [pang2020self].
Opportunities: To bridge this gap, integrating specially designed interpretation algorithms or mechanisms [DBLP:conf/nips/Sanchez-Lengeling20, attribution1] into the detection framework would be a possible solution, noting that this would inherently induce a higher computational cost. Future works should therefore balance anomaly detection performance, interpretability, and computational cost. Visualization-based approaches, e.g., dashboards and charts, might also be feasible for showing the distinction between anomalies and non-anomalies in a human-friendly manner. Further research in this direction will be successful if interpretable visualization results can be given [hohman2019s].
11.10 Graph Anomaly Identification Strategies
Amongst existing unsupervised graph anomaly detection techniques, anomalies are mainly identified based on residual analysis [peng2018anomalous, li2017radar], reconstruction loss [fan2020anomalydae], distance-based statistics [yu2018netwalk], density-based statistics [li2019specae], graph scan statistics [kulldorff1997spatial, neill2005detection, sharpnack2013near, berk1979goodness], and one-class classification [wang2020ocgnn]. The underlying intuition of these identification strategies is that anomalies have data patterns inconsistent with those of regular objects and will therefore: 1) introduce more residual errors or be harder to reconstruct; or 2) be located in low-density areas or far away from the majority class in an anomaly-aware feature space. Effort toward designing novel loss functions for GNN-based anomaly detection is currently quite limited [GAL].
Opportunities: Although these strategies can capture the deviating data patterns of anomalies, they also have different limitations. Specifically, the residual analysis, one-class classification, and reconstruction loss strategies are sensitive to noisy training data: noisy nodes, edges, or sub-graphs also exhibit large residuals, large distances to the origin/hypersphere center, and high reconstruction losses. Meanwhile, the distance-based and density-based strategies can only be applied when anomalies and non-anomalies are well separated in the lower-dimensional space, and detection performance degrades dramatically if the gap between anomalies and non-anomalies is not that evident. This calls for extensive future effort to break these limitations and explore new anomaly identification strategies.
11.11 Systematic Benchmarking
Systematic benchmarking is key to evaluating the performance of graph anomaly detection techniques. As indicated in our analysis in Section 10.4, recent studies are constantly drawing attention to more comprehensive and effective benchmarking [bozarth2020toward, DBLP:conf/cybersecpods/ElmrabitZL020, DBLP:conf/nss/EnglyLM20, zhao2020using]. Typically, a benchmarking framework consists of benchmark datasets, baseline methods, evaluation metrics, and further analysis tools. When evaluating a technique's performance against other baselines, the evaluation datasets and metrics become very important because the performance of each model may vary depending on the setting. The shortage of public datasets and publicly available baseline methods also imposes great challenges for effective evaluation. Although one of the aims of our survey is to provide extensive materials for this purpose, such as open-sourced implementations, datasets, and evaluation metrics, this work can only serve as a basis for future investigations into systematic benchmarking. We invite more effort from the anomaly detection community toward this important cause. Certainly, rigorous attention to designing better benchmarking frameworks would help to disclose the advances and shortcomings of various detection techniques and essentially track an unbiased and accurate progress record for this field.
11.12 Unified Anomaly Detection Framework
A graph anomaly can be categorized as an anomalous node, edge, or sub-graph in a single graph, or an anomalous graph in a graph database. These anomalies usually coexist in real-world datasets. For instance, individual fraudsters, abnormal relationships, and fraud groups exist concurrently in an online social network, as shown in Fig. 1. Moreover, there may be different ways to define anomalies of a certain type, such as community outliers versus anomalous communities, or attribute-based versus structural anomalies. When deploying detection techniques in real applications, it is expected that all types of anomalies can be identified while consuming the least resources and time. A straightforward approach would be to integrate independent anomalous node, edge, and sub-graph detection techniques. Although this is convenient to apply to relatively small networks, its high computational cost will surely prevent the approach from scaling to large networks, such as Facebook and Twitter, because the same graph data has to be loaded and processed more than once by different techniques.
Opportunities: Unified frameworks that can detect diverse types of anomalies together [DBLP:conf/bigdataconf/BabaieCAY14, DBLP:conf/issre/LiCJHY20] may provide feasible solutions to bridge this gap. To build such frameworks, one possible direction is to capture all the information needed by the different detection techniques simultaneously so that these techniques can be applied. The idea seems straightforward, but in deep learning, designing neural network layers and learning strategies that can fulfill this need will require extensive effort.
12 Conclusion
Due to the complex relationships between real-world objects and recent advances in deep learning, especially graph neural networks, graph anomaly detection with deep learning is currently at the forefront of anomaly detection. To the best of our knowledge, this is the first survey to present a comprehensive review dedicated to graph anomaly detection with modern deep learning techniques. Specifically, we have reviewed and categorized contemporary deep learning techniques according to the types of graph anomalies they can detect, including: (1) anomalous node detection; (2) anomalous edge detection; (3) anomalous sub-graph detection; and, finally, (4) anomalous graph detection. Clear summarizations and comparisons between different works are given to offer a complete and thorough picture of the current work and progress of graph anomaly detection as a field.
Moreover, to push forward future research in this area, we have provided a basis for systematic benchmarking by compiling a wide range of commonly used datasets, open-sourced implementations, and synthetic dataset generation techniques. We further highlight 12 potential directions for future work based on the survey results. It is our firm belief that graph anomaly detection with deep learning is indeed more than a burst of temporary interest, and numerous applications from diverse domains will surely benefit from it for years to come.
References
- [1] F. E. Grubbs, “Procedures for detecting outlying observations in samples,” Technometrics, vol. 11, no. 1, pp. 1–21, 1969.
- [2] B. Hooi, N. Shah, A. Beutel, S. Günnemann, L. Akoglu, M. Kumar, D. Makhija, and C. Faloutsos, “Birdnest: Bayesian inference for ratings-fraud detection,” in Proc. SIAM Int. Conf. Data Mining, 2016, pp. 495–503.
- [3] S. Ahmed, K. Hinkelmann, and F. Corradini, “Combining machine learning with knowledge engineering to detect fake news in social networks-a survey,” in Proc. AAAI Conf. Artif. Intell., vol. 12, 2019, p. 8.
- [4] V. Nguyen, K. Sugiyama, P. Nakov, and M. Kan, “Fang: Leveraging social context for fake news detection using graph representation,” in Proc. ACM 29th Int. Conf. Inf. Knowl. Manage., 2020, pp. 1165–1174.
- [5] N. T. Tam, M. Weidlich, B. Zheng, H. Yin, N. Q. V. Hung, and B. Stantic, “From anomaly detection to rumour detection using data streams of social platforms,” VLDB J., vol. 12, no. 9, pp. 1016–1029, 2019.
- [6] R. Yu, H. Qiu, Z. Wen, C. Lin, and Y. Liu, “A survey on social media anomaly detection,” SIGKDD Explor., vol. 18, no. 1, pp. 1–14, 2016.
- [7] A. Benamira, B. Devillers, E. Lesot, A. K. Ray, M. Saadi, and F. D. Malliaros, “Semi-supervised learning and graph neural networks for fake news detection,” in Int. Conf. Adv. Social Netw. Anal. Mining, 2019, pp. 568–569.
- [8] S. Kumar, B. Hooi, D. Makhija, M. Kumar, C. Faloutsos, and V. S. Subrahmanian, “Rev2: Fraudulent user prediction in rating platforms,” in Proc. ACM Int. Conf. Web Search Data Mining, 2018, pp. 333–341.
- [9] P. Bogdanov, C. Faloutsos, M. Mongiovì, E. E. Papalexakis, R. Ranca, and A. K. Singh, “Netspot: Spotting significant anomalous regions on dynamic networks,” in Proc. SIAM Int. Conf. Data Mining, 2013, pp. 28–36.
- [10] B. A. Miller, N. Arcolano, and N. T. Bliss, “Efficient anomaly detection in dynamic, attributed graphs: Emerging phenomena and big data,” in Proc. IEEE Int. Conf. Intell. Secur. Inf., 2013, pp. 179–184.
- [11] K. Miao, X. Shi, and W. Zhang, “Attack signal estimation for intrusion detection in industrial control system,” Comput. Secur., vol. 96, p. 101926, 2020.
- [12] B. Perozzi and L. Akoglu, “Scalable anomaly ranking of attributed neighborhoods,” in Proc. SIAM Int. Conf. Data Mining, 2016, pp. 207–215.
- [13] X. Luo, J. Wu, A. Beheshti, J. Yang, X. Zhang, Y. Wang, and S. Xue, “Comga: Community-aware attributed graph anomaly detection,” in Proc. ACM Int. Conf. Web Search Data Mining, 2022.
- [14] B. Branco, P. Abreu, A. S. Gomes, M. S. C. Almeida, J. T. Ascensão, and P. Bizarro, “Interleaved sequence rnns for fraud detection,” in Proc. ACM SIGKDD 26th Int. Conf. Knowl. Discov. Data Mining, 2020, pp. 3101–3109.
- [15] C. Liu, Q. Zhong, X. Ao, L. Sun, W. Lin, J. Feng, Q. He, and J. Tang, “Fraud transactions detection via behavior tree with local intention calibration,” in Proc. ACM SIGKDD 26th Int. Conf. Knowl. Discov. Data Mining, 2020, pp. 3035–3043.
- [16] Q. Guo, Z. Li, B. An, P. Hui, J. Huang, L. Zhang, and M. Zhao, “Securing the deep fraud detector in large-scale e-commerce platform via adversarial machine learning approach,” in Proc. Int. Conf. World Wide Web, 2019, pp. 616–626.
- [17] B. Iglewicz and D. C. Hoaglin, How to detect and handle outliers. Asq Press, 1993, vol. 16.
- [18] V. Chandola, A. Banerjee, and V. Kumar, “Anomaly detection: A survey,” ACM Comput. Surv., vol. 41, no. 3, pp. 15:1–15:58, 2009.
- [19] T. Pourhabibi, O. Kok-Leong, B. H. Kam, and B. Y. Ling, “Fraud detection: A systematic literature review of graph-based anomaly detection approaches,” Decis. Support Syst., p. 113303, 2020.
- [20] X. Sun, C. Zhang, G. Li, D. Sun, F. Ren, A. Y. Zomaya, and R. Ranjan, “Detecting users’ anomalous emotion using social media for business intelligence,” J. Comput. Sci., vol. 25, pp. 193–200, 2018.
- [21] J. Wu, Z. Hong, S. Pan, X. Zhu, Z. Cai, and C. Zhang, “Multi-graph-view learning for graph classification,” in Proc. IEEE Int. Conf. Data Mining, 2014, pp. 590–599.
- [22] Z. Wang and C. Lan, “Towards a hierarchical bayesian model of multi-view anomaly detection,” in Proc. 29th Int. Joint Conf. Artif. Intell., 2020, pp. 2420–2426.
- [23] G. Pang, L. Cao, L. Chen, and H. Liu, “Learning representations of ultrahigh-dimensional data for random distance-based outlier detection,” in Proc. ACM SIGKDD 24th Int. Conf. Knowl. Discov. Data Mining, 2018, pp. 2041–2050.
- [24] G. Pang, C. Shen, L. Cao, and A. V. D. Hengel, “Deep learning for anomaly detection,” ACM Comput. Surv., vol. 54, no. 2, p. 1–38, 2021.
- [25] L. Akoglu, H. Tong, and D. Koutra, “Graph based anomaly detection and description: A survey,” Data Min. Knowl. Discovery, vol. 29, no. 3, pp. 626–688, 2015.
- [26] B. Hooi, K. Shin, H. A. Song, A. Beutel, N. Shah, and C. Faloutsos, “Graph-based fraud detection in the face of camouflage,” ACM Trans. Knowl. Discovery Data, vol. 11, no. 4, pp. 44:1–44:26, 2017.
- [27] Y. Dou, Z. Liu, L. Sun, Y. Deng, H. Peng, and P. S. Yu, “Enhancing graph neural network-based fraud detectors against camouflaged fraudsters,” in Proc. ACM Int. Conf. Inf. Knowl. Manage., 2020, pp. 315–324.
- [28] S. Pandit, D. H. Chau, S. Wang, and C. Faloutsos, “Netprobe: fast and scalable system for fraud detection in online auction networks,” in Proc. 16th Int. Conf. World Wide Web, 2007, pp. 201–210.
- [29] K. Shin, B. Hooi, J. Kim, and C. Faloutsos, “Densealert: Incremental dense-subtensor detection in tensor streams,” in Proc. ACM SIGKDD 23rd Int. Conf. Knowl. Discov. Data Mining, 2017, pp. 1057–1066.
- [30] Z. Liu, J. X. Yu, Y. Ke, X. Lin, and L. Chen, “Spotting significant changing subgraphs in evolving graphs,” in Proc. IEEE 8th Int. Conf. Data Mining, 2008, pp. 917–922.
- [31] J. Wu, Z. Hong, S. Pan, X. Zhu, C. Zhang, and Z. Cai, “Multi-graph learning with positive and unlabeled bags,” in Proc. SIAM Int. Conf. Data Mining, 2014, pp. 217–225.
- [32] L. Gao, J. Wu, C. Zhou, and Y. Hu, “Collaborative dynamic sparse topic regression with user profile evolution for item recommendation,” in Proc. AAAI Conf. Artif. Intell., vol. 31, no. 1, 2017.
- [33] C. C. Aggarwal, Y. Zhao, and S. Y. Philip, “Outlier detection in graph streams,” in Proc. IEEE 27th Int. Conf. Data Eng., 2011, pp. 399–409.
- [34] J. Wu, S. Pan, X. Zhu, C. Zhang, and P. S. Yu, “Multiple structure-view learning for graph classification,” IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 7, pp. 3236–3251, 2018.
- [35] H. Wang, C. Zhou, X. Chen, J. Wu, S. Pan, and J. Wang, “Graph stochastic neural networks for semi-supervised learning,” in Proc. Int. Conf. Neural Inf. Process. Syst., 2020.
- [36] Z. Chen, W. Hendrix, and N. F. Samatova, “Community-based anomaly detection in evolutionary networks,” J. Intell. Inf. Syst., vol. 39, no. 1, pp. 59–85, 2012.
- [37] F. Jie, C. Wang, F. Chen, L. Li, and X. Wu, “Block-structured optimization for anomalous pattern detection in interdependent networks,” in Proc. IEEE Int. Conf. Data Mining, 2019, pp. 1138–1143.
- [38] L. Akoglu, M. McGlohon, and C. Faloutsos, “Oddball: Spotting anomalies in weighted graphs,” in Pacific-Asia Conf. Knowl. Discov. Data Mining. Springer, 2010, pp. 410–421.
- [39] D. Eswaran, C. Faloutsos, S. Guha, and N. Mishra, “Spotlight: Detecting anomalies in streaming graphs,” in Proc. ACM SIGKDD 24th Int. Conf. Knowl. Discov. Data Mining, 2018, pp. 1378–1386.
- [40] N. Li, H. Sun, K. C. Chipman, J. George, and X. Yan, “A probabilistic approach to uncovering attributed graph anomalies,” in Proc. SIAM Int. Conf. Data Mining, 2014, pp. 82–90.
- [41] J. Li, H. Dani, X. Hu, and H. Liu, “Radar: Residual analysis for anomaly detection in attributed networks,” in Proc. 26th Int. Joint Conf. Artif. Intell., 2017, pp. 2152–2158.
- [42] M. W. Mahoney and P. Drineas, “CUR matrix decompositions for improved data analysis,” Proc. Natl. Acad. Sci. USA, vol. 106, no. 3, pp. 697–702, 2009.
- [43] S. M. Erfani, S. Rajasegarar, S. Karunasekera, and C. Leckie, “High-dimensional and large-scale anomaly detection using a linear one-class SVM with deep learning,” Pattern Recognit., vol. 58, pp. 121–134, 2016.
- [44] S. Thudumu, P. Branch, J. Jin, and J. J. Singh, “A comprehensive survey of anomaly detection techniques for high dimensional big data,” J. Big Data, vol. 7, no. 1, p. 42, 2020.
- [45] A. Boukerche, L. Zheng, and O. Alfandi, “Outlier detection: Methods, models, and classification,” ACM Comput. Surv., vol. 53, no. 3, pp. 55:1–55:37, 2020.
- [46] S. Bulusu, B. Kailkhura, B. Li, P. K. Varshney, and D. Song, “Anomalous instance detection in deep learning: A survey,” arXiv preprint arXiv:2003.06979, 2020.
- [47] R. Chalapathy and S. Chawla, “Deep learning for anomaly detection: A survey,” arXiv preprint arXiv:1901.03407, 2019.
- [48] S. Ranshous, S. Shen, D. Koutra, S. Harenberg, C. Faloutsos, and N. F. Samatova, “Anomaly detection in dynamic networks: A survey,” Wiley Interdiscip. Rev. Comput. Stat., vol. 7, no. 3, pp. 223–247, 2015.
- [49] D. J. D’Souza and K. U. K. Reddy, “Anomaly detection for big data using efficient techniques: A review,” AIDE, pp. 1067–1080, 2021.
- [50] S. Eltanbouly, M. Bashendy, N. AlNaimi, Z. Chkirbene, and A. Erbad, “Machine learning techniques for network anomaly detection: A survey,” in Int. Conf. Inform. IoT Enabling Technol., 2020, pp. 156–162.
- [51] G. Fernandes, J. J. Rodrigues, L. F. Carvalho, J. F. Al-Muhtadi, and M. L. Proença, “A comprehensive survey on network anomaly detection,” Telecommun. Syst., vol. 70, no. 3, pp. 447–489, 2019.
- [52] D. Kwon, H. Kim, J. Kim, S. C. Suh, I. Kim, and K. J. Kim, “A survey of deep learning-based network anomaly detection,” Clust. Comput., vol. 22, pp. 949–961, 2019.
- [53] P. Gogoi, D. K. Bhattacharyya, B. Borah, and J. K. Kalita, “A survey of outlier detection methods in network anomaly identification,” Comput. J., vol. 54, no. 4, pp. 570–588, 2011.
- [54] D. Savage, X. Zhang, X. Yu, P. Chou, and Q. Wang, “Anomaly detection in online social networks,” Soc. Networks, vol. 39, pp. 62–70, 2014.
- [55] R. Wang, K. Nie, T. Wang, Y. Yang, and B. Long, “Deep learning for anomaly detection,” in Proc. ACM 13th Int. Conf. Web Search Data Mining, 2020, pp. 894–896.
- [56] Y. Zhang, J. Wu, Z. Cai, B. Du, and S. Y. Philip, “An unsupervised parameter learning model for rvfl neural network,” Neural Networks, vol. 112, pp. 85–97, 2019.
- [57] Q. Wang, W. Zhao, J. Yang, J. Wu, C. Zhou, and Q. Xing, “Atne-trust: Attributed trust network embedding for trust prediction in online social networks,” in Proc. IEEE Int. Conf. Data Mining, 2020, pp. 601–610.
- [58] F. Liu, S. Xue, J. Wu, C. Zhou, W. Hu, C. Paris, S. Nepal, J. Yang, and P. S. Yu, “Deep learning for community detection: progress, challenges and opportunities,” in Proc. 29th Int. Joint Conf. Artif. Intell., 2020, pp. 4981–4987.
- [59] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu, “A comprehensive survey on graph neural networks,” IEEE Trans. Neural Networks Learn. Syst., vol. 32, no. 1, pp. 4–24, 2021.
- [60] P. Cui, X. Wang, J. Pei, and W. Zhu, “A survey on network embedding,” IEEE Trans. Knowl. Data Eng., vol. 31, no. 5, pp. 833–852, 2019.
- [61] S. Zhu, S. Pan, C. Zhou, J. Wu, Y. Cao, and B. Wang, “Graph geometry interaction learning,” in Proc. Int. Conf. Neural Inf. Process. Syst., 2020.
- [62] X. Su, S. Xue, F. Liu, J. Wu, J. Yang, C. Zhou, W. Hu, C. Paris, S. Nepal, D. Jin et al., “A comprehensive survey on community detection with deep learning,” arXiv preprint arXiv:2105.12584, 2021.
- [63] C. C. Noble and D. J. Cook, “Graph-based anomaly detection,” in Proc. ACM SIGKDD 9th Int. Conf. Knowl. Discov. Data Mining, 2003, pp. 631–636.
- [64] X. Teng, M. Yan, A. M. Ertugrul, and Y. Lin, “Deep into hypersphere: Robust and unsupervised anomaly discovery in dynamic networks,” in Proc. 27th Int. Joint Conf. Artif. Intell., 2018, pp. 2724–2730.
- [65] N. Shah, A. Beutel, B. Hooi, L. Akoglu, S. Günnemann, D. Makhija, M. Kumar, and C. Faloutsos, “Edgecentric: Anomaly detection in edge-attributed networks,” in Proc. IEEE 16th Int. Conf. Data Mining, 2016, pp. 327–334.
- [66] B. Wang, N. Z. Gong, and H. Fu, “Gang: Detecting fraudulent users in online social networks via guilt-by-association on directed graphs,” in Proc. IEEE Int. Conf. Data Mining, 2017, pp. 465–474.
- [67] T. Zhao, C. Deng, K. Yu, T. Jiang, D. Wang, and M. Jiang, “Error-bounded graph anomaly loss for gnns,” in Proc. ACM Int. Conf. Inf. Knowl. Manage., 2020, pp. 1873–1882.
- [68] Q. Zhang and S. Zhu, “Visual interpretability for deep learning: a survey,” Frontiers Inf. Technol. Electron. Eng., vol. 19, no. 1, pp. 27–39, 2018.
- [69] L. Akoglu, “Anomaly mining–past, present and future,” arXiv preprint arXiv:2105.10077, 2021.
- [70] Y. Zhao, R. A. Rossi, and L. Akoglu, “Automating outlier detection via meta-learning,” arXiv preprint arXiv:2009.10606, 2020.
- [71] L. Ruff, J. R. Kauffmann, R. A. Vandermeulen, G. Montavon, W. Samek, M. Kloft, T. G. Dietterich, and K.-R. Müller, “A unifying review of deep and shallow anomaly detection,” Proc. IEEE, 2021.
- [72] A. Bojchevski and S. Günnemann, “Bayesian robust attributed graph clustering: Joint learning of partial anomalies and group structure,” in Proc. AAAI 32nd Conf. Artif. Intell., 2018.
- [73] M. Zhu and H. Zhu, “Mixedad: A scalable algorithm for detecting mixed anomalies in attributed graphs,” in Proc. AAAI Conf. Artif. Intell., vol. 34, no. 01, 2020, pp. 1274–1281.
- [74] B. Perozzi, L. Akoglu, P. I. Sánchez, and E. Müller, “Focused clustering and outlier detection in large attributed graphs,” in Proc. ACM SIGKDD 20th Int. Conf. Knowl. Discov. Data Mining, 2014, pp. 1346–1355.
- [75] Q. Ding, N. Katenka, P. Barford, E. D. Kolaczyk, and M. Crovella, “Intrusion as (anti)social communication: characterization and detection,” in Proc. ACM SIGKDD 18th Int. Conf. Knowl. Discov. Data Mining, 2012, pp. 886–894.
- [76] B. Hooi, H. A. Song, A. Beutel, N. Shah, K. Shin, and C. Faloutsos, “Fraudar: Bounding graph fraud in the face of camouflage,” in Proc. ACM SIGKDD 22nd Int. Conf. Knowl. Discov. Data Mining, 2016, pp. 895–904.
- [77] R. Hu, C. C. Aggarwal, S. Ma, and J. Huai, “An embedding approach to anomaly detection,” in Proc. IEEE 32nd Int. Conf. Data Eng., 2016, pp. 385–396.
- [78] G. Karypis and V. Kumar, “A fast and high quality multilevel scheme for partitioning irregular graphs,” J. Sci. Comput., vol. 20, no. 1, pp. 359–392, 1998.
- [79] B. Perozzi, R. Al-Rfou, and S. Skiena, “Deepwalk: Online learning of social representations,” in Proc. ACM SIGKDD 20th Int. Conf. Knowl. Discov. Data Mining, 2014, pp. 701–710.
- [80] A. Grover and J. Leskovec, “Node2vec: Scalable feature learning for networks,” in Proc. ACM SIGKDD 22nd Int. Conf. Knowl. Discov. Data Mining, 2016, pp. 855–864.
- [81] J. Tang, M. Qu, M. Wang, M. Zhang, J. Yan, and Q. Mei, “Line: Large-scale information network embedding,” in Proc. 24th Int. Conf. World Wide Web, 2015, pp. 1067–1077.
- [82] S. Bandyopadhyay, L. N, S. V. Vivek, and M. N. Murty, “Outlier resistant unsupervised deep architectures for attributed network embedding,” in Proc. ACM 13th Int. Conf. Web Search Data Mining, 2020, pp. 25–33.
- [83] S. Bandyopadhyay, N. Lokesh, and M. N. Murty, “Outlier aware network embedding for attributed networks,” in Proc. AAAI Conf. Artif. Intell., vol. 33, no. 01, 2019, pp. 12–19.
- [84] W. Yu, W. Cheng, C. C. Aggarwal, K. Zhang, H. Chen, and W. Wang, “Netwalk: A flexible deep embedding approach for anomaly detection in dynamic networks,” in Proc. ACM SIGKDD 24th Int. Conf. Knowl. Discov. Data Mining, 2018, pp. 2672–2681.
- [85] L. Cai, Z. Chen, C. Luo, J. Gui, J. Ni, D. Li, and H. Chen, “Structural temporal graph neural networks for anomaly detection in dynamic graphs,” arXiv preprint arXiv:2005.07427, 2020.
- [86] M. M. Breunig, H.-P. Kriegel, R. T. Ng, and J. Sander, “Lof: identifying density-based local outliers,” in Proc. ACM SIGMOD Int. Conf. Manage. Data, 2000, pp. 93–104.
- [87] C. C. Aggarwal and P. S. Yu, “Outlier detection for high dimensional data,” in Proc. ACM SIGMOD Int. Conf. Manage. Data, 2001, pp. 37–46.
- [88] P. Morales, R. S. Caceres, and T. Eliassi-Rad, “Selective network discovery via deep reinforcement learning on embedded spaces,” Appl. Network Sci., vol. 6, no. 1, pp. 1–20, 2021.
- [89] W. L. Hamilton, Z. Ying, and J. Leskovec, “Inductive representation learning on large graphs,” in Proc. 31st Int. Conf. Neural Inf. Process. Syst., 2017, pp. 1024–1034.
- [90] W. L. Hamilton, R. Ying, and J. Leskovec, “Representation learning on graphs: Methods and applications,” IEEE Data Eng. Bull., vol. 40, no. 3, pp. 52–74, 2017.
- [91] P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio, “Graph attention networks,” arXiv:1710.10903, 2017.
- [92] T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” in Proc. Int. Conf. Learn. Represent., 2017.
- [93] K. Ding, J. Li, R. Bhanushali, and H. Liu, “Deep anomaly detection on attributed networks,” in Proc. SIAM Int. Conf. Data Mining, 2019, pp. 594–602.
- [94] Z. Peng, M. Luo, J. Li, L. Xue, and Q. Zheng, “A deep multi-view framework for anomaly detection on attributed networks,” IEEE Trans. Knowl. Data Eng., 2020.
- [95] X.-R. Sheng, D.-C. Zhan, S. Lu, and Y. Jiang, “Multi-view anomaly detection: neighborhood in locality matters,” in Proc. AAAI Conf. Artif. Intell., vol. 33, no. 01, 2019, pp. 4894–4901.
- [96] J. Wu, S. Pan, X. Zhu, and Z. Cai, “Boosting for multi-graph classification,” IEEE Trans. Cybern., vol. 45, no. 3, pp. 416–429, 2015.
- [97] J. Wu, X. Zhu, C. Zhang, and Z. Cai, “Multi-instance multi-graph dual embedding learning,” in Proc. IEEE 13th Int. Conf. Data Mining, 2013, pp. 827–836.
- [98] Y. Li, X. Huang, J. Li, M. Du, and N. Zou, “Specae: Spectral autoencoder for anomaly detection in attributed networks,” in Proc. ACM 28th Int. Conf. Inf. Knowl. Manage., 2019, pp. 2233–2236.
- [99] J. Wang, R. Wen, C. Wu, Y. Huang, and J. Xion, “Fdgars: Fraudster detection via graph convolutional networks in online app review system,” in Proc. Int. Conf. World Wide Web, 2019, pp. 310–316.
- [100] S. Zhang, H. Yin, T. Chen, Q. V. H. Nguyen, Z. Huang, and L. Cui, “Gcn-based user representation learning for unifying robust recommendation and fraudster detection,” in Proc. 43rd Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, 2020, pp. 689–698.
- [101] K. Ding, J. Li, and H. Liu, “Interactive anomaly detection on attributed networks,” in Proc. ACM 12th Int. Conf. Web Search Data Mining, 2019, pp. 357–365.
- [102] J. Langford and T. Zhang, “The epoch-greedy algorithm for multi-armed bandits with side information,” in Proc. Int. Conf. Neural Inf. Process. Syst., 2008, pp. 817–824.
- [103] R. A. Rossi, B. Gallagher, J. Neville, and K. Henderson, “Modeling dynamic behavior in large evolving graphs,” in Proc. ACM 6th Int. Conf. Web Search Data Mining, 2013, pp. 667–676.
- [104] H. Wang, J. Wu, W. Hu, and X. Wu, “Detecting and assessing anomalous evolutionary behaviors of nodes in evolving social networks,” ACM Trans. Knowl. Discovery Data, vol. 13, no. 1, pp. 12:1–12:24, 2019.
- [105] Y. Wang, J. Zhang, S. Guo, H. Yin, C. Li, and H. Chen, “Decoupling representation learning and classification for gnn-based anomaly detection,” in Proc. Int. ACM SIGIR 44th Conf. Res. Develop. Inf. Retrieval, 2021, pp. 1239–1248.
- [106] N. Liu, X. Huang, and X. Hu, “Accelerated local anomaly detection via resolving attributed networks,” in Proc. 26th Int. Joint Conf. Artif. Intell., 2017, pp. 2337–2343.
- [107] Z. Peng, M. Luo, J. Li, H. Liu, and Q. Zheng, “Anomalous: A joint modeling approach for anomaly detection on attributed networks,” in Proc. 27th Int. Joint Conf. Artif. Intell., 2018, pp. 3513–3519.
- [108] L. Wu, X. Hu, F. Morstatter, and H. Liu, “Adaptive spammer detection with sparse group modeling,” in Proc. AAAI Int. Conf. Web Soc. Media, 2017, pp. 319–326.
- [109] Y. Pei, T. Huang, W. van Ipenburg, and M. Pechenizkiy, “Resgcn: Attention-based deep residual modeling for anomaly detection on attributed networks,” Mach. Learn., pp. 1–23, 2021.
- [110] H. Fan, F. Zhang, and Z. Li, “Anomalydae: Dual autoencoder for anomaly detection on attributed networks,” in Proc. IEEE Int. Conf. Acoustics Speech Signal Processing, 2020, pp. 5685–5689.
- [111] D. Wang, Y. Qi, J. Lin, P. Cui, Q. Jia, Z. Wang, Y. Fang, Q. Yu, J. Zhou, and S. Yang, “A semi-supervised graph attentive network for financial fraud detection,” in Proc. IEEE Int. Conf. Data Mining, 2019, pp. 598–607.
- [112] K. Ding, J. Li, N. Agarwal, and H. Liu, “Inductive anomaly detection on attributed networks,” in Proc. 29th Int. Joint Conf. Artif. Intell., 2020, pp. 1288–1294.
- [113] L. Zhang, J. Yuan, Z. Liu, Y. Pei, and L. Wang, “A robust embedding method for anomaly detection on attributed networks,” in Proc. Int. Joint Conf. Neural Netw., 2019, pp. 1–8.
- [114] J. Liang, P. Jacobs, J. Sun, and S. Parthasarathy, “Semi-supervised embedding in attributed networks with outliers,” in Proc. SIAM Int. Conf. Data Mining, 2018, pp. 153–161.
- [115] X. Wang, B. Jin, Y. Du, P. Cui, Y. Tan, and Y. Yang, “One-class graph neural networks for anomaly detection in attributed networks,” Neural Comput. Appl., pp. 1–13, 2021.
- [116] Y. Liu, Z. Li, S. Pan, C. Gong, C. Zhou, and G. Karypis, “Anomaly detection on attributed networks via contrastive self-supervised learning,” IEEE Trans. Neural Networks Learn. Syst., 2021.
- [117] K. Ding, K. Shu, X. Shan, J. Li, and H. Liu, “Cross-domain graph anomaly detection,” IEEE Trans. Neural Networks Learn. Syst., 2021.
- [118] G. Zhang, J. Wu, J. Yang, A. Beheshti, S. Xue, C. Zhou, and Q. Z. Sheng, “Fraudre: Fraud detection dual-resistant to graph inconsistency and imbalance,” in Proc. IEEE Int. Conf. Data Mining, 2021, pp. 66–77.
- [119] K. Ding, Q. Zhou, H. Tong, and H. Liu, “Few-shot network anomaly detection via cross-network meta-learning,” in Proc. Web Conf., 2021, pp. 2448–2456.
- [120] X. Teng, Y. Lin, and X. Wen, “Anomaly detection in dynamic networks using multi-view time-series hypersphere learning,” in Proc. ACM Int. Conf. Inf. Knowl. Manage., 2017, pp. 827–836.
- [121] P. Zheng, S. Yuan, X. Wu, J. Li, and A. Lu, “One-class adversarial nets for fraud detection,” in Proc. AAAI Conf. Artif. Intell., vol. 33, no. 01, 2019, pp. 1286–1293.
- [122] N. Ailon, R. Jaiswal, and C. Monteleoni, “Streaming k-means approximation,” in Proc. Int. Conf. Neural Inf. Process. Syst., vol. 4, 2009, p. 2.
- [123] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio, “Generative adversarial nets,” in Proc. Int. Conf. Neural Inf. Process. Syst., 2014.
- [124] Z. Dai, Z. Yang, F. Yang, W. W. Cohen, and R. Salakhutdinov, “Good semi-supervised learning that requires a bad gan,” in Proc. Int. Conf. Neural Inf. Process. Syst., 2017.
- [125] N. Srivastava, E. Mansimov, and R. Salakhutdinov, “Unsupervised learning of video representations using lstms,” in Int. Conf. Mach. Learn., vol. 37, 2015, pp. 843–852.
- [126] Y.-Y. Chang, P. Li, R. Sosic, M. Afifi, M. Schweighauser, and J. Leskovec, “F-fade: Frequency factorization for anomaly detection in edge streams,” in Proc. ACM 14th Int. Conf. Web Search Data Mining, 2021, pp. 589–597.
- [127] L. Ouyang, Y. Zhang, and Y. Wang, “Unified graph embedding-based anomalous edge detection,” in Proc. Int. Joint Conf. Neural Netw., 2020, pp. 1–8.
- [128] D. Duan, L. Tong, Y. Li, J. Lu, L. Shi, and C. Zhang, “Aane: Anomaly aware network embedding for anomalous link detection,” in Proc. IEEE Int. Conf. Data Mining, 2020, pp. 1002–1007.
- [129] L. Xu, X. Wei, J. Cao, and P. S. Yu, “Icane: Interaction content-aware network embedding via co-embedding of nodes and edges,” Int. J. Data Sci. Anal., vol. 9, no. 4, pp. 401–414, 2020.
- [130] S. Ranshous, S. Harenberg, K. Sharma, and N. F. Samatova, “A scalable approach for outlier detection in edge streams using sketch-based approximations,” in Proc. SIAM Int. Conf. Data Mining, 2016, pp. 189–197.
- [131] L. Zheng, Z. Li, J. Li, Z. Li, and J. Gao, “Addgraph: Anomaly detection in dynamic graph using attention-based temporal gcn,” in Proc. Int. Joint Conf. Artif. Intell., 2019, pp. 4419–4425.
- [132] Q. Cui, S. Wu, Y. Huang, and L. Wang, “A hierarchical contextual attention-based network for sequential recommendation,” Neurocomputing, vol. 358, pp. 141–149, 2019.
- [133] M. Shao, J. Li, F. Chen, and X. Chen, “An efficient framework for detecting evolving anomalous subgraphs in dynamic networks,” in Proc. IEEE Conf. Comput. Commun., 2018, pp. 2258–2266.
- [134] H. Wang, C. Zhou, J. Wu, W. Dang, X. Zhu, and J. Wang, “Deep structure learning for fraud detection,” in Proc. IEEE Int. Conf. Data Mining, 2018, pp. 567–576.
- [135] M. Ester, H. Kriegel, J. Sander, and X. Xu, “A density-based algorithm for discovering clusters in large spatial databases with noise,” in Proc. ACM Int. Conf. Knowl. Discov. Data Mining, 1996, pp. 226–231.
- [136] M. Zheng, C. Zhou, J. Wu, S. Pan, J. Shi, and L. Guo, “Fraudne: A joint embedding approach for fraud detection,” in Proc. Int. Joint Conf. Neural Netw., 2018, pp. 1–8.
- [137] J. Wu, X. Zhu, C. Zhang, and P. S. Yu, “Bag constrained structure pattern mining for multi-graph classification,” IEEE Trans. Knowl. Data Eng., vol. 26, no. 10, pp. 2382–2396, 2014.
- [138] Q. Sun, J. Li, H. Peng, J. Wu, Y. Ning, P. S. Yu, and L. He, “Sugar: Subgraph neural network with reinforcement pooling and self-supervised mutual information mechanism,” in Proc. Int. Conf. World Wide Web, 2021, pp. 2081–2091.
- [139] E. A. Manzoor, S. M. Milajerdi, and L. Akoglu, “Fast memory-efficient anomaly detection in streaming heterogeneous graphs,” in Proc. ACM SIGKDD 22nd Int. Conf. Knowl. Discov. Data Mining, 2016, pp. 1035–1044.
- [140] B. Hooi, L. Akoglu, D. Eswaran, A. Pandey, M. Jereminov, L. Pileggi, and C. Faloutsos, “Changedar: Online localized change detection for sensor data on a graph,” in Proc. ACM 27th Int. Conf. Inf. Knowl. Manage., 2018, pp. 507–516.
- [141] Y. Dou, K. Shu, C. Xia, P. S. Yu, and L. Sun, “User preference-aware fake news detection,” arXiv preprint arXiv:2104.12259, 2021.
- [142] L. Zhao and L. Akoglu, “On using classification datasets to evaluate graph outlier detection: Peculiar observations and new insights,” arXiv preprint arXiv:2012.12931, 2020.
- [143] L. Ruff, R. Vandermeulen, N. Goernitz, L. Deecke, S. A. Siddiqui, A. Binder, E. Müller, and M. Kloft, “Deep one-class classification,” in Int. Conf. Mach. Learn., 2018, pp. 4393–4402.
- [144] A. Narayanan, M. Chandramohan, R. Venkatesan, L. Chen, Y. Liu, and S. Jaiswal, “graph2vec: Learning distributed representations of graphs,” arXiv preprint arXiv:1707.05005, 2017.
- [145] S. Verma and Z.-L. Zhang, “Hunt for the unique, stable, sparse and fast feature learning on graphs,” in Proc. 31st Int. Conf. Neural Inf. Process. Syst., 2017, pp. 87–97.
- [146] G. Zhang, Z. Li, J. Huang, J. Wu, C. Zhou, J. Yang, and J. Gao, “efraudcom: An e-commerce fraud detection system via competitive graph neural networks,” ACM Trans. Inf. Syst., 2021.
- [147] X. Teng, M. Yan, A. M. Ertugrul, and Y. Lin, “Deep into hypersphere: Robust and unsupervised anomaly discovery in dynamic networks,” in Proc. 27th Int. Joint Conf. Artif. Intell., 2018, pp. 2724–2730.
- [148] Y. Wan, Y. Liu, D. Wang, and Y. Wen, “Glad-paw: Graph-based log anomaly detection by position aware weighted graph attention network,” in Pacific-Asia Conf. Knowl. Discov. Data Mining, 2021, pp. 66–77.
- [149] Z. Zhang, J. Jia, B. Wang, and N. Z. Gong, “Backdoor attacks to graph neural networks,” in Proc. 26th ACM Symp. Access Control Models Technol., 2021, pp. 15–26.
- [150] H. Dai, H. Li, T. Tian, X. Huang, L. Wang, J. Zhu, and L. Song, “Adversarial attack on graph structured data,” in Int. Conf. Mach. Learn., 2018, pp. 1115–1124.
- [151] X. Lin, C. Zhou, H. Yang, J. Wu, H. Wang, Y. Cao, and B. Wang, “Exploratory adversarial attacks on graph neural networks,” in Proc. IEEE Int. Conf. Data Mining, 2020, pp. 1136–1141.
- [152] L. Gutiérrez-Gómez, A. Bovet, and J.-C. Delvenne, “Multi-scale anomaly detection on attributed networks,” in Proc. AAAI Conf. Artif. Intell., vol. 34, no. 01, 2020, pp. 678–685.
- [153] H. Nilforoshan and N. Shah, “Slicendice: Mining suspicious multi-attribute entity groups with multi-view graphs,” in Proc. IEEE Int. Conf. Data Sci. Adv. Anal., 2019, pp. 351–363.
- [154] Z. Liu, Y. Dou, P. S. Yu, Y. Deng, and H. Peng, “Alleviating the inconsistency problem of applying graph neural network to fraud detection,” in Proc. 43rd Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, 2020, pp. 1569–1572.
- [155] S. Dhawan, S. C. R. Gangireddy, S. Kumar, and T. Chakraborty, “Spotting collective behaviour of online frauds in customer reviews,” in Proc. Int. Joint Conf. Artif. Intell., 2019, pp. 245–251.
- [156] Y.-J. Lu and C.-T. Li, “Gcan: Graph-aware co-attention networks for explainable fake news detection on social media,” in Proc. Annu. Meet. Assoc. Comput. Linguist. Proc. Conf., 2020, pp. 505–514.
- [157] Q. Huang, J. Yu, J. Wu, and B. Wang, “Heterogeneous graph attention networks for early detection of rumors on twitter,” in Proc. Int. Joint Conf. Neural Netw., 2020, pp. 1–8.
- [158] C. Yuan, Q. Ma, W. Zhou, J. Han, and S. Hu, “Jointly embedding the local and global relations of heterogeneous graph for rumor detection,” in Proc. IEEE Int. Conf. Data Mining, 2019, pp. 796–805.
- [159] Z. Liu, C. Chen, X. Yang, J. Zhou, X. Li, and L. Song, “Heterogeneous graph neural networks for malicious account detection,” in Proc. ACM 27th Int. Conf. Inf. Knowl. Manage., 2018, pp. 2077–2085.
- [160] M. Yoon, B. Hooi, K. Shin, and C. Faloutsos, “Fast and accurate anomaly detection in dynamic graphs with a two-pronged approach,” in Proc. ACM SIGKDD 25th Int. Conf. Knowl. Discov. Data Mining, 2019, pp. 647–657.
- [161] S. Bhatia, B. Hooi, M. Yoon, K. Shin, and C. Faloutsos, “Midas: Microcluster-based detector of anomalies in edge streams,” in Proc. AAAI Conf. Artif. Intell., vol. 34, no. 04, 2020, pp. 3242–3249.
- [162] D. Eswaran and C. Faloutsos, “Sedanspot: Detecting anomalies in edge streams,” in Proc. IEEE Int. Conf. Data Mining, 2018, pp. 953–958.
- [163] B. Zong, Q. Song, M. R. Min, W. Cheng, C. Lumezanu, D. Cho, and H. Chen, “Deep autoencoding gaussian mixture model for unsupervised anomaly detection,” in Proc. Int. Conf. Learn. Represent., 2018.
- [164] G. Pang, C. Shen, and A. van den Hengel, “Deep anomaly detection with deviation networks,” in Proc. ACM SIGKDD 25th Int. Conf. Knowl. Discov. Data Mining, 2019, pp. 353–362.
- [165] C. Zhou and R. C. Paffenroth, “Anomaly detection with robust deep autoencoders,” in Proc. ACM SIGKDD 23rd Int. Conf. Knowl. Discov. Data Mining, 2017, pp. 665–674.
- [166] R. Chalapathy, E. Toth, and S. Chawla, “Group anomaly detection using deep generative models,” in European Conf. Mach. Learn. Knowl. Discov. Databases. Springer, 2018, pp. 173–189.
- [167] L. Ruff, R. A. Vandermeulen, N. Görnitz, A. Binder, E. Müller, K. Müller, and M. Kloft, “Deep semi-supervised anomaly detection,” arXiv preprint arXiv:1906.02694, 2019.
- [168] S. Kim, Y. Tsai, K. Singh, Y. Choi, E. Ibok, C. Li, and M. Cha, “Date: Dual attentive tree-aware embedding for customs fraud detection,” in Proc. ACM SIGKDD 26th Int. Conf. Knowl. Discov. Data Mining, 2020, pp. 2880–2890.
- [169] Q. Huang, C. Zhou, J. Wu, L. Liu, and B. Wang, “Deep spatial–temporal structure learning for rumor detection on twitter,” Neural Comput. Appl., 2020.
- [170] S. Rayana and L. Akoglu, “Less is more: Building selective anomaly ensembles,” ACM Trans. Knowl. Discov. Data, vol. 10, no. 4, pp. 1–33, 2016.
- [171] P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Gallagher, and T. Eliassi-Rad, “Collective classification in network data,” AI Mag., vol. 29, no. 3, pp. 93–106, 2008.
- [172] P. I. Sánchez, E. Müller, F. Laforet, F. Keller, and K. Böhm, “Statistical selection of congruent subspaces for mining attributed graphs,” in Proc. IEEE 13th Int. Conf. Data Mining, 2013, pp. 647–656.
- [173] A. Lancichinetti and S. Fortunato, “Community detection algorithms: a comparative analysis,” Physical review E, vol. 80, no. 5, p. 056117, 2009.
- [174] D. J. Watts and S. H. Strogatz, “Collective dynamics of ‘small-world’networks,” nature, vol. 393, no. 6684, pp. 440–442, 1998.
- [175] L. Akoglu and C. Faloutsos, “Rtg: A recursive realistic graph generator using random typing,” in European Conf. Mach. Learn. Knowl. Discov. Databases. Springer, 2009, pp. 13–28.
- [176] L. Bozarth and C. Budak, “Toward a better performance evaluation framework for fake news classification,” in Proc. AAAI Int. Conf. Web Soc. Media, 2020, pp. 60–71.
- [177] N. Elmrabit, F. Zhou, F. Li, and H. Zhou, “Evaluation of machine learning algorithms for anomaly detection,” in Cyber Security, 2020, pp. 1–8.
- [178] A. H. Engly, A. R. Larsen, and W. Meng, “Evaluation of anomaly-based intrusion detection with combined imbalance correction and feature selection,” in Int. Conf. Netw. Secur., vol. 12570, 2020, pp. 277–291.
- [179] E. Alsentzer, S. G. Finlayson, M. M. Li, and M. Zitnik, “Subgraph neural networks,” in Proc. Int. Conf. Neural Inf. Process. Syst., 2020.
- [180] H. Wang, Q. Zhang, J. Wu, S. Pan, and Y. Chen, “Time series feature learning with labeled and unlabeled data,” Pattern Recognition, vol. 89, pp. 55–66, 2019.
- [181] Q. Zhang, J. Wu, P. Zhang, G. Long, and C. Zhang, “Salient subsequence learning for time series clustering,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 41, no. 9, pp. 2193–2207, 2018.
- [182] A. Sankar, Y. Wu, L. Gou, W. Zhang, and H. Yang, “Dysat: Deep neural representation learning on dynamic graphs via self-attention networks,” in Proc. ACM Int. Conf. Web Search Data Mining, 2020, pp. 519–527.
- [183] S. M. Kazemi, R. Goel, K. Jain, I. Kobyzev, A. Sethi, P. Forsyth, and P. Poupart, “Representation learning for dynamic graphs: A survey,” J. Mach. Learn. Res., vol. 21, pp. 70:1–70:73, 2020.
- [184] W. L. Hamilton, Z. Ying, and J. Leskovec, “Inductive representation learning on large graphs,” in Proc. 31st Int. Conf. Neural Inf. Process. Syst., 2017, pp. 1024–1034.
- [185] M. R. Khan and J. E. Blumenstock, “Multi-gcn: Graph convolutional networks for multi-view networks, with applications to global poverty,” in Proc. AAAI Conf. Artif. Intell., vol. 33, no. 01, 2019, pp. 606–613.
- [186] S. Fan, X. Wang, C. Shi, E. Lu, K. Lin, and B. Wang, “One2multi graph autoencoder for multi-view graph clustering,” in Proc. Int. Conf. World Wide Web, 2020, pp. 3070–3076.
- [187] J. Cheng, Q. Wang, Z. Tao, D. Xie, and Q. Gao, “Multi-view attribute graph convolution networks for clustering,” in Proc. Int. Joint Conf. Artif. Intell., 2020, pp. 2973–2979.
- [188] J. Zhang, B. Cao, S. Xie, C. Lu, P. S. Yu, and A. B. Ragin, “Identifying connectivity patterns for brain diseases via multi-side-view guided deep architectures,” in Proc. SIAM Int. Conf. Data Mining, 2016, pp. 36–44.
- [189] H. Xiao, J. Gao, D. S. Turaga, L. H. Vu, and A. Biem, “Temporal multi-view inconsistency detection for network traffic analysis,” in Proc. 24th Int. Conf. World Wide Web, 2015, pp. 455–465.
- [190] E. Gujral, R. Pasricha, and E. Papalexakis, “Beyond rank-1: Discovering rich community structure in multi-aspect graphs,” in Proc. Int. Conf. World Wide Web, 2020, pp. 452–462.
- [191] N. Shah, A. Beutel, B. Gallagher, and C. Faloutsos, “Spotting suspicious link behavior with fbox: An adversarial perspective,” in Proc. IEEE Int. Conf. Data Mining, 2014, pp. 959–964.
- [192] H. Chen, H. Yin, X. Sun, T. Chen, B. Gabrys, and K. Musial, “Multi-level graph convolutional networks for cross-platform anchor link prediction,” in Proc. ACM SIGKDD 26th Int. Conf. Knowl. Discov. Data Mining, 2020, pp. 1503–1511.
- [193] A. Guzzo, A. Pugliese, A. Rullo, D. Sacca, and A. Piccolo, “Malevolent activity detection with hypergraph-based models,” IEEE Trans. Knowl. Data Eng., vol. 29, no. 5, pp. 1115–1128, 2017.
- [194] X. Sun, H. Yin, B. Liu, H. Chen, J. Cao, Y. Shao, and N. Q. Viet Hung, “Heterogeneous hypergraph embedding for graph classification,” in Proc. ACM 14th Int. Conf. Web Search Data Mining, 2021, pp. 725–733.
- [195] J. Silva and R. Willett, “Hypergraph-based anomaly detection of high-dimensional co-occurrences,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 3, pp. 563–569, 2008.
- [196] J. Tang, C. C. Aggarwal, and H. Liu, “Node classification in signed social networks,” in Proc. SIAM Int. Conf. Data Mining, 2016, pp. 54–62.
- [197] S. Gao, L. Denoyer, and P. Gallinari, “Temporal link prediction by integrating content and structure information,” in Proc. ACM 20th Int. Conf. Inf. Knowl. Manage., 2011, pp. 1169–1174.
- [198] V. Sanh, T. Wolf, and S. Ruder, “A hierarchical multi-task approach for learning embeddings from semantic tasks,” in Proc. AAAI Conf. Artif. Intell., vol. 33, no. 01, 2019, pp. 6949–6956.
- [199] M. Hessel, H. Soyer, L. Espeholt, W. Czarnecki, S. Schmitt, and H. van Hasselt, “Multi-task deep reinforcement learning with popart,” in Proc. AAAI Conf. Artif. Intell., vol. 33, no. 01, 2019, pp. 3796–3803.
- [200] G. Pang, C. Yan, C. Shen, A. v. d. Hengel, and X. Bai, “Self-trained deep ordinal regression for end-to-end video anomaly detection,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2020, pp. 12 173–12 182.
- [201] B. Sanchez-Lengeling, J. N. Wei, B. K. Lee, E. Reif, P. Wang, W. W. Qian, K. McCloskey, L. J. Colwell, and A. B. Wiltschko, “Evaluating attribution for graph neural networks,” in Proc. Int. Conf. Neural Inf. Process. Syst., 2020.
- [202] T. Idé, A. Dhurandhar, J. Navrátil, M. Singh, and N. Abe, “Anomaly attribution with likelihood compensation,” in Proc. AAAI Conf. Artif. Intell., vol. 35, no. 5, 2021, pp. 4131–4138.
- [203] F. Hohman, H. Park, C. Robinson, and D. H. P. Chau, “Summit: Scaling deep learning interpretability by visualizing activation and attribution summarizations,” IEEE Trans. Vis. Comput. Graph., vol. 26, no. 1, pp. 1096–1106, 2019.
- [204] M. Kulldorff, “A spatial scan statistic,” Commun. Stat.- Theory Methods, vol. 26, no. 6, pp. 1481–1496, 1997.
- [205] D. B. Neill, A. W. Moore, M. Sabhnani, and K. Daniel, “Detection of emerging space-time clusters,” in Proc. ACM SIGKDD 11th Int. Conf. Knowl. Discov. Data Mining, 2005, pp. 218–227.
- [206] J. L. Sharpnack, A. Krishnamurthy, and A. Singh, “Near-optimal anomaly detection in graphs using lovasz extended scan statistic,” in Proc. 26th Int. Conf. Neural Inf. Process. Syst., 2013, pp. 1959–1967.
- [207] R. H. Berk and D. H. Jones, “Goodness-of-fit test statistics that dominate the kolmogorov statistics,” Wahrsch. Verw. Gebiete., vol. 47, no. 1, pp. 47–59, 1979.
- [208] T. Babaie, S. Chawla, S. Ardon, and Y. Yu, “A unified approach to network anomaly detection,” in BigData, 2014, pp. 650–655.
- [209] X. Li, P. Chen, L. Jing, Z. He, and G. Yu, “Swisslog: Robust and unified deep learning based log anomaly detection for diverse faults,” in Proc. IEEE 31st Int. Symp. Software Reliab. Eng., 2020, pp. 92–103.
Appendix A Challenges in Graph Anomaly Detection
Due to the complexity of anomaly detection and graph data mining, adopting deep learning technologies for graph anomaly detection faces a number of challenges:
Data-CH1. Ground-truth is scarce. In most cases, there is little or no prior knowledge about the features or patterns of anomalies in real applications. Ground-truth anomalies are often identified by domain experts, and this is generally cost-prohibitive. As a result, labeled ground-truth anomalies are often unavailable for analysis in a wide range of disciplines.
Data-CH2. Various types of graphs. Different graphs model different real-world data. For instance, plain graphs contain only structural information, attributed graphs contain both structural and attribute information, and heterogeneous graphs represent the complex relations between different types of objects. These graphs reflect the real-world data in different forms, and graph anomalies will show different deviating patterns in different types of graphs.
Data-CH3. Various types of graph anomalies. Given a specific type of graph, graph anomalies could appear as a specific node, edge, sub-graph, or an entire graph, and each type of these anomalies is significantly different from others. This means detection methods must involve concise definitions of anomalies and be able to identify concrete clues about the deviating patterns of anomalies.
Data-CH4. High dimensionality and large scale. Representing the structural information of real-world networks usually results in high-dimensional and large-scale data [NIPS20201] because real-world networks often contain millions or billions of nodes. Graph anomaly detection techniques, hence, should be capable of handling such high-dimensional and large-scale data; this includes the ability to extract anomalous patterns under the constraints of execution time and feasible computing resources.
Data-CH5. Interdependencies and dynamics. The relationships between real objects reveal their interdependencies, so these objects can no longer be treated individually for anomaly detection. That is to say, detection techniques need to consider the deviating patterns of anomalies by assessing the pairwise, triadic, and higher-order relationships among objects stored in conventional graphs or hypergraphs [silva2008hypergraph, sun2021heterogeneous, guzzo2017malevolent, chen2020multi]. In addition, the dynamic nature of real-world networks makes detection problems much more challenging.
Data-CH6. Class imbalance. As anomalies are rare occurrences, only a very small proportion of real-world data might be anomalous. This naturally introduces a critical class imbalance problem to anomaly detection because the number of normal objects far exceeds the number of anomalies in the training data. If no further actions are taken to tackle this challenge, learning-based anomaly detection techniques might overlook the patterns of anomalies, leading to sub-optimal results.
Data-CH7. Unknown and camouflaged anomalies. In reality, knowledge about anomalies mainly stems from human expertise. There are still many unknown anomalies across different application domains, and new types of anomalies might appear in the future. Moreover, real-world anomalies can hide or be camouflaged as benign objects to bypass existing detection systems. In graphs, anomalies might hide themselves by connecting with many normal nodes or by mimicking their attributes. Detection methods, therefore, need to be adaptive to unknown and novel anomalies and robust to camouflaged ones.
These data-specific challenges and the technique-specific challenges (discussed in Section 1.1) are summarized in Table VII, along with the corresponding articles that aim to address them.
Challenges | Details | Methods
Data-specific Challenges
Data-CH1 | Ground-truth is scarce | [li2017radar, ding2019interactive, bandyopadhyay2020outlier, bandyopadhyay2019outlier, yu2018netwalk, liu2017accelerated, ding2020inductive, teng2018deep, zheng2019one, peng2020deep, wang2019fdgars, SemiGNN, DBLP:conf/ijcnn/Ouyang0020, zheng2019addgraph]
Data-CH2 | Various types of graphs | [liang2018semi, peng2018anomalous, ding2019deep, li2019specae, ding2019interactive, fan2020anomalydae, bandyopadhyay2020outlier, bandyopadhyay2019outlier, liu2017accelerated, zhang2019robust, peng2020deep, CARE-GNN, wang2018deep, FraudNE, dou2021user]
Data-CH3 | Various types of graph anomalies | [li2019specae, ding2019interactive, peng2020deep, CARE-GNN, AANE]
Data-CH4 | High dimensionality and large scale | [zong2018deep, fan2020anomalydae, bandyopadhyay2020outlier, yu2018netwalk, liu2017accelerated, teng2018deep, hu2016embedding, wu2017adaptive]
Data-CH5 | Interdependencies and dynamics | [peng2018anomalous, ding2019deep, li2019specae, ding2019interactive, fan2020anomalydae, bandyopadhyay2020outlier, bandyopadhyay2019outlier, yu2018netwalk, liu2017accelerated, teng2018deep, gutierrez2020multi, hu2016embedding, wu2017adaptive, peng2020deep, wang2019fdgars, SemiGNN, teng2017anomaly, DBLP:conf/ijcnn/Ouyang0020, AANE, zheng2019addgraph, wang2018deep, FraudNE, dou2021user]
Data-CH6 | Class imbalance | [zong2018deep, SemiGNN, GAL, zhao2020using]
Data-CH7 | Unknown and camouflaged anomalies | [li2017radar, peng2018anomalous, liu2017accelerated, hooi2017graph, ding2020inductive, teng2018deep, CARE-GNN, zheng2019addgraph, wang2018deep, FraudNE]
Technique-specific Challenges
Tech-CH1 | Anomaly-aware training objectives | [bandyopadhyay2020outlier, bandyopadhyay2019outlier, yu2018netwalk, liu2017accelerated, ding2020inductive, teng2018deep, zheng2019one, wang2019fdgars, DBLP:conf/ijcnn/Ouyang0020, zheng2019addgraph, gutierrez2020multi, hu2016embedding, wu2017adaptive, SemiGNN, teng2017anomaly, AANE, liang2018semi, peng2018anomalous, ding2019deep, li2019specae, ding2019interactive, fan2020anomalydae, zhang2019robust, peng2020deep, CARE-GNN, wang2018deep, FraudNE, dou2021user, zhu2020mixedad, GAL, zhao2020using]
Tech-CH2 | Anomaly interpretability | [hu2016embedding, SemiGNN]
Tech-CH3 | High training cost | [yu2018netwalk, wang2019fdgars, wang2018deep, peng2020deep, chang2021f, zhu2020mixedad]
Tech-CH4 | Hyperparameter tuning | [zhao2020automating, GAL]
Appendix B Taxonomy
The taxonomy of this survey is shown in Fig. 11.
Appendix C ANOS ND on Static Graphs
Following the taxonomy of Section 3.2, this section reviews traditional non-deep learning techniques designed for ANOS ND on static attributed graphs, followed by techniques based on GATs, GANs, and network representation.

C.1 Traditional Non-Deep Learning Techniques
Traditional techniques, such as statistical models, matrix factorization, and KNN, have been widely applied to extract the structural/attribute patterns of anomalous nodes for a subsequent detection process.
Among these, matrix factorization (MF) based techniques have shown power at capturing both topological structures and node attributes, achieving promising detection performance. An early attempt of this kind was made by Liu et al. [liu2017accelerated], who aimed to detect community anomalies (defined in Section 3) with their model, ALAD. As shown in Fig. 12(a), through non-negative matrix factorization, ALAD incorporates both the graph structure and node attributes to derive community structures and their attribute distribution vectors. Once the matrices are decomposed, ALAD measures the normality of each node according to the attribute similarity between the node and the community it belongs to. By ranking the nodes' normality scores in ascending order, the top-k nodes are identified as community anomalies.
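To make this pipeline concrete, below is a minimal, hedged sketch in the spirit of ALAD rather than its original implementation: it approximates the joint factorization by running scikit-learn's NMF on the concatenated structure and attribute views, assigns each node to its strongest community, and scores normality as the cosine similarity between a node's attributes and its community's attribute profile. All names (e.g., alad_style_scores, n_communities) are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

def alad_style_scores(A, X, n_communities=5, seed=0):
    """Rank nodes as community anomalies (ALAD-style sketch).

    A: (n, n) non-negative adjacency matrix, X: (n, d) non-negative attributes.
    Returns normality scores; the lowest-scored nodes are candidate anomalies.
    """
    # ALAD couples the structure and attribute factorizations in one objective;
    # concatenating the two views is only an approximation of that coupling.
    M = np.hstack([A, X])
    nmf = NMF(n_components=n_communities, init="nndsvda", random_state=seed, max_iter=500)
    H = nmf.fit_transform(M)             # (n, k) soft community memberships
    C = nmf.components_[:, A.shape[1]:]  # (k, d) community attribute distributions

    community = H.argmax(axis=1)         # hard community assignment per node
    # Normality = cosine similarity between a node's attributes and the
    # attribute profile of the community it belongs to.
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    Cn = C / (np.linalg.norm(C, axis=1, keepdims=True) + 1e-12)
    return np.sum(Xn * Cn[community], axis=1)

# Toy usage: 30 nodes, 8 attributes.
rng = np.random.default_rng(0)
A = (rng.random((30, 30)) < 0.1).astype(float); A = np.maximum(A, A.T)
X = rng.random((30, 8))
scores = alad_style_scores(A, X, n_communities=3)
print("most anomalous nodes:", np.argsort(scores)[:5])
```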
Li et al. [li2017radar] approached ANOS ND from a different perspective using residual analysis. The assumption is that anomalies lead to larger attribute reconstruction residual errors because they do not conform to the attribute patterns of the majority. Accordingly, the proposed model, Radar, learns a residual matrix, as shown in Fig. 12(b), by factorizing the node attribute matrix. To incorporate structural information when estimating these errors, Radar places explicit restrictions on the learned residuals such that directly linked nodes have similar residual patterns (known as the homophily effect). Finally, the top-k nodes with the largest residual norms are identified as anomalies.
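As a hedged illustration of residual-based scoring, the sketch below reconstructs the attribute matrix with a low-rank approximation and ranks nodes by the row-wise norm of the residual; Radar's additional homophily regularizer over the graph structure is deliberately omitted, so this captures only the attribute side of the idea.

```python
import numpy as np

def residual_scores(X, rank=5):
    """Score nodes by how poorly a low-rank model reconstructs their attributes.

    X: (n, d) attribute matrix. Larger scores -> larger residuals -> more anomalous.
    Radar additionally ties the residuals of linked nodes together (homophily);
    that structural term is omitted in this sketch.
    """
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    X_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank] + mean
    R = X - X_hat                      # residual matrix
    return np.linalg.norm(R, axis=1)   # row-wise l2 norm as anomaly score

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))
X[:3] += 6.0                           # inject three attribute outliers
print("top-3 anomalies:", np.argsort(-residual_scores(X, rank=5))[:3])
```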
Although the homophily hypothesis provides strong support for exploiting structural information, it might not always hold. In fact, real objects may have attributes distinct from those of their connected neighbors, and it is non-trivial to require that all connected objects share similar values in each dimension of the feature space. On this basis, Peng et al. [peng2018anomalous] pointed out that there are structurally irrelevant node attributes that do not satisfy the homophily hypothesis, and that these attributes have adverse effects on anomaly detection techniques built upon it. To tackle this problem, their model, ANOMALOUS, uses CUR decomposition [DBLP:journals/pnas/MahoneyD09] to select attributes that are closely related to the network structure and then spots anomalies through residual analysis following Radar (as shown in Fig. 12(c)).



Beyond matrix factorization, linear regression models have also been designed to train anomaly classifiers given labeled training data. A representative work is that of Wu et al. [wu2017adaptive], whose supervised model, SGASD, has yielded encouraging results in identifying social spammers using the social network structure, the content information in social media, and user labels.
These non-deep learning techniques are able to capture valuable information from graph topology and node attributes, but their application and generalizability to real-world networks (which are usually large-scale) are strictly limited by the high computational cost of matrix decomposition operations and regression models.
C.2 GAT Based Techniques
Although GCN provides an effective solution for incorporating the graph structure with node attributes for ANOS ND (reviewed in Section 3.2.2), its ability to capture the most relevant information from neighboring nodes is subpar. This is because the simple convolution operation aggregates neighbor information into the target node with equal weights. Recently, the graph attention mechanism (GAT) [velivckovic2017graph] has been employed to replace the traditional graph convolution. For instance, Fan et al. [fan2020anomalydae] applied a graph attention network to encode the network structure information (structure encoding). Their method, AnomalyDAE, also adopts a separate attribute autoencoder to embed the node attributes (attribute encoding). Through an unsupervised encoding-decoding process, each node is ranked according to its corresponding reconstruction loss, and the top-k nodes introducing the greatest losses are identified as anomalies. Specifically, the attribute decoding process takes the node embeddings learned through both the structure and attribute encoding processes to reconstruct the node attributes, as shown in Fig. 13, while the graph topology is reconstructed using only the embeddings output by the GAT. To acquire better reconstruction results, AnomalyDAE is trained to minimize the overall loss function, denoted as:
$$\mathcal{L} = \alpha \left\| (\mathbf{A} - \hat{\mathbf{A}}) \odot \boldsymbol{\theta} \right\|_F^2 + (1-\alpha) \left\| (\mathbf{X} - \hat{\mathbf{X}}) \odot \boldsymbol{\eta} \right\|_F^2 \qquad (19)$$

where $\alpha$ is the balancing coefficient, $\mathbf{A}$ and $\mathbf{X}$ are the input adjacency matrix and attribute matrix, and $\hat{\mathbf{A}}$ and $\hat{\mathbf{X}}$ are the reconstructed matrices. Each $\theta_{ij}$ and $\eta_{ij}$ is 1 if the corresponding element $\mathbf{A}_{ij}$ or $\mathbf{X}_{ij}$ equals 0; otherwise, their values are defined by hyperparameters greater than 1.
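For readers who prefer code to notation, a small PyTorch sketch of the weighted reconstruction loss in Eq. (19) is given below; the tensor names and the per-node scoring helper are illustrative choices, not AnomalyDAE's released implementation.

```python
import torch

def anomalydae_loss(A, A_hat, X, X_hat, alpha=0.7, theta_val=5.0, eta_val=5.0):
    """Weighted reconstruction loss of Eq. (19) (sketch).

    Non-zero entries of A and X receive a penalty greater than 1
    (theta_val / eta_val), zero entries receive weight 1, so the model
    focuses on reconstructing the observed links and attributes.
    """
    theta = torch.where(A == 0, torch.ones_like(A), torch.full_like(A, theta_val))
    eta = torch.where(X == 0, torch.ones_like(X), torch.full_like(X, eta_val))
    struct_loss = torch.norm((A - A_hat) * theta, p="fro") ** 2
    attr_loss = torch.norm((X - X_hat) * eta, p="fro") ** 2
    return alpha * struct_loss + (1 - alpha) * attr_loss

def node_anomaly_scores(A, A_hat, X, X_hat, alpha=0.7):
    """Per-node reconstruction error used to rank anomalies (illustrative)."""
    return alpha * torch.norm(A - A_hat, dim=1) ** 2 \
        + (1 - alpha) * torch.norm(X - X_hat, dim=1) ** 2
```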
Another representative work is SemiGNN [SemiGNN], in which Wang et al. proposed a semi-supervised attention-based graph neural network for detecting fraudulent users on online payment platforms. This work further exploits user information collected from various sources (e.g., transaction records and user profiles) and represents real-world networks as multi-view graphs. Each view in the graph is modeled to reflect the relationships between users or the correlations between user attributes. For anomaly detection, SemiGNN first generates a node embedding from each view by aggregating neighbor information through a node-level attention mechanism. It then employs view-level attention to aggregate the node embeddings from each view and generates a unified representation for each node. Lastly, the class of each node is predicted through a softmax classifier. Wang et al. designed a supervised classification loss and an unsupervised graph reconstruction loss to jointly optimize the model, making full use of labeled and unlabeled data. The classification loss can be denoted as:
$$\mathcal{L}_{\text{sup}} = -\frac{1}{|\mathcal{V}_L|} \sum_{u \in \mathcal{V}_L} \sum_{c=1}^{C} \mathbb{1}(y_u = c)\, \log p_{\theta}(y_u = c \mid u) \qquad (20)$$

where $\mathcal{V}_L$ is the labeled user set and $|\mathcal{V}_L|$ is its size, $\mathbb{1}(\cdot)$ is an indicator function, $C$ is the number of labels to be predicted (in most cases the label is either anomalous or non-anomalous, and $C=2$), and $\theta$ represents the trainable variables. Meanwhile, the unsupervised loss encourages unlabeled nodes (users) that can be reached by labeled nodes through random walks to obtain similar representations, and vice versa. This is achieved by negative sampling (unlabeled nodes that cannot be reached by random walks are negative samples), and the loss can be formulated as:
$$\mathcal{L}_{\text{unsup}} = -\sum_{u \in \mathcal{V}} \Big[ \sum_{v \in \mathcal{N}(u)} \log \sigma\big(\mathbf{z}_u^{\top}\mathbf{z}_v\big) + \sum_{v' \in \mathcal{N}'(u)} \log \sigma\big(-\mathbf{z}_u^{\top}\mathbf{z}_{v'}\big) \Big] \qquad (21)$$

where $\mathcal{V}$ denotes the user set, $\mathcal{N}(u)$ denotes the random-walk neighbor set of $u$, $\mathcal{N}'(u)$ represents the negative samples drawn from the sampling distribution $P_n$, $\mathbf{z}_u$ is the representation of node $u$, and $\sigma$ is the sigmoid function. The total loss takes the sum of the two terms and is formulated as:
$$\mathcal{L} = \alpha\,\mathcal{L}_{\text{sup}} + (1-\alpha)\,\mathcal{L}_{\text{unsup}} + \lambda \|\theta\|_2^2 \qquad (22)$$

where $\alpha$ is a balancing parameter and $\lambda \|\theta\|_2^2$ regularizes all trainable variables $\theta$.
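The joint objective in Eqs. (20)-(22) can be sketched as follows, assuming the hierarchical attention layers have already produced node logits and embeddings; the pair construction from random walks and all names (semignn_style_loss, pos_pairs, neg_pairs) are illustrative assumptions rather than SemiGNN's actual code.

```python
import torch
import torch.nn.functional as F

def semignn_style_loss(logits, labels, labeled_mask, z, pos_pairs, neg_pairs,
                       alpha=0.5, weight_decay=1e-4, params=None):
    """Joint supervised + unsupervised loss in the spirit of Eqs. (20)-(22).

    logits: (n, C) class scores, labels: (n,) integer labels,
    labeled_mask: (n,) boolean mask of labeled users,
    z: (n, d) node embeddings,
    pos_pairs / neg_pairs: (m, 2) long tensors of index pairs that are
    reachable / not reachable from each other via random walks.
    """
    # Eq. (20): averaged cross-entropy over the labeled user set.
    sup = F.cross_entropy(logits[labeled_mask], labels[labeled_mask])

    # Eq. (21): pull random-walk neighbours together, push negative samples apart.
    pos = torch.sigmoid((z[pos_pairs[:, 0]] * z[pos_pairs[:, 1]]).sum(dim=1))
    neg = torch.sigmoid(-(z[neg_pairs[:, 0]] * z[neg_pairs[:, 1]]).sum(dim=1))
    unsup = -(torch.log(pos + 1e-12).sum() + torch.log(neg + 1e-12).sum())

    # Eq. (22): weighted sum plus L2 regularization of the trainable variables.
    reg = sum((p ** 2).sum() for p in params) if params is not None else 0.0
    return alpha * sup + (1 - alpha) * unsup + weight_decay * reg
```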
C.3 GAN Based Techniques
Because GANs are effective at capturing anomalous/regular data distributions (as reviewed in Section 4.2), Ding et al. [ding2020inductive] used a GAN in their model, AEGIS, for improved anomaly discriminability on unseen data. As shown in Fig. 14, the model first generates node embeddings from the input attributed graph through a GNN, and then a generator and a discriminator are trained to identify anomalies. In the first phase, anomalous nodes and regular nodes are mapped to distinct areas in the embedding space so that the GAN is able to learn a boundary between them. To this end, Ding et al. built an autoencoder network with graph differentiative layers that capture the attribute differences between each node and its $k$-th order neighbors. Such difference information makes anomalies easier to distinguish. The embeddings are encoded as follows:
$$\mathbf{h}_i^{(l)} = \sigma\Big( \sum_{k} \alpha_k \sum_{j \in \mathcal{N}_i^k} \gamma_{ij}\, \mathbf{W}^{(l)} \big(\mathbf{h}_i^{(l-1)} - \mathbf{h}_j^{(l-1)}\big) \Big) \qquad (23)$$

where $\mathbf{h}_i^{(l)}$ is the embedded feature of node $i$ produced by the $l$-th layer, $\alpha_k$ is the attention for each hop, $\mathcal{N}_i^k$ is the set of $k$-th order neighbors, $\gamma_{ij}$ is the attention for each neighbor, $\mathbf{h}_i^{(l-1)} - \mathbf{h}_j^{(l-1)}$ is the difference between the representations of $i$ and $j$, and $\mathbf{W}^{(l)}$ and the attention parameters are the trainable variables. The autoencoder is fine-tuned until the node attributes can be best reconstructed from the learned embeddings, after which the GAN is trained.
In the second phase, the generator follows a prior distribution and generates anomalies by sampling noisy data, while the discriminator attempts to distinguish between the embeddings of normal nodes and those of the generated anomalies. The training process is formulated as a mini-max game between the generator and the discriminator as follows:
$$\min_{G}\max_{D}\; \mathbb{E}_{\mathbf{z} \sim p_{\mathbf{z}}}\big[\log D(\mathbf{z})\big] + \mathbb{E}_{\tilde{\mathbf{a}} \sim p_{G}}\big[\log\big(1 - D(\tilde{\mathbf{a}})\big)\big] \qquad (24)$$

where $\mathbf{z}$ denotes the (real) node embeddings and $\tilde{\mathbf{a}}$ denotes the generated anomalies. After training, AEGIS directly learns an embedding $\mathbf{z}_i$ for a test node $i$ and quantifies its anomaly score with regard to the discriminator's output, i.e., the estimated probability that node $i$ is normal. The scoring function is formulated as:

$$\text{score}(i) = 1 - D(\mathbf{z}_i) \qquad (25)$$

and the top-k nodes with the highest scores are deemed anomalous.
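A minimal sketch of the adversarial phase (Eqs. (24)-(25)) is given below, assuming the encoder of the first phase has already produced node embeddings (here replaced by random tensors); the network sizes and training schedule are illustrative, not those of AEGIS.

```python
import torch
import torch.nn as nn

d = 32  # embedding dimension (illustrative)
G = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, d))               # generator
D = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid()) # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

z_real = torch.randn(256, d)  # stand-in for embeddings produced by the first-phase encoder

for step in range(200):
    # Discriminator step: real embeddings -> 1, generated anomalies -> 0 (Eq. (24)).
    fake = G(torch.randn(256, d)).detach()
    d_loss = bce(D(z_real), torch.ones(256, 1)) + bce(D(fake), torch.zeros(256, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the generated anomalies look normal to D.
    fake = G(torch.randn(256, d))
    g_loss = bce(D(fake), torch.ones(256, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Eq. (25): anomaly score = 1 - (estimated probability that the node is normal).
with torch.no_grad():
    scores = 1.0 - D(z_real).squeeze(1)
print(scores.topk(5).indices)  # five most anomalous nodes in this batch
```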
C.4 Network Representation Based Techniques
With network representation, graphs are first encoded into a vector space before the anomaly detection procedure takes place. As outlined in Section 3.1.2, numerous studies on ANOS ND in attributed graphs have exploited deep network representation techniques.
For instance, Zhang et al. [zhang2019robust] detected abnormal nodes whose attributes deviate significantly from those of their neighbors through a 3-layer neural network, REMAD, combined with residual analysis. They explicitly divide the original node attribute matrix into a residual attribute matrix that captures the abnormal characteristics of anomalies and a structurally relevant attribute matrix for network representation learning. Both matrices are jointly updated throughout the representation learning process so that nearby nodes are encouraged to have similar representations. Specifically, the node embeddings are generated by aggregating neighbor information together with each node's own attributes, formulated as:
$$\mathbf{h}_i^{(l)} = \sigma\Big( \mathbf{W}^{(l)} \Big( \mathbf{h}_i^{(l-1)} + \frac{1}{|\mathcal{N}(i)|}\sum_{j \in \mathcal{N}(i)} \mathbf{h}_j^{(l-1)} \Big) + \mathbf{b}^{(l)} \Big) \qquad (26)$$

where $\mathbf{h}_i^{(l)}$ is node $i$'s representation generated by the $l$-th layer (with $\mathbf{h}_i^{(0)}$ being node $i$'s structurally relevant attributes), $\mathcal{N}(i)$ contains $i$'s neighbors, $\sigma$ is the activation function, and $\mathbf{W}^{(l)}$ and $\mathbf{b}^{(l)}$ are the trainable variables. Finally, the residual matrix contains the abnormal information of each node, and the top-k nodes with the largest residual norms are considered anomalies.
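The aggregation rule in Eq. (26) essentially mixes a node's current representation with the mean of its neighbors' representations before a linear map and non-linearity. The NumPy sketch below implements that generic rule only; REMAD's joint update of the residual matrix is not shown, and the weights here are random placeholders.

```python
import numpy as np

def aggregate_layer(H, A, W, b):
    """One aggregation layer in the spirit of Eq. (26).

    H: (n, d_in) current representations, A: (n, n) adjacency matrix,
    W: (d_in, d_out), b: (d_out,). Returns (n, d_out) representations.
    """
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                       # avoid division by zero for isolated nodes
    neighbor_mean = (A @ H) / deg             # average of neighbours' representations
    return np.tanh((H + neighbor_mean) @ W + b)

rng = np.random.default_rng(2)
n, d = 50, 16
A = (rng.random((n, n)) < 0.1).astype(float); A = np.maximum(A, A.T)
X = rng.normal(size=(n, d))                   # structurally relevant attributes
H1 = aggregate_layer(X, A, rng.normal(size=(d, 8)) * 0.1, np.zeros(8))
H2 = aggregate_layer(H1, A, rng.normal(size=(8, 8)) * 0.1, np.zeros(8))
```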
Given partial node labels, Liang et al. [liang2018semi] developed a semi-supervised representation model, SEANO, that incorporates the graph structure, node attributes, and label information. Similar to REMAD, SEANO also aggregates neighbor information into center nodes, and the node representations are obtained through an embedding layer, formulated as:
$$\mathbf{z}_i = \lambda_i \, f(\mathbf{x}_i) + (1-\lambda_i)\, \bar{\mathbf{z}}_{\mathcal{N}(i)} \qquad (27)$$

where $\mathbf{z}_i$ is $i$'s representation, $\lambda_i \in [0,1]$ is a trainable variable that identifies the weight of $i$'s own attributes, $\bar{\mathbf{z}}_{\mathcal{N}(i)}$ is the average of node $i$'s neighbors' representations, and the function $f(\cdot)$ maps the original node attributes into lower-dimensional vectors. Then, a supervised component, which takes the representations as input, predicts node labels through a softmax classifier, while an unsupervised component is trained to reconstruct node contexts (node sequences). The context of each node is generated not only through random walks on the graph but also from the labeled nodes that belong to the same class. After training, SEANO interprets $\lambda_i$ as the normality score of node $i$, and the top-k nodes with the lowest scores are classed as anomalies.
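A hedged PyTorch sketch of the gated embedding layer in Eq. (27) follows; the per-node gate is parameterized through a sigmoid so it stays in [0, 1], and the class and method names are illustrative rather than SEANO's original code.

```python
import torch
import torch.nn as nn

class SeanoStyleEmbedding(nn.Module):
    """Gated combination of a node's own attributes and its neighbourhood
    context, in the spirit of Eq. (27). Small gate values mark nodes whose
    own attributes are down-weighted, i.e., likely outliers."""

    def __init__(self, n_nodes, in_dim, emb_dim):
        super().__init__()
        self.f = nn.Linear(in_dim, emb_dim)            # maps attributes to embeddings
        self.lam = nn.Parameter(torch.zeros(n_nodes))  # one gate per node

    def forward(self, X, X_neigh_mean):
        lam = torch.sigmoid(self.lam).unsqueeze(1)     # (n, 1) gate in [0, 1]
        return lam * self.f(X) + (1 - lam) * self.f(X_neigh_mean)

    def normality_scores(self):
        return torch.sigmoid(self.lam)                 # lowest values ~ anomalies
```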
Learning node representations by aggregating neighbor information has proven effective for capturing comprehensive information from the graph structure and node attributes. However, Liu et al. [GraphConsis] demonstrated that such an approach can help anomalies aggregate features from regular nodes, making them look normal and leading to sub-optimal detection performance. They identified three concrete issues that should be considered when applying aggregation operations for anomaly detection: 1) anomalies are rare objects in a network, so directly aggregating neighborhood information smooths out the differences between anomalies and normal instances, blurring the boundaries between them; 2) directly connected nodes may have distinct features, so the assumption that connected nodes share similar features, which serves as the basis for feature aggregation, no longer holds; and 3) real objects form multiple types of relations with others, which means aggregation results for different types of relations will be distinct. With regard to these concerns, their proposed method, GraphConsis, follows a sampling strategy to avoid potential anomalous neighbors when aggregating node features. This method also adopts an attention mechanism to aggregate neighbor information along different types of links. The learned node representations are therefore more robust to anomalies, and GraphConsis takes them as input to train a classifier for predicting labels.
Dou et al. [CARE-GNN] further considered the camouflage behaviors of fraudsters in their proposed model, CARE-GNN, to enhance detection performance. The camouflages can be categorized as either feature camouflage or relation camouflage; that is, anomalies either adjust their feature information or form connections with many benign objects to gloss over suspicious information. Hence, directly employing aggregation will overlook the camouflages and smooth out the abnormal patterns of anomalies, eliminating the distinction between anomalies and normal objects. To alleviate this over-smoothing, CARE-GNN also adopts a neighbor sampling strategy, as is the case with GraphConsis, to filter camouflaged anomalies, and it explores the different types of relations formed between users. Specifically, under each relation, Dou et al. employed an MLP to predict node labels using their features and measured the similarity (the $\ell_1$ distance of the MLP's outputs) between each node and its neighbors. Then, the top-k most similar neighbors are selected for feature aggregation, and CARE-GNN generates each node's representation through a combination of the latent representations learned under different relations. A classifier is eventually trained on the representations to predict the node labels.
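The label-aware neighbor filtering step can be sketched as follows: an MLP's fraud probability is computed per node, and for each node only the k neighbors with the smallest l1 distance in prediction are kept for aggregation. CARE-GNN actually learns the filtering threshold per relation with reinforcement learning, which this sketch omits; the function name and data layout are illustrative.

```python
import torch

def filter_neighbors(pred, neighbors, k):
    """Keep the k neighbours whose MLP prediction is most similar (l1 distance).

    pred: (n,) fraud probabilities from a label-aware MLP,
    neighbors: dict {node: list of neighbour ids}, k: neighbours to keep.
    """
    kept = {}
    for v, nbrs in neighbors.items():
        if not nbrs:
            kept[v] = []
            continue
        dist = torch.abs(pred[nbrs] - pred[v])           # l1 distance of predictions
        top = torch.topk(-dist, k=min(k, len(nbrs))).indices
        kept[v] = [nbrs[i] for i in top.tolist()]
    return kept

pred = torch.tensor([0.1, 0.9, 0.15, 0.2, 0.85])
neighbors = {0: [1, 2, 3], 1: [0, 4], 4: [1, 2]}
print(filter_neighbors(pred, neighbors, k=2))            # neighbours kept per node
```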
As can be seen, the performance of these network representation based techniques is determined by their training objectives/loss functions. Enhanced detection performance is likely if the loss function separates normal nodes from abnormal nodes reasonably well. Motivated by this, a more recent work [GAL] emphasizes the importance of anomaly-aware loss functions. In order to adjust the margins for anomalies, the authors proposed a novel loss function to guide the representation learning process; specifically, the loss function is designed to find the relative scales between the margins of outlier nodes and normal nodes. An MLP-based classifier is finally trained using the node representations generated by the anomaly-aware, loss-guided GNNs together with node labels. For unseen nodes, the classifier assigns labels based on their representations.
Appendix D ANOS ND on Dynamic Graphs With Traditional Non-Deep Learning Techniques
To detect anomalous nodes in dynamic plain graphs, traditional non-deep learning techniques rely heavily on modeling the structural evolution patterns of nodes. Representative works such as [DBLP:conf/wsdm/RossiGNH13] and [wang2019detecting] assume that the evolutionary behaviors of regular nodes (i.e., generating or removing connections with others) usually follow stable patterns, and that their changes have less impact on the graph structure than those of anomalies. Specifically, in [wang2019detecting], Wang et al. proposed a novel link prediction method that fits the evolutionary patterns of most nodes so that anomalies can be identified when their observed behaviors significantly conflict with the prediction results. They further quantified the impact of anomalous behaviors by assessing the perturbation imposed on the graph adjacency matrix.
Other traditional works also exploit node/edge attributes and their changes. For example, Teng et al. [teng2017anomaly] took node and edge attributes as two different views to describe each node. By encoding both types of information into a shared latent space, their proposed model learns a hypersphere from historical records. When new observations of existing nodes arrive, the model distinguishes the benign nodes from the anomalies according to their distances to the hypersphere centroid; points lying outside the hypersphere are spotted as anomalies.
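The hypersphere scoring step alone can be sketched in a few lines, assuming the embeddings of the two views have already been fused into a single vector per observation (the encoding itself is not shown); the quantile-based radius is an illustrative choice rather than the paper's exact formulation.

```python
import numpy as np

def hypersphere_scores(Z_hist, Z_new, quantile=0.95):
    """Score new observations by their distance to a hypersphere fit on history.

    Z_hist: (m, d) historical embeddings, Z_new: (k, d) new embeddings.
    The centre is the historical mean; the radius is a distance quantile.
    Returns (scores, radius); observations with scores > radius are flagged.
    """
    center = Z_hist.mean(axis=0)
    radius = np.quantile(np.linalg.norm(Z_hist - center, axis=1), quantile)
    scores = np.linalg.norm(Z_new - center, axis=1)
    return scores, radius

rng = np.random.default_rng(3)
Z_hist = rng.normal(size=(500, 8))
Z_new = np.vstack([rng.normal(size=(5, 8)), rng.normal(loc=6.0, size=(2, 8))])
scores, r = hypersphere_scores(Z_hist, Z_new)
print(np.where(scores > r)[0])   # indices of flagged new observations
```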
Unlike embedding techniques, Nguyen et al. [tam2019anomaly] proposed a non-parametric method to detect anomalous users, tweets, hashtags, and links on social platforms. Specifically, they modeled social platforms as heterogeneous social graphs so that the rich relationships between users, tweets, hashtags, and links were effectively captured. Through extensive analysis of features such as user registration information, keywords in tweets, the linguistic style of links, and the popularity scores of hashtags, anomalous objects are spotted based on their deviating features. This work also leverages the relationships between individual objects and the detected anomalies to identify groups of abnormal objects.
Appendix E ANOS ED With Traditional Non-Deep Learning Techniques
Traditional non-deep learning approaches mainly focus on using temporal signals (e.g., changes in graph structure) and applying specially designed statistical metrics to detect anomalous edges in dynamic graphs [ranshous2016scalable, aggarwal2011outlier]. As a concrete example, Eswaran and Faloutsos [eswaran2018sedanspot] modeled a dynamic graph as a stream of edges and exploited the graph structure as well as its evolution patterns. They identified two signs of anomalous edges: 1) edges connecting regions of the graph that were previously disconnected; and 2) connections that appear in bursts. For incoming edges, their model assigns an anomaly score to each edge, and the top-k edges with the highest scores are flagged as anomalies. A more recent work by Chang et al. [chang2021f] proposed a novel frequency factorization algorithm that spots anomalous incoming edges based on the likelihood of their observed frequencies. This method merges the advantages of probabilistic models and matrix factorization to capture both temporal and structural changes of nodes, and, as reported, it only requires constant memory to handle edge streams.
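As a toy illustration of the "bursty connection" signal only (not SEDANSPOT's sampling-based scoring or F-FADE's frequency factorization), the sketch below flags edges whose count in the current time unit greatly exceeds their historical rate; the scoring formula and all names are illustrative assumptions.

```python
from collections import defaultdict

def score_edge_stream(stream):
    """Toy burst-based edge scorer for a stream of (u, v, t) tuples.

    An incoming edge is treated as suspicious when it appears far more often
    within the current time unit than its historical per-unit rate suggests.
    """
    total = defaultdict(int)      # lifetime counts per edge
    window = defaultdict(int)     # counts in the current time unit
    last_t = None
    scores = []
    for u, v, t in stream:
        if t != last_t:           # new time unit: reset the window counts
            window.clear()
            last_t = t
        key = (u, v)
        total[key] += 1
        window[key] += 1
        expected = total[key] / max(t, 1)          # crude historical rate per unit time
        scores.append(((u, v, t), window[key] / (expected + 1e-9)))
    return scores

stream = [(1, 2, 1), (1, 2, 1), (3, 4, 1), (1, 2, 2), (5, 6, 2), (5, 6, 2), (5, 6, 2)]
for edge, s in sorted(score_edge_stream(stream), key=lambda x: -x[1])[:3]:
    print(edge, round(s, 2))      # the bursty edge (5, 6) ranks highest
```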
Appendix F ANOS SGD With Traditional Non-Deep Learning Techniques
F.1 ANOS SGD on Static Graphs
One motivation of ANOS SGD in static graphs is that anomalous sub-graphs often exhibit significantly different attribute distributions. Therefore, traditional non-deep learning techniques, such as gAnomaly [li2014probabilistic], AMEN [perozzi2016scalable], and SLICENDICE [nilforoshan2019slicendice], focus on modeling the attribute distributions and measuring the normality of sub-graphs. Another line of investigation is graph residual analysis. The rich attribute information contained in real-world networks provides insight into the relationships formed between objects. Thus, the motivation behind several studies to spot anomalous sub-graphs has been to measure the residual between the expected structures and observed structures [miller2013efficient].
F.2 ANOS SGD on Dynamic Graphs
Devising metrics for ANOS SGD has been the subject of many traditional works. For instance, Chen et al. [DBLP:journals/jiis/ChenHS12] introduced six metrics to identify community-based anomalies, namely: grown community, shrunken community, merged community, split community, born community, and vanished community. Although these hand-crafted features or statistical patterns fit some particular types of known anomalies well, their ability to detect unseen and camouflaged anomalies is limited, and applying them directly might introduce a high false negative rate, which is undesirable for applications like financial security. Other works, such as SPOTLIGHT by Eswaran et al. [eswaran2018spotlight] and the work of Liu et al. [liu2008spotting], explore sudden changes in dynamic graphs and identify anomalous sub-graphs that are related to such changes.
Motivated by the phenomenon that social spam and fraud groups often form dense temporal sub-graphs in online social networks, plenty of works, including [Densealert, mongiovi2013netspot], use manually extracted features to spot anomalous dense sub-graphs that have evolved significantly differently from the rest of the graph.
In addition to these studies, a large number of works discuss the use of various graph scan statistics for anomalous sub-graph detection, such as the Kulldorff statistic [kulldorff1997spatial], the Poisson statistic [neill2005detection], the elevated mean scan statistic [sharpnack2013near], and the Berk-Jones statistic [berk1979goodness]. Specifically, Shao et al. [dGraphScan] proposed a non-parametric method to detect anomalous sub-graphs in dynamic graphs where the network structure is constant but the node attributes change over time. This approach measures the anomaly score of each sub-graph with regard to the p-values of the nodes it comprises; sub-graphs with higher scores are more anomalous. Another work, GBGP [GBGP], instead adopts the elevated mean scan statistic to identify nodes that might form anomalous sub-graphs and detects anomalous groups that follow predefined irregular structures.