

GaGSL: Global-augmented Graph Structure Learning via Graph Information Bottleneck

Shuangjie Li, Jiangqing Song, Baoming Zhang, Gaoli Ruan, Junyuan Xie, Chongjun Wang Shuangjie Li, Jiangqing Song, Baoming Zhang, Gaoli Ruan, Junyuan Xie and Chongjun Wang are with the School of Computer Science and Technology, National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210046, China (e-mail: shuangjieli@smail.nju.edu.cn; sjq@smail.nju.edu.cn; zhangbm@smail.nju.edu.cn; glruan@smail.nju.edu.cn; jyxie@nju.edu.cn; chjwang@nju.edu.cn).
Abstract

Graph neural networks (GNNs) are prominent for their effectiveness in processing graph data for semi-supervised node classification tasks. Most GNN studies assume that the observed structure accurately represents the underlying node relationships. However, the graph structure is inevitably noisy or incomplete in reality, which can degrade the quality of graph representations. It is therefore imperative to learn a clean graph structure that balances performance and robustness. In this paper, we propose a novel method named Global-augmented Graph Structure Learning (GaGSL), guided by the Graph Information Bottleneck (GIB) principle. The key idea behind GaGSL is to learn a compact and informative graph structure for node classification tasks. Specifically, to mitigate the bias caused by relying solely on the original structure, we first obtain augmented features and an augmented structure through global feature augmentation and global structure augmentation. We then feed the augmented features and the augmented structure into structure estimators with different parameters, which optimize and redefine the graph structure, respectively. The redefined structures are combined to form the final graph structure. Finally, we employ GIB based on mutual information to guide the optimization of the graph structure toward the minimum sufficient graph structure. Comprehensive evaluations across a range of datasets reveal the outstanding performance and robustness of GaGSL compared with state-of-the-art methods.

Index Terms:
Graph structure learning, graph neural networks, graph information bottleneck, robustness

I Introduction

Graph data is pervasive in a variety of real-world scenarios, including power networks [1], social media [2, 3], and computer graphics [4]. In these scenarios, each node with attributes represents an entity, while each edge represents the relationship between the entity pairs. For example, in social networks, each node represents a user or individual, and they may have attributes such as personal information, interests, and professions. The edge that connects different nodes represents a friendship or following relationship, and it can be either directed or undirected. In recent years, graph neural networks (GNNs) have emerged as a powerful approach for working with graph data [5, 6, 7, 8], and have been widely adopted for diverse network analysis tasks, including node classification [9, 10], link prediction [11, 12], and graph classification [13, 14].

Most existing GNNs rely on the basic assumption that the observed structure precisely represents the underlying node relationships. However, this assumption does not always hold in practice, as multiple factors can lead to a noisy graph structure: (1) Presence of noise and bias. Data may be collected and annotated from multiple sources, and in this process noisy connections and bias can be introduced by subjective human judgment or limitations in device precision. In some special cases (e.g., graph-enhanced applications [15] and visual navigation [16]), the data may lack an inherent graph structure and require additional graph construction (e.g., kNN) for representation learning. (2) Adversarial attacks on graph structure. The majority of current adversarial attacks on graph data, including poisoning attacks [17], concentrate on altering the graph structure, especially by adding/deleting/rewiring edges [18], which can severely damage the original structure. For instance, in credit card fraud detection, a fraudster might generate numerous transactions involving multiple high-credit users to conceal their identity and avoid detection by GNNs.

How does the model's performance vary when the graph structure faces different levels of noise? And what is the impact on the graph structure when it is subjected to noise? To investigate these questions, we simulated a noisy graph structure by introducing artificial edges into the graph at different proportions (i.e., 25%, 50%, and 75%) of the original number of edges (simulated noise), using the Polblogs [19] dataset. Additionally, we calculated the probability matrices between communities for the original graph structure, as well as for the graph structure with 75% additional edges, and drew them as heat maps. As illustrated in Fig. 1, the performance of GNN models drops dramatically with the increase of the edge addition rate, with SGC [20] exhibiting the most significant decline. This observation suggests that randomly adding edges has a detrimental effect on node classification performance. Further comparison of the middle and right plots reveals that randomly adding edges can result in connections between nodes from different communities. This inter-community connectivity decreases the distinguishing ability of nodes after GNNs aggregate neighbor information, thereby leading to a decline in model performance. In summary, the effectiveness of GNNs is heavily dependent on the underlying graph structure, while real-world graphs often suffer from missing, meaningless, or even spurious edges. This structural noise may hinder message passing and limit the generalization ability of GNNs. Therefore, there is an urgent need to explore the optimal graph structure suitable for GNNs.


Figure 1: The performance of the models in node classification with different rates of edge addition (left), and heat maps of the probability matrices with edges added with rates of 0 (middle) and 75% (right).

In recent years, various graph structure learning (GSL) methods [21] have emerged to handle the aforementioned problems, ensuring the performance and robustness of the models. Nevertheless, developing effective techniques to learn an optimal graph structure for GNNs remains technically challenging. (1) How can multifaceted information be introduced to provide a more comprehensive perspective? Relying on a single graph structure is inadequate to fully capture the complexity and diversity inherent in a graph [22]. Different perspectives of the graph structure can shed light on various aspects and features within the graph. Therefore, it becomes imperative to integrate multi-perspective graph structures to acquire a more comprehensive and diverse graph structure. Most current GSL methods obtain the optimal graph structure from the single original structure [23, 24, 19]. There are also methods that obtain the optimal graph structure based on multiple fundamental views [9, 25, 26, 22]. As an illustrative example, Chen et al. [25] constructed the graph structure by incorporating both the normalized adjacency matrix and the node embedding similarity matrix. Nevertheless, this only considers feature similarity and fails to capture structural role information. (2) How can a clean graph structure be learned for node classification tasks? In information theory, two key principles are the Graph Information Bottleneck (GIB) [27, 28, 29] and the Principle of Relevant Information (PRI) [30]. GIB and PRI represent distinct approaches to redundancy reduction and information preservation. Specifically, GIB offers an essential guideline for GSL: an optimal graph structure should contain the minimum sufficient information required for the downstream prediction task. For example, Sun et al. [28] advanced the GIB principle in graph classification by jointly optimizing the graph structure and graph representation. PRI views the issue of reducing redundancy and retaining information as a balancing act between reducing the entropy of the representation and its relative entropy to the original data. For instance, a structure containing the most relevant yet least redundant information, quantified using von Neumann entropy and Quantum Jensen-Shannon divergence, was developed by Sun et al. [31]. However, learning an optimal graph structure that strikes a balance between performance and robustness in node classification tasks, guided by information-theoretic principles, remains an ongoing challenge.

To address the aforementioned issues, in this paper, we propose a novel Global-augmented Graph Structure Learning (GaGSL) method to enhance node classification performance and robustness based on the principle of GIB. GaGSL consists of global feature and structure augmentation, structure redefinition, and GIB guidance. In global feature and structure augmentation, two different techniques are used to obtain augmented features and an augmented structure, respectively. Details are presented in subsection IV-B. In structure redefinition, we introduce a structure estimator to appropriately refine the graph structure. This involves reallocating weights to the graph adjacency matrix elements based on node similarity. Then, the redefined structures are integrated to form the final structure. More information is provided in subsection IV-C. In GIB guidance, we aim to maximize the mutual information (MI) between node labels and the node embeddings \bm{Z}^{*} based on the final graph structure, while simultaneously imposing constraints on the MI between \bm{Z}^{*} and the node embeddings \bm{Z}_{r1} or \bm{Z}_{r2} based on the redefined structures. To effectively evaluate the MI, we employ an MI calculator based on the InfoNCE loss [32]. More elaboration is given in subsection IV-D. Finally, we employ a cyclic optimization scheme to iteratively update the model parameters. Details are presented in subsection IV-E. Our contributions can be summarized as follows:

  • We propose a Global-augmented GSL method, GaGSL, which is guided by GIB and aims at obtaining the most compact structure. This endeavor seeks to strike a finer balance between performance and robustness.

  • To alleviate the limitations of relying solely on a single graph structure, we integrate augmented features and augmented structure to obtain a more global and diverse graph structure.

  • The evaluation of our proposed GaGSL method is conducted on eight benchmark datasets, and the experimental findings convincingly showcase the effectiveness of GaGSL. Notably, GaGSL exhibits superior performance compared with state-of-the-art GSL methods, and this advantage is particularly pronounced when the method is applied to datasets that have been subjected to attacks.

The rest of the paper is organized as follows. In Section II we briefly introduce the related works. Before presenting our research methodology in Section IV, we give the relevant notations and background in Section III. In Section V, we present and discuss the results of our experiments on benchmark datasets. Finally, Section VI outlines the conclusions drawn from this work and discusses potential avenues for future research.

II Related works

Aligned with the focus of our study, we provide a concise review of the two research areas most pertinent to our work: graph neural networks and graph structure learning.

II-A Graph Neural Networks

GNNs have emerged as a prominent approach due to their effectiveness in working with graph data. These GNN models can be broadly categorized into two main groups: spectral-based methods and spatial-based methods.

Spectral-based methods aim to identify graph patterns in the frequency domain, leveraging the sound mathematical precepts of Graph Signal Processing (GSP) [33, 34]. Bruna et al. [35] first extended convolution to general graphs using a Fourier basis, treating the filter as a set of learnable parameters and considering graph signals with multiple channels, but the required eigendecomposition has O(n^{3}) computational complexity. In order to reduce the computational complexity, Defferrard et al. [36] and Kipf et al. [10] made several approximations and simplifications. Defferrard et al. [36] defined fast localized convolutional filters on graphs based on Chebyshev polynomials. Kipf et al. [10] further simplified ChebNet via a localized first-order approximation of spectral graph convolutions. Despite being spectral-based, GCN can also be viewed from a spatial perspective. In this context, GCN operates by aggregating feature information from the local neighborhood of each node. Recent studies have progressively improved upon GCN [10] by exploring alternative symmetric matrices. For example, the Adaptive Graph Convolution Network (AGCN) [37] proposed a Spectral Graph Convolution layer with graph Laplacian Learning (SGC-LL), which efficiently adapts the graph topology according to the data and learning task context. The Dual Graph Convolutional Network (DGCN) [38] designed a dual neural network structure to encode both local and global consistency. Other spectral graph convolutions have also been proposed. The Graph Wavelet Neural Network (GWNN) [39] defines the convolution operator via the wavelet transform, which avoids matrix eigendecomposition and provides good interpretability with its local and sparse graph wavelets. Simple Spectral Graph Convolution (S2GC) [40] derives a variant of GCN based on a modified Markov Diffusion Kernel. It achieves a balance between low- and high-pass filter bands to capture both global and local contexts of each node.

Conversely, spatial-based methods draw inspiration from the message passing mechanism employed in Recurrent Graph Neural Networks (RecGNNs) [41, 42]. They directly define graph convolution in the spatial domain as transforming and aggregating local information. The Neural Network for Graphs (NN4G) [43] is the first work towards spatial-based convolutional graph neural networks (ConvGNNs). Unlike RecGNNs, NN4G utilizes a combinatorial neural network architecture where each layer has independent parameters to learn the mutual dependencies of the graph. This method allows the extension of a node's neighborhood through the progressive construction of the architecture. To identify a particularly effective variant of the general approach and apply it to the task of chemical property prediction, the message passing neural network (MPNN) [8] provides a unified model for spatial-based ConvGNNs. In this framework, graph convolutions are treated as a message-passing process, where information is directly transmitted between nodes along the edges. The graph isomorphism network (GIN) [14] identifies that previous MPNN-based methods lack the ability to differentiate between distinct graph structures based on the embeddings they generate, and proposes a theoretical framework for analyzing the expressive capabilities of GNNs in capturing different graph structures. Obtaining a node's full neighborhood is inefficient, since the number of neighbors a node has can range from one to more than a thousand. Hamilton et al. [44] therefore generated representations by sampling and aggregating features from a node's local neighborhood. Veličković et al. [5] assumed that the contributions of neighboring nodes to the central node are neither identical, as in GraphSAGE, nor predetermined, as in GCN, and assigned distinct edge weights based on node features during the aggregation process. GCN is primarily inspired by the latest deep learning methods and therefore may inherit unnecessary complexity and redundant computation. Wu et al. [20] reduced complexity and computation by successively removing nonlinearities and collapsing weight matrices between consecutive layers.

Many other graph neural network models are reviewed in recent surveys [6, 45]. However, almost all of these GNNs assume that the observed structure accurately reflects the underlying node relationships, which considerably constrains their ability to handle uncertainty in the graph topology.

II-B Graph Structure Learning

GSL attempts to approximate a better structure for the original graph, which is not a newly emerged topic and has roots in prior works in network science [46, 47]. GSL methods can be generally classified into three categories: metric learning approaches, probabilistic modeling approaches and direct optimization approaches.

Metric learning methods refine the graph structure by learning a metric function that evaluates the similarity between pairs of node representations. For example, Zhang et al. [48] and Wang et al. [9] leveraged cosine similarity to model the edge weights. Zhang et al. [48] detected fake edges between nodes with different features and labels, and mitigated their negative impact on prediction by removing these edges or reducing their weight in neural message passing. Wang et al. [9] employed a distinct method to derive the final node embeddings: it diffuses node features across the original graph and integrates the representations from both the generated feature graph and the original input graph using an attention mechanism. Chen et al. [25] constructed the graph via a multi-head self-attention network in an end-to-end graph learning framework that iteratively optimizes the graph structure and node embeddings. The core idea of IDGL [25] is that better node embeddings are beneficial for capturing a better graph structure.

Probabilistic modeling approaches assume that the graph is generated through a sampling process from certain distributions, and they use learnable parameters to model the probability of sampling edges. Franceschi et al. [23] modeled the edges between each pair of nodes by sampling from Bernoulli distributions with learnable parameters, and presented GSL as a bilevel programming problem, which was the first work on probabilistic modeling for GSL. The approach proposed by Zhang et al. [49] employed Monte Carlo dropout to sample the learnable model parameters multiple times for each generated graph. Meanwhile, the method developed by Wang et al. [22] treated multi-view information, such as multi-order neighborhood similarity, as observations of the optimal graph structure, and then derived the final graph structure based on Bayesian inference.

Direct optimization approaches consider the graph adjacency matrix as trainable parameters, which are tuned jointly alongside the primary GNN parameters. Yang et al. [50] aimed to leverage a given class label to both refine the graph structure and optimize the parameters of the GNN simultaneously. Jin et al. [19] focused on investigating key graph properties like sparsity, low rank, and feature smoothness, with the goal of designing more robust graph neural network models.

GSL methods can also be roughly split into single-view and multi-view methods. However, these methods do not sufficiently explore theoretical guidance for learning the optimal structure. A more comprehensive review of GSL can be found in a recent study [21].

III Notations and Backgrounds

In this section, we present the notations and backgrounds related to this paper.

III-A Notations

Let G=(\bm{A},\bm{X}) be a graph with adjacency matrix \bm{A}\in\mathbb{R}^{|\bm{V}|\times|\bm{V}|} and node feature matrix \bm{X}\in\mathbb{R}^{|\bm{V}|\times F}, where \bm{V}:=\{v_{i}\}_{i=1}^{N} denotes the node set. Following the commonly adopted semi-supervised node classification setting, only a small portion of nodes, denoted as \bm{V}_{L}:=\{v_{i}\}_{i=1}^{M}, have associated labels, represented as \bm{Y}:=\{y_{i}\}_{i=1}^{M}, where y_{i} corresponds to the label of node v_{i}. \bm{L}=\bm{I}_{N}-\bm{D}^{-1/2}\bm{A}\bm{D}^{-1/2} denotes the normalized graph Laplacian matrix, where \bm{I}_{N} represents the identity matrix and \bm{D}\in\mathbb{R}^{|\bm{V}|\times|\bm{V}|} is a diagonal degree matrix with \bm{D}_{i,i}=\sum_{j}\bm{A}_{i,j}.

Given an input graph G=(\bm{A},\bm{X}) and partial node labels \bm{Y}, the objective of GSL for GNNs is to jointly learn an optimal graph structure and the GNN model parameters, with the aim of enhancing the node classification performance for unlabeled nodes. The main notations used in this paper are summarized in Table I.

III-B Graph Neural Network

Modern GNNs stack multiple graph convolution layers to learn high-level node representations. The convolution operation usually consists of two steps: aggregation and update, which are respectively represented as follows:

\bm{m}_{i}^{(l)}=\mathrm{AGGREGATE}^{(l)}\big(\{\bm{h}_{j}^{(l)}:v_{j}\in\mathcal{N}(v_{i})\}\big) \quad (1)
\bm{h}_{i}^{(l+1)}=\mathrm{UPDATE}^{(l)}\big(\bm{h}_{i}^{(l)},\bm{m}_{i}^{(l)}\big) \quad (2)

where \bm{m}_{i}^{(l)} and \bm{h}_{i}^{(l)} are the message vector and the hidden embedding of node v_{i} at the l-th layer, respectively. The set \mathcal{N}(v_{i}) consists of nodes adjacent to node v_{i}. If l=0, then \bm{h}_{i}^{(0)}=\bm{x}_{i}. \mathrm{AGGREGATE}^{(l)}(\cdot) and \mathrm{UPDATE}^{(l)}(\cdot) are specified by the particular model. In an L-layer network, the final embedding \bm{h}_{i}^{(L)} is fed to a linear fully connected layer for the classification task.
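To make the aggregation and update steps concrete, the following is a minimal dense-matrix sketch (not a specific model from the literature) that instantiates Eq. (1) with mean aggregation and Eq. (2) with a linear transformation followed by a ReLU; class and variable names are illustrative.

```python
import torch
import torch.nn as nn


class MessagePassingLayer(nn.Module):
    """Illustrative instantiation of Eqs. (1)-(2): mean aggregation + linear update."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.update_fn = nn.Linear(2 * in_dim, out_dim)

    def forward(self, H, A):
        # H: (N, in_dim) node embeddings h_i^{(l)}; A: (N, N) dense adjacency matrix.
        deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)
        M = (A @ H) / deg                                             # Eq. (1): mean over neighbors
        return torch.relu(self.update_fn(torch.cat([H, M], dim=1)))  # Eq. (2): update


# Toy usage: 5 nodes with 8-dimensional features.
A = (torch.rand(5, 5) > 0.5).float()
H = torch.randn(5, 8)
print(MessagePassingLayer(8, 16)(H, A).shape)  # torch.Size([5, 16])
```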

III-C Graph Information Bottleneck

Inspired by the Information Bottleneck (IB), Wu et al. [27] proposed the GIB principle to optimize node-level representations \bm{Z} to capture the minimal sufficient information within the input graph data G=(\bm{A},\bm{X}) required for predicting the target \bm{Y}. The objective is defined as the following optimization:

\arg\min_{\bm{Z}} -I(\bm{Z};\bm{Y})+\beta I(\bm{Z};G) \quad (3)

Intuitively, the first term -I(\bm{Z};\bm{Y}) encourages the representations \bm{Z} to be maximally informative about the target (sufficient). The second term I(\bm{Z};G) serves to prevent \bm{Z} from absorbing information from the data that is not pertinent to predicting the target (minimal). The Lagrangian multiplier \beta trades off sufficiency and minimality. Existing works based on GIB primarily focus on graph-level tasks [29, 51, 28]. In this paper, we specifically discuss GIB for node classification tasks.

TABLE I: SUMMARY OF THE MAIN NOTATIONS IN THIS PAPER
Notations Descriptions
\bm{A} original structure
\bm{A}^{*} the optimized graph structure
\bm{X} original features
\bm{x}_{i} feature vector of node v_{i}
\bm{V} node set
\bm{D} diagonal degree matrix
\bm{D}_{ii} the i-th diagonal element of the degree matrix \bm{D}
G^{*} the optimized graph
G_{r1} the redefined graph 1
G_{r2} the redefined graph 2
\bm{v}_{i} the embedding of node v_{i} in graph G_{r1}
\bm{u}_{i} the embedding of node v_{i} in graph G^{*}
s(\bm{u}_{i},\bm{v}_{i}) cosine similarity of \bm{u}_{i} and \bm{v}_{i}
g_{s} filter kernel
F the dimension of features
N the number of nodes
\beta Lagrangian multiplier

IV Methodology

In this section, we illustrate the proposed streamlined GSL model GaGSL guided by GIB. We begin with an overview of GaGSL in subsection IV-A. Subsequently, we detail the three main parts of GaGSL (i.e., global feature and structure augmentation in subsection IV-B, structure redefinition in subsection IV-C, and GIB guidance in subsection IV-D). Finally, we describe the process of joint iterative optimization in subsection IV-E.

Figure 2: Overview of GaGSL. Given original features \bm{X} and structure \bm{A} as input, GaGSL consists of the following three parts: (1) Global feature and structure augmentation: augmented features \hat{\bm{X}} and augmented structure \hat{\bm{A}} are obtained by global feature augmentation and global structure augmentation, respectively; (2) Structure redefinition: based on the augmented graph data (\bm{A},\hat{\bm{X}}) or (\hat{\bm{A}},\bm{X}), the graph structure is redefined using a structure estimator; (3) GIB guidance: learning a minimal sufficient graph structure guided by GIB.

IV-A Overview

Most existing GNNs assume that the observed graph structure accurately reflects the true relationships between nodes. However, this assumption often fails in real-world scenarios where graphs are typically noisy. This pitfall in the observed graph can lead to a rapid decline in GNN performance. Thus, a natural and promising approach is to jointly optimize the graph structure and the GNN model parameters in an integrated manner. In this paper, the proposed GaGSL aims to learn compact and informative graph structure for node classification tasks guided by the principle of GIB. Fig. 2 provides an overview of the GaGSL model.

IV-B Global Feature and Structure Augmentation

Most of the current research in GSL is predominantly conducted from a single structure. However, this single-structure approach is susceptible to biases that can result in an incomplete understanding of the entire graph structure, ultimately limiting the performance and robustness of the model. To mitigate this concern, we employ a two-pronged approach that combines global feature augmentation and global structure augmentation. This aims to comprehensively understand the graph structure from multiple perspectives.

IV-B1 Global Feature Augmentation

Nodes located in different regions of a graph may exhibit similar structural roles within their local network topology, and recognizing these roles is essential for understanding network organization. For example, nodes v_{1} and v_{6} in Fig. 2 exhibit similar structural roles despite being far apart in the graph. Following previous works [31, 52], we use spectral graph wavelets and the empirical characteristic function to generate a structural embedding for every node.

The filter kernel g_{s} is characterized by a scaling parameter s that controls the reach of the diffusion process, where a greater s promotes wider-ranging diffusion.

In this paper, we employ the heat kernel g_{s}(\lambda)=e^{-\lambda s}. The spectral graph wavelet \Psi_{s}(v_{i}) centered around node v_{i} is given by an N-dimensional vector:

\Psi_{s}(v_{i})=\bm{U}\,\mathrm{Diag}\big(g_{s}(\lambda_{1}),g_{s}(\lambda_{2}),\ldots,g_{s}(\lambda_{N})\big)\bm{U}^{T}\bm{\delta}_{i} \quad (4)

where \lambda_{i} denotes the i-th eigenvalue of the graph Laplacian \bm{L} and \bm{U} is the matrix of its eigenvectors. \bm{\delta}_{i} is the one-hot vector of node v_{i}.

To address the node mapping problem, we regard the wavelets as probability distributions and describe them using empirical characteristic functions. Following [28], the empirical characteristic function of v_{i} is:

\Phi_{s}(v_{i},t)=\frac{1}{N}\sum_{n=1}^{N}e^{-i\Psi_{s}(v_{i})_{n}t} \quad (5)

where \Psi_{s}(v_{i})_{n} denotes the n-th entry of \Psi_{s}(v_{i}). Lastly, the structural embedding \bm{h}_{s}(v_{i}) of node v_{i} is obtained by sampling the 2-dimensional parametric function defined in Eq. (5) at d different points \{t_{1},t_{2},\ldots,t_{d}\} and concatenating the resulting values:

\bm{h}_{s}(v_{i})=\big[\mathrm{Re}(\Phi_{s}(v_{i},t_{i})),\ \mathrm{Im}(\Phi_{s}(v_{i},t_{i}))\big]_{t_{1},t_{2},\ldots,t_{d}} \quad (6)

To effectively encode both the local and global structural roles of a node, we consider a set of different scales \{s_{1},s_{2},\ldots,s_{m}\} to integrate the information from different neighborhood radii and obtain a multi-scale structural role embedding:

\bm{h}(v_{i})=\mathrm{Concat}\big(\bm{h}_{s_{1}}(v_{i}),\bm{h}_{s_{2}}(v_{i}),\ldots,\bm{h}_{s_{m}}(v_{i})\big) \quad (7)

We can then construct the augmented feature matrix \hat{\bm{X}} via Eq. (7).
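As a rough illustration of Eqs. (4)-(7), the sketch below computes heat-kernel wavelets through a dense eigendecomposition and samples the empirical characteristic function at a few points. The scales and sample points are placeholder values rather than the settings used in this paper, and the dense eigendecomposition is only practical for small graphs.

```python
import numpy as np


def structural_embeddings(A, scales=(0.5, 1.0), t_points=np.linspace(0.0, 10.0, 5)):
    """Sketch of Eqs. (4)-(7): heat-kernel wavelets + empirical characteristic function."""
    N = A.shape[0]
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros(N)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    L = np.eye(N) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]   # normalized Laplacian
    lam, U = np.linalg.eigh(L)                                      # eigendecomposition of L
    parts = []
    for s in scales:
        Psi = U @ np.diag(np.exp(-s * lam)) @ U.T                   # Eq. (4), all nodes at once
        # Eq. (5): average exp(-i * Psi_s(v_i)_n * t) over the N entries of each wavelet.
        phi = np.stack([np.exp(-1j * Psi * t).mean(axis=1) for t in t_points], axis=1)
        parts.append(np.concatenate([phi.real, phi.imag], axis=1))  # Eq. (6)
    return np.concatenate(parts, axis=1)                            # Eq. (7)


A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(structural_embeddings(A).shape)  # (3, 20): 2 scales * 5 points * (Re, Im)
```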

IV-B2 Global Structure Augmentation

In addition to structural role embeddings, we also augment the structure from another perspective. Specifically, we employ the widely used diffusion matrix to capture the global relationships between nodes. We utilize Personalized PageRank (PPR) [53], which provides a comprehensive representation of the global structure. PPR has a closed-form solution given as follows:

\hat{\bm{A}}=\alpha\big(\bm{I}_{N}-(1-\alpha)\bm{D}^{-1/2}\bm{A}\bm{D}^{-1/2}\big)^{-1} \quad (8)

where \alpha denotes the restart probability. Note that \hat{\bm{A}} can be dense; thus, on some datasets we keep only 5 edges for each node, corresponding to its top-5 highest-affinity nodes.
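A small sketch of Eq. (8) is given below, with assumed values for the restart probability \alpha and the per-node budget k; the dense matrix inverse is for illustration only and would be replaced by a sparse or approximate solver on large graphs.

```python
import numpy as np


def ppr_diffusion(A, alpha=0.15, k=5):
    """Sketch of Eq. (8): closed-form PPR diffusion, sparsified to the top-k affinities per node."""
    N = A.shape[0]
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros(N)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    S = alpha * np.linalg.inv(np.eye(N) - (1.0 - alpha) * A_norm)   # dense PPR matrix
    # Keep only the k largest affinities in each row (the paper keeps 5 on some datasets).
    A_aug = np.zeros_like(S)
    top = np.argsort(-S, axis=1)[:, :k]
    rows = np.arange(N)[:, None]
    A_aug[rows, top] = S[rows, top]
    return A_aug
```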

IV-C Structure Redefinition

Given the two graphs G_{af}=(\bm{A},\hat{\bm{X}}) and G_{as}=(\hat{\bm{A}},\bm{X}), to further capture the complex associations and semantic similarities between nodes, we apply a structure estimator to each graph. Specifically, for graph G_{af}, we apply one GCN [10] layer followed by an MLP layer:

\bm{H}^{1}=\mathrm{GCN}(\bm{A},\hat{\bm{X}}) \quad (9)
w_{ij}^{1}=\mathrm{MLP}\big([\bm{h}_{i}^{1}\,\|\,\bm{h}_{j}^{1}]\big) \quad (10)

where w_{ij}^{1} denotes the weight between node v_{i} and node v_{j}, and \bm{h}_{i}^{1} and \bm{h}_{j}^{1} are the embeddings of node v_{i} and node v_{j}, respectively. Here, to mitigate resource consumption in terms of both space and time, we only estimate weights for the h-order neighbors of each node. Then, w_{ij}^{1} is normalized via the softmax function to get the final weight:

\bm{S}_{ij}^{1}=\frac{\exp(w_{ij}^{1})}{\sum_{k}\exp(w_{ik}^{1})} \quad (11)

We can construct a similarity matrix \bm{S}^{1} via Eq. (11). The original graph structure \bm{A} carries relatively rich information. Ideally, the learned graph structure \bm{S}^{1} can complement the original graph structure \bm{A}, creating an optimized graph for GNNs that enhances performance on the downstream task [25]. Thus, the matrix \bm{A} is combined with the similarity matrix \bm{S}^{1} to get the redefined structure \bm{A}_{r1}:

\bm{A}_{r1}=\bm{A}+\gamma^{1}\bm{S}^{1} \quad (12)

where \gamma^{1}\in[0,1] is a combination coefficient. Similarly, we can obtain the redefined structure \bm{A}_{r2} for graph G_{as}:

\bm{A}_{r2}=\mu\bm{A}+(1-\mu)\hat{\bm{A}}+\gamma^{2}\bm{S}^{2} \quad (13)

where \gamma^{2},\mu\in[0,1] are combination coefficients. Note that when applying Eq. (11) to the diffusion matrix, the top-k neighbors of each node are selected based on the PPR values.

After structure redefinition, we obtain two corresponding graphs G_{r1}=(\bm{A}_{r1},\bm{X}) and G_{r2}=(\bm{A}_{r2},\bm{X}).
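For reference, the following dense sketch mirrors Eqs. (9)-(12) for the feature-augmented graph: one GCN-style propagation, an MLP over concatenated pair embeddings, a row-wise softmax restricted to a boolean candidate mask (standing in for the h-order neighborhood), and combination with the original adjacency matrix. All names and default values are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn


class StructureEstimator(nn.Module):
    """Sketch of Eqs. (9)-(12): GCN embedding, pairwise MLP scores, softmax, combine with A."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.gcn_weight = nn.Linear(in_dim, hid_dim, bias=False)
        self.mlp = nn.Sequential(nn.Linear(2 * hid_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, 1))

    def forward(self, A, X_aug, cand_mask, gamma=0.5):
        # Eq. (9): one symmetrically normalized propagation step on (A, X_aug).
        A_hat = A + torch.eye(A.size(0))
        d = A_hat.sum(dim=1).pow(-0.5)
        H = torch.relu((d[:, None] * A_hat * d[None, :]) @ self.gcn_weight(X_aug))
        # Eq. (10): MLP([h_i || h_j]) for every candidate pair.
        N = H.size(0)
        hi = H.unsqueeze(1).expand(N, N, -1)
        hj = H.unsqueeze(0).expand(N, N, -1)
        w = self.mlp(torch.cat([hi, hj], dim=-1)).squeeze(-1)
        # Eq. (11): softmax over each node's candidate neighbors (cand_mask is boolean).
        w = w.masked_fill(~cand_mask, float("-inf"))
        S = torch.softmax(w, dim=1).nan_to_num()
        # Eq. (12): combine the learned similarities with the original structure.
        return A + gamma * S
```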

IV-D GIB Guidance

In this section, we address two questions: how can we obtain the optimal structure \bm{A}^{*} for node classification tasks, and how can we guide the training of \bm{A}^{*} so that it is minimal and sufficient? To avoid introducing new parameters and to keep the model as simple as possible, we obtain the final structure by averaging the two redefined structures:

\bm{A}^{*}=\frac{1}{2}(\bm{A}_{r1}+\bm{A}_{r2}) \quad (14)

Recall that we aim to learn a minimum sufficient graph structure for node classification tasks. In other words, we want the learned representations \bm{Z}^{*} based on G^{*}=(\bm{A}^{*},\bm{X}) to contain only label-relevant information while filtering out label-irrelevant noise. To this end, we use GIB to guide the training of \bm{A}^{*} so that it is minimum sufficient. Following the standard practice of MI estimation [54], we presume that no information is lost during this process. Therefore, we have I(G^{*};G_{r1})\approx I(\bm{Z}^{*};G_{r1}) and I(G^{*};\bm{Y})\approx I(\bm{Z}^{*};\bm{Y}). For convenience in the following, we replace I(\bm{Z}^{*};\bm{Y}) with I(G^{*};\bm{Y}) and I(\bm{Z}^{*};G_{r1}) with I(G^{*};G_{r1}). The objectives of GaGSL are as follows:

\arg\min_{G^{*}} -I(G^{*};\bm{Y})+\beta^{1}I(G^{*};G_{r1}) \quad (15)
\arg\min_{G^{*}} -I(G^{*};\bm{Y})+\beta^{2}I(G^{*};G_{r2}) \quad (16)

By adding Eq. (15) and Eq. (16), we obtain:

\arg\min_{G^{*}} -I(G^{*};\bm{Y})+\beta^{1}I(G^{*};G_{r1})+\beta^{2}I(G^{*};G_{r2}) \quad (17)

We can simplify Eq. (17) by letting \beta^{1}=\beta^{2}=\beta:

\arg\min_{G^{*}} -I(G^{*};\bm{Y})+\beta\big(I(G^{*};G_{r1})+I(G^{*};G_{r2})\big) \quad (18)

where the first term -I(G^{*};\bm{Y}) encourages G^{*} to contain maximum information about the labels \bm{Y}. The second term I(G^{*};G_{r1})+I(G^{*};G_{r2}) encourages G^{*} to contain as little label-irrelevant information from G_{r1} and G_{r2} as possible.

The non-Euclidean nature of graph data makes it challenging to estimate the MI in Eq. (18) accurately [55]. Therefore, we introduce a variational upper bound of -I(G^{*};\bm{Y}), and use the InfoNCE [56, 57] approximation to calculate I(G^{*};G_{r1}) and I(G^{*};G_{r2}). First, we examine the prediction term -I(G^{*};\bm{Y}).

Proposition 4.1 (Upper bound of -I(G^{*};\bm{Y})). Given graph G with label \bm{Y} and G^{*} learned from G, we have

-I(G^{*};\bm{Y})\leq-\iint p(\bm{Y},G^{*})\log\big(q_{\theta}(\bm{Y}|G^{*})\big)\,d\bm{Y}\,dG^{*}+H(\bm{Y}) \quad (19)

where q_{\theta}(\bm{Y}|G^{*}) is the variational approximation of p(\bm{Y}|G^{*}). We have the following proof based on Sun et al. [28]:

-I(G^{*};\bm{Y})=-\iint p(\bm{Y},G^{*})\log\frac{p(\bm{Y},G^{*})}{p(\bm{Y})p(G^{*})}\,d\bm{Y}\,dG^{*}=-\iint p(\bm{Y},G^{*})\log\frac{p(\bm{Y}|G^{*})}{p(\bm{Y})}\,d\bm{Y}\,dG^{*} \quad (20)

Since p(\bm{Y}|G^{*}) is intractable, let q_{\theta}(\bm{Y}|G^{*}) be the variational approximation of the true posterior p(\bm{Y}|G^{*}). According to the non-negativity of the Kullback-Leibler divergence:

D_{KL}\big(p(\bm{Y}|G^{*})\,\|\,q_{\theta}(\bm{Y}|G^{*})\big)\geq 0\ \Longrightarrow\ \int p(\bm{Y}|G^{*})\log p(\bm{Y}|G^{*})\,d\bm{Y}\geq\int p(\bm{Y}|G^{*})\log q_{\theta}(\bm{Y}|G^{*})\,d\bm{Y} \quad (21)

Plugging Eq. (21) into Eq. (20), we have

-I(G^{*};\bm{Y})\leq-\iint p(\bm{Y},G^{*})\log\frac{q_{\theta}(\bm{Y}|G^{*})}{p(\bm{Y})}\,d\bm{Y}\,dG^{*}=-\iint p(\bm{Y},G^{*})\log q_{\theta}(\bm{Y}|G^{*})\,d\bm{Y}\,dG^{*}+H(\bm{Y}) \quad (22)

where H(\bm{Y}) is the entropy of the label \bm{Y}, which can be ignored in the optimization procedure.

It is not trivial to compute the integral directly. To optimize the objective in Eq. (19), we approximate the integral by Monte Carlo sampling [58] of all training samples, so that we have:

-I(G^{*};\bm{Y})\leq-\iint p(\bm{Y},G^{*})\log q_{\theta}(\bm{Y}|G^{*})\,d\bm{Y}\,dG^{*}\approx\frac{1}{N}\sum_{i=1}^{N}\big\{-\log q_{\theta}(\bm{Y}_{i}|\bm{Z}_{i}^{*})\big\}:=\mathcal{L}_{cls}(\bm{Z}^{*},\bm{Y}) \quad (23)

where \bm{Z}_{i}^{*} represents the embedding of node v_{i} and q_{\theta}(\bm{Y}|G^{*}) provides the label distribution of the learned graph G^{*}, which can be modeled as a classifier. The classification loss \mathcal{L}_{cls} is chosen to be the cross-entropy loss.

Next, we introduce how to approximate the mutual information I(G^{*};G_{r1}) and I(G^{*};G_{r2}) using InfoNCE. The InfoNCE loss [32] has been shown to maximize a lower bound of MI. Here, we design an MI calculator. Specifically, for G_{r1} we apply one GCN [10] layer followed by a shared two-layer MLP:

\bm{H}^{r1}=\mathrm{GCN}(\bm{A}_{r1},\bm{X}) \quad (24)
\bm{H}_{p}^{r1}=\mathrm{MLP}(\bm{H}^{r1}) \quad (25)

where \bm{H}^{r1} and \bm{H}_{p}^{r1} are the node representations obtained from the GCN and the MLP, respectively. Similarly, we can get the projected embeddings \bm{H}_{p}^{r2} and \bm{H}_{p}^{*} of G_{r2} and G^{*}, respectively. For readability, we denote the embedding of node v_{i} in graph G^{*} as \bm{u}_{i}, and the embedding of v_{i} in graph G_{r1} as \bm{v}_{i}. Then, following GCA [59], the InfoNCE loss between graph G_{r1} and graph G^{*} is given by:

\mathcal{L}(G^{*},G_{r1})=\frac{1}{2B}\sum_{i=1}^{B}\big[\ell(\bm{u}_{i},\bm{v}_{i})+\ell(\bm{v}_{i},\bm{u}_{i})\big] \quad (26)

where B is the number of randomly sampled nodes. The pairwise objective for each positive pair (\bm{u}_{i},\bm{v}_{i}) is defined as follows:

\ell(\bm{u}_{i},\bm{v}_{i})=\log\frac{e^{s(\bm{u}_{i},\bm{v}_{i})/\tau}}{e^{s(\bm{u}_{i},\bm{v}_{i})/\tau}+\sum_{k\neq i}e^{s(\bm{u}_{i},\bm{v}_{k})/\tau}} \quad (27)

where s(\bm{u}_{i},\bm{v}_{i}) is the cosine similarity and \tau is the temperature coefficient. Similarly, we can calculate the loss \mathcal{L}(G^{*},G_{r2}).
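A compact sketch of this MI calculator's loss is shown below. It follows the general form of Eqs. (26)-(27), treating the same node in the two views as the positive pair and all other cross-view pairs as negatives; the cross-entropy corresponds to the negative of the log-ratio in Eq. (27), i.e., the quantity being minimized. The temperature value and function names are assumptions.

```python
import torch
import torch.nn.functional as F


def infonce_loss(U, V, tau=0.5):
    """Sketch of Eqs. (26)-(27): symmetric InfoNCE between projected embeddings of G* and G_r1."""
    U = F.normalize(U, dim=1)                  # so that dot products are cosine similarities
    V = F.normalize(V, dim=1)
    sim = U @ V.t() / tau                      # (B, B) matrix of s(u_i, v_j) / tau
    targets = torch.arange(U.size(0))          # the positive pair for row i is column i
    # Cross-entropy of each row (and column) gives -log of the softmax of the positive pair.
    return 0.5 * (F.cross_entropy(sim, targets) + F.cross_entropy(sim.t(), targets))


# Toy usage with B = 4 sampled nodes and 16-dimensional projections.
loss = infonce_loss(torch.randn(4, 16), torch.randn(4, 16))
```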

IV-E Iterative Optimization

Simultaneously optimizing the parameters \Theta of the structure estimator, \Phi of the MI calculator, and \Omega of the classifier is challenging, and the interdependence among them further complicates this process. In this study, inspired by Wang et al. [22], we employ an alternating optimization approach to iteratively update \Theta, \Phi, and \Omega.

IV-E1 Update \Phi

The parameters involved in Eqs. (24) and (25) are regarded as the parameters \Phi of the MI calculator. To encourage G^{*} to contain as little label-irrelevant information from G_{r1} and G_{r2} as possible, the objective function used to optimize the MI calculator is:

\mathcal{L}_{MI}=\mathcal{L}(G^{*},G_{r1})+\mathcal{L}(G^{*},G_{r2}) \quad (28)

IV-E2 Update \Omega

We can obtain the final learned structure \bm{A}^{*} via Eq. (14). Then we employ a two-layer GCN [10] to obtain node representations:

\bm{Z}^{*}=\mathrm{GCN}(\bm{A}^{*},\bm{X}) \quad (29)

The parameters involved in Eq. (29) are collectively considered as the classifier's parameters \Omega, and the cross-entropy loss is utilized for optimization:

\mathcal{L}_{cls}=\sum_{v_{i}\in\bm{V}_{L}}\mathrm{Cross\text{-}Entropy}(\bm{Z}_{i}^{*},y_{i}) \quad (30)
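As a reference for Eqs. (29)-(30), here is a minimal dense two-layer GCN classifier sketch operating on the learned structure \bm{A}^{*}; the hidden size, normalization details, and names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseGCNClassifier(nn.Module):
    """Sketch of Eq. (29): two dense GCN layers on the learned structure A*."""

    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.W1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.W2 = nn.Linear(hid_dim, n_classes, bias=False)

    @staticmethod
    def normalize(A):
        A_hat = A + torch.eye(A.size(0))
        d = A_hat.sum(dim=1).pow(-0.5)
        return d[:, None] * A_hat * d[None, :]

    def forward(self, A_star, X):
        A_norm = self.normalize(A_star)
        Z = torch.relu(A_norm @ self.W1(X))
        return A_norm @ self.W2(Z)             # logits for each node

# Eq. (30): cross-entropy over the labeled nodes (train_idx and y are assumed to be given).
# loss = F.cross_entropy(model(A_star, X)[train_idx], y[train_idx])
```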

IV-E3 Update \Theta

After training the classifier and the MI calculator, we proceed with the optimization of the structure estimator parameters \Theta. Guided by GIB, the resulting loss function is as follows:

\mathcal{L}=\mathcal{L}_{cls}-\beta\mathcal{L}_{MI} \quad (31)

where \beta is a balance parameter trading off sufficiency and minimality. The first term \mathcal{L}_{cls} motivates G^{*} to contain maximal information about the labels \bm{Y} in order to improve performance on the prediction target. The second term \mathcal{L}_{MI} minimizes the information in G^{*} from G_{r1} and G_{r2} that is label-irrelevant for predicting the target.

IV-E4 Training Algorithm

Based on the previously described update and inference rules, the training algorithm for GaGSL is outlined in Algorithm 1. Specifically, the algorithm begins by initializing all the parameters of GaGSL. In lines 4-8, GaGSL updates the parameters \Theta of the structure estimator. In line 10, the redefined structures are combined. In lines 11-14, the parameters \Phi of the MI calculator are optimized. Finally, in lines 16-18, the classifier parameters \Omega are updated. Through this alternating and iterative updating, the graph structure \bm{A}^{*} and the model parameters mutually promote each other.

Algorithm 1 Model training for GaGSL.
Input: adjacency matrix \bm{A}, feature matrix \bm{X}, labels \bm{Y}_{L}, total epochs T, classifier training epochs T_{c}, structure estimator training epochs T_{v}, MI calculator training epochs T_{m}
Output: learned graph structure \bm{A}^{*}, GCN parameters \Omega
1:  Initialize parameters \Theta, \Omega, \Phi;
2:  Initialize views G_{i1}, G_{i2};
3:  for epoch = 1, 2, ..., T do
4:      for j = 1, 2, ..., T_{v} do
5:          calculate \bm{S}_{ij}^{1}, \bm{S}_{ij}^{2} with Eqs. (9), (10), and (11);
6:          obtain \bm{A}_{r1} with Eq. (12);
7:          obtain \bm{A}_{r2} with Eq. (13);
8:          update \Theta with Eq. (31);
9:      end for
10:     \bm{A}^{*} = \frac{1}{2}(\bm{A}_{r1}+\bm{A}_{r2});
11:     for j = 1, 2, ..., T_{m} do
12:         calculate \mathcal{L}(G^{*},G_{r1}) with Eqs. (24)-(26);
13:         calculate \mathcal{L}(G^{*},G_{r2}) with Eqs. (24)-(26);
14:         update \Phi with Eq. (28);
15:     end for
16:     for j = 1, 2, ..., T_{c} do
17:         calculate node embeddings \bm{Z}^{*} with Eq. (29);
18:         update \Omega with Eq. (30);
19:     end for
20:  end for
21:  return \bm{A}^{*}, \Omega

V Experiments

In this section, we carry out a comprehensive evaluation to assess the effectiveness of the proposed GaGSL model. We first compare the performance of GaGSL against several state-of-the-art methods on the semi-supervised node classification task. Additionally, we perform an ablation study to verify the importance of each component within the GaGSL model. Then, we analyze the robustness of GaGSL. Finally, we present the graph structure visualization, hyper-parameter sensitivity and values in the learned structure.

V-A Experiment Setup

V-A1 Datasets

The eight datasets we employ consist of four academic networks (Cora, Citeseer, Wiki-CS, and MS Academic (MS)), three non-graph datasets (Wine, Breast Cancer (Cancer), and Digits) that are readily available in scikit-learn [60], and a blog graph dataset, Polblogs. Table II provides a summary of the statistical information about these datasets. It is important to note that, for the non-graph datasets, we adopt the approach described in [25] and construct a kNN graph as the original adjacency matrix.

TABLE II: The statistics of the datasets
Dataset #Nodes #Edges #Features #Classes #Train/#Val/#Test
Wine 178 3560 13 3 10/20/148
Cancer 569 22760 30 2 10/20/539
Digits 1797 43128 64 10 50/100/1647
Polblogs [19] 1222 33428 1490 2 121/123/978
Cora [10] 2708 5429 1433 7 140/500/1000
Citeseer [10] 3327 9228 3703 6 120/500/1000
Wiki-CS [61] 11701 291039 300 10 200/500/1000
MS [53] 18333 163788 6850 15 300/500/1000

V-A2 Baselines

To demonstrate the effectiveness of our proposed method, we compare the proposed GaGSL with two categories of baselines: three classical GNN models (GCN [10], GAT [5], SGC [20]) and four GSL based methods (Pro-GNN [19], IDGL [25], GEN [22], PRI-GSL [31]). The details are given as follows.

a) GCN: It directly encodes the graph structure using a neural network, and trains on a supervised target for all labeled nodes. This neural network employs an efficient layer-wise propagation rule, which is derived from a first-order approximation of spectral graph convolutions.

b) GAT: It introduces an attention-based mechanism for classifying nodes in graph-structured data. By stacking layers, it enables nodes to incorporate features from their neighbors and implicitly assigns different weights to different nodes in the neighborhood. Additionally, this model can be directly applied to inductive learning problems.

c) SGC: It alleviates the excessive complexity of GCNs by iteratively eliminating nonlinearities between GCN layers and consolidating weights into a single matrix.

d) Pro-GNN: To defend against adversarial attacks, it iteratively eliminates adversarial structure by preserving the graph low rank, sparsity, and feature smoothness, while maintaining the intrinsic graph structure.

e) IDGL: Building on the principle that improved node embeddings lead to better graph structure, it introduces an end-to-end graph learning framework for the joint iterative learning of graph structure and embeddings. Additionally, it frames the graph learning challenge as a similarity metric learning problem and employs adaptive graph regularization to manage the quality of the learned graph.

f) GEN: It is a graph structure estimation neural network composed of two main components: the structure model and the observation model. The structure model characterizes the underlying graph generation process, while the observation model incorporates multi-order neighborhood information to accurately infer the graph structure using Bayesian inference techniques.

g) PRI-GSL: It is an information-theoretic framework for learning graph structure, grounded in the Principle of Relevant Information to manage structure quality. It incorporates a role-aware graph structure learner to develop a more effective graph that maintains the graph’s self-organization.

V-A3 Implementation

For the three classical GNN models (GCN, GAT, SGC), we use the corresponding PyTorch Geometric library implementations [62]. For the four GSL-based methods (Pro-GNN, IDGL, GEN, and PRI-GSL), we utilize the source codes provided by the authors and adhere to the settings outlined in their original papers, with careful tuning. For the different datasets, we follow the original training/validation/test splits. For the proposed GaGSL, we use the Adam [63] optimizer and adopt 16 hidden dimensions. We set the learning rate for the classifier and the MI calculator to a fixed value of 0.01, while tuning it for the structure estimator across the values {0.1, 0.01, 0.001}. On the MS dataset we set the combination coefficient \mu to 0; on the other datasets we set \mu to 1. We test the combination coefficients \gamma^{1} and \gamma^{2} in the range {0.1, 0.5}. The dropout for the classifier is chosen from {0.3, 0.5, 0.7, 0.9}, and the dropout for the MI calculator is tuned amongst {0.2, 0.4, 0.6, 0.8}.

TABLE III: Quantitative results (\%\pm\sigma) on the semi-supervised node classification task. The best-performing models are bolded and runners-up are underlined. The "-" symbol indicates that experiments could not be run due to memory problems.
Dataset Metric GCN GAT SGC Pro-GNN IDGL GEN PRI-GSL GaGSL
Wine AUC 99.4±\pm0.1 98.1±\pm1.8 99.4±\pm0.1 \ul99.6±\pm0.1 99.6±\pm1.1 98.6±\pm0.7 99.1±\pm0.3 99.8±\pm0.1
F1-macro \ul97.5±\pm0.5 93.7±\pm3.1 97.2±\pm0.6 97.0±\pm0.3 95.6±\pm1.7 95.4±\pm1.7 91.9±\pm2.4 97.9±\pm0.3
F1-micro 97.4±\pm0.5 93.4±\pm4.2 97.2±\pm0.7 \ul97.5±\pm0.3 95.3±\pm1.8 95.1±\pm1.9 91.5±\pm2.5 97.9±\pm0.3
Cancer AUC 96.7±\pm0.4 95.8±\pm1.4 97.4±\pm0.2 97.8±\pm0.2 97.9±\pm0.9 97.8±\pm0.3 \ul98.4±\pm0.2 98.8±\pm0.2
F1-macro 91.5±\pm0.7 89.3±\pm2.1 91.3±\pm0.4 93.3±\pm0.5 91.9±\pm2.7 93.5±\pm0.3 \ul94.2±\pm0.6 94.6±\pm0.3
F1-micro 92.0±\pm0.7 89.8±\pm2.2 91.7±\pm0.4 93.8±\pm0.5 92.5±\pm2.5 93.9±\pm0.3 \ul94.6±\pm0.5 95.0±\pm0.2
Digits AUC 98.5±\pm1.7 99.0±\pm0.2 \ul99.2±\pm0.1 98.1±\pm0.2 98.9±\pm0.4 98.8±\pm0.4 98.3±\pm0.3 99.5±\pm0.1
F1-macro 88.9±\pm1.9 90.2±\pm0.7 89.4±\pm0.2 89.7±\pm0.3 \ul90.4±\pm1.2 92.0±\pm0.5 90.3±\pm0.8 92.7±\pm0.4
F1-micro 89.1±\pm1.8 90.3±\pm0.7 89.6±\pm0.2 89.8±\pm0.3 90.4±\pm1.2 \ul92.0±\pm0.5 90.4±\pm0.8 92.8±\pm0.4
Polblogs AUC 98.4±\pm0.0 97.2±\pm1.3 \ul98.5±\pm0.0 98.1±\pm0.2 98.0±\pm0.5 98.0±\pm0.5 98.4±\pm0.1 98.6±\pm0.1
F1-macro \ul95.3±\pm0.3 92.0±\pm2.6 94.8±\pm0.0 94.6±\pm0.7 94.4±\pm1.6 95.2±\pm0.8 95.1±\pm0.4 95.9±\pm0.2
F1-micro 95.0±\pm0.3 92.0±\pm2.7 94.8±\pm0.1 94.6±\pm0.7 94.5±\pm1.5 \ul95.2±\pm0.8 95.1±\pm0.5 95.9±\pm0.2
Cora AUC 93.7±\pm0.7 92.5±\pm0.7 96.1±\pm0.1 96.9±\pm0.9 \ul97.0±\pm0.3 94.0±\pm1.9 95.8±\pm0.3 97.3±\pm0.3
F1-macro 74.3±\pm1.8 70.3±\pm0.9 78.4±\pm0.2 78.8±\pm2.6 \ul80.4±\pm1.3 80.1±\pm1.3 75.0±\pm0.4 82.3±\pm1.1
F1-micro 75.1±\pm2.3 71.4±\pm1.1 79.4±\pm0.2 79.8±\pm2.6 \ul82.2±\pm2.6 91.4±\pm2.0 76.7±\pm0.7 83.8±\pm1.2
Citeseer AUC 87.5±\pm1.2 89.4±\pm0.2 89.9±\pm0.2 88.5±\pm0.3 91.4±\pm0.4 90.1±\pm1.7 88.6±\pm0.2 \ul90.6±\pm0.5
F1-macro 63.2±\pm0.4 63.3±\pm1.2 66.4±\pm0.5 63.1±\pm0.7 69.4±\pm0.4 69.4±\pm1.4 64.5±\pm0.4 \ul68.8±\pm0.9
F1-micro 67.1±\pm0.3 66.2±\pm1.2 70.6±\pm0.1 65.6±\pm0.8 71.9±\pm0.3 72.7±\pm1.3 67.6±\pm0.6 \ul72.1±\pm1.1
Wiki-CS AUC 91.3±\pm0.5 91.0±\pm0.6 \ul94.0±\pm0.0 93.3±\pm0.3 91.8±\pm0.2 92.6±\pm1.2 - 96.1±\pm2.7
F1-macro 62.5±\pm1.7 62.0±\pm1.7 67.5±\pm0.2 64.8±\pm2.0 \ul69.1±\pm1.1 68.4±\pm0.3 - 73.6±\pm2.4
F1-micro 67.4±\pm1.6 62.3±\pm1.4 71.5±\pm0.2 68.3±\pm1.2 \ul72.7±\pm0.8 71.1±\pm0.9 - 75.0±\pm2.4
MS AUC 95.8±\pm0.7 98.0±\pm0.2 \ul98.8±\pm0.1 - 96.5±\pm0.3 89.0±\pm0.8 - 99.6±\pm0.1
F1-macro 78.7±\pm3.6 80.7±\pm1.1 90.0±\pm0.1 - 78.5±\pm1.8 92.0±\pm0.6 - \ul90.2±\pm0.5
F1-micro 80.8±\pm3.4 83.4±\pm0.8 91.9±\pm0.2 - 82.9±\pm0.9 98.8±\pm0.3 - \ul92.3±\pm0.4
TABLE IV: Ablation study on various datasets. w/o FA (SA/SE/GIB) denotes that the feature augmentation (structure augmentation/structure estimator/GIB guidance) component is turned off.
Methods Cancer Polblogs Citeseer MS
w/o FA 93.9±0.3 95.6±0.2 67.8±1.7 89.9±0.2
w/o SA 93.6±0.4 94.0±0.6 65.6±3.0 89.0±0.5
w/o SE 93.8±0.5 95.3±0.3 67.7±1.8 89.1±0.4
w/o GIB 94.6±0.6 95.2±0.1 67.2±0.8 90.0±0.7
GaGSL 95.4±0.3 96.0±0.2 68.8±0.9 90.2±0.5

V-B Node Classification

In this section, we assess the proposed GaGSL on semi-supervised node classification, with the results presented in Table III. To conduct a comprehensive evaluation of our model, we utilize three widely-used performance metrics: F1-macro, F1-micro, and AUC. We report the mean and standard deviation of the results obtained over 10 independent trials with varying random seeds. Based on these evaluation results, we draw the following observations:

  • Compared with other baselines, the proposed GaGSL can obtain better or competitive results on all datasets, which shows that our cleverly designed GSL framework can effectively improve the node classification performance.

  • GaGSL demonstrates a significant performance improvement compared to the baseline GCN. Specifically, across all datasets, GaGSL achieves an AUC that is 0.4%-4.8% higher than GCN, an F1-macro score that is 0.4%-11.5% higher, and an F1-micro score that is 0.5%-11.5% higher. This observation suggests that GaGSL effectively mitigates the bias caused by a single perspective, enabling it to learn an appropriate graph structure by considering multiple perspectives.

  • Our performance improvement compared to other GSL methods demonstrates the effectiveness of utilizing GIB to guide GSL. The learned graph structure contains more valuable information and less noise, making it better suited for node classification tasks. Although GaGSL is the runner-up on the Citeseer dataset, its gap to the winner is marginal, possibly due to data imbalance or the presence of labeling noise.

V-C Ablation Study

Here, we describe the results of the ablation study for the different modules in the model. We report F1-macro and standard deviation results over 5 independent trials using different random seeds. According to the ablation study results presented in Table IV, the model’s performance exhibits a significant decline when the structure augmentation (SA) component is removed. The substantial drop in performance observed without the SA component suggests that the structure augmentation mechanism is highly important and plays a crucial role in enabling the model to effectively capture salient features. Notably, turning off any of the individual components results in a significant decrease in the model’s performance across all evaluated datasets, which underscores the effectiveness and importance of these components. Furthermore, the results highlight the benefits of leveraging GIB to guide the model training process.


Figure 3: Results of various models under random edge deletion on Cancer, Cora, and Citeseer datasets.

Figure 4: Results of various models under random edge addition on Cancer, Cora, and Citeseer datasets.

Figure 5: Results of various models under feature attack on Cancer, Cora, and Citeseer datasets.

V-D Defense Performance

In this section, we perform a careful evaluation of various methods, specifically focusing on comparing GSL models. These models exhibit the ability to adapt the original graph structure, making them more robust in comparison to other GNNs. To ensure a comprehensive evaluation, we conduct attacks on both edges and features, respectively.

V-D1 Attacks on edges

To attack edges, we generate synthetic datasets by deleting or adding edges on Cancer, Cora, and Citeseer following [25]. Specifically, for each graph in the dataset, we randomly remove 5%, 10%, or 15% of the edges, or randomly inject 25%, 50%, or 75% additional edges. We adopt the poisoning attack setting [64]: we first generate the attacked graphs and subsequently train the models using these graphs. The experimental results are displayed in Figs. 3 and 4.
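For concreteness, a sketch of how such random edge perturbations can be generated for an undirected graph is given below, with the add/delete ratios taken relative to the original edge count as described above; the function and argument names are illustrative.

```python
import numpy as np


def perturb_edges(A, add_ratio=0.0, del_ratio=0.0, seed=0):
    """Randomly delete del_ratio and add add_ratio of the original edges (undirected, dense A)."""
    rng = np.random.default_rng(seed)
    A = A.copy()
    edges = np.transpose(np.triu(A, 1).nonzero())
    n_edges = len(edges)
    # Randomly delete a fraction of the existing edges.
    drop = edges[rng.choice(n_edges, int(del_ratio * n_edges), replace=False)]
    A[drop[:, 0], drop[:, 1]] = A[drop[:, 1], drop[:, 0]] = 0
    # Randomly inject spurious edges between previously unconnected node pairs.
    n_add = int(add_ratio * n_edges)
    while n_add > 0:
        i, j = rng.integers(0, A.shape[0], size=2)
        if i != j and A[i, j] == 0:
            A[i, j] = A[j, i] = 1
            n_add -= 1
    return A
```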

As can be seen in Figs. 3 and 4, GaGSL consistently outperforms all other baselines as the perturbation rate increases. On the Cancer and Citeseer datasets, the performance of GaGSL fluctuates only slightly under different perturbation rates. On the Cora dataset, although GaGSL's performance declines with increasing perturbation rates, it still outperforms other baselines. Specifically, our model improves over vanilla GCN by 12.6% and over other GSL methods by 0.8%-11% when 50% of the edges are added randomly on Cora. This suggests that our method is more robust against such attacks. We also find that the GSL methods (i.e., Pro-GNN [19], IDGL [25], GEN [22], PRI-GSL [31]) generally outperform vanilla GCN at different perturbation rates on both the Cancer and Citeseer datasets; however, on the Cancer dataset, IDGL is not as robust to attacks as vanilla GCN. A possible reason is the presence of labeling imbalances or outliers in the data, which may interfere with the model's ability to learn the correct graph structure and lead to performance degradation.

V-D2 Attacks on feature

To attack features, we introduce independent Gaussian noise to the features following [27]. Specifically, we employ the mean of the maximum value of each node's features as the reference amplitude r, and for every feature dimension of each node we add Gaussian noise \lambda\cdot r\cdot\epsilon, where \lambda is the feature noise ratio and \epsilon\sim N(0,1). We evaluate the models' performance with \lambda\in\{0.1,0.3,0.5\}. Additionally, we perform poisoning experiments and present the results in Fig. 5.
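A short sketch of this feature attack is given below, with the reference amplitude r computed as the mean of each node's maximum feature value; names are illustrative.

```python
import numpy as np


def add_feature_noise(X, lam, seed=0):
    """Inject Gaussian noise lambda * r * eps into every feature entry (feature poisoning)."""
    rng = np.random.default_rng(seed)
    r = X.max(axis=1).mean()                   # reference amplitude: mean of per-node maxima
    return X + lam * r * rng.standard_normal(X.shape)
```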

Similarly, we can observe from Fig. 5 that in most cases, GaGSL consistently surpasses all other baselines and successfully resists feature attacks. As the feature attack rate increases, the performance of most methods decreases, but the decline of GaGSL is the slowest. For example, on the Cora dataset, as the perturbation rate increases from 0 to 50%, the performance of GaGSL decreases by 13.3%, while the performance of GCN decreases by 27.4%. In addition, we observe that under various feature attack scenarios, GaGSL consistently maintains top performance, whereas the rankings of other methods fluctuate. For example, on the Cora dataset, IDGL ranked second when the perturbation rate was 0%, but dropped to fifth, fourth, and fifth at perturbation rates of 10%, 30%, and 50%, respectively. Even more concerning, IDGL's performance was worse than that of GCN at perturbation rates of 30% and 50%. This further demonstrates the strong robustness of GaGSL against feature attacks. Combining the observations from Figs. 3, 4, and 5, we can conclude that GaGSL is able to approach the minimal sufficient structure, and thus demonstrates the capacity to withstand attacks on both graph edges and node features.

Figure 6: Heat maps of the probability matrices of (a) the original graph, (b) the perturbed graph, and (c) the learned graph on Cora. Note that the color shades of the three maps represent different scales, as shown by the bars on the right.

Figure 7: Impact of hyper-parameters $\gamma^{1}$ and $\gamma^{2}$ on (a) Polblogs and (b) Cora.

Figure 8: Impact of hyper-parameter $\beta$ on (a) Polblogs and (b) Cora.

V-E Graph Structure Visualization

Here, we visualize the probability matrices of the original graph structure, the perturbed graph structure, and the graph structure learned by GaGSL, and present them in Fig. 6.
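
As a rough sketch of how such heat maps can be produced, the code below estimates a community-by-community edge probability matrix from a (possibly weighted) adjacency matrix and node labels, treating classes as communities, and plots one panel per structure. This is our own illustrative construction and may differ in detail from how Fig. 6 was generated.

import numpy as np
import matplotlib.pyplot as plt

def block_probability_matrix(adj, labels):
    # adj: (N, N) symmetric adjacency (binary or weighted); labels: (N,) community ids.
    classes = np.unique(labels)
    P = np.zeros((len(classes), len(classes)))
    for i, ci in enumerate(classes):
        for j, cj in enumerate(classes):
            block = adj[np.ix_(labels == ci, labels == cj)]
            # Number of possible ordered pairs across (or within, excluding self-pairs) the groups.
            n_pairs = block.size if i != j else block.shape[0] * (block.shape[0] - 1)
            P[i, j] = block.sum() / max(n_pairs, 1)
    return P

def plot_heatmaps(mats, titles):
    fig, axes = plt.subplots(1, len(mats), figsize=(4 * len(mats), 3.5))
    for ax, P, t in zip(np.atleast_1d(axes), mats, titles):
        im = ax.imshow(P, cmap="viridis")
        ax.set_title(t)
        fig.colorbar(im, ax=ax)  # each panel keeps its own color scale, as in Fig. 6
    plt.tight_layout()
    plt.show()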

From the visualization, we can observe that the perturbed graph structure contains noisy connections, as indicated by the higher probability of edges between different communities compared to the original graph structure. These noisy connections degrade the quality of the graph structure, thereby reducing the performance of GNNs. Additionally, we observe that, in contrast to the perturbed graph, the learned graph structure weakens the connections between communities and strengthens the connections within communities.

This observation is expected because GaGSL is optimized based on the GIB principle. The GIB optimization allows GaGSL to effectively capture information in the graph structure that contributes to accurate node classification while constraining information that is label-irrelevant. This optimization process ensures that the learned graph structure is robust against noisy connections and focuses on preserving the most informative aspects of the graph for the task at hand.

V-F Hyper-parameter Sensitivity

In this subsection, we investigate the sensitivity of key hyper-parameters: the combination coefficients $\gamma^{1}$ in Eq. (12) and $\gamma^{2}$ in Eq. (13), as well as the balance parameter $\beta$ in Eq. (31). More concretely, we vary the values of $\gamma^{1}$, $\gamma^{2}$, and $\beta$ to analyze their impact on the performance of our proposed model. We vary $\gamma^{1}$ and $\gamma^{2}$ from 0 to 1, and $\beta$ from 0 to 0.1. For clarity, we report the node classification results on the Cora and Polblogs datasets, as similar trends are observed across the other datasets. The results are presented in Figs. 7 and 8.

As can be observed from Fig. 7, by tuning the value of $\gamma^{1}$ (or $\gamma^{2}$), we achieve better node classification performance than when $\gamma^{1}$ (or $\gamma^{2}$) is set to 0. This demonstrates that our designed structure estimator is capable of capturing the complex correlations and semantic similarities between nodes.

Observing Fig. 8, it is evident that as the value of $\beta$ increases, the performance of GaGSL first rises to a peak and then declines. When $\beta$ is small, $\mathcal{L}_{ML}$ has a minimal impact on the total loss $\mathcal{L}$, and $\mathcal{L}_{cls}$ dominates the model’s training. At this stage, increasing $\beta$ enhances the influence of $\mathcal{L}_{ML}$, thereby helping the model better constrain the label-irrelevant information from $G_{r1}$ and $G_{r2}$ in $G$, thus improving performance. However, if $\beta$ becomes too large, the model may focus excessively on $\mathcal{L}_{ML}$ and neglect $\mathcal{L}_{cls}$ during training, leading to a decline in performance. Overall, an appropriate value of $\beta$ can effectively balance the influence of $\mathcal{L}_{cls}$ and $\mathcal{L}_{ML}$, thereby enhancing the model’s performance.
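
For reference, the trade-off described above is consistent with a total objective of the form

$\mathcal{L} = \mathcal{L}_{cls} + \beta\,\mathcal{L}_{ML}, \qquad \beta \in [0, 0.1],$

where we stress that this is our reading of the discussion rather than a verbatim reproduction of Eq. (31), which may contain additional terms or weights.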

Figure 9: Normalized histograms of the learned adjacency matrix weights on (a) Citeseer and (b) Polblogs.

V-G Values in Learned Structure

To further elucidate the distribution of edge weights between community nodes, we categorize the edges into two groups: edges connecting nodes within the same community and edges connecting nodes across different communities. In Fig. 9, we show the normalized histograms of the learned structure values (weights) on the Citeseer and Polblogs datasets.
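
A minimal sketch of this analysis is given below, assuming a dense learned adjacency matrix and node labels used as community assignments; this is our own illustrative setup rather than the exact script behind Fig. 9.

import numpy as np
import matplotlib.pyplot as plt

def plot_weight_histograms(adj, labels, bins=10):
    # adj: (N, N) symmetric learned weight matrix; labels: (N,) community ids.
    rows, cols = np.nonzero(np.triu(adj, k=1))   # each undirected edge counted once
    weights = adj[rows, cols]
    same = labels[rows] == labels[cols]

    edges = np.linspace(0, 1, bins + 1)          # weight bins: [0, 0.1), [0.1, 0.2), ...
    plt.hist(weights[same], bins=edges, density=True, alpha=0.6, label="intra-community")
    plt.hist(weights[~same], bins=edges, density=True, alpha=0.6, label="inter-community")
    plt.xlabel("learned edge weight")
    plt.ylabel("normalized frequency")
    plt.legend()
    plt.show()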

From the histograms, it is apparent that the weights of edges between different communities are predominantly concentrated in the first bin (less than 0.1). In contrast, the weights of edges within the same community are distributed not only in the first bin but also in higher bins. This distribution demonstrates that GaGSL effectively differentiates between inter-community and intra-community edges when assigning weights. Consequently, this ability to assign appropriate weights enhances the performance and robustness of graph representations.

VI Conclusion

In this paper, we proposed a novel GSL method, GaGSL, which tackles the challenge of learning a graph structure for node classification tasks guided by the GIB principle. The GaGSL method consists of three parts: global feature and structure augmentation, structure redefinition, and GIB guidance. We first mitigate the inherent bias of relying solely on a single original structure through global feature and structure augmentation. Subsequently, we design structure estimators with different parameters to refine and optimize the graph structure. Finally, the optimization of the final graph structure is guided by the GIB principle. Experimental results on various datasets validate the superior effectiveness and robustness of GaGSL in learning compact and informative graph structures.

Future work will delve into further optimizing the GaGSL model to address label noise and data imbalance. Although this paper effectively mitigates structural noise, challenges posed by label noise and data imbalance remain unresolved. Therefore, upcoming research will concentrate on devising robust strategies to manage label noise and handle data imbalance issues. By tackling these challenges, we aim to ensure that the model achieves enhanced performance and robustness in real-world scenarios.

References

  • [1] Å. J. Holmgren, “Using graph models to analyze the vulnerability of electric power networks,” Risk analysis, vol. 26, no. 4, pp. 955–969, 2006.
  • [2] B. Krishnamurthy, P. Gill, and M. Arlitt, “A few chirps about twitter,” in Proceedings of the first workshop on Online social networks, 2008, pp. 19–24.
  • [3] Y. Zhang, Y. Fan, W. Song, S. Hou, Y. Ye, X. Li, L. Zhao, C. Shi, J. Wang, and Q. Xiong, “Your style your identity: Leveraging writing and photography styles for drug trafficker identification in darknet markets over attributed heterogeneous information network,” in The World Wide Web Conference, 2019, pp. 3448–3454.
  • [4] G. Taubin, T. Zhang, and G. Golub, “Optimal surface smoothing as filter design,” in Computer Vision—ECCV’96: 4th European Conference on Computer Vision Cambridge, UK, April 15–18, 1996 Proceedings, Volume I 4.   Springer, 1996, pp. 283–292.
  • [5] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio, “Graph attention networks,” arXiv preprint arXiv:1710.10903, 2017.
  • [6] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and S. Y. Philip, “A comprehensive survey on graph neural networks,” IEEE transactions on neural networks and learning systems, vol. 32, no. 1, pp. 4–24, 2020.
  • [7] L. Wu, P. Cui, J. Pei, L. Zhao, and X. Guo, “Graph neural networks: foundation, frontiers and applications,” in Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2022, pp. 4840–4841.
  • [8] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl, “Neural message passing for quantum chemistry,” in International conference on machine learning.   PMLR, 2017, pp. 1263–1272.
  • [9] X. Wang, M. Zhu, D. Bo, P. Cui, C. Shi, and J. Pei, “Am-gcn: Adaptive multi-channel graph convolutional networks,” in Proceedings of the 26th ACM SIGKDD International conference on knowledge discovery & data mining, 2020, pp. 1243–1253.
  • [10] T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” arXiv preprint arXiv:1609.02907, 2016.
  • [11] M. Zhang, “Graph neural networks: link prediction,” Graph Neural Networks: Foundations, Frontiers, and Applications, pp. 195–223, 2022.
  • [12] M. Zhang and Y. Chen, “Weisfeiler-lehman neural machine for link prediction,” in Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, 2017, pp. 575–583.
  • [13] J. Zeng and P. Xie, “Contrastive self-supervised learning for graph classification,” in Proceedings of the AAAI conference on Artificial Intelligence, vol. 35, no. 12, 2021, pp. 10 824–10 832.
  • [14] K. Xu, W. Hu, J. Leskovec, and S. Jegelka, “How powerful are graph neural networks?” arXiv preprint arXiv:1810.00826, 2018.
  • [15] Q. Li, H. Peng, J. Li, C. Xia, R. Yang, L. Sun, P. S. Yu, and L. He, “A survey on text classification: From shallow to deep learning,” arXiv preprint arXiv:2008.00364, 2020.
  • [16] C. Gao, J. Chen, S. Liu, L. Wang, Q. Zhang, and Q. Wu, “Room-and-object aware knowledge reasoning for remote embodied referring expression,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 3064–3073.
  • [17] A. Demontis, M. Melis, M. Pintor, M. Jagielski, B. Biggio, A. Oprea, C. Nita-Rotaru, and F. Roli, “Why do adversarial attacks transfer? explaining transferability of evasion and poisoning attacks,” in 28th USENIX security symposium (USENIX security 19), 2019, pp. 321–338.
  • [18] H. Xu, Y. Ma, H.-C. Liu, D. Deb, H. Liu, J.-L. Tang, and A. K. Jain, “Adversarial attacks and defenses in images, graphs and text: A review,” International journal of automation and computing, vol. 17, pp. 151–178, 2020.
  • [19] W. Jin, Y. Ma, X. Liu, X. Tang, S. Wang, and J. Tang, “Graph structure learning for robust graph neural networks,” in Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining, 2020, pp. 66–74.
  • [20] F. Wu, A. Souza, T. Zhang, C. Fifty, T. Yu, and K. Weinberger, “Simplifying graph convolutional networks,” in International conference on machine learning.   PMLR, 2019, pp. 6861–6871.
  • [21] Y. Zhu, W. Xu, J. Zhang, Q. Liu, S. Wu, and L. Wang, “Deep graph structure learning for robust representations: A survey,” arXiv preprint arXiv:2103.03036, vol. 14, pp. 1–1, 2021.
  • [22] R. Wang, S. Mou, X. Wang, W. Xiao, Q. Ju, C. Shi, and X. Xie, “Graph structure estimation neural networks,” in Proceedings of the web conference 2021, 2021, pp. 342–353.
  • [23] L. Franceschi, M. Niepert, M. Pontil, and X. He, “Learning discrete structures for graph neural networks,” in International conference on machine learning.   PMLR, 2019, pp. 1972–1982.
  • [24] B. Jiang, Z. Zhang, D. Lin, J. Tang, and B. Luo, “Semi-supervised learning with graph learning-convolutional networks,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 11 313–11 320.
  • [25] Y. Chen, L. Wu, and M. Zaki, “Iterative deep graph learning for graph neural networks: Better and robust node embeddings,” Advances in neural information processing systems, vol. 33, pp. 19 314–19 326, 2020.
  • [26] H. Pei, B. Wei, K. C.-C. Chang, Y. Lei, and B. Yang, “Geom-gcn: Geometric graph convolutional networks,” arXiv preprint arXiv:2002.05287, 2020.
  • [27] T. Wu, H. Ren, P. Li, and J. Leskovec, “Graph information bottleneck,” Advances in Neural Information Processing Systems, vol. 33, pp. 20 437–20 448, 2020.
  • [28] Q. Sun, J. Li, H. Peng, J. Wu, X. Fu, C. Ji, and S. Y. Philip, “Graph structure learning with variational information bottleneck,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 4, 2022, pp. 4165–4174.
  • [29] J. Yu, T. Xu, Y. Rong, Y. Bian, J. Huang, and R. He, “Graph information bottleneck for subgraph recognition,” arXiv preprint arXiv:2010.05563, 2020.
  • [30] J. C. Principe, Information theoretic learning: Renyi’s entropy and kernel perspectives.   Springer Science & Business Media, 2010.
  • [31] Q. Sun, J. Li, B. Yang, X. Fu, H. Peng, and S. Y. Philip, “Self-organization preserved graph structure learning with principle of relevant information,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, no. 4, 2023, pp. 4643–4651.
  • [32] A. v. d. Oord, Y. Li, and O. Vinyals, “Representation learning with contrastive predictive coding,” arXiv preprint arXiv:1807.03748, 2018.
  • [33] D. K. Hammond, P. Vandergheynst, and R. Gribonval, “Wavelets on graphs via spectral graph theory,” Applied and Computational Harmonic Analysis, vol. 30, no. 2, pp. 129–150, 2011.
  • [34] Z. Qiao, Y. Fu, P. Wang, M. Xiao, Z. Ning, D. Zhang, Y. Du, and Y. Zhou, “Rpt: toward transferable model on heterogeneous researcher data via pre-training,” IEEE Transactions on Big Data, vol. 9, no. 1, pp. 186–199, 2022.
  • [35] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun, “Spectral networks and locally connected networks on graphs,” arXiv preprint arXiv:1312.6203, 2013.
  • [36] M. Defferrard, X. Bresson, and P. Vandergheynst, “Convolutional neural networks on graphs with fast localized spectral filtering,” Advances in neural information processing systems, vol. 29, 2016.
  • [37] R. Li, S. Wang, F. Zhu, and J. Huang, “Adaptive graph convolutional neural networks,” in Proceedings of the AAAI conference on artificial intelligence, vol. 32, no. 1, 2018.
  • [38] C. Zhuang and Q. Ma, “Dual graph convolutional networks for graph-based semi-supervised classification,” in Proceedings of the 2018 world wide web conference, 2018, pp. 499–508.
  • [39] B. Xu, H. Shen, Q. Cao, Y. Qiu, and X. Cheng, “Graph wavelet neural network,” arXiv preprint arXiv:1904.07785, 2019.
  • [40] H. Zhu and P. Koniusz, “Simple spectral graph convolution,” in International conference on learning representations, 2021.
  • [41] C. Gallicchio and A. Micheli, “Graph echo state networks,” in The 2010 international joint conference on neural networks (IJCNN).   IEEE, 2010, pp. 1–8.
  • [42] H. Dai, Z. Kozareva, B. Dai, A. Smola, and L. Song, “Learning steady-states of iterative algorithms over graphs,” in International conference on machine learning.   PMLR, 2018, pp. 1106–1114.
  • [43] A. Micheli, “Neural network for graphs: A contextual constructive approach,” IEEE Transactions on Neural Networks, vol. 20, no. 3, pp. 498–511, 2009.
  • [44] W. Hamilton, Z. Ying, and J. Leskovec, “Inductive representation learning on large graphs,” Advances in neural information processing systems, vol. 30, 2017.
  • [45] J. Zhou, G. Cui, S. Hu, Z. Zhang, C. Yang, Z. Liu, L. Wang, C. Li, and M. Sun, “Graph neural networks: A review of methods and applications,” AI open, vol. 1, pp. 57–81, 2020.
  • [46] D. Lusher, J. Koskinen, and G. Robins, Exponential random graph models for social networks: Theory, methods, and applications.   Cambridge University Press, 2013.
  • [47] T. Martin, B. Ball, and M. E. Newman, “Structural inference for uncertain networks,” Physical Review E, vol. 93, no. 1, p. 012306, 2016.
  • [48] X. Zhang and M. Zitnik, “Gnnguard: Defending graph neural networks against adversarial attacks,” Advances in neural information processing systems, vol. 33, pp. 9263–9275, 2020.
  • [49] Y. Zhang, S. Pal, M. Coates, and D. Ustebay, “Bayesian graph convolutional neural networks for semi-supervised classification,” in Proceedings of the AAAI conference on artificial intelligence, vol. 33, no. 01, 2019, pp. 5829–5836.
  • [50] L. Yang, Z. Kang, X. Cao, D. Jin, B. Yang, and Y. Guo, “Topology optimization based graph convolutional network,” in IJCAI, 2019, pp. 4054–4061.
  • [51] J. Yu, J. Cao, and R. He, “Improving subgraph recognition with variational graph information bottleneck,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 19 396–19 405.
  • [52] C. Donnat, M. Zitnik, D. Hallac, and J. Leskovec, “Learning structural node embeddings via diffusion wavelets,” in Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining, 2018, pp. 1320–1329.
  • [53] J. Gasteiger, A. Bojchevski, and S. Günnemann, “Predict then propagate: Graph neural networks meet personalized pagerank,” arXiv preprint arXiv:1810.05997, 2018.
  • [54] Y. Tian, C. Sun, B. Poole, D. Krishnan, C. Schmid, and P. Isola, “What makes for good views for contrastive learning?” Advances in neural information processing systems, vol. 33, pp. 6827–6839, 2020.
  • [55] L. Paninski, “Estimation of entropy and mutual information,” Neural computation, vol. 15, no. 6, pp. 1191–1253, 2003.
  • [56] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, “A simple framework for contrastive learning of visual representations,” in International conference on machine learning.   PMLR, 2020, pp. 1597–1607.
  • [57] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick, “Momentum contrast for unsupervised visual representation learning,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 9729–9738.
  • [58] A. Shapiro, “Monte carlo sampling methods,” Handbooks in operations research and management science, vol. 10, pp. 353–425, 2003.
  • [59] Y. Zhu, Y. Xu, F. Yu, Q. Liu, S. Wu, and L. Wang, “Graph contrastive learning with adaptive augmentation,” in Proceedings of the Web Conference 2021, 2021, pp. 2069–2080.
  • [60] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg et al., “Scikit-learn: Machine learning in python,” the Journal of machine Learning research, vol. 12, pp. 2825–2830, 2011.
  • [61] P. Mernyei and C. Cangea, “Wiki-cs: A wikipedia-based benchmark for graph neural networks,” arXiv preprint arXiv:2007.02901, 2020.
  • [62] M. Fey and J. E. Lenssen, “Fast graph representation learning with pytorch geometric,” arXiv preprint arXiv:1903.02428, 2019.
  • [63] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [64] D. Zügner, A. Akbarnejad, and S. Günnemann, “Adversarial attacks on neural networks for graph data,” in Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining, 2018, pp. 2847–2856.