
Understanding Expressivity of GNN in Rule Learning

Haiquan Qiu1, Yongqi Zhang2, Yong Li1, Quanming Yao1 (Quanming Yao is the corresponding author.)
1Department of Electronic Engineering, Tsinghua University
2The Hong Kong University of Science and Technology (Guangzhou)
qyaoaa@tsinghua.edu.cn
Abstract

Rule learning is critical to improving knowledge graph (KG) reasoning due to its ability to provide logical and interpretable explanations. Recently, Graph Neural Networks (GNNs) with tail entity scoring have achieved state-of-the-art performance on KG reasoning. However, the theoretical understanding of these GNNs is either lacking or limited to single-relational graphs, leaving the kind of rules these GNNs can learn an open problem. We propose to fill this gap in this paper. Specifically, GNNs with tail entity scoring are unified into a common framework. Then, we analyze their expressivity by formally describing the rule structures they can learn and theoretically demonstrating their superiority. These results further inspire us to propose a novel labeling strategy to learn more rules in KG reasoning. Experimental results are consistent with our theoretical findings and verify the effectiveness of our proposed method. The code is publicly available at https://github.com/LARS-research/Rule-learning-expressivity.

1 Introduction

A knowledge graph (KG) (Battaglia et al., 2018; Ji et al., 2021) is a type of graph whose edges represent multiple types of relationships between entities. These relationships can be of different types, such as friend, spouse, coworker, or parent-child, and each type of relationship is represented by a separate edge. By encapsulating the interactions among entities, KGs provide a way for machines to understand and process complex information. KG reasoning refers to the task of deducing new facts from the existing facts in a KG. This task is important because it helps in many real-world applications, such as recommendation systems (Cao et al., 2019) and drug discovery (Mohamed et al., 2019).

With the success of graph neural networks (GNNs) in modeling graph-structured data, GNNs have been developed for KG reasoning in recent years. Classical methods such as R-GCN (Schlichtkrull et al., 2018) and CompGCN (Vashishth et al., 2020) perform KG reasoning by aggregating the representations of the two end entities of a triplet; however, they are known to fail to distinguish the structural roles of different neighbors. GraIL (Teru et al., 2020) and RED-GNN (Zhang & Yao, 2022) tackle this problem by encoding the subgraph around the target triplet: GraIL predicts a new triplet using the subgraph representation, while RED-GNN employs dynamic programming for efficient subgraph encoding. Motivated by the effectiveness of heuristic metrics over the paths between the two entities of a link, NBFNet (Zhu et al., 2021) proposes a neural network based on the Bellman-Ford algorithm for KG reasoning. AdaProp (Zhang et al., 2023) and A*Net (Zhu et al., 2022) enhance the scalability of RED-GNN and NBFNet respectively by selecting crucial nodes and edges iteratively. Among these methods, NBFNet, RED-GNN, and their variants score a triplet with its tail entity representation and achieve state-of-the-art (SOTA) performance on KG reasoning. However, these methods are motivated by different heuristics, e.g., the Bellman-Ford algorithm and enclosing subgraph encoding, which makes it difficult to understand why they are effective for KG reasoning.

In this paper, inspired by the importance of rule learning in KG reasoning, we propose to study the expressivity of SOTA GNNs for KG reasoning by analyzing the kind of rules they can learn. First, we unify SOTA GNNs for KG reasoning into a common framework called QL-GNN, based on the observation that they score a triplet with its tail entity representation and essentially extract rule structures from subgraphs with the same pattern. Then, we analyze the logical expressivity of QL-GNN to study its ability to learn rule structures. The analysis reveals the underlying theoretical reasons for the empirical success of QL-GNN and elucidates its advantage over classical methods. Specifically, our analysis is based on a formal description of rule structures in graphs, which differs from previous analyses that rely on graph isomorphism testing (Xu et al., 2019; Zhang et al., 2021) and focus on the expressivity of distinguishing various rules. The new analysis tool allows us to understand the rules learned by QL-GNN and reveals the maximum expressivity that QL-GNN can generalize through training. Based on the new theory, we also uncover the deficiencies of QL-GNN in learning rule structures and propose EL-GNN, an improvement upon QL-GNN based on a new labeling strategy, to improve its ability to learn rule structures. In summary, our paper has the following contributions:

  • Our work unifies state-of-the-art GNNs for KG reasoning into a common framework named QL-GNN, and analyzes their logical expressivity to study their ability to learn rule structures, explaining their superior performance over classical methods.

  • The logical expressivity of QL-GNN demonstrates its capability in learning a particular class of rule structures. Consequently, based on further theoretical analysis, we introduce EL-GNN, a novel GNN designed to learn rule structures that are beyond the learning capacity of QL-GNN.

  • Synthetic datasets are generated to evaluate the expressivity of various GNNs; the experimental results are consistent with our theory. Results on real datasets also show that the proposed labeling method improves performance.

Figure 1: The existence of a triplet in a KG is determined by the corresponding rule structure. We investigate which rule structures can be learned by SOTA GNNs for KG reasoning (i.e., QL-GNN), and propose EL-GNN, which can learn more rule structures than QL-GNN.

2 A common framework for the state-of-the-art methods

To study the state-of-the-art GNNs for KG reasoning, we observe that they (e.g., RED-GNN and NBFNet) essentially learn rule structures from the GNN's tail entity representation, which encodes subgraphs with the same pattern, i.e., subgraphs with the query entity as the source node and the tail entity as the sink node. Based on this observation, we are motivated to derive a common framework for these SOTA methods and to analyze their ability to learn rule structures with the derived framework.

Given a query $(h,R,?)$, the labeling trick on the query entity $h$ enables the SOTA methods to extract rules from a graph with the same pattern because it makes the query entity distinguishable from all other entities in the graph. Therefore, we unify NBFNet, RED-GNN, and their variants into a common framework called Query Labeling (QL) GNN (see the correspondence in Appendix B). For a query $(h,R,?)$, QL-GNN first applies the labeling trick by assigning a special initial representation $\mathbf{e}_{h}^{(0)}$ to entity $h$, which makes the query entity distinguishable from other entities. Based on these initial features, QL-GNN aggregates entity representations with an $L$-layer message passing neural network (MPNN) for each candidate $t\in\mathcal{V}$. The last-layer representation of entity $t$ in QL-GNN is denoted as $\mathbf{e}_{t}^{(L)}[h]$, indicating its dependency on the query entity $h$. Finally, QL-GNN scores the new fact $(h,R,t)$ with the tail entity representation $\mathbf{e}_{t}^{(L)}[h]$. For example, NBFNet uses the score function $s(h,R,t)=\text{FFN}(\mathbf{e}_{t}^{(L)}[h])$ for a new triplet $(h,R,t)$, where $\text{FFN}(\cdot)$ denotes a feed-forward neural network.

Although RED-GNN, NBFNet, and their variants may use different MPNNs to compute $\mathbf{e}_{t}^{(L)}[h]$, without loss of generality, their MPNNs can take the following form in QL-GNN (we omit $[h]$ for simplicity):

$$\mathbf{e}_{v}^{(k)}=\delta\Big(\mathbf{e}_{v}^{(k-1)},\,\phi\big(\{\{\psi(\mathbf{e}_{u}^{(k-1)},R)\mid u\in\mathcal{N}_{R}(v),\,R\in\mathcal{R}\}\}\big)\Big), \qquad (1)$$

where $\delta$ and $\phi$ are the combination and aggregation functions respectively, $\psi$ is the message function encoding the relation $R$ and the entity $u$ neighboring $v$, $\{\{\cdots\}\}$ denotes a multiset, and $\mathcal{N}_{R}(v)$ is the neighboring entity set $\{u\mid(u,R,v)\in\mathcal{E}\}$.
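To make the framework concrete, below is a minimal PyTorch-style sketch of QL-GNN with a sum aggregator. The class and function names (`QLGNNLayer`, `ql_gnn_scores`), the one-hot query label, and the hyperparameters are illustrative assumptions of ours, not the exact NBFNet or RED-GNN architectures.

```python
import torch
import torch.nn as nn

class QLGNNLayer(nn.Module):
    """One message-passing layer of Eq. (1) with sum aggregation over relational neighbors."""
    def __init__(self, dim, num_relations):
        super().__init__()
        self.rel_emb = nn.Embedding(num_relations, dim)  # relation embeddings used by the message function psi
        self.msg = nn.Linear(2 * dim, dim)               # psi: combines a neighbor state with the relation
        self.update = nn.Linear(2 * dim, dim)            # delta: combines the old state with the aggregated message

    def forward(self, e, edges):
        # e: [num_entities, dim] entity representations e_v^{(k-1)}
        # edges: LongTensor [num_edges, 3] of (u, R, v) facts
        u, r, v = edges[:, 0], edges[:, 1], edges[:, 2]
        m = torch.relu(self.msg(torch.cat([e[u], self.rel_emb(r)], dim=-1)))  # psi(e_u^{(k-1)}, R)
        agg = torch.zeros_like(e).index_add_(0, v, m)                         # phi: sum over u in N_R(v)
        return torch.relu(self.update(torch.cat([e, agg], dim=-1)))           # delta(e_v^{(k-1)}, aggregated message)

def ql_gnn_scores(edges, num_entities, num_relations, h, dim=32, num_layers=3):
    """Score every candidate tail t for the query (h, R, ?) from the tail representation e_t^{(L)}[h]."""
    layers = [QLGNNLayer(dim, num_relations) for _ in range(num_layers)]
    ffn = nn.Linear(dim, 1)
    e = torch.zeros(num_entities, dim)
    e[h] = 1.0                      # query labeling: a distinguished initial representation for entity h
    for layer in layers:
        e = layer(e, edges)
    return ffn(e).squeeze(-1)       # one score per candidate tail entity
```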

3 Expressivity of QL-GNN

In this section, we explore the logical expressivity of QL-GNN to analyze the types of rule structures it can learn. First, we introduce the logic used to describe rules in KGs. Then, we analyze the logical expressivity of QL-GNN with Theorem 3.2 and Corollary 3.3, formally characterizing the kind of rule structures it can learn. Finally, we compare QL-GNN with classical methods and highlight its superior expressivity in KG reasoning.

3.1 Expressivity analysis with logic of rule structures

Following previous works on rule mining over KGs (Yang et al., 2017; Sadeghian et al., 2019), rule structures are usually described as formulas in first-order logic. We also follow this convention to formally describe rule structures in a KG, with the following correspondence between elements of rule structures and logic:

  • Variable: variables, denoted with lowercase italic letters $x,y,z$, represent entities in a KG;

  • Unary predicate: a unary predicate $P_{i}(x)$ corresponds to an entity property $P_{i}$ in a KG, e.g., $\text{red}(x)$ denotes that the color of entity $x$ is red;

  • Binary predicate: a binary predicate $R_{j}(x,y)$ corresponds to a relation $R_{j}$ in a KG, e.g., $\text{father}(x,y)$ denotes that $x$ is the father of $y$;

  • Constant: constants, denoted with serif lowercase letters $\mathsf{h},\mathsf{c}$, are unique identifiers of particular entities in a KG.

In addition to the above elements, the quantifier $\exists$ expresses the existence of an entity satisfying a condition, $\forall$ expresses universal quantification, and $\exists^{\geq N}$ expresses the existence of at least $N$ entities satisfying a condition. The logical connective $\wedge$ denotes conjunction, $\vee$ denotes disjunction, and $\top$ and $\bot$ represent true and false, respectively. Using these symbols, rule structures can be represented by describing their elements directly. For example, $C_{3}(x,y):=\exists z_{1}z_{2},R_{1}(x,z_{1})\wedge R_{2}(z_{1},z_{2})\wedge R_{3}(z_{2},y)$ in Figure 2 describes a chain-like structure between $x$ and $y$ with three relations $R_{1},R_{2},R_{3}$. A rule structure can thus be represented by a rule formula $R(x,y)$, and the existence of the rule structure for a triplet $(h,R,t)$ is equivalent to the satisfaction of the rule formula $R(x,y)$ at the entity pair $(h,t)$. In this paper, the logical expressivity of a GNN measures its ability to learn logical formulas and is defined as the set of logical formulas the GNN can learn. Since rule structures can be described by logical formulas, the logical expressivity of QL-GNN therefore determines its ability to learn rule structures in KG reasoning.
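To make the notion of satisfaction concrete, here is a minimal Python sketch that treats the chain rule formula $C_{3}$ above as a binary classifier over candidate pairs $(h,t)$; the edge-set encoding and the function name `satisfies_c3` are our own illustration, not part of any method in the paper.

```python
# Toy satisfaction check: is C3(h, t) = exists z1, z2 with
# R1(h, z1), R2(z1, z2), R3(z2, t) satisfied for a candidate pair (h, t)?
def satisfies_c3(triples, h, t):
    # triples: set of (head, relation, tail) facts in the KG
    z1s = {y for (x, r, y) in triples if x == h and r == "R1"}
    z2s = {y for (x, r, y) in triples if x in z1s and r == "R2"}
    return any((x, "R3", t) in triples for x in z2s)

kg = {("h", "R1", "a"), ("a", "R2", "b"), ("b", "R3", "t"), ("a", "R2", "c")}
print(satisfies_c3(kg, "h", "t"))  # True: the chain h -R1-> a -R2-> b -R3-> t exists
print(satisfies_c3(kg, "h", "c"))  # False: no R3 edge into c completes the chain
```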

3.2 What kind of rule structures can QL-GNN learn?

In this section, we analyze the logical expressivity of QL-GNN, i.e., what kind of rule structures it can learn. Given a query $(h,R,?)$, we first give the following proposition about the rule formula describing a rule structure.

Proposition 3.1.

The rule structure for query $(h,R,?)$ can be described either with a rule formula $R(x,y)$ or with a rule formula $R(\mathsf{h},x)$, where $\mathsf{h}$ is the logical constant assigned to the query entity $h$. (The rule formula $R(\mathsf{h},x)$ is equivalent to $\exists z\,R(z,x)\wedge P_{h}(z)$, where $P_{h}(x)$ denotes the assignment of the constant $\mathsf{h}$ to $x$ and is called a constant predicate in our paper.)

QL-GNN applies the labeling trick to the query entity $h$, which can equivalently be seen as assigning the constant $\mathsf{h}$ to the query entity $h$ (the initial representation of an entity must be unique among all entities to be regarded as a constant in logic; the initial representations assigned to the query entity are indeed unique in NBFNet, RED-GNN, and their variants). With Proposition 3.1 (proven in Appendix A), the logical expressivity of QL-GNN can be analyzed via the types of rule formula $R(\mathsf{h},x)$ it can learn. In this case, the rule structure of triplet $(h,R,t)$ exists if and only if the logical formula $R(\mathsf{h},x)$ is satisfied at entity $t$.

3.2.1 Expressivity of QL-GNN

Before presenting the logical expressivity of QL-GNN, we first explain how QL-GNN learns the rule formula $R(\mathsf{h},x)$. Following the definition in Barceló et al. (2020), we treat $R(\mathsf{h},x)$ as a binary classifier. Given a candidate tail entity $t$, if the triplet $(h,R,t)$ exists in the KG, the binary classifier $R(\mathsf{h},x)$ should output true; otherwise, it should output false. If QL-GNN can learn the rule formula $R(\mathsf{h},x)$, it can estimate this binary classifier. Consequently, if the rule formula $R(\mathsf{h},x)$ is satisfied at entity $t$, the representation $\mathbf{e}_{t}^{(L)}[h]$ is mapped to a high probability value, indicating the existence of the triplet $(h,R,t)$ in the KG. Conversely, when the rule formula is not satisfied at $t$, $\mathbf{e}_{t}^{(L)}[h]$ is mapped to a low probability value, indicating the absence of the triplet.

The rule structures that QL-GNN can learn are described by a family of logic called graded modal logic (CML) (De Rijke, 2000; Otto, 2019). CML is defined by recursion with the base elements $\top,\bot$ and all unary predicates $P_{i}(x)$, together with the recursion rule: if $\varphi(x),\varphi_{1}(x),\varphi_{2}(x)$ are formulas in CML, then $\neg\varphi(x)$, $\varphi_{1}(x)\wedge\varphi_{2}(x)$, and $\exists^{\geq N}y\left(R(y,x)\wedge\varphi(y)\right)$ are also formulas in CML. Since QL-GNN introduces a constant $\mathsf{h}$ for the query entity $h$, we use the notation $\text{CML}[G,\mathsf{h}]$ to denote the CML recursively built from base elements in $G$ and the constant $\mathsf{h}$ (equivalently, the constant predicate $P_{h}(x)$). Then, the following theorem and corollary show the expressivity of QL-GNN for KG reasoning.

Theorem 3.2 (Logical expressivity of QL-GNN).

For KG reasoning, given a query $(h,R,?)$, a rule formula $R(\mathsf{h},x)$ is learned by QL-GNN if and only if $R(\mathsf{h},x)$ is a formula in $\text{CML}[G,\mathsf{h}]$.

Corollary 3.3.

The rule structures learned by QL-GNN can be constructed with the recursion:

  • Base case: all unary predicates $P_{i}(x)$ can be learned by QL-GNN; the constant predicate $P_{h}(x)$ can be learned by QL-GNN;

  • Recursion rule: if the rule structures $R_{1}(\mathsf{h},x),R_{2}(\mathsf{h},x),R(\mathsf{h},y)$ can be learned by QL-GNN, then $R_{1}(\mathsf{h},x)\wedge R_{2}(\mathsf{h},y)$ and $\exists^{\geq N}y\left(R_{i}(y,x)\wedge R(\mathsf{h},y)\right)$ can also be learned by QL-GNN.

Theorem 3.2 (proved in Appendix C) characterizes the logical expressivity of QL-GNN via rule formulas $R(\mathsf{h},x)$ in $\text{CML}[G,\mathsf{h}]$: query labeling transforms $R(x,y)$ into $R(\mathsf{h},x)$ and enables QL-GNN to learn the corresponding rule structure. To give a concrete understanding of the rule structures learned by QL-GNN, Corollary 3.3 provides a recursive construction of these rule structures. Note that Theorem 3.2 cannot be directly applied to analyze the expressivity of QL-GNN when it learns more than one rule structure; the ability to learn multiple rule structures relates to the capacity of QL-GNN, which we leave as a future direction. Theorem 3.2 also reveals the maximum expressivity that QL-GNN can generalize through training, and its proof provides some insights into designing QL-GNN with better generalization (more discussion is provided in Appendix F.1). Besides, the results in this section can be reduced to single-relational graphs by restricting the graph to a single relation type; we give these results as corollaries in Appendix E.

3.2.2 Examples

We analyze several rule structures and their corresponding rule formulas in Figure 2 as illustrative examples, demonstrating how our theory is applied to analyze the rule structures that QL-GNN can learn. Real examples of these rule structures are shown in Figure 1. In Appendix A, we give a detailed analysis of the rule structures discussed in the paper and present some rules from real datasets.

Chain-like rules, e.g., $C_{3}(x,y)$ in Figure 2, are basic rule structures investigated in many previous works (Sadeghian et al., 2019; Teru et al., 2020; Zhu et al., 2021). QL-GNN assigns the constant $\mathsf{h}$ to the query entity $h$, so triplets with relation $C_{3}$ can be predicted by learning the rule formula $C_{3}(\mathsf{h},x)$. $C_{3}(\mathsf{h},x)$ is a formula in $\text{CML}[G,\mathsf{h}]$ and can be recursively defined with the rules in Corollary 3.3 (proven in Corollary A.2). Therefore, our theory gives a general proof of QL-GNN's ability to learn chain-like structures.

Figure 2: Examples of rule structures and their corresponding rule formulas that QL-GNN can learn.

The second type of rule structure, $I_{1}(\mathsf{h},x)$ in Figure 2, is composed of a chain-like structure from the query entity to the tail entity together with an additional entity $z_{2}$ connected to the chain. $I_{1}(\mathsf{h},x)$ is a formula in $\text{CML}[G,\mathsf{h}]$ and can be defined with the recursive rules in Corollary 3.3 (proven in Corollary A.3), which indicates that $I_{1}(\mathsf{h},x)$ can be learned by QL-GNN. These structures are important in KG reasoning because the entity connected to the chain provides extra information about the property of the entity it connects to (see the example rules in Appendix A).

3.3 Comparison with classical methods

Classical methods such as R-GCN and CompGCN perform KG reasoning by first applying the MPNN (1) to compute the entity representations $\mathbf{e}_{v}^{(L)},v\in\mathcal{V}$ and then scoring a triplet $(h,R,t)$ as $s(h,R,t)=\text{Agg}(\mathbf{e}_{h}^{(L)},\mathbf{e}_{t}^{(L)})$ with an aggregation function $\text{Agg}(\cdot,\cdot)$. For simplicity, we take CompGCN as an example to analyze the expressivity of classical methods in learning rule structures.
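For contrast with QL-GNN, the following schematic sketch shows the classical scoring path; the class name and the DistMult-style aggregation are illustrative assumptions of ours, not CompGCN's exact composition operator.

```python
import torch
import torch.nn as nn

class ClassicalScorer(nn.Module):
    """Classical pipeline: one MPNN pass gives e_v^{(L)} for all entities (no query labeling),
    then a pairwise aggregation Agg(e_h, e_t); a DistMult-style product is used here as an example."""
    def __init__(self, dim, num_relations):
        super().__init__()
        self.rel = nn.Embedding(num_relations, dim)

    def forward(self, entity_repr, h, r, t):
        # entity_repr: [num_entities, dim], output of an MPNN such as Eq. (1) computed once on the whole KG
        # h, r, t: LongTensors of head, relation, and tail indices
        return (entity_repr[h] * self.rel(r) * entity_repr[t]).sum(-1)
```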

Since CompGCN scores a triplet using its query and tail entity representations without applying the labeling trick, the rule structures learned by CompGCN are of the form $R(x,y)$. In CompGCN, the query and tail entity representations encode different subgraphs, and the union of these subgraphs is not necessarily connected. This suggests that the rule structures learned by CompGCN are non-structural, i.e., there is no path between the query and tail entities other than the relation $R$ itself. This observation is formalized in the following theorem.

Theorem 3.4 (Logical expressivity of CompGCN).

For KG reasoning, CompGCN can learn the rule formula $R(x,y)=f_{R}\left(\{\varphi(x)\},\{\varphi'(y)\}\right)$, where $f_{R}$ is a formula involving sub-formulas from $\{\varphi(x)\}$ and $\{\varphi'(y)\}$, which are sets of formulas in $\text{CML}[G]$.

Remark.

Theorem 3.4 indicates that the representations of the two end entities encode two sets of formulas, and these formulas are independent of each other. Thus, the rule structures learned by CompGCN consist of two disconnected subgraphs surrounding the query and tail entities respectively.

Similar to Theorem 3.2, CompGCN learns the rule formula $R(x,y)$ by treating it as a binary classifier. In a KG, the binary classifier $R(x,y)$ should output true if the triplet $(h,R,t)$ exists and false otherwise. If CompGCN can learn the rule formula $R(x,y)$, it can estimate this binary classifier. Consequently, if the rule formula $R(x,y)$ is (not) satisfied at the entity pair $(h,t)$, the score $s(h,R,t)$ is a high (low) value, indicating the existence (absence) of the triplet $(h,R,t)$.

Theorem 3.4 (proven in Appendix C) shows that CompGCN can only learn rule formulas $R(x,y)$ for non-structural rules. One important type of relation in this category is similarity between two entities (experiments in Appendix D.2), like $\texttt{same\_color}(x,y)$ indicating two entities with the same color. However, structural rules are more commonly observed in KG reasoning (Lavrac & Dzeroski, 1994; Sadeghian et al., 2019; Srinivasan & Ribeiro, 2020). Since Theorem 3.4 implies that CompGCN fails to learn connected rule structures, the structural rules in Figure 2 cannot be learned by CompGCN. This comparison shows why QL-GNN is more effective than classical methods, e.g., R-GCN and CompGCN, in real applications. Compared with previous work on single-relational graphs, Zhang et al. (2021) show that CompGCN cannot distinguish many non-isomorphic links, while our paper derives the expressivity of CompGCN for learning rule structures.

4 Entity Labeling GNN based on rule formula transformation

QL-GNN is proven to be able to learn the class of rule structures defined in Corollary 3.3. For rule structures outside this class, we try to learn them with a novel labeling trick built on QL-GNN. The general idea is to transform rule structures outside this class into rule structures inside it by adding constants to the graph. The following proposition and corollary show how to add constants to a rule structure so that it can be described by a formula in CML, and how to apply the labeling trick to make it learnable by QL-GNN.

Proposition 4.1.

Let $R(\mathsf{h},x)$ describe a single-connected rule structure $\mathsf{G}$ in $G$. If we assign constants $\mathsf{c}_{1},\mathsf{c}_{2},\cdots,\mathsf{c}_{k}$ to all $k$ entities with out-degree larger than one in $\mathsf{G}$, the rule structure $\mathsf{G}$ can be described with a new rule formula $R'(\mathsf{h},x)$ in $\text{CML}[G,\mathsf{h},\mathsf{c}_{1},\mathsf{c}_{2},\cdots,\mathsf{c}_{k}]$.

Corollary 4.2.

Applying the labeling trick with unique initial representations to the entities assigned constants $\mathsf{c}_{1},\mathsf{c}_{2},\cdots,\mathsf{c}_{k}$ in Proposition 4.1, the rule structure $\mathsf{G}$ can be learned by QL-GNN.

For instance, in Figure 3, the rule structure $U$ cannot be distinguished from the rule structure $T$ by the recursive definition in Corollary 3.3, and thus cannot be learned by QL-GNN. In this example, Proposition 4.1 suggests assigning a constant $\mathsf{c}$ to the entity colored gray in Figure 3; then a new rule formula

$$U'(\mathsf{h},x):=R_{1}(\mathsf{h},\mathsf{c})\wedge\big(\exists z_{2},z_{3},\,R_{2}(\mathsf{c},z_{2})\wedge R_{4}(z_{2},x)\wedge R_{3}(\mathsf{c},z_{3})\wedge R_{5}(z_{3},x)\big)$$

in $\text{CML}[G,\mathsf{h},\mathsf{c}]$ (Corollary A.5) describes the rule structure $U$. Therefore, the rule structure $U$ can be learned via $U'(\mathsf{h},x)$ by QL-GNN with the constant $\mathsf{c}$, but cannot be learned by classical methods or vanilla QL-GNN.

Algorithm 1 Entity Labeling
0:  query $(h,R,?)$, knowledge graph $G$, degree threshold $d$.
1:  compute the out-degree $d_v$ of each entity $v$ in $G$;
2:  for entity $v$ in $G$ do
3:     if $d_v>d$ then
4:        assign a unique representation $\mathbf{e}_v^{(0)}$ to entity $v$;
5:     end if
6:  end for
7:  assign initial representation $\mathbf{e}_h^{(0)}$ to the query entity $h$;
8:  Return: initial representation of all entities.
Figure 3: Two rule structures that cannot be distinguished by QL-GNN.

Based on Corollary 4.2, we need to apply the labeling trick to entities other than the query entity in QL-GNN to learn rule structures outside the scope of Corollary 3.3. The new method, called Entity Labeling (EL) GNN, is shown in Algorithm 1 and differs from QL-GNN by assigning constants to all entities with out-degree larger than $d$. We treat the degree threshold $d$ as a hyperparameter because a small $d$ (such as $1$) introduces too many constants to the KG, which impedes the generalization of GNN (Abboud et al., 2021) (see an explanation from the logical perspective in Appendix F.2). In fact, a smaller $d$ makes the GNN learn rule formulas with many constants and results in poor generalization, while a larger $d$ may fail to transform indistinguishable rules into formulas in CML. As a result, the degree threshold $d$ should be tuned to balance the expressivity and generalization of GNN. As with the constant $\mathsf{h}$ in QL-GNN, we assign a unique initial representation $\mathbf{e}_{v}^{(0)}$ to each entity $v$ whose out-degree $d_{v}>d$ in steps 3-5, and we assign the query entity $h$ a unique initial representation $\mathbf{e}_{h}^{(0)}$ in step 7. From Algorithm 1, the additional cost of EL-GNN comes from traversing all entities in the graph; this additional time complexity is linear in the number of entities, which is negligible compared to QL-GNN. For convenience, a GNN initialized with the EL algorithm is denoted as EL-GNN (e.g., EL-NBFNet) in our paper.
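As a concrete reference, below is a minimal PyTorch-style sketch of Algorithm 1. The function name `entity_labeling` and the random codes are illustrative choices of ours; random features are one possible way to realize "unique representations" and are not necessarily the initialization used by NBFNet or RED-GNN.

```python
import torch
from collections import Counter

def entity_labeling(edges, num_entities, h, d, dim=32):
    """Sketch of Algorithm 1: give the query entity h and every entity with
    out-degree > d its own distinguishable initial representation."""
    # edges: LongTensor [num_edges, 3] of (u, R, v) facts
    out_deg = Counter(int(u) for u in edges[:, 0])      # step 1: out-degree of every entity
    e0 = torch.zeros(num_entities, dim)
    for v in range(num_entities):                       # steps 2-6: label high-degree entities
        if out_deg[v] > d:
            e0[v] = torch.randn(dim)                     # a unique (with high probability) random code
    e0[h] = torch.ones(dim)                              # step 7: the query-labeling constant h
    return e0
```

Feeding this initialization into the MPNN of Eq. (1) in place of the query-only labeling yields EL-NBFNet or EL-RED-GNN when the corresponding backbone is used.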

Discussion

In Figure 1, we visually compare the expressivity of QL-GNN and EL-GNN. Classical methods, e.g., R-GCN and CompGCN, are not compared here because they can only learn non-structural rules, which are not commonly seen in real applications. QL-GNN, e.g., NBFNet and RED-GNN, excels at learning rule structures described by formulas $R(\mathsf{h},x)$ in $\text{CML}[G,\mathsf{h}]$. The proposed EL-GNN, which encompasses QL-GNN as a special case, can learn rule structures described by formulas $R(\mathsf{h},x)$ in $\text{CML}[G,\mathsf{h},\mathsf{c}_{1},\cdots,\mathsf{c}_{k}]$, which has a larger scope than $\text{CML}[G,\mathsf{h}]$.

5 Related Works

5.1 Expressivity of Graph Neural Network (GNN)

GNNs (Kipf & Welling, 2016; Gilmer et al., 2017) have shown good performance on a wide range of tasks involving graph-structured data, and many existing works analyze their expressivity, mostly from the perspective of graph isomorphism testing. A well-known result (Xu et al., 2019) shows that the expressivity of the vanilla GNN is limited by the WL test, and this result is extended to KGs by Barcelo et al. (2022). To improve the expressivity of GNNs, most existing works either design GNNs motivated by the higher-order WL test (Morris et al., 2019; 2020; Barcelo et al., 2022) or apply special initial representations (Abboud et al., 2021; You et al., 2021; Sato et al., 2021; Zhang et al., 2021). Beyond graph isomorphism testing, Barceló et al. (2020) analyze the logical expressivity of GNNs and identify that logical rules from graded modal logic can be learned by the vanilla GNN; however, their analysis is limited to node classification on single-relational graphs. Beyond the vanilla GNN, Tena Cucala et al. (2022) propose a monotonic GNN whose predictions can be explained by symbolic rules in Datalog, and the expressivity of monotonic GNNs is further analyzed in Cucala et al. (2023).

Regarding the expressivity of GNNs for link prediction, Srinivasan & Ribeiro (2020) demonstrate that GNNs' structural node representations alone are insufficient for accurate link prediction. To overcome this limitation, they incorporate Monte Carlo samples of node embeddings obtained from network embedding techniques instead of relying solely on GNNs. However, Zhang et al. (2021) show that, by leveraging the labeling trick, GNNs can indeed learn structural link representations for effective link prediction, which reassures the viability of GNNs for this task. Nonetheless, their analysis is confined to single-relational graphs, and their conclusions are limited to the fact that the labeling trick enables distinct representations for some non-isomorphic links, which other approaches cannot achieve. In this paper, we analyze GNNs' logical expressivity to study their ability to learn rule structures, aiming at a comprehensive understanding of the rule structures that SOTA GNNs can learn in graphs. Our analysis covers both single-relational graphs and KGs, which broadens the applicability of our findings.

A concurrent work by Huang et al. (2023) analyzes the expressivity of NBFNet (a kind of QL-GNN in our paper) with conditional MPNNs, while our work unifies state-of-the-art GNNs into QL-GNN and analyzes their expressivity from a different perspective, focusing on the relationship between the labeling trick and constants in logic.

5.2 Knowledge graph reasoning

KG reasoning is the task of predicting new facts based on the known facts in a KG $G=(\mathcal{V},\mathcal{E},\mathcal{R})$, where $\mathcal{V},\mathcal{E},\mathcal{R}$ are the sets of entities, edges, and relation types in the graph, respectively. The facts (or edges, links) are typically expressed as triplets of the form $(h,R,t)$, where the head entity $h$ and tail entity $t$ are related by relation type $R$. KG reasoning can be modeled as predicting the tail entity $t$ of a query of the form $(h,R,?)$, where $h$ is called the query entity in our paper. Head prediction $(?,R,t)$ can be transformed into tail prediction $(t,R^{-1},?)$ with an inverse relation $R^{-1}$; thus, we focus on tail prediction in this paper.
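As a small illustration of this inverse-relation transformation, the snippet below augments a triplet list; the convention of mapping relation id r to r + num_relations for its inverse is an assumption of ours, not the exact implementation of any particular method.

```python
def add_inverse_relations(triples, num_relations):
    """Augment the KG so that head prediction (?, R, t) becomes tail prediction (t, R^{-1}, ?).
    Relation id r is mapped to r + num_relations for its inverse."""
    inverse = [(t, r + num_relations, h) for (h, r, t) in triples]
    return triples + inverse

kg = [(0, 1, 2), (2, 0, 3)]
print(add_inverse_relations(kg, num_relations=2))
# [(0, 1, 2), (2, 0, 3), (2, 3, 0), (3, 2, 2)]
```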

Embedding-based methods like TransE (Bordes et al., 2013), ComplEx (Trouillon et al., 2016), RotatE (Sun et al., 2019), and QuatE (Zhang et al., 2019) have been developed for KG reasoning. They learn embeddings for entities and relations, and predict facts by aggregating these representations. To capture local evidence within graphs, Neural LP (Yang et al., 2017) and DRUM (Sadeghian et al., 2019) learn logical rules based on predefined chain-like structures; however, apart from chain-like rules, these methods fail to learn more complex structures in KGs (Hamilton et al., 2018; Ren et al., 2019). GNNs have also been used for KG reasoning, such as R-GCN (Schlichtkrull et al., 2018) and CompGCN (Vashishth et al., 2020), which aggregate entity and relation representations to score new facts. However, these methods struggle to differentiate the structural roles of different neighbors (Srinivasan & Ribeiro, 2020; Zhang et al., 2021). GraIL (Teru et al., 2020) addresses this by extracting enclosing subgraphs to predict new facts, while RED-GNN (Zhang & Yao, 2022) employs dynamic programming for efficient subgraph extraction and predicts new facts based on the tail entity representation. To extract relevant structures from the graph, AdaProp (Zhang et al., 2023) improves RED-GNN by employing adaptive propagation to filter out irrelevant entities and retain promising targets. Motivated by the effectiveness of heuristic path-based metrics for link prediction, NBFNet (Zhu et al., 2021) proposes a neural network aligned with the Bellman-Ford algorithm for KG reasoning, and Zhu et al. (2022) propose A*Net, which learns a priority function to select important nodes and edges at each iteration. AdaProp and A*Net are variants of RED-GNN and NBFNet, respectively, designed to enhance their scalability. Among these methods, RED-GNN, NBFNet, AdaProp, and A*Net achieve state-of-the-art performance on KG reasoning.

6 Experiment

In this section, we validate our theoretical findings from Section 3 and showcase the efficacy of our proposed EL-GNN (Section 4) on synthetic and real datasets through experiments. All experiments were implemented in Python using PyTorch and executed on A100 GPUs with 80GB memory.

6.1 Experiments on synthetic datasets

We generate six KGs based on the rule structures in Figures 2, 3, and 6 to validate our theory on expressivity and to verify the improved performance of EL-GNN. These rule structures are either analyzed in the previous sections or representative for evaluating a GNN's ability to learn rule structures. We evaluate R-GCN, CompGCN, RED-GNN, NBFNet, EL-RED-GNN, and EL-NBFNet (the latter two use RED-GNN/NBFNet as the backbone together with Algorithm 1). Our evaluation metric is prediction accuracy, which measures how well a rule structure is learned. We report the testing accuracy of classical methods, QL-GNN, and EL-GNN on the six synthetic graphs. Hyperparameters for all methods are automatically tuned with Ray (Liaw et al., 2018) based on validation accuracy.

Table 1: Accuracy on synthetic data.
Class     | Method     | C3    | C4    | I1    | I2    | T     | U
Classical | R-GCN      | 0.016 | 0.031 | 0.044 | 0.024 | 0.067 | 0.014
Classical | CompGCN    | 0.016 | 0.021 | 0.053 | 0.039 | 0.067 | 0.027
QL-GNN    | RED-GNN    | 1.0   | 1.0   | 1.0   | 1.0   | 1.0   | 0.405
QL-GNN    | NBFNet     | 1.0   | 1.0   | 1.0   | 1.0   | 1.0   | 0.541
EL-GNN    | EL-RED-GNN | 1.0   | 1.0   | 1.0   | 1.0   | 1.0   | 0.797
EL-GNN    | EL-NBFNet  | 1.0   | 1.0   | 1.0   | 1.0   | 1.0   | 0.838
Dataset generation

Given a target relation, there are three steps to generate a dataset: (1) rule structure generation: generate specific rule structures according to their definition; (2) noisy triplet generation: generate noisy triplets to prevent GNNs from learning trivial rule structures; (3) missing triplet completion: add missing triplets of the target relation, because the noisy triplet generation step may create additional instances of the target rule structure. We use the triplets produced by the rule structure and noisy triplet generation steps as the known triplets in the graph. Triplets with the target relation are split into training, validation, and testing sets. Our experimental setting differs slightly from previous works in that all GNNs in the experiments only perform message passing on the known triplets in the graph. This setup is reasonable and allows evaluating the performance of GNNs in learning rule structures, because the presence of a triplet can be determined from the known triplets in the graph by following the rule structure generation process. A minimal generation sketch is given below.
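The sketch below illustrates the three generation steps for the chain rule $C_3$ under illustrative assumptions (entity counts, noise level, relation names, and the function name `generate_c3_dataset` are our own); it is not the exact generator used for our datasets.

```python
import random
from collections import defaultdict

def generate_c3_dataset(num_chains=500, num_entities=2000, num_noise=2000, seed=0):
    """Three-step generation of a synthetic dataset for the chain rule C3."""
    rng = random.Random(seed)
    facts, targets = set(), set()
    # (1) rule structure generation: each chain h -R1-> z1 -R2-> z2 -R3-> t yields a target (h, C3, t)
    for _ in range(num_chains):
        h, z1, z2, t = rng.sample(range(num_entities), 4)
        facts |= {(h, "R1", z1), (z1, "R2", z2), (z2, "R3", t)}
        targets.add((h, "C3", t))
    # (2) noisy triplet generation: random edges so the target rule is not the only pattern in the graph
    while len(facts) < 3 * num_chains + num_noise:
        u, v = rng.sample(range(num_entities), 2)
        facts.add((u, rng.choice(["R1", "R2", "R3"]), v))
    # (3) missing triplet completion: noise may create new chains, so add their targets as well
    succ = defaultdict(lambda: defaultdict(set))
    for (a, r, b) in facts:
        succ[r][a].add(b)
    for h, z1s in succ["R1"].items():
        for z1 in z1s:
            for z2 in succ["R2"].get(z1, set()):
                for t in succ["R3"].get(z2, set()):
                    targets.add((h, "C3", t))
    return facts, targets  # facts: known triplets; targets: split into train/valid/test
```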

Results

Table 1 presents the testing accuracy of classical GNN methods, QL-GNN, and EL-GNN on six synthetic datasets (denoted as $C_{3},C_{4},I_{1},I_{2},T$, and $U$) generated from their corresponding rule structures. The experimental results support our theory. CompGCN performs poorly on all six datasets, as it fails to learn the underlying structural rules discussed in the examples of Section 3 (refer to Appendix D.2 for additional experiments on CompGCN). QL-GNN achieves perfect prediction (100% accuracy) for triplets with relations $C_{l}$, $I_{i}$, and $T$, successfully learning the corresponding rule formulas from $\text{CML}[G,\mathsf{h}]$. EL-GNN demonstrates improved expressivity, as evidenced by its performance on dataset $U$, aligning with the analysis in Section 4. Furthermore, EL-GNN also learns the rule formulas $C(\mathsf{h},x)$ and $I(\mathsf{h},x)$ effectively, validating its expressivity.

Figure 4: Accuracy versus degree threshold $d$ of EL-GNN on the dataset with relation $U$.

Furthermore, we study the impact of the degree threshold $d$ on EL-GNN with dataset $U$. The testing accuracy in Figure 4 reveals that an excessively small or large threshold $d$ hinders the performance of EL-GNN; it is therefore important to empirically tune the hyperparameter $d$. To test the robustness of QL-GNN and EL-GNN in learning rules with incomplete structures, we randomly remove triplets from the training set and evaluate the accuracy of learning rule structures; the results can be found in Appendix D.4.

6.2 Experiments on real datasets

In this section, we follow the standard setup of Zhu et al. (2021) to test EL-GNN's effectiveness on five real datasets: Family (Kok & Domingos, 2007), Kinship (Hinton et al., 1986), UMLS (Kok & Domingos, 2007), WN18RR (Dettmers et al., 2017), and FB15k-237 (Toutanova & Chen, 2015). For a fair comparison, we evaluate EL-NBFNet and EL-RED-GNN (applying EL to NBFNet and RED-GNN) using the same hyperparameters as NBFNet and RED-GNN with a hand-tuned $d$. We compare them with embedding-based methods (RotatE, QuatE), rule-based methods (Neural LP, DRUM), and GNN-based methods (CompGCN, NBFNet, RED-GNN). For a thorough evaluation, we report the testing accuracy and standard deviation over three repetitions.

Table 2 presents our experimental findings. The results first show that NBFNet and RED-GNN (QL-GNN) outperform CompGCN. Furthermore, the proposed EL algorithm improves the accuracy of RED-GNN and NBFNet on real datasets. However, the degree of improvement varies across datasets due to the number and variety of rule types and the quality of missing triplets in the training sets. More experimental results, e.g., time cost and additional performance metrics, are in Appendix D.5.

Table 2: Accuracy and standard deviation on real datasets. The best (and comparable best) results are in "bold"; the second (and comparable second) best are underlined.
Method class    | Method     | Family      | Kinship     | UMLS        | WN18RR      | FB15k-237
Embedding-based | RotatE     | 0.865±0.004 | 0.704±0.002 | 0.860±0.003 | 0.427±0.003 | 0.240±0.001
Embedding-based | QuatE      | 0.897±0.001 | 0.311±0.003 | 0.907±0.002 | 0.441±0.002 | 0.255±0.004
Rule-based      | Neural LP  | 0.872±0.002 | 0.481±0.006 | 0.630±0.001 | 0.369±0.003 | 0.190±0.002
Rule-based      | DRUM       | 0.880±0.003 | 0.459±0.005 | 0.676±0.004 | 0.424±0.002 | 0.252±0.003
GNN-based       | CompGCN    | 0.883±0.001 | 0.751±0.003 | 0.867±0.002 | 0.443±0.001 | 0.265±0.001
GNN-based       | RED-GNN    | 0.988±0.002 | 0.820±0.003 | 0.946±0.001 | 0.502±0.001 | 0.284±0.002
GNN-based       | NBFNet     | 0.977±0.001 | 0.819±0.002 | 0.946±0.002 | 0.496±0.002 | 0.320±0.001
GNN-based       | EL-RED-GNN | 0.990±0.002 | 0.839±0.001 | 0.952±0.003 | 0.504±0.001 | 0.322±0.002
GNN-based       | EL-NBFNet  | 0.985±0.001 | 0.842±0.003 | 0.953±0.002 | 0.501±0.003 | 0.332±0.001

7 Conclusion

In this paper, we analyze the expressivity of state-of-the-art GNNs for learning rules in KG reasoning, explaining their superior performance over classical methods. Our analysis sheds light on the rule structures that these GNNs can learn. Additionally, our theory motivates an effective labeling method that improves GNN expressivity. Moving forward, we will extend our analysis to GNNs with general labeling tricks and try to extract explainable rule structures from trained GNNs. Limitations and impacts are discussed in Appendix G.

Acknowledgments

Q. Yao was in part supported by National Key Research and Development Program of China under Grant 2023YFB2903904 and NSFC (No. 92270106).

References

  • Abboud et al. (2021) Ralph Abboud, Ismail Ilkan Ceylan, Martin Grohe, and Thomas Lukasiewicz. The surprising power of graph neural networks with random node initialization. In International Joint Conference on Artificial Intelligence, 2021.
  • Arakelyan et al. (2021) Erik Arakelyan, Daniel Daza, Pasquale Minervini, and Michael Cochez. Complex query answering with neural link predictors. In International Conference on Learning Representations, 2021.
  • Barceló et al. (2020) Pablo Barceló, Egor V Kostylev, Mikael Monet, Jorge Pérez, Juan Reutter, and Juan-Pablo Silva. The logical expressiveness of graph neural networks. In International Conference on Learning Representations, 2020.
  • Barcelo et al. (2022) Pablo Barcelo, Mikhail Galkin, Christopher Morris, and Miguel Romero Orth. Weisfeiler and leman go relational. In The First Learning on Graphs Conference, 2022. URL https://openreview.net/forum?id=wY_IYhh6pqj.
  • Battaglia et al. (2018) Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
  • Bordes et al. (2013) Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. Advances in Neural Information Processing Systems, 2013.
  • Cao et al. (2019) Yixin Cao, Xiang Wang, Xiangnan He, Zikun Hu, and Tat-Seng Chua. Unifying knowledge graph learning and recommendation: Towards a better understanding of user preferences. In International World Wide Web Conference, 2019.
  • Cucala et al. (2023) David Tena Cucala, Bernardo Cuenca Grau, Boris Motik, and Egor V Kostylev. On the correspondence between monotonic max-sum gnns and datalog. arXiv preprint arXiv:2305.18015, 2023.
  • De Rijke (2000) Maarten De Rijke. A note on graded modal logic. Studia Logica, 64(2):271–283, 2000.
  • Dettmers et al. (2017) Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. Convolutional 2D knowledge graph embeddings. In AAAI conference on Artificial Intelligence, 2017.
  • Gilmer et al. (2017) Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International Conference on Machine Learning, 2017.
  • Hamilton et al. (2018) Will Hamilton, Payal Bajaj, Marinka Zitnik, Dan Jurafsky, and Jure Leskovec. Embedding logical queries on knowledge graphs. Advances in neural information processing systems, 31, 2018.
  • Hinton et al. (1986) Geoffrey E Hinton et al. Learning distributed representations of concepts. In Annual Conference of the Cognitive Science Society, 1986.
  • Huang et al. (2023) Xingyue Huang, Miguel Romero Orth, İsmail İlkan Ceylan, and Pablo Barceló. A theory of link prediction via relational weisfeiler-leman. arXiv preprint arXiv:2302.02209, 2023.
  • Ji et al. (2021) Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Marttinen, and S Yu Philip. A survey on knowledge graphs: Representation, acquisition, and applications. IEEE transactions on neural networks and learning systems, 33(2):494–514, 2021.
  • Kipf & Welling (2016) Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2016.
  • Kok & Domingos (2007) Stanley Kok and Pedro Domingos. Statistical predicate invention. In International Conference on Machine Learning, 2007.
  • Lavrac & Dzeroski (1994) Nada Lavrac and Saso Dzeroski. Inductive logic programming. In WLP, pp.  146–160. Springer, 1994.
  • Liaw et al. (2018) Richard Liaw, Eric Liang, Robert Nishihara, Philipp Moritz, Joseph E Gonzalez, and Ion Stoica. Tune: A research platform for distributed model selection and training. arXiv preprint arXiv:1807.05118, 2018.
  • Mohamed et al. (2019) Sameh K. Mohamed, Vít Novácek, and Aayah Nounu. Discovering protein drug targets using knowledge graph embeddings. Bioinformatics, 2019.
  • Morris et al. (2019) Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In AAAI conference on Artificial Intelligence, 2019.
  • Morris et al. (2020) Christopher Morris, Gaurav Rattan, and Petra Mutzel. Weisfeiler and leman go sparse: Towards scalable higher-order graph embeddings. Advances in Neural Information Processing Systems, 2020.
  • Otto (2019) Martin Otto. Graded modal logic and counting bisimulation. arXiv preprint arXiv:1910.00039, 2019.
  • Ren et al. (2019) Hongyu Ren, Weihua Hu, and Jure Leskovec. Query2box: Reasoning over knowledge graphs in vector space using box embeddings. In International Conference on Learning Representations, 2019.
  • Sadeghian et al. (2019) Ali Sadeghian, Mohammadreza Armandpour, Patrick Ding, and Daisy Zhe Wang. Drum: End-to-end differentiable rule mining on knowledge graphs. Advances in Neural Information Processing Systems, 2019.
  • Sato et al. (2021) Ryoma Sato, Makoto Yamada, and Hisashi Kashima. Random features strengthen graph neural networks. In SIAM International Conference on Data Mining, 2021.
  • Schlichtkrull et al. (2018) Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In European Semantic Web Conference, 2018.
  • Srinivasan & Ribeiro (2020) Balasubramaniam Srinivasan and Bruno Ribeiro. On the equivalence between positional node embeddings and structural graph representations. ICLR, 2020.
  • Sun et al. (2019) Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. Rotate: Knowledge graph embedding by relational rotation in complex space. In International Conference on Learning Representations, 2019.
  • Tena Cucala et al. (2022) DJ Tena Cucala, B Cuenca Grau, Egor V Kostylev, and Boris Motik. Explainable gnn-based models over knowledge graphs. 2022.
  • Teru et al. (2020) Komal Teru, Etienne Denis, and Will Hamilton. Inductive relation prediction by subgraph reasoning. In International Conference on Machine Learning, 2020.
  • Toutanova & Chen (2015) Kristina Toutanova and Danqi Chen. Observed versus latent features for knowledge base and text inference. In Alexandre Allauzen, Edward Grefenstette, Karl Moritz Hermann, Hugo Larochelle, and Scott Wen-tau Yih (eds.), Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pp. 57–66, Beijing, China, July 2015. Association for Computational Linguistics. doi: 10.18653/v1/W15-4007. URL https://aclanthology.org/W15-4007.
  • Trouillon et al. (2016) Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. Complex embeddings for simple link prediction. In International Conference on Machine Learning, 2016.
  • Vashishth et al. (2020) Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, and Partha Talukdar. Composition-based multi-relational graph convolutional networks. In International Conference on Learning Representations, 2020.
  • Xu et al. (2019) Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019.
  • Yang et al. (2017) Fan Yang, Zhilin Yang, and William W Cohen. Differentiable learning of logical rules for knowledge base reasoning. Advances in Neural Information Processing Systems, 2017.
  • You et al. (2021) Jiaxuan You, Jonathan M Gomes-Selman, Rex Ying, and Jure Leskovec. Identity-aware graph neural networks. In AAAI Conference on Artificial Intelligence, 2021.
  • Zhang et al. (2021) Muhan Zhang, Pan Li, Yinglong Xia, Kai Wang, and Long Jin. Labeling trick: A theory of using graph neural networks for multi-node representation learning. Advances in Neural Information Processing Systems, 2021.
  • Zhang et al. (2019) Shuai Zhang, Yi Tay, Lina Yao, and Qi Liu. Quaternion knowledge graph embeddings. Advances in Neural Information Processing Systems, 32, 2019.
  • Zhang & Yao (2022) Yongqi Zhang and Quanming Yao. Knowledge graph reasoning with relational digraph. In International World Wide Web Conference, 2022.
  • Zhang et al. (2023) Yongqi Zhang, Zhanke Zhou, Quanming Yao, Xiaowen Chu, and Bo Han. Adaprop: Learning adaptive propagation for graph neural network based knowledge graph reasoning. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp.  3446–3457, 2023.
  • Zhu et al. (2021) Zhaocheng Zhu, Zuobai Zhang, Louis-Pascal Xhonneux, and Jian Tang. Neural bellman-ford networks: A general graph neural network framework for link prediction. Advances in Neural Information Processing Systems, 2021.
  • Zhu et al. (2022) Zhaocheng Zhu, Xinyu Yuan, Mikhail Galkin, Sophie Xhonneux, Ming Zhang, Maxime Gazeau, and Jian Tang. A*net: A scalable path-based reasoning approach for knowledge graphs. arXiv preprint arXiv:2206.04798, 2022.

Appendix A Rule analysis

We first give a simple proof for Proposition 3.1.

Proof of Proposition 3.1.

Note that $R(\mathsf{h},x)$ is equivalent to $\exists z\,R(z,x)\wedge P_{h}(z)$, where $P_{h}(z)$ is the constant predicate satisfied only at entity $h$. Since $R(z,x)$ can describe the rule structure for $(h,R,?)$, $\exists z\,R(z,x)\wedge P_{h}(z)$ can describe the rule structure for $(h,R,?)$ as well. ∎

We use the notation $G,v\models P_{i}$ ($G,v\nvDash P_{i}$) to represent that the unary predicate $P_{i}(x)$ is (not) satisfied at entity $v$.

Definition A.1 (Definition of graded modal logic).

A formula in graded modal logic of a KG $G$ is recursively defined as follows:

  1. If $\varphi(x)=\top$, then $G,v\models\varphi$ if $v$ is an entity in the KG;

  2. If $\varphi(x)=P_{c}(x)$, then $G,v\models\varphi$ if and only if $v$ has the property $P_{c}$ or can be uniquely identified by the constant $\mathsf{c}$;

  3. If $\varphi(x)=\varphi_{1}(x)\wedge\varphi_{2}(x)$, then $G,v\models\varphi$ if and only if $G,v\models\varphi_{1}$ and $G,v\models\varphi_{2}$;

  4. If $\varphi(x)=\neg\phi(x)$, then $G,v\models\varphi$ if and only if $G,v\nvDash\phi$;

  5. If $\varphi(x)=\exists^{\geq N}y,R_{j}(y,x)\wedge\phi(y)$, then $G,v\models\varphi$ if and only if the set of entities $\{u\mid u\in\mathcal{N}_{R_{j}}(v)\text{ and }G,u\models\phi\}$ has cardinality at least $N$.

Corollary A.2.

$C_{3}(\mathsf{h},x)$ is a formula in $\text{CML}[G,\mathsf{h}]$.

Proof.

$C_{3}(\mathsf{h},x)$ is a formula in $\text{CML}[G,\mathsf{h}]$ as it can be recursively defined as follows:

$$\begin{aligned}
\varphi_{1}(x) &= P_{h}(x),\\
\varphi_{2}(x) &= \exists y,R_{1}(y,x)\wedge\varphi_{1}(y),\\
\varphi_{3}(x) &= \exists y,R_{2}(y,x)\wedge\varphi_{2}(y),\\
C_{3}(\mathsf{h},x) &= \exists y,R_{3}(y,x)\wedge\varphi_{3}(y).
\end{aligned}$$

Corollary A.3.

$I_{1}(\mathsf{h},x)$ is a formula in $\text{CML}[G,\mathsf{h}]$.

Proof.

$I_{1}(\mathsf{h},x)$ is a formula in $\text{CML}[G,\mathsf{h}]$ as it can be recursively defined as follows:

$$\begin{aligned}
\varphi_{1}(x) &= P_{h}(x),\\
\varphi_{2}(x) &= \exists y,R_{1}(y,x)\wedge\varphi_{1}(y),\\
\varphi_{s}(x) &= \exists y,R_{3}(y,x)\wedge\top,\\
\varphi_{3}(x) &= \varphi_{s}(x)\wedge\varphi_{2}(x),\\
I_{1}(\mathsf{h},x) &= \exists y,R_{2}(y,x)\wedge\varphi_{3}(y).
\end{aligned}$$

Corollary A.4.

$T(\mathsf{h},x)$ is a formula in $\text{CML}[G,\mathsf{h}]$.

Proof.

By Corollary A.2, $C'_{3}(\mathsf{h},x):=\exists z_{1}z_{2},R_{1}(\mathsf{h},z_{1})\wedge R_{2}(z_{1},z_{2})\wedge R_{4}(z_{2},x)$ and $C^{\star}_{3}(\mathsf{h},x):=\exists z_{1}z_{2},R_{1}(\mathsf{h},z_{1})\wedge R_{3}(z_{1},z_{2})\wedge R_{5}(z_{2},x)$ are formulas in $\text{CML}[G,\mathsf{h}]$. Thus $T(\mathsf{h},x)=C'_{3}(\mathsf{h},x)\wedge C^{\star}_{3}(\mathsf{h},x)$ is a formula in $\text{CML}[G,\mathsf{h}]$. ∎

Corollary A.5.

$U'(\mathsf{h},x)$ is a formula in $\text{CML}[G,\mathsf{h},\mathsf{c}]$.

Proof.

$U'(\mathsf{h},x)$ is a formula in $\text{CML}[G,\mathsf{h},\mathsf{c}]$ as it can be recursively defined as follows:

$$\begin{aligned}
\varphi_{1}(x) &= P_{h}(x), \qquad \varphi_{c}(x)=P_{c}(x),\\
\varphi_{2}(x) &= \exists y,R_{1}(y,x)\wedge\varphi_{1}(y),\\
\varphi_{3}(x) &= \varphi_{2}(x)\wedge\varphi_{c}(x),\\
\varphi'_{4}(x) &= \exists y,R_{2}(y,x)\wedge\varphi_{3}(y),\\
\varphi'_{5}(x) &= \exists y,R_{4}(y,x)\wedge\varphi'_{4}(y),\\
\varphi''_{4}(x) &= \exists y,R_{3}(y,x)\wedge\varphi_{3}(y),\\
\varphi''_{5}(x) &= \exists y,R_{5}(y,x)\wedge\varphi''_{4}(y),\\
U'(\mathsf{h},x) &= \varphi'_{5}(x)\wedge\varphi''_{5}(x),
\end{aligned}$$

where the constant $\mathsf{c}$ ensures that only one entity satisfies the unary predicate $\varphi_{3}(x)$. ∎

Example of rules

We can find relations in reality corresponding to the rules in Figure 2. Here are two examples for $C_{3}$ and $I_{1}$:

  • Relation nationality ($C_{3}$): $\text{Einstein}\rightarrow_{\text{born\_in}}\text{Ulm}\rightarrow_{\text{hometown\_of}}\text{Born}\rightarrow_{\text{nationality}}\text{Germany}$;

  • Relation father ($I_{1}$): $\text{A}\rightarrow_{\text{spouse}}\text{B}\rightarrow_{\text{parent}}\text{C}$ and $\text{D}\rightarrow_{\text{sisterhood}}\text{B}$.

Rule structures in real datasets

To show that the expressivity studied in our paper is meaningful, we select three rule structures from Family and FB15k-237 (Figure 5) to show the existence of such rule structures in real datasets. By the definition of CML, the rule structure in Figure 5(a) is not a formula in CML, while the rule structures in Figures 5(b) and 5(c) are formulas in CML. These real rules show that rules defined by CML are common in real-world datasets and that rules beyond CML also exist, which highlights the importance of our work.

Figure 5: Some rule structures in real datasets. The rule structure (a) is from the Family dataset and is not a rule formula in $\text{CML}[G,\mathsf{h}]$, so it cannot be learned by QL-GNN. The rule structures (b) and (c) are from the FB15k-237 dataset and are rule formulas in $\text{CML}[G,\mathsf{h}]$, so they can be learned by QL-GNN.
Summary

Here we give Table 3 to illustrate the correspondence between the GNNs for KG reasoning, the rule structures, and the theories presented in our paper.

Table 3: Whether the GNNs investigated in our paper can learn the rule formulas in Figures 2 and 3, and exemplar methods of these GNNs. ✓ (✗) means the corresponding GNN can (cannot) learn the rule formula.
GNN       | C3(h,x) | I1(h,x) | T(h,x) | U(h,x) | Theoretical result | Exemplar methods
Classical | ✗       | ✗       | ✗      | ✗      | Theorem 3.4        | R-GCN, CompGCN
QL-GNN    | ✓       | ✓       | ✓      | ✗      | Theorem 3.2        | NBFNet, RED-GNN
EL-GNN    | ✓       | ✓       | ✓      | ✓      | Proposition 4.1    | EL-NBFNet/RED-GNN

Appendix B Relation between QL-GNN and NBFNet/RED-GNN

In this part, we show that NBFNet and RED-GNN are special cases of QL-GNN in Tables 4 and 5 respectively.

Table 4: NBFNet is a special case of QL-GNN.
NBFNet
Query representation: relation embedding
Non-query representation: 0
MPNN: $\textsc{Aggregate}\left(\left\{\textsc{Message}\left(\bm{h}^{(t-1)}_{x},{\bm{w}}_{q}(x,r,v)\right)\,\middle|\,(x,r,v)\in{\mathcal{E}}(v)\right\}\cup\left\{\bm{h}^{(0)}_{v}\right\}\right)$
Triplet score: feed-forward network

Table 5: RED-GNN is a special case of QL-GNN.
RED-GNN
Query representation: 0
Non-query representation: NULL
MPNN: $\delta\Big(\sum_{\{e_{s},r\}:(e_{s},r,e)\in\mathcal{E}_{e_{q}}^{\ell}}\varphi\big({\bm{h}}^{\ell-1}_{e_{q},e_{s}},\bm{h}_{r}^{\ell}\big)\Big)$
Triplet score: linear transformation

Appendix C Proof

We use the notation $G,(h,t)\models R_{j}$ ($G,(h,t)\nvDash R_{j}$) to denote that $R_{j}(x,y)$ is (not) satisfied at $(h,t)$.

C.1 Base theorem: what kind of logical formulas can the MPNN backbone for KG learn?

In this section, we analyze the expressivity of the MPNN backbone (1) for learning logical formulas in KGs. This section is an extension of Barceló et al. (2020) to KGs.

In a KG $G=(\mathcal{V},\mathcal{E},\mathcal{R})$, an MPNN with $L$ layers is a neural network that takes the graph $G$ and initial entity representations $\mathbf{e}_{v}^{(0)}$ and learns the representations $\mathbf{e}_{v}^{(L)},v\in\mathcal{V}$. The MPNN employs a message-passing mechanism (Gilmer et al., 2017) to propagate information between entities in the graph. The $k$-th layer of the MPNN updates the entity representations via the following message-passing formula

$$\mathbf{e}_{v}^{(k)}=\delta\Big(\mathbf{e}_{v}^{(k-1)},\,\phi\big(\{\{\psi(\mathbf{e}_{u}^{(k-1)},R)\mid u\in\mathcal{N}_{R}(v),\,R\in\mathcal{R}\}\}\big)\Big),$$

where δ\delta and ϕ\phi are combination and aggregation functions respectively, ψ\psi is the message function encoding the relation RR and entity uu neighboring to vv, {{}}\{\{\cdots\}\} is a multiset, and 𝒩R(v)\mathcal{N}_{R}(v) is the neighboring entity set {u|(u,R,v)}\{u|(u,R,v)\in\mathcal{E}\}.
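For concreteness, the sketch below gives a minimal instantiation of one such layer (sum aggregation, linear messages, ReLU combination); the function and variable names are illustrative and not taken from any released implementation.

```python
import numpy as np

def relational_mpnn_layer(triplets, h_prev, W_rel, C, b):
    """One message-passing layer on a KG.

    triplets : list of facts (u, R, v), i.e. u --R--> v
    h_prev   : (num_entities, dim) array of representations e_v^{(k-1)}
    W_rel    : dict mapping relation R to a (dim, dim) message matrix (psi)
    C        : (dim, dim) self-combination matrix (part of delta)
    b        : (dim,) bias vector
    """
    agg = np.zeros_like(h_prev)
    for u, R, v in triplets:            # aggregate over neighbors u in N_R(v)
        agg[v] += h_prev[u] @ W_rel[R]  # message psi(e_u, R); the sum plays the role of phi
    # combination delta: previous representation plus aggregated messages, then ReLU
    return np.maximum(h_prev @ C + agg + b, 0.0)

# toy usage: 3 entities, 2 relations, identity parameters
h0 = np.eye(3)
W = {0: np.eye(3), 1: np.eye(3)}
facts = [(0, 0, 1), (1, 1, 2)]
h1 = relational_mpnn_layer(facts, h0, W, np.eye(3), np.zeros(3))
```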

To understand how MPNN can learn logical formulas, we regard logical formula φ(x)\varphi(x) as a binary classifier indicating whether φ(x)\varphi(x) is satisfied at entity xx. Then, we commence with the following definition.

Definition C.1.

An MPNN captures a logical formula φ(x)\varphi(x) if and only if, given any graph GG, the MPNN representation of entity xx can be mapped to a binary value, where True indicates that φ(x)\varphi(x) is satisfied at entity xx and False indicates that it is not.

According to the above definition, an MPNN can learn logical formulas in a KG by encoding whether these formulas are satisfied in the representation of the corresponding entity. For example, if MPNN can learn a logical formula φ(x)\varphi(x), it implies that 𝐞v(L)\mathbf{e}_{v}^{(L)} can be mapped to a binary value True/False by a function indicating whether φ(x)\varphi(x) is satisfied at entity vv. Previous work (Barceló et al., 2020) has proven that vanilla GNNs for single-relational graphs can learn the logical formulas from graded modal logic (De Rijke, 2000; Otto, 2019) (a.k.a. the Counting extension of Modal Logic, CML). In this section, we present a similar theory of MPNN for KG.

The insight behind MPNN’s ability to learn formulas in CML lies in the alignment between certain CML formulas and the message-passing mechanism, which also holds for KG. Specifically, Ny(Rj(y,x)φ(y))\exists^{\geq N}y\left(R_{j}(y,x)\wedge\varphi(y)\right) is the formula aligned with MPNN’s message-passing mechanism, as it checks a property of the neighbors yy of entity variable xx. We use the notation CML[G]\text{CML}[G] to denote the CML of a graph GG. Then, we give the following theorem characterizing the kind of logical formulas MPNN (1) can learn in a KG.

Theorem C.2.

In a KG GG, a logical formula φ(x)\varphi(x) is learned by MPNN (1) from its representations if and only if φ(x)\varphi(x) is a formula in CML[G]\text{CML}[G].

Our theorem can be viewed as an extension of Theorem 4.2 in Barceló et al. (2020) to KG and is the elementary tool for analyzing the expressivity of GNNs for KG reasoning. The proof of Theorem C.2 is in Appendix C.2 and employs novel techniques that specifically account for relation types. Our theorem shows that the CML of a KG is the tightest subclass of logic that MPNN can learn. As in the single-relational case, our theorem is about the ability of MPNN to implicitly learn logical formulas rather than to explicitly extract them.

C.2 Proof of Theorem C.2

The backward direction of Theorem C.2 is proven by constructing an MPNN that can learn any formula φ(x)\varphi(x) in CML. The forward direction relies on recent theoretical results in Otto (2019). Our theorem can be seen as an extension of Theorem 4.2 in Barceló et al. (2020) to KG.

We first prove the backward direction of Theorem C.2.

Lemma C.3.

Each formula φ(x)\varphi(x) in CML can be learned by MPNN (1) from its entity representations.

Proof.

Let φ(x)\varphi(x) be a formula in CML. We decompose φ\varphi into a series of sub-formulas sub[φ]=(φ1,φ2,,φL)\text{sub}[\varphi]=(\varphi_{1},\varphi_{2},\cdots,\varphi_{L}) where φk\varphi_{k} is a sub-formula of φ\varphi_{\ell} if kk\leq\ell and φ=φL\varphi=\varphi_{L}. Assume the MPNN representations 𝐞v(i)L,v𝒱,i=1L\mathbf{e}_{v}^{(i)}\in\mathbb{R}^{L},v\in\mathcal{V},i=1\cdots L. In this proof, the theoretical analysis is based on the following simple choice of (1)

𝐞v(i)=σ(𝐞v(i1)𝐂+j=1ru𝒩Rj(v)𝐞u(i1)𝐀Rj+𝐛)\displaystyle\mathbf{e}_{v}^{(i)}=\sigma\left(\mathbf{e}_{v}^{(i-1)}\mathbf{C}+\sum_{j=1}^{r}\sum_{u\in\mathcal{N}_{R_{j}}(v)}\mathbf{e}_{u}^{(i-1)}\mathbf{A}_{R_{j}}+\mathbf{b}\right) (2)

with σ=min(max(0,x),1)\sigma=\min(\max(0,x),1), 𝐀Rj,𝐂L×L\mathbf{A}_{R_{j}},\mathbf{C}\in\mathbb{R}^{L\times L} and 𝐛L\mathbf{b}\in\mathbb{R}^{L}. The entries of the \ell-th columns of 𝐀Rj\mathbf{A}_{R_{j}} and 𝐂\mathbf{C}, and the \ell-th entry of 𝐛\mathbf{b}, depend on the sub-formula φ\varphi_{\ell} as follows:

  • Case 0. if φ(x)=P(x)\varphi_{\ell}(x)=P_{\ell}(x) where PP_{\ell} is a unary predicate, 𝐂=1\mathbf{C}_{\ell\ell}=1;

  • Case 1. if φ(x)=φj(x)φk(x)\varphi_{\ell}(x)=\varphi_{j}(x)\wedge\varphi_{k}(x), 𝐂j=𝐂k=1\mathbf{C}_{j\ell}=\mathbf{C}_{k\ell}=1 and 𝐛=1\mathbf{b}_{\ell}=-1;

  • Case 2. if φ=¬φk(x)\varphi_{\ell}=\neg\varphi_{k}(x), 𝐂k=1\mathbf{C}_{k\ell}=-1 and 𝐛=1\mathbf{b}_{\ell}=1;

  • Case 3. if φ(x)=Ny(Rj(y,x)φk(y))\varphi_{\ell}(x)=\exists^{\geq N}y\left(R_{j}(y,x)\wedge\varphi_{k}(y)\right), (𝐀Rj)k=1\left(\mathbf{A}_{R_{j}}\right)_{k\ell}=1 and 𝐛=N+1\mathbf{b}_{\ell}=-N+1.

with all the other values set to 0.

Before the proof, note that for every entity v𝒱v\in\mathcal{V}, the initial representation 𝐞v(0)=(t1,t2,,tL)\mathbf{e}_{v}^{(0)}=(t_{1},t_{2},\cdots,t_{L}) has t=1t_{\ell}=1 if the sub-formula φ=P(x)\varphi_{\ell}=P_{\ell}(x) is an atomic predicate satisfied at vv, and t=0t_{\ell}=0 otherwise.

Let G=(𝒱,,)G=(\mathcal{V},\mathcal{E},\mathcal{R}) be a KG. We next prove that for every φsub[φ]\varphi_{\ell}\in\text{sub}[\varphi] and every entity v𝒱v\in\mathcal{V} it holds that

(𝐞v(i))=1ifG,vφ,and(𝐞v(i))=0otherwise,\left(\mathbf{e}_{v}^{(i)}\right)_{\ell}=1\quad\text{if}\quad G,v\models\varphi_{\ell},\quad\text{and}\quad\left(\mathbf{e}_{v}^{(i)}\right)_{\ell}=0\quad\text{otherwise},

for every iL\ell\leq i\leq L.

Now, we prove this by induction on the number of sub-formulas of φ\varphi.

Base case: One sub-formula in φ\varphi. In this case, the formula is an atomic predicate φ=φ(x)=P(x)\varphi=\varphi_{\ell}(x)=P_{\ell}(x). Because 𝐂=1\mathbf{C}_{\ell\ell}=1 and (𝐞v(0))=1,(𝐞v(0))i=0,i(\mathbf{e}_{v}^{(0)})_{\ell}=1,(\mathbf{e}_{v}^{(0)})_{i}=0,i\neq\ell, we have (𝐞v(1))=1(\mathbf{e}_{v}^{(1)})_{\ell}=1 if G,vφG,v\models\varphi_{\ell} and (𝐞v(1))=0(\mathbf{e}_{v}^{(1)})_{\ell}=0 otherwise. For i1i\geq 1, 𝐞v(i)\mathbf{e}_{v}^{(i)} satisfies the same property.

Induction Hypothesis: the claim holds for every sub-formula φk\varphi_{k} with k<k<\ell, i.e., (𝐞v(i))k=1\left(\mathbf{e}_{v}^{(i)}\right)_{k}=1 if G,vφkG,v\models\varphi_{k} and (𝐞v(i))k=0\left(\mathbf{e}_{v}^{(i)}\right)_{k}=0 otherwise for kiLk\leq i\leq L.

Inductive step: the sub-formula φ\varphi_{\ell}. Let ii\geq\ell. Cases 1-3 should be considered.

Case 1. Let φ(x)=φj(x)φk(x)\varphi_{\ell}(x)=\varphi_{j}(x)\wedge\varphi_{k}(x). Then 𝐂j=𝐂k=1\mathbf{C}_{j\ell}=\mathbf{C}_{k\ell}=1 and 𝐛=1\mathbf{b}_{\ell}=-1. Then we have

(𝐞v(i))=σ((𝐞v(i1))j+(𝐞v(i1))k1).\displaystyle(\mathbf{e}_{v}^{(i)})_{\ell}=\sigma\left((\mathbf{e}_{v}^{(i-1)})_{j}+(\mathbf{e}_{v}^{(i-1)})_{k}-1\right).

By the induction hypothesis, (𝐞v(i1))j=1(\mathbf{e}_{v}^{(i-1)})_{j}=1 if and only if G,vφjG,v\models\varphi_{j} and (𝐞v(i1))j=0(\mathbf{e}_{v}^{(i-1)})_{j}=0 otherwise. Similarly, (𝐞v(i1))k=1(\mathbf{e}_{v}^{(i-1)})_{k}=1 if and only if G,vφkG,v\models\varphi_{k} and (𝐞v(i1))k=0(\mathbf{e}_{v}^{(i-1)})_{k}=0 otherwise. Then we have (𝐞v(i))=1(\mathbf{e}_{v}^{(i)})_{\ell}=1 if and only if (𝐞v(i1))j+(𝐞v(i1))k11(\mathbf{e}_{v}^{(i-1)})_{j}+(\mathbf{e}_{v}^{(i-1)})_{k}-1\geq 1, which means (𝐞v(i1))j=1(\mathbf{e}_{v}^{(i-1)})_{j}=1 and (𝐞v(i1))k=1(\mathbf{e}_{v}^{(i-1)})_{k}=1. Then (𝐞v(i))=1(\mathbf{e}_{v}^{(i)})_{\ell}=1 if and only if G,vφjG,v\models\varphi_{j} and G,vφkG,v\models\varphi_{k}, i.e., G,vφG,v\models\varphi_{\ell}, and (𝐞v(i))=0(\mathbf{e}_{v}^{(i)})_{\ell}=0 otherwise.

Case 2. Let φ(x)=¬φk(x)\varphi_{\ell}(x)=\neg\varphi_{k}(x). Because of 𝐂k=1\mathbf{C}_{k\ell}=-1 and 𝐛=1\mathbf{b}_{\ell}=1, we have

(𝐞v(i))=σ((𝐞v(i1))k+1).\displaystyle(\mathbf{e}_{v}^{(i)})_{\ell}=\sigma\left(-(\mathbf{e}_{v}^{(i-1)})_{k}+1\right).

By the induction hypothesis, (𝐞v(i1))k=1(\mathbf{e}_{v}^{(i-1)})_{k}=1 if and only if G,vφkG,v\models\varphi_{k} and (𝐞v(i1))k=0(\mathbf{e}_{v}^{(i-1)})_{k}=0 otherwise. Then we have (𝐞v(i))=1(\mathbf{e}_{v}^{(i)})_{\ell}=1 if and only if (𝐞v(i1))k+11-(\mathbf{e}_{v}^{(i-1)})_{k}+1\geq 1, which means (𝐞v(i1))k=0(\mathbf{e}_{v}^{(i-1)})_{k}=0. Because (𝐞v(i1))k=0(\mathbf{e}_{v}^{(i-1)})_{k}=0 if and only if G,vφkG,v\nvDash\varphi_{k}, we have (𝐞v(i))=1(\mathbf{e}_{v}^{(i)})_{\ell}=1 if and only if G,vφkG,v\nvDash\varphi_{k}, i.e., G,vφG,v\models\varphi_{\ell}, and (𝐞v(i))=0(\mathbf{e}_{v}^{(i)})_{\ell}=0 otherwise.

Case 3. Let φ(x)=Ny(Rj(y,x)φk(y))\varphi_{\ell}(x)=\exists^{\geq N}y\left(R_{j}(y,x)\wedge\varphi_{k}(y)\right). Because of (𝐀Rj)k=1\left(\mathbf{A}_{R_{j}}\right)_{k\ell}=1 and 𝐛=N+1\mathbf{b}_{\ell}=-N+1, we have

(𝐞v(i))=σ(u𝒩Rj(v)(𝐞u(i1))kN+1).\displaystyle(\mathbf{e}_{v}^{(i)})_{\ell}=\sigma\left(\sum_{u\in\mathcal{N}_{R_{j}}(v)}(\mathbf{e}_{u}^{(i-1)})_{k}-N+1\right).

By the induction hypothesis, (𝐞u(i1))k=1(\mathbf{e}_{u}^{(i-1)})_{k}=1 if and only if G,uφkG,u\models\varphi_{k} and (𝐞u(i1))k=0(\mathbf{e}_{u}^{(i-1)})_{k}=0 otherwise. Let m=|{u|u𝒩Rj(v) and G,uφk}|m=|\{u|u\in\mathcal{N}_{R_{j}}(v)\text{ and }G,u\models\varphi_{k}\}|. Then we have (𝐞v(i))=1(\mathbf{e}_{v}^{(i)})_{\ell}=1 if and only if u𝒩Rj(v)(𝐞u(i1))kN+11\sum_{u\in\mathcal{N}_{R_{j}}(v)}(\mathbf{e}_{u}^{(i-1)})_{k}-N+1\geq 1, which means mNm\geq N. Because G,uφkG,u\models\varphi_{k}, uu is connected to vv with relation RjR_{j}, and mNm\geq N, we have (𝐞v(i))=1(\mathbf{e}_{v}^{(i)})_{\ell}=1 if and only if G,vφG,v\models\varphi_{\ell} and (𝐞v(i))=0(\mathbf{e}_{v}^{(i)})_{\ell}=0 otherwise.

To learn a logical formula φ(x)\varphi(x), we only need to apply a linear classifier to 𝐞v(L),v𝒱\mathbf{e}_{v}^{(L)},v\in\mathcal{V} to extract the component of 𝐞v(L)\mathbf{e}_{v}^{(L)} corresponding to φ\varphi. If G,vφG,v\models\varphi, the value of the extracted component is 1, and it is 0 otherwise. ∎
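The construction above is fully mechanical. The following sketch (toy graph and formula chosen purely for illustration) instantiates (2) for φ(x)=∃^{≥1}y(R_1(y,x)∧P(y)), with sub-formulas φ_1=P(x) and φ_2=φ, and checks the claimed behavior.

```python
import numpy as np

# Target formula: phi(x) = exists>=1 y (R1(y, x) ^ P(y)), decomposed into
# sub[phi] = (phi_1, phi_2) with phi_1(x) = P(x) (Case 0) and phi_2 = phi (Case 3, N = 1).
L = 2
C = np.zeros((L, L)); A_R1 = np.zeros((L, L)); b = np.zeros(L)
C[0, 0] = 1.0        # Case 0: column of phi_1
A_R1[0, 1] = 1.0     # Case 3: (A_R1)_{k,l} = 1 with k = 0 (phi_1), l = 1 (phi_2)
b[1] = -1 + 1        # Case 3: b_l = -N + 1 with N = 1

sigma = lambda x: np.clip(x, 0.0, 1.0)   # sigma = min(max(0, x), 1)

# Toy KG: entity 0 satisfies P; triplet 0 --R1--> 1; entity 2 is isolated.
e = np.zeros((3, L)); e[0, 0] = 1.0      # initial representations encode the atomic predicates
edges_R1 = [(0, 1)]                      # pairs (u, v) with R1(u, v)

for _ in range(L):                       # run L layers of the simple MPNN (2)
    agg = np.zeros_like(e)
    for u, v in edges_R1:
        agg[v] += e[u] @ A_R1
    e = sigma(e @ C + agg + b)

print(e[:, 1])   # component of phi: entity 1 -> 1.0 (phi holds), entities 0 and 2 -> 0.0
```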

Next, we prove the forward direction of Theorem C.2.

Theorem C.4.

A formula φ(x)\varphi(x) is learned by MPNN (1) only if it can be expressed as a formula in CML.

To prove Theorem C.4, we introduce Definition C.5, Lemma C.6, Theorem C.7, and Lemma C.8.

Definition C.5 (Unraveling tree).

Let GG be a KG, vv be an entity in GG, and LL\in\mathbb{N}. The unraveling of vv in GG at depth LL, denoted by UnrGL(v)\text{Unr}_{G}^{L}(v), is a tree composed of

  • a node (v,R1,u1,,Ri,ui)(v,R_{1},u_{1},\cdots,R_{i},u_{i}) for each path (v,R1,u1,,Ri,ui)(v,R_{1},u_{1},\cdots,R_{i},u_{i}) in GG with iLi\leq L,

  • an edge RiR_{i} between (v,R1,u1,,Ri1,ui1)(v,R_{1},u_{1},\cdots,R_{i-1},u_{i-1}) and (v,R1,u1,,Ri,ui)(v,R_{1},u_{1},\cdots,R_{i},u_{i}) when (ui,Ri,ui1)(u_{i},R_{i},u_{i-1}) is a triplet in GG (assume u0u_{0} is vv), and

  • each node (v,R1,u1,,Ri,ui)(v,R_{1},u_{1},\cdots,R_{i},u_{i}) has the same properties as uiu_{i} in GG.
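A direct way to materialize Definition C.5 is a breadth-first expansion along incoming edges; the following sketch (dictionary-based KG, illustrative names) enumerates the nodes of Unr_G^L(v).

```python
def unravel(triplets, v, L):
    """Enumerate the nodes of the depth-L unraveling tree of entity v.

    Each node is a path (v, R1, u1, ..., Ri, ui); ui is a child of u_{i-1}
    whenever (ui, Ri, u_{i-1}) is a triplet, i.e. the tree grows along
    incoming edges of the current entity.
    """
    incoming = {}                               # incoming[w] = [(R, u) with (u, R, w) a fact]
    for u, R, w in triplets:
        incoming.setdefault(w, []).append((R, u))

    nodes, frontier = [(v,)], [(v,)]
    for _ in range(L):
        next_frontier = []
        for path in frontier:
            last = path[-1]                     # entity u_{i-1} at the end of the path
            for R, u in incoming.get(last, []):
                child = path + (R, u)
                nodes.append(child)
                next_frontier.append(child)
        frontier = next_frontier
    return nodes

# toy usage: facts 1 --r--> 0 and 2 --r--> 0; unravel entity 0 to depth 2
print(unravel([(1, "r", 0), (2, "r", 0)], 0, 2))
```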

Lemma C.6.

Let GG and GG^{\prime} be two KGs, vv and vv^{\prime} be two entities in GG and GG^{\prime} respectively. Then for every LL\in\mathbb{N}, the RWL test (Barcelo et al., 2022) assigns the same color/hash to vv and vv^{\prime} at round LL if and only if there is an isomorphism between UnrGL(v)\text{Unr}_{G}^{L}(v) and UnrGL(v)\text{Unr}_{G^{\prime}}^{L}(v^{\prime}) sending vv to vv^{\prime}.

Proof.

Base Case: When L=1L=1, the result is obvious.

Induction Hypothesis: Relational WL (RWL) test assigns the same color to vv and vv^{\prime} at round L1L-1 if and only if there is an isomorphism between UnrGL1(v)\text{Unr}_{G}^{L-1}(v) and UnrGL1(v)\text{Unr}_{G^{\prime}}^{L-1}(v^{\prime}) sending vv to vv^{\prime}.

Proof: In the LL-th round,

\bullet Prove “same color \Rightarrow isomorphism”.

cL(v)=\displaystyle c^{L}(v)= hash(cL1(v),{{(cL1(u),Ri)|u𝒩Ri(v),i=1,,r}}),\displaystyle\text{hash}(c^{L-1}(v),\big{\{}\big{\{}(c^{L-1}(u),R_{i})|u\in\mathcal{N}_{R_{i}}(v),i=1,\cdots,r\big{\}}\big{\}}),
cL(v)=\displaystyle c^{L}(v^{\prime})= hash(cL1(v),{{(cL1(u),Ri)|u𝒩Ri(v),i=1,,r}}).\text{hash}(c^{L-1}(v^{\prime}),\big{\{}\big{\{}(c^{L-1}(u^{\prime}),R_{i})|u^{\prime}\in\mathcal{N}_{R_{i}}(v^{\prime}),i=1,\cdots,r\big{\}}\big{\}}).

Because cL(v)=cL(v)c^{L}(v)=c^{L}(v^{\prime}), we have cL1(v)=cL1(v)c^{L-1}(v)=c^{L-1}(v^{\prime}), and for each u𝒩Ri(v)u\in\mathcal{N}_{R_{i}}(v) there exists a matching u𝒩Ri(v)u^{\prime}\in\mathcal{N}_{R_{i}}(v^{\prime}) such that

(cL1(u),Ri)=(cL1(u),Ri).\displaystyle(c^{L-1}(u),R_{i})=(c^{L-1}(u^{\prime}),R_{i}).

Then we have cL1(u)=cL1(u)c^{L-1}(u)=c^{L-1}(u^{\prime}). According to the induction hypothesis, we have UnrGL1(u)UnrGL1(u)\text{Unr}_{G}^{L-1}(u)\cong\text{Unr}_{G^{\prime}}^{L-1}(u^{\prime}). Also, because the edge connecting the entity pair (v,u)(v,u) and the pair (v,u)(v^{\prime},u^{\prime}) is RiR_{i} in both trees, there is an isomorphism between UnrGL(v)\text{Unr}_{G}^{L}(v) and UnrGL(v)\text{Unr}_{G^{\prime}}^{L}(v^{\prime}) sending vv to vv^{\prime}.

\bullet Prove “isomorphism \Rightarrow same color”.

Because there exists an isomorphism π\pi between UnrGL(v)\text{Unr}_{G}^{L}(v) and UnrGL(v)\text{Unr}_{G^{\prime}}^{L}(v^{\prime}) sending vv to vv^{\prime}, π\pi restricts to a bijection between the neighbors of vv and vv^{\prime}: for u𝒩Ri(v)u\in\mathcal{N}_{R_{i}}(v) and u=π(u)𝒩Ri(v)u^{\prime}=\pi(u)\in\mathcal{N}_{R_{i}}(v^{\prime}), the relation between the entity pair (u,v)(u,v) and the pair (u,v)(u^{\prime},v^{\prime}) is RiR_{i}.

Next we prove cL1(u)=cL1(u)c^{L-1}(u)=c^{L-1}(u^{\prime}). Because UnrGL(v)\text{Unr}_{G}^{L}(v) and UnrGL(v)\text{Unr}_{G^{\prime}}^{L}(v^{\prime}) are isomorphic and π\pi maps u𝒩Ri(v)u\in\mathcal{N}_{R_{i}}(v) to u𝒩Ri(v)u^{\prime}\in\mathcal{N}_{R_{i}}(v^{\prime}), the restriction of π\pi to the depth-(L1)(L-1) subtrees rooted at uu and uu^{\prime} is an isomorphism between UnrGL1(u)\text{Unr}_{G}^{L-1}(u) and UnrGL1(u)\text{Unr}_{G^{\prime}}^{L-1}(u^{\prime}). According to the induction hypothesis, we have cL1(u)=cL1(u)c^{L-1}(u)=c^{L-1}(u^{\prime}); for the same reason, cL1(v)=cL1(v)c^{L-1}(v)=c^{L-1}(v^{\prime}). Therefore, after running one more round of the RWL test, we have cL(v)=cL(v)c^{L}(v)=c^{L}(v^{\prime}). ∎
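For reference, one refinement round of the RWL test used in this lemma can be sketched as follows, with the hash function replaced by a canonical (sorted) form of the multiset; all names are illustrative.

```python
def rwl_round(triplets, colors):
    """One refinement round of the relational WL test: the new color of v
    combines its old color with the multiset of (neighbor color, relation)
    pairs over incoming edges (u, R, v)."""
    neigh = {v: [] for v in colors}
    for u, R, v in triplets:
        neigh[v].append((colors[u], R))
    # a sorted tuple is a canonical form of the multiset and stands in for hash(...)
    return {v: (colors[v], tuple(sorted(neigh[v]))) for v in colors}

# toy usage: entities 1 and 2 have identical relational neighborhoods,
# so they keep identical colors in every round
facts = [(0, "r", 1), (0, "r", 2)]
c = {0: 0, 1: 0, 2: 0}
for _ in range(2):
    c = rwl_round(facts, c)
print(c[1] == c[2])   # True
```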

Theorem C.7.

Let φ(x)\varphi(x) be a unary formula in the formal description of graph GG in Section 3.1. If φ(x)\varphi(x) is not equivalent to a formula in CML, there exist two KGs GG and GG^{\prime} and two entities vv in GG and vv^{\prime} in GG^{\prime} such that UnrGL(v)UnrGL(v)\text{Unr}_{G}^{L}(v)\cong\text{Unr}_{G^{\prime}}^{L}(v^{\prime}) for every LL\in\mathbb{N} and such that G,vφG,v\models\varphi but G,vφG^{\prime},v^{\prime}\nvDash\varphi.

Proof.

The theorem follows directly from Theorem 2.2 in Otto (2019), because G,v#G,vG,v\sim_{\#}G^{\prime},v^{\prime} and UnrGL(v)UnrGL(v)\text{Unr}_{G}^{L}(v)\cong\text{Unr}_{G^{\prime}}^{L}(v^{\prime}) for every LL\in\mathbb{N} are equivalent by the definition of counting bisimulation (denoted by #\sim_{\#}). ∎

Lemma C.8.

If a formula φ(x)\varphi(x) is not equivalent to any formula in CML, there is no MPNN (1) that can learn φ(x)\varphi(x).

Proof.

Assume for a contradiction that there exists an MPNN that can learn φ(x)\varphi(x). Since φ(x)\varphi(x) is not equivalent to any formula in CML, by Theorem C.7, there exist two KGs GG and GG^{\prime} and two entities vv in GG and vv^{\prime} in GG^{\prime} such that UnrGL(v)UnrGL(v)\text{Unr}_{G}^{L}(v)\cong\text{Unr}_{G^{\prime}}^{L}(v^{\prime}) for every LL\in\mathbb{N} and such that G,vφG,v\models\varphi and G,vφG^{\prime},v^{\prime}\nvDash\varphi. By Lemma C.6, because UnrGL(v)UnrGL(v)\text{Unr}_{G}^{L}(v)\cong\text{Unr}_{G^{\prime}}^{L}(v^{\prime}) for every LL\in\mathbb{N}, we have 𝐞v(L)=𝐞v(L)\mathbf{e}_{v}^{(L)}=\mathbf{e}_{v^{\prime}}^{(L)}, so the MPNN must assign the same truth value to vv and vv^{\prime}. But this contradicts the assumption that the MPNN learns φ(x)\varphi(x). ∎

Proof of Theorem C.4.

The theorem is obtained directly from Lemma C.8 by contraposition. ∎

Proof of Theorem C.2.

The theorem is obtained directly by combining Lemma C.3 and Theorem C.4. ∎

The following two remarks intuitively explain why MPNN can learn formulas in CML.

Remark C.9.

Theorem C.2 applies to both CML[G]\text{CML}[G] and CML[G,𝖼1,𝖼2,,𝖼k]\text{CML}[G,\mathsf{c}_{1},\mathsf{c}_{2},\cdots,\mathsf{c}_{k}]. The atomic unary predicate Pi(x)P_{i}(x) in CML of graph GG is learned by the initial representations 𝐞v(0),v𝒱\mathbf{e}_{v}^{(0)},v\in\mathcal{V}, which can be achieved by assigning special vectors to 𝐞v(0),v𝒱\mathbf{e}_{v}^{(0)},v\in\mathcal{V}. In particular, the constant predicate Pc(x)P_{c}(x) in CML[G,𝖼]\text{CML}[G,\mathsf{c}] is learned by assigning a unique vector (e.g., one-hot vector for different entities) as the initial representation of the entity with unique identifier 𝖼\mathsf{c}. The other sub-formulas ¬φ(x),φ1(x)φ2(x)\neg\varphi(x),\varphi_{1}(x)\wedge\varphi_{2}(x) in Definition A.1 can be learned by continuous logical operations (Arakelyan et al., 2021) which are independent of message-passing mechanisms.
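In implementation terms, this remark corresponds to a labeling trick at initialization; a minimal sketch, assuming the first coordinates of the representation are reserved for constant predicates (names are illustrative):

```python
import numpy as np

def initial_representations(num_entities, dim, constants):
    """Initialize entity representations so that each constant c_i is marked by
    a unique one-hot coordinate, realizing the constant predicate P_{c_i}(x).

    constants : list of entity ids carrying unique identifiers
                (for QL-GNN this is just the query head entity h).
    """
    assert len(constants) <= dim, "reserve one coordinate per constant"
    e0 = np.zeros((num_entities, dim))
    for i, c in enumerate(constants):
        e0[c, i] = 1.0          # coordinate i encodes the constant predicate of c
    return e0

# toy usage: 5 entities, query head entity h = 3
e0 = initial_representations(num_entities=5, dim=8, constants=[3])
```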

Remark C.10.

Assume the (i1)(i-1)-th layer representations 𝐞v(i1),v𝒱\mathbf{e}_{v}^{(i-1)},v\in\mathcal{V} can learn the formula φ(x)\varphi(x) in CML. Then the ii-th layer representations 𝐞v(i),v𝒱\mathbf{e}_{v}^{(i)},v\in\mathcal{V} of MPNN can learn Ny,Rj(y,x)φ(y)\exists^{\geq N}y,R_{j}(y,x)\wedge\varphi(y) with a specific aggregation function in (1), because 𝐞v(i),v𝒱\mathbf{e}_{v}^{(i)},v\in\mathcal{V} aggregates the logical formulas encoded in the one-hop neighbor representations 𝐞v(i1),v𝒱\mathbf{e}_{v}^{(i-1)},v\in\mathcal{V} (i.e., φ(x)\varphi(x)) via the message-passing mechanism.

The following remark clarifies the scope of Theorems C.2 and 3.2.

Remark C.11.

The positive results of our theorem (e.g., an MPNN variant can learn a logical formula) hold for MPNNs at least as powerful as the MPNN we construct in (2), while our negative results (e.g., an MPNN variant cannot learn a logical formula) hold for any general MPNN (1). Hence, the forward direction remains valid irrespective of the aggregate and combine operators under consideration: this limitation is inherent to the MPNN architecture represented by (1) and not specific to the chosen representation update functions. On the other hand, the backward direction holds for MPNNs that are at least as powerful as (2).

C.3 Proof of Theorem 3.2

Definition C.12.

QL-GNN learns a rule formula R(𝗁,x)R(\mathsf{h},x) if and only if, given any graph GG, the QL-GNN’s score of a new triplet (h,R,t)(h,R,t) can be mapped to a binary value, where True indicates that R(𝗁,x)R(\mathsf{h},x) is satisfied at entity tt and False indicates that it is not.

Proof.

We set the KG as GG and restrict the unary formulas in CML[G,𝗁]\text{CML}[G,\mathsf{h}] to the form of R(𝗁,x)R(\mathsf{h},x). The theorem is then directly obtained from Theorem C.2 because the constant hh can be equivalently transformed into the constant predicate Ph(x)P_{h}(x). ∎

Proof of Corollary 3.3.

Base case: the unary predicates can be encoded into the initial representations of the entities according to Section C.1, so the base case is obvious.

Recursion rule: Since the rule structures R(𝗁,x),R1(𝗁,x),R2(𝗁,x)R(\mathsf{h},x),R_{1}(\mathsf{h},x),R_{2}(\mathsf{h},x) are unary predicates that can be learned by QL-GNN, they are formulas in CML[G,𝗁]\text{CML}[G,\mathsf{h}]. According to the recursive definition of CML, R1(𝗁,x)R2(𝗁,x)R_{1}(\mathsf{h},x)\wedge R_{2}(\mathsf{h},x) and Ny(Ri(y,x)R(𝗁,y))\exists^{\geq N}y\left(R_{i}(y,x)\wedge R(\mathsf{h},y)\right) are also formulas in CML[G,𝗁]\text{CML}[G,\mathsf{h}] and can therefore be learned by QL-GNN. ∎

C.4 Proof of Theorem 3.4

Definition C.13.

CompGCN learns a rule formula R(x,y)R(x,y) if and only if, given any graph GG, the CompGCN’s score of a new triplet (h,R,t)(h,R,t) can be mapped to a binary value, where True indicates that R(x,y)R(x,y) is satisfied at the entity pair (h,t)(h,t) and False indicates that it is not.

Proof.

According to Theorem C.2, the MPNN representation 𝐞v(L)\mathbf{e}_{v}^{(L)} can represent the formulas in CML[G]\text{CML}[G]. Assume φ1(x)\varphi_{1}(x) and φ2(y)\varphi_{2}(y) can be represented by the MPNN representation 𝐞v(L),v𝒱\mathbf{e}_{v}^{(L)},v\in\mathcal{V}, and that there exist two functions g1g_{1} and g2g_{2} that can extract the logical formulas from 𝐞v(L)\mathbf{e}_{v}^{(L)}, i.e., gi(𝐞v(L))=1g_{i}(\mathbf{e}_{v}^{(L)})=1 if G,vφiG,v\models\varphi_{i} and gi(𝐞v(L))=0g_{i}(\mathbf{e}_{v}^{(L)})=0 if G,vφiG,v\nvDash\varphi_{i} for i=1,2i=1,2. We show how the following two logical operators can be learned by s(h,R,t)s(h,R,t) for a candidate triplet (h,R,t)(h,R,t):

  • Conjunction: φ1(x)φ2(y)\varphi_{1}(x)\wedge\varphi_{2}(y). The conjunction of φ1(x),φ2(y)\varphi_{1}(x),\varphi_{2}(y) can be learned with function s(h,R,t)=g1(𝐞h(L))g2(𝐞t(L))s(h,R,t)=g_{1}(\mathbf{e}_{h}^{(L)})\cdot g_{2}(\mathbf{e}_{t}^{(L)}).

  • Negation: ¬φ1(x)\neg\varphi_{1}(x). The negation of φ1(x)\varphi_{1}(x) can be learned with function s(h,R,t)=1g1(𝐞h(L))s(h,R,t)=1-g_{1}(\mathbf{e}_{h}^{(L)}).

The disjunction \vee can be obtained by ¬(¬φ1(x)¬φ2(y))\neg(\neg\varphi_{1}(x)\wedge\neg\varphi_{2}(y)). More complex formulas involving sub-formulas from {φ(x)}\{\varphi(x)\} and {φ(y)}\{\varphi^{\prime}(y)\} can be learned by combining the score functions above. ∎
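The score functions used in this proof are simple soft-logic combinations of the two endpoint readouts; a minimal sketch with illustrative readouts g1 and g2 that extract the formulas from fixed coordinates of the final-layer representations:

```python
def conj_score(g1, g2, e_h, e_t):
    """phi_1(x) ^ phi_2(y): product of the two binary readouts."""
    return g1(e_h) * g2(e_t)

def neg_score(g1, e_h):
    """not phi_1(x)."""
    return 1.0 - g1(e_h)

def disj_score(g1, g2, e_h, e_t):
    """phi_1(x) v phi_2(y) via De Morgan: not(not phi_1(x) ^ not phi_2(y))."""
    return 1.0 - (1.0 - g1(e_h)) * (1.0 - g2(e_t))

# toy readouts: the formulas are assumed to be stored in fixed coordinates
# of the final-layer representations
g1 = lambda e: float(e[0] >= 0.5)
g2 = lambda e: float(e[1] >= 0.5)
```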

C.5 Proof of Proposition 4.1

Lemma C.14.

Assume φ(x)\varphi(x) describes a single-connected rule structure 𝖦\mathsf{G} in a KG. If constants are assigned to the entities with out-degree larger than 1 in the KG, the structure 𝖦\mathsf{G} can be described by a formula φ(x)\varphi^{\prime}(x) in the CML of the KG with assigned constants.

Proof.

Assume for a contradiction that φ(x)\varphi^{\prime}(x) with assigned constants is not equivalent to a formula in CML. According to Theorem C.7, there exist two rule structures 𝖦,𝖦\mathsf{G},\mathsf{G}^{\prime} in KGs G,GG,G^{\prime}, an entity vv in 𝖦\mathsf{G} and an entity vv^{\prime} in 𝖦\mathsf{G}^{\prime} such that Unr𝖦L(v)Unr𝖦L(v)\text{Unr}_{\mathsf{G}}^{L}(v)\cong\text{Unr}_{\mathsf{G}^{\prime}}^{L}(v^{\prime}) for every LL\in\mathbb{N} and such that 𝖦,vφ\mathsf{G},v\models\varphi^{\prime} but 𝖦,vφ\mathsf{G}^{\prime},v^{\prime}\nvDash\varphi^{\prime}.

Since each entity in 𝖦\mathsf{G} (𝖦\mathsf{G}^{\prime}) with out-degree larger than 1 is assigned a constant, the rule structure 𝖦\mathsf{G} (𝖦\mathsf{G}^{\prime}) can be uniquely recovered from its unraveling tree Unr𝖦L(v)\text{Unr}_{\mathsf{G}}^{L}(v) (Unr𝖦L(v)\text{Unr}_{\mathsf{G}^{\prime}}^{L}(v)) for sufficiently large LL. Therefore, if Unr𝖦L(v)Unr𝖦L(v)\text{Unr}_{\mathsf{G}}^{L}(v)\cong\text{Unr}_{\mathsf{G}^{\prime}}^{L}(v^{\prime}) for every LL\in\mathbb{N}, the corresponding rule structures 𝖦\mathsf{G} and 𝖦\mathsf{G}^{\prime} must be isomorphic too, which means 𝖦,vφ\mathsf{G},v\models\varphi^{\prime} and 𝖦,vφ\mathsf{G}^{\prime},v^{\prime}\models\varphi^{\prime}, contradicting 𝖦,vφ\mathsf{G}^{\prime},v^{\prime}\nvDash\varphi^{\prime}. Thus, φ(x)\varphi^{\prime}(x) must be a formula in CML. ∎

Proof of Proposition 4.1.

The proposition holds by restricting the unary formulas in Lemma C.14 to the form of R(𝗁,x)R(\mathsf{h},x). ∎
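In practice, this is the labeling strategy behind EL-GNN: assign extra constants to entities whose out-degree exceeds a threshold (1 in Lemma C.14, the tunable hyperparameter d in the experiments). A minimal sketch with illustrative names:

```python
from collections import Counter

def entities_to_label(triplets, d):
    """Return the entities that receive an extra constant in EL-GNN,
    i.e. those whose out-degree exceeds the threshold d."""
    out_deg = Counter(u for u, _, _ in triplets)
    return [v for v, deg in out_deg.items() if deg > d]

# toy usage: entity 0 has out-degree 2 and is labeled when d = 1
facts = [(0, "r1", 1), (0, "r2", 2), (1, "r1", 2)]
print(entities_to_label(facts, d=1))   # [0]
```

The selected entities would then receive unique initial vectors, e.g., as in the sketch after Remark C.9.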

Proof of Corollary 4.2.

By converting new constants 𝖼1,𝖼2,,𝖼k\mathsf{c}_{1},\mathsf{c}_{2},\cdots,\mathsf{c}_{k} to constant predicates Pc1(x),Pc2(x),,Pck(x)P_{c_{1}}(x),P_{c_{2}}(x),\cdots,P_{c_{k}}(x), the corollary holds by using Theorem 3.2. ∎

Appendix D Experiments

D.1 More rule structures in synthetic datasets

In Section 6.1, we also include the following rule structures in the synthetic datasets, i.e., C4C_{4} and I2I_{2} in Figure 6, for experiments. C4C_{4} and I2I_{2} are both formulas from CML[G,𝗁]\text{CML}[G,\mathsf{h}]. The proof of C4C_{4} is similar to the proof of C3C_{3} in Corollary A.2. The proof of I2I_{2} is similar to that of I1I_{1} and is in Corollary D.1.

Figure 6: In the synthetic experiments, we also compare the performance of various GNNs on the synthetic datasets generated from C4C_{4} and I2I_{2}.
Corollary D.1.

I2(𝗁,x)I_{2}(\mathsf{h},x) is a formula in CML[G,𝗁]\text{CML}[G,\mathsf{h}].

Proof.

I2(𝗁,x)I_{2}(\mathsf{h},x) is a formula in CML[G,𝗁]\text{CML}[G,\mathsf{h}] as it can be recursively defined as follows

φ1(x)\displaystyle\varphi_{1}(x) =Ph(x),\displaystyle=P_{h}(x),
φ2(x)\displaystyle\varphi_{2}(x) =y,R1(y,x)φ1(y),\displaystyle=\exists y,R_{1}(y,x)\wedge\varphi_{1}(y),
φ3(x)\displaystyle\varphi_{3}(x) =y,R2(y,x)φ2(y),\displaystyle=\exists y,R_{2}(y,x)\wedge\varphi_{2}(y),
φs(x)\displaystyle\varphi_{s}(x) =2y,R4(y,x),\displaystyle=\exists^{\geq 2}y,R_{4}(y,x)\wedge\top,
φ4(x)\displaystyle\varphi_{4}(x) =φs(x)φ3(x),\displaystyle=\varphi_{s}(x)\wedge\varphi_{3}(x),
I2(𝗁,x)\displaystyle I_{2}(\mathsf{h},x) =y,R3(y,x)φ4(y).\displaystyle=\exists y,R_{3}(y,x)\wedge\varphi_{4}(y).

D.2 Experiments for CompGCN

The classical framework of KG reasoning is inadequate for assessing the expressivity of CompGCN because the query (h,R,?)(h,R,?) assumes that certain logical formulas φ(x){\varphi(x)} are satisfied at the head entity hh by default. In order to validate the expressivity of CompGCN, it is necessary to predict all missing triplets directly based on entity representations without relying on the query (h,R,?)(h,R,?). To accomplish this, we create a new dataset called SS that adheres to the rule formula S(x,y)=φ(x)φ(y)S(x,y)=\varphi^{\star}(x)\wedge\varphi^{\star}(y), where the logical formula is defined as:

φ(x)=yR1(x,y)(xR2(y,x)(yR3(x,y))).\varphi^{\star}(x)=\exists yR_{1}(x,y)\wedge\left(\exists xR_{2}(y,x)\wedge(\exists yR_{3}(x,y))\right).

Here, φ(x)\varphi^{\star}(x) is written with variable reuse (reusing xx and yy) and is a formula in CML. Therefore, the formula S(x,y)S(x,y) takes the form of R(x,y)=fR({φ(x)},{φ(y)})R(x,y)=f_{R}(\{\varphi(x)\},\{\varphi^{\prime}(y)\}) and can be learned by CompGCN, as indicated by Theorem 3.4. To validate our theorem, we generate a synthetic dataset SS using the same steps outlined in Section 6.1, following the rule S(x,y)S(x,y). We then train CompGCN on dataset SS. The experimental results demonstrate that CompGCN effectively learns the rule formula S(x,y)S(x,y) with 100% accuracy. Comparing it with QL-GNN is unnecessary since the latter is specifically designed for the KG reasoning setting involving the query (h,R,?)(h,R,?).
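Because of the variable reuse, φ⋆(x) simply asserts an outgoing R1/R2/R3 path starting at x. The sketch below (illustrative, dictionary-based KG) evaluates φ⋆ and could be used to enumerate the positive pairs of S(x,y).

```python
def phi_star(triplets, x):
    """phi*(x): there exist y, z, w with R1(x, y), R2(y, z), and R3(z, w)."""
    out = {}
    for u, R, v in triplets:
        out.setdefault((u, R), []).append(v)
    return any(
        True
        for y in out.get((x, "R1"), [])
        for z in out.get((y, "R2"), [])
        for w in out.get((z, "R3"), [])
    )

def s_pairs(triplets, entities):
    """Positive pairs of S(x, y) = phi*(x) ^ phi*(y)."""
    pos = [x for x in entities if phi_star(triplets, x)]
    return [(x, y) for x in pos for y in pos]
```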

D.3 Statistics of synthetic datasets

Table 6: Statistics of the synthetic datasets.
Datasets C3C_{3} C4C_{4} I1I_{1} I2I_{2} TT UU SS
known triplets 1514 2013 843 1546 2242 2840 320
training 1358 2265 304 674 83 396 583
validation 86 143 20 43 6 26 37
testing 254 424 57 126 15 183 109

D.4 Results on synthetic with missing triplets

We randomly remove 5%, 10%, and 20% of the edges from the synthetic datasets to test the robustness of QL-GNN and EL-GNN for rule structure learning. The results of QL-GNN and EL-GNN are shown in Tables 7 and 8, respectively. The results show that the completeness of the rule structures correlates strongly with the performance of QL-GNN and EL-GNN.

Table 7: The accuracy of QL-GNN on synthetic datasets with missing triplets.
Triplet missing ratio C3C_{3} C4C_{4} I1I_{1} I2I_{2} TT UU
5% 0.899 0.866 0.760 0.783 0.556 0.329
10% 0.837 0.718 0.667 0.685 0.133 0.279
20% 0.523 0.465 0.532 0.468 0.111 0.162
Table 8: The accuracy of EL-GNN on synthetic datasets with missing triplets.
Triplet missing ratio C3C_{3} C4C_{4} I1I_{1} I2I_{2} TT UU
5% 0.878 0.807 0.842 0.857 0.244 0.5
10% 0.766 0.674 0.725 0.661 0.222 0.347
20% 0.499 0.405 0.637 0.458 0.111 0.257

D.5 More experimental details on real datasets

MRR and Hit@10

Here we supplement the MRR and Hit@10 of NBFNet and EL-NBFNet on real datasets in Table 9. The improvement of EL-NBFNet on MRR and Hit@10 is not as significant as that on Accuracy because EL-NBFNet is designed for exactly learning rule formulas, and only Accuracy is guaranteed to improve.

Table 9: MRR and Hit@10 of NBFNet and EL-NBFNet on real datasets.
Family Kinship UMLS WN18RR FB15k-237
MRR Hit@10 MRR Hit@10 MRR Hit@10 MRR Hit@10 MRR Hit@10
NBFNet 0.983 0.993 0.900 0.997 0.970 0.997 0.548 0.657 0.415 0.599
EL-NBFNet 0.990 0.991 0.905 0.996 0.975 0.993 0.562 0.669 0.424 0.607
Different hyperparameters of dd

In Figure 4, we observed that a larger or smaller dd does not necessarily lead to better performance. We observe a similar phenomenon on real datasets in Table 10. For the real datasets, we use d=5,30,100,100,300d=5,30,100,100,300 for Family, Kinship, UMLS, WN18RR, and FB15k-237, respectively.

Table 10: The accuracy of EL-NBFNet on UMLS with different dd.
d=0d=0 d=50d=50 d=100d=100 d=150d=150 NBFNet
0.948 0.958 0.963 0.961 0.951
Time cost of EL-NBFNet

In Table 11, we show the time cost of EL-NBFNet and NBFNet on real datasets. The time cost is measured in seconds of the testing phase. The results show that EL-NBFNet is slightly slower than NBFNet. The reason is that EL-NBFNet needs to traverse all entities in the KG to assign constants to the entities with out-degree larger than the degree threshold dd.

Table 11: Time cost (seconds of testing) of EL-NBFNet on real datasets.
Methods Family Kinship UMLS WN18RR FB15k-237
EL-NBFNet 270.3 14.0 6.7 35.6 20.1
NBFNet 269.6 13.5 6.4 34.3 19.8

Appendix E Theory of GNNs for single-relational link prediction

Our theory of KG reasoning can be easily extended to the single-relational link prediction. The following two corollaries are the extensions of Theorem 3.2 and Theorem 3.4 to the single-relational link prediction, respectively.

Corollary E.1 (Theorem 3.2 on single-relational link prediction).

For single-relational link prediction, given a query (h,R,?)(h,R,?), a rule formula R(𝗁,x)R(\mathsf{h},x) is learned by QL-GNN if and only if R(𝗁,x)R(\mathsf{h},x) is a formula in CML[G,𝗁]\text{CML}[G,\mathsf{h}].

Corollary E.2 (Theorem 3.4 on single-relational link prediction).

For single-relational link prediction, CompGCN can learn the rule formula R(x,y)=fR({φ(x)},{φ(y)})R(x,y)=f_{R}\left(\{\varphi(x)\},\{\varphi^{\prime}(y)\}\right) where fRf_{R} is a logical formula involving sub-formulas from {φ(x)}\{\varphi(x)\} and {φ(y)}\{\varphi^{\prime}(y)\} which are the sets of formulas in CML[G]\text{CML}[G] that can be learned by GNN (1).

Corollaries E.1 and E.2 can be directly proven by restricting the logic of the KG to a single-relational graph, i.e., there is only one binary predicate in the logic of the graph.

Appendix F Understanding generalization based on expressivity

F.1 Understanding expressivity vs. generalization

In this section, we provide some insights on the relation between expressivity and generalization. Expressivity in deep learning pertains to a model’s capacity to accurately represent information, whereas whether a model achieves this level of expressivity depends on its generalization. Considering generalization requires not only contemplating the model design but also assessing whether the training algorithm can enable the model to achieve its expressivity. The experiments in this paper also illustrate this relation between expressivity and generalization from two perspectives: (1) the experimental results of QL-GNN show that its expressivity can be achieved with classical deep learning training strategies; (2) in the development of deep learning, a consensus is that more expressivity often leads to better generalization, and the experimental results of EL-GNN verify this consensus.

In addition, our theory can provide some insights on model design with better generalization. Based on the constructive proof of Lemma C.3, if QL-GNN can learn a rule formula R(𝗁,x)R(\mathsf{h},x) whose recursive definition has LL sub-formulas, QL-GNN can learn R(𝗁,x)R(\mathsf{h},x) with no fewer than LL layers and hidden dimensions. Assuming we learn rr relations with QL-GNN and the numbers of sub-formulas in the recursive definitions of these relations are L1,L2,,LrL_{1},L_{2},\cdots,L_{r} respectively, QL-GNN can learn these relations with no more than maxiLimax_{i}L_{i} layers and no more than Li\sum L_{i} hidden dimensions. Since these bounds are nearly worst-case scenarios, both the dimensions and layers can be further optimized. Also, in the constructive proof of Lemma C.3, the aggregation function is summation, and it is difficult for mean and max/min aggregation functions to capture the sub-formula Ny(Ri(y,x)R(𝗁,y))\exists^{\geq N}y\left(R_{i}(y,x)\wedge R(\mathsf{h},y)\right). From the perspective of rule learning, QL-GNN extracts structural information at each layer. Therefore, to learn rule structures, QL-GNN needs an activation function with compression capability for information extraction from its inputs. Empirically, QL-GNN with the identity activation function fails to learn the rules in the synthetic datasets.

Moreover, because our theory cannot help understand generalization related to network training, the dependence on hyperparameters of network training, e.g., the number of training examples, graph size, and number of entities, cannot be revealed by our theory.

F.2 Why assigning lots of constants hurts generalization?

We take the relation C3C_{3} as an example to show, from a logical perspective, why assigning many constants hurts generalization. We add two different constants 𝖼1\mathsf{c}_{1} and 𝖼2\mathsf{c}_{2} to the rule formula C3(h,x)C_{3}(h,x), which results in two different rule formulas C3(𝗁,x)=z1R1(𝗁,z1)R2(z1,𝖼1)R3(𝖼1,x)C_{3}^{\prime}(\mathsf{h},x)=\exists z_{1}R_{1}(\mathsf{h},z_{1})\wedge R_{2}(z_{1},\mathsf{c}_{1})\wedge R_{3}(\mathsf{c}_{1},x) and C3(𝗁,x)=z1R1(𝗁,z1)R2(z1,𝖼2)R3(𝖼2,x)C_{3}^{\star}(\mathsf{h},x)=\exists z_{1}R_{1}(\mathsf{h},z_{1})\wedge R_{2}(z_{1},\mathsf{c}_{2})\wedge R_{3}(\mathsf{c}_{2},x). Predicting new triplets for relation C3C_{3} can now be achieved by learning the rule formulas C3(𝗁,x),C3(𝗁,x)C_{3}(\mathsf{h},x),C_{3}^{\prime}(\mathsf{h},x), or C3(𝗁,x)C_{3}^{\star}(\mathsf{h},x). Among these rule formulas, C3(𝗁,x)C_{3}(\mathsf{h},x) is the rule with the best generalization, while C3(𝗁,x)C_{3}^{\prime}(\mathsf{h},x) and C3(𝗁,x)C_{3}^{\star}(\mathsf{h},x) require the rule structure to pass through the entities with the identifiers of constants 𝖼1\mathsf{c}_{1} and 𝖼2\mathsf{c}_{2}, respectively. Thus, when adding constants, maintaining performance requires the network to learn both rule formulas C3(𝗁,x),C3(𝗁,x)C_{3}^{\prime}(\mathsf{h},x),C_{3}^{\star}(\mathsf{h},x) simultaneously, which may require a network with larger capacity. Even though EL-GNN does not need to learn C3(𝗁,x),C3(𝗁,x)C_{3}^{\prime}(\mathsf{h},x),C_{3}^{\star}(\mathsf{h},x) since C3(𝗁,x)C_{3}(\mathsf{h},x) is learnable, EL-GNN cannot avoid learning rules with more than one constant when the rules are beyond CML.

Appendix G Limitations and Impacts

Our work offers a fresh perspective on understanding GNN’s expressivity in KG reasoning. Unlike most existing studies that focus on distinguishing ability, we analyze GNN’s expressivity based solely on its ability to learn rule structures. Our work has the potential to inspire further studies. For instance, our theory analyzes GNN’s ability to learn a single relation, but in practice, GNNs are often applied to learn multiple relations. Therefore, determining the number of relations that GNNs can effectively learn for KG reasoning remains an interesting problem that can help determine the size of GNNs. Furthermore, while our experiments are conducted on synthetic datasets without missing triplets, real datasets are incomplete (e.g., missing triplets in testing sets). Thus, understanding the expressivity of GNNs for KG reasoning on incomplete datasets remains an important challenge.