
Received March 31, 2021

Interlacing Results for Hypergraphs

Raffaella Mulas 1,2

1 The Alan Turing Institute, London NW1 2DB, UK
2 University of Southampton, Southampton SO17 1BJ, UK
Abstract

Hypergraphs are a generalization of graphs in which edges can connect any number of vertices. They allow the modeling of complex networks with higher-order interactions, and their spectral theory studies the qualitative properties that can be inferred from the spectrum, i.e. the multiset of the eigenvalues, of an operator associated to a hypergraph. It is expected that a small perturbation of a hypergraph, such as the removal of a few vertices or edges, does not lead to a major change of the eigenvalues. In particular, it is expected that the eigenvalues of the original hypergraph interlace the eigenvalues of the perturbed hypergraph. Here we work with hypergraphs where, in addition, each vertex–edge incidence is given a real number, and we prove interlacing results for the adjacency matrix, the Kirchhoff Laplacian and the normalized Laplacian. Tightness of the inequalities is also shown.

Keywords: networks, hypergraphs, eigenvalues, spectral theory, interlacing

1 Introduction

Hypergraphs are a generalization of graphs in which edges can connect any number of vertices. They allow the modeling of Bitcoin transactions [17], quantum entropies [2], chemical reaction networks [12], cellular networks [15], social networks [20], neural networks [6], opinion formation [16], epidemic networks [3], and transportation networks [1]. Moreover, hypergraphs with real coefficients were introduced in [13] as a generalization of classical hypergraphs in which, in addition, each vertex–edge incidence is given a real coefficient. These coefficients make it possible to model, for instance, the stoichiometric coefficients of a chemical reaction network, or the probability that a given vertex belongs to an edge. The adjacency matrix and the normalized Laplacian associated to such hypergraphs were also introduced in [13], while the corresponding Kirchhoff Laplacian was introduced in [9].
Here we study the spectral properties of these operators and we prove, in particular, interlacing results. We show that, given an operator \mathcal{O} (which is either the adjacency matrix, the normalized Laplacian or the Kirchhoff Laplacian), the eigenvalues of the operator \mathcal{O}(G) associated to a hypergraph G interlace the eigenvalues of \mathcal{O}(G'), if G' is obtained from G by deleting vertices or edges. We also prove the tightness of these inequalities.
Since spectral theory studies the qualitative properties of a graph (and, more generally, of a hypergraph) that can be inferred from the spectra of its associated operators, interlacing results are meaningful as they offer a measure of how much a spectrum changes when deleting vertices or edges. We refer the reader to [10, 4, 18, 7] for some literature on interlacing results in the case of graphs, simplicial complexes and hypergraphs.
The paper is structured as follows. In Section 2 we offer an overview of the definitions on hypergraphs that will be needed throughout this paper. In Section 3 we recall the Courant–Fischer–Weyl min-max principle and we apply it to characterize the eigenvalues of the adjacency matrix, normalized Laplacian and Kirchhoff Laplacian associated to a hypergraph. In Section 4 we apply the Cauchy interlacing Theorem to prove various interlacing results, and in Section 5 we prove some additional interlacing results for the normalized Laplacian, using a generalization of the proof method developed by Butler in [4]. Finally, in Section 6 we draw some conclusions.

2 Definitions

We recall the basic definitions on hypergraphs with real coefficients, following [13].

Definition 2.1.

A hypergraph with real coefficients (Fig. 1) is a triple G=(V,E,\mathcal{C}) such that:

  • V=\{v_{1},\ldots,v_{n}\} is a finite set of nodes or vertices;

  • E=\{e_{1},\ldots,e_{m}\} is a multiset of elements e_{j}\subseteq V called edges;

  • \mathcal{C}=\{C_{v,e}:v\in V\text{ and }e\in E\} is a set of coefficients C_{v,e}\in\mathbb{R} such that

    C_{v,e}=0\iff v\notin e. (1)
Figure 1: A hypergraph with real coefficients that has 6 vertices and 2 edges.

From here on, we fix a hypergraph with real coefficients G=(V,E,\mathcal{C}) and we assume that each vertex is contained in at least one edge, that is, there are no isolated vertices.

Definition 2.2.

Given e\in E, its cardinality, denoted |e|, is the number of vertices that are contained in e.

Remark 2.1.

The oriented hypergraphs introduced by Reff and Rusnak in [19] are hypergraphs with real coefficients such that C_{v,e}\in\{-1,0,1\} for each v\in V and e\in E. Signed graphs are oriented hypergraphs such that |e|=2 for each e\in E, and simple graphs are signed graphs such that, for each e\in E, there exist a unique v\in V and a unique w\in V satisfying

C_{v,e}=-C_{w,e}=1.

Moreover, weighted hypergraphs are hypergraphs with real coefficients such that, for each e\in E and for each v\in e, C_{v,e}=:\omega(e) does not depend on v.

Definition 2.3.

Given v\in V, its degree is

\deg(v):=\sum_{e\in E}(C_{v,e})^{2}. (2)

The n\times n diagonal degree matrix of G is

D:=D(G)=\mathrm{diag}\bigl(\deg(v_{i})\bigr)_{i=1,\ldots,n}.

Note that D is invertible, since we are assuming that there are no isolated vertices.

Definition 2.4.

The n\times n adjacency matrix of G is A:=A(G)=(A_{ij})_{ij}, where A_{ii}:=0 for all i=1,\ldots,n and

A_{ij}:=-\sum_{e\in E}C_{v_{i},e}\cdot C_{v_{j},e}\quad\text{for all }i\neq j.
Definition 2.5.

The n\times m incidence matrix of G is \mathcal{I}:=\mathcal{I}(G)=(\mathcal{I}_{ij})_{ij}, where

\mathcal{I}_{ij}:=C_{v_{i},e_{j}}.
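
As a small illustration of Definitions 2.3–2.5 (this sketch is not part of the original development; it assumes NumPy and an arbitrary coefficient matrix C, with C[i, j] playing the role of C_{v_i,e_j}), the degree matrix, the adjacency matrix and the incidence matrix can be assembled as follows.

```python
import numpy as np

# Hypothetical coefficient matrix; its sparsity pattern encodes condition (1).
# This 3-vertex, 2-edge hypergraph is only an illustration (it is not the one of Fig. 1).
C = np.array([[ 1.0, -0.5],
              [ 2.0,  0.0],    # v_2 does not belong to e_2, hence C_{v_2,e_2} = 0
              [-1.0,  3.0]])

Inc = C                                 # incidence matrix           (Definition 2.5)
deg = (C ** 2).sum(axis=1)              # deg(v_i) = sum_e C_{v_i,e}^2   (Definition 2.3)
D = np.diag(deg)                        # diagonal degree matrix
A = -(C @ C.T)                          # A_{ij} = -sum_e C_{v_i,e} C_{v_j,e} for i != j
np.fill_diagonal(A, 0.0)                # A_{ii} = 0                 (Definition 2.4)
```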
Definition 2.6.

The normalized Laplacian of G is the n\times n matrix

L:=L(G)=\operatorname{Id}-D(G)^{-1/2}A(G)D(G)^{-1/2},

where \operatorname{Id} is the n\times n identity matrix.

Remark 2.2.

In [13], the normalized Laplacian is defined as the n\times n matrix

\mathcal{L}(G):=\operatorname{Id}-D(G)^{-1}A(G),

which is not necessarily symmetric. Here we choose to work with L(G), which generalizes the classical normalized Laplacian for graphs introduced by Fan Chung in [5], so that we can apply the properties of symmetric matrices. From a spectral point of view, working with L(G) or \mathcal{L}(G) is equivalent. In fact,

\mathcal{L}(G)=D(G)^{-1/2}L(G)D(G)^{1/2},

hence the matrices L(G) and \mathcal{L}(G) are similar and, therefore, isospectral.

The Kirchhoff Laplacian, in the context of hypergraphs with real coefficients, was introduced by Hirono et al. [9]. We recall it and we introduce the dual Kirchhoff Laplacian.

Definition 2.7.

The Kirchhoff Laplacian of G is the n\times n matrix

K:=K(G)=\mathcal{I}(G)\cdot\mathcal{I}(G)^{\top}=D(G)-A(G).

The dual Kirchhoff Laplacian of G is the m\times m matrix

K^{*}:=K^{*}(G):=\mathcal{I}(G)^{\top}\cdot\mathcal{I}(G).
Remark 2.3.

K(G) and K^{*}(G) have the same non-zero eigenvalues. This follows from the fact that, if f and g are linear operators, then the non-zero eigenvalues of fg and gf coincide.

Given an n\times n real symmetric matrix Q, its spectrum consists of n real eigenvalues, counted with multiplicity. We denote them by

\lambda_{1}(Q)\leq\ldots\leq\lambda_{n}(Q).

Since the trace of an n\times n matrix (i.e. the sum of its diagonal elements) equals the sum of its eigenvalues, we have

\sum_{i=1}^{n}\lambda_{i}(A)=0,\quad\sum_{i=1}^{n}\lambda_{i}(L)=n,\quad\text{and}\quad\sum_{i=1}^{n}\lambda_{i}(K)=\sum_{i=1}^{n}\deg(v_{i}).
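
A minimal numerical sketch of the operators defined so far, under the assumption that the coefficients are stored as a NumPy array (the random hypergraph below is purely illustrative and does not correspond to any example of this paper); it checks the trace identities above and the statement of Remark 2.3.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3
C = rng.normal(size=(n, m))                 # illustrative coefficients C[i, j] = C_{v_i, e_j}

deg = (C ** 2).sum(axis=1)                  # Definition 2.3
D = np.diag(deg)
A = -(C @ C.T); np.fill_diagonal(A, 0.0)    # Definition 2.4
K = C @ C.T                                 # Kirchhoff Laplacian  K = I I^T
K_star = C.T @ C                            # dual Kirchhoff Laplacian  K^* = I^T I
L = np.eye(n) - np.diag(deg ** -0.5) @ A @ np.diag(deg ** -0.5)   # Definition 2.6

assert np.allclose(K, D - A)                # K = D - A

# Trace identities: tr(A) = 0, tr(L) = n, tr(K) = sum of the degrees.
assert np.isclose(np.trace(A), 0.0) and np.isclose(np.trace(L), n)
assert np.isclose(np.trace(K), deg.sum())

# Remark 2.3: K and K^* share the non-zero eigenvalues (here K has n - m extra zeros).
assert np.allclose(np.linalg.eigvalsh(K)[n - m:], np.linalg.eigvalsh(K_star))
```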

The idea of the interlacing results that we will prove is to show how the removal of part of the hypergraph affects the eigenvalues of its associated operators. We define two operations for removing part of a hypergraph, vertex deletion and edge deletion, as generalizations of the ones in [18]. Similarly, we also define the restriction of a hypergraph to a subset of edges.

Definition 2.8.

Given v\in V, we let G\setminus v:=(V\setminus\{v\},E_{v},\mathcal{C}_{v}), where:

  • E_{v}:=\{e\setminus\{v\}:e\in E\};

  • \mathcal{C}_{v}:=\mathcal{C}\setminus\{C_{v,e}:e\in E\}.

We say that G\setminus v is obtained from G by a vertex deletion of v.

Definition 2.9.

Given e\in E, we let G\setminus e:=(V,E\setminus\{e\},\mathcal{C}_{e}), where

\mathcal{C}_{e}:=\mathcal{C}\setminus\{C_{v,e}:v\in V\}.

We say that G\setminus e is obtained from G by an edge deletion of e. More generally, given F\subseteq E, we denote by G\setminus F the hypergraph obtained from G by deleting all edges in F.

Definition 2.10.

Given F\subseteq E, the restriction of G to F is G|_{F}:=(V_{F},F,\mathcal{C}|_{F}), where

  • V_{F}:=\{v\in V:v\in e\text{ for some }e\in F\} and

  • \mathcal{C}|_{F}:=\{C_{v,e}\in\mathcal{C}:v\in V_{F}\text{ and }e\in F\}.

3 Min-max principle

We recall the Courant–Fischer–Weyl min-max principle (Theorem 2.1 in [4]):

Theorem 3.1 (Min-max principle).

Let Q be an n\times n real symmetric matrix. Let \mathcal{X}^{k} denote a k-dimensional subspace of \mathbb{R}^{n} and let \mathbf{x}\bot\mathcal{X}^{k} signify that \mathbf{x}\bot\mathbf{y} for all \mathbf{y}\in\mathcal{X}^{k}. Then

\lambda_{k}(Q)=\min_{\mathcal{X}^{n-k}}\left(\max_{\mathbf{x}\bot\mathcal{X}^{n-k},\,\mathbf{x}\neq 0}\frac{\mathbf{x}^{\top}Q\mathbf{x}}{\mathbf{x}^{\top}\mathbf{x}}\right)=\max_{\mathcal{X}^{k-1}}\left(\min_{\mathbf{x}\bot\mathcal{X}^{k-1},\,\mathbf{x}\neq 0}\frac{\mathbf{x}^{\top}Q\mathbf{x}}{\mathbf{x}^{\top}\mathbf{x}}\right)

for k=1,\ldots,n.

In the case of the normalized Laplacian, by considering the substitution \mathbf{x}=D^{1/2}\mathbf{y},

\frac{\mathbf{x}^{\top}L\mathbf{x}}{\mathbf{x}^{\top}\mathbf{x}}=\frac{(D^{1/2}\mathbf{y})^{\top}L(D^{1/2}\mathbf{y})}{(D^{1/2}\mathbf{y})^{\top}(D^{1/2}\mathbf{y})}=\frac{\mathbf{y}^{\top}(D^{1/2}LD^{1/2})\mathbf{y}}{\mathbf{y}^{\top}D\mathbf{y}},

where

\mathbf{y}^{\top}(D^{1/2}LD^{1/2})\mathbf{y}=\mathbf{y}^{\top}K\mathbf{y}=\mathbf{y}^{\top}(\mathcal{I}\mathcal{I}^{\top})\mathbf{y}=(\mathbf{y}^{\top}\mathcal{I})(\mathbf{y}^{\top}\mathcal{I})^{\top}=\sum_{e\in E}\left(\sum_{v_{i}\in V}y_{i}\cdot C_{v_{i},e}\right)^{2}.

Therefore, by the min-max principle,

\lambda_{k}(L)=\min_{\mathcal{X}^{n-k}}\left(\max_{\mathbf{y}\bot\mathcal{X}^{n-k},\,\mathbf{y}\neq 0}\frac{\sum_{e\in E}\left(\sum_{v_{i}\in V}y_{i}\cdot C_{v_{i},e}\right)^{2}}{\sum_{v_{i}\in V}y_{i}^{2}\deg(v_{i})}\right). (3)

Similarly,

\lambda_{k}(K)=\min_{\mathcal{X}^{n-k}}\left(\max_{\mathbf{y}\bot\mathcal{X}^{n-k},\,\mathbf{y}\neq 0}\frac{\sum_{e\in E}\left(\sum_{v_{i}\in V}y_{i}\cdot C_{v_{i},e}\right)^{2}}{\sum_{v_{i}\in V}y_{i}^{2}}\right) (4)

and

\lambda_{k}(A)=\min_{\mathcal{X}^{n-k}}\left(\max_{\mathbf{y}\bot\mathcal{X}^{n-k},\,\mathbf{y}\neq 0}\frac{\sum_{e\in E}\sum_{v_{i},v_{j}\in V,\,i\neq j}\left(-y_{i}\cdot y_{j}\cdot C_{v_{i},e}\cdot C_{v_{j},e}\right)}{\sum_{v_{i}\in V}y_{i}^{2}}\right). (5)
Remark 3.1.

By the above characterizations, it is clear that the eigenvalues of L and K are non-negative.
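
The quadratic form behind (3) and (4), namely \mathbf{y}^{\top}K\mathbf{y}=\sum_{e\in E}\bigl(\sum_{v_{i}\in V}y_{i}C_{v_{i},e}\bigr)^{2}, and the non-negativity claimed in Remark 3.1 can be checked numerically with the following sketch (again on an illustrative random coefficient matrix, which is an assumption of this write-up and not an example from the paper).

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 4
C = rng.normal(size=(n, m))                  # illustrative random coefficients

deg = (C ** 2).sum(axis=1)
K = C @ C.T
A = -(C @ C.T); np.fill_diagonal(A, 0.0)
L = np.eye(n) - np.diag(deg ** -0.5) @ A @ np.diag(deg ** -0.5)

# y^T K y = sum_e ( sum_i y_i C_{v_i, e} )^2  >= 0 for a random test vector y.
y = rng.normal(size=n)
assert np.isclose(y @ K @ y, ((y @ C) ** 2).sum())

# Remark 3.1: the eigenvalues of L and of K are non-negative.
assert np.linalg.eigvalsh(K).min() > -1e-9 and np.linalg.eigvalsh(L).min() > -1e-9
```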

4 Cauchy interlacing

We recall the Cauchy interlacing Theorem (Theorem 4.3.17 in [11]) and we apply it in order to prove interlacing results for A, L and K when vertices or edges are removed.

Theorem 4.1 (Cauchy interlacing Theorem).

Let Q be an n\times n real symmetric matrix and let P be an (n-1)\times(n-1) principal sub-matrix of Q. Then

\lambda_{k}(Q)\leq\lambda_{k}(P)\leq\lambda_{k+1}(Q)\quad\text{for all }k\in\{1,\ldots,n-1\}.
Corollary 4.2.

Given v\in V,

  1. \lambda_{k}(A(G))\leq\lambda_{k}(A(G\setminus v))\leq\lambda_{k+1}(A(G)), for all k\in\{1,\ldots,n-1\};

  2. \lambda_{k}(K(G))\leq\lambda_{k}(K(G\setminus v))\leq\lambda_{k+1}(K(G)), for all k\in\{1,\ldots,n-1\};

  3. \lambda_{k}(L(G))\leq\lambda_{k}(L(G\setminus v))\leq\lambda_{k+1}(L(G)), for all k\in\{1,\ldots,n-1\}.

Given e\in E,

  4. \lambda_{k}(K(G\setminus e))\leq\lambda_{k}(K(G))\leq\lambda_{k+1}(K(G\setminus e)), for all k\in\{1,\ldots,n-1\}.

Proof.

Since A(G\setminus v) is an (n-1)\times(n-1) principal sub-matrix of A(G), the first claim follows from the Cauchy interlacing Theorem. The second and the third claim are analogous. Similarly, since K^{*}(G\setminus e) is an (m-1)\times(m-1) principal sub-matrix of K^{*}(G), by the Cauchy interlacing Theorem we have that

\lambda_{t}(K^{*}(G))\leq\lambda_{t}(K^{*}(G\setminus e))\leq\lambda_{t+1}(K^{*}(G))\quad\text{for all }t\in\{1,\ldots,m-1\}.

Since K and K^{*} have the same non-zero eigenvalues for any hypergraph (cf. Remark 2.3), the claim follows. ∎
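
Corollary 4.2 lends itself to a direct numerical sanity check. The sketch below (an illustration on a random hypergraph with real coefficients, not part of the original paper) verifies items 1–3 for a vertex deletion and item 4 for an edge deletion; deleting a vertex or an edge simply amounts to dropping a row or a column of the coefficient matrix.

```python
import numpy as np

def operators(C):
    """A, K, L built from a coefficient matrix C (rows: vertices, columns: edges)."""
    deg = (C ** 2).sum(axis=1)
    A = -(C @ C.T); np.fill_diagonal(A, 0.0)
    K = C @ C.T
    Dm = np.diag(deg ** -0.5)
    L = np.eye(C.shape[0]) - Dm @ A @ Dm
    return A, K, L

def cauchy_interlaced(big, small):
    """lambda_k(big) <= lambda_k(small) <= lambda_{k+r}(big), r = size difference."""
    lb, ls = np.linalg.eigvalsh(big), np.linalg.eigvalsh(small)
    r = len(lb) - len(ls)
    return all(lb[k] - 1e-9 <= ls[k] <= lb[k + r] + 1e-9 for k in range(len(ls)))

rng = np.random.default_rng(2)
C = rng.normal(size=(6, 4))          # illustrative random coefficients (assumption)

# Items 1-3: deleting vertex v_1 amounts to dropping the first row of coefficients.
A, K, L = operators(C)
A_v, K_v, L_v = operators(C[1:, :])
assert cauchy_interlaced(A, A_v) and cauchy_interlaced(K, K_v) and cauchy_interlaced(L, L_v)

# Item 4: deleting edge e_1 amounts to dropping the first column; K(G \ e) is still n x n.
_, K_e, _ = operators(C[:, 1:])
ev, ev_e = np.linalg.eigvalsh(K), np.linalg.eigvalsh(K_e)
assert all(ev_e[k] <= ev[k] + 1e-9 and ev[k] <= ev_e[k + 1] + 1e-9 for k in range(len(ev) - 1))
```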

We now apply the Cauchy interlacing Theorem in order to prove the following

Theorem 4.3.

Let Q be an n\times n real symmetric matrix, let M be an m\times m real symmetric matrix and assume that there exists a principal sub-matrix of both Q and M of size n-r=m-l. Then,

\lambda_{k-l}(Q)\leq\lambda_{k}(M)\leq\lambda_{k+r}(Q),\quad\text{for all }k\in\{l+1,\ldots,n-r\}.
Proof.

Let P be a principal sub-matrix of both Q and M, of size n-r=m-l. By repeatedly applying the Cauchy interlacing Theorem,

\lambda_{j}(Q)\leq\lambda_{j}(P)\leq\lambda_{j+r}(Q)\quad\text{for all }j\in\{1,\ldots,n-r\}

and

\lambda_{j}(M)\leq\lambda_{j}(P)\leq\lambda_{j+l}(M)\quad\text{for all }j\in\{1,\ldots,m-l\}.

Therefore,

\lambda_{j}(Q)\leq\lambda_{j}(P)\leq\lambda_{j+l}(M)\leq\lambda_{j+l}(P)\leq\lambda_{j+l+r}(Q),

for all j\in\{1,\ldots,n-l-r\}. Hence,

\lambda_{k-l}(Q)\leq\lambda_{k}(M)\leq\lambda_{k+r}(Q),\quad\text{for all }k\in\{l+1,\ldots,n-r\}. ∎

Corollary 4.4.

Given F\subseteq E such that G|_{F} has t vertices,

\lambda_{k-t+1}(A(G))\leq\lambda_{k}(A(G\setminus F))\leq\lambda_{k+t-1}(A(G)) (6)

for all k\in\{t,\ldots,n-(t-1)\}, and

\lambda_{k-t}(L(G))\leq\lambda_{k}(L(G\setminus F))\leq\lambda_{k+t}(L(G)), (7)

for all k\in\{t+1,\ldots,n-t\}.

Proof.

Let w_{1},\ldots,w_{t} be the vertices of G|_{F}. Then A(G\setminus\{w_{1},\ldots,w_{t-1}\}) is a principal sub-matrix of both A(G) and A(G\setminus F): every edge in F contains at most one vertex outside \{w_{1},\ldots,w_{t-1}\}, so deleting F does not change the off-diagonal adjacency entries between the remaining vertices, while the diagonal entries of A are zero in any case. Similarly, since deleting F only changes the degrees of the vertices of G|_{F}, L(G\setminus\{w_{1},\ldots,w_{t}\}) is a principal sub-matrix of both L(G) and L(G\setminus F). The claim follows by Theorem 4.3. ∎

Remark 4.1.

Corollary 4.4 is accurate in the sense that it is not possible to substitute, in the claim, the inequalities in (6) by

\lambda_{k-t+2}(A(G))\leq\lambda_{k}(A(G\setminus F))\leq\lambda_{k+t-2}(A(G)),

and the inequalities in (7) by

\lambda_{k-t+1}(L(G))\leq\lambda_{k}(L(G\setminus F))\leq\lambda_{k+t-1}(L(G)).

The accuracy for A is easily seen by considering the case where F consists of a single loop \ell (that is, an edge containing a single vertex). Clearly, removing one loop from the hypergraph does not change its adjacency matrix, and therefore the inequalities in (6), in this case, can be re-written as

\lambda_{k}(A(G))\leq\lambda_{k}(A(G))\leq\lambda_{k}(A(G)).

The accuracy of (7) is shown by the next example.

Example 4.5.

Let G:=(V,E,\mathcal{C}) be such that (Fig. 2):

  • V=\{v_{1},v_{2},v_{3}\};

  • E=\{e_{1},\ell\}, where e_{1}=\{v_{1},v_{2},v_{3}\} and \ell=\{v_{1}\};

  • C_{v,e}=1 for each e\in E and each v\in e.

Figure 2: The hypergraph in Example 4.5.

Then,

D(G)=\begin{pmatrix}2&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix},\quad A(G)=\begin{pmatrix}0&-1&-1\\ -1&0&-1\\ -1&-1&0\end{pmatrix}

and therefore

L(G)=\begin{pmatrix}1&1/\sqrt{2}&1/\sqrt{2}\\ 1/\sqrt{2}&1&1\\ 1/\sqrt{2}&1&1\end{pmatrix}.

Hence,

\lambda_{1}(L(G))=0,\quad\lambda_{2}(L(G))=\frac{3-\sqrt{5}}{2},\quad\lambda_{3}(L(G))=\frac{3+\sqrt{5}}{2};

while

L(G\setminus\ell)=\begin{pmatrix}1&1&1\\ 1&1&1\\ 1&1&1\end{pmatrix},

therefore

\lambda_{1}(L(G\setminus\ell))=0,\quad\lambda_{2}(L(G\setminus\ell))=0,\quad\lambda_{3}(L(G\setminus\ell))=3.

In particular,

\lambda_{3}(L(G\setminus\ell))>\lambda_{3}(L(G))

and

\lambda_{2}(L(G\setminus\ell))<\lambda_{2}(L(G)).

This shows the accuracy of Corollary 4.4 for L.
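
The computations of Example 4.5 can be reproduced with a few lines of NumPy (a minimal sketch, assuming the coefficient-matrix encoding of the example; the helper below mirrors Definition 2.6).

```python
import numpy as np

def normalized_laplacian(C):
    deg = (C ** 2).sum(axis=1)
    A = -(C @ C.T); np.fill_diagonal(A, 0.0)
    Dm = np.diag(deg ** -0.5)
    return np.eye(C.shape[0]) - Dm @ A @ Dm

# Example 4.5: e_1 = {v_1, v_2, v_3}, loop at v_1, all coefficients equal to 1.
C = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [1.0, 0.0]])

ev_G = np.linalg.eigvalsh(normalized_laplacian(C))           # spectrum of L(G)
ev_Gl = np.linalg.eigvalsh(normalized_laplacian(C[:, :1]))   # spectrum of L(G \ loop)

assert np.allclose(ev_G, [0.0, (3 - 5 ** 0.5) / 2, (3 + 5 ** 0.5) / 2])
assert np.allclose(ev_Gl, [0.0, 0.0, 3.0])
```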

5 Alternative interlacing for the normalized Laplacian

In the case when we remove an edge that is not a loop from a hypergraph, we can improve Corollary 4.4 by partly generalizing, for L, Theorem 1.2 in [4].

Theorem 5.1.

Given \hat{e}\in E of cardinality t\geq 2,

\lambda_{k-t+1}(L(G))\leq\lambda_{k}(L(G\setminus\hat{e})), (8)

for all k\in\{t,\ldots,n\}. More generally, given F\subseteq E such that |e|\geq 2 for each e\in F and \sum_{e\in F}|e|=t,

\lambda_{k-t+|F|}(L(G))\leq\lambda_{k}(L(G\setminus F)).
Proof.

Up to re-labeling of the vertices, assume that \hat{e}=\{v_{1},\ldots,v_{t}\} and let

\mathcal{Z}:=\biggl\{\mathbf{e}_{2}+\mathbf{e}_{1}\cdot\left(\frac{C_{v_{1},\hat{e}}}{(t-1)C_{v_{2},\hat{e}}}\right),\ldots,\mathbf{e}_{t}+\mathbf{e}_{1}\cdot\left(\frac{C_{v_{1},\hat{e}}}{(t-1)C_{v_{t},\hat{e}}}\right)\biggr\},

where \mathbf{e}_{1},\ldots,\mathbf{e}_{t} are the first t standard unit vectors in \mathbb{R}^{n}. The condition \mathbf{y}\bot\mathcal{Z} gives y_{j}\cdot C_{v_{j},\hat{e}}=-y_{1}\cdot C_{v_{1},\hat{e}}/(t-1) for j=2,\ldots,t, and therefore implies that

\sum_{v_{i}\in V}y_{i}\cdot C_{v_{i},\hat{e}}=y_{1}\cdot C_{v_{1},\hat{e}}+y_{2}\cdot C_{v_{2},\hat{e}}+\ldots+y_{t}\cdot C_{v_{t},\hat{e}}=y_{1}\cdot C_{v_{1},\hat{e}}-(t-1)\cdot y_{1}\cdot\frac{C_{v_{1},\hat{e}}}{(t-1)}=0.

By (3), we have

\lambda_{k}(L(G\setminus\hat{e})) =\min_{\mathcal{X}^{n-k}}\left(\max_{\mathbf{y}\bot\mathcal{X}^{n-k},\,\mathbf{y}\neq 0}\frac{\sum_{e\in E\setminus\hat{e}}\bigl(\sum_{v_{i}\in V}y_{i}\cdot C_{v_{i},e}\bigr)^{2}}{\sum_{v_{i}\in V}y_{i}^{2}\deg(v_{i})-\sum_{v_{i}\in\hat{e}}y_{i}^{2}\cdot C_{v_{i},\hat{e}}^{2}}\right)
=\min_{\mathcal{X}^{n-k}}\left(\max_{\mathbf{y}\bot\mathcal{X}^{n-k},\,\mathbf{y}\neq 0}\frac{\sum_{e\in E}\bigl(\sum_{v_{i}\in V}y_{i}\cdot C_{v_{i},e}\bigr)^{2}-\bigl(\sum_{v_{i}\in V}y_{i}\cdot C_{v_{i},\hat{e}}\bigr)^{2}}{\sum_{v_{i}\in V}y_{i}^{2}\deg(v_{i})-\sum_{v_{i}\in\hat{e}}y_{i}^{2}\cdot C_{v_{i},\hat{e}}^{2}}\right)
\geq\min_{\mathcal{X}^{n-k}}\left(\max_{\mathbf{y}\bot\mathcal{X}^{n-k},\,\mathbf{y}\bot\mathcal{Z},\,\mathbf{y}\neq 0}\frac{\sum_{e\in E}\bigl(\sum_{v_{i}\in V}y_{i}\cdot C_{v_{i},e}\bigr)^{2}}{\sum_{v_{i}\in V}y_{i}^{2}\deg(v_{i})-\sum_{v_{i}\in\hat{e}}y_{i}^{2}\cdot C_{v_{i},\hat{e}}^{2}}\right)
\geq\min_{\mathcal{X}^{n-k}}\left(\max_{\mathbf{y}\bot\mathcal{X}^{n-k},\,\mathbf{y}\bot\mathcal{Z},\,\mathbf{y}\neq 0}\frac{\sum_{e\in E}\bigl(\sum_{v_{i}\in V}y_{i}\cdot C_{v_{i},e}\bigr)^{2}}{\sum_{v_{i}\in V}y_{i}^{2}\deg(v_{i})}\right)
\geq\min_{\mathcal{X}^{n-k+t-1}}\left(\max_{\mathbf{y}\bot\mathcal{X}^{n-k+t-1},\,\mathbf{y}\neq 0}\frac{\sum_{e\in E}\bigl(\sum_{v_{i}\in V}y_{i}\cdot C_{v_{i},e}\bigr)^{2}}{\sum_{v_{i}\in V}y_{i}^{2}\deg(v_{i})}\right)
=\lambda_{k-t+1}(L(G)).

In the third line we added the condition \mathbf{y}\bot\mathcal{Z}, which makes the second term of the numerator vanish and restricts the maximum to a smaller set. In the fifth line we considered an optimization that includes the one in the fourth line as a particular case. This proves the claim. ∎
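
A numerical sanity check of inequality (8) on a randomly generated hypergraph with real coefficients (an illustration and an assumption of this write-up, not part of the original proof): the removed edge \hat{e} is taken of cardinality t=3, and the remaining edges are kept dense so that no isolated vertices appear.

```python
import numpy as np

def normalized_laplacian(C):
    deg = (C ** 2).sum(axis=1)
    A = -(C @ C.T); np.fill_diagonal(A, 0.0)
    Dm = np.diag(deg ** -0.5)
    return np.eye(C.shape[0]) - Dm @ A @ Dm

rng = np.random.default_rng(3)
n, m = 7, 5
C = rng.normal(size=(n, m))     # illustrative random coefficients (assumption)
C[3:, 0] = 0.0                  # the edge hat{e} = e_1 contains only v_1, v_2, v_3, so t = 3
t = 3

ev_G = np.linalg.eigvalsh(normalized_laplacian(C))           # eigenvalues of L(G), ascending
ev_Ge = np.linalg.eigvalsh(normalized_laplacian(C[:, 1:]))   # eigenvalues of L(G \ hat{e})

# Inequality (8): lambda_{k-t+1}(L(G)) <= lambda_k(L(G \ hat{e})) for k = t, ..., n.
assert all(ev_G[k - t] <= ev_Ge[k - 1] + 1e-9 for k in range(t, n + 1))
```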

As shown by the next example, Theorem 5.1 is accurate in the sense that it is not possible to substitute, in the claim, the inequality (8) by

\lambda_{k-t+2}(L(G))\leq\lambda_{k}(L(G\setminus\hat{e})),

which becomes, for |\hat{e}|=t=2,

\lambda_{k}(L(G))\leq\lambda_{k}(L(G\setminus\hat{e})).
Example 5.2.

Let G be the simple graph on 7 nodes in Fig. 3. By Theorem 2.1 and Theorem 3.1 in [14],

\lambda_{7}(L(G))>\frac{4}{3}=\lambda_{7}(L(G\setminus\hat{e})).

This shows the accuracy of Theorem 5.1.

Figure 3: The simple graph in Example 5.2.

Now, while Theorem 5.1 considers the removal of edges of cardinality \geq 2, the following result is about the removal of loops.

Proposition 5.3.

If \ell\in E is a loop, then

\lambda_{k}(L(G\setminus\ell))\geq\lambda_{k}(L(G))

for all k such that \lambda_{k}(L(G))\geq 1, and

\lambda_{k}(L(G\setminus\ell))\leq\lambda_{k}(L(G))

for all k such that \lambda_{k}(L(G))\leq 1.

Proof.

Assume that \ell=\{v_{1}\} and \lambda_{k}(L(G))\geq 1. By (3), we have

\lambda_{k}(L(G\setminus\ell)) =\min_{\mathcal{X}^{n-k}}\left(\max_{\mathbf{y}\bot\mathcal{X}^{n-k},\,\mathbf{y}\neq 0}\frac{\sum_{e\in E}\bigl(\sum_{v_{i}\in V}y_{i}\cdot C_{v_{i},e}\bigr)^{2}-y_{1}^{2}\cdot C_{v_{1},\ell}^{2}}{\sum_{v_{i}\in V}y_{i}^{2}\deg(v_{i})-y_{1}^{2}\cdot C_{v_{1},\ell}^{2}}\right)
\geq\min_{\mathcal{X}^{n-k}}\left(\max_{\mathbf{y}\bot\mathcal{X}^{n-k},\,\mathbf{y}\neq 0}\frac{\sum_{e\in E}\bigl(\sum_{v_{i}\in V}y_{i}\cdot C_{v_{i},e}\bigr)^{2}}{\sum_{v_{i}\in V}y_{i}^{2}\deg(v_{i})}\right)
=\lambda_{k}(L(G)),

since adding the same non-negative quantity to the numerator and to the denominator makes the resulting fraction closer to 1. This proves the first claim. The proof of the second claim is analogous. ∎
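
Proposition 5.3 can also be observed numerically. In the sketch below (an illustrative random hypergraph with a loop; the construction is an assumption of this write-up), eigenvalues that are at least 1 do not decrease and eigenvalues that are at most 1 do not increase when the loop is removed.

```python
import numpy as np

def normalized_laplacian(C):
    deg = (C ** 2).sum(axis=1)
    A = -(C @ C.T); np.fill_diagonal(A, 0.0)
    Dm = np.diag(deg ** -0.5)
    return np.eye(C.shape[0]) - Dm @ A @ Dm

rng = np.random.default_rng(4)
C = rng.normal(size=(5, 4))     # illustrative random coefficients (assumption)
C[1:, 0] = 0.0                  # the edge e_1 is a loop at v_1

ev_G = np.linalg.eigvalsh(normalized_laplacian(C))           # spectrum of L(G)
ev_Gl = np.linalg.eigvalsh(normalized_laplacian(C[:, 1:]))   # spectrum of L(G \ loop)

for lam_G, lam_Gl in zip(ev_G, ev_Gl):
    if lam_G >= 1:
        assert lam_Gl >= lam_G - 1e-9    # eigenvalues >= 1 can only move up
    if lam_G <= 1:
        assert lam_Gl <= lam_G + 1e-9    # eigenvalues <= 1 can only move down
```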

6 Conclusions

We have shown that, if the structure of a hypergraph with real coefficients is perturbed by removing (or adding) vertices and edges, then the eigenvalues of the perturbed hypergraph interlace those of the original hypergraph. We have proved various inequalities for each operator that we considered (adjacency matrix, Kirchhoff Laplacian and normalized Laplacian), and we have shown tightness of the inequalities. These results are in line with intuition, because the spectra of the operators associated to a hypergraph encode important structural properties of the hypergraph. For future directions, it will be interesting to apply these interlacing results to problems arising in both pure mathematics and applied network analysis.

Acknowledgments.
The author would like to thank Tetsuo Hatsuda (RIKEN iTHEMS) and the organizers of the International Conference on Blockchains and their Applications (Kyoto 2021) for the invitation. This work was supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1.

References

  • [1] E. Andreotti, Spectra of hyperstars on public transportation networks, arXiv preprint, arXiv:2004.07831 (2020).
  • [2] N. Bao, N. Cheng, S. Hernández-Cuenca and V.P. Su, The Quantum Entropy Cone of Hypergraphs, arXiv preprint, arXiv:2002.05317 (2020).
  • [3] Á. Bodó, G.Y. Katona and P.L. Simon, SIS epidemic propagation on hypergraphs, Bull. Math. Biol. 78(4), 713–735 (2016).
  • [4] S. Butler, Interlacing for weighted graphs using the normalized Laplacian, Electron. J. Linear Algebra 16, 90–98 (2007).
  • [5] F. Chung, Spectral graph theory, American Mathematical Society (1997).
  • [6] C. Curto, What can topology tell us about the neural code?, Bull. Amer. Math. Soc. 54, 63–78 (2017).
  • [7] W.H. Haemers, Interlacing eigenvalues and graphs, Linear Algebra Appl. 226–228, 593–616 (1995).
  • [8] F.J. Hall, The Adjacency Matrix, Standard Laplacian, and Normalized Laplacian, and Some Eigenvalue Interlacing Results, Lecture notes, Georgia State University (2010).
  • [9] Y. Hirono, T. Okada, H. Miyazaki and Y. Hidaka, Structural reduction of chemical reaction networks based on topology, arXiv preprint, arXiv:2102.07687 (2021).
  • [10] D. Horak and J. Jost, Spectra of combinatorial Laplace operators on simplicial complexes, Adv. Math. 244, 303–336 (2013).
  • [11] R.A. Horn and C.R. Johnson, Matrix Analysis, Cambridge University Press, second edition (2013).
  • [12] J. Jost and R. Mulas, Hypergraph Laplace operators for chemical reaction networks, Adv. Math. 351, 870–896 (2019).
  • [13] J. Jost and R. Mulas, Normalized Laplace Operators for Hypergraphs with Real Coefficients, J. Complex Netw. 9(1), cnab009. DOI:10.1093/comnet/cnab009 (2021).
  • [14] J. Jost, R. Mulas and F. Münch, Spectral gap of the largest eigenvalue of the normalized graph Laplacian, Communications in Mathematics and Statistics, DOI:10.1007/s40304-020-00222-7 (2021).
  • [15] S. Klamt, U.U. Haus and F. Theis, Hypergraphs and cellular networks, PLoS Comp. Biol. 5(5), e1000385 (2009).
  • [16] N. Lanchier and J. Neufer, Stochastic dynamics on hypergraphs and the spatial majority rule model, J. Stat. Phys. 151(1), 21–45 (2013).
  • [17] S. Ranshous, C.A. Joslyn, S. Kreyling, K. Nowak, N.F. Samatova, C.L. West and S. Winters, Exchange Pattern Mining in the Bitcoin Transaction Directed Hypergraph, in International Conference on Financial Cryptography and Data Security, pp. 248–263, Springer, Cham (2017).
  • [18] N. Reff, Spectral properties of oriented hypergraphs, Electron. J. Linear Algebra 27 (2014).
  • [19] N. Reff and L. Rusnak, An oriented hypergraphic approach to algebraic graph theory, Linear Algebra Appl. 437, 2262–2270 (2012).
  • [20] Z.K. Zhang and C. Liu, A hypergraph model of social tagging networks, J. Stat. Mech. 2010, P10005 (2010).