
Topological Signal Processing over
Simplicial Complexes

Sergio Barbarossa, Stefania Sardellitti
The authors are with the Department of Information Engineering, Electronics, and Telecommunications, Sapienza University of Rome, Via Eudossiana 18, 00184, Rome, Italy. E-mails: {sergio.barbarossa; stefania.sardellitti}@uniroma1.it. This work has been supported by H2020 EU/Taiwan Project 5G CONNI, Nr. AMD-861459-3 and by MIUR, under the PRIN Liquid-Edge contract. Some preliminary results of this work were presented at the 2018 IEEE Workshop in Data Science [1].
Abstract

The goal of this paper is to establish the fundamental tools to analyze signals defined over a topological space, i.e. a set of points along with a set of neighborhood relations. This setup does not require the definition of a metric and is therefore especially useful to deal with signals defined over non-metric spaces. We focus on signals defined over simplicial complexes. Graph Signal Processing (GSP) represents a special case of Topological Signal Processing (TSP), referring to the situation where signals are associated only with the vertices of a graph. Even though the theory can be applied to signals of any order, we focus on signals defined over the edges of a graph and show how building a simplicial complex of order two, i.e. including triangles, yields benefits in the analysis of edge signals. After reviewing the basic principles of algebraic topology, we derive a sampling theory for signals of any order and emphasize the interplay between signals of different order. Then, we propose a method to infer the topology of a simplicial complex from data. We conclude with applications to real edge signals and to the analysis of discrete vector fields, to illustrate the benefits of the proposed methodologies.

Index Terms:
Algebraic topology, graph signal processing, topology inference.

I Introduction

Historically, signal processing has been developed for signals defined over a metric space, typically time or space. More recently, there has been a surge of interest in signals that are not necessarily defined over a metric space. Examples of particular interest are biological networks, social networks, etc. The field of graph signal processing (GSP) has recently emerged as a framework to analyze signals defined over the vertices of a graph [2], [3]. A graph $\mathcal{G}(\mathcal{V},\mathcal{E})$ is a simple example of a topological space, composed of a set of elements (vertices) $\mathcal{V}$ and a set of edges $\mathcal{E}$ representing pairwise relations. However, notwithstanding their enormous success, graph-based representations are not always able to capture all the information present in interconnected systems: The complex interactions among the constitutive elements of a system cannot always be reduced to pairwise interactions, but may require a multiway description, as suggested in [4, 5, 6, 7, 8, 9].

To establish a general framework for complex interacting systems, it is useful to start from a topological space, i.e. a set of elements $\mathcal{V}$, along with an ensemble of multiway relations, represented by a set $\mathcal{S}$ containing subsets, of various cardinality, of the elements of $\mathcal{V}$. The structure $\mathcal{H}(\mathcal{V},\mathcal{S})$ is known as a hypergraph. In particular, a class of hypergraphs that is particularly appealing for its rich algebraic structure is given by simplicial complexes, whose defining feature is the inclusion property: if a set $\mathcal{A}$ belongs to $\mathcal{S}$, then all subsets of $\mathcal{A}$ also belong to $\mathcal{S}$ [10]. Restricting the attention to simplicial complexes is a limitation with respect to a general hypergraph model. Nevertheless, simplicial complex models include many cases of interest and, most importantly, their rich algebraic structure makes it possible to derive tools that are very useful for analyzing signals over the complex. Learning simplicial complexes representing the complex interactions among sets of elements has already been proposed in brain network analysis [6], neuronal morphologies [11], co-authorship networks [12], collaboration networks [13], [14], and tumor progression analysis [15]. More generally, the use of algebraic topology tools for the extraction of information from data is not new: The framework known as Topological Data Analysis (TDA), see e.g. [16], has exactly this goal. Interesting applications of algebraic topology tools have been proposed for control systems [17], statistical ranking from incomplete data [18], [19], distributed coverage control of sensor networks [20, 21, 22], and wheeze detection [23]. One of the fundamental tools of TDA is the analysis of persistent homologies extracted from data [24], [25]. Topological methods to analyze signals and images are also the subject of the two books [26] and [27].

The goal of our paper is to establish a fundamental framework to analyze signals defined over a simplicial complex. Our approach is complementary to TDA: Rather than focusing, like TDA, on the properties of the simplicial complex extracted from data, we focus on the properties of signals defined over a simplicial complex. Our approach includes GSP as a particular case: While GSP focuses on the analysis of signals defined over the vertices of a graph, topological signal processing (TSP) considers signals defined over simplices of various order, i.e. signals defined over nodes, edges, triangles, etc. Relevant examples of edge signals are flow signals, like blood flow between different areas of the brain [28], data traffic over communication links [29], and regulatory signals in gene regulatory networks [30], whose dysregulation was shown to be one of the causes of cancer [30]. Examples of signals defined over triplets arise in co-authorship networks, where a (filled) triangle indicates the presence of papers co-authored by the authors associated with its three vertices [12], and the associated signal value counts the number of such publications. Further examples of even higher-order structures are analyzed in [8], with the goal of predicting higher-order links. There are previous works dealing with the analysis of edge signals, like [31, 32, 33, 34]. More specifically, in [31] the authors introduced a class of filters to analyze edge signals based on the edge-Laplacian matrix [35]. A semi-supervised method for learning edge flows was suggested in [32], where the authors proposed filters highlighting both divergence-free and curl-free behaviors. Other works analyzed edge signals using a line-graph transformation [33], [34]. Random walks evolving over simplicial complexes have been analyzed in [36], [37], [38]. In [36], random walks and diffusion processes over simplicial complexes are introduced, while in [38] the authors focused on diffusion processes on the edge space, generalizing the well-known relationship between the normalized graph Laplacian and random walks on graphs.

Building a representation based on a simplicial complex is a straightforward generalization of a graph-based representation. Given a set of time series, it is well known that graph-based representations are very useful to capture relevant correlation or causality relations between different signals [39], [40]. In such a case, each time series is associated with a node of a graph, and the presence of an edge indicates a relation between the signals associated with its endpoints. As a direct generalization, if we have signals defined over the edges of a graph, capturing their relations requires inferring the presence of triangles associating triplets of edges.

In summary, our main contributions are listed below:

  1. we show how to derive a real-valued function capturing triple-wise relations among data, to be used as a regularization function in the analysis of edge signals;

  2. we derive a sampling theory defining the conditions for the recovery of high-order signals from a subset of observations, highlighting the interplay between signals of different order;

  3. we propose inference algorithms to extract the structure of the simplicial complex from high-order signals;

  4. we apply our edge signal processing tools to the analysis of vector fields, with specific attention to the recovery of the RiboNucleic Acid (RNA) velocity vector field, useful to predict the evolution of cell behavior [41].

We presented some preliminary results of this work in [1]. Here, we extend [1] by deriving a sampling theory for signals defined over complexes of various order, proposing new inference methods that are more robust against noise, and showing applications to the analysis of wireless data traffic and to the estimation of the RNA velocity vector field.

The paper is organized as follows. Section II recalls the main algebraic principles that form the basis for the signal processing tools derived in the ensuing sections. In Section III, we first recall the eigenvector properties of higher-order Laplacians and the Hodge decomposition; then, we derive the real-valued extension of an edge set function capturing triple-wise relations among the elements of a discrete set, which will be used to design unitary bases to represent edge signals. In Section IV, we illustrate some methods to recover the edge signal components from noisy observations. Section V provides theoretical conditions to recover an edge signal from a subset of samples, highlighting the interplay between signals defined over structures of different order. In Section VI, we illustrate how to use edge signal processing to filter discrete vector fields. Then, in Section VII, we propose some methods to infer the simplicial complex structure from noisy observations, illustrating their performance over both synthetic and real data. Finally, in Section VIII we draw some conclusions.

II Review of algebraic topology tools

In this section we recall the basic principles of algebraic topology [10] and discrete calculus [42], as they will form the background required for deriving the basic signal processing tools to be used in later sections. We follow an algebraic approach that is accessible also to readers with no specific background on algebraic topology.

II-A Discrete domains: Simplicial complexes

Given a finite set $\mathcal{V}\triangleq\{v_0,\ldots,v_{N-1}\}$ of $N$ points (vertices), a $k$-simplex $\sigma_i^k$ is an unordered set $\{v_{i_0},\ldots,v_{i_k}\}$ of $k+1$ points, with $0\leq i_j\leq N-1$ for $j=0,\ldots,k$, and $v_{i_j}\neq v_{i_n}$ for all $i_j\neq i_n$. A face of the $k$-simplex $\{v_{i_0},\ldots,v_{i_k}\}$ is a $(k-1)$-simplex of the form $\{v_{i_0},\ldots,v_{i_{j-1}},v_{i_{j+1}},\ldots,v_{i_k}\}$, for some $0\leq j\leq k$. Every $k$-simplex has exactly $k+1$ faces. An abstract simplicial complex $\mathcal{X}$ is a finite collection of simplices that is closed under inclusion of faces, i.e., if $\sigma_i\in\mathcal{X}$, then all faces of $\sigma_i$ also belong to $\mathcal{X}$. The order (or dimension) of a simplex is one less than its cardinality. Hence, a vertex is a $0$-dimensional simplex, an edge has dimension $1$, and so on. The dimension of a simplicial complex is the largest dimension of any of its simplices. A graph is a particular case of an abstract simplicial complex of order $1$, containing only simplices of order $0$ (vertices) and order $1$ (edges).
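The inclusion property that defines an abstract simplicial complex is easy to check programmatically. The following is a minimal sketch in Python; the function name and the toy complex are our own illustrative choices, not part of the paper:

```python
from itertools import combinations

def is_simplicial_complex(simplices):
    """Check the inclusion property: every (k-1)-face of every
    k-simplex must itself belong to the collection."""
    S = {frozenset(s) for s in simplices}
    for s in S:
        k = len(s) - 1                       # order of the simplex
        if k >= 1:
            for face in combinations(s, k):  # all k+1 faces
                if frozenset(face) not in S:
                    return False
    return True

# A filled triangle {0,1,2} with all its edges and vertices:
X = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]
print(is_simplicial_complex(X))                              # True
# Removing an edge breaks closure under inclusion:
print(is_simplicial_complex([s for s in X if s != (0, 2)]))  # False
```

This mirrors the definition directly: the collection is a complex exactly when every face of every member is also a member.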

If the set of points is embedded in a real space $\mathbb{R}^D$ of dimension $D$, we can associate a geometric simplicial complex with the abstract complex. A set of points in $\mathbb{R}^D$ is affinely independent if it is not contained in a hyperplane; an affinely independent set in $\mathbb{R}^D$ contains at most $D+1$ points. A geometric $k$-simplex is the convex hull of a set of $k+1$ affinely independent points, called its vertices. Hence, a point is a $0$-simplex, a line segment is a $1$-simplex, a triangle is a $2$-simplex, a tetrahedron is a $3$-simplex, and so on. A geometric simplicial complex shares the fundamental property of an abstract simplicial complex: It is a collection of simplices that is closed under inclusion, with the further property that the intersection of any two simplices in $\mathcal{X}$ is also a simplex in $\mathcal{X}$, assuming that the empty set is an element of every simplicial complex. Although geometric simplicial complexes are easier to visualize and, for this reason, we will often use geometric terms like edges and triangles as synonyms of pairs and triplets, we do not require the simplicial complex to be embedded in any real space, so as to keep the treatment as general as possible.

The structure of a simplicial complex is captured by the neighborhood relations of its subsets. As with graphs, it is useful to first introduce the orientation of the simplices. Every simplex, of any order, can have only two orientations, depending on the permutations of its elements. Two orientations are equivalent, or coherent, if each of them can be recovered from the other by an even number of transpositions, where each transposition is a permutation of two elements [10]. A $k$-simplex $\sigma_i^k\equiv\{v_{i_0},v_{i_1},\ldots,v_{i_k}\}$, together with an orientation, is an oriented $k$-simplex and is denoted by $[v_{i_0},v_{i_1},\ldots,v_{i_k}]$. Two simplices of order $k$, $\sigma_i^k,\sigma_j^k\in\mathcal{X}$, are upper adjacent in $\mathcal{X}$ if both are faces of a simplex of order $k+1$; they are lower adjacent in $\mathcal{X}$ if both have a common face of order $k-1$ in $\mathcal{X}$. A $(k-1)$-face $\sigma_j^{k-1}$ of a $k$-simplex $\sigma_i^k$ is called a boundary element of $\sigma_i^k$, and we write $\sigma_j^{k-1}\subset\sigma_i^k$ to indicate that $\sigma_j^{k-1}$ is a boundary element of $\sigma_i^k$. Given a simplex $\sigma_j^{k-1}\subset\sigma_i^k$, we use the notation $\sigma_j^{k-1}\sim\sigma_i^k$ to indicate that the orientation of $\sigma_j^{k-1}$ is coherent with that of $\sigma_i^k$, whereas we write $\sigma_j^{k-1}\nsim\sigma_i^k$ to indicate that the two orientations are opposite.

For each $k$, $C_k(\mathcal{X},\mathbb{R})$ denotes the vector space obtained by linear combinations, with real coefficients, of the oriented $k$-simplices of $\mathcal{X}$. In algebraic topology, the elements of $C_k(\mathcal{X},\mathbb{R})$ are called $k$-chains. If $\{\sigma_1^k,\ldots,\sigma_{n_k}^k\}$ is the set of $k$-simplices in $\mathcal{X}$, a $k$-chain $\tau_k$ can be written as $\tau_k=\sum_{i=1}^{n_k}\alpha_i\sigma_i^k$. Then, given the basis $\{\sigma_1^k,\ldots,\sigma_{n_k}^k\}$, a chain $\tau_k$ can be represented by the vector of its expansion coefficients $(\alpha_1,\ldots,\alpha_{n_k})$. An important operator acting on oriented chains is the boundary operator $\partial_k: C_k(\mathcal{X},\mathbb{R})\rightarrow C_{k-1}(\mathcal{X},\mathbb{R})$, the linear mapping defined on the oriented $k$-simplices as

$$\partial_k[v_{i_0},\ldots,v_{i_k}]\triangleq\sum_{j=0}^{k}(-1)^j\,[v_{i_0},\ldots,v_{i_{j-1}},v_{i_{j+1}},\ldots,v_{i_k}].$$ (1)

So, for example, given an oriented triangle $\sigma_i^2\triangleq[v_{i_0},v_{i_1},v_{i_2}]$, its boundary is

$$\partial_2\sigma_i^2=[v_{i_1},v_{i_2}]-[v_{i_0},v_{i_2}]+[v_{i_0},v_{i_1}],$$ (2)

i.e., a suitable linear combination of its edges. It is straightforward to verify, by direct substitution, that the boundary of a boundary is zero, i.e., $\partial_k\partial_{k+1}=0$.
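Definition (1) and the property $\partial_k\partial_{k+1}=0$ can be verified symbolically on the triangle example of (2). A minimal Python sketch, representing a chain as a dictionary mapping oriented simplices (tuples) to real coefficients; this encoding is our own illustrative choice:

```python
def boundary(simplex):
    """Boundary of an oriented k-simplex per Eq. (1):
    alternating sum of its (k-1)-faces, returned as a chain."""
    chain = {}
    for j in range(len(simplex)):
        face = simplex[:j] + simplex[j + 1:]   # drop the j-th vertex
        chain[face] = chain.get(face, 0) + (-1) ** j
    return chain

def boundary_of_chain(chain):
    """Extend the boundary operator linearly to a chain."""
    out = {}
    for simplex, coeff in chain.items():
        for face, c in boundary(simplex).items():
            out[face] = out.get(face, 0) + coeff * c
    return {f: c for f, c in out.items() if c != 0}

tri = ('v0', 'v1', 'v2')
print(boundary(tri))
# matches Eq. (2): [v1,v2] - [v0,v2] + [v0,v1]
print(boundary_of_chain(boundary(tri)))   # {} : boundary of a boundary is zero
```

Applying the boundary twice cancels every vertex term in pairs with opposite signs, which is exactly the statement $\partial_1\partial_2=0$.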

It is important to remark that an oriented simplex is different from a directed one. As with graphs, an oriented edge establishes which direction of the flow is considered positive or negative, whereas a directed edge only permits flow in one direction [42]. In this work we will consider oriented, undirected simplices.

II-B Algebraic representation

The structure of a simplicial complex $\mathcal{X}$ of dimension $K$, shortly named a $K$-simplicial complex, is fully described by the set of its incidence matrices $\mathbf{B}_k$, $k=1,\ldots,K$. Given an orientation of the simplicial complex $\mathcal{X}$, the entries of the incidence matrix $\mathbf{B}_k$ establish which $k$-simplices are incident to which $(k-1)$-simplices, so that $\mathbf{B}_k$ is the matrix representation of the boundary operator. Formally, its entries are defined as follows:

$$B_k(i,j)=\begin{cases}0, & \text{if }\sigma_i^{k-1}\not\subset\sigma_j^k\\ 1, & \text{if }\sigma_i^{k-1}\subset\sigma_j^k\text{ and }\sigma_i^{k-1}\sim\sigma_j^k\\ -1, & \text{if }\sigma_i^{k-1}\subset\sigma_j^k\text{ and }\sigma_i^{k-1}\nsim\sigma_j^k\end{cases}$$ (3)

If we consider, for simplicity, a simplicial complex of order two, composed of a set $\mathcal{V}$ of vertices, a set $\mathcal{E}$ of edges, and a set $\mathcal{T}$ of triangles, with cardinalities $V=|\mathcal{V}|$, $E=|\mathcal{E}|$, and $T=|\mathcal{T}|$, respectively, we need to build two incidence matrices $\mathbf{B}_1\in\mathbb{R}^{V\times E}$ and $\mathbf{B}_2\in\mathbb{R}^{E\times T}$.

From (1), the property that the boundary of a boundary is zero takes the following matrix form:

$$\mathbf{B}_k\mathbf{B}_{k+1}=\mathbf{0}.$$ (4)

The structure of a $K$-simplicial complex is fully described by its high-order combinatorial Laplacian matrices [43], of order $k=0,\ldots,K$, defined as

$$\mathbf{L}_0=\mathbf{B}_1\mathbf{B}_1^T,$$ (5)
$$\mathbf{L}_k=\mathbf{B}_k^T\mathbf{B}_k+\mathbf{B}_{k+1}\mathbf{B}_{k+1}^T,\quad k=1,\ldots,K-1,$$ (6)
$$\mathbf{L}_K=\mathbf{B}_K^T\mathbf{B}_K.$$ (7)

It is worth emphasizing that all Laplacian matrices of intermediate order, i.e. $k=1,\ldots,K-1$, contain two terms: The first term, also known as the lower Laplacian, expresses the lower adjacency of $k$-order simplices; the second term, also known as the upper Laplacian, expresses the upper adjacency of $k$-order simplices. So, for example, two edges are lower adjacent if they share a common vertex, whereas they are upper adjacent if they are faces of a common triangle. Note that the vertices of a graph can only be upper adjacent, namely when they are incident to the same edge. This is why the graph Laplacian $\mathbf{L}_0$ contains only one term.
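Equations (3)-(7) can be checked numerically on a small example. The sketch below uses a toy complex of our own (4 vertices, 5 edges oriented from lower to higher vertex index, one filled triangle $[0,1,2]$) to build $\mathbf{B}_1$ and $\mathbf{B}_2$, verify (4), and form the Laplacians (5)-(7):

```python
import numpy as np

# Toy complex: vertices {0,1,2,3}, edges oriented low -> high,
# one filled (oriented) triangle [0,1,2].
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
triangles = [(0, 1, 2)]

B1 = np.zeros((4, len(edges)))
for j, (a, b) in enumerate(edges):
    B1[a, j], B1[b, j] = -1, 1           # edge leaves a, enters b

B2 = np.zeros((len(edges), len(triangles)))
for j, t in enumerate(triangles):
    for i, e in enumerate(edges):
        for drop in range(3):            # Eq. (1): alternating-sign faces
            face = tuple(v for p, v in enumerate(t) if p != drop)
            if face == e:
                B2[i, j] += (-1) ** drop

print(np.allclose(B1 @ B2, 0))           # True: Eq. (4)

L0 = B1 @ B1.T                            # Eq. (5): graph Laplacian
L1 = B1.T @ B1 + B2 @ B2.T                # Eq. (6): edge Laplacian
L2 = B2.T @ B2                            # Eq. (7)
print(L2)                                 # [[3.]]: a triangle has 3 boundary edges
```

The diagonal of $\mathbf{L}_0$ recovers the vertex degrees, as expected from (5).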

III Spectral simplicial theory

In this paper, we are interested in analyzing signals defined over a simplicial complex. Given a set $\mathcal{S}_k$, with elements of cardinality $k+1$, a signal is defined as a real-valued map on the elements of $\mathcal{S}_k$, of the form

$$f_k:\mathcal{S}_k\rightarrow\mathbb{R},\quad k=0,1,\ldots$$ (8)

The order of the signal is one less than the cardinality of the elements of $\mathcal{S}_k$. Even though our framework is general, in many cases we focus on simplices of order up to two. In that case, we consider a set of vertices $\mathcal{V}$, a set of edges $\mathcal{E}$, and a set of triangles $\mathcal{T}$, of cardinality $V$, $E$, and $T$, respectively, and denote by $\mathcal{X}(\mathcal{V},\mathcal{E},\mathcal{T})$ the associated simplicial complex. The signals over each complex of order $k$, with $k=0,1$, and $2$, are defined as the following maps: $\mathbf{s}^0:\mathcal{V}\rightarrow\mathbb{R}^V$, $\mathbf{s}^1:\mathcal{E}\rightarrow\mathbb{R}^E$, and $\mathbf{s}^2:\mathcal{T}\rightarrow\mathbb{R}^T$.

Spectral graph theory represents a solid framework to extract features of a graph by looking at the eigenvectors of the combinatorial Laplacian $\mathbf{L}_0$ of order $0$. The eigenvectors associated with the smallest eigenvalues of $\mathbf{L}_0$ are very useful, for example, to identify clusters [44]. Furthermore, in GSP it is well known that a suitable basis to represent signals defined over the vertices of a graph, i.e. signals of order $0$, is given by the eigenvectors of $\mathbf{L}_0$. In particular, given the eigendecomposition of $\mathbf{L}_0$:

$$\mathbf{L}_0=\mathbf{U}_0\mathbf{\Lambda}_0\mathbf{U}_0^T,$$ (9)

the Graph Fourier Transform (GFT) of a signal $\mathbf{s}^0$ over an undirected graph has been defined as the projection of the signal onto the space spanned by the eigenvectors of $\mathbf{L}_0$, i.e. (see [3] and references therein)

$$\widehat{\mathbf{s}}^0\triangleq\mathbf{U}_0^T\,\mathbf{s}^0.$$ (10)

Equivalently, a signal defined over the vertices of a graph can be represented as

$$\mathbf{s}^0=\mathbf{U}_0\,\widehat{\mathbf{s}}^0.$$ (11)

From spectral graph theory, it is well known that the eigenvectors associated with the smallest eigenvalues of $\mathbf{L}_0$ encode information about the clusters of the graph. Hence, the representation in (11) is particularly suitable for signals that are smooth within each cluster, while possibly varying arbitrarily across different clusters. For such signals, in fact, the representation in (11) is sparse or approximately sparse.
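As a quick numerical illustration of (9)-(11), the sketch below computes the GFT of a vertex signal on a small path graph (a toy example of our own) and verifies perfect reconstruction from the GFT coefficients:

```python
import numpy as np

# Graph Laplacian of a 4-node path graph, built from Eq. (5).
B1 = np.zeros((4, 3))
for j, (a, b) in enumerate([(0, 1), (1, 2), (2, 3)]):
    B1[a, j], B1[b, j] = -1, 1
L0 = B1 @ B1.T

lam, U0 = np.linalg.eigh(L0)             # Eq. (9): L0 = U0 Lambda0 U0^T
s0 = np.array([1.0, 2.0, 0.5, -1.0])     # vertex signal
s0_hat = U0.T @ s0                       # GFT, Eq. (10)
print(np.allclose(U0 @ s0_hat, s0))      # True: Eq. (11), perfect reconstruction
# One zero eigenvalue; its eigenvector is constant (one connected component):
print(np.allclose(U0[:, 0], U0[0, 0]))   # True
```

Since $\mathbf{U}_0$ is orthonormal, the GFT is simply a change of basis, so (10) and (11) are exact inverses of each other.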

As a generalization of the above approach, we may represent signals of various order over bases built with the eigenvectors of the corresponding high-order Laplacian matrices given in (6)-(7). Hence, using the eigendecomposition

$$\mathbf{L}_k=\mathbf{U}_k\mathbf{\Lambda}_k\mathbf{U}_k^T,$$ (12)

we may define the GFT of order $k$ as the projection of a $k$-order signal onto the eigenvectors of $\mathbf{L}_k$, i.e.

$$\widehat{\mathbf{s}}^k\triangleq\mathbf{U}_k^T\,\mathbf{s}^k,$$ (13)

so that a signal $\mathbf{s}^k$ can be represented in terms of its GFT coefficients as

$$\mathbf{s}^k=\mathbf{U}_k\,\widehat{\mathbf{s}}^k.$$ (14)

Now we wish to show under what conditions (14) is a meaningful representation of a $k$-order signal and what the meaning of such a representation is. More specifically, the goal of this section is threefold: i) we first recall the relations between the eigenvectors of Laplacians of different order; ii) we recall the Hodge decomposition, a basic result showing that the eigenvectors of any order can be split into three classes, each representing a specific behavior of the signal; iii) we provide a theory showing how to build a topology-aware unitary basis to represent signals of various order starting only from topological properties.

III-A Relations between eigenvectors of different order

There are interesting relations between the eigenvectors of Laplacian matrices of different order [45], [46], which are useful to recall as they play a key role in spectral analysis. The following properties hold true for the eigendecomposition of the $k$-order Laplacian matrices, with $k=1,\ldots,K-1$.

Proposition 1

Given the Laplacian matrices $\mathbf{L}_k$ of any order $k$, with $k=1,\ldots,K-1$, the following hold:

  1. the eigenvectors associated with the nonzero eigenvalues of $\mathbf{B}_k^T\mathbf{B}_k$ are orthogonal to the eigenvectors associated with the nonzero eigenvalues of $\mathbf{B}_{k+1}\mathbf{B}_{k+1}^T$, and vice versa;

  2. if $\mathbf{v}$ is an eigenvector of $\mathbf{B}_k\mathbf{B}_k^T$ associated with the eigenvalue $\lambda$, then $\mathbf{B}_k^T\mathbf{v}$ is an eigenvector of $\mathbf{B}_k^T\mathbf{B}_k$ associated with the same eigenvalue;

  3. the eigenvectors associated with the nonzero eigenvalues $\lambda$ of $\mathbf{L}_k$ are either eigenvectors of $\mathbf{B}_k^T\mathbf{B}_k$ or of $\mathbf{B}_{k+1}\mathbf{B}_{k+1}^T$;

  4. the nonzero eigenvalues of $\mathbf{L}_k$ are either eigenvalues of $\mathbf{B}_k^T\mathbf{B}_k$ or of $\mathbf{B}_{k+1}\mathbf{B}_{k+1}^T$.

Proof:

All the above properties are easy to prove. Property 1) is straightforward: If $\mathbf{B}_k^T\mathbf{B}_k\mathbf{v}=\lambda\mathbf{v}$, then

$$\mathbf{B}_{k+1}\mathbf{B}_{k+1}^T\lambda\mathbf{v}=\mathbf{B}_{k+1}\mathbf{B}_{k+1}^T\mathbf{B}_k^T\mathbf{B}_k\mathbf{v}=\mathbf{0}$$ (15)

because of (4); similarly for the converse. Property 2) is also straightforward: If $\mathbf{v}$ is an eigenvector of $\mathbf{B}_k\mathbf{B}_k^T$ associated with a nonvanishing eigenvalue $\lambda$, then

$$(\mathbf{B}_k^T\mathbf{B}_k)\mathbf{B}_k^T\mathbf{v}=\mathbf{B}_k^T(\mathbf{B}_k\mathbf{B}_k^T)\mathbf{v}=\lambda\mathbf{B}_k^T\mathbf{v}.$$ (16)

Finally, properties 3) and 4) follow from the definition of the $k$-order Laplacian, i.e. $\mathbf{L}_k=\mathbf{B}_k^T\mathbf{B}_k+\mathbf{B}_{k+1}\mathbf{B}_{k+1}^T$, and from property 1). ∎
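Properties 3) and 4) of Proposition 1 are easy to verify numerically. A minimal sketch on a toy complex of our own (4 vertices, 5 edges oriented low to high, one filled triangle $[0,1,2]$), checking that the nonzero spectrum of $\mathbf{L}_1$ is the union of the nonzero spectra of the lower and upper Laplacians:

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
B1 = np.zeros((4, 5))
for j, (a, b) in enumerate(edges):
    B1[a, j], B1[b, j] = -1, 1
# Boundary of the oriented triangle [0,1,2]: +[0,1] -[0,2] +[1,2]
B2 = np.array([[1.0], [-1.0], [1.0], [0.0], [0.0]])

low = B1.T @ B1                  # lower edge Laplacian
up = B2 @ B2.T                   # upper edge Laplacian
L1 = low + up                    # Eq. (6) for k = 1

def nz(M):
    """Nonzero eigenvalues of a symmetric matrix, sorted."""
    return sorted(x for x in np.linalg.eigvalsh(M) if x > 1e-9)

# Property 4): the nonzero spectrum of L1 is the union of the two spectra.
print(np.allclose(nz(L1), sorted(nz(low) + nz(up))))   # True
```

This works because, by (4), the upper Laplacian annihilates eigenvectors of the lower one associated with nonzero eigenvalues, and vice versa, so each term contributes its nonzero eigenpairs to $\mathbf{L}_1$ unchanged.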

Remark: Recalling that the eigenvectors associated with the smallest nonzero eigenvalues of $\mathbf{L}_0$ are smooth within each cluster, and applying property 2) to the case $k=1$, it turns out that the eigenvectors of $\mathbf{L}_1$ associated with the smallest eigenvalues of $\mathbf{B}_1^T\mathbf{B}_1$ are approximately null over the links within each cluster, whereas they assume the largest values (in modulus) on the edges across clusters. These eigenvectors are then useful to highlight inter-cluster edges.

III-B Hodge decomposition

Let us consider the eigendecomposition of the $k$-order Laplacian, for $k=1,\ldots,K-1$:

$$\mathbf{L}_k=\mathbf{B}_k^T\mathbf{B}_k+\mathbf{B}_{k+1}\mathbf{B}_{k+1}^T=\mathbf{U}_k\mathbf{\Lambda}_k\mathbf{U}_k^T.$$ (17)

The structure of $\mathbf{L}_k$, together with the property $\mathbf{B}_k\mathbf{B}_{k+1}=\mathbf{0}$, induces an interesting decomposition of the space $\mathbb{R}^{D_k}$ of signals of order $k$, of dimension $D_k$. First of all, the property $\mathbf{B}_k\mathbf{B}_{k+1}=\mathbf{0}$ implies $\text{img}(\mathbf{B}_{k+1})\subseteq\text{ker}(\mathbf{B}_k)$. Hence, each vector $\mathbf{x}\in\text{ker}(\mathbf{B}_k)$ can be decomposed into two parts: one belonging to $\text{img}(\mathbf{B}_{k+1})$ and one orthogonal to it. Furthermore, since the whole space can always be written as $\mathbb{R}^{D_k}\equiv\text{ker}(\mathbf{B}_k)\oplus\text{img}(\mathbf{B}_k^T)$, it is possible to decompose $\mathbb{R}^{D_k}$ into the direct sum [47]

$$\mathbb{R}^{D_k}\equiv\text{img}(\mathbf{B}_k^T)\oplus\text{ker}(\mathbf{L}_k)\oplus\text{img}(\mathbf{B}_{k+1}),$$ (18)

where the vectors in $\text{ker}(\mathbf{L}_k)$ belong to both $\text{ker}(\mathbf{B}_k)$ and $\text{ker}(\mathbf{B}_{k+1}^T)$. This implies that, given any signal $\mathbf{s}^k$ of order $k$, there always exist three signals $\mathbf{s}^{k-1}$, $\mathbf{s}_H^k$, and $\mathbf{s}^{k+1}$, of order $k-1$, $k$, and $k+1$, respectively, such that $\mathbf{s}^k$ can be expressed as the sum of three orthogonal components:

$$\mathbf{s}^k=\mathbf{B}_k^T\,\mathbf{s}^{k-1}+\mathbf{s}_H^k+\mathbf{B}_{k+1}\,\mathbf{s}^{k+1}.$$ (19)

This decomposition is known as the Hodge decomposition [48]; it extends Hodge theory for differential forms on Riemannian manifolds to simplicial complexes. The subspace $\text{ker}(\mathbf{L}_k)$ is called the harmonic subspace, since each $\mathbf{s}_H^k\in\text{ker}(\mathbf{L}_k)$ is a solution of the discrete Laplace equation

$$\mathbf{L}_k\,\mathbf{s}_H^k=(\mathbf{B}_k^T\mathbf{B}_k+\mathbf{B}_{k+1}\mathbf{B}_{k+1}^T)\,\mathbf{s}_H^k=\mathbf{0}.$$

When the complex is embedded in a real space, a fundamental property of geometric simplicial complexes is that the dimensions of $\text{ker}(\mathbf{L}_k)$, for $k=0,\ldots,K$, are topological invariants of the $K$-simplicial complex, i.e. topological features that are preserved under homeomorphic transformations of the space. The dimensions of $\text{ker}(\mathbf{L}_k)$ are also known as the Betti numbers $\beta_k$ of order $k$: $\beta_0$ is the number of connected components of the graph, $\beta_1$ is the number of holes, $\beta_2$ is the number of cavities, and so on [48].
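The Betti numbers can be computed directly as $\beta_k=\dim\text{ker}(\mathbf{L}_k)$. A minimal sketch on a toy complex of our own (4 vertices, 5 edges, only the triangle $[0,1,2]$ filled, so the empty cycle on vertices $1,2,3$ is a hole):

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
B1 = np.zeros((4, 5))
for j, (a, b) in enumerate(edges):
    B1[a, j], B1[b, j] = -1, 1
B2 = np.array([[1.0], [-1.0], [1.0], [0.0], [0.0]])  # boundary of [0,1,2]

def dim_ker(M):
    """Dimension of the kernel = size minus rank (M is square here)."""
    return M.shape[0] - np.linalg.matrix_rank(M)

L0 = B1 @ B1.T                    # Eq. (5)
L1 = B1.T @ B1 + B2 @ B2.T        # Eq. (6)
L2 = B2.T @ B2                    # Eq. (7)
print(dim_ker(L0), dim_ker(L1), dim_ker(L2))   # 1 1 0
```

The output reads off $\beta_0=1$ (one connected component), $\beta_1=1$ (the unfilled cycle), and $\beta_2=0$ (no cavities).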

The decomposition in (19) shows an interesting interplay between signals of different order, which we will exploit in the ensuing sections. Before proceeding, it is useful to clarify the meaning of the terms appearing in (19). Let us consider, for simplicity, the analysis of flow signals, i.e. the case $k=1$. To this end, it is useful to introduce the curl and divergence operators, in analogy with their continuous-domain counterparts applied to vector fields. More specifically, given an edge signal $\mathbf{s}^1$, the (discrete) curl operator is defined as

$$\text{curl}(\mathbf{s}^1)=\mathbf{B}_2^T\mathbf{s}^1.$$ (20)

This operator maps the edge signal $\mathbf{s}^1$ onto a signal defined over the set of triangles, i.e. in $\mathbb{R}^T$, and it is straightforward to verify that the generic $i$-th entry of $\text{curl}(\mathbf{s}^1)$ is a measure of the flow circulating along the edges of the $i$-th triangle. As an example, for the simplicial complex in Fig. 1 we have

$$\mathbf{B}_2^T=\begin{bmatrix}0&0&1&-1&1&0&0&0&0&0&0\\ 0&1&-1&0&0&0&1&0&0&0&0\\ 0&0&0&0&0&0&-1&1&1&0&0\end{bmatrix}$$ (21)

and, defining $\mathbf{s}^1=[e_1,e_2,\ldots,e_{10},e_{11}]^T$, we get $\text{curl}(\mathbf{s}^1)=[e_3-e_4+e_5,\,e_2-e_3+e_7,\,-e_7+e_8+e_9]^T$. Each entry of $\text{curl}(\mathbf{s}^1)$ is then the circulation over the corresponding triangle.

Similarly, the (discrete) divergence operator maps the edge signal $\mathbf{s}^1$ onto a signal defined over the vertex set, i.e. in $\mathbb{R}^V$, and is defined as

$$\text{div}(\mathbf{s}^1)=\mathbf{B}_1\mathbf{s}^1.$$ (22)

Again, by direct substitution, it turns out that the $i$-th entry of $\text{div}(\mathbf{s}^1)$ represents the net flow through the $i$-th vertex, i.e. the difference between the inflow and the outflow at node $i$. Thus, a nonzero divergence reveals the presence of a source or sink node. For the example in Fig. 1, we get

$$\mathbf{B}_1=\begin{bmatrix}-1&0&0&0&0&-1&0&0&0&0&0\\ 1&-1&-1&-1&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&-1&0&-1&-1&0\\ 0&0&0&0&0&0&0&0&0&1&-1\\ 0&0&0&0&0&0&0&-1&1&0&1\\ 0&0&1&0&-1&0&1&1&0&0&0\\ 0&0&0&1&1&1&0&0&0&0&0\end{bmatrix}$$ (23)

so that div(𝒔1)=[e1e6,e1e2e3e4,e2e7e9e10,e10e11,e8+e9+e11,e3e5+e7+e8,e4+e5+e6]T\mbox{div}(\mbox{\boldmath$s$}^{1})=[-e_{1}-e_{6},e_{1}-e_{2}-e_{3}-e_{4},e_{2}-e_{7}-e_{9}-e_{10},e_{10}-e_{11},-e_{8}+e_{9}+e_{11},e_{3}-e_{5}+e_{7}+e_{8},e_{4}+e_{5}+e_{6}]^{T}.
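As a sanity check, the curl and divergence operators reduce to plain matrix-vector products. The following sketch (assuming NumPy; the test flow is the illustrative choice 𝒔1 = [1, 2, …, 11]) rebuilds the incidence matrices of Fig. 1 from (21) and (23), verifies the chain-complex property 𝐁1𝐁2 = 𝟎, and checks that the total divergence of any flow is zero.

```python
import numpy as np

# Edge-triangle incidence matrix B2^T of Fig. 1, from (21): T = 3, E = 11
B2T = np.array([[0, 0, 1, -1, 1, 0,  0, 0, 0, 0, 0],
                [0, 1, -1, 0, 0, 0,  1, 0, 0, 0, 0],
                [0, 0, 0,  0, 0, 0, -1, 1, 1, 0, 0]], dtype=float)

# Vertex-edge incidence matrix B1 of Fig. 1, from (23): V = 7, E = 11
B1 = np.array([[-1,  0,  0,  0,  0, -1,  0,  0,  0,  0,  0],
               [ 1, -1, -1, -1,  0,  0,  0,  0,  0,  0,  0],
               [ 0,  1,  0,  0,  0,  0, -1,  0, -1, -1,  0],
               [ 0,  0,  0,  0,  0,  0,  0,  0,  0,  1, -1],
               [ 0,  0,  0,  0,  0,  0,  0, -1,  1,  0,  1],
               [ 0,  0,  1,  0, -1,  0,  1,  1,  0,  0,  0],
               [ 0,  0,  0,  1,  1,  1,  0,  0,  0,  0,  0]], dtype=float)

# Chain-complex property: the divergence of any curl field is zero
assert np.all(B1 @ B2T.T == 0)

s1 = np.arange(1.0, 12.0)      # illustrative edge flow [e1, ..., e11]
curl = B2T @ s1                # circulation along each triangle
div = B1 @ s1                  # net flow at each vertex, as in (22)

print(curl)                    # [ 4.  6. 10.], e.g. e3 - e4 + e5 = 3 - 4 + 5
print(div.sum())               # 0.0
```

Since every column of 𝐁1 contains exactly one +1 and one −1, 𝟏ᵀ𝐁1 = 𝟎ᵀ, so the divergence always sums to zero over the whole complex: flows only move mass between nodes.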
If we consider equation (19) in the case k=1k=1,

𝒔1=𝐁1T𝒔0+𝒔H1+𝐁2𝒔2,\mbox{\boldmath$s$}^{1}=\mathbf{B}_{1}^{T}\,\mbox{\boldmath$s$}^{0}+\mbox{\boldmath$s$}_{H}^{1}+\mathbf{B}_{2}\,\mbox{\boldmath$s$}^{2}, (24)

recalling that 𝐁1𝐁2=𝟎\mbox{$\mathbf{B}$}_{1}\mbox{$\mathbf{B}$}_{2}=\mathbf{0}, it is easy to check that the first component in (24) has zero curl, and thus may be called an irrotational component, whereas the third component has zero divergence, and thus may be called a solenoidal component, in analogy with the terminology used in vector calculus. The harmonic component 𝐬H1\mathbf{s}_{H}^{1} is a flow vector that is both curl-free and divergence-free. Notice also that, in (24), 𝐁1T𝒔0\mathbf{B}_{1}^{T}\,\mbox{\boldmath$s$}^{0} represents the (discrete) gradient of 𝒔0\mbox{\boldmath$s$}^{0}.

III-C Topology-aware unitary basis

In this section, we propose a method to build a unitary basis to represent edge signals, reflecting topological properties of the complex, more specifically triple-wise relations. The idea underlying the method is to search for a basis that minimizes a real-valued function capturing triple-wise relations. For example, in the graph case, a key topological property is connectivity, which is well captured by the cut function. The associated real-valued function can be built using the so-called Lovász extension [49], [50] of the cut size.

Refer to caption
Figure 1: Cut of order 1.

More specifically, given a graph 𝒢(𝒱,){\cal G({\cal V},{\cal E})} and a partition of its vertex set 𝒱{\cal V} into two sets 𝒜0{\cal A}_{0} and 𝒜1{\cal A}_{1}, with 𝒜0𝒜1=𝒱{\cal A}_{0}\cup{\cal A}_{1}={\cal V} and 𝒜0𝒜1={\cal A}_{0}\cap{\cal A}_{1}=\emptyset, the cut size is defined as

F0(𝒜0,𝒜1)=cut(𝒜0,𝒜1)i𝒜0j𝒜1aijF_{0}({\cal A}_{0},{\cal A}_{1})={\rm cut}({\cal A}_{0},{\cal A}_{1})\triangleq\sum_{i\in{\cal A}_{0}}\sum_{j\in{\cal A}_{1}}a_{ij} (25)

where aij=1a_{ij}=1 if (i,j)(i,j)\in{\cal{E}} and aij=0a_{ij}=0 otherwise. F0(𝒜0,𝒜1)F_{0}({\cal A}_{0},{\cal A}_{1}) is a set function and is known to be submodular [49]. Now, we want to translate the set function F0(𝒜0,𝒜1)F_{0}({\cal A}_{0},{\cal A}_{1}) into a real-valued function f0(𝒙0)f_{0}(\mbox{\boldmath$x$}^{0}), defined over V{\mathbb{R}}^{V}, to be used for the subsequent optimization. This can be done using the so-called Lovász extension [49], which is equal to:

f0(𝒙0)=i=1Vj=1Vaij|xi0xj0|.f_{0}(\mbox{\boldmath$x$}^{0})=\sum_{i=1}^{V}\sum_{j=1}^{V}a_{ij}|x_{i}^{0}-x_{j}^{0}|. (26)

where 𝒙0V\mbox{\boldmath$x$}^{0}\in\mathbb{R}^{V}. The function in (26) measures the total variation of a signal defined over the nodes of a graph and thus can be used as a regularization function whenever the signal of interest is known to be smooth. The function in (26) formed the basis of the method proposed in [51] to build a unitary basis for analyzing signals defined over the vertices of a graph. More specifically, in [51] the basis was built by solving the following optimization problem

𝐔(𝐮1,,𝐮V)=argmin𝐔V×Vn=2Vf0(𝐮n)s.t.𝐔T𝐔=𝐈,𝐮1=1V𝟏.\begin{array}[]{lll}\mathbf{U}\triangleq\left(\mathbf{u}_{1},\ldots,\mathbf{u}_{V}\right)=&\!\!\!\underset{\mathbf{U}\in\mathbb{R}^{V\times V}}{\arg\min}\;\;\displaystyle\sum_{n=2}^{V}f_{0}(\mathbf{u}_{n})\vskip 6.0pt plus 2.0pt minus 2.0pt\\ &\;\,\text{s.t.}\qquad\mathbf{U}^{T}\mathbf{U}=\mathbf{I},\,\,\,\mathbf{u}_{1}=\frac{1}{\sqrt{V}}\mathbf{1}.\end{array} (27)

In the above problem, the objective function to be minimized is convex, but the problem is non-convex because of the unitarity constraint. To simplify the search for the basis, the objective function in (27) can be relaxed to become

f0r(𝒙0)=i=1Vj=1Vaij(xi0xj0)2.f_{0}^{r}(\mbox{\boldmath$x$}^{0})=\sum_{i=1}^{V}\sum_{j=1}^{V}a_{ij}(x_{i}^{0}-x_{j}^{0})^{2}. (28)

Substituting (28) into (27), we still have a non-convex problem. However, its solution is known to be given by the eigenvectors of 𝐋0\mathbf{L}_{0}. From this perspective, the basis typically used in the GFT, built from the eigenvectors of the Laplacian matrix, can be interpreted as the solution of the above optimization problem after relaxation.
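To make these definitions concrete, the following sketch (assuming NumPy; the 4-node cycle graph is an illustrative choice) evaluates the Lovász extension (26) on an indicator vector, where it returns twice the cut size (25) because the symmetric adjacency counts each crossing edge in both directions, verifies that the relaxed objective (28) is a Laplacian quadratic form, and checks that the kernel of 𝐋0 is spanned by the constant vector.

```python
import numpy as np

# Illustrative graph: 4-node cycle 0-1-2-3-0, symmetric adjacency
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
V = A.shape[0]

def f0(x):   # Lovász extension (26) of the cut size
    return sum(A[i, j] * abs(x[i] - x[j]) for i in range(V) for j in range(V))

def f0r(x):  # quadratic relaxation (28)
    return sum(A[i, j] * (x[i] - x[j]) ** 2 for i in range(V) for j in range(V))

x = np.array([1.0, 1.0, 0.0, 0.0])  # indicator of the subset {0, 1}
cut = 2.0                           # edges (1,2) and (3,0) cross the partition
assert f0(x) == 2 * cut             # each crossing edge counted in both directions

L0 = np.diag(A.sum(axis=1)) - A     # graph Laplacian
assert np.isclose(f0r(x), 2 * x @ L0 @ x)

w, U = np.linalg.eigh(L0)           # GFT basis: eigenvectors of L0
assert np.isclose(w[0], 0.0)                 # connected graph: one zero eigenvalue
assert np.allclose(U[:, 0] ** 2, 1.0 / V)    # spanned by the constant vector
```

The factor of two reflects the convention that the double sum in (26) runs over ordered pairs; it plays no role in the minimization.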
Now, we extend the previous approach to find a regularization function capturing triple-wise relations, useful to analyze signals defined over the edges of a graph. In the previous case, the analysis of node signals started from the bi-partition of a graph, followed by the construction of a real-valued extension of the cut size. Here, to analyze edge signals, we need to start from a tri-partition of the vertex set and define a set function counting the number of triangles gluing the three sets together. The function to be minimized should then be the real-valued (Lovász) extension of such a set function.

The combinatorial study of simplicial complexes has attracted increasing attention and the generalization of Cheeger inequalities to simplicial complexes has been the subject of several works [52], [53]. In particular, Hein et al. introduced the total variation on hypergraphs as the Lovász extension of the hypergraph cut, defined as the size of the set of hyperedges connecting a bipartition of the vertex set [54]. In this work, we derive a Lovász extension defined over the edge set. More formally, we study a simplicial complex 𝒳(𝒱,,𝒯){\cal X}({\cal V},{\cal E},{\cal T}) of order two, as in the example sketched in Fig. 1, and consider the partition of 𝒱{\cal V} into three sets (𝒜0,𝒜1,𝒜2)({\cal A}_{0},{\cal A}_{1},{\cal A}_{2}). The triple-wise coupling function is now defined as

F_{1}(\mathcal{A}_{0},\mathcal{A}_{1},\mathcal{A}_{2})=\sum_{i\in\mathcal{A}_{0}}\sum_{j\in\mathcal{A}_{1}}\sum_{k\in\mathcal{A}_{2}}a_{ijk} \qquad (29)

where aijk=1a_{ijk}=1 if (i,j,k)𝒯(i,j,k)\in{\cal{T}} and aijk=0a_{ijk}=0 otherwise. Our main result, stated in the following theorem, is that the Lovász extension of the triple-wise coupling function gives rise to a measure of the curl of edge signals along triangles.

Theorem 1

Let 𝒜0,𝒜1,𝒜2\mathcal{A}_{0},\mathcal{A}_{1},\mathcal{A}_{2} be a partition of the vertex set 𝒱\mathcal{V} of the 22-dimensional simplicial complex 𝒳(𝒱,,𝒯)\mathcal{X}(\mathcal{V},\mathcal{E},\mathcal{T}) with ||=E|\mathcal{E}|=E, |𝒯|=T|\mathcal{T}|=T. Then the Lovász extension f1:Ef_{1}:\mathbb{R}^{E}\rightarrow\mathbb{R}, evaluated at 𝒙1E\mbox{\boldmath$x$}^{1}\in\mathbb{R}^{E}, of the triple-wise coupling size F1(𝒜0,𝒜1,𝒜2)F_{1}(\mathcal{A}_{0},\mathcal{A}_{1},\mathcal{A}_{2}) defined in (29), is

f1(𝒙1)=i,j,k=1Eaijk|xi1xj1+xk1|=j=1T|i=1EB2(i,j)xi1|f_{1}(\mbox{\boldmath$x$}^{1})=\sum_{i,j,k=1}^{E}a_{ijk}|x^{1}_{i}-x^{1}_{j}+x^{1}_{k}|\,=\sum_{j=1}^{T}\left|\sum_{i=1}^{E}B_{2}(i,j)x_{i}^{1}\right| (30)

where B2(i,j)B_{2}(i,j) are the edge-triangle incidence coefficients defined in (3).

Proof:

Please see Appendix A. ∎

The function in (30) represents the sum of the absolute values of the curls over all the triangles. Differently from [54], where the total variation over a hypergraph was built from a bipartition of a discrete set and was a function defined over V\mathbb{R}^{V}, in our case we start from a tripartition of the original set and build a real-valued extension defined over E\mathbb{R}^{E}, i.e. a space of dimension equal to the number of edges. A suitable unitary basis for representing (curling) edge signals can then be found as the solution of the following optimization problem

𝐔(𝐮1,,𝐮E)=argmin𝐔E×En=1Ef1(𝐮n)s.t.𝐔T𝐔=𝐈.\begin{array}[]{lll}\mathbf{U}\triangleq\left(\mathbf{u}_{1},\ldots,\mathbf{u}_{E}\right)=&\!\!\!\underset{\mathbf{U}\in\mathbb{R}^{E\times E}}{\arg\min}\;\;\displaystyle\sum_{n=1}^{E}f_{1}(\mathbf{u}_{n})\vskip 6.0pt plus 2.0pt minus 2.0pt\\ &\;\text{s.t.}\qquad\,\mathbf{U}^{T}\mathbf{U}=\mathbf{I}.\end{array} (31)

The objective function is convex, but the problem is non-convex because of the unitarity constraint. The above problem can be solved by resorting to the algorithm proposed in [51], tuned to the new objective function given in (30). Alternatively, similarly to what is typically done with graphs, f1(𝒙1)f_{1}(\mbox{\boldmath$x$}^{1}) can be relaxed and replaced with

f1r(𝒙1)=j=1T(i=1EB2(i,j)xi1)2=𝒙1T𝐁2𝐁2T𝒙1.f_{1}^{r}(\mbox{\boldmath$x$}^{1})=\displaystyle\sum_{j=1}^{T}\left(\sum_{i=1}^{E}B_{2}(i,j)x^{1}_{i}\right)^{2}=\mbox{\boldmath$x$}^{1\,T}\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{B}$}_{2}^{T}\mbox{\boldmath$x$}^{1}. (32)

Substituting this function back into (31), the solution is given by the eigenvectors of 𝐁2𝐁2T\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{B}$}_{2}^{T}.

The above arguments show that the eigenvectors of 𝐁2𝐁2T\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{B}$}_{2}^{T} provide a suitable basis to analyze edge signals having a curling behavior. However, as we know from the Hodge decomposition recalled in the previous section, edge signals contain other useful components that are orthogonal to solenoidal signals, namely the irrotational and harmonic components. Then, in general, it is useful to take as a unitary basis for analyzing edge signals all the eigenvectors of 𝐋1\mbox{$\mathbf{L}$}_{1}, i.e. the eigenvectors associated with the nonzero eigenvalues of 𝐁2𝐁2T\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{B}$}_{2}^{T} and of 𝐁1T𝐁1\mbox{$\mathbf{B}$}_{1}^{T}\mbox{$\mathbf{B}$}_{1}, plus the eigenvectors associated with the kernel of 𝐋1\mbox{$\mathbf{L}$}_{1}. In summary, the theory developed in this section provides a further argument, rooted in intrinsic topological properties of the simplicial complex on which the signal is defined, for exploiting the Hodge decomposition to find a suitable basis for the analysis of edge signals. Furthermore, the theory shows that using the Laplacian eigenvectors comes as a consequence of a relaxation step. In many cases, whenever the numerical complexity is not an issue, it may be better to keep the 1\ell_{1}-norm objective functions given in (26) and (30), as this would yield sparser, and thus more informative, representations, generalizing what was done for directed graphs in [51].
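The resulting edge basis can be built explicitly. A minimal sketch, assuming NumPy and an illustrative toy complex (four vertices, five edges, one filled triangle and one empty triangle acting as a hole): the eigenvectors of 𝐋1 split into irrotational, solenoidal and harmonic subspaces whose dimensions add up to E.

```python
import numpy as np

# Illustrative toy complex: vertices {1..4}, edges (1,2),(1,3),(2,3),(2,4),(3,4);
# triangle (1,2,3) is filled, triangle (2,3,4) is left empty (a hole)
B1 = np.array([[-1, -1,  0,  0,  0],
               [ 1,  0, -1, -1,  0],
               [ 0,  1,  1,  0, -1],
               [ 0,  0,  0,  1,  1]], dtype=float)
B2 = np.array([[1, -1, 1, 0, 0]], dtype=float).T  # boundary of triangle (1,2,3)
E = B1.shape[1]

L1 = B1.T @ B1 + B2 @ B2.T          # first-order Laplacian
w, U = np.linalg.eigh(L1)

U_H = U[:, w < 1e-8]                # harmonic eigenvectors: kernel of L1
dim_irr = np.linalg.matrix_rank(B1) # dim of the irrotational subspace im(B1^T)
dim_sol = np.linalg.matrix_rank(B2) # dim of the solenoidal subspace im(B2)

assert dim_irr + dim_sol + U_H.shape[1] == E        # 3 + 1 + 1 = 5
# Harmonic vectors are both divergence-free and curl-free
assert np.allclose(B1 @ U_H, 0) and np.allclose(B2.T @ U_H, 0)
```

The one-dimensional kernel of 𝐋1 reflects the single hole left by the empty triangle (2, 3, 4).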

IV Edge flow estimation

Let us now consider the observation of an edge signal affected by additive noise. Our goal in this section is to formulate the denoising problem as a constrained optimization problem, rooted in the Hodge decomposition. The denoising of edge signals was already considered in [42], [31], [32] and, more specifically, in [18]. The formulation proposed in [18] was based on the definition of signals over simplicial complexes as skew-symmetric arrays of dimension equal to the order of the corresponding simplex plus one. So, the edge flow was represented as a matrix (i.e., a two-dimensional array) satisfying the constraint X(i,j)=X(j,i)X(i,j)=-X(j,i). A signal defined over the triangles was represented as an array of dimension three, satisfying the property Φ(i,j,k)=Φ(j,k,i)=Φ(k,i,j)=Φ(j,i,k)=Φ(i,k,j)=Φ(k,j,i)\Phi(i,j,k)=\Phi(j,k,i)=\Phi(k,i,j)=-\Phi(j,i,k)=-\Phi(i,k,j)=-\Phi(k,j,i). Our aim in this section is to formulate the denoising optimization problem dealing only with vectors. Based on (19), the observed vector can be modeled as

𝒙1=𝐁1T𝒔0+𝒔H1+𝐁2𝒔2+𝒗1\mbox{\boldmath$x$}^{1}=\mathbf{B}_{1}^{T}\,\mbox{\boldmath$s$}^{0}+\mbox{\boldmath$s$}_{H}^{1}+\mathbf{B}_{2}\,\mbox{\boldmath$s$}^{2}+\mbox{\boldmath$v$}^{1} (33)

where 𝒗1\mbox{\boldmath$v$}^{1} is noise. Let us suppose, for simplicity, that the noise vector is Gaussian, with zero-mean entries all having the same variance σn2\sigma_{n}^{2}. The optimal estimator can then be formulated as the solution of the following problem

(𝒔^0,𝒔^2,𝒔^H1)=argmin𝒔0V,𝒔2T,𝒔H1E𝐁2𝒔2+𝐁1T𝒔0+𝒔H1𝒙12s.t.𝐁1𝒔H1=𝟎𝐁2T𝒔H1=𝟎(𝒬).\begin{array}[]{lll}(\hat{\mbox{\boldmath$s$}}^{0},\hat{\mbox{\boldmath$s$}}^{2},\hat{\mbox{\boldmath$s$}}^{1}_{\text{H}})=\hskip-8.5359pt\underset{{{\mbox{\boldmath$s$}}^{0}\in\mathbb{R}^{V},{\mbox{\boldmath$s$}}^{2}\in\mathbb{R}^{T},{\mbox{\boldmath$s$}}^{1}_{\text{H}}\in\mathbb{R}^{E}}}{\text{argmin}}&\hskip-14.79555pt\parallel\mathbf{B}_{2}\mbox{\boldmath$s$}^{2}+\mathbf{B}_{1}^{T}\mbox{\boldmath$s$}^{0}+\mbox{\boldmath$s$}^{1}_{H}-\mbox{\boldmath$x$}^{1}\!\parallel^{2}\\ \hskip 85.35826pt\text{s.t.}&\hskip-11.66573pt\mathbf{B}_{1}\mbox{\boldmath$s$}^{1}_{H}=\mathbf{0}\vskip 6.0pt plus 2.0pt minus 2.0pt\\ &\hskip-11.66573pt\mathbf{B}_{2}^{T}\mbox{\boldmath$s$}^{1}_{H}=\mathbf{0}\qquad(\mathcal{Q}).\\ \end{array} (34)

Note that problem 𝒬\mathcal{Q} is convex. Hence, there exist multipliers 𝝀1V,𝝀2T\boldsymbol{\lambda}_{1}\in\mathbb{R}^{V},\boldsymbol{\lambda}_{2}\in\mathbb{R}^{T} such that the tuple (𝒔^0,𝒔^2,𝒔^H1,𝝀1,𝝀2)(\hat{\mbox{\boldmath$s$}}^{0},\hat{\mbox{\boldmath$s$}}^{2},\hat{\mbox{\boldmath$s$}}^{1}_{\text{H}},\boldsymbol{\lambda}_{1},\boldsymbol{\lambda}_{2}) satisfies the Karush-Kuhn-Tucker (KKT) conditions of 𝒬\mathcal{Q} (note that Slater’s constraint qualification is satisfied [55]). The associated Lagrangian function is

(𝒔0,𝒔2,𝒔H1,𝝀1,𝝀2)=(𝐁2𝒔2+𝐁1T𝒔0+𝒔H1𝒙1)T(𝐁2𝒔2+𝐁1T𝒔0+𝒔H1𝒙1)+𝝀1T𝐁1𝒔H1+𝝀2T𝐁2T𝒔H1.\begin{split}\mathcal{L}({\mbox{\boldmath$s$}}^{0},{\mbox{\boldmath$s$}}^{2},{\mbox{\boldmath$s$}}^{1}_{\text{H}},\boldsymbol{\lambda}_{1},\boldsymbol{\lambda}_{2})=(\mathbf{B}_{2}\mbox{\boldmath$s$}^{2}+\mathbf{B}_{1}^{T}\mbox{\boldmath$s$}^{0}+\mbox{\boldmath$s$}^{1}_{H}-\mbox{\boldmath$x$}^{1})^{T}\\ \cdot(\mathbf{B}_{2}\mbox{\boldmath$s$}^{2}+\mathbf{B}_{1}^{T}\mbox{\boldmath$s$}^{0}+\mbox{\boldmath$s$}^{1}_{H}-\mbox{\boldmath$x$}^{1})+\boldsymbol{\lambda}_{1}^{T}\mathbf{B}_{1}\mbox{\boldmath$s$}^{1}_{H}+\boldsymbol{\lambda}_{2}^{T}\mathbf{B}_{2}^{T}\mbox{\boldmath$s$}^{1}_{H}.\end{split} (35)

Exploiting the orthogonality property 𝐁1𝐁2=𝟎\mathbf{B}_{1}\mathbf{B}_{2}=\mathbf{0}, it is easy to get the following KKT conditions

(a)𝒔0(𝒔0,𝒔2,𝒔H1,𝝀1,𝝀2)=𝐁1𝐁1T𝒔0𝐁1𝒙1=𝟎(b)𝒔2(𝒔0,𝒔2,𝒔H1,𝝀1,𝝀2)=𝐁2T𝐁2𝒔2𝐁2T𝒙1=𝟎(c)𝒔H1(𝒔0,𝒔2,𝒔H1,𝝀1,𝝀2)=𝒔H1𝒙1+𝐁1T𝝀1+𝐁2𝝀2=𝟎(d)𝐁𝟏𝒔H1=𝟎,𝐁2T𝒔H1=𝟎(e)𝝀1V,𝝀2T.\hskip-2.84544pt\begin{array}[]{lllll}\text{(a)}&\!\!\!\nabla_{\mbox{\boldmath$s$}^{0}}\mathcal{L}({\mbox{\boldmath$s$}}^{0},{\mbox{\boldmath$s$}}^{2},{\mbox{\boldmath$s$}}^{1}_{\text{H}},\boldsymbol{\lambda}_{1},\boldsymbol{\lambda}_{2})=\mathbf{B}_{1}\mathbf{B}_{1}^{T}\mbox{\boldmath$s$}^{0}-\mathbf{B}_{1}\mbox{\boldmath$x$}^{1}=\mathbf{0}\vskip 6.0pt plus 2.0pt minus 2.0pt\\ \text{(b)}&\!\!\!\nabla_{\mbox{\boldmath$s$}^{2}}\mathcal{L}({\mbox{\boldmath$s$}}^{0},{\mbox{\boldmath$s$}}^{2},{\mbox{\boldmath$s$}}^{1}_{\text{H}},\boldsymbol{\lambda}_{1},\boldsymbol{\lambda}_{2})=\mathbf{B}_{2}^{T}\mathbf{B}_{2}\mbox{\boldmath$s$}^{2}-\mathbf{B}_{2}^{T}\mbox{\boldmath$x$}^{1}=\mathbf{0}\vskip 6.0pt plus 2.0pt minus 2.0pt\\ \text{(c)}&\!\!\!\nabla_{\mbox{\boldmath$s$}^{1}_{H}}\mathcal{L}({\mbox{\boldmath$s$}}^{0},{\mbox{\boldmath$s$}}^{2},{\mbox{\boldmath$s$}}^{1}_{\text{H}},\boldsymbol{\lambda}_{1},\boldsymbol{\lambda}_{2})\!=\!\mbox{\boldmath$s$}^{1}_{H}-\mbox{\boldmath$x$}^{1}\!+\mathbf{B}_{1}^{T}\boldsymbol{\lambda}_{1}\!+\mathbf{B}_{2}\boldsymbol{\lambda}_{2}\!=\!\mathbf{0}\vskip 6.0pt plus 2.0pt minus 2.0pt\\ \text{(d)}&\!\!\!\mathbf{B_{1}}\mbox{\boldmath$s$}^{1}_{H}=\mathbf{0},\;\,\mathbf{B}_{2}^{T}\mbox{\boldmath$s$}^{1}_{H}=\mathbf{0}\vskip 6.0pt plus 2.0pt minus 2.0pt\\ \text{(e)}&\!\!\!\boldsymbol{\lambda}_{1}\in\mathbb{R}^{V},\boldsymbol{\lambda}_{2}\in\mathbb{R}^{T}.\end{array}

Note that conditions (a)-(c) reduce to

(a)𝐋0𝒔0=𝐁1𝒙1(b)𝐁2T𝐁2𝒔2=𝐁2T𝒙1(c)𝒔H1=𝒙1𝐁1T𝝀1𝐁2𝝀2.\begin{split}\text{(a)}&\;\mathbf{L}_{0}\mbox{\boldmath$s$}^{0}=\mathbf{B}_{1}\mbox{\boldmath$x$}^{1}\\ \text{(b)}&\;\mathbf{B}_{2}^{T}\mathbf{B}_{2}\mbox{\boldmath$s$}^{2}=\mathbf{B}_{2}^{T}\mbox{\boldmath$x$}^{1}\\ \text{(c)}&\;\mbox{\boldmath$s$}^{1}_{H}=\mbox{\boldmath$x$}^{1}-\mathbf{B}_{1}^{T}\boldsymbol{\lambda}_{1}-\mathbf{B}_{2}\boldsymbol{\lambda}_{2}.\\ \end{split} (36)

Multiplying both sides of condition (c) by 𝐁2T\mathbf{B}_{2}^{T}, and using the second equality in (d) and condition (b), we get

𝐁2T𝐁2𝒔2=𝐁2T𝐁2𝝀2.\mathbf{B}_{2}^{T}\mathbf{B}_{2}\mbox{\boldmath$s$}^{2}=\mathbf{B}_{2}^{T}\mathbf{B}_{2}\boldsymbol{\lambda}_{2}. (37)

This means that 𝒔2\mbox{\boldmath$s$}^{2} and 𝝀2\mbox{\boldmath$\lambda$}_{2} may differ only by an additive vector lying in the nullspace of 𝐁2T𝐁2\mathbf{B}_{2}^{T}\mathbf{B}_{2}. Let us set 𝒔2=𝝀2+𝒄2\mbox{\boldmath$s$}^{2}=\boldsymbol{\lambda}_{2}+\mbox{\boldmath$c$}_{2}, with 𝐁2T𝐁2𝒄2=𝟎\mathbf{B}_{2}^{T}\mathbf{B}_{2}\mbox{\boldmath$c$}_{2}={\mathbf{0}}. Similarly, multiplying (c) by 𝐁1\mathbf{B}_{1} and using the first equality in (d) and condition (a), we obtain

𝐁1𝐁1T𝒔0=𝐁1𝐁1T𝝀1,\mathbf{B}_{1}\mathbf{B}_{1}^{T}\mbox{\boldmath$s$}^{0}=\mathbf{B}_{1}\mathbf{B}_{1}^{T}\boldsymbol{\lambda}_{1}, (38)

which implies that 𝒔0=𝝀1+𝒄1\mbox{\boldmath$s$}^{0}=\boldsymbol{\lambda}_{1}+\mbox{\boldmath$c$}_{1}, with 𝒄1\mbox{\boldmath$c$}_{1} such that 𝐁1𝐁1T𝒄1=𝟎\mathbf{B}_{1}\mathbf{B}_{1}^{T}\mbox{\boldmath$c$}_{1}={\mathbf{0}}. Thus, condition (c) reduces to

𝒔H1=𝒙1𝐁1T𝒔0𝐁2𝒔2\mbox{\boldmath$s$}^{1}_{H}=\mbox{\boldmath$x$}^{1}-\mathbf{B}_{1}^{T}\mbox{\boldmath$s$}^{0}-\mathbf{B}_{2}\mbox{\boldmath$s$}^{2} (39)

which says, as expected, that we can derive the harmonic component by subtracting the estimated solenoidal and irrotational parts from the observed flow signal 𝒙1\mbox{\boldmath$x$}^{1}. To recover the irrotational flow 𝒔irr1\mbox{\boldmath$s$}^{1}_{\text{irr}} from the 0-order signal 𝒔0\mbox{\boldmath$s$}^{0} we need to solve equation (a) in (36). Note that 𝐋0\mathbf{L}_{0} is not invertible. For connected graphs, it has rank V1V-1 and its kernel is the span of the vector 𝟏\mathbf{1} of all ones. However, since the vector 𝐛=𝐁1𝒙1\mathbf{b}=\mathbf{B}_{1}\mbox{\boldmath$x$}^{1} is orthogonal to 𝟏\mathbf{1}, the normal equation 𝐋0𝒔0=𝐁1𝒙1\mathbf{L}_{0}\mbox{\boldmath$s$}^{0}=\mathbf{B}_{1}\mbox{\boldmath$x$}^{1} is consistent and admits (at least for connected graphs) the solution:

𝒔0=𝐋0𝐁1𝒙1\mbox{\boldmath$s$}^{0}=\mathbf{L}_{0}^{\dagger}\mathbf{B}_{1}\mbox{\boldmath$x$}^{1}

where † denotes the Moore-Penrose pseudo-inverse.

Similarly, the second-order signal 𝒔2\mbox{\boldmath$s$}^{2}, solution of the second equation in (36), can be obtained as

𝒔2=(𝐁2T𝐁2)𝐁2T𝒙1{\mbox{\boldmath$s$}}^{2}=(\mathbf{B}_{2}^{T}\mathbf{B}_{2})^{\dagger}\mathbf{B}_{2}^{T}\mbox{\boldmath$x$}^{1}

since 𝐁2T𝒙1\mathbf{B}_{2}^{T}\mbox{\boldmath$x$}^{1} is orthogonal to the null space of 𝐁2T𝐁2\mathbf{B}_{2}^{T}\mathbf{B}_{2}. The irrotational, solenoidal and harmonic components can then be recovered as follows

𝒔^irr1=𝐁1T𝒔^0=𝐁1T𝐋0𝐁1𝒙1𝒔^sol1=𝐁2𝒔^2=𝐁2(𝐁2T𝐁2)𝐁2T𝒙1𝒔^H1=𝒙1𝒔^irr1𝒔^sol1.\begin{array}[]{llll}\hat{\mbox{\boldmath$s$}}^{1}_{\text{irr}}=\mbox{$\mathbf{B}$}_{1}^{T}\hat{\mbox{\boldmath$s$}}^{0}=\mbox{$\mathbf{B}$}_{1}^{T}\mathbf{L}_{0}^{\dagger}\mathbf{B}_{1}\mbox{\boldmath$x$}^{1}\\ \hat{\mbox{\boldmath$s$}}^{1}_{\text{sol}}=\mbox{$\mathbf{B}$}_{2}\hat{\mbox{\boldmath$s$}}^{2}=\mbox{$\mathbf{B}$}_{2}(\mathbf{B}_{2}^{T}\mathbf{B}_{2})^{\dagger}\mathbf{B}_{2}^{T}\mbox{\boldmath$x$}^{1}\\ \hat{\mbox{\boldmath$s$}}^{1}_{\text{H}}=\mbox{\boldmath$x$}^{1}-\hat{\mbox{\boldmath$s$}}^{1}_{\text{irr}}-\hat{\mbox{\boldmath$s$}}^{1}_{\text{sol}}\vskip 6.0pt plus 2.0pt minus 2.0pt.\end{array} (40)
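The estimators in (40) amount to two pseudo-inverse solutions and a subtraction. A minimal sketch, assuming NumPy and an illustrative toy complex (four vertices, five edges, one filled triangle); the observed flow x1 is an illustrative assumption.

```python
import numpy as np

# Illustrative toy complex: vertices {1..4}, edges (1,2),(1,3),(2,3),(2,4),(3,4),
# with the filled triangle (1,2,3)
B1 = np.array([[-1, -1,  0,  0,  0],
               [ 1,  0, -1, -1,  0],
               [ 0,  1,  1,  0, -1],
               [ 0,  0,  0,  1,  1]], dtype=float)
B2 = np.array([[1, -1, 1, 0, 0]], dtype=float).T

x1 = np.array([1.0, -2.0, 0.5, 3.0, -1.0])      # illustrative observed edge flow

L0 = B1 @ B1.T                                  # vertex Laplacian
s0_hat = np.linalg.pinv(L0) @ B1 @ x1           # s0 = L0^+ B1 x1
s2_hat = np.linalg.pinv(B2.T @ B2) @ B2.T @ x1  # s2 = (B2^T B2)^+ B2^T x1

s_irr = B1.T @ s0_hat                           # irrotational component, (40)
s_sol = B2 @ s2_hat                             # solenoidal component, (40)
s_H = x1 - s_irr - s_sol                        # harmonic remainder, (40)

# The components are mutually orthogonal and the harmonic part is
# both divergence-free and curl-free
assert np.isclose(s_irr @ s_sol, 0.0)
assert np.allclose(B1 @ s_H, 0) and np.allclose(B2.T @ s_H, 0)
assert np.allclose(s_irr + s_sol + s_H, x1)
```

The asserts verify numerically what the KKT analysis establishes: the three recovered components are orthogonal projections of the observed flow onto the three Hodge subspaces.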

Note that the first two conditions in (36) imply that the variables 𝒔0\mbox{\boldmath$s$}^{0},𝒔2\mbox{\boldmath$s$}^{2} in 𝒬\mathcal{Q} are indeed decoupled so that the optimal solutions coincide with those of the following problems

𝒔^0=argmin𝒔0V𝐁1T𝒔0𝒙12(𝒬0)\begin{array}[]{lll}\hat{\mbox{\boldmath$s$}}^{0}=&\underset{{\mbox{\boldmath$s$}}^{0}\in\mathbb{R}^{V}}{\text{argmin}}&\parallel\mathbf{B}_{1}^{T}\mbox{\boldmath$s$}^{0}-\mbox{\boldmath$x$}^{1}\parallel^{2}\qquad(\mathcal{Q}_{0})\\ \end{array} (41)
𝒔^2=argmin𝒔2T𝐁2𝒔2𝒙12(𝒬2).\begin{array}[]{lll}\hat{\mbox{\boldmath$s$}}^{2}=&\underset{{\mbox{\boldmath$s$}}^{2}\in\mathbb{R}^{T}}{\text{argmin}}&\parallel\mathbf{B}_{2}\mbox{\boldmath$s$}^{2}-\mbox{\boldmath$x$}^{1}\parallel^{2}\qquad(\mathcal{Q}_{2}).\\ \end{array} (42)
Refer to caption

(a)
Refer to caption
(b)

Figure 2: Observed flow on a simplicial complex (a) and reconstruction of the irrotational flow (b).

An example of an edge (flow) signal is reported in Fig. 2 (a), representing the simulation of data packet flows over a network, including measurement errors. The signal values are encoded in the gray level associated with each link. Suppose now that the goal of the processing is to detect nodes injecting anomalous traffic into the network, starting from the traffic shown in Fig. 2(a). If some node is a source of anomalous traffic, that node generates an edge signal with a strong irrotational (or divergence-like) component. To detect spamming nodes, we can then project the overall observed traffic vector 𝒔1\mbox{\boldmath$s$}^{1} onto the space orthogonal to the span of the eigenvectors associated with the nonzero eigenvalues of 𝐁2𝐁2T\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{B}$}_{2}^{T}, computing

𝒚1=(𝐈𝐔sol𝐔solT)𝒔1\mbox{\boldmath$y$}^{1}=\left(\mbox{$\mathbf{I}$}-\mbox{$\mathbf{U}$}_{\text{sol}}\mbox{$\mathbf{U}$}_{\text{sol}}^{T}\right)\mbox{\boldmath$s$}^{1} (43)

where 𝐔sol\mbox{$\mathbf{U}$}_{\text{sol}} is the matrix whose columns collect the eigenvectors associated with the nonzero eigenvalues of 𝐁2𝐁2T\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{B}$}_{2}^{T}. The result of this projection is reported in Fig. 2(b), where we can clearly see that the only edges with a significant contribution are the ones located around two nodes, i.e. the source nodes injecting traffic into the network, whose divergence is encoded by the node color.
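The projector in (43) is easy to form once the solenoidal eigenvectors are available. A minimal sketch, assuming NumPy and an illustrative toy complex (four vertices, five edges, one filled triangle): a purely solenoidal flow is annihilated by the projection, while a gradient (divergence-carrying) flow passes through unchanged.

```python
import numpy as np

# Illustrative toy complex: vertices {1..4}, edges (1,2),(1,3),(2,3),(2,4),(3,4),
# with the filled triangle (1,2,3)
B1 = np.array([[-1, -1,  0,  0,  0],
               [ 1,  0, -1, -1,  0],
               [ 0,  1,  1,  0, -1],
               [ 0,  0,  0,  1,  1]], dtype=float)
B2 = np.array([[1, -1, 1, 0, 0]], dtype=float).T
E = B1.shape[1]

w2, V2 = np.linalg.eigh(B2 @ B2.T)
U_sol = V2[:, w2 > 1e-8]            # eigenvectors spanning the solenoidal space
P = np.eye(E) - U_sol @ U_sol.T     # projector of (43)

curl_flow = B2 @ np.array([2.0])            # purely solenoidal flow
grad_flow = B1.T @ np.array([1.0, 0, 0, 0]) # purely irrotational (gradient) flow

assert np.allclose(P @ curl_flow, 0)          # solenoidal part is removed
assert np.allclose(P @ grad_flow, grad_flow)  # divergence-like part survives
```

On the real traffic data of Fig. 2, the same projection suppresses the circulating component and leaves the divergence-like signature concentrated around the source nodes.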

V Sampling and recovery of signals defined over simplicial complexes

Suppose now that we only observe a few samples of a kk-order signal. The question we address here is under which conditions the whole signal 𝒔k\mbox{\boldmath$s$}^{k} can be recovered from a subset of its samples. To answer this question, we may use the theory developed in [56] for signals on graphs, later extended to hypergraphs in [57]. For simplicity, we focus on signals defined over a simplicial complex of order K=2K=2, i.e. on vertices, edges and triangles. Given a set of edges 𝒮\mathcal{S}\subseteq\mathcal{E}, we define an edge-limiting operator as a diagonal matrix 𝐃𝒮\mathbf{D}_{\mathcal{S}} of dimension equal to the number of edges, with a one in the positions where we measure the flow and zero elsewhere, i.e.

𝐃𝒮=diag{𝟏𝒮}\mathbf{D}_{\mathcal{S}}=\text{diag}\{\mathbf{1}_{\mathcal{S}}\} (44)

where 𝟏𝒮\mathbf{1}_{\mathcal{S}} is the set indicator vector whose ii-th entry is equal to one if the edge ei𝒮e_{i}\in\mathcal{S} and zero otherwise. We say that an edge signal 𝒔1\mbox{\boldmath$s$}^{1} is perfectly localized over the subset 𝒮\mathcal{S}\subseteq\mathcal{E} (or 𝒮\mathcal{S}-edge-limited) if 𝒔1=𝐃𝒮𝒔1\mbox{\boldmath$s$}^{1}=\mathbf{D}_{\mathcal{S}}\mbox{\boldmath$s$}^{1}. Similarly, given the matrix 𝐔1\mathbf{U}_{1} whose columns are the eigenvectors of 𝐋1\mathbf{L}_{1}, and a subset of indices \mathcal{F}, we define the operator

𝐅=𝐔1𝚺𝐔1T\mathbf{F}_{\mathcal{F}}=\mathbf{U}_{1}\mathbf{\Sigma}_{\mathcal{F}}\mathbf{U}^{T}_{1} (45)

where 𝚺=diag(𝟏)\mathbf{\Sigma}_{\mathcal{F}}=\text{diag}(\mathbf{1}_{\mathcal{F}}). An edge signal 𝒔1\mbox{\boldmath$s$}^{1} is \mathcal{F}-bandlimited over a frequency set \mathcal{F} if 𝐅𝒔1=𝒔1\mathbf{F}_{\mathcal{F}}\mbox{\boldmath$s$}^{1}=\mbox{\boldmath$s$}^{1}. The operators 𝐃𝒮\mathbf{D}_{\mathcal{S}} and 𝐅\mathbf{F}_{\mathcal{F}} are self-adjoint and idempotent and represent orthogonal projectors onto the sets 𝒮{\mathcal{S}} and {\mathcal{F}}, respectively. Conditions for edge signals to be perfectly localized in both the edge and frequency domains were derived in [56]. More specifically: a) 𝒔1\mbox{\boldmath$s$}^{1} is perfectly localized over both the edge set 𝒮\mathcal{S} and the frequency set \mathcal{F} if and only if the operator 𝐅𝐃𝒮𝐅\mathbf{F}_{\mathcal{F}}\mathbf{D}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}} has an eigenvalue equal to one, i.e.

𝐃𝒮𝐅2=𝐅𝐃𝒮2=𝐅𝐃𝒮𝐅2=1\|\mathbf{D}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}}\|_{2}=\|\mathbf{F}_{\mathcal{F}}\mathbf{D}_{\mathcal{S}}\|_{2}=\|\mathbf{F}_{\mathcal{F}}\mathbf{D}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}}\|_{2}=1 (46)

where 𝑨2\|\mbox{\boldmath$A$}\|_{2} denotes the spectral norm of 𝑨\mbox{\boldmath$A$}; b) a sufficient condition for the existence of such signals is that |𝒮|+||>E|\mathcal{S}|+|\mathcal{F}|>E. Conversely, if |𝒮|+||E|\mathcal{S}|+|\mathcal{F}|\leq E, there could still exist perfectly localized vectors, when condition (46) holds.

In the following, we first consider sampling on a single layer, by extracting samples from the edge signal, and then we consider multi-layer processing, using samples of signals defined over different layers of the simplicial complex.

V-A Single-layer sampling

In the following theorem, adapted from [56], we provide a necessary and sufficient condition to recover the edge signal 𝒔1\mbox{\boldmath$s$}^{1} from its samples 𝒔𝒮1𝐃𝒮𝒔1\mbox{\boldmath$s$}^{1}_{\mathcal{S}}\triangleq\mathbf{D}_{\mathcal{S}}\mbox{\boldmath$s$}^{1}.

Theorem 2

Given the bandlimited edge signal 𝐬1=𝐅𝐬1\mbox{\boldmath$s$}^{1}=\mathbf{F}_{\mathcal{F}}\mbox{\boldmath$s$}^{1}, it is possible to recover 𝐬1\mbox{\boldmath$s$}^{1} from a subset of samples collected over the subset 𝒮\mathcal{S}\subseteq\mathcal{E} if and only if the following condition holds:

𝐃¯𝒮𝐅2=𝐅𝐃¯𝒮2<1\|\bar{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}}\|_{2}=\|\mathbf{F}_{\mathcal{F}}\bar{\mathbf{D}}_{\mathcal{S}}\|_{2}<1 (47)

with 𝐃¯𝒮=𝐈𝐃𝒮\bar{\mathbf{D}}_{\mathcal{S}}=\mathbf{I}-\mathbf{D}_{\mathcal{S}}.

Proof:

The proof is a straightforward extension of Th. 4.1 in [56] to signals defined on the edges of the complex. ∎

In words, the above condition means that there can be no \mathcal{F}-bandlimited signals that are perfectly localized on the complementary set 𝒮¯\bar{\mathcal{S}}. Perfect recovery of the signal 𝒔1\mbox{\boldmath$s$}^{1} from 𝒔𝒮1\mbox{\boldmath$s$}^{1}_{\mathcal{S}} can be achieved as

𝐫1=𝐐𝒮𝒔𝒮1\mathbf{r}^{1}=\mathbf{Q}_{\mathcal{S}}\mbox{\boldmath$s$}^{1}_{\mathcal{S}} (48)

where 𝐐𝒮=(𝐈𝐃¯𝒮𝐅)1\mathbf{Q}_{\mathcal{S}}=(\mathbf{I}-\bar{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}})^{-1}. The existence of the above inverse is ensured by condition (47). In fact, the reconstruction error can be written as [56]

𝒔1𝐐𝒮𝒔𝒮1=𝒔1𝐐𝒮(𝐈𝐃¯𝒮)𝒔1=𝒔1𝐐𝒮(𝐈𝐃¯𝒮𝐅)𝒔1=𝟎,\mbox{\boldmath$s$}^{1}-\mathbf{Q}_{\mathcal{S}}\mbox{\boldmath$s$}^{1}_{\mathcal{S}}=\mbox{\boldmath$s$}^{1}-\mathbf{Q}_{\mathcal{S}}(\mathbf{I}-\bar{\mathbf{D}}_{\mathcal{S}})\mbox{\boldmath$s$}^{1}=\mbox{\boldmath$s$}^{1}-\mathbf{Q}_{\mathcal{S}}(\mathbf{I}-\bar{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}})\mbox{\boldmath$s$}^{1}=\mathbf{0},

where we exploited in the second equality the bandlimited condition 𝒔1=𝐅𝒔1\mbox{\boldmath$s$}^{1}=\mathbf{F}_{\mathcal{F}}\mbox{\boldmath$s$}^{1}.

For (47) to hold true, we must guarantee that 𝐃𝒮𝐅𝒔1𝟎\mathbf{D}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}}\mbox{\boldmath$s$}^{1}\neq{\mathbf{0}} for every nonzero \mathcal{F}-bandlimited signal or, equivalently, that the matrix 𝐃𝒮𝐅\mathbf{D}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}} is full column rank, i.e. rank(𝐃𝒮𝐅)=||\text{rank}(\mathbf{D}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}})=|\mathcal{F}|. Then, a necessary condition for this to hold is |𝒮||||\mathcal{S}|\geq|\mathcal{F}|.
An alternative way to retrieve the overall signal 𝒔1\mbox{\boldmath$s$}^{1} from its samples is the following. If (47) holds true, the entire signal 𝒔1\mbox{\boldmath$s$}^{1} can be recovered from 𝒔𝒮1\mbox{\boldmath$s$}^{1}_{\mathcal{S}} as

𝒔1=𝐔(𝐔T𝐃𝒮𝐔)1𝐔T𝒔𝒮1\mbox{\boldmath$s$}^{1}=\mathbf{U}_{\mathcal{F}}\left(\mathbf{U}_{\mathcal{F}}^{T}\mathbf{D}_{\mathcal{S}}\mathbf{U}_{\mathcal{F}}\right)^{-1}\mathbf{U}_{\mathcal{F}}^{T}\,\mbox{\boldmath$s$}^{1}_{\mathcal{S}} (49)

where 𝐔\mathbf{U}_{\mathcal{F}} is the E×||E\times|\mathcal{F}| matrix whose columns are the eigenvectors of 𝐋1\mbox{$\mathbf{L}$}_{1} associated with the frequency set \mathcal{F}.
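The recovery rule (49) can be checked numerically. A minimal sketch, assuming NumPy and an illustrative toy complex (four vertices, five edges, one filled triangle): a bandlimited edge signal is built from the two lowest eigenvectors of L1, the sample set S is chosen by greedy column pivoting so that D_S U_F has full column rank, which enforces condition (47), and the signal is then recovered exactly from |S| = |F| samples, matching the necessary condition |S| >= |F|.

```python
import numpy as np

# Illustrative toy complex: vertices {1..4}, edges (1,2),(1,3),(2,3),(2,4),(3,4),
# with the filled triangle (1,2,3)
B1 = np.array([[-1, -1,  0,  0,  0],
               [ 1,  0, -1, -1,  0],
               [ 0,  1,  1,  0, -1],
               [ 0,  0,  0,  1,  1]], dtype=float)
B2 = np.array([[1, -1, 1, 0, 0]], dtype=float).T
E = B1.shape[1]

L1 = B1.T @ B1 + B2 @ B2.T
_, U = np.linalg.eigh(L1)
UF = U[:, :2]                       # frequency set F: two lowest eigenvectors
rng = np.random.default_rng(0)
s1 = UF @ rng.standard_normal(2)    # F-bandlimited edge signal

# Greedy column pivoting on UF^T: pick |F| edges so that D_S UF has full rank
R, S = UF.T.copy(), []
for _ in range(UF.shape[1]):
    j = int(np.argmax(np.linalg.norm(R, axis=0)))
    S.append(j)
    q = R[:, j] / np.linalg.norm(R[:, j])
    R -= np.outer(q, q @ R)         # deflate the selected direction

D = np.zeros((E, E))
D[S, S] = 1.0
FF = UF @ UF.T                      # bandlimiting projector, as in (45)
assert np.linalg.norm((np.eye(E) - D) @ FF, 2) < 1        # condition (47)

s1_rec = UF @ np.linalg.inv(UF.T @ D @ UF) @ (UF.T @ D @ s1)  # recovery (49)
assert np.allclose(s1_rec, s1)
```

The greedy pivoting step is one simple way to pick a sample set satisfying (47); the paper itself does not prescribe a sampling strategy, so any set making D_S U_F full column rank works.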

Remark: It is worth noticing that, because of the Hodge decomposition (18), an edge signal always contains three components that are typically bandlimited, as they reside on subspaces of dimension smaller than EE. This means that, if one knows a priori that the edge signal contains only one component, e.g. solenoidal, irrotational, or harmonic, then it is possible to observe the edge signal and recover the desired component over all the edges, under the conditions established by Theorem 2.

V-B Multi-layer sampling

In this section, we consider the case where we take samples of signals of different orders and we propose two alternative strategies to retrieve an edge signal 𝒔1\mbox{\boldmath$s$}^{1} from these samples.
The first approach aims at recovering 𝒔1\mbox{\boldmath$s$}^{1} by using both the vertex signal samples 𝒔𝒜0=𝐃𝒜𝒔0\mbox{\boldmath$s$}^{0}_{\mathcal{A}}=\mathbf{D}_{\mathcal{A}}\mbox{\boldmath$s$}^{0}, with 𝒜𝒱\mathcal{A}\subseteq\mathcal{V}, and the edge samples 𝒔𝒮1=𝐃𝒮𝒔1\mbox{\boldmath$s$}^{1}_{\mathcal{S}}=\mathbf{D}_{\mathcal{S}}\mbox{\boldmath$s$}^{1}. Hereinafter, we denote by irr\mathcal{F}_{\text{irr}}, sol\mathcal{F}_{\text{sol}} and H\mathcal{F}_{\text{H}} the sets of frequency indexes in \mathcal{F} corresponding to the eigenvectors of 𝐋1\mbox{$\mathbf{L}$}_{1} belonging, respectively, to the irrotational, solenoidal and harmonic subspaces. Note that, if 𝒔1\mbox{\boldmath$s$}^{1} is \mathcal{F}-bandlimited, then 𝒔H1\mbox{\boldmath$s$}^{1}_{H}, 𝒔sol1\mbox{\boldmath$s$}^{1}_{\text{sol}} and 𝒔irr1\mbox{\boldmath$s$}^{1}_{\text{irr}} are also bandlimited with bandwidth, respectively, |H||\mathcal{F}_{\text{H}}|, |sol||\mathcal{F}_{\text{sol}}| and |irr||\mathcal{F}_{\text{irr}}|. Define sH\mathcal{F}_{\text{sH}} as the set of frequency indexes such that sH=Hsol\mathcal{F}_{\text{sH}}=\mathcal{F}_{\text{H}}\cup\mathcal{F}_{\text{sol}}. Furthermore, given the matrix 𝐔0\mathbf{U}_{0} whose columns are the eigenvectors of 𝐋0\mbox{$\mathbf{L}$}_{0}, we define the operator 𝐅00=𝐔0𝚺0𝐔0T\mathbf{F}_{\mathcal{F}_{0}}^{0}=\mathbf{U}_{0}\mathbf{\Sigma}_{\mathcal{F}_{0}}\mathbf{U}^{T}_{0}, where |0||\mathcal{F}_{0}| denotes the bandwidth of 𝒔0\mbox{\boldmath$s$}^{0}. Let 𝒞1\mathcal{C}_{1} be the set of indexes in 0\mathcal{F}_{0} associated with the eigenvectors belonging to ker(𝐋0)\text{ker}(\mbox{$\mathbf{L}$}_{0}) and denote by 𝐔0𝒞10\mbox{$\mathbf{U}$}^{0}_{\mathcal{F}_{0}-\mathcal{C}_{1}} the matrix whose columns are the eigenvectors of 𝐋0\mbox{$\mathbf{L}$}_{0} associated with the frequency set 0𝒞1\mathcal{F}_{0}-\mathcal{C}_{1}. Then, we can state the following theorem.

Theorem 3

Consider the simplicial complex 𝒳(𝒱,,𝒯)\mathcal{X}(\mathcal{V},\mathcal{E},\mathcal{T}) of order two and the edge signal 𝐬1=𝐬sol1+𝐬H1+𝐁1T𝐬0\mbox{\boldmath$s$}^{1}=\mbox{\boldmath$s$}^{1}_{\text{sol}}+\mbox{\boldmath$s$}^{1}_{{H}}+\mbox{$\mathbf{B}$}_{1}^{T}\mbox{\boldmath$s$}^{0}. Assume that: i) the vertex signal 𝐬0\mbox{\boldmath$s$}^{0} and the edge signal 𝐬1\mbox{\boldmath$s$}^{1} are bandlimited with bandwidth, respectively, |0||\mathcal{F}_{0}| and ||=|sH|+|0|c1|\mathcal{F}|=|\mathcal{F}_{\text{sH}}|+|\mathcal{F}_{0}|-c_{1}, where |sH|=|sol|+|H||\mathcal{F}_{\text{sH}}|=|\mathcal{F}_{\text{sol}}|+|\mathcal{F}_{\text{H}}| and c10c_{1}\geq 0 denotes the number of eigenvectors in the bandwidth of 𝐬0\mbox{\boldmath$s$}^{0} belonging to ker(𝐋0)\text{ker}(\mbox{$\mathbf{L}$}_{0}); ii) the conditions 𝐃¯𝒜𝐅002<1\|\bar{\mathbf{D}}_{\mathcal{A}}\mbox{$\mathbf{F}$}_{\mathcal{F}_{0}}^{0}\|_{2}<1 and 𝐃¯𝒮𝐅sH2<1\|\bar{\mathbf{D}}_{\mathcal{S}}\mbox{$\mathbf{F}$}_{\mathcal{F}_{\text{sH}}}\|_{2}<1 hold true. Then, it follows that:

  • a)

    𝒔1\mbox{\boldmath$s$}^{1} can be perfectly recovered from both a set of vertex signal samples 𝒔𝒜0=𝐃𝒜𝒔0\mbox{\boldmath$s$}^{0}_{\mathcal{A}}=\mathbf{D}_{\mathcal{A}}\mbox{\boldmath$s$}^{0} and from the edge samples 𝒔𝒮1=𝐃𝒮𝒔1\mbox{\boldmath$s$}^{1}_{\mathcal{S}}=\mathbf{D}_{\mathcal{S}}\mbox{\boldmath$s$}^{1} via the set of equations

    [𝒔0𝒔¯1]=𝐐[𝒔𝒜0𝒔𝒮1],\left[\begin{array}[]{lll}\mbox{\boldmath$s$}^{0}\vskip 6.0pt plus 2.0pt minus 2.0pt\\ \bar{\mbox{\boldmath$s$}}^{1}\end{array}\right]=\mbox{$\mathbf{Q}$}\left[\begin{array}[]{lll}\mbox{\boldmath$s$}^{0}_{\mathcal{A}}\vskip 6.0pt plus 2.0pt minus 2.0pt\\ \mbox{\boldmath$s$}^{1}_{\mathcal{S}}\end{array}\right], (50)

    where 𝒔¯1=𝒔sol1+𝒔H1\bar{\mbox{\boldmath$s$}}^{1}=\mbox{\boldmath$s$}^{1}_{\text{sol}}+\mbox{\boldmath$s$}^{1}_{{H}},

    𝐐=[(𝐈𝐃¯𝒜𝐅00)1𝐎𝐏(𝐈𝐃¯𝒮𝐅sH)1]\begin{array}[]{lll}\mbox{$\mathbf{Q}$}=\left[\begin{array}[]{lll}(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{A}}\mathbf{F}_{\mathcal{F}_{0}}^{0})^{-1}&\mathbf{O}\vskip 6.0pt plus 2.0pt minus 2.0pt\\ \mbox{$\mathbf{P}$}&(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{\text{sH}}})^{-1}\end{array}\right]\end{array} (51)

    and 𝐏=(𝐈𝐃¯𝒮𝐅sH)1𝐃𝒮𝐁1T𝐅00(𝐈𝐃¯𝒜𝐅00)1\mbox{$\mathbf{P}$}=-(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{\text{sH}}})^{-1}{\mathbf{D}}_{\mathcal{S}}\mbox{$\mathbf{B}$}_{1}^{T}\mathbf{F}_{\mathcal{F}_{0}}^{0}(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{A}}\mathbf{F}_{\mathcal{F}_{0}}^{0})^{-1};

  • b)

    There exists a set of frequency indexes $\mathcal{F}_{\text{irr}}\subset\mathcal{F}$ for which the eigenvector matrix $\mathbf{U}_{\mathcal{F}_{\text{irr}}}$ of $\mathbf{L}_1$ satisfies the equality $\mathbf{U}_{\mathcal{F}_{\text{irr}}}=\mathbf{B}_1^T\mathbf{U}^0_{\mathcal{F}_0-\mathcal{C}_1}$, such that any $\mathcal{F}$-bandlimited edge signal with frequency index set $\mathcal{F}=\mathcal{F}_{\text{irr}}\cup\mathcal{F}_{\text{sH}}$ and $|\mathcal{F}_{\text{irr}}|=|\mathcal{F}_0|-c_1$ can be recovered by using $N_0\geq|\mathcal{F}_0|$ samples from $\boldsymbol{s}^0$ and $N_1\geq|\mathcal{F}_{\text{sH}}|$ samples from $\boldsymbol{s}^1$.

Proof:

See Appendix B. ∎
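The interpolation operators appearing in (50)–(51) build on the basic identity $\boldsymbol{s}=(\mathbf{I}-\bar{\mathbf{D}}_{\mathcal{A}}\mathbf{F}^0_{\mathcal{F}_0})^{-1}\mathbf{D}_{\mathcal{A}}\boldsymbol{s}$, valid for any bandlimited signal when the norm condition in ii) holds. The following is a minimal numerical sketch of this mechanism for the order-$0$ case, on a hypothetical toy cycle graph (all parameter values are illustrative choices, not from the paper):

```python
import numpy as np

# Toy cycle graph with N nodes: combinatorial Laplacian L0 = diag(A 1) - A
N = 8
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
L0 = np.diag(A.sum(axis=1)) - A

# GFT basis; bandlimiting operator F = U Sigma U^T over the |F0| lowest frequencies
eigval, U = np.linalg.eigh(L0)
F0 = 3                                     # assumed bandwidth |F_0|
F = U[:, :F0] @ U[:, :F0].T
rng = np.random.default_rng(0)
s = F @ rng.standard_normal(N)             # bandlimited vertex signal

# Vertex sampling operators: D_A keeps the samples in A, Dbar_A = I - D_A
Aset = {0, 2, 3, 5, 6}                     # hypothetical sample set, |Aset| >= |F_0|
D = np.diag([1.0 if i in Aset else 0.0 for i in range(N)])
Dbar = np.eye(N) - D
assert np.linalg.norm(Dbar @ F, 2) < 1     # recovery condition as in ii)

# Perfect recovery: s = (I - Dbar_A F)^{-1} D_A s
s_rec = np.linalg.solve(np.eye(N) - Dbar @ F, D @ s)
assert np.allclose(s_rec, s)               # exact reconstruction
```

Theorem 3 applies this same mechanism jointly to the vertex and edge samples, with the coupling between the two signal orders handled by the off-diagonal block $\mathbf{P}$ in (51).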

Let us now consider the case where we want to recover $\boldsymbol{s}^1$ from samples collected from signals of order $0$, $1$, and $2$, i.e. $\boldsymbol{s}^0$, $\boldsymbol{s}^1$ and $\boldsymbol{s}^2$. We denote by $\boldsymbol{s}^2_{\mathcal{M}}=\mathbf{D}_{\mathcal{M}}\boldsymbol{s}^2$ the vector of triangle signal samples, with $\mathcal{M}\subseteq\mathcal{T}$. Furthermore, given the matrix $\mathbf{U}_2$ whose columns are the eigenvectors of the second-order Laplacian $\mathbf{L}_2$, we define the operator $\mathbf{F}_{\mathcal{F}_2}^2=\mathbf{U}_2\boldsymbol{\Sigma}_{\mathcal{F}_2}\mathbf{U}_2^T$, where $|\mathcal{F}_2|$ denotes the bandwidth of $\boldsymbol{s}^2$. Then, assuming $\boldsymbol{s}^2$ bandlimited, it holds that $\boldsymbol{s}^2=\mathbf{F}_{\mathcal{F}_2}^2\boldsymbol{s}^2$. Denote by $\mathcal{C}_2$ the set of indexes in $\mathcal{F}_2$ associated with the eigenvectors belonging to $\text{ker}(\mathbf{L}_2)$, and by $\mathbf{U}^2_{\mathcal{F}_2-\mathcal{C}_2}$ the matrix whose columns are the eigenvectors of $\mathbf{L}_2$ associated with the index set $\mathcal{F}_2-\mathcal{C}_2$.

Theorem 4

Consider the second-order simplicial complex $\mathcal{X}(\mathcal{V},\mathcal{E},\mathcal{T})$ and the edge signal $\boldsymbol{s}^1=\mathbf{B}_2\boldsymbol{s}^2+\boldsymbol{s}^1_{H}+\mathbf{B}_1^T\boldsymbol{s}^0$, and assume that: i) the vertex signal $\boldsymbol{s}^0$, the edge signal $\boldsymbol{s}^1$ and the triangle signal $\boldsymbol{s}^2$ are bandlimited with bandwidths, respectively, $|\mathcal{F}_0|$, $|\mathcal{F}_1|=|\mathcal{F}_0|+|\mathcal{F}_{\text{H}}|+|\mathcal{F}_2|-(c_1+c_2)$ and $|\mathcal{F}_2|$, with $c_1\geq 0$ and $c_2\geq 0$ the numbers of eigenvectors in the bandwidths of $\boldsymbol{s}^0$ and $\boldsymbol{s}^2$ belonging, respectively, to $\text{ker}(\mathbf{L}_0)$ and $\text{ker}(\mathbf{L}_2)$; ii) the conditions $\|\bar{\mathbf{D}}_{\mathcal{A}}\mathbf{F}_{\mathcal{F}_0}^0\|_2<1$, $\|\bar{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{\text{H}}}\|_2<1$ and $\|\bar{\mathbf{D}}_{\mathcal{M}}\mathbf{F}_{\mathcal{F}_2}^2\|_2<1$ all hold true. Then, it follows that:

  • a)

    𝒔1\mbox{\boldmath$s$}^{1} can be perfectly recovered from a set of vertex signal samples 𝒔𝒜0=𝐃𝒜𝒔0\mbox{\boldmath$s$}^{0}_{\mathcal{A}}=\mbox{$\mathbf{D}$}_{\mathcal{A}}\mbox{\boldmath$s$}^{0}, from the edge samples 𝒔𝒮1=𝐃𝒮𝒔1\mbox{\boldmath$s$}^{1}_{\mathcal{S}}=\mbox{$\mathbf{D}$}_{\mathcal{S}}\mbox{\boldmath$s$}^{1} and from the triangle samples 𝒔2=𝐃𝒔2\mbox{\boldmath$s$}^{2}_{\mathcal{M}}=\mbox{$\mathbf{D}$}_{\mathcal{M}}\mbox{\boldmath$s$}^{2} as

    [𝒔0𝒔H1𝒔2]=𝐑[𝒔𝒜0𝒔𝒮1𝒔2],\left[\begin{array}[]{lll}\mbox{\boldmath$s$}^{0}\vskip 6.0pt plus 2.0pt minus 2.0pt\\ \mbox{\boldmath$s$}^{1}_{{H}}\vskip 6.0pt plus 2.0pt minus 2.0pt\\ \mbox{\boldmath$s$}^{2}\end{array}\right]=\mbox{$\mathbf{R}$}\left[\begin{array}[]{lll}\mbox{\boldmath$s$}^{0}_{\mathcal{A}}\vskip 6.0pt plus 2.0pt minus 2.0pt\\ \mbox{\boldmath$s$}^{1}_{\mathcal{S}}\vskip 6.0pt plus 2.0pt minus 2.0pt\\ \mbox{\boldmath$s$}^{2}_{\mathcal{M}}\end{array}\right], (52)

    where

    𝐑=[(𝐈𝐃¯𝒜𝐅00)1𝐎𝐎𝐏1𝐏2𝐏3𝐎𝐎(𝐈𝐃¯𝐅22)1]\hskip-16.21828pt\begin{array}[]{lll}\mbox{$\mathbf{R}$}=\left[\begin{array}[]{lll}(\mbox{$\mathbf{I}$}-\overline{\mbox{$\mathbf{D}$}}_{\mathcal{A}}\mbox{$\mathbf{F}$}_{\mathcal{F}_{0}}^{0})^{-1}&\mathbf{O}&\mathbf{O}\vskip 6.0pt plus 2.0pt minus 2.0pt\\ \mbox{$\mathbf{P}$}_{1}&\mathbf{P}_{2}&\mathbf{P}_{3}\vskip 6.0pt plus 2.0pt minus 2.0pt\\ \mathbf{O}&\mathbf{O}&(\mbox{$\mathbf{I}$}-\overline{\mbox{$\mathbf{D}$}}_{\mathcal{M}}\mbox{$\mathbf{F}$}_{\mathcal{F}_{2}}^{2})^{-1}\end{array}\!\!\right]\end{array} (53)

    and 𝐏1=(𝐈𝐃¯𝒮𝐅H)1𝐃𝒮𝐁1T𝐅00(𝐈𝐃¯𝒜𝐅00)1\mbox{$\mathbf{P}$}_{1}=-(\mbox{$\mathbf{I}$}-\overline{\mbox{$\mathbf{D}$}}_{\mathcal{S}}\mbox{$\mathbf{F}$}_{\mathcal{F}_{\text{H}}})^{-1}{\mbox{$\mathbf{D}$}}_{\mathcal{S}}\mbox{$\mathbf{B}$}_{1}^{T}\mbox{$\mathbf{F}$}_{\mathcal{F}_{0}}^{0}(\mbox{$\mathbf{I}$}-\overline{\mbox{$\mathbf{D}$}}_{\mathcal{A}}\mbox{$\mathbf{F}$}_{\mathcal{F}_{0}}^{0})^{-1}, 𝐏2=(𝐈𝐃¯𝒮𝐅H)1\mbox{$\mathbf{P}$}_{2}=(\mbox{$\mathbf{I}$}-\overline{\mbox{$\mathbf{D}$}}_{\mathcal{S}}\mbox{$\mathbf{F}$}_{\mathcal{F}_{\text{H}}})^{-1}, 𝐏3=(𝐈𝐃¯𝒮𝐅H)1𝐃𝒮𝐁2𝐅22(𝐈𝐃¯𝐅22)1\mbox{$\mathbf{P}$}_{3}=-(\mbox{$\mathbf{I}$}-\overline{\mbox{$\mathbf{D}$}}_{\mathcal{S}}\mbox{$\mathbf{F}$}_{\mathcal{F}_{\text{H}}})^{-1}{\mbox{$\mathbf{D}$}}_{\mathcal{S}}\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{F}$}_{\mathcal{F}_{2}}^{2}(\mbox{$\mathbf{I}$}-\overline{\mbox{$\mathbf{D}$}}_{\mathcal{M}}\mbox{$\mathbf{F}$}_{\mathcal{F}_{2}}^{2})^{-1};

  • b)

    There exist two sets of frequency indexes $\mathcal{F}_{\text{sol}},\mathcal{F}_{\text{irr}}\subset\mathcal{F}$ for which the eigenvectors of $\mathbf{L}_1$ stacked in the columns of the matrices $\mathbf{U}_{\mathcal{F}_{\text{sol}}}$, $\mathbf{U}_{\mathcal{F}_{\text{irr}}}$ satisfy, respectively, the equalities $\mathbf{U}_{\mathcal{F}_{\text{sol}}}=\mathbf{B}_2\mathbf{U}^2_{\mathcal{F}_2-\mathcal{C}_2}$ and $\mathbf{U}_{\mathcal{F}_{\text{irr}}}=\mathbf{B}_1^T\mathbf{U}^0_{\mathcal{F}_0-\mathcal{C}_1}$. Then, any $\mathcal{F}$-bandlimited edge signal with frequency set $\mathcal{F}=\mathcal{F}_{\text{sol}}\cup\mathcal{F}_{\text{irr}}\cup\mathcal{F}_{\text{H}}$, $|\mathcal{F}_{\text{irr}}|=|\mathcal{F}_0|-c_1$ and $|\mathcal{F}_{\text{sol}}|=|\mathcal{F}_2|-c_2$, can be recovered by using $N_0\geq|\mathcal{F}_0|$ samples from $\boldsymbol{s}^0$, $N_1\geq|\mathcal{F}_{\text{H}}|$ samples from $\boldsymbol{s}^1$ and $N_2\geq|\mathcal{F}_2|$ samples from $\boldsymbol{s}^2$.

Proof:

See Appendix B.∎

VI Estimation of discrete vector fields

Developing tools to process signals defined over simplicial complexes is also useful to devise algorithms for processing discrete vector fields. The use of algebraic topology, and more specifically of Discrete Exterior Calculus (DEC), has already been considered for the analysis of vector fields, especially in the field of computer graphics. Exterior Calculus (EC) is a discipline that generalizes vector calculus to smooth manifolds of arbitrary dimension [58], and DEC is a discretization of EC on simplicial complexes [59], [60]. DEC methodologies have already been proposed in [61] to produce smooth tangent vector fields in computer graphics. Smoothing tangential vector fields using techniques based on the spectral decomposition of higher-order Laplacians was also advocated in [62]. In this section, building on the basic ideas of [61], [62], we propose an alternative approach to smooth discrete vector fields. More specifically, as in [61], [62], the proposed approach is based on three main steps: i) map the vector field onto an edge signal; ii) filter the edge signal; and iii) map the edge signal back to a vector field. Steps i) and iii) are essentially the same as in [62]. The difference we introduce here is in step ii), i.e. in the filtering of the edge signal. Filtering edge signals has been considered before, see, e.g., [32], but in our case we consider a different formulation and we incorporate in the Hodge Laplacian the proper metrics, dictated by the initial mapping from the vector field to the corresponding edge signal.
To filter vector fields, we first need to embed the vector field and the simplicial complex into a real space. Let us denote by $\mathcal{X}$ a simplicial complex embedded in a real space $\mathbb{R}^n$ of dimension $n$. We assume that $\mathcal{X}$ is flat, i.e. all simplices lie in the same affine $n$-subspace, and well-centered, which means that the circumcenter of each simplex lies in its interior [59]. A discrete vector field $\vec{\boldsymbol{v}}$ can be considered as a map from points $\boldsymbol{x}_i$ of $\mathcal{X}$ to $\mathbb{R}^n$. An illustrative example is shown in Fig. 3 (a), where the discrete vector field is represented by the set of arrows associated with the points of a regular grid in $\mathbb{R}^2$. In many applications, it is of interest to develop tools to extract relevant information from a vector field or to reduce the effect of noise. Our proposed approach is based on the following three main steps:
1) Map a vector field onto a simplicial complex signal
Given the set of $N$ points $\boldsymbol{x}_i$, we build the simplicial complex $\mathcal{X}$ using a well-centered Delaunay triangulation with vertices ($0$-simplices) at the points $\boldsymbol{x}_i$ [59]. Then, starting from the set of vectors $\vec{\boldsymbol{v}}(\boldsymbol{x}_i)$, we recover a continuous vector field through interpolation based on the Whitney basis functions $\phi_i(\boldsymbol{x})$ [63]

𝒗(𝒙)=i=1N𝒗(𝒙i)ϕi(𝒙)\vec{\mbox{\boldmath$v$}}(\mbox{\boldmath$x$})=\sum_{i=1}^{N}\vec{\mbox{\boldmath$v$}}\,(\mbox{\boldmath$x$}_{i})\phi_{i}(\mbox{\boldmath$x$}) (54)

where the $\phi_i(\boldsymbol{x})$ are piecewise affine functions. More specifically, $\phi_i$ is a hat function on vertex $v_i$, which takes the value one at vertex $v_i$, i.e. $\phi_i(\boldsymbol{x}_i)=1$, is zero at all other vertices, and is affine over each $2$-simplex having $v_i$ as a vertex. Then, we project the vector field onto the set of $1$-simplices (edges), giving rise to a set of scalar signals defined over the edges of the complex

xj1=(𝒗(σj00)+𝒗(σj10)2)σj1,j=1,,E\begin{split}x_{j}^{1}=\left(\frac{\vec{\mbox{\boldmath$v$}}(\sigma^{0}_{j_{0}})+\vec{\mbox{\boldmath$v$}}(\sigma^{0}_{j_{1}})}{2}\right)\cdot\vec{\sigma}^{1}_{j},\quad j=1,\ldots,E\end{split} (55)

where $j_0$ and $j_1$ are the end-points of edge $j$ and $\vec{\sigma}^1_j$ stands for the vector corresponding to $\sigma^1_j=[\sigma^0_{j_0},\sigma^0_{j_1}]$, having the same direction as the orientation of $\sigma^1_j$. The vector $\boldsymbol{x}^1\in\mathbb{R}^E$ of edge signals associated with the vector field $\vec{\boldsymbol{v}}$ is then built as $\boldsymbol{x}^1=[x^1_1,\ldots,x^1_E]^T$.
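As a concrete illustration of the projection in (55), the following sketch computes the edge signal from a sampled planar vector field; the triangle coordinates and field values are hypothetical toy data:

```python
import numpy as np

# Hypothetical toy data: one triangle in R^2 and the field sampled at its vertices
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9]])   # vertex coordinates x_i
edges = [(0, 1), (0, 2), (1, 2)]                        # oriented 1-simplices
v = np.array([[1.0, 0.0], [1.0, 0.5], [0.0, 1.0]])     # field v(x_i) at the vertices

# Eq. (55): average of the field at the two end-points, dotted with the
# edge vector oriented as the 1-simplex
x1 = np.array([(v[i] + v[j]) / 2 @ (pts[j] - pts[i]) for i, j in edges])
print(x1)   # edge signal x^1
```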
2) Filter the signal defined on the simplicial complex
The filtering strategy aims to recover an edge flow vector $\boldsymbol{s}^1$ that fits the observed vector $\boldsymbol{x}^1$ while being smooth and sparse. We recover the vector $\boldsymbol{s}^1$ as the solution of

$$\underset{\boldsymbol{s}^1\in\mathbb{R}^E}{\min}\;\;\|\boldsymbol{s}^1-\boldsymbol{x}^1\|^2+\lambda\,\boldsymbol{s}^{1\,T}\mathbf{L}_1\boldsymbol{s}^1+\gamma\,\|\boldsymbol{s}^1\|_1\qquad(\mathcal{P}_{\mathcal{F}})$$

where $\lambda,\gamma$ are positive penalty coefficients controlling, respectively, the signal smoothness and its sparsity. Since we chose the Whitney forms as interpolation basis, for any two signals of order $p$, $\boldsymbol{x}^p,\boldsymbol{y}^p\in\mathbb{R}^{N_p}$, the induced inner product is given by $\boldsymbol{x}^{pT}\mathbf{M}_p\boldsymbol{y}^p$, where the $N_p\times N_p$ matrix $\mathbf{M}_p$, incorporating the Whitney metric, is derived as in [60, Prop. 9.7]. Therefore, the Hodge Laplacian $\mathbf{L}_1$, weighted with the appropriate metric matrices $\mathbf{M}_p$, $p=0,1,2$, is a positive semidefinite matrix that can be written as

𝐋1=𝐁2𝐌2𝐁2T+𝐌1𝐁1T𝐌01𝐁1𝐌1\mbox{$\mathbf{L}$}_{1}=\mbox{$\mathbf{B}$}_{2}\mbox{$\mathbf{M}$}_{2}\mbox{$\mathbf{B}$}_{2}^{T}+\mbox{$\mathbf{M}$}_{1}\mbox{$\mathbf{B}$}_{1}^{T}\mbox{$\mathbf{M}$}_{0}^{-1}\mbox{$\mathbf{B}$}_{1}\mbox{$\mathbf{M}$}_{1} (56)

while the fitting error norm becomes

$$\|\boldsymbol{s}^1-\boldsymbol{x}^1\|^2=(\boldsymbol{s}^1-\boldsymbol{x}^1)^T\mathbf{M}_1(\boldsymbol{s}^1-\boldsymbol{x}^1).\qquad(57)$$
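For $\gamma=0$, problem $(\mathcal{P}_{\mathcal{F}})$ with the weighted norm above is quadratic, and setting the gradient to zero gives $(\mathbf{M}_1+\lambda\mathbf{L}_1)\boldsymbol{s}^1=\mathbf{M}_1\boldsymbol{x}^1$. A minimal sketch on a single filled triangle, assuming identity metric matrices purely to keep the example short (the flow values and $\lambda$ are also hypothetical):

```python
import numpy as np

# One filled triangle on vertices {0,1,2}; edges (0,1), (0,2), (1,2)
B1 = np.array([[-1., -1.,  0.],     # node-to-edge incidence
               [ 1.,  0., -1.],
               [ 0.,  1.,  1.]])
B2 = np.array([[1.], [-1.], [1.]])  # edge-to-triangle incidence
assert np.allclose(B1 @ B2, 0)      # boundary of a boundary is zero
L1 = B1.T @ B1 + B2 @ B2.T          # Hodge Laplacian with identity metrics

# Quadratic filtering (gamma = 0, M1 = I):
# minimizing ||s - x||^2 + lam * s^T L1 s  gives  (I + lam*L1) s = x
x1 = np.array([0.8, 0.7, -0.1])     # hypothetical noisy edge flow
lam = 2.0
s1 = np.linalg.solve(np.eye(3) + lam * L1, x1)
print(s1)
```

The $\ell_1$ term with $\gamma>0$ removes the closed form; it can be handled, e.g., by proximal-gradient iterations that alternate the quadratic step above with soft-thresholding.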

3) Map the filtered signal back onto a discrete vector field
Finally, given the discrete signal of order 11, seij1s_{e_{ij}}^{1}, eij\forall e_{ij}\in\mathcal{E}, the generated piecewise linear vector field becomes [61]

𝒗(𝒙)=eijseij1[ϕi(𝒙)ϕj(𝒙)ϕj(𝒙)ϕi(𝒙)].\vec{\mbox{\boldmath$v$}}(\mbox{\boldmath$x$})=\sum_{e_{ij}}s_{e_{ij}}^{1}[\phi_{i}(\mbox{\boldmath$x$})\nabla\phi_{j}(\mbox{\boldmath$x$})-\phi_{j}(\mbox{\boldmath$x$})\nabla\phi_{i}(\mbox{\boldmath$x$})]. (58)
Figure 3: RNA velocity fields: (a) observed and (b) noisy fields; (c) reconstructed field.

We now show an interesting application of the above procedure to the analysis of the RNA velocity field, defined as the time derivative of the gene expression state [41], which is useful to predict the future state of individual cells and hence the direction of change of the entire transcriptome during the RNA sequencing dynamic process. The RNA velocity can be estimated by distinguishing between nascent (unspliced) and mature (spliced) messenger RNA (mRNA) in common single-cell RNA sequencing protocols [41]. We consider the mRNA-seq dataset of mouse chromaffin cell differentiation analyzed in [41]. An example of RNA velocity vector field is illustrated in Fig. 3(a). To analyze such a vector field using the proposed algorithms, we implemented steps 1) and 3) using the discrete exterior calculus operators provided by the PyDEC software developed in [60]. We considered a well-centered Delaunay triangulation of the continuous two-dimensional space, which generates the simplicial complex in Fig. 3, composed of $N=400$ nodes, in which all triangles are filled. The velocity field in Fig. 3(a) is observed over $156$ vertices, and the field vector at each vertex has been obtained with a local Gaussian kernel smoothing [41]. The underlying colored cells represent different states of the cell differentiation process. Then, to test our filtering strategy, we added strong Gaussian noise to the observed velocity field, with $\mbox{SNR}=0\,\mbox{dB}$, as illustrated in Fig. 3(b). This noise is added to account for mRNA molecule degradation and model inaccuracies. We then apply the proposed filtering strategy by first building the edge signal as in (55) and then recovering the edge signal as the solution of the optimization problem $\mathcal{P}_{\mathcal{F}}$. Finally, we reconstruct the vector field using the interpolation formula (58), evaluated at the barycenter of each triangle. The result is reported in Fig. 3(c), where we can appreciate the robustness of the proposed filtering strategy.

VII Inference of simplicial complex topology from data

The inference of the graph topology from (node) signals is a problem that has received significant attention, as shown in the recent tutorial papers [64], [39], [40] and in the references therein. In this section, we propose algorithms to infer the structure of a simplicial complex. Given the layered structure of a simplicial complex, we propose a hierarchical approach that infers the structure of one layer, assuming knowledge of the lower-order layers. For simplicity, we focus on the inference of a complex of order $2$ from the observation of a set of $M$ edge (flow) signals $\mathbf{X}^1:=[\boldsymbol{x}^1(1),\ldots,\boldsymbol{x}^1(M)]$, assuming that the topology of the underlying graph is given (or has been estimated). So, we start from the knowledge of $\mathbf{L}_0$, which implies, after selection of an orientation, knowledge of $\mathbf{B}_1$. Since $\mathbf{L}_1=\mathbf{B}_1^T\mathbf{B}_1+\mathbf{B}_2\mathbf{B}_2^T$, we only need to estimate $\mathbf{B}_2$. Before doing that, we check, from the data, whether the term $\mathbf{B}_2\mathbf{B}_2^T$ is really needed. Since, from (24), the only components that may depend on $\mathbf{B}_2$ are the solenoidal and harmonic ones, we first project the observed flow signal onto the space orthogonal to the irrotational subspace, by computing

𝒙sH1(m)=(𝐈𝐔irr𝐔irrT)𝒙1(m),m=1,,M,\mbox{\boldmath$x$}^{1}_{\text{sH}}(m)=\left(\mathbf{I}-\mathbf{U}_{\text{irr}}\mathbf{U}_{\text{irr}}^{T}\right)\mbox{\boldmath$x$}^{1}(m),m=1,\ldots,M, (59)

where $\mathbf{U}_{\text{irr}}$ is the matrix whose columns are the eigenvectors associated with the nonzero eigenvalues of $\mathbf{B}_1^T\mathbf{B}_1$. Then, denoting by $\mathbf{X}^1_{\text{sH}}=[\boldsymbol{x}^1_{\text{sH}}(1),\ldots,\boldsymbol{x}^1_{\text{sH}}(M)]$ the signal matrix of size $E\times M$, we measure the energy of $\mathbf{X}^1_{\text{sH}}$ by taking its Frobenius norm $\|\mathbf{X}^1_{\text{sH}}\|_F$: if this norm is smaller than a fraction $\eta$ of the average energy of the observed data set, we stop; otherwise, we proceed to estimate $\mathbf{B}_2$.

The first step in the estimation of $\mathbf{B}_2$ is the detection of all cliques of three elements present in the graph. Their number is $T={\rm trace}(\mathbf{A}^3)/6$, where $\mathbf{A}={\rm diag}(\mathbf{L}_0)-\mathbf{L}_0$ denotes the graph adjacency matrix. For each clique, we choose, arbitrarily, an orientation for the potential triangle filling it. The matrix $\mathbf{B}_2$ can then be written as

$$\mathbf{B}_2=\left[t_1\mathbf{b}_1,\ldots,t_T\mathbf{b}_T\right]\qquad(60)$$

where $\mathbf{b}_n$ is the vector of size $E$ associated with the $n$-th clique, whose entries are all zero except the three entries associated with the three edges of the $n$-th clique; those entries take the value $1$ or $-1$, depending on the orientation of the triangle associated with the $n$-th clique. The coefficients $t_n$ in (60) are equal to one if there is a (filled) triangle on the $n$-th clique, and zero otherwise. The goal of our inference algorithm is then to decide, starting from the data, which entries of $\boldsymbol{t}:=[t_1,\ldots,t_T]$ are equal to one or zero. Our strategy is to make the association that enforces a small total variation of the observed flow signal on the inferred complex, using (32) as a measure of total variation of flow signals. We propose two alternative algorithms: the first method infers the structure of $\mathbf{B}_2$ by minimizing the total variation of the observed data; the second method first performs a Principal Component Analysis (PCA) and then looks for the matrix $\mathbf{B}_2$ and the coefficients of the expansion over the principal components that minimize the total variation plus a penalty on the model fitting error.
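The clique-detection step and the construction of the candidate vectors $\mathbf{b}_n$ can be sketched as follows, on a hypothetical toy graph; the orientation of each candidate triangle $[i,j,k]$, $i<j<k$, is fixed so that its boundary is edge $[j,k]$ minus edge $[i,k]$ plus edge $[i,j]$:

```python
import numpy as np
from itertools import combinations

# Hypothetical toy graph on 5 nodes
N = 5
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]
eidx = {e: k for k, e in enumerate(edges)}
A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Detect all 3-cliques and check the count T = trace(A^3)/6
cliques = [c for c in combinations(range(N), 3)
           if all(A[a, b] for a, b in combinations(c, 2))]
T = round(np.trace(np.linalg.matrix_power(A, 3)) / 6)
assert len(cliques) == T

# Candidate columns b_n: zero everywhere except +/-1 on the clique's three edges
b = np.zeros((len(edges), T))
for n, (i, j, k) in enumerate(cliques):
    b[eidx[(i, j)], n] = 1.0
    b[eidx[(j, k)], n] = 1.0
    b[eidx[(i, k)], n] = -1.0
print(T)
```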

Minimum Total Variation (MTV) Algorithm: The goal of this algorithm is to minimize the total variation over the observed data set, assuming knowledge of the number of triangles. The set of coefficients $\boldsymbol{t}$ is found as the solution of

min𝐭{0,1}Tq(𝐭)n=1Ttntrace(𝐗sH1T𝐛n𝐛nT𝐗sH1)(𝒫MTV)s.t.𝐭0=t,tn{0,1},n,\begin{array}[]{lll}\underset{\mathbf{t}\in\{0,1\}^{T}}{\min}&q(\mathbf{t})\triangleq\displaystyle\sum_{n=1}^{T}t_{n}\text{trace}\left(\mbox{$\mathbf{X}$}_{\text{sH}}^{1\;T}\mathbf{b}_{n}\mathbf{b}_{n}^{T}\mbox{$\mathbf{X}$}_{\text{sH}}^{1}\right)\quad(\mathcal{P}_{\text{MTV}})\vskip 6.0pt plus 2.0pt minus 2.0pt\\ \quad\,\!\text{s.t.}&\parallel\mathbf{t}\parallel_{0}=t^{*},\quad t_{n}\in\{0,1\},\forall n,\end{array} (61)

where $t^*$ is the number of triangles that we aim to detect. In practice, this number is not known, so it has to be found through cross-validation. Even though problem $\mathcal{P}_{\text{MTV}}$ is non-convex, it can be solved in closed form. Introducing the nonnegative coefficients $c_n=\sum_{i=1}^M\boldsymbol{x}^{1\,T}_{\text{sH}}(i)\,\mathbf{b}_n\mathbf{b}_n^T\boldsymbol{x}^1_{\text{sH}}(i)$, the solution can in fact be obtained by sorting the coefficients $c_n$ in increasing order and then selecting the triangles associated with the indices of the $t^*$ lowest coefficients $c_n$. Note that the proposed strategy infers the presence of triangles along the cliques having the minimum curl along their edges. Hence, we expect better performance when the edge signal contains only the harmonic component, whose curl along the filled triangles is exactly null.
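The closed-form solution of $(\mathcal{P}_{\text{MTV}})$ thus amounts to a sort-and-select rule, sketched below for generic inputs (the demo matrices are hypothetical random data):

```python
import numpy as np

def mtv(B, X_sH, t_star):
    """Closed-form MTV: B holds the candidate clique vectors b_n as columns,
    X_sH the projected edge signals; keep the t* smallest-curl cliques."""
    c = np.sum((B.T @ X_sH) ** 2, axis=1)   # c_n = trace(X^T b_n b_n^T X)
    t = np.zeros(B.shape[1], dtype=int)
    t[np.argsort(c)[:t_star]] = 1           # triangles = t* lowest coefficients
    return t, c

# Tiny random demo: 2 candidate cliques over 5 edges, 3 observations
rng = np.random.default_rng(2)
B = rng.choice([-1.0, 0.0, 1.0], size=(5, 2))
X = rng.standard_normal((5, 3))
t, c = mtv(B, X, t_star=1)
print(t, c)
```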

PCA-based Best Fitting with Minimum Total Variation (PCA-BFMTV): To robustify the MTV algorithm in the case where the edge signal also contains a solenoidal component and is possibly corrupted by noise, we now propose the PCA-BFMTV algorithm, which infers the structure of $\mathbf{B}_2$ and the edge signal that best fits the observed data set $\mathbf{X}^1$, while at the same time exhibiting a small total variation over the inferred topology. The method starts by performing a principal component analysis of the observed data, extracting the eigenvectors associated with the largest eigenvalues of the covariance matrix estimated from the observed data set. More specifically, the proposed strategy is composed of two steps: 1) estimate the covariance matrix $\widehat{\mathbf{C}}_X$ from the edge signal data set $\mathbf{X}^1_{\text{sH}}$ and build the matrix $\widehat{\mathbf{U}}_{\text{sH}}$ whose columns are the eigenvectors associated with the $F$ largest eigenvalues of $\widehat{\mathbf{C}}_X$; 2) model the observed data set as $\mathbf{X}^1_{\text{sH}}=\widehat{\mathbf{U}}_{\text{sH}}\widehat{\mathbf{S}}^1_{\text{sH}}$ and search for the coefficient matrix $\widehat{\mathbf{S}}^1_{\text{sH}}$ and the vector $\boldsymbol{t}$ that solve the following problem

min𝐭{0,1}T,𝐒^sH1F×Mg(𝐭,𝐒^sH1)+γ𝐗sH1𝐔^sH𝐒^sH1F2s.t.𝐭0=t,tn{0,1},n,(𝒫𝒯𝒮)\begin{array}[]{lll}\!\!\underset{\mathbf{t}\in\{0,1\}^{T},\widehat{\mbox{$\mathbf{S}$}}^{1}_{\text{sH}}\in\mathbb{R}^{F\times M}}{\min}&g(\mathbf{t},\widehat{\mbox{$\mathbf{S}$}}^{1}_{\text{sH}})+\!\gamma\!\parallel\!\mbox{$\mathbf{X}$}^{1}_{\text{sH}}\!\!-\!\widehat{\mathbf{U}}_{\text{sH}}\widehat{\mbox{$\mathbf{S}$}}^{1}_{\text{sH}}\parallel^{2}_{F}\vskip 6.0pt plus 2.0pt minus 2.0pt\\ \hskip 28.45274pt\text{s.t.}&\parallel\mathbf{t}\parallel_{0}=t^{*},\quad t_{n}\in\{0,1\},\forall n,\vskip 6.0pt plus 2.0pt minus 2.0pt\quad(\mathcal{P}_{\mathcal{TS}})\end{array} (62)

where $g(\mathbf{t},\widehat{\mathbf{S}}^1_{\text{sH}})\triangleq\sum_{n=1}^T t_n\,\text{trace}(\widehat{\mathbf{S}}^{1\,T}_{\text{sH}}\widehat{\mathbf{U}}^T_{\text{sH}}\mathbf{b}_n\mathbf{b}_n^T\widehat{\mathbf{U}}_{\text{sH}}\widehat{\mathbf{S}}^1_{\text{sH}})$ and $\gamma$ is a non-negative coefficient controlling the trade-off between the data fitting error and the signal smoothness. Although problem $\mathcal{P}_{\mathcal{TS}}$ is non-convex, it can be solved using an iterative alternating optimization algorithm that updates $\widehat{\mathbf{S}}^1_{\text{sH}}$ for fixed $\mathbf{t}$ and, alternately, $\mathbf{t}$ for fixed $\widehat{\mathbf{S}}^1_{\text{sH}}$. Interestingly, each step of the alternating optimization admits a closed-form solution. More specifically, at each iteration $k$, the coefficient matrix $\widehat{\mathbf{S}}^1_{\text{sH}}[k]$ can be found as

𝐒^sH1[k]=argmin𝐒^sH1F×Mg(𝐭[k],𝐒^sH1)+γ𝐗sH1𝐔^sH𝐒^sH1F2(𝒫𝒮k).\hskip-4.83691pt\begin{array}[]{lll}\widehat{\mbox{$\mathbf{S}$}}^{1}_{\text{sH}}[k]=&\!\!\!\!\!\!\underset{\widehat{\mbox{$\mathbf{S}$}}^{1}_{\text{sH}}\in\mathbb{R}^{F\times M}}{\arg\min}&\hskip-8.5359ptg(\mathbf{t}[k],\widehat{\mbox{$\mathbf{S}$}}^{1}_{\text{sH}})+\!\gamma\!\parallel\!\mbox{$\mathbf{X}$}^{1}_{\text{sH}}\!\!-\!\widehat{\mathbf{U}}_{\text{sH}}\widehat{\mbox{$\mathbf{S}$}}^{1}_{\text{sH}}\parallel^{2}_{F}\quad(\mathcal{P}_{\mathcal{S}}^{k}).\end{array}

Defining 𝐋upp[k]:=n=1Ttn[k]𝐛n𝐛nT\mbox{$\mathbf{L}$}_{\text{upp}}[k]:=\sum_{n=1}^{T}t_{n}[k]\mathbf{b}_{n}\mathbf{b}_{n}^{T}, problem 𝒫𝒮k\mathcal{P}_{\mathcal{S}}^{k} admits the closed form solution

𝐒^sH1[k]=(𝐈F+γ𝐔^sHT𝐋upp[k]𝐔^sH)1𝐔^sHT𝐗sH1.\widehat{\mbox{$\mathbf{S}$}}^{1}_{\text{sH}}[k]=(\mbox{$\mathbf{I}$}_{F}+\gamma\,\widehat{\mathbf{U}}_{\text{sH}}^{T}\mbox{$\mathbf{L}$}_{\text{upp}}[k]\widehat{\mathbf{U}}_{\text{sH}})^{-1}\widehat{\mathbf{U}}_{\text{sH}}^{T}\mbox{$\mathbf{X}$}^{1}_{\text{sH}}. (63)

Then, given $\widehat{\mathbf{S}}^1_{\text{sH}}[k]$, we can find the vector $\mathbf{t}[k+1]$ using the same method used to solve problem $\mathcal{P}_{\text{MTV}}$ in (61), i.e. setting $c_n[k]:=\text{trace}(\widehat{\mathbf{S}}^1_{\text{sH}}[k]^T\widehat{\mathbf{U}}^T_{\text{sH}}\mathbf{b}_n\mathbf{b}_n^T\widehat{\mathbf{U}}_{\text{sH}}\widehat{\mathbf{S}}^1_{\text{sH}}[k])$ and setting the entries of $\mathbf{t}[k+1]$ to $1$ for the indices corresponding to the $t^*$ smallest coefficients among $\{c_n[k]\}_{n=1}^T$, and to $0$ otherwise. The iterative steps of the proposed strategy are reported in the box entitled Algorithm PCA-BFMTV. We now test the validity of our inference algorithms over both simulated and real data.

Set γ>0\gamma>0, 𝐭[0]{0,1}T\mathbf{t}[0]\in\{0,1\}^{T}, 𝐭[0]0=t\parallel\mathbf{t}[0]\parallel_{0}=t^{*},

𝐋upp[0]=n=1Ttn[0]𝐛n𝐛nT\mbox{$\mathbf{L}$}_{\text{upp}}[0]=\displaystyle\sum_{n=1}^{T}t_{n}[0]\mathbf{b}_{n}\mathbf{b}_{n}^{T}, k=1k=1

Repeat

Set 𝐒^sH1[k]=(𝐈F+γ𝐔^sHT𝐋upp[k1]𝐔^sH)1𝐔^sHT𝐗sH1\widehat{\mbox{$\mathbf{S}$}}^{1}_{\text{sH}}[k]=(\mbox{$\mathbf{I}$}_{F}+\gamma\,\widehat{\mathbf{U}}_{\text{sH}}^{T}\mbox{$\mathbf{L}$}_{\text{upp}}[k-1]\widehat{\mathbf{U}}_{\text{sH}})^{-1}\widehat{\mathbf{U}}_{\text{sH}}^{T}\mbox{$\mathbf{X}$}_{\text{sH}}^{1};

Compute $\mathbf{t}[k]$ by sorting the coefficients $c_n[k]=\text{trace}(\widehat{\mathbf{S}}^1_{\text{sH}}[k]^T\widehat{\mathbf{U}}^T_{\text{sH}}\mathbf{b}_n\mathbf{b}_n^T\widehat{\mathbf{U}}_{\text{sH}}\widehat{\mathbf{S}}^1_{\text{sH}}[k])$ and setting to $1$ the entries of $\boldsymbol{t}[k]$ corresponding to the $t^*$ smallest coefficients, and to $0$ otherwise;

Set k=k+1k=k+1,

until convergence.

Algorithm PCA-BFMTV
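The iterations in the box above can be sketched compactly as follows, on hypothetical randomly generated data; $\mathbf{B}$ collects the candidate clique vectors $\mathbf{b}_n$ as columns, and all dimensions and coefficients are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
E, T, F, M, t_star, gamma = 6, 3, 2, 20, 2, 1.0
B = rng.choice([-1.0, 0.0, 1.0], size=(E, T))   # candidate b_n as columns
X = rng.standard_normal((E, M))                 # stands in for X^1_sH

# PCA step: eigenvectors of the sample covariance for the F largest eigenvalues
w, V = np.linalg.eigh(X @ X.T / M)
U = V[:, np.argsort(w)[::-1][:F]]               # E x F basis U_sH

t = np.zeros(T, dtype=int)
t[:t_star] = 1                                  # initialization t[0]
for _ in range(20):
    L_upp = (B * t) @ B.T                       # sum_n t_n b_n b_n^T
    S = np.linalg.solve(np.eye(F) + gamma * U.T @ L_upp @ U, U.T @ X)  # Eq. (63)
    c = np.sum((B.T @ (U @ S)) ** 2, axis=1)    # curl energy c_n per clique
    t_new = np.zeros(T, dtype=int)
    t_new[np.argsort(c)[:t_star]] = 1           # t* smallest coefficients -> 1
    if np.array_equal(t_new, t):
        break                                   # t-update has converged
    t = t_new
print(t)
```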

Performance on synthetic data: Among the most critical parameters affecting the performance of the proposed algorithms are the dimensions of the subspaces associated with the solenoidal and harmonic components of the signal and the number of filled triangles in the complex. In fact, in both MTV and PCA-BFMTV a key aspect is the detection of triangles as the cliques where the associated curl is minimum. Hence, if the signal contains only the harmonic component and there is no noise, the triangles can be identified with no error, because the harmonic component is null over the filled triangles. However, when there is a solenoidal component or noise, there might be decision errors.

Figure 4: Error probability by observing: (a) noisy harmonic signals; (b) noisy harmonic plus solenoidal signals.

To test the inference capabilities of the proposed methods, in Fig. 4(a) we report the triangle error probability $P_e$, defined as the percentage of incorrectly estimated triangles with respect to the number of $3$-cliques in the complex, versus the signal-to-noise ratio (SNR), when the observation contains only harmonic flows plus noise. We considered a complex composed of $N=50$ nodes, in which the percentage of filled triangles with respect to the number of $3$-cliques of the graph is equal to $50\%$. We also set $M=50$, $t^*=105$, and averaged our numerical results over $10^3$ zero-mean signal and noise random realizations. The harmonic signal bandwidth $|\mathcal{F}_H|$ is chosen equal to $105$, which is the dimension of the kernel of $\mathbf{L}_1$. From Fig. 4(a) we can notice, as expected, that in the noiseless case the error probability is zero, since observing only harmonic flows enables perfect recovery of the matrix $\mathbf{B}_2$. In the presence of noise, the MTV algorithm suffers, and in fact we observe a nonnegligible error probability at low SNR. However, applying the PCA-BFMTV algorithm enables a significant recovery of performance, as evidenced by the blue curve, which is entirely superimposed on the red curve, at least for the SNR values shown in the figure. In this example, the covariance matrix was estimated over $10^5$ independent observations of the edge signals. The optimal $\gamma$ coefficient was chosen by cross-validation, using a line search aimed at minimizing the error probability. The improvement of the PCA-BFMTV method with respect to the MTV method is due to the denoising made possible by the projection of the observed signal onto the space spanned by the eigenvectors associated with the largest eigenvalues of the estimated covariance matrix.

To test the proposed methods in the case where the observed signal contains both solenoidal and harmonic components, in Fig. 4(b) we report $P_e$ versus the SNR, for different values of the dimension of the subspace associated with the solenoidal part, indicated as $|\mathcal{F}_{\text{sol}}|$. From Fig. 4(b), we can observe that the performance of both the MTV and PCA-BFMTV algorithms suffers when the bandwidth $|\mathcal{F}_{\text{sol}}|$ of the solenoidal component is large, whereas the performance degradation becomes negligible when $|\mathcal{F}_{\text{sol}}|$ is small. In all cases, PCA-BFMTV significantly outperforms the MTV algorithm, especially at low SNR values, because of its superior noise attenuation capabilities. As a further numerical test, we ran algorithm $\mathcal{P}^k_{\mathcal{S}}$ replacing the quadratic regularization term with the triple-wise coupling regularization function in (30), obtaining the same performance as the PCA-BFMTV algorithm.
Performance on real data: The real data set used to test our algorithms is the set of mobile phone calls collected in the city of Milan, Italy, by Telecom Italia, in the context of the Telecom Big Data Challenge [65]. The data are associated with a regular two-dimensional grid of $100\times 100$ points superimposed on the city. Every point in the grid represents a square of side $235$ meters. In particular, the data set collects the number $N_{ij}$ of calls from area $i$ to area $j$, as a function of time. There is an edge between nodes $i$ and $j$ only if there is non-null traffic between those areas. The traffic has been aggregated in time, over time intervals of one hour. We define the flow signal over edge $(i,j)$ as $\Phi_{ij}=N_{ij}-N_{ji}$ and map all the values of the matrix $\mathbf{\Phi}$ into a vector of flow signals $\boldsymbol{x}^{1}$. We observed the daily call traffic during the month of December $2013$. Our first objective is to establish whether there is an advantage in associating with the observed data set $\mathbf{X}^{1}$ a complex of order $2$, i.e. a set of triangles, or whether a purely graph-based approach is sufficient. In both cases, we rely on the same graph structure, whose incidence matrix $\mathbf{B}_{1}$ comes from the data set, after an arbitrary choice of the edges' orientation. If we use a graph-based approach, we can build a basis of the observed flow signals using the eigenvectors of the so-called edge Laplacian [35], i.e. $\mathbf{L}_{1}^{\text{low}}=\mathbf{B}_{1}^{T}\mathbf{B}_{1}$. We call this basis $\mathbf{U}_{1}^{\text{low}}$.
As an alternative, our proposed approach is to build a basis using the eigenvectors of $\mathbf{L}_{1}=\mathbf{B}_{1}^{T}\mathbf{B}_{1}+\mathbf{B}_{2}\mathbf{B}_{2}^{T}$, where $\mathbf{B}_{2}$ is estimated from the data set $\mathbf{X}^{1}$ using our MTV algorithm. We call this basis $\mathbf{U}_{1}$. To test the relative benefits of using $\mathbf{U}_{1}$ as opposed to $\mathbf{U}_{1}^{\text{low}}$, we run a basis pursuit algorithm with the goal of finding a good trade-off between the sparsity of the representation and the fitting error. More specifically, for any given observed vector $\boldsymbol{x}^{1}(m)$, we look for the sparse vector $\boldsymbol{s}^{1}$ as the solution of the following basis pursuit problem [66]:

\min_{\boldsymbol{s}^{1}\in\mathbb{R}^{E}} \|\boldsymbol{s}^{1}\|_{1} \quad (\mathcal{B}) \qquad \text{s.t.} \quad \|\boldsymbol{x}^{1}-\mathbf{V}\boldsymbol{s}^{1}\|_{F}\leq\epsilon \qquad (64)

where $\mathbf{V}=\mathbf{U}_{1}$ in our case, while $\mathbf{V}=\mathbf{U}_{1}^{\text{low}}$ in the graph-based approach. As a numerical result, in Fig. 5 we report the sparsity of the recovered edge signals versus the mean estimation error $\|\boldsymbol{x}^{1}-\mathbf{V}\boldsymbol{s}^{1}\|_{F}$, considering as signal dictionary $\mathbf{V}$ the eigenvectors of either the first-order Laplacian or the lower Laplacian. We used the MTV algorithm to infer the upper Laplacian matrix, setting the number $t^{*}$ of triangles that we may detect equal to $800$. This value was derived numerically through cross-validation over a training data set, choosing the value of $t^{*}$ that yields the minimum norm $\|\boldsymbol{s}^{1}\|_{1}$. As can be observed from Fig. 5, using the set of eigenvectors of $\mathbf{L}_{1}$ yields a much smaller MSE for a given sparsity or, conversely, a much sparser representation for a given MSE. An intuitive reason why our method performs so much better than a purely graph-based approach is that the matrix $\mathbf{L}_{1}$ has a much smaller kernel than $\mathbf{L}_{1}^{\text{low}}$, and the basis built on $\mathbf{L}_{1}$ better captures the inner structure present in the data, since the additional term $\mathbf{B}_{2}$ is inferred from the data itself.
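The sparsity/fitting trade-off behind problem $(\mathcal{B})$ can be illustrated with a minimal numerical sketch. Since the dictionaries used here are orthonormal eigenvector bases, the Lagrangian (basis pursuit denoising) form $\min_{\boldsymbol{s}}\tfrac{1}{2}\|\boldsymbol{x}-\mathbf{V}\boldsymbol{s}\|^{2}+\lambda\|\boldsymbol{s}\|_{1}$ has the closed-form solution given by soft-thresholding the analysis coefficients $\mathbf{V}^{T}\boldsymbol{x}$. The sketch below is ours, not the paper's implementation: the dictionary is a random orthonormal stand-in for $\mathbf{U}_{1}$ or $\mathbf{U}_{1}^{\text{low}}$, and all function names are hypothetical.

```python
import numpy as np

def soft_threshold(c, lam):
    """Entry-wise soft-thresholding: the proximal operator of lam*||.||_1."""
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

def sparse_fit(x, V, lam):
    """Sparse coefficients of x in the orthonormal dictionary V
    (Lagrangian form of basis pursuit denoising; exact for orthonormal V)."""
    return soft_threshold(V.T @ x, lam)

rng = np.random.default_rng(0)
# hypothetical orthonormal dictionary (stand-in for U_1 or U_1^low)
V, _ = np.linalg.qr(rng.standard_normal((60, 60)))
s_true = np.zeros(60)
s_true[[3, 17, 42]] = [2.0, -1.5, 1.0]             # sparse ground truth
x = V @ s_true + 0.01 * rng.standard_normal(60)    # edge signal plus noise

s_hat = sparse_fit(x, V, lam=0.1)
# a small lam keeps the three active coefficients and zeroes the rest
```

The threshold $\lambda$ plays the same role as $\epsilon$ in $(\mathcal{B})$: sweeping it traces the sparsity-versus-error curve reported in Fig. 5.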

As a further test, we compared the two bases $\mathbf{U}_{1}$ and $\mathbf{U}_{1}^{\text{low}}$ in terms of their capability to recover the entire flow signal from a subset of samples. To this end, we exploit the band-limitedness enforced by the sparse representation, which enables the use of the theory developed in Section V.A. Starting from the representation of each input vector $\boldsymbol{x}^{1}$ as $\boldsymbol{x}^{1}=\mathbf{V}\boldsymbol{s}^{1}$, with either $\mathbf{V}=\mathbf{U}_{1}$ in our case or $\mathbf{V}=\mathbf{U}_{1}^{\text{low}}$ in the graph-based approach, we used the Max-Det greedy sampling strategy in [56] to select the subset of edges where to sample the flow signal, and then the recovery rule in (49) to retrieve the overall flow signal from the samples. The numerical results are reported in Fig. 6, showing the normalized recovery error of the edge signal versus the number $N_{s}$ of samples used to reconstruct the overall signal. We can notice that, by introducing the term $\mathbf{B}_{2}\mathbf{B}_{2}^{T}$, we achieve a much smaller error for the same number of samples.
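The sample-then-recover pipeline can be sketched as follows. This is a generic greedy max-volume row selection for a bandlimited model $\boldsymbol{x}=\mathbf{U}_{\mathcal{F}}\boldsymbol{s}$ followed by least-squares recovery, offered only as an illustration of the idea, not as the exact Max-Det algorithm of [56] or the recovery rule (49); all names and the toy basis are ours.

```python
import numpy as np

def greedy_maxvol(U, n_samples):
    """Greedily pick rows (edges) of the basis U that maximize the volume
    (product of singular values) of the selected sub-matrix."""
    selected = []
    for _ in range(n_samples):
        best_i, best_vol = None, -1.0
        for i in range(U.shape[0]):
            if i in selected:
                continue
            vol = np.prod(np.linalg.svd(U[selected + [i], :], compute_uv=False))
            if vol > best_vol:
                best_i, best_vol = i, vol
        selected.append(best_i)
    return selected

def recover(x_samples, U, selected):
    """Least-squares recovery of a U-bandlimited signal from its samples."""
    coeffs, *_ = np.linalg.lstsq(U[selected, :], x_samples, rcond=None)
    return U @ coeffs

rng = np.random.default_rng(1)
U_F, _ = np.linalg.qr(rng.standard_normal((30, 4)))  # hypothetical bandlimited basis
x = U_F @ rng.standard_normal(4)                     # bandlimited edge signal
sel = greedy_maxvol(U_F, 4)
x_hat = recover(x[sel], U_F, sel)                    # exact in the noiseless case
```

With noiseless bandlimited signals, sampling as many edges as the bandwidth suffices for perfect recovery; the volume-maximizing selection keeps the sampled sub-matrix well conditioned, which is what controls the error curves in Fig. 6 when noise is present.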

Figure 5: Sparsity versus squared error.
Figure 6: Reconstruction error versus number of samples.

VIII Conclusion

In this paper we have presented an algebraic framework to analyze signals residing on a simplicial complex. In particular, we focused on signals defined over the edges of a complex of order two, i.e. including triangles, and we showed that exploiting the full algebraic structure of this complex provides an advantage over purely graph-based tools. We have not analyzed higher-order signals. Nevertheless, looking at the structure of the higher-order Laplacian and at the Hodge decomposition, the tools derived in this paper can be directly translated to analyze signals defined over higher-order structures. What would be missing in the higher-order case is a visual interpretation of the properties of the higher-order Laplacian eigenvectors, as we could no longer speak of solenoidal or irrotational behaviors.

We proposed a method to infer the structure of a second-order simplicial complex from flow data and we showed that, in applications over real wireless traffic data, the proposed approach can significantly outperform methods based only on graph representations. Furthermore, we proposed a method to analyze discrete vector fields and showed an application to the recovery of the RNA velocity field to predict the evolution of living cells. In that case, using the eigenvectors of $\mathbf{L}_{1}$ we have been able to highlight irrotational and solenoidal behaviors that would have been difficult to capture using only the eigenvectors of $\mathbf{L}_{0}$. Further developments should include both theoretical aspects, especially the statistical modeling of random variables defined over a simplicial complex, and the generalization to higher-order structures.

Appendix A Proof of Theorem 1

We begin by briefly recalling the basic properties of the Lovász extension [49], [67] and then proceed to the proof of Theorem 1.

Definition 1

Given a set $\mathcal{A}$ of cardinality $A=|\mathcal{A}|$ and its power set $2^{\mathcal{A}}$, let us consider a set function $Q:2^{\mathcal{A}}\rightarrow\mathbb{R}$, with $Q(\emptyset)=0$. Let $\boldsymbol{x}\in\mathbb{R}^{A}$ be a vector of real variables, ordered w.l.o.g. in increasing order, such that $x_{1}\leq x_{2}\leq\ldots\leq x_{A}$. Define $C_{0}\triangleq\mathcal{A}$ and $C_{i}\triangleq\{j\in\mathcal{A}:x_{j}>x_{i}\}$ for $i>0$. Then, the Lovász extension $f:\mathbb{R}^{A}\rightarrow\mathbb{R}$ of $Q$, evaluated at $\boldsymbol{x}$, is given by [49]:

f(\boldsymbol{x}) = Q(\mathcal{A})\,x_{1} + \sum_{i=1}^{A-1} Q(C_{i})\,(x_{i+1}-x_{i}). \qquad (65)
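Definition 1 translates directly into code. The sketch below (with our own naming, not from the paper) evaluates the Lovász extension of an arbitrary set function supplied as a Python callable; for instance, the extension of the cardinality function is the sum of the entries of $\boldsymbol{x}$, and that of the indicator $Q(S)=1$ for nonempty $S$ is the maximum entry.

```python
def lovasz_extension(Q, x):
    """Evaluate the Lovász extension of the set function Q at the point x.
    Q maps frozensets of indices of x to reals, with Q(frozenset()) == 0."""
    idx = sorted(range(len(x)), key=lambda i: x[i])   # increasing order of x
    xs = [x[i] for i in idx]
    f = Q(frozenset(idx)) * xs[0]                     # Q(A) * x_(1)
    for i in range(len(x) - 1):
        C_i = frozenset(idx[i + 1:])                  # {j : x_j > x_(i)}
        f += Q(C_i) * (xs[i + 1] - xs[i])
    return f

# sanity checks on two classic set functions
card = lambda S: len(S)              # extension is sum(x)
indicator = lambda S: 1 if S else 0  # extension is max(x)
```

For example, `lovasz_extension(card, [0.2, 0.7, 0.1])` evaluates to `1.0`, the sum of the entries, while `lovasz_extension(indicator, [0.2, 0.7, 0.1])` evaluates to `0.7`, their maximum.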

An interesting property of set functions is submodularity, defined as follows:

Definition 2

A set function $Q:2^{\mathcal{A}}\rightarrow\mathbb{R}$ is submodular if and only if, $\forall\,\mathcal{B},\mathcal{C}\subseteq\mathcal{A}$, it satisfies the following inequality:

Q(\mathcal{B}) + Q(\mathcal{C}) \geq Q(\mathcal{B}\cup\mathcal{C}) + Q(\mathcal{B}\cap\mathcal{C}).
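On small ground sets, submodularity can be verified by brute force. As a toy example (ours, not the paper's), the classic graph cut function, counting the edges crossing a node subset, satisfies the inequality above for every pair of subsets:

```python
from itertools import combinations

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]  # a small toy graph
nodes = range(4)

def cut(S):
    """Number of edges with exactly one endpoint in S (a submodular function)."""
    return sum((a in S) != (b in S) for a, b in edges)

def all_subsets(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

# exhaustive check of Q(B) + Q(C) >= Q(B | C) + Q(B & C)
is_submodular = all(cut(B) + cut(C) >= cut(B | C) + cut(B & C)
                    for B in all_subsets(nodes) for C in all_subsets(nodes))
```

The same exhaustive test applied to the triangle-cut set function introduced below would reveal whether it is submodular, which is precisely what determines the convexity of its Lovász extension.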

A fundamental property is that a set function $Q$ is submodular if and only if its Lovász extension $f(\boldsymbol{x})$ is a convex function, and that minimizing $f(\boldsymbol{x})$ on $[0,1]^{A}$ is equivalent to minimizing the set function $Q$ on $2^{\mathcal{A}}$ [49, p. 172]. Now, we wish to apply the Lovász extension to a set function, defined over the edge set $\mathcal{E}$, counting the number of triangles gluing nodes belonging to a tri-partition of the node set $\mathcal{V}$. To this end, let us consider a tri-partition $\{\mathcal{A}_{0},\mathcal{A}_{1},\mathcal{A}_{2}\}$ of the node set $\mathcal{V}$. Extending the approach used for graphs, where the cut-size is introduced as a set function evaluated on the node set, to define a triangle-cut size we need to introduce a set function defined on the edge set. Given a simplex $\mathcal{X}$, assume the faces of $\mathcal{X}$ to be positively oriented according to the order of their vertices, i.e. for the $k$-simplex $\sigma^{k}_{i}$ we have $\sigma^{k}_{i}=[\sigma^{k-1}_{i_{0}},\sigma^{k-1}_{i_{1}},\ldots,\sigma^{k-1}_{i_{k}}]$ with $i_{0}<i_{1}<\ldots<i_{k}$. Given any non-empty tri-partition $\{\mathcal{A}_{0},\mathcal{A}_{1},\mathcal{A}_{2}\}$ of the node set $\mathcal{V}$, we define

\tau(\mathcal{A}_{0},\mathcal{A}_{1},\mathcal{A}_{2}) = \{\sigma^{2}_{i}\in\mathcal{T} : |\sigma^{2}_{i}\cap\mathcal{A}_{j}|=1,\ 0\leq j\leq 2\} \qquad (66)

as the set of triangles $\sigma^{2}_{i}\in\mathcal{T}$ with exactly one vertex in each set $\mathcal{A}_{j}$, for $j=0,1,2$. Given any $\sigma^{2}_{i}\in\tau(\mathcal{A}_{0},\mathcal{A}_{1},\mathcal{A}_{2})$, there exists a permutation $\pi_{i}$ of its vertices $\sigma^{0}_{i_{j}}$, $j=0,1,2$, such that $\sigma^{2}_{i}=[\sigma^{0}_{i_{0}},\sigma^{0}_{i_{1}},\sigma^{0}_{i_{2}}]$ and $\sigma^{0}_{i_{j}}\in\mathcal{A}_{j}$, for $j=0,1,2$. This yields a linear ordering (a binary relation $\prec$ on a set $X$ is a linear ordering on $X$ if it is transitive, antisymmetric and, for any two elements $x,y\in X$, either $x\prec y$ or $y\prec x$) of the vertices of the partition such that, for all $i<j$, if $\sigma^{0}_{k}\in\mathcal{A}_{i}$ and $\sigma^{0}_{m}\in\mathcal{A}_{j}$, then $\sigma^{0}_{k}$ precedes $\sigma^{0}_{m}$ in the vertex ordering. Given the tri-partition $\{\mathcal{A}_{0},\mathcal{A}_{1},\mathcal{A}_{2}\}$, we define the set $\mathcal{E}_{T}$ of oriented edges such that an oriented edge $[\sigma^{0}_{i},\sigma^{0}_{j}]\in\mathcal{E}_{T}$ if and only if $\sigma^{0}_{i}\in\mathcal{A}_{n}$ and $\sigma^{0}_{j}\in\mathcal{A}_{m}$ for some $n<m$. Thus, we can think of the triangle cut size $F_{1}$ in (29), equivalently, as a different function $G(\mathcal{E}_{T},\overline{\mathcal{E}}_{T})$ defined on sets of oriented edges, where $\overline{\mathcal{E}}_{T}$ denotes the complement of $\mathcal{E}_{T}$ in $\mathcal{E}$. Given an orientation on the simplicial complex, we can associate with the set of oriented edges $\mathcal{E}_{T}$ the vector $\mathbf{1}_{\mathcal{E}_{T}}\in\mathbb{R}^{E}$, whose entries are $0$ for edges in $\overline{\mathcal{E}}_{T}$ and $\pm 1$ for edges in $\mathcal{E}_{T}$, depending on the edge orientation.
It is straightforward to check that $\mathbf{B}_{2}^{T}\mathbf{1}_{\mathcal{E}_{T}}$, where $\mathbf{B}_{2}$ is the edge-triangle incidence matrix defined in (3), computes the triangle cut for $\mathcal{E}_{T}$, in the sense that each entry of this vector, corresponding to a triangle, is nonzero if and only if every vertex of that triangle lies in a different set of the partition. As an example, considering the simplicial complex in Fig. 1, we have $\mathcal{E}_{T}=\{e_{2},e_{3},e_{4},e_{7},e_{8}\}$ and the associated vector $\mathbf{1}_{\mathcal{E}_{T}}$ is $[0,1,1,1,0,0,1,1,0,0,0]^{T}$. Then, using (21), we get $\mathbf{B}_{2}^{T}\mathbf{1}_{\mathcal{E}_{T}}=[0,1,0]^{T}$ and the triangle cut size is equal to $1$.

Then, introducing the vector $\mathbf{t}_{\mathcal{E}_{T}}:=\mathbf{B}_{2}^{T}\mathbf{1}_{\mathcal{E}_{T}}$, we define the edge set function as

G(\mathcal{E}_{T},\overline{\mathcal{E}}_{T}) = \sum_{j=1}^{T} \left| \mathbf{t}_{\mathcal{E}_{T}}(j) \right|. \qquad (67)

It can be proved that $G$ is invariant under any permutation of the vertices of the triangles, since the effect of any vertex permutation is a change of sign in the entries of $\mathbf{t}_{\mathcal{E}_{T}}$. Therefore, we get

G(\mathcal{E}_{T},\overline{\mathcal{E}}_{T}) = |\tau(\mathcal{A}_{0},\mathcal{A}_{1},\mathcal{A}_{2})| = F_{1}(\mathcal{A}_{0},\mathcal{A}_{1},\mathcal{A}_{2}). \qquad (68)

Now we wish to derive the Lovász extension of the set function $G(\mathcal{E}_{T},\overline{\mathcal{E}}_{T})$. Denote by $\sigma^{2}_{i}=[\sigma^{1}_{i_{0}},\sigma^{1}_{i_{1}},\sigma^{1}_{i_{2}}]$ the oriented $2$-order simplex, where $1\leq i_{k}\leq E$ for $k=0,1,2$. To make explicit the dependence of the function $G(\mathcal{E}_{T},\overline{\mathcal{E}}_{T})$ on the edge indexes $i_{0},i_{1},i_{2}$, we rewrite (67) as

G(\mathcal{E}_{T},\overline{\mathcal{E}}_{T}) = \sum_{\sigma^{2}_{i}\in\mathcal{T}} \left| \tilde{G}_{i}(\mathcal{E}_{T},\overline{\mathcal{E}}_{T}) \right| \qquad (69)

where, exploiting the structure of $\mathbf{B}_{2}$ given in (3), each term $\tilde{G}_{i}$ can be written as (we omit the dependence of $\tilde{G}_{i}$ on the edge set partition, for notation simplicity)

\tilde{G}_{i} = \mathbf{1}_{\mathcal{E}_{T}}(\sigma^{1}_{i_{0}}) - \mathbf{1}_{\mathcal{E}_{T}}(\sigma^{1}_{i_{1}}) + \mathbf{1}_{\mathcal{E}_{T}}(\sigma^{1}_{i_{2}}), \qquad (70)

where $\mathbf{1}_{\mathcal{E}_{T}}(\sigma^{1}_{i_{k}})$, $k=0,1,2$, are the entries of $\mathbf{1}_{\mathcal{E}_{T}}$ corresponding to the edge $\sigma^{1}_{i_{k}}$. $\tilde{G}_{i}$ is then a set function defined only on the power set of $\{i_{0},i_{1},i_{2}\}$. We can now derive its Lovász extension $f(x_{i_{0}},x_{i_{1}},x_{i_{2}})$. First, assume $x_{i_{0}}\leq x_{i_{1}}\leq x_{i_{2}}$. Then, from Def. 1, we have $C_{0}=\{i_{0},i_{1},i_{2}\}$, $C_{1}=\{i_{1},i_{2}\}$ and $C_{2}=\{i_{2}\}$. Therefore, from (70), it holds:

\tilde{G}_{i}(C_{0}) = \mathbf{1}_{\mathcal{E}_{T}}(\sigma^{1}_{i_{0}}) - \mathbf{1}_{\mathcal{E}_{T}}(\sigma^{1}_{i_{1}}) + \mathbf{1}_{\mathcal{E}_{T}}(\sigma^{1}_{i_{2}}) = 1
\tilde{G}_{i}(C_{1}) = -\mathbf{1}_{\mathcal{E}_{T}}(\sigma^{1}_{i_{1}}) + \mathbf{1}_{\mathcal{E}_{T}}(\sigma^{1}_{i_{2}}) = 0
\tilde{G}_{i}(C_{2}) = \mathbf{1}_{\mathcal{E}_{T}}(\sigma^{1}_{i_{2}}) = 1. \qquad (71)

Then, from (65) we get:

f(x_{i_{0}},x_{i_{1}},x_{i_{2}}) = \tilde{G}_{i}(C_{0})\,x_{i_{0}} + \tilde{G}_{i}(C_{1})\,(x_{i_{1}}-x_{i_{0}}) + \tilde{G}_{i}(C_{2})\,(x_{i_{2}}-x_{i_{1}}) = x_{i_{0}} - x_{i_{1}} + x_{i_{2}}.

Let us now assume $x_{i_{1}}\leq x_{i_{0}}\leq x_{i_{2}}$. Then we have $C_{0}=\{i_{0},i_{1},i_{2}\}$, $C_{1}=\{i_{0},i_{2}\}$ and $C_{2}=\{i_{2}\}$, so that:

\tilde{G}_{i}(C_{0}) = \mathbf{1}_{\mathcal{E}_{T}}(\sigma^{1}_{i_{0}}) - \mathbf{1}_{\mathcal{E}_{T}}(\sigma^{1}_{i_{1}}) + \mathbf{1}_{\mathcal{E}_{T}}(\sigma^{1}_{i_{2}}) = 1
\tilde{G}_{i}(C_{1}) = \mathbf{1}_{\mathcal{E}_{T}}(\sigma^{1}_{i_{0}}) + \mathbf{1}_{\mathcal{E}_{T}}(\sigma^{1}_{i_{2}}) = 2
\tilde{G}_{i}(C_{2}) = \mathbf{1}_{\mathcal{E}_{T}}(\sigma^{1}_{i_{2}}) = 1 \qquad (72)

and

f(x_{i_{0}},x_{i_{1}},x_{i_{2}}) = 2(x_{i_{0}}-x_{i_{1}}) + (x_{i_{2}}-x_{i_{0}}) + x_{i_{1}} = x_{i_{0}} - x_{i_{1}} + x_{i_{2}}.

By following similar derivations, it is not difficult to show that, for $x_{i_{0}}\leq x_{i_{2}}\leq x_{i_{1}}$, $x_{i_{1}}\leq x_{i_{2}}\leq x_{i_{0}}$, $x_{i_{2}}\leq x_{i_{1}}\leq x_{i_{0}}$ and $x_{i_{2}}\leq x_{i_{0}}\leq x_{i_{1}}$, it holds $f(x_{i_{0}},x_{i_{1}},x_{i_{2}})=x_{i_{0}}-x_{i_{1}}+x_{i_{2}}$. Therefore, from (69), defining the edge signal $\boldsymbol{x}^{1}=[x_{1},\ldots,x_{E}]^{T}$ and exploiting the additivity property [49], we can write the Lovász extension of $G(\mathcal{E}_{T},\overline{\mathcal{E}}_{T})$ as

f_{1}(\boldsymbol{x}^{1}) = \sum_{\sigma^{2}_{i}\in\mathcal{T}} \left| x^{1}_{i_{0}} - x^{1}_{i_{1}} + x^{1}_{i_{2}} \right| \qquad (73)

or, equivalently, as

f_{1}(\boldsymbol{x}^{1}) = \sum_{j=1}^{T} \left| \sum_{i=1}^{E} B_{2}(i,j)\,x^{1}_{i} \right|. \qquad (74)
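In matrix form, (74) is simply the $\ell_{1}$-norm of $\mathbf{B}_{2}^{T}\boldsymbol{x}^{1}$. A minimal sketch, on a hypothetical complex with four edges and two triangles (the incidence matrix below is illustrative, not the $\mathbf{B}_{2}$ of Fig. 1), checks that the two expressions (73) and (74) coincide:

```python
import numpy as np

# hypothetical edge-triangle incidence matrix (E = 4 edges, T = 2 triangles):
# each column holds the +1/-1/+1 pattern of one oriented triangle
B2 = np.array([[ 1,  0],
               [-1,  1],
               [ 1, -1],
               [ 0,  1]], dtype=float)
x1 = np.array([0.5, -1.0, 2.0, 0.0])   # an edge signal

f1_matrix = np.abs(B2.T @ x1).sum()    # expression (74): ||B2^T x||_1
triangles = [(0, 1, 2), (1, 2, 3)]     # edge index triplets (i0, i1, i2)
f1_sum = sum(abs(x1[i0] - x1[i1] + x1[i2]) for i0, i1, i2 in triangles)
# the two evaluations agree: |0.5 + 1 + 2| + |-1 - 2 + 0| = 3.5 + 3.0 = 6.5
```

This $\ell_{1}$-composite form is what makes the relaxed triangle-cut objective amenable to standard convex solvers.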

From the equality $G(\mathcal{E}_{T},\overline{\mathcal{E}}_{T})=F_{1}(\mathcal{A}_{0},\mathcal{A}_{1},\mathcal{A}_{2})$ in (68), we can state that $f_{1}(\boldsymbol{x}^{1})$ is the Lovász extension of $F_{1}$. This completes the proof of Theorem 1.

References

  • [1] S. Barbarossa, S. Sardellitti, and E. Ceci, “Learning from signals defined over simplicial complexes,” in 2018 IEEE Data Sci. Work. (DSW), 2018, pp. 51–55.
  • [2] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst, “The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains,” IEEE Signal Process. Mag., vol. 30, no. 3, pp. 83–98, May 2013.
  • [3] A. Ortega, P. Frossard, J. Kovačević, J. M. F. Moura, and P. Vandergheynst, “Graph signal processing: Overview, challenges, and applications,” Proc. of the IEEE, vol. 106, no. 5, pp. 808–828, May 2018.
  • [4] S. Klamt, U.-U. Haus, and F. Theis, “Hypergraphs and cellular networks,” PLoS Comput. Biol., vol. 5, no. 5, 2009.
  • [5] O. T. Courtney and G. Bianconi, “Generalized network structures: The configuration model and the canonical ensemble of simplicial complexes,” Phys. Rev. E, vol. 93, no. 6, pp. 062311, 2016.
  • [6] C. Giusti, R. Ghrist, and D. S. Bassett, “Two’s company, three (or more) is a simplex,” J. Comput. Neurosci., vol. 41, no. 1, pp. 1–14, 2016.
  • [7] T. Shen, Z. Zhang, Z. Chen, D. Gu, S. Liang, Y. Xu, R. Li, Y. Wei, Z. Liu, Y. Yi, and X. Xie, “A genome-scale metabolic network alignment method within a hypergraph-based framework using a rotational tensor-vector product,” Sci. Rep., vol. 8, no. 1, pp. 1–16, 2018.
  • [8] A. R. Benson, R. Abebe, M. T. Schaub, A. Jadbabaie, and J. Kleinberg, “Simplicial closure and higher-order link prediction,” Proc. of Nat. Acad. of Sci., vol. 115, no. 48, pp. E11221–E11230, 2018.
  • [9] S. Agarwal, K. Branson, and S. Belongie, “Higher order learning with graphs,” in Proc. 23rd Int. Conf. on Mach. Lear., 2006, pp. 17–24.
  • [10] J. R. Munkres, Topology, Prentice Hall, 2000.
  • [11] L. Kanari, P. Dłotko, M. Scolamiero, R. Levi, J. Shillcock, K. Hess, and H. Markram, “A topological representation of branching neuronal morphologies,” Neuroinfor., vol. 16, no. 1, pp. 3–13, 2018.
  • [12] A. Patania, G. Petri, and F. Vaccarino, “The shape of collaborations,” EPJ Data Sci., vol. 6, no. 1, pp. 18, 2017.
  • [13] R. Ramanathan, A. Bar-Noy, P. Basu, M. Johnson, W. Ren, A. Swami, and Q. Zhao, “Beyond graphs: Capturing groups in networks,” in Proc. IEEE Conf. Comp. Commun. Work. (INFOCOM), 2011, pp. 870–875.
  • [14] T. J. Moore, R. J. Drost, P. Basu, R. Ramanathan, and A. Swami, “Analyzing collaboration networks using simplicial complexes: A case study,” in Proc. IEEE Conf. Comp. Commun. Work. (INFOCOM), 2012, pp. 238–243.
  • [15] T. Roman, A. Nayyeri, B. T. Fasy, and R. Schwartz, “A simplicial complex-based approach to unmixing tumor progression data,” BMC Bioinfor., vol. 16, no. 1, pp. 254, 2015.
  • [16] G. Carlsson, “Topology and data,” Bulletin Amer. Math. Soc., vol. 46, no. 2, pp. 255–308, 2009.
  • [17] A. Muhammad and M. Egerstedt, “Control using higher order Laplacians in network topologies,” in Proc. of 17th Int. Symp. Math. Theory Netw. Syst., 2006, pp. 1024–1038.
  • [18] X. Jiang, L. H. Lim, Y. Yao, and Y. Ye, “Statistical ranking and combinatorial Hodge theory,” Math. Program., vol. 127, no. 1, pp. 203–244, 2011.
  • [19] Q. Xu, Q. Huang, T. Jiang, B. Yan, W. Lin, and Y. Yao, “HodgeRank on random graphs for subjective video quality assessment,” IEEE Trans. Multimedia, vol. 14, no. 3, pp. 844–857, June 2012.
  • [20] A. Tahbaz-Salehi and A. Jadbabaie, “Distributed coverage verification in sensor networks without location information,” IEEE Trans. Autom. Contr., vol. 55, no. 8, pp. 1837–1849, Aug. 2010.
  • [21] H. Chintakunta and H. Krim, “Distributed localization of coverage holes using topological persistence,” IEEE Trans. Signal Process., vol. 62, no. 10, pp. 2531–2541, May 2014.
  • [22] V. De Silva and R. Ghrist, “Coverage in sensor networks via persistent homology,” Algeb. & Geom. Topol., vol. 7, no. 1, pp. 339–358, 2007.
  • [23] S. Emrani, T. Gentimis, and H. Krim, “Persistent homology of delay embeddings and its application to wheeze detection,” IEEE Signal Process. Lett., vol. 21, no. 4, pp. 459–463, Apr. 2014.
  • [24] H. Edelsbrunner and J. Harer, “Persistent homology-a survey,” Contemp. Math., vol. 453, pp. 257–282, 2008.
  • [25] D. Horak, S. Maletić, and M. Rajković, “Persistent homology of complex networks,” J. Statis. Mech.: Theory and Experiment, vol. 2009, no. 3, pp. P03034, 2009.
  • [26] H. Krim and A. B. Hamza, Geometric methods in signal and image analysis, Cambridge Univ. Press, 2015.
  • [27] M. Robinson, Topological signal processing, Springer, 2016.
  • [28] W. Huang, T. A. W. Bolton, J. D. Medaglia, D. S. Bassett, A. Ribeiro, and D. Van De Ville, “A graph signal processing perspective on functional brain imaging,” Proc. of the IEEE, vol. 106, no. 5, pp. 868–885, May 2018.
  • [29] K. K. Leung, W. A. Massey, and W. Whitt, “Traffic models for wireless communication networks,” IEEE J. Sel. Areas Commun., vol. 12, no. 8, pp. 1353–1364, Oct. 1994.
  • [30] R. Sever and J. S. Brugge, “Signal transduction in cancer,” Cold Spring Harbor Perspec. in Medic., vol. 5, no. 4: a006098, 2015.
  • [31] M. T. Schaub and S. Segarra, “Flow smoothing and denoising: Graph signal processing in the edge-space,” in 2018 IEEE Global Conf. Signal Inf. Process. (GlobalSIP), Nov. 2018, pp. 735–739.
  • [32] J. Jia, M. T. Schaub, S. Segarra, and A. R. Benson, “Graph-based semi-supervised & active learning for edge flows,” in Proc. of 25th ACM SIGKDD Int. Conf. Knowl. Disc. & Data Min., 2019, pp. 761–771.
  • [33] T. S. Evans and R. Lambiotte, “Line graphs, link partitions, and overlapping communities,” Phys. Rev. E, vol. 80, Jul. 2009.
  • [34] Y.-Y. Ahn, J. P. Bagrow, and S. Lehmann, “Link communities reveal multiscale complexity in networks,” Nature, vol. 466, pp. 761–764, 2010.
  • [35] M. Mesbahi and M. Egerstedt, Graph theoretic methods in multiagent networks, vol. 33, Princ. Univ. Press, 2010.
  • [36] S. Mukherjee and J. Steenbergen, “Random walks on simplicial complexes and harmonics,” Random Struct. & Algor., vol. 49, no. 2, pp. 379–405, 2016.
  • [37] O. Parzanchevski and R. Rosenthal, “Simplicial complexes: spectrum, homology and random walks,” Random Struct. & Algor., vol. 50, no. 2, pp. 225–261, 2017.
  • [38] M. T. Schaub, A. R. Benson, P. Horn, G. Lippner, and A. Jadbabaie, “Random walks on simplicial complexes and the normalized Hodge Laplacian,” arXiv preprint arXiv:1807.05044, 2018.
  • [39] G. Mateos, S. Segarra, A. G. Marques, and A. Ribeiro, “Connecting the dots: Identifying network structure via graph signal processing,” IEEE Signal Process. Mag., vol. 36, no. 3, pp. 16–43, May 2019.
  • [40] X. Dong, D. Thanou, M. Rabbat, and P. Frossard, “Learning graphs from data: A signal representation perspective,” IEEE Signal Process. Mag., vol. 36, no. 3, pp. 44–63, May 2019.
  • [41] G. La Manno et al., “RNA velocity of single cells,” Nature, vol. 560, no. 7719, pp. 494–498, 2018.
  • [42] L. J. Grady and J. R. Polimeni, Discrete calculus: Applied analysis on graphs for computational science, Sprin. Sci. & Busin. Media, 2010.
  • [43] T. E. Goldberg, “Combinatorial Laplacians of simplicial complexes,” Senior Thesis, Bard College, 2002.
  • [44] U. Von Luxburg, “A tutorial on spectral clustering,” Stat. Comput., vol. 17, no. 4, pp. 395–416, 2007.
  • [45] D. Horak and J. Jost, “Spectra of combinatorial Laplace operators on simplicial complexes,” Adv. Math., vol. 244, pp. 303–336, 2013.
  • [46] J. Steenbergen, Towards a spectral theory for simplicial complexes, Ph.D. thesis, Duke Univ., 2013.
  • [47] L.-H. Lim, “Hodge Laplacians on graphs,” S. Mukherjee (Ed.), Geometry and Topology in Statistical Inference, Proc. Sympos. Appl. Math., 76, AMS, 2015.
  • [48] B. Eckmann, “Harmonische funktionen und randwertaufgaben in einem komplex,” Comment. Math. Helv., vol. 17, no. 1, pp. 240–255, 1944.
  • [49] F. Bach, Learning with Submodular Functions: A Convex Optimization Perspective, vol. 6, Foundations and Trends in Machine Learning, 2013.
  • [50] L. Lovász, “Submodular functions and convexity,” in A. Bachem et al. (eds.) Math. Program. The State of the Art, Springer Berlin Heidelberg, pp. 235–257, 1983.
  • [51] S. Sardellitti, S. Barbarossa, and P. Di Lorenzo, “On the graph Fourier transform for directed graphs,” IEEE J. Sel. Top. Signal Process., vol. 11, no. 6, pp. 796–811, Sept. 2017.
  • [52] O. Parzanchevski, R. Rosenthal, and R. J. Tessler, “Isoperimetric inequalities in simplicial complexes,” Combinatorica, vol. 36, no. 2, pp. 195–227, 2016.
  • [53] J. Steenbergen, C. Klivans, and S. Mukherjee, “A Cheeger-type inequality on simplicial complexes,” Adv. Appl. Math., vol. 56, pp. 56 –77, 2014.
  • [54] M. Hein, S. Setzer, L. Jost, and S. S. Rangapuram, “The total variation on hypergraphs - Learning on hypergraphs revisited,” in Adv. Neur. Inform. Proces. Syst. (NIPS), Dec. 5-8 2013.
  • [55] S. Boyd and L. Vandenberghe, Convex optimization, Cambridge university press, 2004.
  • [56] M. Tsitsvero, S. Barbarossa, and P. Di Lorenzo, “Signals on graphs: Uncertainty principle and sampling,” IEEE Trans. Signal Process., vol. 64, no. 18, pp. 4845–4860, Sept. 2016.
  • [57] S. Barbarossa and M. Tsitsvero, “An introduction to hypergraph signal processing,” in Proc. of IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), 2016, pp. 6425–6429.
  • [58] Élie Cartan, “Sur certaines expressions différentielles et le problème de pfaff,” in Annales scientifiques de l’École Normale Supérieure, 1899, vol. 16, pp. 239–332.
  • [59] A. N. Hirani, Discrete Exterior Calculus, PhD thesis, California Institute of Technology, May 2003.
  • [60] N. Bell and A. N. Hirani, “Pydec: software and algorithms for discretization of exterior calculus,” ACM Trans. Math. Soft. (TOMS), vol. 39, no. 1, pp. 3, 2012.
  • [61] M. Fisher, P. Schröder, M. Desbrun, and H. Hoppe, “Design of tangent vector fields,” in ACM Trans. Grap. (TOG), 2007, vol. 26, p. 56.
  • [62] C. Brandt, L. Scandolo, E. Eisemann, and K. Hildebrandt, “Spectral processing of tangential vector fields,” in Computer Graphics Forum. Wiley Online Library, 2017, vol. 36, pp. 338–353.
  • [63] J. Dodziuk, “Finite-difference approach to the Hodge theory of harmonic forms,” Amer. Jour. Math., vol. 98, no. 1, pp. 79–104, 1976.
  • [64] G. B. Giannakis, Y. Shen, and G. V. Karanikolas, “Topology identification and learning over graphs: Accounting for nonlinearities and dynamics,” Proc. of the IEEE, vol. 106, no. 5, pp. 787–807, May 2018.
  • [65] Big data-Dandelion API datasets, available at: https://dandelion.eu/.
  • [66] S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” in SIAM J. Sci. Comput., 1998, vol. 20, pp. 33–61.
  • [67] L. Jost, S. Setzer, and M. Hein, “Nonlinear eigenproblems in data analysis: Balanced graph cuts and the ratioDCA-Prox,” in Extrac. Quant. Infor. Complex Syst., Spr. Intern. Publ., vol. 102, pp. 263–279, 2014.
  • [68] D. S. Bernstein, Matrix Mathematics: Theory, Facts, and Formulas with Application to Linear Systems Theory, Princeton Univ. Press, 2005.

Appendix B Supporting material

This document contains supporting material complementing the paper "Topological Signal Processing over Simplicial Complexes," to appear in the IEEE Transactions on Signal Processing. Section A contains Proposition 2, and its proof, which is instrumental in proving Theorems 3 and 4 in Sections B and C, respectively.

B-A Proposition 2

We first need to derive the relationship between the bandwidth of the edge signal $\boldsymbol{s}^{1}_{\text{irr}}=\mathbf{B}_{1}^{T}\boldsymbol{s}^{0}$ and that of the vertex signal $\boldsymbol{s}^{0}$, as stated in the following proposition.

Proposition 2

Let $\boldsymbol{s}^{1}_{\text{irr}}=\mathbf{B}_{1}^{T}\boldsymbol{s}^{0}$ be the irrotational part of $\boldsymbol{s}^{1}$, with $\boldsymbol{s}^{0}=\mathbf{F}^{0}_{\mathcal{F}_{0}}\boldsymbol{s}^{0}$ an $\mathcal{F}_{0}$-bandlimited vertex signal. Then, $\boldsymbol{s}^{1}_{\text{irr}}$ is an $\mathcal{F}_{\text{irr}}$-bandlimited edge signal, with $\mathcal{F}_{\text{irr}}$ the set of indexes of the eigenvectors of $\mathbf{L}_{1}$ stacked in the columns of $\mathbf{U}_{\mathcal{F}_{\text{irr}}}$, such that $\mathbf{U}_{\mathcal{F}_{\text{irr}}}=\mathbf{B}_{1}^{T}\mathbf{U}^{0}_{\mathcal{F}_{0}-\mathcal{C}_{1}}$. The bandwidth of $\boldsymbol{s}^{1}_{\text{irr}}$ is $|\mathcal{F}_{\text{irr}}|=|\mathcal{F}_{0}|-c_{1}$, where $c_{1}\geq 0$ is the number of eigenvectors in the bandwidth of $\boldsymbol{s}^{0}$ belonging to $\text{ker}(\mathbf{L}_{0})$.

Proof:

Let us denote by $\mathbf{U}^{0}_{\mathcal{F}_{0}}$ the $N\times|\mathcal{F}_{0}|$ matrix whose columns $\boldsymbol{u}^{0}_{i}$, $\forall i\in\mathcal{F}_{0}$, are the eigenvectors of the $0$-order Laplacian $\mathbf{B}_{1}\mathbf{B}_{1}^{T}$. Since $\boldsymbol{s}^{1}_{\text{irr}}=\mathbf{B}_{1}^{T}\boldsymbol{s}^{0}$, we get

\boldsymbol{s}^{1}_{\text{irr}} = \mathbf{B}_{1}^{T}\boldsymbol{s}^{0} = \mathbf{B}_{1}^{T}\mathbf{U}^{0}_{\mathcal{F}_{0}}(\mathbf{U}^{0}_{\mathcal{F}_{0}})^{T}\boldsymbol{s}^{0} \qquad (75)

where the last equality follows from the bandlimitedness of $\boldsymbol{s}^{0}$, i.e. $\boldsymbol{s}^{0}=\mathbf{U}^{0}_{\mathcal{F}_{0}}(\mathbf{U}^{0}_{\mathcal{F}_{0}})^{T}\boldsymbol{s}^{0}$. From property 2) in Prop. 1, to each eigenvector $\boldsymbol{u}^{0}_{i}$ of $\mathbf{B}_{1}\mathbf{B}_{1}^{T}$ with $\boldsymbol{u}^{0}_{i}\notin\text{ker}(\mathbf{B}_{1}\mathbf{B}_{1}^{T})$ there corresponds an eigenvector $\mathbf{B}_{1}^{T}\boldsymbol{u}^{0}_{i}$ of $\mathbf{B}_{1}^{T}\mathbf{B}_{1}$ with the same eigenvalue. Let $\mathbf{U}_{\mathcal{F}_{\text{irr}}}$ denote the $E\times|\mathcal{F}_{\text{irr}}|$ matrix whose columns are the eigenvectors of $\mathbf{L}_{1}$ associated with the frequency index set $\mathcal{F}_{\text{irr}}$, with

irr={i:𝒖i=𝐁1T𝒖i0,𝒖i0ker(𝐁1𝐁1T),i0𝒞1}.\mathcal{F}_{\text{irr}}=\{\ell_{i}\in\mathcal{F}:\mbox{\boldmath$u$}_{\ell_{i}}=\mbox{$\mathbf{B}$}_{1}^{T}\mbox{\boldmath$u$}^{0}_{i},\mbox{\boldmath$u$}_{i}^{0}\notin\text{ker}(\mbox{$\mathbf{B}$}_{1}\mbox{$\mathbf{B}$}_{1}^{T}),\forall i\in\mathcal{F}_{0}-\mathcal{C}_{1}\}. (76)

Then, if $\mathbf{U}_{\mathcal{F}_{0}}^{0}=[\mathbf{U}_{\mathcal{C}_{1}}^{0},\mathbf{U}_{\mathcal{F}_{0}-\mathcal{C}_{1}}^{0}]$, we get

$$\mathbf{B}_{1}^{T}\mathbf{U}_{\mathcal{F}_{0}}^{0}=[\mathbf{O}_{\mathcal{C}_{1}},\,\mathbf{U}_{\mathcal{F}_{\text{irr}}}]. \qquad (77)$$

Therefore, equation (75) reduces to

$$\mathbf{s}^{1}_{\text{irr}}=\mathbf{U}_{\mathcal{F}_{\text{irr}}}(\mathbf{U}_{\mathcal{F}_{0}-\mathcal{C}_{1}}^{0})^{T}\mathbf{s}^{0}. \qquad (78)$$

From the equality $\mathbf{B}_{1}^{T}\mathbf{B}_{1}\mathbf{u}_{\ell_{i}}=\lambda_{\ell_{i}}\mathbf{u}_{\ell_{i}}$, multiplying both sides by $\mathbf{B}_{1}$, we easily derive $\mathbf{u}^{0}_{i}=\mathbf{B}_{1}\mathbf{u}_{\ell_{i}}$ for all $\mathbf{u}_{\ell_{i}}\notin\ker(\mathbf{B}_{1}^{T}\mathbf{B}_{1})$, so that equation (78) is equivalent to

$$\mathbf{s}^{1}_{\text{irr}}=\mathbf{U}_{\mathcal{F}_{\text{irr}}}\mathbf{U}_{\mathcal{F}_{\text{irr}}}^{T}\mathbf{B}_{1}^{T}\mathbf{s}^{0}. \qquad (79)$$

Since 𝒔irr1=𝐁1T𝒔0\mbox{\boldmath$s$}^{1}_{\text{irr}}=\mbox{$\mathbf{B}$}_{1}^{T}\mbox{\boldmath$s$}^{0}, we can rewrite (79) as

𝒔irr1=𝐔irr𝐔irrT𝒔irr1.\mbox{\boldmath$s$}^{1}_{\text{irr}}=\mbox{$\mathbf{U}$}_{\mathcal{F}_{\text{irr}}}\mbox{$\mathbf{U}$}_{\mathcal{F}_{\text{irr}}}^{T}\mbox{\boldmath$s$}^{1}_{\text{irr}}. (80)

This last equality proves that $\mathbf{s}^{1}_{\text{irr}}$ is an $\mathcal{F}_{\text{irr}}$-bandlimited edge signal, with $\mathcal{F}_{\text{irr}}$ as in (76) and bandwidth $|\mathcal{F}_{\text{irr}}|=|\mathcal{F}_{0}|-c_{1}$, where $c_{1}=|\mathcal{C}_{1}|\geq 0$.
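The spectral correspondence used in this proof is easy to check numerically. The following is a minimal sketch with numpy, on a hypothetical 4-vertex, 5-edge graph (the incidence matrix $\mathbf{B}_1$ below is an illustrative choice, not taken from the paper's experiments): a bandlimited vertex signal is pushed to the edges through $\mathbf{B}_1^T$, and the resulting irrotational signal is verified to satisfy (80) with $|\mathcal{F}_{\text{irr}}|=|\mathcal{F}_0|-c_1$.

```python
import numpy as np

np.random.seed(0)
# Hypothetical toy graph: 4 vertices, 5 oriented edges
# e0=(0,1), e1=(0,2), e2=(1,2), e3=(1,3), e4=(2,3); B1[v,e] = -1 at tail, +1 at head.
B1 = np.array([[-1., -1.,  0.,  0.,  0.],
               [ 1.,  0., -1., -1.,  0.],
               [ 0.,  1.,  1.,  0., -1.],
               [ 0.,  0.,  0.,  1.,  1.]])

L0 = B1 @ B1.T                      # 0-order Laplacian
lam0, U0 = np.linalg.eigh(L0)       # eigenpairs, eigenvalues in ascending order

# F0-bandlimited vertex signal: keep the first 3 frequencies. The graph is
# connected, so U0[:,0] spans ker(L0) and c1 = 1 here.
F0 = [0, 1, 2]
s0 = U0[:, F0] @ np.random.randn(len(F0))

# Irrotational edge signal induced by s0
s1_irr = B1.T @ s0

# Eigenvectors of L1 = B1^T B1 from the non-kernel part of the band:
# B1^T u_i^0 is an eigenvector up to scaling (Prop. 1), so normalize.
U_irr = B1.T @ U0[:, [1, 2]]        # drop the kernel eigenvector (c1 = 1)
U_irr /= np.linalg.norm(U_irr, axis=0)

# Bandlimitedness check, equation (80): s1_irr = U_irr U_irr^T s1_irr
assert np.allclose(U_irr @ U_irr.T @ s1_irr, s1_irr)
print("bandwidth |F_irr| =", U_irr.shape[1], "= |F0| - c1 =", len(F0) - 1)
```

The check is exact (up to floating point) because $\mathbf{B}_1^T$ annihilates the kernel eigenvector and maps the remaining band eigenvectors to mutually orthogonal eigenvectors of $\mathbf{L}_1$.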

B-B Proof of Theorem 3

Given the sampled signals $\mathbf{s}^{1}_{\mathcal{S}}$ and $\mathbf{s}^{0}_{\mathcal{A}}$, we get

$$\begin{aligned}\mathbf{s}^{0}_{\mathcal{A}}&=\mathbf{D}_{\mathcal{A}}\mathbf{s}^{0}\\ \mathbf{s}^{1}_{\mathcal{S}}&=\mathbf{D}_{\mathcal{S}}\mathbf{s}^{1}=\mathbf{D}_{\mathcal{S}}\bar{\mathbf{s}}^{1}+\mathbf{D}_{\mathcal{S}}\mathbf{B}_{1}^{T}\mathbf{s}^{0}\end{aligned} \qquad (81)$$

with $\bar{\mathbf{s}}^{1}=\mathbf{s}^{1}_{\text{sol}}+\mathbf{s}^{1}_{H}$. Then, since $\mathbf{s}^{0}=\mathbf{F}_{\mathcal{F}_{0}}^{0}\mathbf{s}^{0}$ and $\mathbf{s}^{1}=\mathbf{F}_{\mathcal{F}}\mathbf{s}^{1}$, we easily obtain

$$\begin{bmatrix}\mathbf{s}^{0}_{\mathcal{A}}\\ \mathbf{s}^{1}_{\mathcal{S}}\end{bmatrix}=\mathbf{G}\begin{bmatrix}\mathbf{s}^{0}\\ \bar{\mathbf{s}}^{1}\end{bmatrix}, \qquad (82)$$

where

$$\mathbf{G}=\begin{bmatrix}\mathbf{D}_{\mathcal{A}}\mathbf{F}_{\mathcal{F}_{0}}^{0} & \mathbf{O}\\ \mathbf{D}_{\mathcal{S}}\mathbf{B}_{1}^{T}\mathbf{F}_{\mathcal{F}_{0}}^{0} & \mathbf{D}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{\text{sH}}}\end{bmatrix}=\begin{bmatrix}\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{A}}\mathbf{F}_{\mathcal{F}_{0}}^{0} & \mathbf{O}\\ (\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{S}})\mathbf{B}_{1}^{T}\mathbf{F}_{\mathcal{F}_{0}}^{0} & \mathbf{I}-\overline{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{\text{sH}}}\end{bmatrix} \qquad (83)$$

with $\mathcal{F}_{\text{sH}}$ the set of frequency indexes in $\mathcal{F}$ corresponding to the eigenvectors of $\mathbf{L}_{1}$ belonging to the solenoidal and harmonic subspaces. Assuming that both conditions $\|\overline{\mathbf{D}}_{\mathcal{A}}\mathbf{F}_{\mathcal{F}_{0}}^{0}\|_{2}<1$ and $\|\overline{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{\text{sH}}}\|_{2}<1$ hold, the matrix $\mathbf{G}$ is invertible and, using the inverse of partitioned matrices [68], we get

$$\mathbf{Q}=\mathbf{G}^{-1}=\begin{bmatrix}(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{A}}\mathbf{F}_{\mathcal{F}_{0}}^{0})^{-1} & \mathbf{O}\\ \mathbf{P} & (\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{\text{sH}}})^{-1}\end{bmatrix} \qquad (84)$$

with $\mathbf{P}=-(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{\text{sH}}})^{-1}\mathbf{D}_{\mathcal{S}}\mathbf{B}_{1}^{T}\mathbf{F}_{\mathcal{F}_{0}}^{0}(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{A}}\mathbf{F}_{\mathcal{F}_{0}}^{0})^{-1}$. Then we can recover the signals $\mathbf{s}^{0}$ and $\bar{\mathbf{s}}^{1}$ as

$$\begin{bmatrix}\mathbf{s}^{0}\\ \bar{\mathbf{s}}^{1}\end{bmatrix}=\mathbf{Q}\begin{bmatrix}\mathbf{s}^{0}_{\mathcal{A}}\\ \mathbf{s}^{1}_{\mathcal{S}}\end{bmatrix}. \qquad (85)$$

This concludes the proof of point a). Let us now prove point b). From Proposition 2 it follows that, if $\mathbf{s}^{0}$ is $\mathcal{F}_{0}$-bandlimited, then $\mathbf{s}^{1}_{\text{irr}}=\mathbf{B}_{1}^{T}\mathbf{s}^{0}$ is an $\mathcal{F}_{\text{irr}}$-bandlimited signal with $\mathcal{F}_{\text{irr}}$ given in (76). This implies that the edge signal $\mathbf{s}^{1}=\mathbf{s}^{1}_{\text{sol}}+\mathbf{s}^{1}_{H}+\mathbf{s}^{1}_{\text{irr}}$ is an $\mathcal{F}$-bandlimited edge signal with $\mathcal{F}=\mathcal{F}_{\text{sH}}\cup\mathcal{F}_{\text{irr}}$ and bandwidth $|\mathcal{F}|=|\mathcal{F}_{\text{sH}}|+|\mathcal{F}_{0}|-c_{1}$. Let us now consider the system in (85). We get

$$\begin{aligned}\mathbf{s}^{0}&=(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{A}}\mathbf{F}_{\mathcal{F}_{0}}^{0})^{-1}\mathbf{s}^{0}_{\mathcal{A}}\\ \bar{\mathbf{s}}^{1}&=-(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{\text{sH}}})^{-1}\mathbf{D}_{\mathcal{S}}\mathbf{B}_{1}^{T}\mathbf{F}_{\mathcal{F}_{0}}^{0}(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{A}}\mathbf{F}_{\mathcal{F}_{0}}^{0})^{-1}\mathbf{s}^{0}_{\mathcal{A}}+(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{\text{sH}}})^{-1}\mathbf{s}^{1}_{\mathcal{S}}.\end{aligned} \qquad (86)$$

Using the first equation in (86) and the fact that $\mathbf{D}_{\mathcal{S}}\mathbf{s}^{1}=\mathbf{D}_{\mathcal{S}}\bar{\mathbf{s}}^{1}+\mathbf{D}_{\mathcal{S}}\mathbf{B}_{1}^{T}\mathbf{s}^{0}$, it holds

$$\begin{aligned}\bar{\mathbf{s}}^{1}&=(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{\text{sH}}})^{-1}(\mathbf{D}_{\mathcal{S}}\bar{\mathbf{s}}^{1}+\mathbf{D}_{\mathcal{S}}\mathbf{B}_{1}^{T}\mathbf{s}^{0}-\mathbf{D}_{\mathcal{S}}\mathbf{B}_{1}^{T}\mathbf{F}_{\mathcal{F}_{0}}^{0}\mathbf{s}^{0})\\ &=(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{\text{sH}}})^{-1}\mathbf{D}_{\mathcal{S}}\bar{\mathbf{s}}^{1}\end{aligned} \qquad (87)$$

where the last equality follows from $\mathbf{s}^{0}=\mathbf{F}_{\mathcal{F}_{0}}^{0}\mathbf{s}^{0}$. Hence, from (87), it follows that to perfectly recover the solenoidal and harmonic parts of $\mathbf{s}^{1}$ we need a number of samples $N_{1}$ at least equal to the signal bandwidth $|\mathcal{F}_{\text{sH}}|$. Finally, from the first equation in (86), it follows that to perfectly recover $\mathbf{s}^{0}$ we need a number of samples $N_{0}$ at least equal to the bandwidth $|\mathcal{F}_{0}|$ of the vertex signal $\mathbf{s}^{0}$. This concludes the proof of point b) of the theorem.
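The recovery formulas (86) can be exercised end to end on a small example. The sketch below is illustrative only: it assumes a hypothetical 4-vertex, 5-edge graph with no filled triangles (so the solenoidal-plus-harmonic subspace is $\ker(\mathbf{B}_1)$), takes $\mathbf{s}^0$ full-band for simplicity, samples all vertices and all edges but one, checks the two norm conditions, and then applies (86) directly.

```python
import numpy as np

np.random.seed(0)
# Hypothetical toy graph: 4 vertices, 5 oriented edges (same convention as before).
B1 = np.array([[-1., -1.,  0.,  0.,  0.],
               [ 1.,  0., -1., -1.,  0.],
               [ 0.,  1.,  1.,  0., -1.],
               [ 0.,  0.,  0.,  1.,  1.]])
N, E = B1.shape

# No triangles (B2 = 0): L1 = B1^T B1, and the solenoidal + harmonic
# subspace is ker(B1), of dimension E - rank(B1) = 2 here.
lam1, U1 = np.linalg.eigh(B1.T @ B1)
U_sH = U1[:, np.isclose(lam1, 0.0)]
F_sH = U_sH @ U_sH.T                 # bandlimiting projector on the edge side
F0 = np.eye(N)                       # vertex signal taken full-band

# Ground-truth signals: s1 = (sol + harmonic) + irrotational
s0 = np.random.randn(N)
s1_bar = F_sH @ np.random.randn(E)
s1 = s1_bar + B1.T @ s0

# Sampling: all N vertices; all edges except the last one (N1 = 4 >= |F_sH| = 2).
D_A = np.eye(N)
D_S = np.diag([1., 1., 1., 1., 0.])
Dbar_A, Dbar_S = np.eye(N) - D_A, np.eye(E) - D_S
s0_A, s1_S = D_A @ s0, D_S @ s1

# Invertibility conditions of Theorem 3
assert np.linalg.norm(Dbar_A @ F0, 2) < 1
assert np.linalg.norm(Dbar_S @ F_sH, 2) < 1

# Recovery via (86)
s0_hat = np.linalg.solve(np.eye(N) - Dbar_A @ F0, s0_A)
M = np.linalg.inv(np.eye(E) - Dbar_S @ F_sH)
s1_bar_hat = M @ (s1_S - D_S @ B1.T @ F0 @ s0_hat)

assert np.allclose(s0_hat, s0)
assert np.allclose(s1_bar_hat, s1_bar)
```

Dropping a second edge would still work as long as $\|\overline{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{\text{sH}}}\|_2<1$, i.e. as long as no unit-norm cycle signal is entirely supported on the unsampled edges; this is exactly the sampling condition of the theorem.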

B-C Proof of Theorem 4

Given the sampled signals $\mathbf{s}^{0}_{\mathcal{A}}$, $\mathbf{s}^{1}_{\mathcal{S}}$ and $\mathbf{s}^{2}_{\mathcal{M}}$, we have

$$\begin{aligned}\mathbf{s}^{0}_{\mathcal{A}}&=\mathbf{D}_{\mathcal{A}}\mathbf{s}^{0}\\ \mathbf{s}^{1}_{\mathcal{S}}&=\mathbf{D}_{\mathcal{S}}\mathbf{s}^{1}=\mathbf{D}_{\mathcal{S}}\mathbf{B}_{2}\mathbf{s}^{2}+\mathbf{D}_{\mathcal{S}}\mathbf{s}^{1}_{H}+\mathbf{D}_{\mathcal{S}}\mathbf{B}_{1}^{T}\mathbf{s}^{0}\\ \mathbf{s}^{2}_{\mathcal{M}}&=\mathbf{D}_{\mathcal{M}}\mathbf{F}_{\mathcal{F}_{2}}^{2}\mathbf{s}^{2}.\end{aligned} \qquad (88)$$

Then, using the bandlimitedness properties $\mathbf{s}^{0}=\mathbf{F}_{\mathcal{F}_{0}}^{0}\mathbf{s}^{0}$, $\mathbf{s}^{1}_{H}=\mathbf{F}_{\mathcal{F}_{H}}\mathbf{s}^{1}_{H}$ and $\mathbf{s}^{2}=\mathbf{F}_{\mathcal{F}_{2}}^{2}\mathbf{s}^{2}$, we get

$$\begin{bmatrix}\mathbf{s}^{0}_{\mathcal{A}}\\ \mathbf{s}^{1}_{\mathcal{S}}\\ \mathbf{s}^{2}_{\mathcal{M}}\end{bmatrix}=\bar{\mathbf{G}}\begin{bmatrix}\mathbf{s}^{0}\\ \mathbf{s}^{1}_{H}\\ \mathbf{s}^{2}\end{bmatrix} \qquad (89)$$

with

$$\bar{\mathbf{G}}=\begin{bmatrix}\mathbf{D}_{\mathcal{A}}\mathbf{F}_{\mathcal{F}_{0}}^{0} & \mathbf{O} & \mathbf{O}\\ \mathbf{D}_{\mathcal{S}}\mathbf{B}_{1}^{T}\mathbf{F}_{\mathcal{F}_{0}}^{0} & \mathbf{D}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{H}} & \mathbf{D}_{\mathcal{S}}\mathbf{B}_{2}\mathbf{F}_{\mathcal{F}_{2}}^{2}\\ \mathbf{O} & \mathbf{O} & \mathbf{D}_{\mathcal{M}}\mathbf{F}_{\mathcal{F}_{2}}^{2}\end{bmatrix}. \qquad (90)$$

Under the assumptions $\|\overline{\mathbf{D}}_{\mathcal{A}}\mathbf{F}_{\mathcal{F}_{0}}^{0}\|_{2}<1$, $\|\overline{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{H}}\|_{2}<1$ and $\|\overline{\mathbf{D}}_{\mathcal{M}}\mathbf{F}_{\mathcal{F}_{2}}^{2}\|_{2}<1$, the matrix $\bar{\mathbf{G}}$ is invertible and, using the inverse of partitioned matrices [68], we get

$$\mathbf{R}=\bar{\mathbf{G}}^{-1}=\begin{bmatrix}(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{A}}\mathbf{F}_{\mathcal{F}_{0}}^{0})^{-1} & \mathbf{O} & \mathbf{O}\\ \mathbf{P}_{1} & \mathbf{P}_{2} & \mathbf{P}_{3}\\ \mathbf{O} & \mathbf{O} & (\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{M}}\mathbf{F}_{\mathcal{F}_{2}}^{2})^{-1}\end{bmatrix} \qquad (91)$$

with $\mathbf{P}_{1}=-(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{H}})^{-1}\mathbf{D}_{\mathcal{S}}\mathbf{B}_{1}^{T}\mathbf{F}_{\mathcal{F}_{0}}^{0}(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{A}}\mathbf{F}_{\mathcal{F}_{0}}^{0})^{-1}$, $\mathbf{P}_{2}=(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{H}})^{-1}$, and $\mathbf{P}_{3}=-(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{H}})^{-1}\mathbf{D}_{\mathcal{S}}\mathbf{B}_{2}\mathbf{F}_{\mathcal{F}_{2}}^{2}(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{M}}\mathbf{F}_{\mathcal{F}_{2}}^{2})^{-1}$. Then we can recover the signals $\mathbf{s}^{0}$, $\mathbf{s}^{1}_{H}$ and $\mathbf{s}^{2}$ as

$$\begin{bmatrix}\mathbf{s}^{0}\\ \mathbf{s}^{1}_{H}\\ \mathbf{s}^{2}\end{bmatrix}=\mathbf{R}\begin{bmatrix}\mathbf{s}^{0}_{\mathcal{A}}\\ \mathbf{s}^{1}_{\mathcal{S}}\\ \mathbf{s}^{2}_{\mathcal{M}}\end{bmatrix}. \qquad (92)$$

This concludes the proof of point a). Let us now prove point b). From Proposition 2, $\mathbf{s}^{1}_{\text{irr}}$ is an $\mathcal{F}_{\text{irr}}$-bandlimited signal with $\mathcal{F}_{\text{irr}}$ given in (76). To find the bandwidth of the solenoidal part $\mathbf{s}^{1}_{\text{sol}}=\mathbf{B}_{2}\mathbf{s}^{2}$, we can proceed as in the proof of Proposition 2. Using the bandlimitedness of $\mathbf{s}^{2}$, i.e. $\mathbf{s}^{2}=\mathbf{F}_{\mathcal{F}_{2}}^{2}\mathbf{s}^{2}$, it follows that

$$\mathbf{s}^{1}_{\text{sol}}=\mathbf{B}_{2}\mathbf{U}_{\mathcal{F}_{2}}^{2}(\mathbf{U}_{\mathcal{F}_{2}}^{2})^{T}\mathbf{s}^{2} \qquad (93)$$

with $\mathbf{U}_{\mathcal{F}_{2}}^{2}\in\mathbb{R}^{T\times|\mathcal{F}_{2}|}$ the matrix whose columns $\mathbf{u}_{i}^{2}$, $i\in\mathcal{F}_{2}$, are eigenvectors of the second-order Laplacian $\mathbf{L}_{2}=\mathbf{B}_{2}^{T}\mathbf{B}_{2}$. From Proposition 1, to each eigenvector $\mathbf{u}^{2}$ with $\mathbf{u}^{2}\notin\ker(\mathbf{B}_{2}^{T}\mathbf{B}_{2})$ there corresponds an eigenvector $\mathbf{B}_{2}\mathbf{u}^{2}$ of $\mathbf{B}_{2}\mathbf{B}_{2}^{T}$ with the same eigenvalue. Denote by $\mathbf{U}_{\mathcal{F}_{\text{sol}}}$ the $E\times|\mathcal{F}_{\text{sol}}|$ matrix whose columns are the eigenvectors of $\mathbf{L}_{1}$ associated with the frequency index set $\mathcal{F}_{\text{sol}}$, with

$$\mathcal{F}_{\text{sol}}=\{\ell_{i}\in\mathcal{F}:\mathbf{u}_{\ell_{i}}=\mathbf{B}_{2}\mathbf{u}^{2}_{i},\ \mathbf{u}^{2}_{i}\notin\ker(\mathbf{B}_{2}^{T}\mathbf{B}_{2}),\ \forall i\in\mathcal{F}_{2}-\mathcal{C}_{2}\}. \qquad (94)$$

Let us write 𝐔22\mbox{$\mathbf{U}$}_{\mathcal{F}_{2}}^{2} as

𝐔22=[𝐔𝒞22,𝐔2𝒞22]\mbox{$\mathbf{U}$}_{\mathcal{F}_{2}}^{2}=[\mbox{$\mathbf{U}$}_{\mathcal{C}_{2}}^{2},\mbox{$\mathbf{U}$}_{\mathcal{F}_{2}-\mathcal{C}_{2}}^{2}]

where the columns of 𝐔𝒞22\mbox{$\mathbf{U}$}_{\mathcal{C}_{2}}^{2} are the c2=|𝒞2|0c_{2}=|\mathcal{C}_{2}|\geq 0 eigenvectors in the kernel of 𝐋2\mbox{$\mathbf{L}$}_{2} belonging to the bandwidth of 𝒔2\mbox{\boldmath$s$}^{2}. Therefore, it results

$$\mathbf{B}_{2}\mathbf{U}_{\mathcal{F}_{2}}^{2}=[\mathbf{O}_{\mathcal{C}_{2}},\,\mathbf{U}_{\mathcal{F}_{\text{sol}}}], \qquad (95)$$

where we used the equality $\mathbf{B}_{2}\mathbf{U}_{\mathcal{F}_{2}-\mathcal{C}_{2}}^{2}=\mathbf{U}_{\mathcal{F}_{\text{sol}}}$. Then equation (93) reduces to

$$\mathbf{s}^{1}_{\text{sol}}=\mathbf{U}_{\mathcal{F}_{\text{sol}}}(\mathbf{U}_{\mathcal{F}_{2}-\mathcal{C}_{2}}^{2})^{T}\mathbf{s}^{2}. \qquad (96)$$

Since $\mathbf{B}_{2}^{T}\mathbf{u}_{\ell_{i}}=\mathbf{u}_{i}^{2}$ for all $\mathbf{u}_{i}^{2}\notin\ker(\mathbf{B}_{2})$, we easily get

$$\mathbf{s}^{1}_{\text{sol}}=\mathbf{U}_{\mathcal{F}_{\text{sol}}}\mathbf{U}_{\mathcal{F}_{\text{sol}}}^{T}\mathbf{B}_{2}\mathbf{s}^{2}=\mathbf{U}_{\mathcal{F}_{\text{sol}}}\mathbf{U}_{\mathcal{F}_{\text{sol}}}^{T}\mathbf{s}^{1}_{\text{sol}}. \qquad (97)$$

Then 𝒔sol1\mbox{\boldmath$s$}^{1}_{\text{sol}} is a sol\mathcal{F}_{\text{sol}}-bandlimited edge signal with sol\mathcal{F}_{\text{sol}} given in (94). This implies that 𝒔1\mbox{\boldmath$s$}^{1} is \mathcal{F}-bandlimited with =solHirr\mathcal{F}=\mathcal{F}_{\text{sol}}\cup\mathcal{F}_{\text{H}}\cup\mathcal{F}_{\text{irr}} and bandwidth ||=|0|+|H|+|2|(c1+c2)|\mathcal{F}|=|\mathcal{F}_{0}|+|\mathcal{F}_{\text{H}}|+|\mathcal{F}_{2}|-(c_{1}+c_{2}). Let us now consider the system in (92). We get

$$\begin{aligned}\mathbf{s}^{0}&=(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{A}}\mathbf{F}_{\mathcal{F}_{0}}^{0})^{-1}\mathbf{s}^{0}_{\mathcal{A}}\\ \mathbf{s}^{1}_{H}&=-(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{H}})^{-1}\mathbf{D}_{\mathcal{S}}\mathbf{B}_{1}^{T}\mathbf{F}_{\mathcal{F}_{0}}^{0}(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{A}}\mathbf{F}_{\mathcal{F}_{0}}^{0})^{-1}\mathbf{s}^{0}_{\mathcal{A}}+(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{H}})^{-1}\mathbf{s}^{1}_{\mathcal{S}}\\ &\quad-(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{H}})^{-1}\mathbf{D}_{\mathcal{S}}\mathbf{B}_{2}\mathbf{F}_{\mathcal{F}_{2}}^{2}(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{M}}\mathbf{F}_{\mathcal{F}_{2}}^{2})^{-1}\mathbf{s}^{2}_{\mathcal{M}}\\ \mathbf{s}^{2}&=(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{M}}\mathbf{F}_{\mathcal{F}_{2}}^{2})^{-1}\mathbf{s}^{2}_{\mathcal{M}}.\end{aligned} \qquad (98)$$

Using the equations in (98) and the fact that $\mathbf{s}^{1}_{\mathcal{S}}=\mathbf{D}_{\mathcal{S}}\mathbf{B}_{2}\mathbf{s}^{2}+\mathbf{D}_{\mathcal{S}}\mathbf{B}_{1}^{T}\mathbf{s}^{0}+\mathbf{D}_{\mathcal{S}}\mathbf{s}^{1}_{H}$, it holds

$$\mathbf{s}^{1}_{H}=(\mathbf{I}-\overline{\mathbf{D}}_{\mathcal{S}}\mathbf{F}_{\mathcal{F}_{H}})^{-1}\mathbf{D}_{\mathcal{S}}\mathbf{s}^{1}_{H}. \qquad (99)$$

Hence, from (99), it follows that to perfectly recover the harmonic part of $\mathbf{s}^{1}$ we need a number of edge samples $N_{1}$ at least equal to the signal bandwidth $|\mathcal{F}_{H}|$. Finally, from the first and last equations in (98), we conclude that to retrieve $\mathbf{s}^{0}$ and the solenoidal signal $\mathbf{s}^{1}_{\text{sol}}$ we need, respectively, a number of node samples $N_{0}\geq|\mathcal{F}_{0}|$ and a number of triangle samples $N_{2}\geq|\mathcal{F}_{2}|$.
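The solenoidal correspondence (93)-(97) that underpins this proof can also be checked numerically. Below is a minimal sketch assuming a hypothetical simplicial complex of order two: the 4-vertex, 5-edge graph used above with both triangles $(0,1,2)$ and $(1,2,3)$ filled, so $\mathbf{B}_2$ has two columns (the orientations are illustrative choices). It verifies that $\mathbf{B}_1\mathbf{B}_2=\mathbf{O}$, that $\mathbf{B}_2$ maps non-kernel eigenvectors of $\mathbf{L}_2=\mathbf{B}_2^T\mathbf{B}_2$ to eigenvectors of $\mathbf{B}_2\mathbf{B}_2^T$ with the same eigenvalue (Proposition 1), and that $\mathbf{s}^1_{\text{sol}}=\mathbf{B}_2\mathbf{s}^2$ satisfies (97).

```python
import numpy as np

np.random.seed(1)
# Hypothetical toy complex: 4 vertices, 5 oriented edges, 2 filled triangles.
B1 = np.array([[-1., -1.,  0.,  0.,  0.],
               [ 1.,  0., -1., -1.,  0.],
               [ 0.,  1.,  1.,  0., -1.],
               [ 0.,  0.,  0.,  1.,  1.]])
# Columns: boundaries of triangles (0,1,2) and (1,2,3) in the edge basis
# e0=(0,1), e1=(0,2), e2=(1,2), e3=(1,3), e4=(2,3).
B2 = np.array([[ 1.,  0.],
               [-1.,  0.],
               [ 1.,  1.],
               [ 0., -1.],
               [ 0.,  1.]])
assert not np.any(B1 @ B2)            # boundary of a boundary is zero

# L2 eigendecomposition; here L2 = [[3,1],[1,3]] has no kernel, so c2 = 0.
lam2, U2 = np.linalg.eigh(B2.T @ B2)
s2 = U2 @ np.random.randn(2)          # full-band triangle signal
s1_sol = B2 @ s2                      # induced solenoidal edge signal

# Prop. 1: B2 u_i^2 is an eigenvector of B2 B2^T with the same eigenvalue
# (up to scaling, hence the normalization).
U_sol = B2 @ U2
U_sol /= np.linalg.norm(U_sol, axis=0)
for k in range(2):
    v = U_sol[:, k]
    assert np.allclose(B2 @ B2.T @ v, lam2[k] * v)

# Bandlimitedness check, equation (97): s1_sol = U_sol U_sol^T s1_sol
assert np.allclose(U_sol @ U_sol.T @ s1_sol, s1_sol)
```

The same construction, combined with the Theorem 3 sketch, gives the three-block recovery (98): each diagonal block is inverted independently, and the middle row removes the irrotational and solenoidal leakage from the edge samples before recovering the harmonic part.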