Signal Processing on Higher-Order Networks: Livin’ on the Edge ... and Beyond

In this tutorial, we provide a didactic treatment of the emerging topic of signal processing on higher-order networks. Drawing analogies from discrete and graph signal processing, we introduce the building blocks for processing data on simplicial complexes and hypergraphs, two common higher-order network abstractions that can incorporate polyadic relationships. We provide brief introductions to simplicial complexes and hypergraphs, with a special emphasis on the concepts needed for the processing of signals supported on these structures. Specifically, we discuss Fourier analysis, signal denoising, signal interpolation, node embeddings, and nonlinear processing through neural networks, using these two higher-order network models. In the context of simplicial complexes, we specifically focus on signal processing using the Hodge Laplacian matrix, a multi-relational operator that leverages the special structure of simplicial complexes and generalizes desirable properties of the Laplacian matrix in graph signal processing. For hypergraphs, we present both matrix and tensor representations, and discuss the trade-offs in adopting one or the other. We also highlight limitations and potential research avenues, both to inform practitioners and to motivate the contribution of new researchers to the area.


Introduction
Graphs provide a powerful abstraction for systems consisting of (dynamically) interacting entities. By encoding these entities as nodes and the interactions between them as edges in a graph, we can model a large range of systems in an elegant, conceptually simple framework. Accordingly, graphs have been used as models in a broad range of application areas [1,2], including neuroscience [3,4], urban transportation [5], and the social sciences [6]. Many of these applications may be understood in terms of graph signal processing (GSP), which provides a unifying framework for processing data supported on graphs. In GSP, we model complex data dependencies as the edges of graphs that relate signals on the nodes. In this way, GSP extends and subsumes classical signal processing concepts and tools, such as the Fourier transform, filtering, and the sampling and reconstruction of signals, to a graph-based setting [7,8,9].
To enable computations with graph-based data, we typically encode the graph structure in an adjacency matrix or its associated (normalized or combinatorial) Laplacian matrix. Rather than considering these matrices as simple tables that record pairwise coupling between nodes, it is fruitful to think of them as linear operators that map data from the node space to itself. By analyzing the properties of these maps, e.g., their spectral properties, we can reveal important aspects both of the graphs themselves and of signals defined on the nodes. Choosing an appropriate matrix operator associated with the graph structure is thus a key factor in gaining deeper insights about graphs and graph signals. In GSP, we call such maps that relate data associated with different nodes graph shift operators. Graph shift operators are natural generalizations of the classical time delay, and constitute the fundamental building blocks of graph filters and other more sophisticated processing architectures [10]. The rapid advancement of GSP has benefited significantly from spectral and algebraic graph theory [11], in which the properties of matrices such as the adjacency matrix and the Laplacian have been extensively studied.
By construction, graph-based representations do not account for interactions between more than two nodes, even though such multi-way interactions are widespread in complex systems: multiple neurons can fire at the same time [12], biochemical reactions usually include more than two proteins [13], and people interact in small groups [14]. To account for such polyadic interactions, a number of modeling frameworks have been proposed in the literature to represent higher-order relations, including simplicial complexes [15], hypergraphs [16], and others [17]. However, compared to this line of work on representing the structure of complex multi-relational systems, the literature on processing signals defined on higher-order networks is sparse. In this tutorial paper, we focus on the topic of signal processing on simplicial complexes and hypergraphs. Following a high-level didactic style, we concentrate on the algebraic representations of these objects, and discuss how the choice of this algebraic representation can influence the way in which we analyze and model signals associated with higher-order networks.
Similarly to graphs, higher-order interactions can be encoded in terms of matrices or, more generally, tensors. Two of the most prominent abstractions for such polyadic data are simplicial complexes [15] and hypergraphs [16]. As we will see in the following, both of these abstractions have certain advantages and disadvantages: hypergraphs are somewhat more flexible in terms of the relationships they can represent, which can be desirable in terms of modeling. Indeed, a simplicial complex may be interpreted as a specific hypergraph for which only certain sets of hyperedges are allowed. The advantage of simplicial complexes, however, is that this additional structure provides deep links to computational geometry and algebraic topology, which can facilitate both the computation and interpretation of the processed signals [18].
Analogously to the graph case, we encode higher-order relations in terms of incidence matrices or tensors that provide an algebraic description of these two data models. Clearly, the choice of the linear (or multilinear) operator representing higher-order interactions will matter for revealing interesting properties about the data, leading to the key question of how to choose an appropriate abstraction for this kind of data. In comparison to graphs, the analysis of higher-order interaction data is more challenging due to several factors: (i) there exists a combinatorially large number of possible interactions: two-way, three-way, and so on, so very large matrices and tensors are needed to capture all these relations; (ii) the large dimensionality of these representations gives rise to computational and statistical issues on how to efficiently extract information from higher-order data; and (iii) the theory on the structure of higher-order networks is largely unexplored relative to that of graphs. In the following, we will primarily focus on the question of choosing an appropriate algebraic descriptor to implement various signal processing tasks on simplicial complexes and hypergraphs. Specifically, we will consider the modeling assumptions inherent to an abstraction based on simplicial complexes versus hypergraphs, and discuss the relative advantages and disadvantages of a number of associated matrix and tensor descriptions that have been proposed. To make our discussions more concrete, we provide a number of illustrative examples to demonstrate how the choice of an algebraic description can directly affect the type of results we can obtain.
Outline. We first briefly recap selected concepts from signal processing and GSP in Section 2.
In Section 3, we present tools from algebraic topology and their use in representing higher-order interactions with simplicial complexes. In Section 4, we describe methods to analyze signals defined on simplicial complexes. We then turn our attention to hypergraphs in Section 5, and focus on the modeling of higher-order interactions via hypergraphs. Section 6 then builds on these models and outlines some of the existing methods for signal processing and learning on hypergraphs. Finally, in Section 7, we close with a brief discussion summarizing the main takeaways and laying out directions for future research.

Signal processing on graphs: A selective overview
Before discussing signal processing on higher-order networks, we revisit principles from signal processing and GSP [7,8,9] and recall some important problem setups, which will later guide our discussion on higher-order signal processing. In this tutorial, we focus on undirected graphs (and higher-order networks), although signal processing on directed graphs has been studied as well [19,20].

Central tenets of discrete signal processing
In discrete signal processing (DSP), signals are processed by filters. A linear filter H is an operator that takes a signal as input and produces a transformed signal as output. This linear filtering operation is represented by a matrix-vector multiplication s_out = H s_in and defines a linear system. A special role is played by the circular time-shift filter S, a linear operator that delays the signal by one sample. This so-called shift operator underpins the class of time shift-invariant filters, which is arguably the most important class of linear filters in practice. Specifically, in classical DSP, every linear time shift-invariant filter can be built as a matrix polynomial of the time shift S [21].
A filter represented by the matrix H is shift-invariant if it commutes with the shift operator, i.e., SH = HS. This implies that H and S preserve each other's eigenspaces. Since the cyclic shift S is a circulant matrix that is diagonalizable by discrete Fourier modes, the action of any shift-invariant linear filter in DSP can be understood by means of a Fourier transform. Specifically, the eigenvectors of the cyclic time-shift operator provide an orthogonal basis for linear time shift-invariant processing of discrete-time signals. Thus, time shift-invariant filters are naturally interpretable by Fourier analysis [21].
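These two facts, that polynomials in S commute with S and that circulant matrices are diagonalized by the DFT, can be checked numerically. The following NumPy sketch (our own toy example, with illustrative filter taps) builds the cyclic shift and a short polynomial filter:

```python
import numpy as np

N = 8
# Cyclic time-shift operator: (S s)[n] = s[(n-1) mod N], a delay by one sample.
S = np.roll(np.eye(N), 1, axis=0)

# A time shift-invariant filter built as a polynomial in S (arbitrary taps).
taps = [0.5, 0.3, 0.2]
H = sum(c * np.linalg.matrix_power(S, k) for k, c in enumerate(taps))

# Shift-invariance: H commutes with the shift operator.
assert np.allclose(S @ H, H @ S)

# S is circulant, hence diagonalized by the DFT: F S F^{-1} is diagonal.
F = np.fft.fft(np.eye(N))          # DFT matrix
D = F @ S @ np.linalg.inv(F)
assert np.allclose(D - np.diag(np.diag(D)), 0, atol=1e-9)
```

The diagonal entries of D are the frequency response of the pure delay; any polynomial in S shares the same Fourier eigenvectors.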

Graphs, incidence matrices, and the graph Laplacian
An undirected graph G is defined by a set of nodes V = {v_1, ..., v_N} with cardinality N and a set of edges E with cardinality E composed of unordered pairs of nodes in V. Edges can be stored in the symmetric adjacency matrix A whose entries are given by A_ij = A_ji = 1 if {i, j} ∈ E and 0 otherwise. Given the degree matrix D = diag(A1), the graph Laplacian associated with G is given by L = D − A. Alternatively to the adjacency matrix A, we can collect interactions between the nodes in the graph via the incidence matrix B ∈ R^{N×E}. For each edge e we define an arbitrary orientation, which we denote by e = (i, j). We think of such an edge e as being oriented from its tail node i to its head node j. Based on this orientation, the incidence matrix B is defined such that B_ie = −B_je = −1 and B_ke = 0 for all other nodes k. Using this definition we can provide an equivalent expression for the graph Laplacian as L = B B^T. In the remainder of this paper, we choose an edge orientation induced by the lexicographic ordering of the nodes, i.e., edges will always be oriented such that they point from a node with lower index to a node with higher index. However, we emphasize that this orientation is arbitrary and is distinct from the notion of a directed edge.
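The equivalence L = D − A = B B^T is easy to verify numerically. The following minimal sketch (a hypothetical four-node graph of our own choosing, edges oriented lexicographically) builds both constructions:

```python
import numpy as np

# Toy graph on 4 nodes; edges oriented lexicographically (low -> high index).
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
N = 4

A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A                 # L = D - A

# Incidence matrix B: -1 at the tail, +1 at the head of each oriented edge.
B = np.zeros((N, len(edges)))
for e, (i, j) in enumerate(edges):
    B[i, e], B[j, e] = -1.0, 1.0

# The two constructions of the Laplacian coincide: L = B B^T.
assert np.allclose(L, B @ B.T)
```

Flipping the orientation of any edge flips the sign of the corresponding column of B, but leaves B B^T, and hence L, unchanged, which is why the orientation is indeed arbitrary.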

Graph signal processing
GSP generalizes the concepts and tools from DSP to signals defined on the nodes of graphs. A graph signal s : V → R is a map from the set of nodes V to the set of real numbers R. This induces an isomorphism between the space of graph signals and the set of real-valued vectors of length N, so any graph signal may be represented as a vector s = [s_1, s_2, ..., s_N]^T ∈ R^N.
An example of a graph signal can be seen in Figure 1A, where the signal values at each node are indicated by the node color. Similarly to DSP, filtering in GSP can be represented by a matrix-vector multiplication operation s_out = H s_in. The analog of the shift operator S in the GSP setting is any operator that captures the relational dependencies between nodes, including the adjacency matrix A, the Laplacian matrix L, or variations of these operators [8,9]. As we are considering undirected graphs here, the choice of a shift operator imparts a natural orthogonal basis U in which to represent the signal. Given the eigenvalue decomposition of the shift operator S = U Λ U^T and a filtering weight function h : R → R, we can express a shift-invariant filter in this basis as
H = U h(Λ) U^T,   (1)
where we have used the shorthand notation h(Λ) = diag(h(λ_1), ..., h(λ_N)). By analogy to the Fourier basis in DSP, the eigenvectors U of the shift operator are said to define a graph Fourier transform (GFT), and h(Λ) is called the frequency response of the filter H. Specifically, the GFT of a graph signal s is given by ŝ = U^T s, while the inverse GFT is given by s = U ŝ [9,7].
As our discussion emphasizes, any filtered signal s_out = H s_in on an undirected graph can be understood in terms of three steps: (i) project the signal into the graph Fourier domain, i.e., express it in the orthogonal basis U (via multiplication with U^T); (ii) amplify certain modes and attenuate others (via multiplication with h(Λ)); and (iii) push the signal back to the original node domain (via multiplication with U). The choice of an appropriate shift operator is thus crucial, as its eigenvectors define the basis for any shift-invariant graph filter for undirected graphs. We will encounter this aspect again when considering signal processing on higher-order networks.
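The three-step view can be sketched directly in code. The following toy example (our own graph and an illustrative exponential low-pass response, not from the paper) uses the Laplacian as shift operator:

```python
import numpy as np

# Toy graph and its Laplacian as shift operator.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
N = 4
A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

lam, U = np.linalg.eigh(L)                     # S = L = U diag(lam) U^T
h = lambda x: np.exp(-2.0 * x)                 # illustrative low-pass response

s_in = np.array([1.0, -1.0, 2.0, 0.5])
s_hat = U.T @ s_in                             # (i)   GFT
s_hat = h(lam) * s_hat                         # (ii)  scale each mode by h(lam)
s_out = U @ s_hat                              # (iii) inverse GFT

# Equivalent to applying the filter matrix H = U h(Lam) U^T in one shot.
H = U @ np.diag(h(lam)) @ U.T
assert np.allclose(s_out, H @ s_in)
```

Since h here decays with λ, the filter attenuates high-frequency (large-eigenvalue) components, i.e., it smooths the input signal over the graph.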
In the context of GSP, we focus on the graph Laplacian as a shift operator. This choice has the following advantages. First, L is positive semidefinite, so that all the graph frequencies (eigenvalues) are real and non-negative. This enables us to order the GFT basis vectors (eigenvectors) in a natural way. Second, by considering the variational characterization of the eigenvalues of the Laplacian in terms of the Rayleigh quotient r(s) = s^T L s / (s^T s) = Σ_ij A_ij (s_i − s_j)^2 / (2 ||s||^2), it can be shown that eigenvectors associated with small eigenvalues have small variation along the edges of the graph (low frequency) and eigenvectors associated with large eigenvalues have large variation along edges (high frequency). In particular, eigenvectors associated with eigenvalue 0 are constant over connected components. An illustration of this is given in Figure 1B, which displays the individual basis vectors of the graph Laplacian, and the coefficients with which these basis vectors would have to be weighted to obtain the previously considered graph signal in Figure 1A.

Graph signal processing: Illustrative problems and applications
Over the last few years, several relevant problems have been addressed using GSP tools including sampling and reconstruction of graph signals [22,23,24], (blind) deconvolution [25,26], and network topology inference [27,28,29,30], to name a few.We now introduce a subset of illustrative problems and application scenarios that we will revisit in the context of higher-order signal processing.

Fourier analysis: Node embeddings and Laplacian eigenmaps
As discussed above, the GFT of a graph signal provides a fundamental tool of GSP. While we are often interested in filtering a signal and representing it in the vertex space, the Fourier representation can also be used to gain insight about specific graph components by considering a frequency-domain representation of the indicator vector associated with the vertices of interest. In particular, by considering a truncated Fourier-domain representation of the indicator vectors of individual nodes, we can recover a number of spectral node embeddings that have found a broad range of applications (see also [31] for a related discussion). Specifically, by considering a truncated Fourier-domain representation based on the normalized Laplacian as a shift operator, we recover a variant of the so-called Laplacian eigenmaps [32], and by additionally incorporating a scaling associated with the eigenvalues, we can recover the diffusion map embedding [33,31].
We remark that while most of these spectral node embeddings focus on low-frequency eigenvectors, high-frequency components can also be of interest for embeddings. For instance, if the graph to be analyzed is almost bipartite, then the eigenvectors associated with the highest frequencies of the graph Laplacian will reveal the two (almost) independent node sets in the graph. Other types of (nonlinear) node embeddings may also be viewed through a GSP lens, e.g., certain node embeddings derived from graph neural networks (cf. Section 2.4.4). We refer to [34] for an extensive discussion on the highly active area of node representation learning on graphs.
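A minimal eigenmap-style embedding can be sketched as follows. This is our own toy example, two triangles joined by a bridge edge, using the normalized Laplacian as in the Laplacian eigenmaps variant mentioned above:

```python
import numpy as np

# Two triangles joined by a single bridge edge: a graph with two clusters.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
N = 6
A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
d = A.sum(axis=1)
L_norm = np.eye(N) - A / np.sqrt(np.outer(d, d))    # normalized Laplacian

lam, U = np.linalg.eigh(L_norm)                     # ascending frequencies
# Embed each node via the first nontrivial low-frequency eigenvectors, i.e.,
# a truncated Fourier-domain representation of its indicator vector.
embedding = U[:, 1:3]

# The Fiedler-like coordinate separates the two triangles by sign.
fiedler = embedding[:, 0]
assert fiedler[:3].mean() * fiedler[3:].mean() < 0
```

The sign pattern of the first nontrivial coordinate splits the two clusters, which is exactly the behavior exploited by spectral clustering and eigenmap embeddings.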

Signal smoothing and denoising
A canonical task in GSP is to denoise (smooth out) a noisy signal y = y_0 + ε ∈ R^N, where y_0 is the true signal we aim to recover and ε is a vector of zero-mean white Gaussian noise [35]. A natural assumption is that the signal should be smooth in terms of the underlying graph, so that neighboring nodes will tend to take on similar values. Following our above discussion, this amounts to assuming that the signal has a low-pass characteristic, i.e., can be well represented by the low-frequency eigenvectors of the Laplacian. Indeed, the eigenvectors of the Laplacian associated with low eigenvalues are smooth on clusters, i.e., their total variation is low within clusters and high over edges between clusters.
We formalize the above problem in terms of the following optimization problem [36,27]:
ŷ = argmin_ŷ ||ŷ − y||_2^2 + α ŷ^T L ŷ,   (2)
where ŷ is the estimate of the true signal y_0. The coefficient α > 0 can be interpreted as a regularization parameter that trades off the smoothness promoted by minimizing the quadratic form ŷ^T L ŷ = Σ_ij A_ij (ŷ_i − ŷ_j)^2 / 2 and the fit to the observed signal in terms of the squared 2-norm. The optimal solution of (2) is given by [27]
ŷ = (I + αL)^{−1} y.   (3)
A different procedure to obtain a signal estimate is the iterative smoothing operation
ŷ = (I − µL)^k y,   (4)
for a certain fixed number of iterations k and a suitably chosen update parameter µ. This may be interpreted as k gradient descent steps on the cost function ŷ^T L ŷ.
Matching the signal modeling assumption of a smooth signal, the denoising and smoothing operators defined in (3) and (4) are instances of low-pass filters, i.e., filters whose frequency response h(Λ) = diag(U^T H U) is a vector of non-increasing values. In the GSP context, the low-pass filtering operation guarantees that variations over neighboring nodes are smoothed out, in line with the intuition of the optimization problem defined in (2).
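Both estimators can be sketched in a few lines. This is a toy example of our own (graph, signal, and parameters α, µ, k are all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two clusters joined by a bridge; a signal that is smooth within clusters.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
N = 6
A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

y0 = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])   # smooth true signal
y = y0 + 0.3 * rng.standard_normal(N)              # noisy observation

# Closed-form Tikhonov denoiser: y_hat = (I + alpha L)^{-1} y.
alpha = 1.0
y_hat = np.linalg.solve(np.eye(N) + alpha * L, y)

# Iterative smoothing: k gradient steps on the quadratic form y^T L y.
mu, k = 0.1, 10
y_iter = y.copy()
for _ in range(k):
    y_iter = (np.eye(N) - mu * L) @ y_iter

# Both estimates reduce the quadratic variation relative to the raw signal.
assert y_hat @ L @ y_hat < y @ L @ y
assert y_iter @ L @ y_iter < y @ L @ y
```

In the Laplacian eigenbasis, the closed-form estimator scales mode i by 1/(1 + αλ_i) and the iterative one by (1 − µλ_i)^k; both are non-increasing in λ_i, i.e., low-pass.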

Graph signal interpolation
Another common task in GSP is signal interpolation, which can alternatively be interpreted in terms of graph-based semi-supervised learning [23,37]. Suppose that we are given signal values (labels) for a subset of the nodes V_L ⊂ V of a graph. Our goal is to interpolate these assignments and to provide a label to all unlabeled nodes. As in the signal denoising case, it is natural to adopt a smoothness assumption positing that well-connected nodes have similar labels [38]. This motivates the following constrained optimization problem [39]:
min_ŷ ||B^T ŷ||_2^2  subject to  ŷ_i = y_i for all v_i ∈ V_L,   (5)
which aims to minimize the sum-of-squares label difference between connected nodes under the constraint that the observed node labels y_i are preserved in the optimal solution. Notice that the objective function in (5) can again be written in terms of the quadratic form of the graph Laplacian, ||B^T ŷ||_2^2 = Σ_{(i,j)∈E} (ŷ_i − ŷ_j)^2 = ŷ^T L ŷ, highlighting the low-pass modeling assumption inherent in the optimization problem (5).
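Eliminating the constraint leads to a linear system in the unlabeled entries, often called the harmonic extension. The following sketch (our own toy graph and labels; the node indices in V_L are arbitrary choices) solves it directly:

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
N = 6
A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

labeled = np.array([0, 4])            # observed nodes V_L
y_obs = np.array([1.0, -1.0])         # their labels
unlabeled = np.setdiff1d(np.arange(N), labeled)

# Minimize y_hat^T L y_hat subject to y_hat[labeled] = y_obs. Setting the
# gradient w.r.t. the free entries to zero gives L_uu y_u = -L_ul y_l.
L_uu = L[np.ix_(unlabeled, unlabeled)]
L_ul = L[np.ix_(unlabeled, labeled)]
y_hat = np.zeros(N)
y_hat[labeled] = y_obs
y_hat[unlabeled] = np.linalg.solve(L_uu, -L_ul @ y_obs)

# Interpolated values lie between the observed labels (maximum principle).
assert y_hat.min() >= -1.0 - 1e-9 and y_hat.max() <= 1.0 + 1e-9
```

Each interpolated value is a weighted average of its neighbors' values, which is why the solution never exceeds the range of the observed labels.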

Graph neural networks
Motivated by spectral interpretations of filters and shift operators in the domain of graph signal processing, graph neural networks [40,41] have emerged as a popular approach to incorporate nonlinearities in the graph signal processing pipeline for purposes of node embedding [42,43,44], node classification [45,46], and graph classification [46].Graph neural network architectures combine notions of graph filtering, permutation invariance, and graph Fourier analysis with nonlinear models from the design of neural networks.
One such architecture is the well-known graph convolutional network [45], which resembles the functional form of (4) with interleaved nonlinear, elementwise activation functions, i.e.,
Y_k = σ(H Y_{k−1} W_k),  k = 1, ..., K,   (6)
for a set of F_0 input features gathered in the columns of a matrix Y_0 ∈ R^{N×F_0}, where we take Y_K for some integer K as the output, {W_k}_{k=1}^K are learnable weight matrices that perform linear transformations in the feature space, H is a certain graph filter, and σ(·) is a general nonlinear activation function applied elementwise. Specifically, [45] uses a normalized version of the graph Laplacian as a first-order filter H, and the ReLU activation function for σ(·).
A closer look at (6) reveals a connection with the iterative smoothing method of (4). Taking σ(·) to be the identity mapping, we see that (6) can be expressed as a linear graph filter independently applied to each of the F_0 features, with outputs defined as linear combinations of these filtered features at each node via the matrices {W_k}. That is,
Y_K = H^K Y_0 W_1 W_2 ⋯ W_K,   (7)
where H^K itself represents a shift-invariant graph filter, due to the assumed shift-invariance of H. Taking F_0 = F_K = 1 and H = (I − µL) recovers the iterative smoothing procedure of (4). However, by interleaving nonlinear functions as in (6) and taking linear combinations of features via {W_k}, we allow the architecture to learn more sophisticated, nonlinear relationships between the nodes and node features by finding optimal weights {W_k} for a suitable loss function.
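A forward pass of (6) is only a few matrix products. The sketch below (random weights in place of trained ones; graph and feature dimensions are our own illustrative choices) uses the symmetrically normalized adjacency with self-loops as the first-order filter H, in the spirit of [45]:

```python
import numpy as np

rng = np.random.default_rng(1)
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
N = 4
A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# First-order filter: symmetrically normalized adjacency with self-loops.
A_tilde = A + np.eye(N)
d = A_tilde.sum(axis=1)
H = A_tilde / np.sqrt(np.outer(d, d))

F0, F1, F2 = 3, 8, 2
Y = rng.standard_normal((N, F0))               # Y_0: input node features
W1 = rng.standard_normal((F0, F1))             # untrained, for illustration
W2 = rng.standard_normal((F1, F2))

relu = lambda X: np.maximum(X, 0.0)
Y = relu(H @ Y @ W1)                           # layer 1: Y_1 = sigma(H Y_0 W_1)
Y = H @ Y @ W2                                 # layer 2: linear output layer
assert Y.shape == (N, F2)
```

With σ removed and F_0 = F_K = 1, the two layers collapse to H^2 scaled by a product of scalars, matching the reduction to (7).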
There are many variants of the graph neural network architecture, designed for tasks ranging from semi-supervised learning [45] to graph classification [46].We refer the reader to the survey paper [41] for further details, as well as [47] for a view focused on graph signal processing in particular.

Modeling higher-order interactions with simplicial complexes
In this section, we recap some of the mathematical underpinnings of simplicial complexes. We focus in particular on the Hodge Laplacian [15,48,49], which extends the graph Laplacian as a natural shift operator for simplicial complexes. Specifically, we discuss how the eigenvectors of the Hodge Laplacian provide an interpretable orthogonal basis for signals defined on simplicial complexes by means of the Hodge decomposition.

Background on simplicial complexes
Given a finite set of vertices V, a k-simplex S^k is a subset of V with cardinality k + 1. A simplicial complex X is a set of simplices such that for any k-simplex S^k in X, any subset of S^k must also be in X. A face of a simplex S^k is a subset of S^k with cardinality k. A co-face S^{k+1} of a simplex S^k is a (k + 1)-simplex such that S^k is a subset of S^{k+1}. More detailed discussions and definitions can, e.g., be found in [50,48,51].
For computational purposes, we define an orientation for each simplex by fixing an ordering of its vertices. This ordering induces a reference orientation by increasing vertex label. Based on the reference orientation for each simplex, we introduce a book-keeping of the relationships between (k − 1)-simplices and k-simplices via linear maps called boundary operators, which record higher-order interactions in networks. As the simplicial complexes we consider are all of finite order, these boundary operators can be represented by matrices B_k. The rows of B_k are indexed by (k − 1)-simplices and the columns of B_k are indexed by k-simplices. For instance, B_1 is nothing but the node-to-edge incidence matrix denoted B in Section 2, while B_2 is the edge-to-triangle incidence matrix.
Example 2. We adopt the lexicographic order to define the reference orientation of the simplices in Figure 2; the corresponding boundary maps B_1 and B_2 then follow directly from this orientation. We may consider signals defined on any k-simplices (nodes, edges, triangles, etc.) of a simplicial complex, as illustrated in Figure 2B-D. Just like for graph signals, we need to establish an appropriate shift operator to process such signals. While there are many possibilities, we will show in the next section that a natural choice for the shift operator is the Hodge Laplacian, a generalization of the graph Laplacian rooted in algebraic topology.
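The construction of B_1 and B_2 can be made concrete in code. The following sketch uses a small hypothetical complex of our own (not the one in Figure 2): four nodes, five edges, one filled triangle {0,1,2}, with the triangle {1,2,3} left unfilled as a "hole":

```python
import numpy as np

# A small illustrative complex: nodes {0,1,2,3}, five lexicographically
# oriented edges, and a single filled triangle {0,1,2}.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
triangles = [(0, 1, 2)]
N = 4

# B1: node-to-edge incidence, -1 at the tail and +1 at the head of each edge.
B1 = np.zeros((N, len(edges)))
for e, (i, j) in enumerate(edges):
    B1[i, e], B1[j, e] = -1.0, 1.0

# B2: edge-to-triangle incidence. For a reference-oriented triangle (i, j, k)
# with i < j < k, edges (i, j) and (j, k) agree with its orientation while
# (i, k) opposes it.
B2 = np.zeros((len(edges), len(triangles)))
for t, (i, j, k) in enumerate(triangles):
    B2[edges.index((i, j)), t] = 1.0
    B2[edges.index((j, k)), t] = 1.0
    B2[edges.index((i, k)), t] = -1.0

# Fundamental identity of boundary operators: "the boundary of a boundary
# is zero", i.e., B1 B2 = 0.
assert np.allclose(B1 @ B2, 0)
```

The identity B_1 B_2 = 0 is what later guarantees the orthogonality of the gradient and curl subspaces in the Hodge decomposition.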

The Hodge Laplacian as a shift operator for simplicial complexes
Based on the incidence matrices defined above, we can define a sequence of so-called Hodge Laplacians [48]. Specifically, the k-th combinatorial Hodge Laplacian, originally introduced in [52], is given by [52,48]:
L_k = B_k^T B_k + B_{k+1} B_{k+1}^T.   (8)
Notice that, according to this definition, the graph Laplacian corresponds to L_0 = B_1 B_1^T, with B_0 = 0. More generally, by equipping all spaces with an inner product induced by positive diagonal matrices, we can define a weighted version of the Hodge Laplacian (see, e.g., [50,48,49,53]). This weighted Hodge Laplacian encapsulates operators such as the random walk graph Laplacian or the normalized graph Laplacian as special cases. For simplicity, in this paper we concentrate on the unweighted case. Just like the graph Laplacian provides a useful choice of shift operator for node signals defined on a graph due to its (spectral) properties, the Hodge Laplacian and its weighted variants provide a natural shift operator for signals defined on the edges of a simplicial complex (or graph). As the edges in our simplicial complexes are equipped with a chosen reference orientation, the Hodge Laplacian is particularly relevant as a shift operator if the signals considered are indeed oriented, e.g., correspond to some kind of edge flow in the case of a signal on edges. Similar to the graph Laplacian, the Hodge Laplacian is positive semidefinite, which ensures that we can interpret its eigenvalues in terms of non-negative frequencies. Moreover, these frequencies are again aligned with a specific type of signal smoothness displayed by the eigenvectors of the Hodge Laplacian. For signals on general k-simplices, this notion of smoothness can be understood by means of the so-called Hodge decomposition [48,50,49], which states that the space of k-simplex signals can be decomposed into three orthogonal subspaces:
R^{N_k} = im(B_k^T) ⊕ im(B_{k+1}) ⊕ ker(L_k),   (9)
where im(·) and ker(·) are shorthand for the image and kernel of the respective matrices, ⊕ represents the direct sum of orthogonal subspaces, and N_k is the number of k-simplices (i.e., N_0 = N for node signals and N_1 = E for edge signals). Here we have (i) made use of the fact that the space of signals on a finite set of N_k simplices is isomorphic to R^{N_k}; and (ii) implicitly assumed that we are only interested in real-valued signals and thus a Hodge decomposition of a real-valued vector space (see [48] for a more detailed discussion).
To facilitate the discussion of how the Hodge decomposition (9) can be related to a notion of smooth signals, let us consider the concrete case k = 1 with Hodge Laplacian L_1 = B_1^T B_1 + B_2 B_2^T [49,54,55]. In this case, we can give the following meaning to the three subspaces in (9). First, the space im(B_1^T) can be considered as the space of gradient flows (or potential flows). Specifically, since im(B_1^T) = {f = B_1^T v, for some v ∈ R^N}, we may create any such flow according to the following recipe: (i) assign a scalar potential to all the nodes; (ii) induce a flow along the edges by considering the difference of the potentials on the respective endpoints. Clearly, we cannot create a positive net flow along any closed path within a complex if the flow at every edge is computed according to the gradient (difference) of the node potentials in the chosen reference orientation: the differences between the potentials along any closed path have to sum to zero, by construction. Accordingly, the space ker(B_1) = im(B_2) ⊕ ker(L_1) that is orthogonal to im(B_1^T) is the so-called cycle space. As indicated, the cycle space is spanned by two types of cyclic flows. The space im(B_2) consists of curl flows, and its elements are flows that can be composed of combinations of local circulations along any 2-simplex. Specifically, we may assign a scalar potential to each 2-simplex and consider the induced flows f = B_2 t, where t is the vector of 2-simplex potentials. Note that every column of B_2 creates a triangular circulation around the respective 2-simplex along its chosen reference orientation. Hence, these flows correspond to local cycles associated with the 2-simplices present in the simplicial complex. Finally, ker(L_1) is the harmonic space, whose elements correspond to (global) circulations that are not representable as linear combinations of curl flows.
Since the Hodge decomposition is orthogonal, given an edge flow c, the gradient component g = B_1^T p and the curl component r = B_2 w are obtained from the solutions of the following least squares problems:
min_p ||c − B_1^T p||_2   (10)    and    min_w ||c − B_2 w||_2.   (11)
The harmonic component satisfies L_1 h = 0 and, by the orthogonality of the Hodge decomposition, can be obtained as h = c − g − r. As explained in the text, g is an element of the space im(B_1^T), i.e., the gradient space or space of cycle-free flows. The components h ∈ ker(L_1) and r ∈ im(B_2) are elements of the cycle space ker(B_1) = im(B_2) ⊕ ker(L_1). As can be seen in Figure 3, the curl component r can be decomposed into two local circulations, of absolute magnitude 1 and 1.7, respectively.
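The whole decomposition fits in a short script. The sketch below uses a hypothetical toy complex of our own (a filled triangle {0,1,2} plus an unfilled triangle {1,2,3}, so the harmonic space is one-dimensional) and an arbitrary edge flow c:

```python
import numpy as np

# Toy complex: filled triangle {0,1,2}; triangle {1,2,3} left unfilled.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
triangles = [(0, 1, 2)]
N = 4
B1 = np.zeros((N, len(edges)))
for e, (i, j) in enumerate(edges):
    B1[i, e], B1[j, e] = -1.0, 1.0
B2 = np.zeros((len(edges), len(triangles)))
for t, (i, j, k) in enumerate(triangles):
    B2[edges.index((i, j)), t] = 1.0
    B2[edges.index((j, k)), t] = 1.0
    B2[edges.index((i, k)), t] = -1.0
L1 = B1.T @ B1 + B2 @ B2.T

c = np.array([1.0, -2.0, 0.5, 3.0, -1.0])      # an arbitrary edge flow

# Gradient component: g = B1^T p, with p the least squares solution.
p = np.linalg.lstsq(B1.T, c, rcond=None)[0]
g = B1.T @ p

# Curl component: r = B2 w, with w the least squares solution.
w = np.linalg.lstsq(B2, c, rcond=None)[0]
r = B2 @ w

# Harmonic component: the remainder, which lies in ker(L1).
h = c - g - r
assert np.allclose(L1 @ h, 0, atol=1e-9)

# The three components are mutually orthogonal and reconstruct c exactly.
assert abs(g @ r) < 1e-9 and abs(g @ h) < 1e-9 and abs(r @ h) < 1e-9
assert np.allclose(g + r + h, c)
```

Orthogonality of g and r follows from B_1 B_2 = 0, while h is orthogonal to both because it is the residual of both least squares projections.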
Importantly, the gradient, curl, and harmonic subspaces are spanned by certain subsets of eigenvectors of L_1, as shown by the following lemma, which can be verified by direct computation [56,49].
Lemma 4. Let L_1 = B_1^T B_1 + B_2 B_2^T be the Hodge 1-Laplacian of a simplicial complex. Then the eigenvectors associated with nonzero eigenvalues of L_1 comprise two groups that span the gradient space and the curl space, respectively.
• Consider any eigenvector v_i of the graph Laplacian L_0 associated with a nonzero eigenvalue λ_i. Then u_grad^(i) = B_1^T v_i is an eigenvector of L_1 with the same eigenvalue λ_i, and U_grad = [u_grad^(1), u_grad^(2), ...] spans the space of all gradient flows.
• Consider any eigenvector t_i of the "2-simplex coupling matrix" T = B_2^T B_2 associated with a nonzero eigenvalue θ_i. Then u_curl^(i) = B_2 t_i is an eigenvector of L_1 with the same eigenvalue θ_i, and U_curl = [u_curl^(1), u_curl^(2), ...] spans the space of all curl flows.
The above result shows that, unlike for node signals, edge-flow signals can have a high-frequency contribution, reflected by a large component in the corresponding projected space, due to two different types of (orthogonal) basis components being present in the signal: a high frequency may arise both due to a curl component and due to a strong gradient component present in the edge flow. This has certain consequences for the filtering of edge signals that we will discuss in more detail in the following section.
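The lemma can indeed be verified by direct computation, as the following sketch shows on a toy complex of our own (the same hypothetical filled-triangle-plus-hole example used above). The key identities are L_1 B_1^T v = B_1^T L_0 v and L_1 B_2 t = B_2 T t, both consequences of B_1 B_2 = 0:

```python
import numpy as np

# Toy complex: filled triangle {0,1,2}; triangle {1,2,3} left unfilled.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
triangles = [(0, 1, 2)]
N = 4
B1 = np.zeros((N, len(edges)))
for e, (i, j) in enumerate(edges):
    B1[i, e], B1[j, e] = -1.0, 1.0
B2 = np.zeros((len(edges), len(triangles)))
for t, (i, j, k) in enumerate(triangles):
    B2[edges.index((i, j)), t] = 1.0
    B2[edges.index((j, k)), t] = 1.0
    B2[edges.index((i, k)), t] = -1.0

L0 = B1 @ B1.T                                  # graph Laplacian
L1 = B1.T @ B1 + B2 @ B2.T                      # Hodge 1-Laplacian

# Lifting a nonzero-frequency eigenvector v of L0 via B1^T yields a
# gradient-flow eigenvector of L1 with the same eigenvalue.
lam0, V = np.linalg.eigh(L0)
for lam, v in zip(lam0, V.T):
    if lam > 1e-9:
        u = B1.T @ v
        assert np.allclose(L1 @ u, lam * u)

# Likewise, lifting an eigenvector t of T = B2^T B2 via B2 yields a
# curl-flow eigenvector of L1 with the same eigenvalue.
theta, Tv = np.linalg.eigh(B2.T @ B2)
for th, t_vec in zip(theta, Tv.T):
    if th > 1e-9:
        u = B2 @ t_vec
        assert np.allclose(L1 @ u, th * u)
```

Note that gradient and curl eigenvalues can interleave on the frequency axis, which is the formal reason why "high frequency" is ambiguous for edge flows.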

Signal processing and learning on simplicial complexes
Using the algebraic framework of simplicial complexes discussed in Section 3, in this section we revisit the four signal processing setups considered in Section 2.4 (Fourier analysis and embeddings, smoothing and denoising, signal interpolation, and nonlinear (graph) neural networks) and discuss how these can be extended to simplicial complexes by means of the Hodge Laplacian and associated boundary maps. For concreteness, we concentrate primarily on edge signals, though the results presented here can be extended to signals on any type of simplices.

Fourier analysis: Edge-flow and trajectory embeddings
In the same way that the (normalized) graph Laplacian provides a node embedding of the graph, the eigenvectors of the Hodge Laplacian L_1 can be used to induce a low-frequency edge embedding. As a concrete example, let us consider the harmonic embedding, i.e., the projection of an edge signal f onto the harmonic subspace, corresponding to the signal components with zero frequency:
f_harm = U_harm^T f,   (12)
where U_harm = [u_harm^(1), u_harm^(2), ...] collects the eigenvectors of the Hodge Laplacian L_1 associated with zero eigenvalues. As explained in Section 3, the harmonic space spanned by the vectors U_harm corresponds to (globally) cyclic flows that cannot be composed from locally cyclic flows (curl flows). Analogously to the embedding of nodes via indicator signals projected onto the low-frequency eigenvectors (i.e., eigenvectors associated with low eigenvalues) of the graph Laplacian, we can construct embeddings of individual edges using (12). Unlike for graphs, where such node embeddings can indicate a clustering of the nodes [57], an edge embedding into the harmonic subspace characterizes the position of an edge relative to the harmonic flows. Since the harmonic flows are in one-to-one correspondence with the 1-homology of the simplicial complex, i.e., the "holes" in the complex that are not filled with faces, such an embedding may be used to identify edges whose location is in accordance with particular harmonic cycles [49,58]. However, as the edges are equipped with an arbitrary reference orientation, the sign of the projection onto the harmonic space is arbitrary. This is a consequence of the fact that, unlike the graph Laplacian, the Hodge Laplacian is in general not invariant, but equivariant, under a change of the reference orientation of the edges (cf. Section 4.4). To account for this fact, one may use a clustering approach that is invariant to this arbitrary choice of sign. For instance, we can use subspace clustering as in [58], or consider the absolute value of the projection as discussed in [49].
Rather than aiming to group edges into clusters according to their relative position with respect to the 1-homology [58], we may be interested in grouping sequences of edges corresponding to trajectories on a simplicial complex by projecting appropriate indicator vectors of such trajectories into the harmonic space [49]. Here we represent a trajectory by a vector f with entries f_(i,j) = 1 if the edge (i, j) is part of the trajectory and traversed along the chosen reference orientation, f_(i,j) = −1 if the edge (i, j) is part of the trajectory and traversed opposite to the chosen reference orientation, and f_(i,j) = 0 otherwise.
Example 5. In Figure 4A, we construct a simplicial complex by drawing 400 random points in the unit square and generating a triangular lattice by Delaunay triangulation. We eliminate two points and all their adjacent edges in order to create two "holes" in the simplicial complex, which are not covered by a 2-simplex. These two holes are represented by orange shaded areas and can be interpreted as obstacles through which trajectories cannot pass. All (other) triangles are considered as 2-simplices. Accordingly, the Hodge Laplacian has two zero eigenvalues, associated with two harmonic functions u_harm^(1) and u_harm^(2). On the edges of the simplicial complex, we define five trajectories as displayed in Figure 4A. Figure 4B shows the corresponding embeddings of the flow vectors of each trajectory and their evolution in the embedding space. More explicitly, for a given trajectory we build the embedding sequentially as follows. The embedding starts at zero. We then iteratively project the next edge in the trajectory (accounting for the chosen reference direction) into the harmonic space. In our case, each edge is described by a position (u_1, u_2) in the harmonic space: one component along u_harm^(1) and the other along u_harm^(2). The embedding of the trajectory is then obtained by adding these position vectors of the individual edges. Note that, due to the linearity of the projection operation, this leads to the same final embedding (marked by a red dot) as if we had directly projected the full trajectory vector.
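The harmonic embedding just described is easy to reproduce numerically. The following sketch (assuming numpy; the 4-node cycle with one unfilled hole is a made-up toy complex, far smaller than the lattice in Figure 4) computes U_harm from the Hodge Laplacian and projects a trajectory indicator vector onto it:

```python
import numpy as np

# Node-to-edge incidence matrix B1 of a 4-node cycle (a single "hole"),
# edges in lexicographic order, oriented from lower to higher node index:
# e0=(0,1), e1=(0,3), e2=(1,2), e3=(2,3).
B1 = np.array([[-1, -1,  0,  0],
               [ 1,  0, -1,  0],
               [ 0,  0,  1, -1],
               [ 0,  1,  0,  1]], dtype=float)
B2 = np.zeros((4, 0))  # no 2-simplices: the square is an unfilled hole

L1 = B1.T @ B1 + B2 @ B2.T  # Hodge 1-Laplacian

# Harmonic space = eigenvectors of L1 with (numerically) zero eigenvalue.
evals, evecs = np.linalg.eigh(L1)
U_harm = evecs[:, np.isclose(evals, 0.0)]
print(U_harm.shape)  # (4, 1): one harmonic vector per hole

# Trajectory 0 -> 1 -> 2: indicator flow, +1 along the reference orientation.
f_traj = np.array([1.0, 0.0, 1.0, 0.0])
embedding = U_harm.T @ f_traj  # position of the trajectory w.r.t. the hole
```

With a single hole the embedding is one-dimensional; with the two holes of Example 5 one obtains the two-dimensional positions shown in Figure 4B. The sign of the embedding depends on the arbitrary sign of the harmonic eigenvector, as discussed above.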
Importantly, the embedding differentiates the topological properties of the trajectories. The magenta and olive green trajectories have a similar embedding since they both pass above the top left obstacle. The maroon and green trajectories pass between the two obstacles and have a similar embedding (negative coordinate along u_harm^(1) and zero component along u_harm^(2)). The orange trajectory is the only one that passes to the right of the bottom right obstacle. Hence, its embedding stands out from those of the other four trajectories in the embedding space. For a more extensive discussion of these aspects, see [49].
As we have seen in the above example, trajectories that behave similarly with respect to the 1-homology ("holes") of a simplicial complex will have a similar embedding [49]. One may thus, for instance, identify topologically similar trajectories on the simplicial complex by clustering the resulting points in the harmonic embedding. Such an approach is of interest for a number of applications: one can construct simplicial complexes and appropriate trajectory embeddings from a variety of flow data, including physical flows such as buoys drifting in the ocean [49], or "virtual" flows such as click streams or flows of goods and money. Related ideas for analyzing trajectories have also been considered in the context of traffic prediction [59].
While we have considered here only harmonic embeddings corresponding to signals with zero frequency, other types of embeddings may be of interest as well. We may, for instance, be interested in gradient-flow-based embeddings, which can be used to define a form of ranking of the nodes in terms of the associated potentials [60], or in other forms of flows, which are only approximately harmonic [55].

Figure 4: Embedding of trajectories defined on a simplicial complex. A: Five trajectories defined on a simplicial complex containing two obstacles, indicated in orange. The simplicial complex is constructed by creating a triangular lattice from a random set of points and then introducing two "holes" in this lattice. All triangles in the lattice are assumed to correspond to 2-simplices. B: The projection of the trajectories displayed in A into the two-dimensional harmonic space of the simplicial complex. Notice that trajectories that move around the obstacles in a topologically similar way have a similar embedding [49].

Flow smoothing and denoising
We now revisit the question of smoothing and denoising from the perspective of signals defined in the edge space of a simplicial complex X. In parallel, we provide a more in-depth discussion of the basis vectors and the notion of a smooth signal encapsulated in the Hodge 1-Laplacian L_1, and how it differs from the graph Laplacian [48,9,61].
Let us assume that the simplicial complex X is associated with oriented flows f^0 ∈ R^E defined on the edges. Like in the node-based setup discussed in Section 2.4.2, we assume that we can only observe a noisy version f = f^0 + ε of the true underlying signal, where ε is again a zero-mean white Gaussian noise vector of appropriate dimension. By analogy with the graph case, in order to obtain a smooth estimate of the true signal f^0 from the noisy signal f, it is tempting to adopt the successful procedures from GSP (cf. equation (2)) and solve the following optimization program for the edge flows:

min_{f̂} ||f − f̂||_2^2 + α f̂^⊤ Q f̂,

with optimal solution f̂ = H_Q f := (I + αQ)^{−1} f, where the matrix Q is a regularizer that needs to be chosen to ensure a smooth estimate. Following our discussion above, since the filter H_Q will inherit the eigenvectors of the regularizer Q, a natural choice for a regularizer is an appropriate (simplicial) shift operator. We discuss three possible choices for the regularizer (shift operator) Q: (i) the graph Laplacian L_LG of the line-graph of the underlying graph skeleton of the complex X, i.e., the line-graph of the graph induced by the 0-simplices (nodes) and 1-simplices (edges) of X; (ii) the edge Laplacian L_e = B_1^⊤ B_1, i.e., a form of the Hodge Laplacian that ignores all 2-simplices in the complex X, such that B_2 = 0; (iii) the Hodge Laplacian that takes all the triangles of X into account as well. Before embarking on this discussion, however, let us illustrate the effects of these choices by means of the following concrete example.
Example 6. Figure 5A displays a conservative (cyclic) flow on a simplicial complex, i.e., all of the flow entering a particular node exits the node again. This flow is then distorted by a Gaussian noise vector in Figure 5B. The estimation error produced by the filter based on the line-graph (Figure 5C) is substantially worse (36.54) than the estimation errors of the edge Laplacian (1.95; Figure 5D) and the Hodge Laplacian (1.02; Figure 5E) filters.
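The effect seen in Example 6 can be imitated on a small made-up complex (this is not the complex of Figure 5, and the error values will differ; numpy assumed). Because L_e and L_1 = L_e + B_2 B_2^⊤ commute, the Hodge Laplacian filter additionally damps the curl direction of the noise and therefore never does worse than the edge Laplacian filter when the ground truth is harmonic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy complex: nodes {0,1,2,3}; edges e0=(0,1), e1=(0,2), e2=(1,2),
# e3=(1,3), e4=(2,3); the triangle {0,1,2} is filled, {1,2,3} is a hole.
B1 = np.array([[-1, -1,  0,  0,  0],
               [ 1,  0, -1, -1,  0],
               [ 0,  1,  1,  0, -1],
               [ 0,  0,  0,  1,  1]], dtype=float)
B2 = np.array([[1, -1, 1, 0, 0]], dtype=float).T  # boundary of triangle {0,1,2}

L_e = B1.T @ B1        # edge Laplacian: ignores the filled triangle
L_1 = L_e + B2 @ B2.T  # full Hodge 1-Laplacian

# Ground truth: the harmonic flow (zero divergence and zero curl).
evals, evecs = np.linalg.eigh(L_1)
f0 = evecs[:, np.isclose(evals, 0.0)].ravel()

f_noisy = f0 + 0.3 * rng.standard_normal(5)

alpha, errs = 10.0, {}
for name, Q in [("edge", L_e), ("hodge", L_1)]:
    f_hat = np.linalg.solve(np.eye(5) + alpha * Q, f_noisy)  # H_Q = (I + aQ)^-1
    errs[name] = np.linalg.norm(f_hat - f0)
print(errs)  # the Hodge filter also damps the curl component of the noise
```

The line-graph Laplacian is omitted here; as discussed next, it induces a notion of smoothness that is ill-matched to flow signals, so its error would depend on the arbitrary edge orientations.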
Let us explain the results obtained with the individual filters in the above example in more detail, starting with the line-graph approach. As can be seen from Figure 5C, in this case the filtering operation leads to an increased error compared to the noisy input signal. This ineffectiveness of filtering by means of the line-graph Laplacian has been observed in [54]. The reason for this unintended behavior is that the line-graph Laplacian is not well suited as a shift operator for flow signals. The basis functions given by the eigenvectors of the line-graph Laplacian induce a notion of smooth, low-frequency signals which supposes that signals on adjacent edges in the simplicial complex have a small difference. This is analogous to the fact that low-frequency modes in the node space do not vary much across tightly connected nodes of a graph. For flow signals, however, this type of smoothness induced by the eigenvectors of the line-graph Laplacian is often not appropriate. Specifically, real-world flow signals typically display a large degree of flow conservation: most of the flow entering a node exits the node again, but the relative allocation of the flow to the edges does not have to be similar. Moreover, the line-graph Laplacian does not reflect the arbitrary orientation of the edges, so that the filtering performance depends on the chosen sign of the flow. Notice, however, that the line-graph can be a valid representation for processing signals on edges that do not encode flows and, as such, have no natural orientation. For example, one might expect the level of congestion on different roads to vary smoothly across edges, thus justifying the use of a line-graph in such a case.
Unlike the line-graph Laplacian, the Edge Laplacian captures a notion of flow conservation, which implies that smooth flows should be cyclic [54]. To see this, it is insightful to inspect the quadratic regularizer induced by L_e = B_1^⊤ B_1. Note that this quadratic form can be written as f^⊤ L_e f = ||B_1 f||_2^2. This is precisely the (summed) squared divergence of the flow signal f, as each entry (B_1 f)_i = Σ_r [B_1]_{ir} f_r corresponds to the difference between the inflow and the outflow at node i, where f_r is the flow on edge r = (i, j) and we have used a reference orientation induced by the lexicographic order. As a consequence, all cyclic flows induce zero cost for the regularizer f^⊤ L_e f, which may also be seen from the fact that ker(B_1) is precisely the cycle space of a graph with incidence matrix B_1. Stated differently, any flow that is not divergence free, i.e., not cyclic, will be penalized by the quadratic form. Since by the fundamental theorem of linear algebra ker(B_1) ⊥ im(B_1^⊤), any such non-cyclic flow can be written as a gradient flow f_grad = B_1^⊤ v for some vector v of scalar node potentials, in line with the Hodge decomposition discussed in (9). In contrast to the Edge Laplacian, the full Hodge Laplacian L_1 includes the additional term B_2 B_2^⊤, which may induce a non-zero cost even for certain cyclic flows. More precisely, any cyclic flow that can be written as a curl flow f_curl = B_2 t for some vector t will have a non-zero penalty. This penalty is incurred despite the fact that f_curl is a cyclic flow by construction (since B_1 f_curl = B_1 B_2 t = 0, the vector f_curl is clearly in the cycle space; see also the discussion in Section 3.2). The additional regularization term ||B_2^⊤ f||_2^2 may thus be interpreted as a squared curl penalty. From a signal processing perspective, the L_1-based filter thus allows for a more refined notion of a smooth signal. Unlike in the Edge Laplacian filter, we do not declare all cyclic flows to be maximally smooth and consist only of frequency (eigenvalue) 0 basis signals. Instead, a signal can have a high frequency even if it is cyclic, provided it has a large curl component. Hence, by constructing simplicial complexes with appropriate (triangular) 2-simplices, we have additional modeling flexibility for shaping the frequency response of an edge-flow filter [62].
In our example above, this more refined notion of a smooth signal is precisely what leads to an improvement in the filtering performance, since the ground truth signal is a harmonic function with respect to the simplicial complex and thus does not contain any curl components. We remark that the eigenvector basis of L_e can always be chosen to be identical to the eigenvectors of L_1; thus, we may represent any signal in exactly the same way in a basis of L_e or L_1. However, the frequencies associated with all cyclic vectors will be 0 for the Edge Laplacian, while for L_1 there will, in general, be cyclic flows with nonzero frequencies. This emphasizes that the construction of faces is an important modeling choice for the selection of an appropriate notion of a smooth signal.

Interpolation and semi-supervised learning
Let us now focus on the interpolation problem for edge data on a simplicial complex [55]. Analogously to node signals, we are given a simplicial complex (or its graph skeleton) and a set of "labeled" oriented edges E_L ⊂ E, i.e., we assume that we have measured the edge signals on some edges but not on all. Our goal is to predict the signals on the unlabeled or unmeasured edges in the set E_U ≡ E \ E_L, whose cardinality we denote by E_U. Following [55], we will again start by considering the problem setup with no 2-simplices (B_2 = 0), before we consider the general case in which 2-simplices are present.
To arrive at a well-defined problem for imputing the remaining edge flows, we need to make an assumption about the structure of the true signal. Following our above discussions, we will again assume that the true signal has a low-pass characteristic in the sense of the Hodge 1-Laplacian, i.e., that the edge flows are mostly conserved. Let f denote the vector of the true (partly measured) edge flow. As discussed in the context of flow smoothing, a convenient loss function to promote flow conservation is the sum-of-squares vertex divergence ||B_1 f̂||_2^2. We can then formalize the flow interpolation problem via the following optimization program:

min_{f̂} ||B_1 f̂||_2^2 + λ ||f̂||_2^2, subject to f̂_r = f_r for all r ∈ E_L. (16)

Note that, in contrast to the node signal interpolation problem, we have to add an additional regularization term ||f̂||_2^2 to guarantee the uniqueness of the optimal solution. The reason is that, if there is more than one independent cycle in the network for which we have no measurement available, we may add any cyclic flow on such a cycle without changing the cost function. To remedy this, we simply add a 2-norm regularization, which promotes small edge-flow magnitudes by default. Other regularization terms are possible as well; however, this formulation enables us to rewrite the above problem in least-squares form as described below.
To arrive at a least-squares formulation, we consider the trivial feasible solution f^0 for (16) that satisfies f^0_r = f_r if r ∈ E_L and f^0_r = 0 otherwise. Let us now define the expansion operator Φ as the linear map from R^{E_U} to R^E such that the true flow can be written as f = f^0 + Φ f_U, where f_U is the vector of the unmeasured true edge flows. Reducing the number of variables in this way, we can convert the constrained optimization problem (16) into the following equivalent unconstrained least-squares estimation problem for the unmeasured edges f_U:

min_{f_U} ||B_1 (f^0 + Φ f_U)||_2^2 + λ ||f^0 + Φ f_U||_2^2. (17)

We illustrate the above procedure by the following example.
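A minimal numerical sketch of this least-squares formulation, assuming numpy and a made-up four-edge graph (not the network of Figure 2A): the divergence penalty and the 2-norm regularizer are stacked into one least-squares system over the unmeasured edges.

```python
import numpy as np

# Toy graph: triangle 0-1-2 plus pendant edge 2-3;
# edges e0=(0,1), e1=(0,2), e2=(1,2), e3=(2,3).
B1 = np.array([[-1, -1,  0,  0],
               [ 1,  0, -1,  0],
               [ 0,  1,  1, -1],
               [ 0,  0,  0,  1]], dtype=float)

f_true = np.array([1.0, -1.0, 1.0, 0.0])   # divergence-free ground truth
labeled = np.array([True, False, False, True])

f0 = np.where(labeled, f_true, 0.0)        # trivial feasible solution
Phi = np.eye(4)[:, ~labeled]               # expansion operator R^{E_U} -> R^E

# Unconstrained least squares: divergence cost plus small 2-norm regularizer.
lam = 1e-3
A = np.vstack([B1, np.sqrt(lam) * np.eye(4)])
fU, *_ = np.linalg.lstsq(A @ Phi, -A @ f0, rcond=None)
f_hat = f0 + Phi @ fU
print(np.round(f_hat, 3))  # close to f_true
```

With λ → 0 the estimate approaches the unique divergence-free flow consistent with the two measurements; the small λ used here only shrinks the unmeasured entries slightly.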
Example 7. We consider the network structure in Figure 2A. The ground truth signal is f = (−2, −2, 4, −2, 3, −7, 7, 3, 4, −4)^⊤. We pick five labeled edges at random (colored in Figure 6A). The goal is to predict the labels of the unlabeled edges (in grey with a question mark in Figure 6A). The set of labeled edges is E_L = {(1,3), (1,4), (3,6), (4,5), (5,6)}; the set of unlabeled edges E_U consists of the remaining five edges.

Analogously to our discussion above, it may be relevant to include 2-simplices in the signal interpolation problem. We interpret such an inclusion of 2-simplices in two ways. From the point of view of the cost function, it implies that instead of penalizing primarily gradient flows (which have nonzero divergence), we additionally penalize certain cyclic flows, namely those that have a nonzero curl component. From a signal processing point of view, it means that we are changing what we consider a smooth (low-pass) signal, by adjusting the frequency representation of certain flows. Accordingly, one possible formulation of the signal interpolation problem including information about 2-simplices is

min_{f̂} ||B_1 f̂||_2^2 + ||B_2^⊤ f̂||_2^2 + λ ||f̂||_2^2,

subject to the constraint that the components of f̂ corresponding to measured flows are identical to those measurements. As in (17), we can convert this program into the least-squares problem

min_{f_U} ||B_1 (f^0 + Φ f_U)||_2^2 + ||B_2^⊤ (f^0 + Φ f_U)||_2^2 + λ ||f^0 + Φ f_U||_2^2.

Remark 8. Note that the problem of flow interpolation is tightly coupled to the issue of signal reconstruction from sampled measurements. Indeed, if we knew that the edge signal to be recovered was exactly bandlimited [56], then we could reconstruct the edge signal if we had chosen the edges to be sampled appropriately. Just like the interpolation problem considered here may be seen as a semi-supervised learning problem for edge labels, finding and choosing such optimal edges to be sampled may be seen as an active learning problem in the context of machine learning. While we do not expand further in this tutorial on the choice of edges to be sampled, we point the interested reader to two heuristic
active learning algorithms for edge flows presented in [55]. We also refer the reader to [61,56] for a theory of sampling and reconstruction of bandlimited signals on simplicial complexes, and to [63] for a similar overview that includes an approach for topology inference based on signals supported on simplicial complexes.

Beyond linear filters: Simplicial neural networks and Hodge theory
As discussed in Section 2.4.4, graph neural networks incorporate nonlinear activation functions in the graph signal processing pipeline in order to learn rich representations for graphs. In order to generalize these architectures to operate on simplicial complexes, we discuss the central concepts underpinning graph neural network architectures, so as to understand desirable properties of neural networks for higher-order data. Graph neural networks in the nodal domain typically have two important features in common:

Permutation equivariance. Although the nodes are given labels and an ordering for notational convenience, graph neural networks are not dependent on the chosen labeling of the nodes. That is, if the nodes and corresponding input labels were permuted in some way, the output of the graph neural network, modulo said permutation, would not change.
Locality. Graph neural networks in their most basic form operate locally on the graph structure. Typically, at each layer a node's representation is affected only by its own state and the states of its immediate neighbors. Forcing operations to occur locally is how the underlying graph structure is used to regularize the functional form of the graph neural network.
Based on these two principles, many architectures have been proposed, such as the popular graph convolutional network [45], which mixes one-step graph filters and nodewise nonlinearities for semi-supervised learning on the nodes of a graph. Indeed, there has been significant study of the nature of graph convolutional architectures in terms of the spectral properties of the chosen shift or filter operation [64].

Simplicial Graph Neural Networks
Motivated by work on graph neural networks in the node space, and by the effectiveness of the Hodge Laplacian for representing certain types of data supported on simplicial complexes as in Section 3, we now discuss considerations and limitations for building graph neural network architectures grounded in combinatorial Hodge theory. This approach to processing data on simplicial complexes has generated a flurry of interest recently, with convolutional architectures based on the Hodge Laplacians and boundary maps being proposed in [65,66,67,68]. As before, let X be a simplicial complex over a finite set of vertices V, with boundary operators {B_k}_{k=1}^K, where K is the order of X. We consider architectures built on the composition of matrix multiplications with boundary operators and/or Hodge Laplacians of varying order, aggregation functions, and nonlinear activation functions that obey permutation equivariance, locality, and the additional properties of orientation equivariance and simplicial locality.
We begin by defining orientation equivariance, which is a property analogous to permutation equivariance for graph neural networks [69].

Orientation equivariance. If the chosen arbitrary reference orientation of the simplices in X is changed, the output of the neural network architecture remains the same, modulo said change in orientation.
Due to the arbitrary nature of the simplex orientations, orientation equivariance is clearly a desirable property for a neural network architecture to have. For a simple class of convolutional neural networks for flows, we must choose the nonlinear activation function carefully in order to satisfy this property. Suppose we construct a simple architecture with weight matrices W_1, W_2 for flows on a simplicial complex based on L_1, of the form g(f) = σ(L_1 f W_1) W_2; we want g to respect a change of orientation. Let Θ ∈ R^{E×E} be a matrix taking values ±1 on the diagonal and zeros elsewhere, representing a change in orientation for each edge. Then, for a flow f and Hodge Laplacian L_1, this change in orientation is realized by Θf and ΘL_1Θ. Therefore, for orientation equivariance we need

σ(ΘL_1Θ Θf W_1) W_2 = Θ σ(L_1 f W_1) W_2 (21)

to hold for all flows f. Since Θ² = I, this amounts to requiring σ(ΘL_1 f W_1) = Θσ(L_1 f W_1); for this to be true, σ must be an odd function, so that it commutes with Θ. A natural extension of the notion of orientation equivariance is orientation invariance, which rewrites (21) as

σ(ΘL_1Θ Θf W_1) W_2 = σ(L_1 f W_1) W_2.

This property has greater utility for tasks such as graph classification, where a global descriptor is desired rather than an output on each simplex. Another consideration that does not typically arise in the design of graph neural networks is data supported on different levels of the complex. Data on a simplicial complex can lie on, e.g., nodes, edges, and faces simultaneously, motivating the need for architectures that pass data along the many levels of a simplicial complex. Analogous to the property of locality for graph neural networks, we consider a notion of locality for different levels of a simplicial complex.
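The role of the odd activation can be checked numerically. In this sketch (numpy assumed), we use a random symmetric matrix as a stand-in for L_1 and a minimal one-layer map σ(L_1 f) without weight matrices: tanh is odd and hence equivariant, while ReLU is not.

```python
import numpy as np

rng = np.random.default_rng(1)
E = 5
S = rng.standard_normal((E, E))
L1 = S + S.T                                  # stand-in symmetric shift operator
f = rng.standard_normal(E)
Theta = np.diag([1.0, -1.0, 1.0, -1.0, 1.0])  # flip the orientation of two edges

layer = lambda L, x, sigma: sigma(L @ x)      # minimal one-layer "architecture"

# Odd activation (tanh): the output transforms with Theta -> equivariant.
out_flip = layer(Theta @ L1 @ Theta, Theta @ f, np.tanh)
print(np.allclose(out_flip, Theta @ layer(L1, f, np.tanh)))  # True

# ReLU is not odd, so it does not commute with the sign flips.
relu = lambda x: np.maximum(x, 0.0)
out_flip_relu = layer(Theta @ L1 @ Theta, Theta @ f, relu)
print(np.allclose(out_flip_relu, Theta @ layer(L1, f, relu)))  # False
```

The same check goes through for the full form with weight matrices, since W_1 and W_2 act on feature dimensions and therefore commute with Θ.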
Simplicial locality. At each layer of an architecture with simplicial locality, information exchange only occurs between adjacent levels of the underlying simplicial complex, i.e., the output of a layer restricted to k-simplices depends only on the input of that layer restricted to (k−1)-, k-, and (k+1)-simplices.
As an illustrative example, loosely based on the architecture proposed in [66], consider a small two-layer neural network simultaneously operating over a simplicial complex of nodes, edges, and triangles. That is, the input to the neural network is a tuple of signals (v_0, f_0, t_0) on the vertices (graph signals), edges (flows), and triangles, respectively, and each layer performs a computation of the form

v_{ℓ+1} = σ(L_0 v_ℓ + B_1 f_ℓ), f_{ℓ+1} = σ(B_1^⊤ v_ℓ + L_1 f_ℓ + B_2 t_ℓ), t_{ℓ+1} = σ(B_2^⊤ f_ℓ + L_2 t_ℓ),

for some odd elementwise activation function σ. That is, at each layer, signals on each level of the simplicial complex are either lifted to the next-highest level via the coboundary operator, projected to their boundary using the boundary operator, or diffused via the Hodge Laplacian. This "lifting" and "projecting" can only occur between adjacent levels of the simplicial complex, owing to the fact that the composition of boundary operators is null, thereby satisfying simplicial locality.
We now examine the tuple of signals (v_2, f_2, t_2). First, suppose σ is the identity mapping, so that each signal in (v_2, f_2, t_2) is a linear function of (v_0, f_0, t_0). Then, one can check that the contribution of t_0 to v_2 and the contribution of v_0 to t_2 both vanish, since B_1 B_2 = 0. Each signal is thus strictly a function of the signals on its own level and the levels directly above and below it, even after multiple layers of the architecture are evaluated. This indicates that our architecture is incapable of incorporating information from nonadjacent levels of the simplicial complex, due to the composition of boundary operators being null; similar properties hold for linear variants of this example making use of boundary operators in this way. This is not the case, though, when σ is nonlinear. While B_1 B_2 t = 0 holds for all signals t on the faces, B_1 σ(B_2 t) ≠ 0 in general. By incorporating nonlinear activation functions, we facilitate the full incorporation of signals from all levels of the simplicial complex into the output at each level. We call this property extended simplicial locality.
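This breakdown of the cancellation B_1 B_2 = 0 under a nonlinearity can be verified directly. The sketch below (numpy assumed) uses a made-up complex with two filled triangles, so that B_2 t has entries of different magnitudes; an odd elementwise σ such as tanh then no longer preserves membership in im(B_2):

```python
import numpy as np

# Nodes {0,1,2,3}; edges e0=(0,1), e1=(0,2), e2=(1,2), e3=(1,3), e4=(2,3);
# both triangles {0,1,2} and {1,2,3} are filled 2-simplices.
B1 = np.array([[-1, -1,  0,  0,  0],
               [ 1,  0, -1, -1,  0],
               [ 0,  1,  1,  0, -1],
               [ 0,  0,  0,  1,  1]], dtype=float)
B2 = np.array([[1, -1, 1, 0, 0],
               [0,  0, 1, -1, 1]], dtype=float).T

t = np.array([2.0, 1.0])        # signal on the two triangles

lin = B1 @ (B2 @ t)             # linear: exactly zero, since B1 @ B2 = 0
nonlin = B1 @ np.tanh(B2 @ t)   # odd nonlinearity between the boundary maps

print(np.allclose(lin, 0))      # True: triangle data never reaches the nodes
print(np.allclose(nonlin, 0))   # False: the nonlinearity lets it through
```

Note that with a single triangle all entries of B_2 t share the same magnitude, so an odd elementwise σ would merely rescale it and the cancellation would survive; several faces (or weight matrices) are needed for the effect to appear.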
Extended simplicial locality. For an architecture with extended simplicial locality, the output restricted to k-simplices depends on the input restricted to simplices at all levels, not just those of order k−1, k, and k+1.
Notice that while simplicial locality is defined for each layer of an architecture, extended simplicial locality is a global property, so that both are simultaneously attainable. There is a trade-off in achieving extended simplicial locality by interleaving nonlinearities: although the entire simplicial structure now influences all levels of the output, the structure endowed by the boundary operators (namely, that the composition of boundary operators is null) is no longer in effect. Although the Hodge decomposition (9) can still be applied to the output signals of such an architecture, the expression of the space of k-simplex signals strictly in terms of upper and lower incidence through (k−1)- and (k+1)-simplices ceases to hold when considering the input and output jointly, as opposed to linear filters of the Hodge Laplacian. This motivates further consideration of how nonlinearities may be necessary in modeling higher-order data, as in the work of [70,71], where it is shown that higher-order opinion dynamics must be nonlinear, lest they be equivalently modeled by a purely pairwise system. That is, we must relax the structure of simplicial complexes in order to represent more general higher-order interactions. In doing so, we exchange the connection to algebraic topology for greater flexibility in modeling. This naturally leads to the consideration of hypergraphs and associated signal processing ideas, as discussed in the next section.

Modeling higher-order interactions via hypergraphs
In this section, we discuss hypergraphs as an alternative to simplicial complexes for modeling higher-order analogs of graphs, and then show how we can construct appropriate matrix-based and tensor-based shift operators for such hypergraphs to enable the development of signal processing tools.
An important feature of simplicial complexes is that, for every simplex present, all of its faces are also included in the complex (and recursively the corresponding faces of those, and so on). This inclusion property gives rise to the hierarchy of boundary operators, which anchors simplicial complexes in algebraic topology. However, this subset-inclusion property may be an undesirable restriction if we want to represent interactions that are exclusive to multiple nodes and do not imply interactions between all the subsets of those nodes. A related problem is the issue of (extended) simplicial locality discussed in the previous section, which arises from the restrictions imposed on the boundary operators of simplicial complexes. Finally, while simplices are endowed with a reference orientation and may be weighted, we might be interested in encoding other types of directionality or heterogeneous weighting schemes of group interactions, which are not easily compatible with the mathematical structure of simplicial complexes.
To illustrate the utility of hypergraphs as modelling tools, let us consider a number of concrete examples in which a hypergraph model may be preferred over a simplicial complex, before providing a more mathematical definition.
Example 9. In a co-authorship network [72], having a paper with three or more authors does not imply that these people have also written papers in pairs. Hypergraphs can distinguish these two cases, while graphs and simplicial complexes, in general, cannot. Moreover, the relative contributions of the authors to a paper may differ, and we may thus want a representation that enables us to assign heterogeneous weights within group interactions. This again can be done using hypergraphs [73]. An email network may be described using a directed hypergraph [74] whenever there exist emails with multiple senders or multiple receivers. This kind of directional information is difficult to encode in a simplicial complex (while graphs can encode the directionality here, they lose the higher-order information). Further examples in which hypergraphs appear naturally include word-document networks in text mining [75,76], gene-disease networks in bioinformatics [77,78], and consumer-product networks in e-commerce [79].
Mathematically, a typical hypergraph H = (V, E, ω) consists of a set of vertices V, a set of hyperedges E, and a function ω : E → R_+ that assigns positive weights to hyperedges. Hyperedges generalize edges in the sense that each hyperedge can connect more than two vertices. In the most common case, where there is one type of node and one type of hyperedge (namely, all hyperedges represent the same type of relationship, such as co-authorship), the hypergraph is called homogeneous. A hypergraph is called k-uniform if all of its hyperedges have the same cardinality k. Notice, in particular, that a hypergraph is a bona fide generalization of a graph, since a 2-uniform hypergraph reduces to a graph. More interestingly, a simplicial complex may be seen as a hypergraph satisfying the property that every subset of a hyperedge is also a hyperedge. Similar to a standard graph, a hypergraph can also be directed, in which case each (directed) hyperedge e is an ordered pair (T(e), H(e)), where T(e) and H(e) are two disjoint subsets of vertices called the tail and the head of e, respectively [80]. This flexibility is of interest, e.g., when modelling multiway communication patterns, as illustrated in the example of email networks above.
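In code, a homogeneous hypergraph is naturally stored as a collection of vertex sets, from which properties such as k-uniformity and the downward-closure condition distinguishing simplicial complexes can be checked directly (a minimal sketch; the toy co-authorship data and helper names are our own):

```python
from itertools import combinations

# A toy co-authorship hypergraph: one three-author paper, one two-author paper.
hyperedges = [frozenset({0, 1, 2}), frozenset({2, 3})]

def is_k_uniform(edges, k):
    """True iff every hyperedge has cardinality k (a 2-uniform hypergraph
    is an ordinary graph)."""
    return all(len(e) == k for e in edges)

def is_simplicial_complex(edges):
    """A hypergraph is a simplicial complex iff every nonempty proper subset
    of a hyperedge is also a hyperedge (downward closure)."""
    E = set(edges)
    for e in edges:
        for r in range(1, len(e)):
            if any(frozenset(s) not in E for s in combinations(e, r)):
                return False
    return True

print(is_k_uniform(hyperedges, 2))        # False: mixed cardinalities
print(is_simplicial_complex(hyperedges))  # False: e.g. {0, 1} is missing
```

This makes the distinction above concrete: the three-author paper does not force pairwise co-authorship hyperedges into existence, which is exactly the freedom a simplicial complex forbids.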
While the standard framework of hypergraphs is already very flexible, in recent years several more elaborate hypergraph models have been proposed to better represent real-world datasets: (1) Heterogeneous hypergraphs refer to hypergraphs containing different types of vertices and/or different types of hyperedges [81,82,83,84] and may thus be seen as a generalization of multilayer and multiplex networks. For example, in a GPS network [85], a hyperedge can have three types of vertices (user, location, activity). Another example is online social networks such as Twitter, in which we can have different types of vertices, including users, tweets, usertags, hashtags, and groups, as well as multiple types of hyperedges, such as 'users release tweets containing hashtags or not', 'users join groups', and 'users assign usertags to themselves' [86].
(2) Edge-dependent vertex weights are introduced into hypergraphs in [73,75,76] to reflect the different contributions (e.g., importance or influence) of the vertices in the same hyperedge. More precisely, for each hyperedge e ∈ E, a function γ_e : e → R_+ is defined to assign positive weights to the vertices in this hyperedge. For instance, in the co-authorship network in Example 9, the different levels of contribution of the authors of a paper can be encoded as edge-dependent vertex weights. If γ_e(v) = γ_{e'}(v) for every vertex v and every pair of hyperedges e and e' containing v, then we say that the vertex weights are edge-independent. Such hypergraphs are also called vertex-weighted hypergraphs [87]. Moreover, if γ_e(v) = 1 for all vertices v and incident hyperedges e, the vertex weights are trivial and we recover the homogeneous hypergraph model. (3) In order to leverage the fact that different subsets of vertices in one hyperedge may have different structural importance, the concept of an inhomogeneous hyperedge is proposed in [88]. Each inhomogeneous hyperedge e is associated with a function w_e : 2^e → R_{≥0} that assigns non-negative costs to different cuts of the hyperedge, where 2^e denotes the power set of e. The weight w_e(S) indicates the cost of partitioning the hyperedge e into two subsets S and e \ S. When w_e satisfies submodularity constraints, this is called a submodular hypergraph [89].
Similar to graphs and simplicial complexes, a key factor in developing signal processing tools for hypergraphs is the definition of an appropriate shift operator. For simplicial complexes, we argued that the Hodge Laplacian is a natural and principled operator for this purpose. For hypergraphs, there are two major approaches to their mathematical representation, which induce different kinds of shift operators.
The first option is to use a matrix-based representation and derive a shift operator from it, akin to the approach of GSP. As any matrix may be interpreted as an adjacency matrix and thus induces a weighted, directed graph, this procedure may be understood as first deriving a graph-based representation of the hypergraph and then using an algebraic representation of this graph (e.g., its adjacency or Laplacian matrix) as the algebraic shift operator of the hypergraph.
The second option is to represent the hypergraph using a tensor, i.e., a multi-dimensional array representation instead of the 2-dimensional array representation provided by matrices (we refer to [90,91,92] for a general introduction to tensors and tensor decompositions). While this provides, in principle, a richer set of possible representations of the shift operator, there are also challenges associated with this procedure, as the definition of a hypergraph signal and its processing is less grounded in GSP and related techniques. In the following subsections, we respectively discuss these two choices of representations, starting with matrix-based representations.

Matrix-based hypergraph representations
The most common approach for dealing with hypergraph-structured data is to encode the hypergraph as a matrix. When interpreting the corresponding matrices as graphs, many of these matrix-based approaches can thus, alternatively, be viewed as deriving a graph representation for the hypergraph. Accordingly, these approaches are often described in terms of graph expansions. We prefer the term matrix representation here, as the fact that we encode a particular data structure via a matrix does not imply that the data structure is itself a graph (possibly with weights and signed edges). For instance, we studied matrix-based representations of simplicial complexes in the previous sections, but these would typically not be considered graph expansions of a simplicial complex.
Let us now discuss some of the most common matrix-based hypergraph representations and transformations (see Figure 7 for a visual overview of the discussed variants), including the so-called clique and star expansions as the most popular variants [93]. To this end, consider a homogeneous hypergraph H = (V, E, ω) and define the vertex-to-hyperedge incidence matrix Z ∈ R^{|V|×|E|} with entries Z_{ve} = 1 if vertex v belongs to hyperedge e, and Z_{ve} = 0 otherwise. In addition, we represent the weights of the hyperedges by the diagonal matrix W ∈ R^{|E|×|E|}, whose diagonal corresponds to the hyperedge weights.
Let us first consider the so-called star-graph expansion (Figure 7D) [94,95]. Using the matrices defined above, the star-graph expansion can be explained by constructing the adjacency matrix of a bipartite graph, A_* = [0, ZW; WZ^⊤, 0]. When interpreted in terms of a graph, this construction may be explained as follows: we introduce a new vertex for each hyperedge, and each of these vertices is then connected to all the (original) vertices in this hyperedge, with a weight corresponding to the weight of the hyperedge. The constructed weighted graph G_* = (V_*, E_*, ω_*) thus has a vertex set V_* = V ∪ E, an edge set E_* = {(v, e) : v ∈ e, e ∈ E}, and an edge weight function ω_*(v, e) = ω(e). Many other weight functions are possible here as well; e.g., we may normalize by the cardinality of the hyperedges. By constructing appropriate Laplacian operators (combinatorial or normalized) of such a star expansion matrix, we can thus obtain a shift operator for the hypergraph in a straightforward fashion. An alternative matrix-based representation that can be derived from the same matrices defined above is the clique expansion (Figure 7C) [96,97,98,99]. In matrix terms, this corresponds to projecting out the hyperedge dimension of the incidence matrix Z.
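The star-expansion construction just described can be sketched in a few lines; the incidence matrix and hyperedge weights below are a hypothetical toy example, invented purely for illustration:

```python
import numpy as np

# Toy hypergraph (hypothetical): 4 vertices, 2 hyperedges
# e0 = {0, 1, 2}, e1 = {1, 2, 3}; incidence Z in R^{|V| x |E|}
Z = np.array([[1, 0],
              [1, 1],
              [1, 1],
              [0, 1]], dtype=float)
W = np.diag([1.0, 2.0])            # hyperedge weights ω(e)

nV, nE = Z.shape
# Bipartite star-expansion adjacency: original vertices first, then one
# auxiliary vertex per hyperedge, linked with weight ω(e).
A_star = np.zeros((nV + nE, nV + nE))
A_star[:nV, nV:] = Z @ W
A_star[nV:, :nV] = (Z @ W).T

# Combinatorial Laplacian of the star expansion as a hypergraph shift
L_star = np.diag(A_star.sum(axis=1)) - A_star
```

Each row of `L_star` sums to zero, as for any combinatorial graph Laplacian, so constant signals on the expanded vertex set lie in its nullspace.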
Specifically, if we assume unit hyperedge weights for simplicity, the clique expansion may be computed by forming the product ZZ^⊤. As this matrix has a nonzero diagonal, we can simply set the diagonal to zero to obtain a basic clique expansion matrix A_c = ZZ^⊤ − Diag(diag(ZZ^⊤)). By including various weighting factors, alternative variants of this matrix can be derived. The name clique expansion becomes intuitive if we again interpret A_c as the adjacency matrix of a graph: the above construction corresponds to replacing every hyperedge with a clique subgraph. More precisely, the clique expansion leads to the adjacency matrix of a graph G_c = (V, E_c, ω_c) with edge set E_c = {(u, v) : u, v ∈ e for some e ∈ E, u ≠ v}. One of the most common definitions for the edge weighting function in this context is ω_c(u, v) = Σ_{e∈E : u,v∈e} ω(e), i.e., the edge weight in the graph is simply given by the sum of the weights of the hyperedges that contain the two endpoints. However, many other weighting schemes are conceivable.
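The basic clique expansion is equally direct to compute from the incidence matrix; a minimal sketch on a hypothetical toy hypergraph (unit hyperedge weights assumed, as above):

```python
import numpy as np

# Toy incidence matrix (hypothetical): e0 = {0, 1, 2}, e1 = {1, 2, 3}
Z = np.array([[1, 0],
              [1, 1],
              [1, 1],
              [0, 1]], dtype=float)

G = Z @ Z.T                        # Z Z^T: co-membership counts
A_c = G - np.diag(np.diag(G))      # zero the diagonal: basic clique expansion
```

Here the entry `A_c[1, 2]` equals 2, since vertices 1 and 2 share two hyperedges; this matches the weighting ω_c(u, v) = Σ_{e : u,v∈e} ω(e) with unit weights.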
As has been shown in [93], many hypergraph learning algorithms [94,95,96,97,98,99] correspond to either the clique or star expansion with an appropriate weighting function. However, apart from these common expansions, there also exist other methods for projecting hypergraphs to graphs, such as constructing a line graph [100]. This line-graph expansion for hypergraphs (see Figure 7E for an illustration) may be computed in terms of (weighted variants of) the second possible projection of the incidence matrix Z, namely Z^⊤Z. Apart from these three canonical types of graph representations (star, clique, and line graph) that can be derived from the incidence matrix Z and additional (weighting) transformations, a few other matrix-based schemes have been proposed for representing hypergraphs. For instance, the recent paper [101] proposes the so-called line expansion of a hypergraph (different from the line graph; see Figure 7F), which is isomorphic to the line graph of its star expansion and aims to unify the clique and star expansions. In the line expansion, each incident vertex-hyperedge pair is considered as a "line node", and two "line nodes" are connected if they share either the vertex or the hyperedge. We remark that in some cases we might be more interested in the dual of a hypergraph, in which the roles of vertices and hyperedges are interchanged and the incidence matrix is Z^⊤ [78]; see Figure 7B.
While we have so far considered only homogeneous hypergraphs, Laplacian matrices have also been proposed for more general hypergraph models. For instance, [73,75,88] use variants of the clique expansion to derive matrix representations of hypergraphs with edge-dependent vertex weights or inhomogeneous hyperedges. Specifically, in [73,75] hypergraphs with edge-dependent vertex weights are projected onto asymmetric matrices, corresponding to induced directed graphs with self-loops. The authors then apply established combinatorial and normalized Laplacians for digraphs [102] to these matrices to derive a Laplacian matrix for hypergraphs. Finally, in [88], a novel algorithm for assigning edge weights to the graph representation is proposed, allowing for non-uniform expansions of hyperedges.
As the above discussion shows, there is an enormous variety of matrix-based representations for hypergraphs, and the relative advantages and disadvantages of these constructions are still sparsely understood. Ultimately, the choice of a particular matrix representation corresponds to a specific model for what constitutes a smooth signal on a hypergraph. We believe that a better understanding of the spectral properties of the individual constructions will thus be an important step toward choosing good matrix representations for different application scenarios.

Tensor-based hypergraph representations
Instead of working with matrix-based representations, hypergraphs can alternatively be represented by tensors. A tensor is simply a multi-dimensional array, whose order is the number of indices needed to label an element in the tensor [90]. For instance, a vector and a matrix are a first-order and a second-order tensor, respectively. Several different versions of a hypergraph adjacency tensor have been proposed in existing work [103,104,105,106,107,108,109,110,111,112]. In this section, we focus on unweighted hypergraphs to keep our exposition accessible and to remain consistent with the majority of the existing work in this domain.
Due to their relative simplicity, k-uniform hypergraphs were the first to be studied in the literature. As every hyperedge is of the same order, a k-uniform hypergraph with N nodes can be naturally represented by a kth-order adjacency tensor A ∈ R^{N×N×···×N}, where each index ranges from 1 to N, and the entries of A are defined as follows [103,104]:

A_{i_1 i_2 ··· i_k} = 1 if {v_{i_1}, ..., v_{i_k}} ∈ E.     (30)

Every other entry in A is set to zero. Similarly to how it can be meaningful to normalize the adjacency matrix, normalized versions of this adjacency tensor have been proposed as well. In [105], the tensor in (30) is normalized by 1/(k − 1)!. This normalization guarantees that the degree of a vertex v_i, i.e., the number of hyperedges that it belongs to, can be retrieved by summing the entries in the tensor whose first mode index is i, namely deg(v_i) = Σ_{i_2,...,i_k} A_{i i_2 ··· i_k} [108]. This is desirable because it resembles the way of obtaining the degree of a vertex in a graph from its adjacency matrix. Another normalized adjacency tensor is proposed in [106], in which the nonzero entries are further normalized using the vertex degrees and the rest of the entries are equal to zero. Its associated normalized Laplacian tensor is defined as L = J − A, where J is a tensor of the same size as A whose entry J_{i i ··· i} = 1 if deg(v_i) > 0 and 0 otherwise. This normalization ensures that L has certain desirable spectral properties that mimic those of the normalized graph Laplacian [106]. For example, the eigenvalues of L as defined in [113] are guaranteed to be contained in [0, 2]. Having a bounded spectrum has been shown to be useful in GSP for the stability analysis of graph filters [114].
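The k-uniform adjacency tensor with the 1/(k − 1)! normalization of [105] can be sketched as follows; the 3-uniform hypergraph below is a hypothetical toy example. Summing over all but the first mode then recovers the vertex degrees:

```python
import numpy as np
from itertools import permutations
from math import factorial

# Hypothetical 3-uniform hypergraph on 4 vertices
N, k = 4, 3
edges = [(0, 1, 2), (1, 2, 3)]

A = np.zeros((N,) * k)
for e in edges:
    for p in permutations(e):            # symmetric tensor: all index orderings
        A[p] = 1.0 / factorial(k - 1)    # 1/(k-1)! normalization of [105]

# Summing the entries whose first mode index is i recovers deg(v_i),
# the number of hyperedges incident to v_i.
deg = A.sum(axis=(1, 2))
```

For this toy example `deg` equals `[1, 2, 2, 1]`, since vertices 1 and 2 belong to both hyperedges while vertices 0 and 3 belong to one.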
For hypergraphs with non-uniform hyperedges, i.e., hyperedges of different sizes, the above construction does not extend easily. Since some edges will have smaller cardinality than others, some indices in the adjacency tensor would simply be undefined. A naive approach would be to keep an adjacency tensor for each observed cardinality of hyperedges, but this approach is computationally impractical. An alternative is to augment the above construction of an adjacency tensor for general homogeneous hypergraphs as follows. Denote by m the cardinality of the largest hyperedge across all hyperedges e ∈ E. Then, we construct an adjacency tensor of order m according to the following rules [107]. For every hyperedge e = {v_{i_1}, ..., v_{i_s}} ∈ E of cardinality s ≤ m, we assign nonzero entries A_{p_1 ··· p_m} to A, where the indices p_1, ..., p_m are drawn from {i_1, ..., i_s} such that each of these vertices appears at least once.

Having defined adjacency and Laplacian tensors, we can now construct appropriate shift operators based on these tensors. In the context in which we are interested in processing signals y = [y_1, y_2, ..., y_N]^⊤ defined on the nodes, the following approach has been proposed [112]. First, given the signal vector y, construct the (m − 1)th-order outer-product tensor

Y = y ∘ y ∘ ··· ∘ y   ((m − 1) factors),     (33)

where ∘ denotes the tensor outer product and m is the order of the adjacency or Laplacian (shift) tensor of interest. Then, the hypergraph shift operation leading to the output signal y_out ∈ R^N is defined elementwise as

[y_out]_i = Σ_{j_1,...,j_{m−1}} S_{i j_1 ··· j_{m−1}} Y_{j_1 ··· j_{m−1}},     (34)

where S_{i j_1 ··· j_{m−1}} are the entries of the chosen shift tensor. Equivalently, we may express the above in terms of tensor mode products. Note that, due to the symmetry of the tensor S, it does not matter which mode we leave out in the tensor multiplication, i.e., which of the indices is kept fixed to i in (34). Furthermore, for the specific case where m = 2, we have that Y = y in (33) and the shift operation in (34) boils down to a standard matrix-vector multiplication as in GSP.

As in the graph case, where the entry S_{ij} of the shift operator indicates the shift from vertex v_j to vertex v_i, the entry S_{i j_1 ··· j_{m−1}} of the hypergraph shift operator indicates the shift within one hyperedge following the order of the indices. Figure 8 illustrates this shift process.
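A minimal sketch of the tensor shift in (33) and (34) for a hypothetical 3-uniform toy hypergraph, using the 1/(m − 1)!-normalized adjacency tensor as the shift tensor S:

```python
import numpy as np
from itertools import permutations
from math import factorial

# Hypothetical 3-uniform hypergraph on 4 vertices; shift tensor of order m = 3
N, m = 4, 3
edges = [(0, 1, 2), (1, 2, 3)]
S = np.zeros((N,) * m)
for e in edges:
    for p in permutations(e):
        S[p] = 1.0 / factorial(m - 1)

y = np.array([1.0, 2.0, 3.0, 4.0])
# (m-1)-fold outer product Y = y ∘ y, as in (33) for m = 3
Y = np.multiply.outer(y, y)
# Elementwise shift (34): y_out[i] = sum_{j1,j2} S[i,j1,j2] * Y[j1,j2]
y_out = np.einsum('ijk,jk->i', S, Y)
# → y_out = [6., 15., 10., 6.]
```

For instance, [y_out]_0 = S_{012} y_1 y_2 + S_{021} y_2 y_1 = 2·3 = 6: each vertex aggregates products of the signal values of its co-members in each incident hyperedge.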

Comparison between matrix-based and tensor-based hypergraph representations
The major advantage of matrix-based methods is that many well-developed graph-related algorithms can be directly utilized. However, if the resulting matrix representation is akin to a graph in that it only encodes pairwise relations between vertices (clique expansion) or hyperedges (line graphs), there will in general be some information loss compared to the original hypergraph structure. In contrast, for the star expansion, all the incidence information is kept in the matrix representation. However, the resulting graph is bipartite. The bipartite graph structure might be undesirable for some applications, since there are no explicit links between vertices of the same type, and there are far fewer algorithms tailored for bipartite graphs than for simple graphs [101].
Compared with matrix representations, tensors can better retain the set-level information contained in hypergraphs. However, tensor computations are more complicated and lack algorithmic guarantees [110]. For example, determining the rank of a specific tensor is NP-hard [115]. Most existing papers have focused on super-symmetric tensors [113], while more general tensors are less explored. Indeed, how to best leverage tensor-based representations to study hypergraphs that are not homogeneous is an open problem.
Remark 12. There is a rich and complementary line of research on nonlinear Laplacian operators. In [116,117], a continuous diffusion process on the hypergraph is considered to define a Laplacian operator that enables a Cheeger-type inequality for hypergraphs. To understand this diffusion process, suppose that, at some instant, there is a signal y ∈ R^{|V|} defined on the vertices of a hypergraph. Each hyperedge e ∈ E directs flow from the vertices S_e(y) = argmax_{v_i∈e} y_i having the maximum signal value to the vertices I_e(y) = argmin_{v_i∈e} y_i having the minimum signal value, at a total rate of c_e = ω(e) · max_{v_i,v_j∈e} |y_i − y_j|. As the diffusion progresses, the cardinalities of S_e(y) and I_e(y) increase, conferring a nonlinear nature on the diffusion process, which can be modeled through a nonlinear Laplacian. A generalization of this process was proposed in [118], where hyperedges can act as mediators that receive flow from the vertices in S_e(y) and deliver flow to those in I_e(y). Moreover, a unifying framework was recently presented in [119], which proposes a Cheeger inequality for submodular transformations. In particular, the Laplacian operators as well as the Cheeger inequalities for undirected graphs, directed graphs, and hypergraphs can be recovered by defining proper submodular transformations; see [119] for more details. In [89], similar results have been independently obtained for symmetric submodular transformations.
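A single explicit step of this diffusion can be sketched as follows; the hypergraph, signal, and step size below are hypothetical choices. Each hyperedge moves mass from its maximizers S_e(y) to its minimizers I_e(y) at total rate c_e, so the total signal mass is conserved:

```python
import numpy as np

# Hypothetical toy hypergraph: (vertex set, weight ω(e)) pairs
edges = [({0, 1, 2}, 1.0), ({1, 2, 3}, 2.0)]
y = np.array([4.0, 1.0, 1.0, 0.0])

dy = np.zeros_like(y)
for e, w in edges:
    idx = np.array(sorted(e))
    vals = y[idx]
    S_e = idx[vals == vals.max()]            # argmax set: sources of flow
    I_e = idx[vals == vals.min()]            # argmin set: sinks of flow
    c_e = w * (vals.max() - vals.min())      # total rate ω(e)·max|y_i − y_j|
    dy[S_e] -= c_e / len(S_e)                # flow leaves the maximizers...
    dy[I_e] += c_e / len(I_e)                # ...and enters the minimizers

y_next = y + 0.1 * dy                        # explicit Euler step (step size 0.1)
```

Here `dy` sums to zero, reflecting that each hyperedge only redistributes signal mass; the dependence of S_e(y) and I_e(y) on y is what makes the induced Laplacian nonlinear.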

Signal processing and learning on hypergraphs
Mimicking the respective developments in Section 2.4 for graphs and Section 4 for simplicial complexes, in this section we consider the four signal processing setups for hypergraphs equipped with the algebraic representations developed in Section 5.

Fourier analysis, node and hyperedge embeddings
As stated in Section 5.1, shift operators for hypergraphs can be represented via matrices. The corresponding eigenvectors may then be used as Fourier modes and, thus, most GSP tools discussed in Section 2 can be directly translated to hypergraphs for matrix-based hypergraph shift operators. However, unlike for graphs, even an undirected hypergraph may result in an asymmetric matrix, e.g., if hyperedge weightings are considered. Hence, one may have to adopt tools from GSP for directed graphs in this case; see [19] for a more detailed exposition of these issues.
In contrast to matrix-based shift operators, the notion of Fourier analysis for hypergraphs represented via tensors is far less developed. Nonetheless, we may proceed analogously to the matrix case and define Fourier modes via a tensor decomposition, in lieu of the eigenvector decomposition. Specifically, we can consider the orthogonal canonical polyadic (CP) decomposition [120] of the adjacency tensor A (other representative tensors can also be considered), in which A is approximated by a weighted sum of rank-one outer products of orthonormal vectors; these vectors then play the role of Fourier modes.

Signal denoising on hypergraphs

As for graphs and simplicial complexes, the denoising of a noisy hypergraph signal y can be posed as the optimization problem

min_ŷ ||ŷ − y||² + α Ω_H(ŷ),     (39)

where the first term constrains the denoised signal ŷ to be close to the observation y, and the second term is a regularizer shaped by the structure of H. A possible choice for Ω_H(ŷ) is to select a Laplacian matrix representation of H (cf. Section 5.1) and set the regularizer to the corresponding quadratic form, as in the graph case [93,94]. From the discussion after (2) it follows that the optimal solution ŷ will then be a low-pass version of y, where the bases for low and high frequencies depend on the specific graph expansion selected. The most common choice is the clique expansion, for which

Ω_H(ŷ) = ŷ^⊤ L_c ŷ,     (40)

where L_c corresponds to the graph Laplacian obtained via the clique expansion of the hypergraph. Alternatively, one can rely on tensor-based representations of the hypergraph in the definition of Ω_H(ŷ). In particular, we can set the regularizer equal to the tensor-based total variation in [112]. In this case, smooth signals are also promoted, but a smooth signal now corresponds to one that suffers little change under a tensor shift as defined in (34).
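With the quadratic clique-expansion regularizer ŷ^⊤L_cŷ, the denoising problem has the closed-form low-pass solution ŷ = (I + αL_c)^{-1} y. A minimal sketch on a hypothetical toy hypergraph (incidence matrix, observation, and α are invented for illustration):

```python
import numpy as np

# Toy incidence matrix (hypothetical): e0 = {0, 1, 2}, e1 = {1, 2, 3}
Z = np.array([[1, 0],
              [1, 1],
              [1, 1],
              [0, 1]], dtype=float)
G = Z @ Z.T
A_c = G - np.diag(np.diag(G))                 # clique-expansion adjacency
L_c = np.diag(A_c.sum(axis=1)) - A_c          # clique-expansion Laplacian

y = np.array([1.0, 1.2, 0.9, -2.0])           # noisy observation (hypothetical)
alpha = 0.5                                   # regularization strength
# Solution of min ||ŷ − y||² + α ŷᵀ L_c ŷ
y_hat = np.linalg.solve(np.eye(4) + alpha * L_c, y)
```

Since L_c annihilates constant vectors, this filter preserves the signal mean while shrinking the high-frequency components, so ŷ^⊤L_cŷ ≤ y^⊤L_cy.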
An alternative regularizer based on the Lovász extension of the hypergraph cut has also been proposed [124]. More specifically, a parametric family of regularizers

Ω_{H,p}(ŷ) = Σ_{e∈E} ω(e) ( max_{u,v∈e} |ŷ_u − ŷ_v| )^p,     (41)

was considered, which can be shown to be convex for p ≥ 1. Consequently, the optimization problem (39) remains convex and, in particular, tailored efficient algorithms have been proposed for p = 1 and p = 2; see [124]. In interpreting (41) we can see that Ω_{H,p}(ŷ) induces yet another related notion of smoothness: for every hyperedge e ∈ E we look at the difference between the extreme values of the signal attained at the nodes contained in e, scale this penalization by the weight of the hyperedge, and sum over all hyperedges. Intuitively, this regularizer promotes signals that are constant within the hyperedges. Moreover, the power p controls the form of the deviations from these piecewise constant signals. For example, the sparsity-promoting p = 1 encourages the signal variation to be zero within some hyperedges and possibly high in others, whereas p = 2 promotes a low (possibly non-zero) variation across all hyperedges.
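This family of regularizers is simple to evaluate directly; a sketch with hypothetical toy data, in which only the hyperedge on which the signal is non-constant contributes to the penalty:

```python
import numpy as np

def omega_p(y, edges, p):
    """Hyperedge-wise penalty: sum_e ω(e) · (max range of y within e)^p."""
    return sum(w * (y[list(e)].max() - y[list(e)].min()) ** p
               for e, w in edges)

# Hypothetical toy hypergraph: (vertex list, weight ω(e)) pairs
edges = [([0, 1, 2], 1.0), ([1, 2, 3], 2.0)]
y = np.array([1.0, 1.0, 1.0, 3.0])       # constant on the first hyperedge

r1 = omega_p(y, edges, p=1)              # 2·(3 − 1)   = 4
r2 = omega_p(y, edges, p=2)              # 2·(3 − 1)²  = 8
```

The first hyperedge contributes zero because the signal is constant on it; raising p from 1 to 2 penalizes the remaining within-hyperedge variation more aggressively.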
If we consider a general submodular function F_e instead of the hypergraph cut, then (41) can be generalized as

Ω_{H,p}(ŷ) = Σ_{e∈E} [f_e(ŷ)]^p,     (42)

where f_e is the Lovász extension of F_e (cf. Remark 12). The optimization problem (39) equipped with (42) is referred to as decomposable submodular function minimization (DSFM) for p = 1 [125,126,127,128,129,130] and quadratic DSFM (QDSFM) for p = 2 [131]. Similar to (40), which can also be written as ⟨ŷ, L_c ŷ⟩, (42) can be viewed as ⟨ŷ, L(ŷ)⟩ for some Laplacian operator L depending on F_e.

Signal interpolation on hypergraphs
As discussed in the previous sections, signal interpolation and smoothing are closely related problems. Successful signal interpolation from an observed subset V_L hinges to a large extent on the selection of a sensible model for a (smooth) ground truth signal that is compatible with the observed (desired) signal characteristics. For a chosen signal model, we may then again set up an optimization problem for interpolating hypergraph signals as

min_ŷ Ω_H(ŷ), s.t. ŷ_v = y_v for all v ∈ V_L,     (43)

where Ω_H is a regularizer chosen to promote the desired signal characteristics, e.g., a low-pass signal. As for graphs and simplicial complexes, many choices for the regularization term are possible here, and the optimal choice of a regularizer will generally depend on the considered application scenario. For instance, we may choose a regularizer based on the clique expansion or one of the other strategies discussed in Section 6.2. Unlike in the graph and simplicial complex setting, however, for hypergraphs we may also consider tensor-based regularizers, which can offer smoothing and interpolation strategies that are not accessible via matrix-based approaches. Developing and analyzing such approaches for hypergraphs appears to be an interesting avenue for future research. Problem (43) can also be converted to another class of optimization problems called submodular Laplacian systems [132], which generalize Laplacian systems on graphs [39].
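With the clique-expansion quadratic regularizer, the constrained interpolation problem reduces to a linear system in the unlabeled block of the Laplacian: setting the gradient of ŷ^⊤L_cŷ to zero on the unlabeled vertices gives L_UU ŷ_U = −L_UL y_L. A minimal sketch with a hypothetical toy hypergraph and labels:

```python
import numpy as np

# Toy incidence matrix (hypothetical): e0 = {0, 1, 2}, e1 = {1, 2, 3}
Z = np.array([[1, 0],
              [1, 1],
              [1, 1],
              [0, 1]], dtype=float)
G = Z @ Z.T
A_c = G - np.diag(np.diag(G))
L_c = np.diag(A_c.sum(axis=1)) - A_c          # clique-expansion Laplacian

labeled = np.array([0, 3])                    # observed set V_L (hypothetical)
unlabeled = np.array([1, 2])
y_L = np.array([1.0, -1.0])                   # observed values

# First-order conditions on the unlabeled block: L_UU ŷ_U = −L_UL y_L
L_UU = L_c[np.ix_(unlabeled, unlabeled)]
L_UL = L_c[np.ix_(unlabeled, labeled)]
y_U = np.linalg.solve(L_UU, -L_UL @ y_L)
```

In this symmetric toy example the two unlabeled vertices sit exactly between the labels +1 and −1, so the interpolated values come out as 0.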

Hypergraph neural networks
The design of neural network architectures to process and learn from data on hypergraphs is a nascent area of research.Given the developments in graph neural networks mentioned in Section 2.4.4 and the graph expansions for hypergraphs introduced in Section 5.1, an avenue to derive hypergraph neural networks is to compute the graph shifts based on the (clique, star, or line graph) expansions of the hypergraph and then apply a (classical) graph neural network as the one in (6) or any of the variants surveyed in [41].
In this direction, one of the earliest hypergraph neural networks [133] adopts the hypergraph Laplacian matrix associated with a weighted clique expansion in [94] as a graph shift and then implements a graph convolutional network [46,45] in which shift-invariant filters are intertwined with pointwise nonlinearities. One drawback of the clique expansion is that the resulting graph tends to be dense, since each hyperedge is replaced by a number of edges that is quadratic in the size of the hyperedge. A similar idea is proposed in [134], but this convolutional neural network is based on a different hypergraph Laplacian shift (proposed in [118]), which only requires a linear number of edges for each hyperedge. This yields more efficient training compared with [133]. Under this same methodological umbrella, a line hypergraph convolution network is proposed in [100], which expands the hypergraph into a weighted and attributed line graph and then implements a graph convolutional network using the corresponding shift operator.
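A single layer of such a clique-expansion-based convolutional network can be sketched as follows; this is a minimal illustration with random, untrained weights and a hypothetical toy hypergraph, not the exact architecture of [133]:

```python
import numpy as np

# Toy incidence matrix (hypothetical): e0 = {0, 1, 2}, e1 = {1, 2, 3}
Z = np.array([[1, 0],
              [1, 1],
              [1, 1],
              [0, 1]], dtype=float)
G = Z @ Z.T
A_c = G - np.diag(np.diag(G))
d = A_c.sum(axis=1)
S = A_c / np.sqrt(np.outer(d, d))     # symmetrically normalized shift operator

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))           # input node features (4 nodes, 3 features)
Theta = rng.normal(size=(3, 2))       # learnable weight matrix (random here)

# One convolutional layer: shift, mix features, apply a pointwise ReLU
H = np.maximum(S @ X @ Theta, 0.0)
```

The pattern mirrors a standard graph convolutional layer, ReLU(S X Θ), with the hypergraph entering only through the choice of the shift operator S.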
Architectures grounded in the message-passing variants of graph neural networks (cf. Section 2.4.4) have also been proposed for hypergraphs. For instance, in [101] the line expansion of the hypergraph is used to define a message-passing process in which potentially different aggregation functions can be used when passing messages between nodes in the expansion that have either one vertex or one hyperedge in common; see Figure 7F. Also, [135] proposes a generalization to hypergraphs of GraphSAGE [136], a well-established message-passing architecture for graphs. Recent developments that further extend the state of the art include architectures that tackle the issue that the initially constructed hypergraph may not be a suitable representation of the data [137], as well as formulations of attention [138] and self-attention [83] mechanisms for hypergraphs.
As a closing note, a different perspective is put forth in [139], where a convolutional neural network architecture for powerset data is introduced. These architectures are designed to learn from set functions, i.e., signals on the power set of a given set. By noticing that cuts in hypergraphs can be interpreted as set functions, these convolutional architectures can be used to solve problems on hypergraphs; see [139] for more details.

Discussion
Graph signal processing tools have been highly successful in a wide range of applications, from biological to social domains. This success hinges to a large extent on providing sensible notions for filtering graph signals, such that the relevant dependencies in the signal are kept intact while undesirable noise components are filtered out. However, as graphs are only concerned with pairwise relationships, their capabilities for modeling higher-order dependencies are too limited for certain application scenarios in which polyadic relationships are essential. In such scenarios, simplicial complexes and hypergraphs have recently emerged as two promising conceptual frameworks to address the specific shortcomings of graph-based representations.
Unlike GSP, which can benefit from a rich set of results in spectral graph theory, e.g., to derive appropriate notions of shift operators and signal smoothness, the theory of signal processing on higher-order networks is far less developed. In this tutorial paper, we provided an introduction to this emerging area, focusing on the choice of appropriate shift operators and associated frequency-domain representations, as well as a set of important application scenarios comprising signal smoothing and denoising, signal interpolation, and the construction of nonlinear neural network architectures that can leverage the structure of such higher-order networks.
We believe that this area holds enormous potential for future developments. A few relevant future directions include the following.
In the context of simplicial complexes, the investigation of how these should be constructed from data to capture desirable features is certainly one aspect that deserves further research. As discussed in Sections 3 and 4, the choice of appropriate faces has direct consequences for the frequency representation of any signal and is thus highly relevant for applications [63]. Similarly, while we discussed only unweighted simplicial complexes for simplicity, the appropriate introduction of weights to emphasize certain features in the data is a pertinent issue that should be addressed in future research. Finally, while we concentrated on simplicial complexes as the most commonly considered complexes, the restriction to simplicial instead of other types of cell complexes, such as cubical complexes, is essentially artificial. From a modeling perspective, simplices may not always capture the appropriate notion of a "cell" in a higher-order interaction network. For instance, in traffic and street networks it may be beneficial to consider cubical complexes or other types of models that can better represent the grid-like structure of many of these networks [59].
In the context of hypergraphs, we provide several potential directions for future work. As a first step, constructing a suitable hypergraph is key to the final performance. Hence, it is important to develop effective and efficient methods for the construction of hypergraphs from real-world datasets, which are usually large-scale. To better characterize a wider range of datasets, it is necessary to develop more general hypergraph models, such as those considering different types of vertices or different levels of relations (cf. Section 5). A variety of problems that have been well studied on graphs or homogeneous hypergraphs are worth reconsidering and extending to these less explored but more expressive models. These problems include, but are not limited to, developing spectral hypergraph theory, node clustering, classification and ranking, link prediction, hypergraph representation learning (especially for heterogeneous hypergraphs in which hyperedges are generally indecomposable [82]), the modeling and analysis of diffusion processes on hypergraphs, tensor-based representations and operations (especially for hypergraphs with edge-dependent vertex weights, which are hard to model using super-symmetric tensors), hypergraph kernels, hypergraph classification, and hypergraph alignment. Although one framework for hypergraph signal processing has already been proposed in [112], there are still many open questions. In GSP, graph shifts and filters can be understood in terms of network diffusion processes, while it is not yet clear if and how the hypergraph shift can be connected with a physical process. Other problems such as hypergraph filter design, active sampling for reconstruction, and fast hypergraph Fourier transforms are also worth investigating. Finally, most existing hypergraph neural networks are matrix-based, like those introduced in Section 6.4. A natural extension in this context would be to develop the theory of tensor-based neural networks for hypergraphs.

Figure 1: Graph signal and its Fourier decomposition. A Graph signal defined on the nodes of the graph. B Eigenvector and eigenvalue pairs of the graph Laplacian L. We visualize each of the eigenvectors as a graph signal and order them from low to high graph frequencies, corresponding to a decrease in "smoothness". The decomposition of the node signal s into this basis provides the Fourier coefficients in s as indicated at the bottom of each eigenvector representation.

Figure 3: Hodge decomposition of the edge flow in the example from Figure 2. Any edge flow (left) can be decomposed into a harmonic flow, a gradient flow and a curl flow.

Figure 5: Flow smoothing on a graph. A An undirected graph with a pre-defined and oriented flow f0. B The observed flow is a noisy version of the flow f0, i.e., f0 is distorted by a Gaussian white noise vector. C We denoise the flow by applying a Laplacian filter based on the line graph. This filter performs worse than the edge-space filters in D and E, which account for flow conservation. D Denoised flow obtained after applying the filter based on the edge Laplacian. E Denoised flow obtained after applying the filter based on the Hodge Laplacian. The estimation error is lower than in the edge Laplacian case, as the filter accounts for filled faces in the graph.

Figure 6: Semi-supervised learning for edge flows. A Synthetic flow. 50% of the edges are labeled. Labeled edges are colored based on the value of their flow. The remaining edges in grey are inferred via the procedure explained in the text. B Edge flow obtained after applying the semi-supervised algorithm in (17). C Numerical value of the inferred signal.

e x i t > e b a s e 6 4 = " U Q a x e P r Y i 2 F u 5 4 F D D T G 9 + m

4 < l a t e x i t s h a 1 _
w / B W 3 5 5 l b R r V a 9 e r d 1 d V B o 3 T / M 4 i n A C p 3 A O H l x C A 2 6 h C S 1 g M I J n e I U 3 R z o v z r v z M W 8 t O I s I j + E P n M 8 f T 0 + O K w = = < / l a t e x i t > v b a s e 6 4 = " / H O w Y J T p D 4 7 o L y m u G l e t k a m J F

1 < l a t e x i t s h a 1 _ b a s e 6 4 =
H C 7 g C j y 4 g T o 8 Q A O a w G A M z / A K b 4 5 0 X p x 3 5 2 P Z u u H k M 2 f w B 8 7 n D 0 O X j b k = < / l a t e x i t > v " u A K i x H I E r / R 0 Z l e W 9 f z c F / z b g 6 k = " > A A A B 6 3 i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B U 0 m q o M e i F4 8 V 7 A e 0 o W y 2 0 3 b p 7 i b s b g o l 9 C 9 4 8 a C I V / + Q N / + N S Z u D t j 4 Y e L w 3 w 8 y 8 I B L c W N f 9 d g o b m 1 v b O 8 X d 0 t 7 + w e

2 < l a t e x i t s h a 1 _ b a s e 6 4 =
z s e y t e D k M 6 f w B 8 7 n D z 8 I j b Y = < / l a t e x i t > v " m e T l 2 T B P T a J e j B P n Z S A k U N l u Q J I = " > A A A B 6 3 i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B U 0 m q o M e i F 4 8 V 7 A e 0 o W y 2 0 3 b p 7 i b s b g o l 9 C 9 4 8 a C I V / + Q N / + N S Z u D t j 4 Y e L w 3 w 8 y 8 I B L c W N f 9 d g o b m 1 v b O 8 X d 0 t 7 + w e

z s e i d c 3 J 1 < 2 < 3 <
Z 0 7 g D 5 z P H 0 a h j b s = < / l a t e x i t > e l a t e x i t s h a 1 _ b a s e6 4 = " 2 0 9 O n 1 z B 4 B i a s 0 y d L 6 b d E z k s 4 l 8 = " > A A A B 6 n i c b V D L S g N B E O y N r x h f U Y 9 e B o P g K e z G g B 4 D g n i M a B 6 Q L G F 2 0 p s M m Z 1 d Z m a F s A T 8 A S 8 e F P H q F 3 n z b 5 w 8 D p p Y 0 F B U d d P d F S S C a + O 6 3 0 5 u b X 1 j c y u / X d j Z 3 d s/ K B 4 e N X W c K o Y N F o t Y t Q O q U X C J D c O N w H a i k E a B w F Y w u p 7 6 r U d U m s f y w Y w T 9 C M 6 k D z k j B o r 3 W P P 6 x V L b t m d g a w S b 0 F K s E C 9 V / z q 9 m O W R i g N E 1 T r j u c m x s + o M p w J n B S 6 q c a E s h E d Y M d S S S P U f j Y 7 d U L O r N I n Y a x s S U N m 6 u + J j E Z a j 6 P A d k b U D P W y N x X / 8 z q p C a / 8 j M s k N S j Z f F G Y C m J i M v 2 b 9 L l C Z s T Y E s o U t 7 c S N q S K M m P T K d g Q v O W X V 0 m z U v Y u y p W7 a q l 2 8 z S P I w 8 n c A r n 4 M E l 1 O A W 6 t A A B g N 4 h l d 4 c 4 T z 4 r w 7 H / P W n L O I 8 B j + w P n 8 A R d A j h U = < / l a t e x i t > e l a t e x i t s h a 1 _ b a s e 6 4 = " h V o E d y q O B b 8 L l G V P E C B F s D h E Y y U = " > A A A B 6 n i c b V D L S g N B E O y N r x h f U Y 9 e B o P g K e z G g B 4 D g n i M a B 6 Q L G F 2 0 p s M m Z 1 d Z m a F s A T 8 A S 8 e F P H q F 3 n z b 5 w 8 D p p Y 0 F B U d d P d F S S C a + O 6 3 0 5 u b X 1 j c y u / X d j Z 3 d s/ K B 4 e N X W c K o Y N F o t Y t Q O q U X C J D c O N w H a i k E a B w F Y w u p 7 6 r U d U m s f y w Y w T 9 C M 6 k D z k j B o r 3 W O v 0 i u W 3 L I 7 A 1 k l 3 o K U Y I F 6 r / j V 7 c c s j V A a J q j W H c 9 N j J 9 RZ T g T O C l 0 U 4 0 J Z S M 6 w I 6 l k k a o / W x 2 6 o S c W a V P w l j Z k o b M 1 N 8 T G Y 2 0 H k e B 7 Y y o G e p l b y r + 5 3 V S E 1 7 5 G Z d J a l C y + a I w F c T E Z P o 3 6 X O F z I i x J Z Q p b m 8 l b E g V Z c a m U 7 A h e M s v r 5 J m p e x d l C t 3 1 V L t 5 m k e R x 5 O 4 B T O w Y N L q M E t 1 K E B D A b w D K / w 
5 g j n x X l 3 P u a t O W c R 4 T H 8 g f P 5 A x j E j h Y = < / l a t e x i t > e l a t e x i t s h a 1 _ b a s e 6 4 = " U Q a x e P r Y i 2 F u 5 4 F D D T G 9 + m

1 <
w / B W 3 5 5 l b R r V a 9 e r d 1 d V B o 3 T / M 4 i n A C p 3 A O H l x C A 2 6 h C S 1 g M I J n e I U 3 R z o v z r v z M W 8 t O I s I j + E P n M 8 f T 0 + O K w = = < / l a t e x i t > v l a t e x i t s h a 1 _ b a s e 6 4 = " u A K i x H I E r / R 0 Z l e W 9 f z c F / z b g 6 k = " > A A A B 6 3 i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B U 0 m q o M e i F 4 8 V 7 A e 0 o W y 2 0 3 b p 7 i b s b g o l 9 C 9 4 8 a C I V / + Q N / + N S Z u D t j 4 Y e L w 3 w 8 y 8 I B L c W N f 9 d g o b m 1 v b O 8 X d 0 t 7 + w e

2 <
z s e y t e D k M 6 f w B 8 7 n D z 8 I j b Y = < / l a t e x i t > v l a t e x i t s h a 1 _ b a s e 6 4 = " m e T l 2 T B P T a J e j B P n Z S A k U N l u Q J I = " > A A A B 6 3 i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B U 0 m q o M e i F 4 8 V 7 A e 0 o W y 2 0 3 b p 7 i b s b g o l 9 C 9 4 8 a C I V / + Q N / + N S Z u D t j 4 Y e L w 3 w 8 y 8 I B L c W N f 9 d g o b m 1 v b O 8 X d 0 t 7 + w e

z s e i d c 3 J 1 < 2 <
Z 0 7 g D 5 z P H 0 a h j b s = < / l a t e x i t > e l a t e x i t s h a 1 _ b a s e 6 4 = " 2 0 9 O n 1 z B 4 B i a s 0 y d L 6 b d E z k s 4 l 8 = " > A A A B 6 n i c b V D L S g N B E O y N r x h f U Y 9 e B o P g K e z G g B 4 D g n i M a B 6 Q L G F 2 0 p s M m Z 1 d Z m a F s A T 8 A S 8 e F P H q F 3 n z b 5 w 8 D p p Y 0 F B U d d P d F S S C a + O 6 3 0 5 u b X 1 jc y u / X d j Z 3 d s / K B 4 e N X W c K o Y N F o t Y t Q O q U X C J D c O N w H a i k E a B w F Y w u p 7 6 r U d U m s f y w Y w T 9 C M 6 k D z k j B o r 3 W P P 6 x V L b t m d g a w S b 0 F K s E C 9 V / z q 9 m O W R i g N E 1 T r j u c m x s + o M p w J n B S 6 q c a E s h E d Y M d S S S P U f j Y 7 d U L O r N I n Y a x s S U N m 6 u + J j E Z a j 6 P A d k b U D P W y N x X / 8 z q p C a / 8 j M s k N S j Z f F G Y C m J i M v 2 b 9 L l C Z s T Y E s o U t 7 c S N q S K M m P T K d g Q v O W X V 0 m z U v Y u y p W7 a q l 2 8 z S P I w 8 n c A r n 4 M E l 1 O A W 6 t A A B g N 4 h l d 4 c 4 T z 4 r w 7 H / P W n L O I 8 B j + w P n 8 A R d A j h U = < / l a t e x i t > e l a t e x i t s h a 1 _ b a s e 6 4 = " h V o E d y q O B b 8 L l G V P E C B F s D h E Y y U = " > A A A B 6 n i c b V D L S g N B E O y N r x h f U Y 9 e B o P g K e z G g B 4 D g n i M a B 6 Q L G F 2 0 p s M m Z 1 d Z m a F s A T 8 A S 8 e F P H q F 3 n z b 5 w 8 D p p Y 0 F B U d d P d F S S C a + O 6 3 0 5 u b X 1 j c y u / X d j Z 3 d s

3 <
6 w I 6 l k k a o / W x 2 6 o S c W a V P w l j Z k o b M 1 N 8 T G Y 2 0 H k e B 7 Y y o G e p l b y r + 5 3 V S E 1 7 5 G Z d J a l C y + a I w F c T E Z P o 3 6 X O F z I i x J Z Q p b m 8 l b E g V Z c a m U 7 A h e M s v r 5 J m p e x d l C t 3 1 V L t 5 m k e R x 5 O 4 B T O w Y N L q M E t 1 K E B D A b w D K / w 5 g j n x X l 3 P u a t O W c R 4 T H 8 g f P 5 A x j E j h Y = < / l a t e x i t > e l a t e x i t s h a 1 _ b a s e 6 4 = " U Q a x e P r Y i 2 F u 5 4 F D D T G 9 + m

e 1 <
l a t e x i t s h a 1 _ b a s e 6 4 = " e Y W 7 Z e t l g j P s f D V Y A p F W P p / u M 5 g = " > A A A B 7 3 i c b V D L S g N B E O y N r x h f U Y 9 e B o P g Q c J u F P Q k A S 8 e I 5 g H J M s y O + l N h s w + n J k N h C U / 4 c W D I l 7 9 H W / + j Z N k D 5 p Y 0 F B U d d P d 5 S e C K 2 3 b 3 1 Z h b X 1 j c 6 u 4 X d r Z 3 d s / K B 8 e t V S c S o Z N F o t Y d n y q U P A I m 5 p r g Z 1

Figure 7 :
Figure 7: Different transformations on an example hypergraph. (A) The original hypergraph. (B) The dual hypergraph. (C) The clique expansion. (D) The star expansion. (E) The line graph. (F) The line expansion.
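Several of these transformations are straightforward to compute from a list of hyperedges. The sketch below builds the clique expansion and the star expansion for a small hypothetical hypergraph; the vertex count, hyperedges, and NumPy representation are illustrative assumptions, not the example drawn in the figure.

```python
import itertools
import numpy as np

# Hypothetical 5-vertex hypergraph with three hyperedges (illustrative only).
n = 5
hyperedges = [{0, 1, 2}, {1, 3}, {2, 3, 4}]

# Clique expansion: every pair of vertices that shares a hyperedge is
# joined by an edge in an ordinary graph.
A_clique = np.zeros((n, n), dtype=int)
for e in hyperedges:
    for u, v in itertools.combinations(sorted(e), 2):
        A_clique[u, v] = A_clique[v, u] = 1

# Star expansion: a bipartite graph with one auxiliary node per hyperedge;
# vertex u is linked to the node for hyperedge e_j iff u belongs to e_j.
m = len(hyperedges)
A_star = np.zeros((n + m, n + m), dtype=int)
for j, e in enumerate(hyperedges):
    for u in e:
        A_star[u, n + j] = A_star[n + j, u] = 1
```

Note the trade-off visible even in this sketch: the clique expansion discards which clique came from which hyperedge, whereas the star expansion retains the full incidence structure at the cost of additional nodes.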

Figure 8: Tensor-based shift operator on a hypergraph. The output y^out_3 at vertex v_3 is determined by a weighted sum over the hyperedges incident to v_3, where the summands correspond to the products of the vertex signals within the respective hyperedges, excluding v_3. [Figure adapted from Fig. 10(a) of [112].]
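The action described in the caption can be written out directly: the output at each vertex sums, over its incident hyperedges, the hyperedge weight times the product of the signal values on the other vertices of that hyperedge. A minimal sketch, in which the function name, hyperedge list, and weights are illustrative assumptions:

```python
import numpy as np

def tensor_shift(hyperedges, weights, x):
    """Apply the tensor-based shift: the output at vertex i is a weighted
    sum, over the hyperedges containing i, of the product of the signal
    values on the remaining vertices of each such hyperedge."""
    y = np.zeros_like(x, dtype=float)
    for e, w in zip(hyperedges, weights):
        for i in e:
            y[i] += w * np.prod([x[u] for u in e if u != i])
    return y

x = np.array([1.0, 2.0, 3.0, 4.0])
y_out = tensor_shift([{0, 1, 2}, {1, 2, 3}], [1.0, 1.0], x)
# Vertex 1, for instance, receives 1*3 from {0,1,2} and 3*4 from {1,2,3}.
```

Unlike the matrix shift operators used for graphs, this operator is multilinear in the input signal, which is precisely what the tensor representation buys.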

$$\Omega_{H,p}(\hat{y}) = \sum_{e \in E} \omega(e) \left( \max_{u \in e} \hat{y}_u - \min_{v \in e} \hat{y}_v \right)^p,$$

The nonzero entries of the adjacency tensor A are indexed by tuples $(p_1, p_2, \cdots, p_m)$ whose indices are chosen in all possible ways from $\{i_1, i_2, \cdots, i_s\}$ such that every element of this latter set is represented at least once. The rest of the entries of A are set to zero. The Laplacian tensor is then defined as $L = D - A$, where $D$ is a super-diagonal tensor of the same size as $A$ with entries $D_{ii\cdots i}$ equal to the degree of vertex $v_i$. To illustrate definition (32), consider the following example.
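Before turning to that example, note that the regularizer $\Omega_{H,p}$ above is simple to evaluate numerically: each hyperedge contributes its weight times the $p$-th power of the signal's spread inside it. The sketch below uses hypothetical hyperedges, weights, and signal values chosen purely for illustration.

```python
def hypergraph_tv(hyperedges, weights, y, p=2):
    """Omega_{H,p}(y) = sum over hyperedges e of
    w(e) * (max_{u in e} y_u - min_{v in e} y_v) ** p."""
    return sum(w * (max(y[u] for u in e) - min(y[v] for v in e)) ** p
               for e, w in zip(hyperedges, weights))

# Illustrative values: the signal varies a lot inside the first hyperedge
# and only a little inside the second, so the first dominates the penalty.
omega = hypergraph_tv([{0, 1, 2}, {2, 3}], [1.0, 2.0], [0.0, 1.0, 3.0, 2.0])
```

A signal that is constant on every hyperedge incurs zero penalty, which is the hypergraph analogue of a graph signal being constant across each edge.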