Introduction

Over the past two decades, networks have emerged as a versatile description of interconnected complex systems1,2, generating crucial insights into myriad social3, biological4, and physical5 systems. However, it has also become increasingly clear that the original formulation of a static network representing a single type of pairwise interaction has its limitations. For instance, neuronal networks change over time due to plasticity and comprise both chemical and electrical interaction pathways6. For this reason, the original formulation has been generalized in different directions, including hypergraphs that account for nonpairwise interactions involving three or more nodes simultaneously7,8, multilayer networks that accommodate multiple types of interactions9,10, and temporal networks whose connections change over time11. Naturally, with the increased descriptive power comes increased analytical complexity, especially for dynamical processes on these generalized networks.

One important class of dynamical processes on networks is cluster synchronization. Many biological and technological networks show intricate cluster synchronization patterns, where one or more internally coherent but mutually independent clusters coexist12,13,14,15,16,17,18,19. Maintaining the desired dynamical patterns is critical to the function of those networked systems20,21. For instance, long-range synchronization in the theta frequency band between the prefrontal cortex and the temporal cortex has been shown to improve working memory in older adults22.

Up until now, synchronization (and other dynamical processes) in hypergraphs23,24,25, multilayer networks26,27,28, and temporal networks29,30,31 has been studied mostly on a case-by-case basis. Recently, it was shown that, in synchronization problems, simultaneous block diagonalization (SBD) optimally decouples the variational equation and enables the characterization of arbitrary synchronization patterns in large networks32. However, aside from multilayer networks, for which the multiple layers naturally translate into multiple matrices32,33, the full potential of SBD for analyzing dynamical patterns in generalized networks is yet to be realized. As a technique, SBD has also found applications in numerous fields, such as semidefinite programming34, structural engineering35, signal processing36, and quantum algorithms37.

In this Article, we develop a versatile SBD-based framework that allows the stability analysis of synchronization patterns in generalized networks, which include hypergraphs, multilayer networks, and temporal networks. This framework enables us to treat all three classes of generalized networks in a unified fashion. In particular, we show that different generalized interactions can all be represented by multiple matrices (as opposed to a single matrix as in the case of standard networks), and we introduce a practical method for finding the SBD of these matrices to simplify the stability analysis. As an application of our unified framework, we use it to discover higher-order chimera states—intriguing cluster synchronization patterns that only emerge in the presence of nonpairwise couplings.

Results and discussion

General formulation and the SBD approach

Consider a general set of equations describing N interacting oscillators:

$$\mathbf{x}_i[t+1] = \mathbf{F}(\mathbf{x}_i[t]) + \mathbf{h}_i(\mathbf{x}_1[t], \ldots, \mathbf{x}_N[t], t),$$
(1)

where F describes the intrinsic node dynamics and hi specifies the influence of other nodes on node i. We present our framework assuming discrete-time dynamics, although it works equally well for systems with continuous-time dynamics.

For a static network with a single type of pairwise interaction, \(\mathbf{h}_i(\mathbf{x}_1, \ldots, \mathbf{x}_N, t) = \sigma \sum_{j=1}^{N} C_{ij} \mathbf{H}(\mathbf{x}_i, \mathbf{x}_j)\), where σ is the coupling strength, the (potentially weighted) coupling matrix C reflects the network structure, and H is the interaction function. When the network is globally synchronized, x1 = ⋯ = xN = s, and assuming H depends only on xj, the synchronization stability can be determined through the Lyapunov exponents associated with the variational equation

$$\boldsymbol{\delta}[t+1] = \big(\mathbf{I}_N \otimes \mathrm{J}\mathbf{F}(\mathbf{s}) + \sigma \mathbf{C} \otimes \mathrm{J}\mathbf{H}(\mathbf{s})\big)\,\boldsymbol{\delta}[t],$$
(2)

where \(\boldsymbol{\delta} = (\mathbf{x}_1^\intercal - \mathbf{s}^\intercal, \ldots, \mathbf{x}_N^\intercal - \mathbf{s}^\intercal)^\intercal\) is the perturbation vector, IN is the identity matrix, ⊗ represents the Kronecker product, and J is the Jacobian operator. In the case of undirected networks, Eq. (2) can always be decoupled into N independent low-dimensional equations by switching to coordinates that diagonalize the coupling matrix C38.

For more complex synchronization patterns, however, additional matrices encoding information about dynamical clusters are inevitably introduced into the variational equation. In particular, the identity matrix IN is replaced by diagonal matrices D(m) defined by

$$D_{ii}^{(m)} = \begin{cases} 1 & \text{if node } i \in \mathcal{C}_m, \\ 0 & \text{otherwise}, \end{cases}$$
(3)

where \(\mathcal{C}_m\) represents the mth dynamical cluster (oscillators within the same dynamical cluster are identically synchronized). Moreover, as we show below, when \(\mathbf{h}_i(\cdot)\) includes nonpairwise interactions, multilayer interactions, or time-varying interactions, it leads to additional coupling matrices C(k) in the variational equation. Consequently, the variational equations for complex synchronization patterns on generalized networks share the following form:

$$\boldsymbol{\delta}[t+1] = \left\{\sum_{m} \mathbf{D}^{(m)} \otimes \mathrm{J}\mathbf{F}(\mathbf{s}^m) + \sum_{m,k} \sigma_k \mathbf{C}^{(k)} \mathbf{D}^{(m)} \otimes \mathrm{J}\mathbf{H}^{(m,k)}(\mathbf{s}^m)\right\} \boldsymbol{\delta}[t],$$
(4)

where sm is the synchronized state of the oscillators in the mth dynamical cluster, and JH(m, k)(sm) is a Jacobian-like matrix whose expression depends on the class of generalized networks being considered.
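For concreteness, the indicator matrices D(m) in Eq. (3) can be assembled directly from a list of cluster labels. The following is a minimal Python sketch (the function name and input format are our own, not part of the published code):

```python
import numpy as np

def cluster_indicators(assignment):
    """Build the diagonal indicator matrices D(m) of Eq. (3) from a
    cluster-assignment list, e.g. [0, 0, 1, 2, 1] for N = 5 nodes."""
    assignment = np.asarray(assignment)
    return [np.diag((assignment == m).astype(float))
            for m in np.unique(assignment)]
```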

For Eq. (4), diagonalizing any one of the matrices D(m) or C(k) generally does not lead to optimal decoupling of the equation. Instead, all of the matrices D(m) and C(k) should be considered concurrently and be simultaneously block diagonalized to reveal independent perturbation modes. In particular, the new coordinates should separate the perturbation modes parallel to and transverse to the cluster synchronization manifold, and decouple transverse perturbations to the fullest extent possible.

For this purpose, we propose a practical algorithm to find an orthogonal transformation matrix P that simultaneously block diagonalizes multiple matrices. Given a set of symmetric matrices \(\mathcal{B} = \{\mathbf{B}^{(1)}, \mathbf{B}^{(2)}, \ldots, \mathbf{B}^{(\mathscr{L})}\}\), the algorithm consists of three simple steps (a minimal code sketch is given after the list):

  i. Find the (orthogonal) eigenvectors vi of the matrix \(\mathbf{B} = \sum_{\ell=1}^{\mathscr{L}} \xi_\ell \mathbf{B}^{(\ell)}\), where the ξℓ are independent random coefficients that can be drawn from a Gaussian distribution. Set Q = [v1, ⋯, vN].

  ii. Generate \(\mathbf{B} = \sum_{\ell=1}^{\mathscr{L}} \xi_\ell \mathbf{B}^{(\ell)}\) for a new realization of the ξℓ and compute \(\widetilde{\mathbf{B}} = \mathbf{Q}^\intercal \mathbf{B} \mathbf{Q}\). Mark the indexes i and j as being in the same block if \(\widetilde{B}_{ij} \ne 0\) (and thus \(\widetilde{B}_{ji} \ne 0\)).

  iii. Set P = [vϵ(1), ⋯, vϵ(N)], where ϵ is a permutation of 1, ⋯, N such that indexes in the same block are sorted consecutively (i.e., the basis vectors vi corresponding to the same block are grouped together).
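A minimal Python sketch of the three steps above, assuming symmetric input matrices (helper names are illustrative; the implementations linked under “Code availability” handle tolerances and edge cases more carefully):

```python
import numpy as np

def sbd(matrices, tol=1e-10, seed=None):
    """Return an orthogonal P such that P.T @ M @ P shares a common
    block structure for every symmetric M in `matrices`
    (finest possible in the nondegenerate case)."""
    rng = np.random.default_rng(seed)
    N = matrices[0].shape[0]

    # Step i: eigenvectors of a random linear combination.
    B = sum(rng.normal() * M for M in matrices)
    _, Q = np.linalg.eigh(B)  # columns are orthonormal eigenvectors v_i

    # Step ii: a second random combination exposes the common blocks.
    Bt = Q.T @ sum(rng.normal() * M for M in matrices) @ Q
    parent = list(range(N))  # union-find over the nonzero pattern of Bt

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(N):
        for j in range(i + 1, N):
            if abs(Bt[i, j]) > tol:
                parent[find(i)] = find(j)

    # Step iii: reorder eigenvectors so each block's indices are consecutive.
    order = sorted(range(N), key=find)
    return Q[:, order]
```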

The proposed algorithm is inspired by and adapted from Murota et al.39. There, the authors use the eigendecomposition of a random linear combination of the given matrices to find a partial SBD, but the operations needed for refining the blocks can be cumbersome. Here, we show that the simplified algorithm above is guaranteed to find the finest SBD when there are no degeneracies—i.e., no two vi have the same eigenvalue (see “Methods” for a proof). Intuitively, this is because, in the absence of degeneracy, a random linear combination of the B(ℓ) contains all the information about their common block structure, which can be efficiently extracted through eigendecomposition. When there is a degeneracy, cases exist for which the proposed algorithm does not find the finest SBD (see “Methods” for details). However, such cases are rare in practice, and missing them is a small price to pay for the improved simplicity and efficiency of the algorithm.

We note that the algorithm can be adapted to asymmetric matrices, and in all nondegenerate cases it finds the finest SBD that can be achieved by orthogonal transformations. However, this does not exclude the possibility that more general similarity transformations could result in finer blocks for certain asymmetric matrices. (For symmetric matrices, general similarity transformations do not have an advantage over orthogonal transformations).

In Fig. 1, we compare the proposed algorithm with two previous state-of-the-art SBD algorithms32,34. The algorithms are tested on sets of N × N matrices, each set consisting of 10 random matrices with predefined common block structures (see “Methods” for how the matrices are generated). For each algorithm and each matrix size N, 100 independent matrix sets are tested. Figure 1 shows the mean CPU time from each set of tests (the standard deviations are smaller than the size of the symbols). The algorithm presented here finds the finest SBD in all cases tested and has the most favorable scaling in terms of computational complexity. For instance, it can process matrices with N ≈ 1000 in under 10 s (tested on Intel Xeon E5-2680 v3 processors), which is orders of magnitude faster than the other methods. The Python and MATLAB implementations of the proposed SBD algorithm are available online as part of this publication (see “Code availability”).

Fig. 1: Computational costs of different simultaneous block diagonalization algorithms as functions of matrix size N.

The computational costs of all three algorithms scale as Nα for large N. However, the algorithm proposed here has the smallest exponent α, which translates to order-of-magnitude speedups already for moderate matrix sizes.

Cluster synchronization and chimera states in hypergraphs

Hypergraphs7 and simplicial complexes40 provide a general description of networks with nonpairwise interactions and have been widely adopted in the literature41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63. However, the associated tensors describing those higher-order structures are more involved than matrices, especially when combined with the analysis of dynamical processes64,65,66,67,68,69. There have been several efforts to generalize the master stability function (MSF) formalism38 to these settings, for which different variants of an aggregated Laplacian have been proposed24,25,70,71. The aggregated Laplacian captures interactions of all orders in a single matrix, whose spectral decomposition allows the stability analysis to be divided into structural and dynamical components, just like the standard MSF for pairwise interactions. However, such a powerful reduction comes at an inevitable cost: simplifying assumptions must be made about the network structure (e.g., all-to-all coupling), the node dynamics (e.g., fixed points), and/or the interaction functions (e.g., linear) in order for the aggregation into a single matrix to be valid.

Here, we consider general oscillators coupled on hypergraphs without the aforementioned restrictions. For ease of presentation and without loss of generality, we focus on networks with interactions that involve up to three oscillators simultaneously:

$$\mathbf{x}_i[t+1] = \mathbf{F}(\mathbf{x}_i[t]) + \sigma_1 \sum_{j=1}^{N} A_{ij}^{(1)} \mathbf{H}^{(1)}(\mathbf{x}_i[t], \mathbf{x}_j[t]) + \sigma_2 \sum_{j=1}^{N} \sum_{k=1}^{N} A_{ijk}^{(2)} \mathbf{H}^{(2)}(\mathbf{x}_i[t], \mathbf{x}_j[t], \mathbf{x}_k[t]).$$
(5)

The adjacency matrix A(1) and adjacency tensor A(2) represent the pairwise and three-body interactions, respectively. To make progress, we use the following key insight from Gambuzza et al.72: for noninvasive coupling [i.e., H(1)(s, s) = 0 and H(2)(s, s, s) = 0] and global synchronization, synchronization stability in hypergraphs is determined by Eq. (4) with C(k) = −L(k), where the L(k) are generalized Laplacians defined from the adjacency tensors A(k). More concretely, L(1) is the usual Laplacian, for which \(L_{ij}^{(1)} = \delta_{ij} \sum_k A_{ik}^{(1)} - A_{ij}^{(1)}\); L(2) retains the zero row-sum property and is defined as \(L_{ij}^{(2)} = -\sum_k A_{ijk}^{(2)}\) for i ≠ j and \(L_{ii}^{(2)} = -\sum_{k \ne i} L_{ik}^{(2)}\). Higher-order generalized Laplacians for k > 2 can be defined similarly72.
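In code, both generalized Laplacians follow directly from these definitions; a minimal array-based sketch (function and variable names are of our choosing):

```python
import numpy as np

def generalized_laplacians(A1, A2):
    """L(1) and L(2) from the adjacency matrix A1 (N x N) and the
    three-body adjacency tensor A2 (N x N x N), as defined above."""
    # Usual Laplacian: L1_ij = delta_ij * sum_k A1_ik - A1_ij
    L1 = np.diag(A1.sum(axis=1)) - A1
    # Off-diagonal entries: L2_ij = -sum_k A2_ijk for i != j
    L2 = -A2.sum(axis=2)
    np.fill_diagonal(L2, 0.0)
    # Diagonal entries enforce the zero row-sum property
    np.fill_diagonal(L2, -L2.sum(axis=1))
    return L1, L2
```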

Crucially, we can show that the generalized Laplacians are sufficient for the stability analysis of cluster synchronization patterns provided that the clusters are nonintertwined73,74 (see Supplementary Note 1 for a mathematical derivation). Thus, in these cases, the problem reduces to applying the SBD algorithm to the set formed by the matrices {D(m)} (determined by the synchronization pattern) and {L(k)} (encoding the hypergraph structure). For the most general case, which includes intertwined clusters, SBD still provides the optimal reduction, as long as the generalized Laplacians are replaced by matrices that encode more nuanced information about the relations between different clusters75. The resulting SBD coordinates significantly simplify the calculation of Lyapunov exponents in Eq. (4) and can provide valuable insight into the origin of instability, as we show below.

As an application to nontrivial synchronization patterns, we study chimera states76,77 on hypergraphs. Here, chimera states are defined as spatiotemporal patterns that emerge in systems of identically coupled identical oscillators in which some oscillators are mutually synchronized while the others are desynchronized. For a comprehensive review of different notions of chimeras, see the recent survey by Haugland78.

The hypergraph in Fig. 2a consists of two subnetworks of optoelectronic oscillators. Each subnetwork is a simplicial complex, in which a node is coupled to its four nearest neighbors through pairwise interactions of strength σ1 and also participates in three-body interactions of strength σ2. The two subnetworks are all-to-all coupled through weaker links of strength κσ1, and in our simulations we take κ = 1/5. The individual oscillators are modeled as discrete maps \(x_i[t+1] = \beta \sin^2(x_i[t] + \pi/4)\), where β is the self-feedback strength, which is tunable in experiments79,80. For the pairwise interaction, we set \(H^{(1)}(x_i, x_j) = \sin^2(x_j + \pi/4) - \sin^2(x_i + \pi/4)\). For the three-body interaction, we set \(H^{(2)}(x_i, x_j, x_k) = \sin^2(x_j + x_k - 2x_i)\). The full dynamical equation of the system can be summarized as follows:

$$x_i[t+1] = \beta \sin^2\left(x_i[t] + \frac{\pi}{4}\right) + \sigma_1 \sum_{j=1}^{N} A_{ij}^{(1)} \left[\sin^2\left(x_j[t] + \frac{\pi}{4}\right) - \sin^2\left(x_i[t] + \frac{\pi}{4}\right)\right] + \sigma_2 \sum_{j=1}^{N} \sum_{k=1}^{N} A_{ijk}^{(2)} \sin^2\left(x_j[t] + x_k[t] - 2x_i[t]\right).$$
(6)

Since couplings in previous optoelectronic experiments are implemented through a field-programmable gate array that can realize three-body interactions, we expect that our predictions below can be explored and verified experimentally on the same platform.
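For reference, one iteration of Eq. (6) can be written directly in array form. The following non-optimized sketch evaluates the O(N³) three-body sum explicitly, which is unproblematic at the system size considered here (N = 14):

```python
import numpy as np

def step(x, A1, A2, beta, sigma1, sigma2):
    """One iteration of Eq. (6) for all N oscillators at once."""
    f = np.sin(x + np.pi / 4) ** 2
    # Pairwise diffusive coupling: sum_j A1_ij [f(x_j) - f(x_i)]
    pair = A1 @ f - A1.sum(axis=1) * f
    # Three-body coupling: sum_{j,k} A2_ijk sin^2(x_j + x_k - 2 x_i)
    g = np.sin(-2 * x[:, None, None] + x[None, :, None] + x[None, None, :]) ** 2
    triple = np.einsum('ijk,ijk->i', A2, g)
    return beta * f + sigma1 * pair + sigma2 * triple
```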

Fig. 2: Chimera states arising from nonpairwise interactions.

a Two identical subnetworks (C1 and C2) of optoelectronic oscillators with strong intracluster connections (black lines) and weak intercluster connections (gray lines). The three-body interactions are indicated by 2-simplices (beige triangles). The eight dynamical clusters that form the chimera state are indicated by different node colors. b Common block structure of the matrices in the variational equation (4) revealed by the SBD algorithm, in which nonzero entries are represented by solid circles. The gray block corresponds to perturbations parallel to the synchronization manifold, and the pink blocks represent perturbations transverse to the synchronization manifold. Thus, only the pink blocks need to be considered in the stability analysis. For the network in a, the transverse perturbations are all localized within the subnetwork C1. c Linear stability analysis of chimera states based on the SBD coordinates for a range of the pairwise interaction strength σ1 and three-body interaction strength σ2. Chimeras are stable when the maximum transverse Lyapunov exponent Λ is negative, and they occur only in the presence of nonvanishing three-body interactions. d Chimera dynamics for σ1 = 0.6 and σ2 = 0.4 (green dot in c). Here, xi is the dynamical state of the ith oscillator, and the vertical axis indexes the oscillators in the respective subnetworks.

To characterize chimera states for which one subnetwork is synchronized and the other is incoherent, we are confronted with 10 noncommuting matrices in Eq. (4). Eight of them are {D(1), ⋯, D(8)}, corresponding to one dynamical cluster with 7 synchronized nodes and seven dynamical clusters with 1 node each (distinguished by colors in Fig. 2a). The other two matrices are {L(1), L(2)}, which describe the pairwise and three-body interactions, respectively. Applying the SBD algorithm to these matrices reveals the common block structure depicted in Fig. 2b. The gray block corresponds to perturbations parallel to the cluster synchronization manifold and does not affect the chimera stability. The other blocks control the transverse perturbations (all localized within the synchronized subnetwork C1) and are included in the stability analysis. This allows us to focus on one 1 × 1 block at a time and to efficiently calculate the maximum transverse Lyapunov exponent (MTLE) Λ of the chimera state using a previously established procedure81,82.
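In terms of the sketches above, obtaining the SBD coordinates is then a one-liner (variable names hypothetical; labels encodes the eight clusters of Fig. 2a):

```python
# L1, L2 from generalized_laplacians(A1, A2); labels assigns nodes to clusters
P = sbd(cluster_indicators(labels) + [L1, L2])
# Each P.T @ M @ P now exhibits the common block structure of Fig. 2b
```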

For the system in Fig. 2, SBD coordinates offer not only dimension reduction but also analytical insights. As we show in Supplementary Note 2, because the transverse blocks (colored pink in Fig. 2b) found by the SBD algorithm are all 1 × 1, the Lyapunov exponents associated with chimera stability are given by a simple formula,

$$\Lambda_i = \ln\left|1 - \frac{\sigma_1}{\beta}\left(\lambda_i^{(1)} + \frac{\kappa N}{2}\right)\right| + \Gamma,$$
(7)

where \(\lambda_i^{(1)}\) is the scalar inside the ith transverse block of L(1) after the SBD transformation. Here,

$$\Gamma = \lim_{\mathcal{T} \to \infty} \frac{1}{\mathcal{T}} \sum_{t=1}^{\mathcal{T}} \ln\left|\mathrm{J}F(s[t])\right| = \lim_{\mathcal{T} \to \infty} \frac{1}{\mathcal{T}} \sum_{t=1}^{\mathcal{T}} \ln\left|\beta \sin\left(2s[t] + \frac{\pi}{2}\right)\right|$$
(8)

is a finite constant determined by the synchronous trajectory s[t] of the coherent subnetwork C1, which in turn is influenced by both σ1 and σ2.

Using Eqs. (7) and (8), we can calculate the MTLE in the σ1σ2 parameter space to map out the stable chimera region. As can be seen from Fig. 2c, where we fix β = 1.5, chimera states are unstable when oscillators are coupled only through pairwise interactions (i.e., when σ2 = 0), but they become stable in the presence of three-body interactions of intermediate strength. Figure 2d shows the typical chimera dynamics for β = 1.5, σ1 = 0.6, and σ2 = 0.4. According to Eqs. (7) and (8), the higher-order interaction stabilizes chimera states solely by changing the dynamics in the incoherent subnetwork C2, which in turn influences the synchronous trajectory in C1 and thus the value of Γ. This insight highlights the critical role played by the incoherent subnetwork in determining chimera stability82.
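Numerically, Eqs. (7) and (8) reduce the stability computation to scalar operations on a precomputed synchronous trajectory s[t]. A sketch under the Fig. 2 setup (β = 1.5, κ = 1/5, N = 14; the trajectory s_traj and the transverse scalars λi must be obtained separately):

```python
import numpy as np

def mtle(lambdas, s_traj, sigma1, beta=1.5, kappa=0.2, N=14):
    """Maximum transverse Lyapunov exponent from Eqs. (7) and (8).
    `lambdas` holds the scalars in the transverse 1x1 blocks of L(1)."""
    # Eq. (8): Gamma as a finite-time average over the synchronous trajectory
    gamma = np.mean(np.log(np.abs(beta * np.sin(2 * s_traj + np.pi / 2))))
    # Eq. (7): one exponent per transverse block; the largest decides stability
    lam = np.asarray(lambdas)
    return np.max(np.log(np.abs(1 - sigma1 / beta * (lam + kappa * N / 2))) + gamma)
```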

To test the complexity reduction capability of the SBD algorithm systematically, we consider networks consisting of M dynamical clusters, each with n nodes (Fig. 3a), such that:

  1. each cluster is a random subnetwork with link density p, to which three-body interactions are added by transforming triangles into 2-simplices;

  2. two clusters are either all-to-all connected (with probability q > 0) or fully disconnected from each other (with probability 1 − q).

For the analysis of the M-cluster synchronization state in these networks, the reduction in computational complexity yielded by the SBD algorithm can be measured using \(r^{(\alpha)} = \sum_i n_i^\alpha / N^\alpha\), where ni is the size of the ith common block for the transformed matrices. If the computational complexity of analyzing Eq. (4) in its original form scales as \(\mathcal{O}(N^\alpha)\), then r(α) gives the fraction of time needed to analyze Eq. (4) in its decoupled form under the SBD coordinates. Given that the computational complexity of finding the Lyapunov exponents for a fixed point in an n-dimensional space typically lies between \(\mathcal{O}(n^2)\) and \(\mathcal{O}(n^3)\), here we set α = 3 as a reference for the more challenging task of calculating the Lyapunov exponents for periodic or chaotic trajectories.
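Given the common block sizes ni, the measure itself is a one-line computation; for example:

```python
def reduction_ratio(block_sizes, alpha=3):
    """r^(alpha) = sum_i n_i^alpha / N^alpha for common blocks of sizes n_i."""
    N = sum(block_sizes)
    return sum(n ** alpha for n in block_sizes) / N ** alpha

# Example: blocks of sizes 5, 10, and 20 in a 35-dimensional problem give
# r^(3) = (5**3 + 10**3 + 20**3) / 35**3 ≈ 0.213
```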

Fig. 3: Complexity reduction in the analysis of synchronization patterns in hypergraphs.

a Example of a hypergraph consisting of M = 5 clusters, each with n = 7 nodes. Inside each cluster there are pairwise interactions (black lines) and three-body interactions (beige triangles). Two clusters are either all-to-all connected (gray lines) or fully disconnected. b Reduction in computational complexity achieved by the SBD algorithm for cluster size n = 7, intracluster link density p = 0.5, and intercluster link density q = 0.5 as the cluster number M is varied. The box covers the range 25th–75th percentile, the whiskers mark the range 5th–95th percentile, and the dots indicate the remaining 10% outliers. Each boxplot is based on 1000 independent network realizations.

In Fig. 3b, we apply the SBD algorithm to {D(1), ⋯, D(M), L(1), L(2)} and plot r(3) against the number of clusters M in the networks. We see a reduction in complexity of at least two orders of magnitude (r(3) ≤ 10−2) for M ≥ 10. This reduction does not depend sensitively on the other parameters in our model (n, p, and q).

Synchronization patterns in multilayer and temporal networks

The coexistence of different types (i.e., layers) of interactions in a network9,10,83 can dramatically influence underlying dynamical processes, such as percolation84,85, diffusion86,87, and synchronization28,88,89. Multilayer networks of N oscillators diffusively coupled through K different types of interactions can be described by

$$\mathbf{x}_i[t+1] = \mathbf{F}(\mathbf{x}_i[t]) - \sum_{k=1}^{K} \sigma_k \sum_{j=1}^{N} L_{ij}^{(k)} \mathbf{H}^{(k)}(\mathbf{x}_j[t]),$$
(9)

where L(k) is the Laplacian matrix representing the links that mediate interactions of the form H(k) with coupling strength σk. It is easy to see that the corresponding variational equation for a given synchronization pattern32,90 is a special case of Eq. (4) and can be readily addressed using the SBD framework.

Temporal networks11 are another class of systems that can naturally be addressed using our SBD framework. Such networks are ubiquitous in nature and society91,92, and their time-varying nature has been shown to significantly alter many dynamical characteristics, including controllability91,93 and synchronizability94,95,96,97.

Consider a temporal network whose connection pattern at time t is described by L(t),

$$\mathbf{x}_i[t+1] = \mathbf{F}(\mathbf{x}_i[t]) - \sigma \sum_{j=1}^{N} L_{ij}^{(t)} \mathbf{H}(\mathbf{x}_j[t]).$$
(10)

Here, the stability analysis of synchronization patterns can be simplified by simultaneously block diagonalizing {D(m)} and {L(t)}. This framework generalizes existing master stability methods for synchronization in temporal networks98, which assume that synchronization is global and that all the L(t) commute with each other. We also do not require a separation of time scales between the evolution of the network structure and the internal dynamics of the oscillators, which was assumed in various previous studies in exchange for analytical insights30,31. It is worth noting that {L(t)} can in principle contain infinitely many different matrices. This would pose a challenge to the SBD algorithm unless there are relations among the matrices that can be exploited. Here, for simplicity, we assume that the L(t) are selected from a finite set of matrices. This class of temporal networks is also referred to as switched systems in the engineering literature and has been widely studied29.
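As an illustration, Eq. (10) for such a switched system can be iterated by cycling L(t) through a finite set of Laplacians. A minimal sketch, using the optoelectronic map and pairwise coupling function from the example above as node dynamics:

```python
import numpy as np

def simulate_switched(x0, laplacians, sigma, beta, T):
    """Iterate Eq. (10) with L(t) cycling through a finite set."""
    F = lambda x: beta * np.sin(x + np.pi / 4) ** 2   # node dynamics
    H = lambda x: np.sin(x + np.pi / 4) ** 2          # coupling function
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for t in range(T):
        L = laplacians[t % len(laplacians)]  # alternating configurations
        x = F(x) - sigma * (L @ H(x))
        traj.append(x.copy())
    return np.array(traj)
```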

As an application, we characterize chimera states on a temporal network that alternates between two different configurations. Figure 4a illustrates the temporal evolution of the network, which has intracluster coupling of strength σ and intercluster coupling of strength κσ (again with κ = 1/5 and the same optoelectronic map and pairwise interaction function as in Fig. 2). This system has a variational equation with noncommuting matrices {D(1), ⋯, D(6), L(1), L(2)}, where L(1) and L(2) correspond to the network configurations at odd and even t, respectively. Applying the SBD algorithm reveals one 6 × 6 parallel block and two 2 × 2 transverse blocks (Fig. 4b), effectively reducing the dimension of the stability analysis problem from 10 to 2.

Fig. 4: Chimera states on a temporal network.

a Two identical subnetworks of optoelectronic oscillators with strong intracluster connections (black lines) and weak intercluster connections (gray lines). The network structure switches back and forth between two different configurations. The six dynamical clusters that form the chimera state are indicated by different node colors. b Common block structure of the matrices in the variational equation (4) under the SBD coordinates. The entries of the transformed matrices that are not required to be zero are represented by solid circles. The gray block corresponds to perturbations that do not affect the chimera stability, and the pink blocks represent transverse perturbations that determine the chimera stability. c Linear stability analysis of chimera states based on the SBD coordinates for a range of coupling strength σ and self-feedback strength β. Chimeras are stable when the maximum transverse Lyapunov exponent Λ is negative. d Chimera dynamics for σ = 0.9 and β = 1.1 (green dot in c). Here, xi is the dynamical state of the ith oscillator, and the vertical axis indexes the oscillators in the respective subnetworks marked in a.

Despite the transverse blocks not being 1 × 1, by looking at the transformation matrix P one can still gather insights about the nature of the instability. For example, the first pink block consists of transverse perturbations (localized in the synchronized subnetwork) of the form (a, 0, −a, b, −b), while perturbations in the second pink block are constrained to be (c, −2(c + d), c, d, d). Depending on which block becomes unstable first, the synchronized subnetwork (and thus the chimera state) loses stability through different routes. The chimera region based on the MTLE calculated under the SBD coordinates is shown in Fig. 4c, and the typical chimera dynamics for σ = 0.9 and β = 1.1 are presented in Fig. 4d.

To further demonstrate the utility of the SBD framework, we systematically consider temporal networks that alternate between two different configurations. The network construction is similar to that in Fig. 3, except that here each cluster has time-varying instead of nonpairwise interactions. In the example shown in Fig. 5a, each cluster has red links active at odd t and blue links active at even t, while the black links are always active. Figure 5b confirms that the SBD algorithm consistently leads to a substantial reduction in computational complexity. Moreover, as in the case of hypergraphs (Fig. 3), the complexity reduction increases as the number of clusters M is increased. Again, the results do not depend sensitively on cluster size and link densities.

Fig. 5: Complexity reduction in the analysis of synchronization patterns in temporal networks.

a Example of a temporal network consisting of M = 5 clusters, each with n = 7 nodes. In each cluster, an expected 20% of the links are temporal (connections alternate between the blue and the red links) and the remaining 80% are static (black links). Two clusters are either all-to-all connected (gray lines) or fully disconnected. b Reduction in computational complexity achieved by the SBD algorithm for cluster size n = 7, intracluster link density p = 0.5, and intercluster link density q = 0.5 as the cluster number M is varied. The box covers the range 25th–75th percentile, the whiskers mark the range 5th–95th percentile, and the dots indicate the remaining 10% outliers. Each boxplot is based on 1000 independent network realizations.

Conclusion

In this work, we established SBD as a versatile tool to analyze complex synchronization patterns in generalized networks with nonpairwise, multilayer, and time-varying interactions. The method can be easily applied to other dynamical processes, such as diffusion87, random walks60, and consensus99. Indeed, the equations describing such processes on generalized networks often involve two or more noncommuting matrices, whose SBD naturally leads to an optimal mode decoupling and the simplification of the analysis.

The usefulness of our framework also extends beyond the generalized networks discussed here. Many real-world networks are composed of different types of nodes and can experience nonidentical delays in the communications among nodes. These heterogeneities can be represented through additional matrices and are automatically accounted for by our SBD framework in the stability analysis100. Finally, we suggest that our results may find applications beyond network dynamics, since SBD is also a powerful tool to address other problems involving multiple matrices in which dimension reduction is desired, such as independent component analysis and blind source separation101,102. The flexibility and scalability of our framework make it adaptable to various practical situations, and we thus expect it to facilitate the exploration of collective dynamics in a broad range of complex systems.

Methods

Optimality of the common block structure discovered by the SBD algorithm

Given a set of symmetric matrices \(\mathcal{B} = \{\mathbf{B}^{(1)}, \mathbf{B}^{(2)}, \ldots, \mathbf{B}^{(\mathscr{L})}\}\), let \(\mathbf{B} = \sum_{\ell=1}^{\mathscr{L}} \xi_\ell \mathbf{B}^{(\ell)}\), where the ξℓ are random coefficients. Without loss of generality, we can assume all matrices B(ℓ) to be in their finest common block form. Our goal is then to prove that, when there is no degeneracy, each eigenvector vi of B is localized within a single (square) block, meaning that the indices of the nonzero entries of vi are limited to the rows of one of the common blocks shared by {B(ℓ)} (Fig. 6).

Fig. 6: Illustration of a localized eigenvector.

The vector vi is localized within the green block of the matrix. The nonzero entries of the matrix and the vector are represented as solid circles.

We first notice that B inherits the common block structure of {B(ℓ)}. Thus, for each ni × ni block shared by {B(ℓ)}, we can always find ni eigenvectors of B that are localized within that block. When the eigenvalues of B are nondegenerate, the eigenvectors are unique up to scale, and thus all N = ∑i ni eigenvectors of the matrix B are localized within individual blocks.

Based on the results above, it follows that after computing the eigenvectors vi of the matrix B (step i of the SBD algorithm) and sorting them according to their associated blocks (steps ii and iii of the SBD algorithm), the resulting orthogonal matrix P = [vϵ(1), ⋯, vϵ(N)] will reveal the finest common block structure. Here, finest means that the number of common blocks is maximal (equivalently, the sizes of the blocks are minimal), and the block sizes are unique up to permutations.

In the presence of degeneracies (i.e., when there are distinct eigenvectors with the same eigenvalue), no theoretical guarantee can be given that the strategy above will find the finest SBD39. To see why, consider the matrices B(ℓ) = diag(b(ℓ), b(ℓ), …, b(ℓ)) formed by the direct sum of duplicate blocks. In this case, a generic B has eigenvalues with multiplicity M, where M is the number of duplicate blocks. For example, if u is an eigenvector corresponding to the first block of B, then \((\xi_1 \mathbf{u}^\intercal, \ldots, \xi_M \mathbf{u}^\intercal)^\intercal\) is also an eigenvector of B (with the same eigenvalue) for any set of random coefficients {ξm}. As a result, the eigenvectors of B are no longer guaranteed to be localized within a single block.

Generating random matrices with predefined block structures

In order to compare the computational costs of different SBD algorithms, we generate sets of random matrices with predefined common block structures. For each set, we start with \(\mathscr{L} = 10\) matrices of size N. The ℓth matrix is constructed as the direct sum of smaller random matrices, \(\mathbf{B}^{(\ell)} = \text{diag}(\mathbf{b}_1^{(\ell)}, \ldots, \mathbf{b}_M^{(\ell)})\), where the \(\mathbf{b}_m^{(\ell)}\) are symmetric matrices with entries drawn from the Gaussian distribution \(\mathcal{N}(0, 1)\). The size of the mth block \(\mathbf{b}_m^{(\ell)}\) is chosen randomly between 1 and N/2 and is set to be the same for all ℓ. We then apply a random orthogonal transformation Q to \(\mathcal{B} = \{\mathbf{B}^{(1)}, \mathbf{B}^{(2)}, \ldots, \mathbf{B}^{(\mathscr{L})}\}\), which results in a matrix set \(\widetilde{\mathcal{B}} = \{\widetilde{\mathbf{B}}^{(1)}, \widetilde{\mathbf{B}}^{(2)}, \ldots, \widetilde{\mathbf{B}}^{(\mathscr{L})}\}\) with no apparent block structure in \(\widetilde{\mathbf{B}}^{(\ell)} = \mathbf{Q}^\intercal \mathbf{B}^{(\ell)} \mathbf{Q}\). Finally, the SBD algorithms are applied to \(\widetilde{\mathcal{B}}\) to recover the common block structure. All tests are performed on Intel Xeon E5-2680 v3 processors, and the CPU time used by each algorithm is recorded using the timeit function from MATLAB.
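A Python sketch of this construction (the benchmarks themselves use MATLAB; here the block sizes are passed explicitly and the random orthogonal matrix is obtained from a QR decomposition):

```python
import numpy as np

def random_block_set(block_sizes, L=10, seed=None):
    """Generate L symmetric matrices sharing a hidden common block
    structure, as in the benchmark described above."""
    rng = np.random.default_rng(seed)
    N = sum(block_sizes)
    # Random orthogonal Q via QR decomposition of a Gaussian matrix
    Q, _ = np.linalg.qr(rng.normal(size=(N, N)))
    mats = []
    for _ in range(L):
        B = np.zeros((N, N))
        i = 0
        for n in block_sizes:
            b = rng.normal(size=(n, n))
            B[i:i + n, i:i + n] = (b + b.T) / 2  # symmetric random block
            i += n
        mats.append(Q.T @ B @ Q)  # hide the block structure
    return mats
```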