Memory selection and information switching in oscillator networks with higher-order interactions

We study the dynamics of coupled oscillator networks with higher-order interactions and their ability to store information. In particular, the fixed points of these oscillator systems consist of two clusters of oscillators that become entrained at opposite phases, mapping easily to information more commonly represented by sequences of 0's and 1's. While $2^N$ such fixed point states exist in a system of $N$ oscillators, we find that a relatively small fraction of these are stable, as chosen by the network topology. To understand the memory selection of such oscillator networks, we derive a stability criterion to identify precisely which states are stable, i.e., which pieces of information are supported by the network. We also investigate the process by which the system can switch between different stable states when a random perturbation is applied that may force the system into the basin of attraction of another stable state.


I. INTRODUCTION
Collective behavior in ensembles of network-coupled dynamical units is an important area of research due to a wide range of both natural and engineered applications [1,2]. Examples include cardiac pacemakers [3], synthetic cell engineering [4], and power grid dynamics [5,6]. Moving beyond such a system's ability to either remain incoherent or synchronize into a single entrained group, various combinations of dynamical and structural properties may give rise to more complicated synchronization patterns, for instance when oscillators form coexisting synchronized and incoherent groups known as chimera states [7] or spontaneously partition themselves into distinct clusters that remain entrained within but do not synchronize with one another [8]. Of particular interest here are patterns where phase oscillators form multiple entrained groups. In the illustrative case of two groups, which will be our focus in this work, oscillator systems can be used as models for information storage, treating oscillators that entrain to one or the other group as a 0 or 1, i.e., a bit, in a piece of information [9][10][11][12]. Given the mappings between phase and integrate-and-fire oscillators [13], understanding the dynamics that give rise to such patterns may shed light on cognitive function and memory [14][15][16][17][18].
In addition to the collective behavior seen in network dynamics that emerge from classical pair-wise interactions, a great deal of interest has recently been paid to higher-order interactions, i.e., interactions that take place between three or more units (and are fundamentally different from linear combinations of pair-wise interactions) [19][20][21]. A large outstanding question in this direction is: What novel effects do such higher-order interactions have on macroscopic behavior? Phase reduction techniques have already shown that such interactions take place in generic oscillator systems [24,25], and empirical data suggest that they may play an important role in brain dynamics [26][27][28]. In particular, three-way interactions may describe correlations in neuronal spiking activity in the brain, providing a missing link between structure and function [29]. Some results have been developed for treating synchronization of identical (possibly chaotic) oscillators [22,23], and higher-order interactions between phase oscillators have been investigated in a handful of studies [30][31][32][33][34], but we demonstrated in recent work that such interactions naturally enable systems to store memory and information via the kinds of patterns described above [35,36].
In this paper we study the process by which memory is generated in oscillator systems with non-trivial network topologies and higher-order interactions. We find that, compared to the all-to-all coupled case studied in Refs. [35,36], when an underlying network topology constrains the interactions, the memory capacity of the system, quantified by the number of stable fixed point states the dynamics supports, is also constrained. Specifically, we identify precisely which oscillator states, i.e., pieces of information, are supported by a given network topology using a linear stability criterion. We then study the process by which the system switches from one piece of information to another as perturbations are applied to the different stable states and may move the system into the basin of attraction of a different state.
The remainder of this paper is organized as follows. In Sec. II we present the model and discuss the connection between stable states and memory. In Sec. III we present a stability analysis and explore memory capacity in oscillator networks with higher-order interactions with an illustrative example. In Sec. IV we investigate the process by which the system can move between different stable states under random perturbations, thereby switching between pieces of information. In Sec. V we explore the effects of network structure and investigate the asymmetry of stable states. In Sec. VI we conclude with a discussion of our results.

II. THE 2-SIMPLEX OSCILLATOR MODEL
We begin by considering an extension of the Kuramoto model with higher-order interactions, specifically three-way, or 2-simplex, interactions [37]. The all-to-all coupled case was treated in detail in Refs. [35,36], but when placed on a nontrivial network topology the system is governed by the following equations of motion for $N$ oscillators:

$$\dot{\theta}_i = \omega_i + \frac{K}{2}\sum_{j=1}^{N}\sum_{l=1}^{N} B_{ijl}\sin(\theta_j + \theta_l - 2\theta_i), \qquad (1)$$

where $\theta_i$ and $\omega_i$ are the phase and natural frequency, respectively, of oscillator $i$ and $K$ is the global coupling strength. Importantly, the coupling function is the sine of a linear combination of three oscillator phases, rather than two, and interactions are determined by the adjacency tensor $B$. We note that this coupling function is one of the two possibilities for 2-simplex interactions that arise from phase reduction. The other possibility, $\sin(2\theta_j - \theta_l - \theta_i)$, does not admit stable solutions with multiple clusters, and therefore does not support memory storage, so we focus on the coupling in Eq. (1). For simplicity we assume that the entries of $B$ are unweighted and undirected, so that $B_{ijl} = 1$ if a three-way interaction exists between oscillators $i$, $j$, and $l$ (and otherwise $B_{ijl} = 0$) and

$$B_{ijl} = B_{ilj} = B_{jil} = B_{jli} = B_{lij} = B_{lji}. \qquad (2)$$

This last property essentially states that interactions are not directed and, with the factor of 2 in the denominator of Eq. (1), each three-way interaction is counted exactly once. In principle, the higher-order structure encoded in $B$ may be defined in different ways depending on the application. In other words, the entries of $B$ may or may not depend on the entries of an underlying network consisting of nodes and links (i.e., 0- and 1-simplexes) encoded in an adjacency matrix $A$. However, here we consider the case where higher-order interactions are in fact dictated by the adjacency matrix $A$ (which we assume is unweighted and undirected) so that $B_{ijl} = A_{ij}A_{jl}A_{li}$, i.e., a higher-order interaction exists between a group of three nodes if they are connected in a triangle. The dynamics of Eq.
(1) naturally give rise to synchronized state patterns that entrain oscillators in two groups at opposite sides of the torus. To see this we introduce the set of local order parameters

$$r_i e^{i\psi_i} = \frac{1}{2}\sum_{j=1}^{N}\sum_{l=1}^{N} B_{ijl}\, e^{i(\theta_j + \theta_l)}, \qquad (3)$$

which allows us to rewrite Eq. (1) as

$$\dot{\theta}_i = \omega_i + K r_i \sin(\psi_i - 2\theta_i). \qquad (4)$$

First, by entering the rotating reference frame $\theta \to \theta + \bar{\omega}t$, where $\bar{\omega} = N^{-1}\sum_i \omega_i$ is the mean natural frequency, we may set the mean frequency to zero. Next, assuming that the dynamics relax to a stationary state, oscillator $i$ becomes phase-locked if $|\omega_i| \le K r_i$ (we will return to this condition later) and converges to one of two stable fixed points defined by

$$\theta_i^* = \frac{1}{2}\left[\psi_i + \arcsin\!\left(\frac{\omega_i}{K r_i}\right)\right] \quad\text{or}\quad \theta_i^* = \frac{1}{2}\left[\psi_i + \arcsin\!\left(\frac{\omega_i}{K r_i}\right)\right] + \pi. \qquad (5)$$

A suitable shift of initial conditions allows us to center the mean phases $\psi_i$ about zero, and under typical conditions we expect that phase-locked oscillators will have $\psi_i \approx 0$, which, combined with Eq. (4), yields two entrained groups of phase-locked oscillators: one about $\theta = 0$ and the other about $\theta = \pi$. While this demonstrates that phase-locked oscillators tend to fall into one of two possible groups, the presence of drifting oscillators poses an issue in terms of interpretation. To this end, we consider the dynamics of Eq. (1) in the regime where the coupling strength is sufficiently strong compared to the spread of frequencies, so that $K \gg |\omega_i|$ for all $i = 1, \dots, N$. By considering the rescaled time $\tau = Kt$ and approximating $\omega_i / K \approx 0$, Eq. (1) becomes

$$\frac{d\theta_i}{d\tau} = \frac{1}{2}\sum_{j=1}^{N}\sum_{l=1}^{N} B_{ijl}\sin(\theta_j + \theta_l - 2\theta_i), \qquad (6)$$

in which case all oscillators converge to either $0$ or $\pi$. Before we move on to the analysis of these states, it is worthwhile to discuss their representation of information. In particular, a fixed point of a system of $N$ oscillators will be given by some $\theta^* \in \mathbb{R}^N$, where each $\theta_i^* = 0$ or $\pi$. This is easily interpretable as a string of bits: for instance, in a system of $N = 5$ oscillators we may converge to the state $\theta^* = [0, 0, \pi, 0, \pi]^T$, which can be mapped to the sequence of bits (0, 0, 1, 0, 1).
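To make this construction concrete, the following minimal Python/numpy sketch builds $B$ from $A$ via $B_{ijl} = A_{ij}A_{jl}A_{li}$ and relaxes the strong-coupling dynamics of Eq. (6) to a fixed point. The function names are illustrative, and a complete graph on four nodes stands in for a realistic topology:

```python
import numpy as np

def simplex_tensor(A):
    """2-simplex adjacency tensor B_ijl = A_ij * A_jl * A_li: a three-way
    interaction exists iff nodes i, j, l are connected in a triangle."""
    return np.einsum('ij,jl,li->ijl', A, A, A)

def flow(theta, B):
    """Strong-coupling dynamics of Eq. (6):
    dtheta_i/dtau = (1/2) sum_{j,l} B_ijl sin(theta_j + theta_l - 2 theta_i)."""
    s = theta[None, :, None] + theta[None, None, :] - 2.0 * theta[:, None, None]
    return 0.5 * np.einsum('ijl,ijl->i', B, np.sin(s))

def integrate(theta0, B, dt=0.01, steps=20000):
    """Crude forward-Euler relaxation toward a fixed point."""
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        theta += dt * flow(theta, B)
    return np.mod(theta, 2.0 * np.pi)

# Complete graph on N = 4 nodes: every triple of nodes forms a triangle.
N = 4
A = np.ones((N, N), dtype=int) - np.eye(N, dtype=int)
B = simplex_tensor(A)

# A small random initial spread relaxes to a binary pattern; phases measured
# relative to oscillator 1 end up at 0 or pi (here, the synchronized pattern).
rng = np.random.default_rng(1)
theta = integrate(rng.normal(scale=0.1, size=N), B)
rel = np.mod(theta - theta[0] + np.pi, 2.0 * np.pi) - np.pi  # in [-pi, pi)
```

The relative phases `rel` converge to the binary values $0$ or $\pi$ that encode a bit string; for this fully symmetric toy network the attractor reached from a small spread is the uniform pattern.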
This manner of memory in oscillator systems is not dissimilar to that observed in neural models consisting of Ising spin particles, best exemplified in the seminal 1988 papers by Gardner [38] and Gardner and Derrida [39]. In these models information bits, i.e., 0's and 1's, are equivalent not to oscillator phases, but to Ising spins: pluses and minuses. The oscillator systems studied here can then be interpreted as a different model for capturing and studying a similar phenomenon.

III. STABILITY ANALYSIS
To understand the nature of the fixed point states of the dynamics given by Eq. (6), we begin by analyzing their asymptotic stability. Recall that we are interested in fixed points $\theta^* \in \mathbb{R}^N$ with entries $\theta_i^* = 0$ or $\pi$. First, $2^N$ such states exist. However, the dynamics are invariant under rotations, including rotation by $\pi$, i.e., $\theta \to \theta + \pi$, so without any loss of generality we "anchor" $\theta_1^* = 0$, leaving $2^{N-1}$ possible fixed point solutions. Then, given a fixed point $\theta^*$, its stability is governed by the eigenvalues of the Jacobian matrix $DF(\theta^*)$, whose entries are given by

$$DF(\theta^*)_{ij} = \sum_{l=1}^{N} B_{ijl}\cos(\theta_j^* + \theta_l^* - 2\theta_i^*) \;\;\text{for } j \neq i, \qquad DF(\theta^*)_{ii} = -\sum_{j=1}^{N}\sum_{l=1}^{N} B_{ijl}\cos(\theta_j^* + \theta_l^* - 2\theta_i^*). \qquad (7)$$

Due to the rotational invariance of the dynamics, $DF(\theta^*)$ is guaranteed to have one trivial eigenvalue $\lambda = 0$, which can also be seen by noting that the rows of $DF(\theta^*)$ all sum to zero, so $v \propto \mathbf{1}$ is an eigenvector with eigenvalue $\lambda = 0$. Therefore, if all other eigenvalues have strictly negative real part, i.e., are contained in the left-half complex plane, then $\theta^*$ is stable, and otherwise it is unstable. This gives us the machinery to evaluate the stability of each of the $2^{N-1}$ fixed points described above.
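As a concrete sketch of this machinery (in numpy; `jacobian` and `is_stable` are illustrative names, and the four-node complete graph is an assumed toy topology, not the example network of Fig. 1):

```python
import numpy as np

def jacobian(theta, B):
    """Jacobian DF(theta) of F_i = (1/2) sum_{j,l} B_ijl sin(theta_j + theta_l - 2 theta_i):
    DF_ij = sum_l B_ijl cos(theta_j + theta_l - 2 theta_i) for j != i, with the
    diagonal chosen so that every row sums to zero."""
    C = np.cos(theta[None, :, None] + theta[None, None, :]
               - 2.0 * theta[:, None, None])
    J = np.einsum('ijl,ijl->ij', B, C)
    np.fill_diagonal(J, 0.0)
    J -= np.diag(J.sum(axis=1))  # v = (1, ..., 1) is the trivial eigenvector
    return J

def is_stable(theta_star, B, tol=1e-8):
    """Stable iff all eigenvalues other than the trivial zero have negative real part."""
    lam = np.sort(np.linalg.eigvals(jacobian(theta_star, B)).real)
    return bool(lam[-1] < tol and lam[-2] < -tol)

# Four-node complete graph as a toy topology.
N = 4
A = np.ones((N, N), dtype=int) - np.eye(N, dtype=int)
B = np.einsum('ij,jl,li->ijl', A, A, A)

sync = np.zeros(N)                          # all oscillators at 0
half = np.array([0.0, 0.0, np.pi, np.pi])   # even split between 0 and pi
```

For this toy topology the uniform pattern is stable while the even split is not; on the network of Fig. 1 the same test distinguishes states such as (a) and (b).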
Before a more in-depth exploration of the stability of possible states we consider a small, illustrative example. Specifically, consider the network and two states depicted in Figs. 1(a) and (b), where blue and yellow filled nodes correspond to states $\theta_i = 0$ and $\pi$, respectively. (Here we index the nodes $i = 1, 2, \dots, N = 8$ in the clockwise direction, starting from the top, so we constrain $\theta_1 = 0$.) For the given network structure (where all nodes are part of at least one connected triangle and therefore take part in at least one higher-order interaction) the state depicted in (a) is stable, while the state in (b) is unstable. To see this we plot in Fig. 1(c) the eigenvalue spectra of the Jacobian matrices $DF(\theta^*)$ for the two states (a) and (b) using circles and crosses, respectively. Note that, with the exception of the trivial eigenvalue $\lambda = 0$ for each, the full spectrum for (a) is contained in the left-half complex plane, whereas four eigenvalues for (b) are located in the right-half complex plane. To test this stability criterion we directly simulate the evolution of a small perturbation $\delta\theta(t) = \theta(t) - \theta^*$, initially set at $\|\delta\theta(0)\| = 10^{-2}$, and plot the results in Fig. 1(d). Time series for perturbations to states (a) and (b) are plotted as thick solid and dashed curves, respectively, showing that the perturbations exponentially decay and grow until saturation, respectively. We also plot the time series for perturbations to all other states as solid or dashed curves based on the prediction from the stability criterion, finding 100% agreement.
The second important property that this example illustrates is a certain asymmetry in the collection of stable states. Note first that, of the ten total stable states, one has zero $\theta_i^* = \pi$ entries, four have one $\pi$ entry, four have two $\pi$ entries, and one has three $\pi$ entries. However, of all $2^{N-1} = 128$ possible fixed points, one has zero $\pi$ entries, seven have one $\pi$ entry, 21 have two $\pi$ entries, 35 have three $\pi$ entries, 35 have four $\pi$ entries, and so on. In particular, the majority of all fixed point states have a roughly even distribution of $0$ and $\pi$ entries, but when we restrict our attention to only the stable states we find a strong asymmetry: stable states tend to have an uneven distribution of $0$ and $\pi$ entries.
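The counts above are simply binomial coefficients: with $\theta_1^*$ anchored at $0$, the number of fixed points with exactly $k$ entries equal to $\pi$ is $\binom{N-1}{k}$. A quick check for $N = 8$:

```python
from math import comb

# With theta*_1 anchored at 0, the remaining N - 1 = 7 entries are free, so the
# number of fixed points with exactly k entries equal to pi is comb(7, k).
counts = [comb(7, k) for k in range(8)]
# counts == [1, 7, 21, 35, 35, 21, 7, 1], summing to 2**7 = 128
```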

IV. SWITCHING
Lastly, with the stability analysis above as a tool for identifying stable states of the system, i.e., supported pieces of information, we investigate the process by which the system may switch between different pieces of memory, i.e., stable states. What we describe here is not quite a heteroclinic network [40,41], but it is in many senses similar. Sticking with our example network illustrated above, none of the 10 stable states depicted in Fig. 2 are connected by a heteroclinic orbit, since they are all asymptotically stable. Instead, we again consider perturbations $\delta\theta$ to these stable states $\theta^*$, but now we allow the perturbations to not be arbitrarily small, letting $\|\delta\theta\| = O(1)$, so that the perturbed state $\theta^* + \delta\theta$ may fall outside of the basin of attraction of $\theta^*$ and eventually converge to another stable state.
More precisely, we consider the following setup. For each of the 10 stable states (a)-(j) depicted in Fig. 2, we introduce $10^3$ random perturbations of size $\|\delta\theta\| = 1.55$. Then, for each of these perturbations to state $j$, we find the fraction that eventually converge to each other state $i$, and let this fraction populate the entry $\pi_{ij}$ of the transition matrix $\Pi$. Thus, each entry $\pi_{ij}$ describes the rate at which perturbations of this size cause a switch in states $j \to i$. This transition network is by nature directed and weighted, and is illustrated in Fig. 3, with more strongly (weakly) weighted links drawn thicker (thinner). We have neglected to illustrate self-links, i.e., entries $\pi_{ii}$ that describe the rates at which a perturbation does not result in a switch in state. It should be noted that the choice $\|\delta\theta\| = 1.55$ is important; choosing perturbations too small eventually leads to no transitions from one state to another, while choosing perturbations too large eventually leads to much denser networks whose links reflect only the sizes of the basins of attraction of each state. The choice used here lies between these two extremes. In this example, for instance, we see that a few state pairs have particularly strong rates of transition, e.g., (d)→(c) and (j)→(i). This stems from the similarity of the states themselves (see Fig. 2, where it can be seen that these states differ only by a single oscillator). Switching between such similar states appears to exist between other pairs as well, e.g., (b)→(a), (f)→(e), and (h)→(g), but the strength of the transition is variable. Moreover, the uniform state (a) (where $\theta^* = 0$) is a sink from which the system cannot escape. This property is not particularly surprising for reasonable perturbations, since state (a) is the most linearly stable in the sense that its largest nontrivial eigenvalue is the most negative compared to those of the other states.
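The Monte Carlo estimate of $\Pi$ described above can be sketched as follows (a numpy sketch; for brevity it is applied to a four-node complete graph, whose single stable anchored pattern makes the resulting $\Pi$ a trivial $1 \times 1$ matrix, rather than to the 10-state example of Fig. 2, and a smaller perturbation size and trial count are used than in the text):

```python
import numpy as np

def flow(theta, B):
    """dtheta_i/dtau = (1/2) sum_{j,l} B_ijl sin(theta_j + theta_l - 2 theta_i)."""
    s = theta[None, :, None] + theta[None, None, :] - 2.0 * theta[:, None, None]
    return 0.5 * np.einsum('ijl,ijl->i', B, np.sin(s))

def relax(theta0, B, dt=0.02, steps=3000):
    """Forward-Euler relaxation to an attractor."""
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        theta += dt * flow(theta, B)
    return theta

def bit_pattern(theta):
    """Binary pattern relative to oscillator 1: True where a phase sits near pi."""
    rel = np.mod(theta - theta[0] + np.pi, 2.0 * np.pi) - np.pi
    return tuple(np.abs(rel) > np.pi / 2)

def transition_matrix(stable_states, B, n_pert=50, size=0.3, seed=0):
    """Estimate Pi[i, j]: fraction of random perturbations of norm `size`
    applied to stable state j that relax into stable state i."""
    rng = np.random.default_rng(seed)
    patterns = [bit_pattern(s) for s in stable_states]
    Pi = np.zeros((len(stable_states), len(stable_states)))
    for j, theta_star in enumerate(stable_states):
        for _ in range(n_pert):
            d = rng.normal(size=theta_star.size)
            d *= size / np.linalg.norm(d)          # fixed perturbation size
            final = bit_pattern(relax(theta_star + d, B))
            if final in patterns:                  # classify by binary pattern
                Pi[patterns.index(final), j] += 1.0
    return Pi / n_pert

# Four-node complete graph: the synchronized pattern is its only stable state,
# so every perturbation of this size relaxes back and Pi = [[1.0]].
N = 4
A = np.ones((N, N), dtype=int) - np.eye(N, dtype=int)
B = np.einsum('ij,jl,li->ijl', A, A, A)
Pi = transition_matrix([np.zeros(N)], B)
```

With several stable states and a perturbation size near the basin boundaries, the off-diagonal entries of `Pi` populate the kind of directed, weighted transition network shown in Fig. 3.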

V. NETWORK STRUCTURE AND STATE ASYMMETRY
Before concluding we investigate two aspects of the stability properties of the system. We begin by probing the effect of network structure on the overall stability of different states. Keeping in mind the combinatorial complexity of the number of possible states to probe (recall that for a network of $N$ oscillators we have $2^{N-1}$ states to examine), we maintain a relatively small network size, specifically $N = 12$, but study an ensemble of 800 soft spherical networks built using the hyperbolic graph generator [42]. In particular, we use a small but finite temperature of $T = 1$, guaranteeing that, while some long-range links exist, the networks are significantly clustered. (We discard any networks that are not connected by their triangles.) Moreover, while we use a target mean 1-simplex degree of $\langle k^{(1)} \rangle = 4$, we note that the mean 2-simplex degree $\langle k^{(2)} \rangle$, where $k_i^{(2)} = \frac{1}{2}\sum_{j,l} B_{ijl}$ is the number of triangles in which node $i$ participates, as well as its standard deviation, vary significantly from network to network.
We begin by investigating the overall stability of states by, for each network, calculating the fraction of all states that are stable. In Figs. 4(a) and (b) we plot the fraction of stable states for each network versus the corresponding mean and standard deviation of the 2-simplex degree. It is clear that the fraction of stable states a network admits correlates positively with both the mean and the standard deviation of the 2-simplex degree.
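The brute-force enumeration behind these fractions can be sketched as follows (restating the Jacobian-based criterion of Sec. III; the four-node complete graph is an assumed toy case, for which only the uniform pattern among its $2^{N-1} = 8$ anchored states is stable):

```python
import numpy as np
from itertools import product

def jacobian(theta, B):
    """Jacobian of F_i = (1/2) sum_{j,l} B_ijl sin(theta_j + theta_l - 2 theta_i)."""
    C = np.cos(theta[None, :, None] + theta[None, None, :]
               - 2.0 * theta[:, None, None])
    J = np.einsum('ijl,ijl->ij', B, C)   # off-diagonal entries (B_iil = 0)
    np.fill_diagonal(J, 0.0)
    J -= np.diag(J.sum(axis=1))          # rows sum to zero: trivial eigenvalue 0
    return J

def stable_fraction(B, tol=1e-8):
    """Fraction of the 2^(N-1) anchored binary fixed points that are stable."""
    N = B.shape[0]
    n_stable = 0
    for bits in product((0.0, np.pi), repeat=N - 1):
        theta = np.array((0.0,) + bits)  # anchor theta*_1 = 0
        lam = np.sort(np.linalg.eigvals(jacobian(theta, B)).real)
        # stable iff all eigenvalues except the trivial zero are negative
        if lam[-1] < tol and lam[-2] < -tol:
            n_stable += 1
    return n_stable / 2 ** (N - 1)

# Four-node complete graph: only the uniform (synchronized) pattern is stable.
N = 4
A = np.ones((N, N), dtype=int) - np.eye(N, dtype=int)
B = np.einsum('ij,jl,li->ijl', A, A, A)
frac = stable_fraction(B)
```

The exponential cost of this enumeration is the reason the ensemble study above is restricted to a small network size.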
We also investigate more closely the nature of the states that end up being stable. Recall that in the example depicted in Fig. 2 we observed a noticeable asymmetry. We now examine this by defining an asymmetry identifier for each state $\theta^*$ as

$$\eta(\theta^*) = \frac{1}{N}\left|\sum_{i=1}^{N} \chi_{\theta_i^* = 0} - \sum_{i=1}^{N} \chi_{\theta_i^* = \pi}\right|, \qquad (8)$$

where $\chi_{\theta_i^* = 0} = 1$ if $\theta_i^* = 0$ and $\chi_{\theta_i^* = 0} = 0$ otherwise (and similarly, $\chi_{\theta_i^* = \pi} = 1$ if $\theta_i^* = \pi$ and $\chi_{\theta_i^* = \pi} = 0$ otherwise). In particular, $\eta(\theta^*) = 1$ or $0$ in the cases that all $\theta_i^*$ share the same value or are evenly split between $0$ and $\pi$, respectively. Using this asymmetry identifier to organize the entire ensemble of possible states, we then evaluate, for all states $\theta^*$ that share an asymmetry identifier value $\eta(\theta^*)$, the fraction that are stable. In Fig. 5 we plot, as a function of $\eta(\theta^*)$, the fraction of stable states among those sharing the value $\eta(\theta^*)$, averaged over the ensemble of networks used above. (Error bars indicate one standard deviation above and below the mean.) We observe that the trend seen in our example in Fig. 2 persists, i.e., states that are more asymmetric are more likely to be stable.
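In code the identifier reduces to a few lines (numpy sketch; `asymmetry` is an illustrative name):

```python
import numpy as np

def asymmetry(theta_star):
    """Asymmetry identifier eta(theta*) = |#{theta_i* = 0} - #{theta_i* = pi}| / N:
    1 when all entries share one value, 0 for an even split between 0 and pi."""
    theta_star = np.asarray(theta_star, dtype=float)
    n_zero = np.count_nonzero(np.isclose(theta_star, 0.0))
    n_pi = np.count_nonzero(np.isclose(theta_star, np.pi))
    return abs(n_zero - n_pi) / theta_star.size
```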

VI. DISCUSSION
In this paper we have studied the process by which oscillator networks with higher-order interactions can select memory and information, often represented as sequences of bits, by stabilizing fixed points consisting of two groups of clustered oscillators at opposite phases. While oscillator networks have long been used as models for encoding and storing such information [16], such systems have often required fine-tuning and engineering of, e.g., individual coupling strengths, to attain a given state. However, the discovery that higher-order interactions naturally lead to a wide variety of such states provides a more natural formalism for information storage without such fine-tuning [35,36].
Here we have presented a stability analysis for determining precisely which of the $2^N$ potential states are stable. This selection depends on the underlying network topology that encodes the higher-order interactions between oscillators. Moreover, we have also used our stability analysis to study the process by which the oscillator network can switch between different stable states, as a perturbation can be applied to force the system into the basin of attraction of another stable state.