STABLE AND UNSTABLE PERIODIC ORBITS IN COMPLEX NETWORKS OF SPIKING NEURONS WITH DELAYS
Raoul-Martin Memmesheimer and Marc Timme

Abstract. Is a periodic orbit underlying a periodic pattern of spikes in a heterogeneous neural network stable or unstable? We analytically assess this question in neural networks with delayed interactions by explicitly studying the microscopic time evolution of perturbations. We show that in purely inhibitorily coupled networks of neurons with normal dissipation (concave rise function), such as common leaky integrate-and-fire neurons, all orbits underlying non-degenerate periodic spike patterns are stable. In purely inhibitorily coupled networks with strongly connected topology and normal dissipation (strictly concave rise function), they are even asymptotically stable. In contrast, for the same type of individual neurons, all orbits underlying such patterns are unstable if the coupling is excitatory. For networks of neurons with anomalous dissipation ((strictly) convex rise function), the reverse statements hold. For the stable dynamics, we give an analytical lower bound on the local size of the basin of attraction. Numerical simulations of networks with different integrate-and-fire type neurons illustrate our results.


1. Introduction. Stable and unstable invariant sets are fundamental for the time evolution of nonlinear dynamical systems [20]. In particular, stable invariant sets constitute attractors and thus characterize the long-time behavior of such systems. Unstable invariant sets also have a strong impact on a system's dynamics. For instance, chaotic saddles [14] may control transient dynamics, and unstable periodic orbits constitute the skeleton of chaotic attractors (and chaotic saddles) and determine the statistical and geometrical properties of their dynamics. Moreover, pulse-coupled systems may even be dominated by invariant sets that are unstable periodic orbits, yet attractors in the sense of Milnor [36,37,4]. Whereas stability of invariant sets in low-dimensional systems such as one- and two-dimensional maps and certain continuous time systems is well studied, stability properties of invariant sets in higher-dimensional systems are typically hard to access. Here we analytically study the stability of periodic orbits in networks of spiking neuron models with delayed interactions and arbitrary, generally complicated network connectivity. Periodic orbits in these systems imply that the generated patterns of spikes repeat periodically.
Recently, neurobiological experiments have yielded increasing evidence for the presence of precisely timed, often repeating patterns of spikes in different neuronal systems [3,32,30,15,23,2,12,31]. Their dynamical origin, however, is still unknown. In models, patterns of precisely timed spikes occur, e.g., as attractors of the dynamics of recurrent, essentially inhibitory neural networks with non-delayed interactions, if they are designed in a special way [24] or fully coupled [19]. In two recent publications [28,27], we presented a method to design neural networks with delayed couplings and arbitrary connectivity that exhibit an arbitrary predefined periodic pattern of spikes as invariant dynamics. This naturally raises the question whether the invariant dynamics and hence the stored patterns are stable or unstable against dynamical perturbations.
For spiking neural networks, stability has mostly been analyzed in small systems with a few neurons [29,41,10,39,11] or for relatively simple collective states, such as oscillatory and synchronous [38,10,11,8,7,35] or asynchronous states [1,8,16,7,40,21]. Only a few studies assess the stability properties of more complex invariant precise spiking dynamics in larger networks. The stability of a near-synchronous periodic pattern of spikes was shown in inhibitory networks with normal dissipation and in excitatory networks with anomalous dissipation in [9]. For homogeneous, fully connected inhibitorily coupled networks of common leaky integrate-and-fire neurons without delay, the stability of the 'splay state' [33], characterized by evenly spaced spike times of all network neurons, was analytically demonstrated recently [45]. Numerical investigations of weakly diluted networks show convergence of the dynamics to stable periodic orbits. Moreover, in globally coupled networks of leaky integrate-and-fire neurons with essentially inhibitory interaction and without delay, any non-degenerate periodic pattern of spikes has been shown to be stable [19]. In ref. [24], for a more general class of neuron models without delay, it was shown that a stored simple periodic pattern of spikes is even a global attractor. Finally, for networks with mixed excitatory and inhibitory couplings we showed numerically that the same spike pattern can be stable or unstable, depending on the specific network it is realized in [28,27].
In the present article, we analyze the stability of arbitrary non-degenerate periodic spike patterns, i.e. of patterns without simultaneous spike sendings and reception. Our results apply in particular to purely inhibitorily and purely excitatorily coupled neural networks with invariant periodic dynamics that give rise to such periodic spike patterns. We show that the patterns are either all stable or all unstable, depending only on the inhibitory or excitatory character of the couplings and the type of dissipation but not on the network connectivity. For asymptotically stable orbits, we present an analytical lower bound of the size of the basin of attraction. For inhibitory coupling this estimate depends on the spike timings only, while for excitatory coupling it is also influenced by the subthreshold dynamics of the network's neurons.
Figure 1. Phase dynamics in response to an incoming spike and interpretation of the transfer function H^m_{ε_ml}(φ_m). The rise function U_m of neuron m is plotted as a function of its phase φ_m. In the absence of interactions, φ_m(t) increases uniformly with time t according to Eq. (1). If neuron l sends a spike at time t, neuron m receives this spike at time t + τ_ml and in response experiences a jump U_m(φ_m((t + τ_ml)^-)) → U_m(φ_m((t + τ_ml)^-)) + ε_ml = U_m(φ_m(t + τ_ml)) in its membrane potential. Solving for the phase after the interaction yields the phase jump φ_m((t + τ_ml)^-) → φ_m(t + τ_ml) = U_m^{-1}(U_m(φ_m((t + τ_ml)^-)) + ε_ml) = H^m_{ε_ml}(φ_m((t + τ_ml)^-)), mediated by the transfer function Eq. (5).

As a particular result, any non-degenerate pattern is stable in networks of purely inhibitory neurons with normal dissipation (such as the common leaky integrate-and-fire neuron) as well as in networks of purely excitatory neurons with anomalous dissipation. The central idea underlying the proofs is that interactions between neurons cause weighted averaging of perturbations. The extremal perturbations then cannot increase in absolute value and the spike pattern is stable. If the network is strongly connected, i.e. its adjacency matrix is irreducible, the averaging takes place among the state variables of all the neurons. Therefore, the neurons' perturbations converge to the same final value and the sequence of spikings converges towards a sequence which is only shifted in time with respect to the original one: the spike pattern is asymptotically stable.
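The averaging mechanism can be illustrated with a minimal numerical sketch (a toy iteration of our own, not the article's network dynamics): replacing entries of a perturbation vector by weighted means never increases the extremal perturbations, and under an update pattern that keeps mixing all entries (the analogue of a strongly connected topology) all entries approach a common value.

```python
import random

# Toy illustration of the averaging argument (not the proof itself):
# at each "interaction", one entry of the perturbation vector is replaced
# by a weighted mean of itself and another entry, with weight c in (0, 1).
# The extremal perturbations then cannot grow, and because every entry
# keeps interacting with every other (via the cyclic update pattern),
# all entries converge to a common value.
random.seed(0)
delta = [random.uniform(-1.0, 1.0) for _ in range(5)]  # initial perturbations
initial_spread = max(delta) - min(delta)

for step in range(2000):
    l = step % len(delta)            # entry that "receives a spike"
    j = (step + 1) % len(delta)      # entry whose perturbation it averages with
    c = 0.7                          # averaging weight, c in (0, 1)
    delta[l] = c * delta[l] + (1.0 - c) * delta[j]

final_spread = max(delta) - min(delta)
print(initial_spread, final_spread)  # the spread shrinks toward zero
```

The spread max(δ) − min(δ) is non-increasing because each new entry is a convex combination of old entries; convergence to a common value corresponds to the asymptotic stability (up to a time shift) discussed above.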
In contrast, all non-degenerate periodic patterns in purely excitatory networks with normal dissipation or in purely inhibitory networks with anomalous dissipation are unstable. Here, the central idea is to track some particular, arbitrarily small initial perturbation and to show that it grows in magnitude until it is larger than some positive predefined constant value independent of its original size. We illustrate our results by numerical simulations of networks of leaky integrate-and-fire neurons [22] and biological oscillators as proposed by Mirollo and Strogatz [29]. Parts of the results were presented in [26].

2. The model.
2.1. Neuron model. Consider a network of N ≥ 2 oscillatory neurons interacting via directed delayed connections. The state of a neuron l ∈ {1, . . . , N } at time t ∈ R is characterized by one variable φ l (t) ∈ (B l , Θ l ], the phase of neuron l, that is piecewise affine in time and has a lower bound B l ∈ R − ∪ {−∞} and a finite upper bound, the threshold Θ l ∈ R + . In addition, there are state variables σ l (t) that carry information about the sending times of spikes; these will be introduced below.
In the absence of interactions, the phase of each neuron l ∈ {1, . . . , N } increases uniformly in time obeying dφ l /dt = 1.
The membrane potential of neuron l is computed from the phase by a strictly monotonically increasing rise function U_l as U_l(φ_l) [29], which maps (B_l, Θ_l] bijectively onto (B_U,l, Θ_U,l], where B_U,l = lim_{φ→B_l} U_l(φ) is the lower bound and Θ_U,l = U_l(Θ_l) the threshold of the potential. We assume that for all neurons l in the network, U_l is a C^1-diffeomorphism that is concave, strictly concave (normal dissipation), convex or strictly convex (anomalous dissipation). When φ_l reaches its threshold, i.e. φ_l(t^-) = Θ_l, it is reset, φ_l(t) = 0, and a spike is emitted that travels to the postsynaptic neurons. The phase threshold Θ_l therefore equals the free period of the neuron in the absence of recurrent interactions.
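The free dynamics just described can be sketched in a few lines (function name and parameter values are ours, chosen for illustration): with unit slope and reset to zero, the threshold equals the free period.

```python
# Sketch of the free (interaction-free) phase dynamics of one model neuron
# of Sec. 2.1: the phase increases with unit slope and is reset to zero on
# reaching the threshold theta, so theta equals the free period.
# Names and values are illustrative, not from the article.
def free_spike_times(theta, phi0, t_max):
    """Spike times of an uncoupled neuron with phase threshold theta and
    initial phase phi0 (0 <= phi0 <= theta), up to time t_max."""
    times = []
    t = theta - phi0          # first threshold crossing
    while t <= t_max:
        times.append(t)
        t += theta            # free period equals the phase threshold
    return times

print(free_spike_times(1.0, 0.25, 3.0))  # -> [0.75, 1.75, 2.75]
```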

2.2. Network interactions.
The spike arrives at some postsynaptic neuron m ∈ Post(l) after a delay time τ_ml, at time θ = t + τ_ml, inducing an instantaneous phase jump mediated by the transfer function H^m_{ε_ml}. Here, ε_ml denotes the strength of the coupling from neuron l to m, which equals the jump in neuron m's membrane potential in response to the incoming spike, cf. Fig. 1. The coupling is called inhibitory if ε_ml < 0 and excitatory if ε_ml > 0. If there is no connection from neuron l to neuron m, we have ε_ml = 0. We will sometimes concentrate on purely inhibitory networks with ε_ml ≤ 0 and purely excitatory networks with ε_ml ≥ 0 for all m, l ∈ {1, ..., N}. For both positive and negative ε, the transfer function is defined as H^m_ε(φ) = U_m^{-1}(U_m(φ) + ε) (5) for sub-threshold input, U_m(φ) + ε ≤ Θ_U,m. Equation (5) ensures that the jump in the phase φ(θ^-) → φ(θ) evokes a jump of size ε in the potential, cf. Fig. 1. Equation (6) covers supra-threshold input, U_m(φ) + ε > Θ_U,m, which is possible for positive ε: the neuron is immediately reset and a spike is emitted. Like U_m and U_m^{-1}, for sub-threshold input the transfer function H^m_ε(φ) is strictly monotonically increasing and differentiable both as a function of ε and of φ, where (H^m_ε)′(φ) denotes the derivative with respect to φ. To avoid technicalities that are not essential for the stability properties of periodic orbits, we focus on homogeneous delays τ_ml = τ for m, l ∈ {1, . . . , N}. Inhomogeneous delays complicate the indexing of spike patterns and the tracking of perturbations.
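The transfer function of Eq. (5) can be sketched for the leaky integrate-and-fire rise function introduced below (Sec. 2.3). All names and parameter values here are illustrative, and the supra-threshold case (Eq. (6)) is modeled as an immediate reset to phase zero, which is our reading of "immediately reset".

```python
import math

# Sketch of the phase transfer function H_eps(phi) = U^{-1}(U(phi) + eps)
# of Eq. (5), using the leaky integrate-and-fire rise function
# U(phi) = I/gamma * (1 - exp(-gamma*phi)) as an example. Supra-threshold
# input (U(phi) + eps >= Theta_U, possible for eps > 0) triggers an
# immediate reset, Eq. (6). Parameter values are illustrative only.
I, gamma, theta = 4.0, 1.0, 1.0           # input, dissipation, phase threshold

def U(phi):
    return I / gamma * (1.0 - math.exp(-gamma * phi))

def U_inv(u):
    return -math.log(1.0 - gamma * u / I) / gamma

theta_U = U(theta)                         # potential threshold Theta_U

def H(eps, phi):
    u = U(phi) + eps
    if u >= theta_U:                       # supra-threshold: reset, spike emitted
        return 0.0
    return U_inv(u)                        # sub-threshold phase jump, Eq. (5)

phi = 0.5
print(H(-0.5, phi) < phi)                  # inhibition retards the phase: True
print(H(+0.5, phi) > phi)                  # sub-threshold excitation advances it: True
```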

2.3. Examples. This general scheme covers as a special case the leaky integrate-and-fire neuron that is described by a rise function U_IF(φ) = I/γ (1 − exp(−γφ)), where I > 0 is a constant external input and γ characterizes the dissipation: γ > 0 for normal dissipation (U_IF strictly concave), γ < 0 for anomalous dissipation (U_IF strictly convex). Another important model covered is the biological oscillator model introduced by Mirollo and Strogatz [29] and generalized in ref. [27], with rise function U_MS(φ) = b^{-1} ln(1 + a^{-1} φ), where ab > 0. U_MS is strictly concave for b > 0 and strictly convex for b < 0 [27].
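The stated curvature of the two example rise functions can be checked numerically (our own discretization; parameter values are illustrative and respect the constraints I > 0 and ab > 0):

```python
import math

# Numerical curvature check for the two example rise functions:
# U_IF is strictly concave for gamma > 0 and strictly convex for gamma < 0;
# U_MS (with ab > 0) likewise for b > 0 / b < 0.
def U_IF(phi, I=2.0, gamma=1.0):
    return I / gamma * (1.0 - math.exp(-gamma * phi))

def U_MS(phi, a=1.0, b=1.0):
    return math.log(1.0 + phi / a) / b

def second_differences(f, lo=0.01, hi=0.9, n=50, h=1e-4):
    """Central second differences of f on a grid; their sign indicates
    convexity (positive) or concavity (negative)."""
    grid = [lo + (hi - lo) * k / (n - 1) for k in range(n)]
    return [f(x + h) - 2.0 * f(x) + f(x - h) for x in grid]

# gamma > 0: all second differences negative (strictly concave)
print(all(d < 0 for d in second_differences(lambda p: U_IF(p, gamma=1.0))))
# gamma < 0: all positive (strictly convex)
print(all(d > 0 for d in second_differences(lambda p: U_IF(p, gamma=-1.0))))
```

For U_MS the anomalous case needs a < 0 as well, so that ab > 0 is respected (e.g. a = b = −1).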
2.4. Dimensionality. Due to the delay, the state space is formally infinite-dimensional. However, under weak conditions there is a finite time t′ after which the dynamics becomes finite-dimensional [4]. We therefore consider times t > t′, where the network dynamics is completely determined by the phases φ_i(t) and by the spikes in transit. We assume that the spiking frequency of the neurons in the network is bounded above; in particular, there are no "Zeno" states [43]. This ensures that φ_l(t) is indeed piecewise affine in time and that a neuron can only send a limited number of spikes within a time interval of length of the delay time τ. Consequently, the number of spikes which have been sent at some time t but have not yet been received, the spikes "in transit" at time t, is bounded above by some N_sp ∈ N, cf. [4]. The state space thus has a finite topological dimension bounded above by N + N_sp.
2.5. Patterns of spikes. Sending and receiving of spikes are the only "events" occurring in these systems. These events interrupt the continuous linear time evolution and also constitute the only nonlinearities of the system. We here focus on the stability properties of periodic spike patterns with non-degenerate events, i.e. (i) all spikes are sent at non-identical times and (ii) received at non-identical times and (iii) neurons receiving a spike do not generate a new spike at the same time.
This covers almost all periodic patterns of spikes in the following sense: We consider the set of periodic spike patterns of length T > 0 containing M ∈ N spikes, each of which may be emitted by one of the N ∈ N neurons. There is a natural mapping from [0, T)^M × {1, ..., N}^M to this set of spike patterns, where the first components denote the times of spikes and the second components denote the sending neurons. We assume that [0, T)^M × {1, ..., N}^M is endowed with a continuous probability density which gives for each subset of patterns the probability that an element of this subset is randomly chosen to be realized in a network. The degenerate patterns then are a null subset; a random spike pattern is non-degenerate with probability one. We will thus concentrate on these non-degenerate patterns in the following. This in particular excludes spikes generated by supra-threshold excitation. Patterns including such spikes need to be studied separately, cf. [27]. Furthermore, for brevity of presentation, we exclude neurons that do not spike in the pattern.
2.6. Phase space. We now define the phase space similar to [4]. The state of the system at time t is characterized by the phases of the neurons at time t, φ_l(t) ∈ (B_l, Θ_l], l ∈ {1, . . . , N}, and the spikes sent by each neuron in the past. We number the sending times with respect to the order of their occurrence, t_i, i ∈ Z, where t_i < t_j for i < j, and denote the neuron sending the ith spike by s_i. The arrival time of the jth spike is denoted by θ_j = t_j + τ. In fact, not all sending times t_i ≤ t are necessary to characterize the state at time t but only those with t − τ < t_i ≤ t, or, in other words, the at most N_sp spikes in transit at time t. We introduce additional variables σ_j(t) within [0, ∞), which track the time elapsed since the sending of the jth spike until its arrival: When the spike is sent at time t_j, we set σ_j(t_j) = 0. (13) For times t with t_j < t < θ_j and t > θ_j, σ_j(t) increases uniformly with unit derivative, dσ_j/dt = 1, while at the arrival time θ_j the variable jumps by one, σ_j(θ_j) = σ_j(θ_j^-) + 1. (14) The original definition in [4] does not include such a jump. However, we will consider small perturbations that are in maximum norm smaller than one. This definition of σ_j(t) together with the definition of the norm in phase space ensures that trajectories in which the number of spikes in transit is different have large distance (at least one). If necessary, an arbitrarily large jump can be introduced. We tag the index of the spike arriving first after t by κ_0(t) = min{j | θ_j − t > 0} (15) and the index of the spike sent latest before t as κ_1(t) = max{j | t_j ≤ t}. (16) Then, κ_1(t) − κ_0(t) + 1 is the number of spikes in transit at t and if κ_1(t) − κ_0(t) + 1 > 0, κ_0(t), κ_1(t) label the spikes in transit to arrive first and latest after t. The future of the system is therefore fully determined by the phases φ_1(t), ..., φ_N(t) at time t and, if present, by the spikes in transit described by σ_{κ_1(t)}(t), ..., σ_{κ_0(t)}(t) together with the neurons s_{κ_1(t)}, ..., s_{κ_0(t)} that sent them.
From our assumption we have max_t(κ_1(t) − κ_0(t) + 1) ≤ N_sp, therefore we can define: The state of the system is given by the state space vectors in the state space X = (B_1, Θ_1] × ... × (B_N, Θ_N] × [0, ∞)^{N_sp} × {1, . . . , N}^{N_sp} with the neurons' phases as first N entries, the times since each of the past N_sp spikes has been sent as entries N + 1, ..., N + N_sp, and the indices of the neurons which sent these spikes as entries N + N_sp + 1, ..., N + 2N_sp. Addition and subtraction in the latter set of indices are defined modulo N. The number of variables which define the state of the system is N + 2N_sp. The topological dimension is N + N_sp, as the finite discrete set of indices does not contribute to it. We endow the state space with the maximum norm || . ||, which is in particular also taken over the last N_sp entries. Deviations in the discrete indices generate differences between state space vectors of at least one. This will be considered as a large perturbation. If necessary, the norm can be given arbitrarily large weight in the last N_sp entries. The norm ensures together with Eq. (14) that states with small distance at time t have the same number and kind of spikes in transit: The senders of the last N_sp spikes have to be identical, otherwise the perturbation is large due to our definition of the phase space norm. If some spike has arrived in the one state and not yet arrived in the other state, the perturbation is large due to Eq. (14).

Figure 2. Sample time evolution of a phase perturbation of neuron l relative to the unperturbed dynamics (gray) before and after sending a spike. Due to δ^{P(i)}_l < 0 and the free linear time evolution with slope one, the spike is sent about −ζ_i = −δ^{P(i)}_l later than in the unperturbed case, at t̃_i = t_i − ζ_i. It therefore also arrives about −ζ_i later than θ_i = t_i + τ, at θ̃_i = θ_i − ζ_i. Assuming a strictly concave rise function, due to the inhibitory coupling, neuron l's perturbation after the spike arrival is a weighted mean of the perturbation δ^{i−1}_l before the interaction and the spike's perturbation ζ_i.

The considered neural networks are autonomous systems with impulse effect, which can be described by equations of the form dx/dt = f(x(t)) for x(t) ∉ M, (18) x(t^+) = I(x(t^-)) for x(t^-) ∈ M, (19) where f : X\M → R^n, I : M → N, with state space X and M, N ⊂ X (cf. [5,6]). Between hits of the trajectory x(t) on the set M, it follows a smooth time evolution; this corresponds to the time evolution of the neural network between spike sendings or spike arrivals. When the trajectory hits the set M at times t = u_m, e.g. when a neuron reaches the threshold or a spike reaches the end of its run time given by the delay of the coupling, some jump event occurs.
3. Notation. We now characterize the network dynamics underlying a non-degenerate periodic spike pattern and its perturbation.
3.1. Unperturbed dynamics. First, we consider the unperturbed dynamics generating a non-degenerate periodic spike pattern of period T. The periodicity of the pattern implies that the phase dynamics is periodic with the same period [27]. Without loss of generality we can assume that the delay τ is smaller than T; otherwise we choose some sufficiently large integer multiple of T as new period of the pattern. The periodic spike pattern is characterized by the sending times t_i ∈ [0, T), i ∈ {1, ..., M}, of the M spikes within the first period and the corresponding sending neurons s_i. Sending events in different periods are related by t_i + nT = t_{i+nM} and s_i = s_{i+nM}, where n ∈ Z. We further define P(i), which is essentially the index of the spike arriving last before the sending time t_i, as P(i) = max{ max{j | θ_j < t_i}, κ_0(0) − 1 }. (20) It will prove useful to define P(i) as being at least as large as the index of the last spike to arrive before t = 0, since we will mainly consider the dynamics with t ≥ 0. Definition Eq. (20) also applies if the coupling strength ε_{s_i s_{P(i)}} is zero.
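The index P(i) can be sketched in the same 0-based convention as before (our own helper; the value −1 plays the role of κ_0(0) − 1 when no spike arrives before the reference time):

```python
# Sketch of the index P(i) of Sec. 3.1 (0-based variant): essentially the
# index of the spike arriving last before the sending time t_i, but at
# least the index of the last spike arriving before t = 0, i.e.
# P(i) = max( max{j | theta_j < t_i}, kappa_0(0) - 1 ).
def P(i, t_send, theta):
    last_before_ti = max((j for j, th in enumerate(theta) if th < t_send[i]),
                         default=-1)
    last_before_0 = max((j for j, th in enumerate(theta) if th < 0.0),
                        default=-1)
    return max(last_before_ti, last_before_0)

tau = 1.0
t_send = [0.2, 0.5, 0.9, 1.6, 2.4]
theta = [t + tau for t in t_send]
print(P(4, t_send, theta))   # spike arriving last before t_4 = 2.4
```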
3.2. Perturbed dynamics. We now consider another trajectory, defined by the state at time t = 0 ∈ (t_0, t_1), that we name perturbed dynamics. (Without loss of generality, we choose t = 0 to lie between t_0 and t_1; t = 0 is chosen so that there is no event precisely at this time.) We denote the perturbed phases by φ̃_l(t) for l ∈ {1, . . . , N} and specify them by the phases φ̃_l(0). The phase perturbations are δ_l(t) = φ̃_l(t) − φ_l(t). Further, we denote the perturbed spike variables by σ̃_j(t), where the spike perturbations are ζ_j(t) = σ̃_j(t) − σ_j(t). The specification of the perturbed state at time t = 0 is completed by fixing σ̃_j(0) for j ∈ {κ_1(0) − N_sp + 1, ..., κ_1(0)}. We want to study the future evolution of trajectories close to each other at some point of time. So, the state of the perturbed dynamics should have small distance to the unperturbed dynamics at time t = 0. Changes in the last N_sp discrete variables would imply large distances in the maximum norm on state space. Indeed, the perturbed dynamics should be interpretable as generated by applying a small perturbation to all phases and to all spikes in transit at time t = 0. (Perturbations to already arrived spikes still present in x(t) do not influence the future time evolution and can thus be specified arbitrarily.) Changes in the last N_sp variables, i.e. changes in the neurons that sent spikes, would be changes in the past and thus non-physical. We therefore consider perturbed trajectories where the neurons which sent the last N_sp spikes j ∈ {κ_1(0) − N_sp + 1, ..., κ_1(0)} are the same as in the unperturbed dynamics. Perturbations of the spike variables change the future spike arrival times; thus they are taken into account. The theorems and corollaries proven in the present article also hold when initial perturbations in the spike arrival times are excluded.
To track the future evolution of the perturbations after t = 0 and to compare it to the unperturbed trajectory, we now introduce some additional concepts and notations. Together with the phases φ̃_l(t) and spike-in-transit variables σ̃_i(t), the sending times t̃_i and the arrival times θ̃_i of the perturbed system are distinguished from the unperturbed ones by superscribing a tilde. Analogously, κ̃_0(t), κ̃_1(t) denote the spikes in transit to arrive first and latest after t in the perturbed system. As long as the perturbations are small, they do not alter the order of events, so this notation is appropriate and we need not introduce an extra symbol for the neuron that sent the ith spike in the perturbed dynamics. The perturbed trajectory x̃(t) for t ≥ 0 evolves according to the network time evolution described in the model section.
Because the order of events stays unchanged for the future evolution from t = 0 as long as the perturbations remain small, it is convenient to work with reduced phase space vectors where the unchanged part, containing the indices of sending neurons, is left out. We collect the perturbations of reduced vectors at time t into the reduced perturbation vector δ(t) and tag the minimal and maximal entries, δ_max(t) = max(δ(t)) and δ_min(t) = min(δ(t)).
It is of particular interest to consider unperturbed and perturbed trajectories with the same spikes in transit, where δ(t), δ_max(t), δ_min(t) are given by the phase perturbations δ_1(t), ..., δ_N(t) together with the spike perturbations ζ_{κ_1(t)−N_sp+1}(t), ..., ζ_{κ_1(t)}(t) and their extremal entries. In particular, the initial perturbation vector reads by definition δ(0) = (δ_1(0), ..., δ_N(0), ζ_{κ_1(0)−N_sp+1}(0), ..., ζ_{κ_1(0)}(0)).

3.3. Event based perturbations. The perturbations do not change in the intervals between events due to the piecewise linear time evolution. It is therefore convenient to introduce some notion of event based perturbation tracking for i ≥ κ_0(0) − 1 and sufficiently small perturbations that leave the order of events unchanged. We first define event based expressions for the perturbations of the spike variables, ζ_i(t), where i ≥ κ_1(0) − N_sp + 1. We note that these definitions are extended to time points before t = 0, where the trajectories are not necessarily close to each other but the single events considered are already similar because the trajectories will be close at t = 0. Since the ζ_i(t) are constant except for time intervals between their reception in the perturbed and the unperturbed dynamics, we define ζ_i as this constant value, such that t̃_i = t_i − ζ_i and θ̃_i = θ_i − ζ_i (32) holds, cf. Fig. 2. We now consider i ≥ κ_0(0) and the perturbations of the phases, δ_l(t). We denote the perturbation of neuron l's phase after the ith spike has arrived in both the perturbed and the unperturbed dynamics by δ^i_l. The spike which arrives last before t = 0 is the spike labeled with κ_0(0) − 1. We define δ^{κ_0(0)−1}_l = δ_l(0). The perturbation can change if neuron l sends or receives a spike. If neuron l sends a spike, the perturbation is changed for the interval [min(t_j, t̃_j), max(t_j, t̃_j)) because the neuron is reset in one, e.g. the unperturbed, system, while it is not yet reset in the other, e.g. the perturbed, system. After it is reset in both systems, the original perturbation is recovered.
We conclude that except for intervals between unperturbed and perturbed spike sendings of neuron l, the perturbation δ_l(t) remains constant and equals δ^i_l until the next spike arrives, so for i ≥ κ_0(0) we have δ_l(t) = δ^i_l for t ∈ [max(θ_i, θ̃_i), min(θ_{i+1}, θ̃_{i+1})) outside such sending intervals. (35) For i = κ_0(0) − 1, θ_i and θ̃_i can be replaced by 0. If some neuron s_i has perturbation δ^{P(i)}_{s_i}, with i > κ_1(0), before it sends a spike at t_i > 0, the spike inherits this perturbation since t̃_i = t_i − δ^{P(i)}_{s_i}, i.e. ζ_i = δ^{P(i)}_{s_i}, (36) cf. Fig. 2. We note that this definition implies the minus sign before ζ_i in Eq. (32) or Fig. 2. This convention is important for the subsequent considerations. If ζ_i was defined as 'ζ_i = −δ^{P(i)}_{s_i}', complications would arise, e.g., from maximal perturbations in the phases that are transformed into minimal perturbations in the spikes.
The event based perturbation vector δ^i and the extremal perturbations δ^i_max = max(δ^i), δ^i_min = min(δ^i) are taken directly after the arrival of the ith spike in both the perturbed and the unperturbed dynamics, for i ≥ κ_0(0). We note that the event based perturbation vector δ^i equals the time based perturbation vector δ(t) in general only within the interval between the arrival of the ith spike in both dynamics and the next spike sending, because as soon as spikes have been newly sent, the spike variables are changed. For i = κ_0(0) − 1 an analogous expression with θ_i, θ̃_i replaced by 0 holds. Equality of the phase perturbations holds up to the next spike arrival, cf. Eq. (35).
4. Estimation of the order preserving neighborhood.

4.1. Approach. We determine the maximal perturbations of phases and spikes in transit so that the perturbed dynamics keeps the same order of events as the unperturbed dynamics. In other words, we determine the maximal perturbations of phases and spikes in transit so that the order of events is preserved whenever and to whatever neurons or spikes in transit they are applied. This is just half the minimal distance between possible events in the unperturbed dynamics. We concentrate on events after t = 0. There are only two types of events, spike arrivals and spike sendings. So, the order of events will change (or two events will overlap) for the first time in the dynamics due to a too early spike arrival or due to a too early spike sending. In both cases, the early event can happen instead of a spike arrival or instead of a spike sending event. To avoid these changes in the order of events, restrictions on the size of the perturbations have to be imposed. The strictest restrictions arise from avoiding changes in the order of subsequent events. If the order of any pair of subsequent events is conserved, also the order of every pair of events (not necessarily subsequent) is conserved. There are four possible violations of the order conservation: i) a spike arrives where in the original order of events a different spike arrives, ii) a spike arrives where in the original order of events a spike is sent, iii) a spike is sent where in the original order of events a spike arrives, iv) a spike is sent where in the original order of events a different spike is sent.

Figure 3. Perturbations that cause the event order to change for inhibitory inputs. Sample time evolution of neuron l's phase shown for the perturbed (red) and the unperturbed dynamics (gray). a) The order of two arrival times θ_i, θ_{i+1} is not preserved in the perturbed dynamics. c) An early spike sending occurs instead of some spike arrival θ̃_i, if the perturbed dynamics reaches the threshold before the spike arrives.
The first three cases are illustrated in Fig. 3 for an inhibitory network. In the fourth case, the spikes of two different neurons interchange; otherwise it is analogous to the first case.

4.2. Case i). To avoid the occurrence of case i) in subsequent arrivals, it is sufficient to ensure that the perturbations of spikes in transit are so small that θ̃_i < θ̃_{i+1}. Together with Eq. (32) this leads to the restriction ζ_{i+1} − ζ_i < θ_{i+1} − θ_i (42) on the perturbations of spikes arriving at θ_i, θ_{i+1}, for all i. As an example, we here note that conditions (42) imply that also exchanges of higher order are excluded, e.g. θ̃_{i+1} > θ̃_i for all i implies that θ̃_{i+2} > θ̃_i for all i, since θ̃_{i+2} > θ̃_{i+1}.

4.3. Case ii). To avoid case ii), the perturbed spike times and the perturbed subsequently arriving spikes must satisfy t̃_j < θ̃_{P(j)+1}. This implies together with Eqs. (36), (32) that the perturbation of the neuron's phase δ^{P(j)}_{s_j} before spiking and the perturbation of the spike arriving at θ_{P(j)+1} have to obey ζ_{P(j)+1} − δ^{P(j)}_{s_j} < θ_{P(j)+1} − t_j. (43)

4.4. Case iii). This case leads to different restrictions for inhibitory and for excitatory spikes. For inhibitory spikes, the perturbed trajectory must not reach the threshold before the spike arrives, due to the intrinsic increase of the phase with time. If we first do not include a reset due to the possible reaching of the threshold, the perturbed phase of neuron l just before the arrival time θ̃_j of the perturbed spike is φ̃_l(θ̃_j^-) = φ_l(θ_j^-) + δ^{j−1}_l − ζ_j, (44) because the phase increases with slope one. To indeed avoid passing the threshold, we require δ^{j−1}_l − ζ_j < Θ_l − φ_l(θ_j^-). (45) For excitatory spikes, also supra-threshold excitation can take place and thus lead to an overlap of spike arrival and spike sending. Therefore, the phase may not have reached the minimal phase U_l^{-1}(Θ_U,l − ε) at which a supra-threshold excitation takes place when a spike with strength ε arrives. This is guaranteed by replacing Eq. (45) by δ^{j−1}_l − ζ_j < U_l^{-1}(Θ_U,l − ε_{l s_j}) − φ_l(θ_j^-) (46) for the jth spike arrival.

4.5. Case iv). The change of the order of two spike sendings can be inferred analogously to case i). Two subsequent spike times t_i, t_{i+1}, where i > κ_1(0), sent by any two neurons k, l ∈ {1, . . . , N}, do not interchange their order if t̃_i < t̃_{i+1}, i.e. if the perturbations of the neurons' phases satisfy δ^{P(i+1)}_{s_{i+1}} − δ^{P(i)}_{s_i} < t_{i+1} − t_i, (47) cf. Eq. (36).

4.6. Restriction from a lower bound for the potential. There is an additional restriction to the perturbed dynamics in networks with inhibition where the neurons have a lower bound for the potential. Here, we have to ensure that also in the perturbed dynamics the inputs do not force the membrane potential below the lower bound, i.e. U_l(φ̃_l(θ̃_j^-)) + ε_{l s_j} > B_U,l, (48) where ε_{l s_j} < 0. This yields, using Eq. (44), δ^{j−1}_l − ζ_j > U_l^{-1}(B_U,l − ε_{l s_j}) − φ_l(θ_j^-), (49) where l ∈ {1, ..., N}, j ∈ {1, ..., M}, and we used that t_{j+1} − t_j = θ_{j+1} − θ_j due to the homogeneous delay. We note that for many neuron models B_U,l = −∞, so that no restrictions arise from condition (49). For purely inhibitory networks, the order of events is thus guaranteed for ||δ(t)|| < d^In_max, where d^In_max is half the minimum of the distances appearing on the right hand sides of the conditions (42), (43), (45) and (49): each of these conditions compares a difference of two perturbations, which is smaller than 2||δ(t)||, with the distance between two possible events. For purely excitatory networks, Eq. (45) has to be replaced by Eq. (46) and Eq. (49) can be omitted, so that the order is guaranteed for ||δ(t)|| < d^Ex_max, defined analogously via conditions (42), (43) and (46). We computed the size of the order preserving neighborhood for purely inhibitory and purely excitatory networks; the generalization to mixed networks is straightforward. In considerations applying to purely excitatory, purely inhibitory and mixed networks, we write the upper bound of the order preserving neighborhood as d_max. We note that d_max > 0 holds due to the requirement that the spike pattern is periodic and non-degenerate.
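For inhibitory patterns the timing-based part of the bound can be sketched numerically: a simplified estimate in the spirit of this section (our own simplification, ignoring the threshold- and lower-bound terms that the full bound also accounts for) takes half the minimal spacing between any two event times, since every condition compares a difference of two perturbations with the distance between two events.

```python
# Simplified sketch of the order-preserving bound of Sec. 4: collect all
# event times (spike sendings and arrivals) of a periodic pattern over a
# few periods and take half the minimal gap between subsequent events.
# The article's full bound additionally involves per-neuron threshold and
# lower-bound terms; the pattern below is illustrative.
tau = 1.0
T = 2.0                                     # period of the pattern
t_send = [0.1, 0.7, 1.2, 1.8]               # sending times within one period

def d_max_estimate(t_send, tau, T, periods=3):
    events = sorted({t + n * T for n in range(periods) for t in t_send}
                    | {t + tau + n * T for n in range(periods) for t in t_send})
    gaps = [b - a for a, b in zip(events, events[1:])]
    return min(gaps) / 2.0                   # half the minimal event spacing

print(d_max_estimate(t_send, tau, T))
```

For this pattern the closest pair of events is 0.1 apart (e.g. the arrival at 1.1 and the sending at 1.2), so the estimate is 0.05.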
In the following, we consider perturbations smaller than d max , so that the order of events is preserved. In particular, we show below that for asymptotically stable spike patterns the open cubic neighborhood ||δ(0)|| < d max of dimension N + N sp is contained in the basin: Since t = 0 was arbitrary (without an event at this time), a trajectory reaching this neighborhood at any time will not leave it again and approach the periodic spike pattern. Thus, d max yields a lower estimate of the linear extension of the basin in the reduced phase space. For unstable spike patterns, we will show that we can find arbitrarily small perturbations with ||δ(0)|| < d max which grow and finally leave any d 1 -neighborhood with d 1 < d max (if different parts of the periodic orbit have sufficient distance).

5.
Propagation of the perturbation. We study how perturbations change when interactions take place, under the condition that they are smaller than d_max before the interaction.
We explicitly compute the perturbation δ_l^j of some neuron l after the reception of spike j, j ≥ κ_0(0), in terms of its perturbation δ_l^{j−1}, |δ_l^{j−1}| < d_max, before the reception and the perturbation ζ_j, |ζ_j| < d_max, of the spike times. We do not include a reset due to a possible reaching of threshold between min(θ_i, θ̂_i) and max(θ_i, θ̂_i), and we evaluate the original and perturbed trajectories after this reception has taken place in both dynamics. The perturbations of spikes, ζ_j, remain unchanged at an interaction event. We note that c_lj depends not only on l and j but also on the perturbations, the spike pattern, and the interaction strength ε_{l s_j}. Starting with Eq. (33), from (52) to (53) we use a case distinction together with dφ_l/dt = 1 for the free evolution and θ̂_j = θ_j − ζ_j. The employed relations together yield the propagation rule, Eq. (58).
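The qualitative content of the propagation rule Eq. (58), namely that the new phase perturbation of a receiving neuron is a weighted combination, with weight c_lj, of its previous perturbation and the spike-time perturbation ζ_j, can be sketched numerically. The following is a minimal toy sketch; the function name `propagate` and the concrete values of c are illustrative assumptions, not the paper's notation:

```python
# Sketch of the event-based perturbation update at a spike arrival,
# assuming the weighted form of Eq. (58): the new phase perturbation of
# a postsynaptic neuron l combines its previous perturbation and the
# perturbation zeta_j of the arriving spike with weight c = c_lj.

def propagate(delta_prev: float, zeta: float, c: float) -> float:
    """Perturbation of neuron l after receiving spike j.

    c plays the role of c_lj = (H_l^eps)'(phi_l); for 0 < c < 1 the
    update is an averaging step, for c >= 1 deviations are amplified.
    """
    return c * delta_prev + (1.0 - c) * zeta

# Averaging case (0 < c < 1): the result lies between the two inputs.
d = propagate(0.3, 0.1, c=0.6)
assert 0.1 <= d <= 0.3

# Expansive case (c >= 1): rewriting the rule as
# delta_new = delta_prev + (c - 1) * (delta_prev - zeta)
# shows that a maximal perturbation cannot decrease when c >= 1.
d2 = propagate(0.3, 0.1, c=1.5)
assert d2 >= 0.3
```

For 0 < c < 1 the update is a genuine average, the mechanism behind the stability results of Sections 6 and 7; for c ≥ 1 it is expansive, which drives the instability result of Section 8.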
If ε_{l s_j} = 0, i.e. neuron l is not a postsynaptic neuron of neuron s_j, the perturbation of neuron l's phase stays unchanged. Eq. (58) also directly implies that the perturbations do not change if they are all identical; this reflects the time translation invariance of the system.

6. Lyapunov stable spike patterns. We now formulate and prove a theorem about the Lyapunov stability of periodic orbits in networks with arbitrary (i.e. not necessarily strongly connected) topology. In systems with impulse effect (cf. model section), we face the complication that, even for trajectories staying near each other for most of the time, if the jump or event times u_n in one and u′_n in the other trajectory are not identical, there can be a macroscopic deviation within the interval [min(u_n, u′_n), max(u_n, u′_n)), where in one trajectory the event has already occurred while in the other it has not. This leads to the following definition of stability, which allows for macroscopic deviations in small environments around the event times u_n (cf. [5], p. 60):

Definition 6.1. (Lyapunov stability) Let ϕ(t) be a solution of the dynamical equations (18) and (19) that hits M at times u_m, m ∈ Z. Further, let x_0 ∈ X be a point in phase space and x(t; t_0, x_0), t ∈ R, a solution of the dynamical equations (18) and (19) that assumes at time t_0 the value x_0. We call ϕ(t) Lyapunov stable, or just stable, if for all η > 0, for all ξ > 0 and for all t_0 with ∀m ∈ Z : |t_0 − u_m| > η, there is a χ > 0 so that for all x_0 with ||x_0 − ϕ(t_0)|| < χ,

||x(t; t_0, x_0) − ϕ(t)|| < ξ (61)

for all t ≥ t_0 that satisfy |t − u_m| > η for all m.

In the spirit of orbital stability of ordinary differential equations, we could relax this condition by additionally allowing for reparametrizations of time.
Proof. Approach: We consider a trajectory within the order preserving neighborhood at time t = 0, i.e. ||δ(0)|| < d_max, and show by induction that the event-based perturbation does not grow in time.
Induction assumption: We assume that the maximum norm of the event-based perturbation δ^{j−1} does not exceed the maximum norm of the initial perturbation for j − 1 ≥ κ_0(0). Within time intervals between the same events in the perturbed and the unperturbed dynamics, the perturbation δ(t) will in general be larger.
Since ||δ^j|| < d_max when the possible reset is not included, such a reset cannot occur by definition of d_max, and the above estimate holds for the full dynamics. Therefore, ||δ^j|| ≤ ||δ(0)|| < d_max for all j. In particular, the perturbations of the spike timings do not grow larger than ||δ(0)||.

Lyapunov stability: We ascertain in the following short paragraph that our findings imply that the spike pattern is indeed stable according to the notion of Lyapunov stability for systems with impulse effect. We may apply the maximum norm due to the equivalence of norms in finite dimensional spaces (e.g. [42]). We choose χ = min(η, ξ) and note ||δ(0)|| < χ.
t = 0 was chosen arbitrarily so that no event occurred at this time; in particular, the following holds for time origins with distance more than η from any event. We note that we do not need to choose the time origin 'far enough' from an event, because the definition of phase space already ensures that nearby trajectories have the same N_sp last sent spikes and the same spikes in transit. According to Eqs. (37, 68, 72), differences between spike sending and arrival times after t = 0 equal phase perturbations bounded from above by ||δ(0)||. Thus, the lengths of the intervals [max(θ_i, θ̂_i), min(θ_{i+1}, θ̂_{i+1})) and [min(t_j, t̂_j), max(t_j, t̂_j)) are bounded from above by ||δ(0)|| < χ. Outside these intervals, the perturbations are bounded by ||δ(0)|| < χ: perturbations of phases equal event-based perturbations outside these intervals (cf. Eq. (35)), and perturbations of spikes equal previous event-based perturbations of phases (cf. Eq. (37)). This implies that for t > 0 with distance more than χ from any event, i.e. |t − t_i| > χ for all i > κ_1(0) and |t − θ_j| > χ for all j ≥ κ_0(0), we have ||δ(t)|| < χ. Due to η ≥ χ and ξ ≥ χ, also for all t > 0 with distance more than η from any event, i.e. |t − t_i| > η for all i > κ_1(0) and |t − θ_j| > η for all j ≥ κ_0(0), we have ||δ(t)|| < ξ and thus Lyapunov stability.
7. Asymptotically stable spike patterns. We now prove a theorem stating that, under certain sufficient conditions, in networks that are strongly connected the non-degenerate, i.e. almost all, periodic patterns of spikes and the underlying phase dynamics are asymptotically stable: after a sufficiently small perturbation, a dynamics which is equivalent to the original one, only shifted in time, is asymptotically assumed for t → ∞. For weakly but not strongly connected networks, this is in general not the case due to their inherent hierarchical structure [34].
Definition 7.1. (Asymptotic stability) Let ϕ(t) be a periodic solution of the dynamical equations (18) and (19), with period T ∈ R (i.e. ϕ(t) = ϕ(t + T), ∀t ∈ R), that hits M at times u_m, m ∈ Z. Further, let x_0 ∈ X be a point in phase space and x(t; t_0, x_0), t ∈ R, a solution of the dynamical equations (18) and (19) that assumes at time t_0 the value x_0. We call ϕ(t) asymptotically stable if it is Lyapunov stable and if for all ε > 0, δ > 0 and t_0 with ∀m : |t_0 − u_m| > 0, there are χ > 0 and D ∈ R so that for every x_0 with ||x_0 − ϕ(t_0)|| < χ, there is a c_0 ∈ R so that

||x(t; t_0, x_0) − ϕ(t + c_0)|| < ε

for all t > D that have distance more than δ from the events of the time-shifted solution.

Theorem 7.2. Consider a network that is strongly connected (has irreducible adjacency matrix), where 0 < (H_l^{ε_lm})′(φ_l) < 1 for all l, m ∈ {1, ..., N} with ε_lm ≠ 0 and for all φ_l ∈ (B_l, Θ_l] with B_{U,l} < U_l(φ_l) + ε_lm < Θ_{U,l}. Then, the dynamics underlying any non-degenerate periodic pattern of spikes is asymptotically stable. The dynamics asymptotically approaches a dynamics identical to the original one up to a time shift and thus converges to the original periodic orbit. For initial states in the order preserving neighborhood of a periodic orbit, the dynamics converges to that orbit.
Proof. Approach: We study the evolution of perturbations of phase and spike variables within the order preserving neighborhood on an event-by-event basis. We show that for non-identical perturbations, after a finite time the maximal perturbation decreases and the minimal perturbation increases. We will show that this ultimately leads to the convergence of the perturbations towards a vector with identical entries and that this implies asymptotic convergence.
Identical perturbations: If after the arrival of the jth spike all neurons' phases φ_1, ..., φ_N and all spikes in transit σ_{κ_1(θ_j)}, ..., σ_{κ_0(θ_j)} possess identical perturbations, the perturbed dynamics is identical to the unperturbed one up to a time shift, so that Theorem 7.2 holds for identical perturbations. From now on, we thus assume that at least one neuron's phase or spike in transit has non-maximal perturbation.
Interactions and their effect on perturbations: Since the perturbations are bounded and the original orbit is periodic, the perturbed and unperturbed phases are bounded, so that

0 < c_min < c_lj < c_max < 1 (76)

holds, because (H_l^ε)′(.) is continuous. Since Lyapunov stability follows from Theorem 6.2, it is sufficient to consider j > κ_1(0) for the asymptotic stability. According to Eqs. (37, 58), each interaction leads to evaluating some weighted mean (Eq. (77)) of the perturbation of the sending neuron at the time of sending and the perturbation of the receiving neuron at the time of reception. In particular, Eqs. (76, 77) imply that the perturbation of a neuron l postsynaptic to neuron s_j remains unchanged at the arrival of the jth spike only if δ_l^{j−1} = δ_{s_j}^{P(j)}, i.e. if its perturbation at the time of reception equals the perturbation of the sending presynaptic neuron at the time of sending the spike. Further, from Eqs. (60, 76, 77), we infer that a neuron l with perturbation δ_l^j < δ_max^j will have a perturbation smaller than δ_max^j later on, i.e. δ_l^{j+n} < δ_max^j for all n ∈ N.

Decrease of the maximal perturbation: We show that the maximal perturbations decrease within at most 2N − 1 + M_sp periods, where M_sp = ⌈N_sp/M⌉ and ⌈x⌉ denotes the smallest integer larger than or equal to x.
First, suppose that there is no neural phase φ_1, ..., φ_N with maximal perturbation after the arrival of the jth spike, but the maximal perturbation is in the spike variables σ_{κ_1(θ_j)}, ..., σ_{κ_1(θ_j)−N_sp+1} only. The maximal perturbation will then decrease as soon as N_sp spikes have been sent, because the neurons' phases do not reach maximal perturbation due to Eq. (78) and the newly sent spikes inherit the perturbations of the phases (cf. Eq. (58)), while the original spikes carrying maximal perturbation have indices i ≤ κ_1(θ_j) and therefore do not occur in the new perturbation vector anymore. We conclude that a maximal perturbation present in the spikes only will decrease after N_sp spike sendings. Since in each period M spikes are sent, the perturbation decreases after M_sp < 2N − 1 + M_sp periods if the maximal perturbation is present in the spike variables only.
Therefore, we can assume that at least one phase has maximal perturbation and that there is at least one neuron's phase or one spike in transit that is non-maximally perturbed. We consider an arbitrary neuron l which is maximally perturbed after the arrival of the jth spike. To allow it to still carry perturbation δ_max^j after two periods, all its presynaptic neurons Pre(l) must have perturbation δ_max^j at the beginning of the first period. Otherwise, they have smaller perturbation also later on, in particular when they emit at least one spike in the first period, which arrives before the end of the second period due to τ < T. These arriving spikes then have perturbation smaller than δ_max^j and decrease the perturbation of neuron l on arrival. The neurons Pre(Pre(l)) =: Pre^2(l) must have a perturbation of δ_max^j at the beginning of the first period to allow neuron l to conserve its maximal perturbation to the end of the fourth period. Otherwise, after at most two periods, there is a neuron in Pre(l) having non-maximal perturbation whose spikes cause the perturbation of neuron l to decrease within two further periods. Since the network is strongly connected, there are no neurons with presynaptic distance to neuron l larger than N − 1. Therefore, to ensure that neuron l keeps its maximal perturbation up to the end of the 2(N − 1)th period, all N neurons have to carry perturbation δ_max^j after the arrival of the jth spike. To ensure this for one more period, also all spikes in transit have to carry maximal perturbation. Otherwise, within one period they cause neurons to carry a perturbation smaller than δ_max^j at the end of the first period, leading after 2(N − 1) further periods to a decrease of neuron l's perturbation. This is in contradiction to the assumption that the phase perturbations and the perturbations of the spikes in transit do not all have identical entries after the arrival of the jth spike.
Since l was an arbitrary neuron carrying maximal perturbation, we conclude that all phases have non-maximal perturbation after 2(N − 1) + 1 = 2N − 1 periods. After further N_sp spike sending events, all the spike variables have been replaced and have inherited some non-maximal perturbation. Each period T sees M spike sending events, and we defined M_sp to be the smallest number of periods such that at least N_sp spikes are sent within M_sp T. Therefore the maximal perturbation decreases within 2N − 1 + M_sp periods.
Increase of the minimal perturbation: Arguments completely analogous to the ones above, based on the averaging of perturbations and the strong connectivity of the network, show that the minimal perturbation increases within 2N − 1 + M_sp periods.
Convergence of the perturbation vector: We now show that the event-based perturbation vector converges to a vector with identical entries. We study how the perturbations evolve within multiples of 2N − 1 + M_sp periods; the number of arrival events within one period is M. We define the map G_j, which maps an actual event-based perturbation vector to the vector 2N − 1 + M_sp periods later. We note that due to the periodicity of the spike pattern, G_j is M-periodic in the parameter j, G_j = G_{j+nM}, n ∈ N; in particular we will use below that G_{j+(n−1)(2N−1+M_sp)M} ∘ ... ∘ G_{j+(2N−1+M_sp)M} ∘ G_j = G_j^n. From Eq. (55) we see that for the step from some spike reception j′ − 1 to the next reception j′, event-based perturbations of the neurons' phases δ_l^{j′}, l ∈ {1, ..., N}, depend continuously on the previous perturbation vector, namely on the previous perturbations of phases δ_l^{j′−1} and on ζ_{j′} (supra-threshold excitation cannot occur). Further, during such a step, the possibly newly sent spikes inherit the previous perturbations of the sending neurons, in particular they depend continuously on them, while the perturbations of the other spike variables stay constant or are eliminated from the perturbation vector due to later spikes. Taken together, we conclude that the map δ^{j′−1} → δ^{j′} is continuous for any j′ ≥ κ_0(0), and therefore also G_j, the composition of such maps, is continuous.

The sequence (max[G_j^n(δ^j)])_n is strictly monotonically decreasing, (min[G_j^n(δ^j)])_n is strictly monotonically increasing, and both are bounded above by δ_max^j and bounded below by δ_min^j. Therefore each is convergent, say to M_0 and m_0, respectively. Furthermore, (G_j^n(δ^j))_n is bounded and therefore has a convergent subsequence, say (G_j^{n_m}(δ^j))_m with limit δ̂. The maximum of δ̂ is M_0, where we used that max(.) is continuous and that (max[G_j^{n_m}(δ^j)])_m is a subsequence of the sequence (max[G_j^n(δ^j)])_n, which converges to M_0.
Analogously, we compute the maximum of G_j(δ̂), where we use in the second step that max[G_j(.)] is continuous and in the third that (max[G_j^{n_m+1}(δ^j)])_m is a subsequence of the convergent sequence (max[G_j^n(δ^j)])_n. We conclude from Eqs. (82, 86) that max(G_j(δ̂)) = M_0 = max(δ̂). For any vector δ with non-identical entries, however, we know from the above considerations that max(G_j(δ)) < max(δ) holds. So, δ̂ is a perturbation vector with N + N_sp identical entries, where we defined c_0 = M_0. The same computation as above for the minimum entries reveals min(G_j(δ̂)) = m_0 = min(δ̂) (with the same δ̂ as in (81–87)). Because δ̂ has identical entries, max(δ̂) = min(δ̂) = c_0. Now, from Theorem 6.2 we infer that max(δ^{j+n(2N−1+M_sp)M}) and min(δ^{j+n(2N−1+M_sp)M}) are upper and lower bounds for the entries of δ^{j′} for all j′ ≥ j + n(2N − 1 + M_sp)M. Since upper and lower bound converge towards the same value c_0, the sequence (δ^{j′})_{j′} converges to the vector with all entries equal to c_0. Due to Eqs. (32, 36, 37), this implies that the perturbed spike sending and arrival times converge, t̂_i − t_i → −c_0 and θ̂_i − θ_i → −c_0. Thus, the spiking dynamics converges towards a spiking dynamics equivalent to the original one, namely to the original spiking dynamics shifted in time by a constant value −c_0, where |c_0| ≤ ||δ(0)|| due to Theorem 6.2.
Convergence of the phase dynamics: Also the perturbed phase dynamics approaches the original phase dynamics shifted in time by −c_0. We first consider times t which do not lie between the arrivals of the same event in the perturbed and the unperturbed dynamics. The event-based perturbation vector δ^{j′} equals the time-based perturbation vector only within the interval given by Eq. (41). The newly sent spikes, however, inherit perturbations already present in δ^{j′} as phase perturbations, due to Eq. (36). So, the perturbations δ(t) on the sets S_{j′} between subsequent arrivals, which exclude the intervals [min(t_j, t̂_j), max(t_j, t̂_j)) (92), are bounded above by δ_max^{j′} and bounded below by δ_min^{j′}. Therefore, if a sequence of times (t(j′))_{j′} is chosen so that t(j′) lies for each j′ within S_{j′}, δ(t(j′)) converges to the vector with all entries equal to c_0 (93) when j′ → ∞.

The perturbed phase dynamics approaches the shifted phase dynamics pointwise for almost all t, in the sense that δ̄(t + nT) → 0 with n → ∞ for almost all t: We take the dynamics shifted in time by −c_0 as reference and denote the dynamical quantities by a bar, i.e. φ̄_l(t − c_0) = φ_l(t) for the phases, t̄_i = t_i − c_0 for the spike times, θ̄_i = θ_i − c_0 for the arrival times, and δ̄(t) for the perturbation of the perturbed dynamics relative to the shifted one. Consider times t which lie between events together in the unperturbed, the perturbed and the shifted dynamics, collected in the sets S̄_{j′} (94). For large enough j′, these intervals are nonempty because of Eqs. (89, 90, 91). If times t(j′) are chosen within the set between the j′th and (j′ + 1)th arrival, we have the relation (95) between δ̄(t(j′)) and δ(t(j′)), and Eq. (93) yields

δ̄(t(j′)) → 0 (96)

with j′ → ∞. The phase perturbations δ̄_l(t), l ∈ {1, ..., N}, are constant outside the intervals [min(t_j, t̄_j), max(t_j, t̄_j)).
Therefore, for t(j′) within the sets S̄_{j′} between the j′th and (j′ + 1)th arrival, we have

δ̄(t(j′)) → 0. (98)

Now consider a time t that is not at the time of a shifted event.
Then there is some minimal distance η = min(min{|t − t̄_i| | i ∈ Z}, min{|t − θ̄_j| | j ∈ Z}) separating t from any shifted event. Due to the periodicity of the dynamics, also t + nT has the same finite distance from any shifted event, min(min{|t + nT − t̄_i| | i, n ∈ Z}, min{|t + nT − θ̄_j| | j, n ∈ Z}) = η. Because the spike dynamics converges (Eqs. (89, 90)), there is some D′ ∈ N so that |θ_j − θ̄_j| < η and |t_j − t̄_j| < η for all j > D′, and therefore there is some D′′ ∈ N so that t + nT will not lie within any interval [min(t_j, t̄_j), max(t_j, t̄_j)), [min(θ_j, θ̄_j), max(θ_j, θ̄_j)) if n > D′′. Thus, for large enough n, t + nT ∈ S̄_{j′} for some j′. Further, j′ → ∞ when n → ∞, so that the times t + nT form a subsequence of some sequence (t(j′))_{j′}. From Eq. (98) we infer pointwise convergence δ̄(t + nT) → 0 or, equivalently, pointwise convergence of the phases and spike variables towards those of the shifted dynamics, where l ∈ {1, ..., N}, j ∈ {κ_1(t) − N_sp + 1, ..., κ_1(t)}, for almost all times t.

Asymptotic stability: We consider arbitrary ε > 0 and δ > 0. We denote t′_i = u_i + c_0 + δ and t′′_i = v_i + c_0 − δ, where u_i is the ith event within [0, T) and v_i is the event after u_i. There is a D_i such that |t_j − t̄_j| < δ and |θ_j − θ̄_j| < δ for all t_j, t̄_j, θ_j, θ̄_j > D_i and δ̄(t′_i + nT) < ε for all t′_i + nT > D_i. Since the dynamics is linear between events, δ̄(t) < ε for all t ∈ (t′_i + nT, t′′_i + nT) with t′_i + nT > D_i. Since i is arbitrary and the number of events within one period is finite, we can set D = max_i D_i. Each t > D with distance more than δ from all events falls into an interval (t′_i + nT, t′′_i + nT) for some i, and we thus have δ̄(t) < ε for all such t.
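The contraction mechanism at the heart of this proof, repeated weighted averaging along the edges of a strongly connected graph, can be illustrated with a toy simulation. The ring topology, the round-robin event order and the range of c are illustrative assumptions standing in for the network's actual spike sequence:

```python
import random

# Toy model of the averaging dynamics: each "event" replaces the
# perturbation of a receiving node by a weighted mean of its own
# perturbation and that of its predecessor on a ring, mimicking
# Eq. (58) with the spike perturbation zeta equal to the sender's
# perturbation and c_lj drawn from (c_min, c_max), a subset of (0, 1).
random.seed(0)
N = 6
delta = [random.uniform(-1.0, 1.0) for _ in range(N)]

def spread(v):
    return max(v) - min(v)

s0 = spread(delta)
for event in range(600):
    l = event % N                  # receiving node
    s = (l - 1) % N                # its presynaptic node on the ring
    c = random.uniform(0.2, 0.8)   # c_lj bounded away from 0 and 1
    delta[l] = c * delta[l] + (1.0 - c) * delta[s]

# All entries have (numerically) converged to a common value:
assert spread(delta) < 1e-4 * s0
```

The common limit plays the role of the constant c_0 in the proof: the perturbed dynamics converges to a copy of the original dynamics shifted by a constant time.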
8. Unstable spike patterns. Now we demonstrate that for a different, equally important class of systems, non-degenerate periodic spike patterns and their underlying phase dynamics are guaranteed to be unstable. We show that there are arbitrarily small perturbations such that the perturbed dynamics departs at least to a fixed finite distance from all dynamics equivalent to the original one.

Definition 8.1. (Instability) Let ϕ(t) be a periodic solution of the dynamical equations (18) and (19), with period T ∈ R, that hits M at times u_m, m ∈ Z. Further, let x_0 ∈ X be a point in phase space and x(t; t_0, x_0), t ∈ R, a solution of the dynamical equations (18) and (19) that assumes at time t_0 the value x_0. We call ϕ(t) unstable if there is a t_0 with ∀m : |t_0 − u_m| > 0 and a ξ > 0 so that for all χ > 0 there is an x_0 with ||x_0 − ϕ(t_0)|| < χ and ||x(t; t_0, x_0) − ϕ(t)|| ≥ ξ for some t ≥ t_0.
Proof. Approach: We will show that for all d_0 > 0 there is a perturbation with ||δ(0)|| < d_0 that grows with time until the difference between maximum and minimum of the perturbation exceeds any d_1 < d_max. Since equivalent dynamics differ from the original one only by a perturbation vector with identical entries, the distance of the perturbed dynamics to any equivalent dynamics then exceeds d_1/2: the dynamics is unstable. We assume here and in the following that two points on the original trajectory which fall into the same minimal period but are separated by events have a distance larger than d_max. Otherwise, d_1 can be defined as being smaller than half the minimal distance between two such points.
We consider a class of perturbations for which we show that the maximal perturbation can only grow with time and that the minimal perturbation can only decrease. We then assume that there is a d_0 > 0 such that no perturbation with ||δ(0)|| < d_0 grows until it exceeds any d_1 < d_max, and we show that this leads to a contradiction. Since we can choose the initial maximal and minimal perturbation to have different signs, after their growth and decrease their difference must be larger than d_1.
Maximal perturbations do not decrease: We can presume d_0 ≤ d_1/2; otherwise there are already initial perturbations with ||δ(0)|| < d_0 whose difference between maximum and minimum perturbation exceeds d_1. We will use that for all perturbations ||δ^j|| ≤ d_1 < d_max, j ≥ κ_0(0) − 1, the order of events does not change. For j ≥ κ_0(0), we can presume that Eq. (58) is applicable: if the threshold were reached between arrivals of the perturbed and the unperturbed dynamics (and Eq. (58) were not applicable), the perturbation must just have grown larger than d_max and the proof would already be completed. We track an initial perturbation δ(0) with the properties that (i) it has non-identical maximal and minimal perturbations, both located in the perturbations of the phases δ_l(0), l ∈ {1, ..., N}, and (ii) the maximal and minimal initial perturbations have different signs. Such a perturbation is guaranteed to exist. We first show that the maximal entry can only increase with time. Between zero and θ_{κ_0(0)}, only spikes can be sent that inherit perturbations already present in the phases at time t = 0. So, neither the overall maximal and minimal perturbations nor the perturbations of the phases can change. We may thus concentrate on j ≥ κ_0(0). From our presumptions, (H_l^{ε_lm})′(φ_l) ≥ 1 holds for all l, m ∈ {1, ..., N} and therefore c_lj ≥ 1 for any interaction. Rewriting Eq. (58) as

δ_l^j = δ_l^{j−1} + (c_lj − 1)(δ_l^{j−1} − ζ_j), (105)

we directly infer that a maximal perturbation δ_l^{j−1} = δ_max^{j−1} in the neurons' phases cannot decrease due to an interaction, because (δ_l^{j−1} − ζ_j) ≥ 0 and (c_lj − 1) ≥ 0, and therefore δ_l^j ≥ δ_l^{j−1}.
(106)

Minimal perturbations do not increase: Analogously, a minimal perturbation cannot grow, since for δ_l^{j−1} = δ_min^{j−1} we have (δ_l^{j−1} − ζ_j) ≤ 0 and therefore δ_l^j ≤ δ_l^{j−1}. (107)

Presence of extremal perturbations in the phases: We now show that the property that maximal and minimal perturbations are represented in the phase perturbations is conserved, i.e. holds for δ^j with j ≥ κ_0(0). We use complete induction and start with the perturbation δ^{κ_0(0)}. For δ^{κ_0(0)} the statement holds (initialization): just before the spike arrival at min(θ_{κ_0(0)}, θ̂_{κ_0(0)}), the perturbation vector δ(min(θ_{κ_0(0)}, θ̂_{κ_0(0)})^−) has phase perturbation entries identical to those of δ(0). The spike perturbation entries are identical to those of δ(0); only some might have been replaced by new ones at sending times t_1, t_2, ... < θ_{κ_0(0)}. The new spike perturbation entries equal perturbations already present in the phase perturbations, so that in particular the minimal and the maximal perturbation are still represented in the phase perturbations. Due to an interaction, the maximal phase perturbations can only grow and the minimal ones can only decrease (cf. Eqs. (106, 107)), while the spike perturbations remain unchanged. Using δ_l(min(θ_{κ_0(0)}, θ̂_{κ_0(0)})^−) = δ_l(0), the statement for δ^{κ_0(0)} holds.

Our induction assumption is that for some j − 1 ≥ κ_0(0), the maximal and minimal perturbations δ_max^{j−1} and δ_min^{j−1} of the perturbation vector δ^{j−1} are represented in the phase perturbations δ_l^{j−1}, i.e. min_l δ_l^{j−1} = δ_min^{j−1} and max_l δ_l^{j−1} = δ_max^{j−1}. As inductive step, we now show that min_l δ_l^j = δ_min^j and max_l δ_l^j = δ_max^j, where still l ∈ {1, ..., N}. Perturbations of spikes with indices i, κ_1(θ_{j−1}) ≥ i ≥ κ_1(θ_{j−1}) − N_sp + 1, occurring in the perturbation vector δ^{j−1} are bounded by δ_min^{j−1} and δ_max^{j−1}. Their value does not change, but they can be replaced by the perturbations of new spikes i, κ_1(θ_{j−1}) < i ≤ κ_1(θ_j), sent between θ_{j−1} and θ_j. Due to Eq. (36), these new spikes i inherit a perturbation ζ_i = δ_{s_i}^{P(i)}, P(i) = j − 1, already present in the phases, and they are therefore also bounded by δ_min^{j−1} and δ_max^{j−1}.
When the jth spike arrives, the sizes of the perturbations of the spike variables do not change, while the perturbations of the phases change according to Eq. (58). Eqs. (106, 107) imply that the maximal perturbation of the phases can only increase and that the minimal perturbation can only decrease, so that the new extremal perturbations of the phases are the extremal perturbations of the new perturbation vector.

Increase of maximal perturbations: Having seen that, for the perturbations considered, the maximal perturbation can only increase while the minimal perturbation can only decrease and both are always represented in the phases, we now show that the maximal perturbation indeed has to grow within 2(N − 1) periods as long as ||δ^j|| ≤ d_1. We assume that the perturbation does not grow within 2(N − 1) periods and show that this assumption leads to a contradiction. First, we consider the case that the perturbation of some neuron l which is maximally perturbed after the arrival of the jth spike, δ_l^j = δ_max^j, decreases within the 2(N − 1) periods. Due to Eq. (105), this can only happen if there was an arrival of a spike i, j < i ≤ j + 2(N − 1)M, with ζ_i > δ_max^j. This implies that the maximal perturbation must have grown, which already contradicts our assumption.
We can thus presume that the perturbation of maximally perturbed neurons l does not decrease. However, Eqs. (60, 105) imply that the perturbation of neuron l stays constant only if δ_l^{i−1} = ζ_i or ε_{l s_i} = 0; i.e., to ensure that neuron l keeps its perturbation δ_max^j within two periods, all arriving spikes ζ_i, j < i ≤ j + 2M, with ε_{l s_i} ≠ 0 must have maximal perturbation. This implies that all presynaptic neurons Pre(l) have perturbation δ_max^j when they spike within the first period, because their spikes arrive before the end of the second period due to τ < T and the spikes have inherited the sending neurons' perturbations due to Eq. (36). If the maximal perturbation has not yet increased, the perturbation of neurons with perturbation δ_max^j cannot decrease, so that the neurons Pre(l) have maximal perturbation also later on, in particular at the end of the second period. However, to ensure that the perturbation of the neurons Pre(l) does not increase within the next two periods, also all neurons Pre(Pre(l)) have to carry perturbation δ_max^j when they spike in the third period, which entails that they carry perturbation δ_max^j at the end of the fourth period. To ensure that the maximal perturbation does not increase up to the 2(N − 1)th period, all neurons would have to carry perturbation δ_max^j at the end of the 2(N − 1)th period, in contradiction to the fact that the minimal perturbation cannot grow and is represented in the perturbations of the neurons' phases, so that there is at least one neuron m with δ_m^{j+2(N−1)M} ≤ δ_min^j < δ_max^j at the end of the 2(N − 1)th period. Therefore, our assumption is wrong and the maximal perturbation grows within 2(N − 1) periods.
Decrease of minimal perturbations: An analogous consideration shows that the minimal perturbation decreases within 2(N − 1) periods.
Perturbation size eventually exceeds d_1: Due to the increase of maximal perturbations and the decrease of minimal perturbations, the sequence (max(δ^{j+n·2(N−1)M}))_n is strictly monotonically increasing and (min(δ^{j+n·2(N−1)M}))_n is strictly monotonically decreasing; due to our assumption they are bounded from below and above by −d_1 and d_1 and therefore converge, say to M_0 and m_0, respectively. Furthermore, (δ^{j+n·2(N−1)M})_n is bounded with respect to ||.|| by d_1 and therefore has a convergent subsequence, say (δ^{j+n_m·2(N−1)M})_m with limit δ̂. We define Ḡ_j, which maps an actual event-based perturbation vector to the vector 2(N − 1) periods later; as in the proof of Theorem 7.2, Ḡ_j is continuous. Passing to subsequences (n_{m′})_{m′} ⊂ (n_m)_m and (n_{m′′})_{m′′} ⊂ (n_m)_m along which the maximum and the minimum are attained in fixed phase components l′ and l′′, and since max and min are continuous, δ̂_{l′} = max(δ̂) and δ̂_{l′′} = min(δ̂) hold, so the extremal perturbations are represented in the perturbations of the phases of the limit perturbation vector δ̂. This implies that the minimum and maximum of δ̂ would change under Ḡ_j if δ̂ had non-identical entries. Due to Eq. (117), this is not the case, so δ̂ has identical entries. Since also min(δ̂) = m_0, we conclude M_0 = m_0, in contradiction to m_0 ≤ min δ(0) < max δ(0) ≤ M_0. Therefore, our initial assumption is wrong and arbitrarily small initial perturbations finally exceed any d_1 < d_max.
Points on the periodic orbit which fall into different inter-event intervals have a distance larger than d_1 (due to our assumption at the beginning of the proof). So, the minimal distance between the perturbed dynamics and any shifted version of the original dynamics, and thus the periodic orbit, is larger than ξ = d_1/2.
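The expansive counterpart of the averaging picture, used in the proof above, can be illustrated with the same kind of toy model but with c ≥ 1; the ring topology and the value c = 1.3 are illustrative assumptions:

```python
import random

# Toy model of the instability mechanism of Theorem 8.2's proof: with
# c_lj >= 1, the update of Eq. (58), rewritten in the form of Eq. (105)
# as delta_l += (c - 1) * (delta_l - zeta), never decreases the maximal
# perturbation and never increases the minimal one, so the spread grows.
random.seed(1)
N = 6
delta = [random.uniform(-0.01, 0.01) for _ in range(N)]
s0 = max(delta) - min(delta)

max_history = [max(delta)]
min_history = [min(delta)]
for event in range(60):
    l = event % N                 # receiving node
    s = (l - 1) % N               # its presynaptic node on the ring
    c = 1.3                       # c_lj >= 1: the expansive case
    delta[l] = delta[l] + (c - 1.0) * (delta[l] - delta[s])
    max_history.append(max(delta))
    min_history.append(min(delta))

# The maximum is non-decreasing and the minimum non-increasing ...
assert all(b >= a - 1e-12 for a, b in zip(max_history, max_history[1:]))
assert all(b <= a + 1e-12 for a, b in zip(min_history, min_history[1:]))
# ... and the spread max - min has grown.
assert max(delta) - min(delta) > s0
```

In accordance with Eqs. (106, 107), once the entries differ, the spread between maximal and minimal perturbation can only grow.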

9.
Purely inhibitory networks with normal dissipation.

Corollary 1. In purely inhibitory, arbitrarily connected networks with concave rise functions (i.e. normal dissipation), any non-degenerate spike pattern is stable. In purely inhibitory, strongly connected networks with strictly concave rise functions, any non-degenerate spike pattern is even asymptotically stable. The order preserving neighborhood lies within the basin of attraction, i.e. the dynamics is stable at least against perturbations smaller than d_max^In.

Proof. In networks with normal dissipation we have, for l ∈ {1, ..., N}, ε < 0 and φ_l with B_{U,l} < U_l(φ_l) + ε, H_l^ε(φ_l) < φ_l due to the monotonicity of the transfer function, and, because U_l is concave, the relations (121, 122) follow. This implies 0 < (H_l^ε)′(φ_l) ≤ 1, and Theorem 6.2 yields the first part of the statement (simple stability). Further, for strictly concave rise functions, strict inequality "<" holds in Eqs. (121, 122); for strongly connected networks we can apply Theorem 7.2 and the definition of the order preserving neighborhood, Eq. (50), for purely inhibitory networks, which yields the second part of the statement (asymptotic stability and minimal basin of attraction).

An illustration to Corollary 1 is given in Fig. 4. (Caption of Fig. 4: The network has been designed to realize a predefined pattern [28, 27]. At time t = 30T a perturbation inside the minimal basin of attraction is applied, at t = 60T a perturbation inside the basin but outside the minimal basin, and at t = 90T a perturbation outside the basin. For details, see text.) Perturbations within the minimal basin of attraction (Eq. (50)) are certainly attracted. Around times t = 30T, t = 60T and t = 90T perturbations are applied. The first perturbation has maximum norm slightly smaller than d_max^In; the dynamics therefore stays within the minimal basin of attraction and relaxes back towards the original pattern. The second and third perturbations have maximum norm 1.8 d_max^In. After the second perturbation, the trajectory starts outside the minimal basin but still within the basin of attraction, as can be seen from the numerical simulation.
The third one perturbs the trajectory out of the basin of attraction into the basin of a different non-degenerate spike pattern.

10. Purely inhibitory networks with anomalous dissipation.

Corollary 2. In purely inhibitory, strongly connected networks with strictly convex rise functions (anomalous dissipation), any non-degenerate spike pattern is unstable.
Proof. In networks with strictly convex rise functions U_l, we have for l ∈ {1, ..., N}, ε < 0 and φ_l with B_{U,l} < U_l(φ_l) + ε,

    U_l'(φ_l) > U_l'(H^l_ε(φ_l)),

because H^l_ε(φ_l) < φ_l due to the monotonicity of the transfer function and U_l' is strictly increasing. This implies

    H^l_ε(φ_l + δ) > H^l_ε(φ_l) + δ for all δ > 0,

and application of Theorem 8.2 yields the statement.
An illustration to Corollary 2 is given in Fig. 5. The network has been designed to realize the same pattern as in Fig. 4, now with strictly convex rise functions. Due to the strictly convex rise functions, the pattern is unstable, as illustrated by (c). Already due to the growth of numerical errors, the dynamics quickly leaves the periodic orbit corresponding to the spike pattern. For purely inhibitory networks, there is no obvious stabilizing mechanism such as supra-threshold excitation, and, indeed, we find continued irregular activity. The three perturbations around times t = 30T, t = 60T and t = 90T do not change this dynamics qualitatively.

11. Purely excitatory networks with normal dissipation.

Corollary 3. In purely excitatory, strongly connected networks with strictly concave rise functions (normal dissipation), any non-degenerate spike pattern is unstable.
Proof. In networks of excitatory neurons, we have for l ∈ {1, ..., N}, ε > 0, and φ_l with U_l(φ_l) + ε < Θ_{U,l},

    H^l_ε(φ_l) > φ_l,

due to the monotonicity of the transfer function. Because U_l is strictly concave,

    U_l'(φ_l) > U_l'(H^l_ε(φ_l))

holds. This implies

    H^l_ε(φ_l + δ) > H^l_ε(φ_l) + δ for all δ > 0,

and application of Theorem 8.2 yields the statement.
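The expansion behind Corollary 3 can be checked with a toy model analogous to the stable case. This is again an illustration under assumptions: a specific normalized, strictly concave rise function with arbitrary γ = 2, and the transfer function written as H_ε(φ) = U^{-1}(U(φ) + ε); for an excitatory, sub-threshold input ε > 0 phase differences grow:

```python
import math

# Assumed strictly concave rise function, normalized to U(0) = 0, U(1) = 1.
GAMMA = 2.0

def U(phi):
    return (1.0 - math.exp(-GAMMA * phi)) / (1.0 - math.exp(-GAMMA))

def U_inv(u):
    return -math.log(1.0 - u * (1.0 - math.exp(-GAMMA))) / GAMMA

def H(phi, eps):
    # Transfer function for a sub-threshold input (requires U(phi) + eps < 1).
    return U_inv(U(phi) + eps)

eps, delta = 0.2, 0.05    # excitatory input, small phase difference
for phi in (0.1, 0.3, 0.5):
    assert H(phi, eps) > phi                          # excitation advances the phase
    assert H(phi + delta, eps) - H(phi, eps) > delta  # strict expansion of differences
```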
The typical dynamics of such a network is illustrated by Fig. 6. The rough topology and the neuron parameters are the same as in the network of Fig. 4. The delay time τ is scaled by a factor of 0.3. The network is designed so that the dynamics has the periodic pattern of M = 7 spikes (t_1, ..., t_7) = (0, 0.18, 0.20, 0.28, 0.44, 0.56, 0.66) with sending neurons (s_1, ..., s_7) = (1, 4, 3, 2, 5, 4, 2) and period T = 0.75 as invariant solution. This is the pattern of Fig. 4 scaled by a factor 0.3, so that it can be realized by a completely excitatory network with neurons of intrinsic period one. The non-degenerate spike pattern is unstable. Already due to the growth of numerical errors, the dynamics leaves the pattern and converges to a different pattern of spikes. This pattern is stable, as can be seen from the perturbations applied around times t = 30T and t = 60T. It therefore must be degenerate. Indeed, it is stabilized by supra-threshold excitation: Neuron 5 excites neurons 2 and 4 supra-threshold so that they send simultaneously; these neurons in turn excite neurons 1 and 3 supra-threshold. The perturbation at t = 90T is large enough to cause the dynamics to leave the pattern's basin of attraction and to assume a different periodic pattern, which is again stabilized by supra-threshold excitation.
12. Purely excitatory networks with anomalous dissipation.

Corollary 4. In purely excitatory, arbitrarily connected networks with convex rise functions (anomalous dissipation), any non-degenerate spike pattern is stable. In purely excitatory, strongly connected networks with strictly convex rise functions, any non-degenerate spike pattern is asymptotically stable. The order preserving neighborhood lies within the basin of attraction, i.e. the dynamics is stable at least against perturbations smaller than d^Ex_max.
Proof. For ε > 0, in networks with convex U_l,

    U_l'(φ_l) ≤ U_l'(H^l_ε(φ_l))    (128)

holds because H^l_ε(φ_l) > φ_l. This implies

    H^l_ε(φ_l + δ) ≤ H^l_ε(φ_l) + δ for all δ > 0,    (129)

and we can apply Theorem 6.2, which yields the first part of the statement (simple stability). For strictly convex rise functions, strict inequality "<" holds in Eqs. (128, 129); in strongly connected networks, Theorem 7.2 is applicable and yields, together with the definition of the order preserving neighborhood Eq. (51), the second part of the statement (asymptotic stability and minimal basin of attraction).

[Fig. 7 caption: Asymptotic stability of non-degenerate periodic spike patterns in purely excitatory, strongly connected networks of neurons with convex rise functions. The excitatory network (a) has the same rough topology, neuron types and delays as the one in Fig. 6 and is designed to show the same periodic pattern of spikes as invariant dynamics. The rise functions are now convex, as displayed in (b) for neurons 2 and 4. The pattern is asymptotically stable. At time t = 30T a perturbation inside the minimal basin is applied, at t = 60T a perturbation inside the basin but outside the minimal basin, and at t = 90T a perturbation outside the basin. (For details, see text.)]
An illustration is given by Fig. 7. The rough network topology and the neuron parameters are the same as in the network of Fig. 5. The pattern is asymptotically stable with a basin of attraction such that any trajectory with maximal distance less than d^Ex_max ≈ 0.008 is certainly attracted. The dynamics (c) is perturbed with a vector of maximum norm less than d^Ex_max at time t = 30T; it relaxes back towards the original spike pattern. Around times t = 60T and t = 90T, two different perturbations with maximum norm 6 d^Ex_max are applied. At t = 60T, the trajectory is perturbed outside the minimal basin (e.g. the sending times of neurons 3 and 4 are interchanged) but still lies within the basin of attraction. At t = 90T, the basin of attraction is left and the dynamics converges to a different stable periodic pattern, which is in fact degenerate and additionally stabilized by supra-threshold excitation: A sending event of neuron 2 is generated by a supra-threshold input from neuron 3.
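The contraction behind Corollary 4 can be illustrated analogously to the inhibitory-concave case. The sketch below assumes a specific normalized, strictly convex rise function with an arbitrarily chosen γ = 2 (an illustration, not the paper's general setting) and the transfer function H_ε(φ) = U^{-1}(U(φ) + ε); sub-threshold excitatory inputs then contract phase differences:

```python
import math

# Assumed strictly convex (anomalous-dissipation) rise function,
# normalized to U(0) = 0, U(1) = 1; GAMMA = 2 is illustrative.
GAMMA = 2.0

def U(phi):
    return (math.exp(GAMMA * phi) - 1.0) / (math.exp(GAMMA) - 1.0)

def U_inv(u):
    return math.log(1.0 + u * (math.exp(GAMMA) - 1.0)) / GAMMA

def H(phi, eps):
    # Transfer function for a sub-threshold excitatory input (U(phi) + eps < 1).
    return U_inv(U(phi) + eps)

eps, delta = 0.2, 0.05    # excitatory input, small phase difference
for phi in (0.3, 0.5, 0.7):
    assert H(phi, eps) > phi                          # excitation advances the phase
    assert H(phi + delta, eps) - H(phi, eps) < delta  # strict contraction of differences
```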
13. Conclusion and outlook. We have analytically studied the stability properties of a general class of periodic orbits for multi-unit systems with delayed couplings and complex connectivity structure.
One of the most basic properties of a periodic pattern of spikes is its stability or instability against small perturbations. Numerical simulations have shown that in mixed networks, i.e. networks having both excitatory and inhibitory interactions, periodic patterns can be either stable or unstable (cf. [28,27]). From the three theorems formulated and proven in the present article, we directly inferred general statements about the stability properties of non-degenerate periodic patterns of spikes for purely excitatory or purely inhibitory networks in which all rise functions are curved in the same sense. For arbitrary underlying network connectivity we analytically showed that:

1. In purely inhibitory networks with concave rise functions (normal dissipation), any non-degenerate spike pattern is stable. If the network is strongly connected and the rise functions are strictly concave, any non-degenerate spike pattern is asymptotically stable (Corollary 1).
2. In purely inhibitory, strongly connected networks with strictly convex rise functions (anomalous dissipation), any non-degenerate spike pattern is unstable (Corollary 2).
3. In purely excitatory, strongly connected networks with strictly concave rise functions (normal dissipation), any non-degenerate spike pattern is unstable (Corollary 3).
4. In purely excitatory networks with convex rise functions (anomalous dissipation), any non-degenerate spike pattern is stable. If the network is strongly connected and the rise functions are strictly convex, any non-degenerate spike pattern is asymptotically stable (Corollary 4).
For asymptotically stable periodic spike patterns in strongly connected networks, we also derived the analytic expressions Eqs. (50, 51) for a minimal basin of attraction. The proof of stability is based on the observation that interactions lead to averaging of perturbations so that differences between the maximal and the minimal perturbation can only decrease due to interactions. If the network is strongly connected, the averaging takes place over the entire network leading to asymptotic stability. In weakly but not strongly connected networks we cannot expect asymptotic stability in general. A simple counterexample is given by a network where several neurons do not receive any input. Their sub-pattern of spikes will not be asymptotically but only Lyapunov stable. (For general implications of the inherent hierarchical structure of weakly connected networks, see [34].) The proof of instability is based on the observation that for certain arbitrarily small perturbations, the difference between the maximal and the minimal perturbation can only increase due to interactions so that these perturbations grow and the perturbed dynamics depart from any dynamics equivalent to the original one. Implications of only weakly connected network topology remain to be investigated.
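The averaging argument can be made concrete in a linear toy model (an illustration only, not the paper's actual perturbation dynamics): suppose one round of interactions maps the perturbation vector δ to Aδ with a row-stochastic matrix A whose sparsity pattern follows a strongly connected network. Then the spread max δ − min δ is non-increasing, and strong connectivity drives it to zero, i.e. the perturbation becomes uniform across the network (a pure time shift along the orbit):

```python
import numpy as np

# Row-stochastic averaging matrix; the nonzero entries follow a strongly
# connected graph (cycle 1 -> 2 -> 3 -> 1 plus self-loops). The weights
# are hypothetical, chosen for illustration.
A = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.6, 0.4],
              [0.3, 0.0, 0.7]])

delta = np.array([0.9, -0.4, 0.2])   # initial perturbation of the three units
spread = [delta.max() - delta.min()]
for _ in range(60):
    delta = A @ delta                # one round of averaging interactions
    spread.append(delta.max() - delta.min())

assert all(s1 >= s2 - 1e-12 for s1, s2 in zip(spread, spread[1:]))  # non-increasing
assert spread[-1] < 1e-9             # strong connectivity: spread decays to zero
```

If several units receive no input (their rows of A are unit vectors on themselves), their perturbations stay fixed and the spread between them cannot decay, matching the counterexample for weakly connected networks discussed in the text.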
A linear stability analysis employing the Frobenius and Gershgorin theorems [25], as applied in [35] to much simpler dynamics, seems possible as well. The nonlinear stability analysis presented here has the advantages that (i) we can derive information about the size of the basin of attraction, and (ii) it allows for a generalization to irregular dynamics (cf. [17,18]). We furthermore remark that the multi-dimensional dynamical systems considered here are among the rare examples allowing for such an approach. We therefore concentrated exclusively on this nonlinear method.
Degenerate patterns with simultaneous events such as simultaneous spike sendings and arrivals can be unstable or stable. In particular, in networks in which all non-degenerate patterns are stable, there can also be unstable periodic patterns. For excitatory coupling and concave rise function, where non-degenerate patterns are unstable, degenerate patterns can be strongly stabilized by supra-threshold excitation [10,11,37], because supra-threshold excitation eliminates small perturbations, cf. [37], so these patterns are observed as attractors in numerical simulations, cf. Fig. 6. In contrast, in purely inhibitorily coupled networks with convex rise function, there is no such obvious stabilization mechanism (as long as no additional features are introduced) and indeed, we find continued irregular activity after the dynamics has left the unstable periodic pattern. This regime remains to be studied in detail.
Precisely timed patterns of spikes have been found in neurobiological experiments [3,32,30,23,15,2,12,31,13]. For their identification, statistical methods have been derived, e.g. in [12]. Analyzing the occurrence of precisely timed spike patterns in model networks with these tools will, on the one hand, allow testing of the statistical methods. On the other hand, it will show for which model parameters precisely timed patterns of spikes occur in the model spike trains and how appropriate the models are for describing neural activity.
Important directions of future research concern the robustness of the present results against the introduction of noise and against changes of the model, e.g. the introduction of a few excitatory couplings into inhibitory networks or of synaptic couplings with a finite time course. Moreover, the stability of non-periodic spike patterns and the occurrence of (potentially long) transients need to be understood. Some results in this spirit are presented in [46,45,17,26,44,18].