Interval stability for complex systems

Stability of dynamical systems against strong perturbations is an important problem of nonlinear dynamics relevant to many applications in various areas. Here, we develop a novel concept of interval stability, referring to the behavior of the perturbed system during a finite time interval. Based on this concept, we suggest new measures of stability, namely interval basin stability (IBS) and interval stability threshold (IST). IBS characterizes the likelihood that the perturbed system returns to the stable regime (attractor) in a given time. IST provides the minimal magnitude of the perturbation capable of disrupting the stable regime for a given interval of time. The suggested measures provide important information about the system's susceptibility to external perturbations which may be useful for practical applications. Moreover, from a theoretical viewpoint the interval stability measures are shown to bridge the gap between linear and asymptotic stability. We also suggest numerical algorithms for quantification of the interval stability characteristics and demonstrate their potential for several dynamical systems of various nature, such as power grids and neural networks.


Introduction
The problem of stability of complex dynamical systems against large perturbations is important both from a theoretical point of view and for applications in many fields. For example, in climatology natural or anthropogenic extreme events may cause severe changes in local and global climate systems [1-3]. In power grids, local failures, overloads or line breaks may lead to loss of synchronization and large-scale blackouts [4, 5]. In neuroscience, external stimulation of brain areas may terminate pathological neural activity [6, 7]. Other relevant examples include ecosystems [8, 9], biological networks [10] and many others. The study of system stability against large perturbations is fundamentally different from the case of small perturbations. First, the dynamics of small perturbations is linear, and linear stability analysis is nowadays a standard technique even for large-scale dynamical networks [11]. In contrast, the dynamics of large perturbations is typically nonlinear, and linearization is therefore inadequate. Second, although a realistic system must be stable against all small perturbations, it is usually stable only against some strong perturbations. Not many systems are globally stable, i.e. stable against arbitrarily large perturbations [12]. Therefore, a typical task is to define the class of 'safe' perturbations after which the system returns to the initial dynamical regime. From the nonlinear dynamics viewpoint, these are the perturbations which do not leave the basin of attraction.
If the evolution operator of the dynamical system is given, the safety of each particular perturbation can be easily tested by direct simulation. However, the full description of the attraction basin may be a non-trivial task. Even in a space of low dimension the basin of attraction may be quite complex, for example fractal [13]. At the same time non-local characteristics of the basin may be important in many applications. For example, they may serve as early-warning indicators for approaching global bifurcations [14,15] or provide information for optimal control of the system dynamics [16,17].
Recently, the concept of basin stability (BS) was suggested to describe the stability against large perturbations [15, 18-20]. The BS method was later used in various applications [21-24] and also confirmed experimentally [25]. BS is a probabilistic measure that characterizes the likelihood that the perturbation is 'safe', i.e. that the
perturbed system returns to the attractor. The value of BS depends on the predefined class of perturbations. Later we suggested another characteristic, the stability threshold (ST) [26]. It corresponds to the 'unsafe' perturbation of minimal amplitude, or the 'minimal seed' needed to disrupt the system [27, 28]. From the phase-space viewpoint, the ST corresponds to the minimal distance between the attractor and the border of its basin [29, 30].
Numerical algorithms to calculate both BS and ST were suggested. The BS can be estimated by a Monte-Carlo method by trying a large number of perturbations and testing the convergence to the attractor. The ST can be estimated by solving a constrained optimization problem, i.e. finding the minimal perturbation after which the system does not converge to the attractor. Note that both methods rely on the asymptotic behavior of the trajectories, i.e. they suggest integration of the system for (infinitely) long time in order to verify the convergence. In applications, it is often impractical to consider too long transients, since it is essential that the system returns to the attractor in finite time. Thus, it is important to study stability within finite time intervals.
In this paper we introduce a novel concept of interval stability which describes the stability of a system on finite time intervals. We qualify a perturbation as 'safe' if the system returns to a close vicinity of the attractor in a given time and as 'unsafe' in the opposite case. Similarly to the case of asymptotic stability, we introduce quantitative measures to characterize the class of safe perturbations. Namely, the interval basin stability (IBS) equals the probability that a perturbation of a predefined class is safe. The interval stability threshold (IST) defines the minimal magnitude of the unsafe perturbation. We discuss numerical methods for quantification of the interval stability measures and demonstrate their performance for various dynamical systems.

Definitions
First, let us formally define the time in which the perturbed system returns to the attractor. Consider a dynamical system

ẋ = f(x), x ∈ ℝ^N, (1)

and denote by x(t; x₀) the solution of the initial value problem (1) with x(0) = x₀. Let A be the attractor of (1), and let ε > 0 be such that the ε-neighborhood of A belongs to the basin of its attraction. Then the return time T_ε of the point x to the attractor is defined as

T_ε(x) = min{t ≥ 0: dist(x(t; x), A) ≤ ε}, (2)

i.e. it is the minimal time within which the perturbed system returns to the (small) neighborhood of its attractor (see [31]). Naturally, the value of the return time depends on the size ε of the neighborhood, although we will sometimes omit the subscript ε. For points outside the basin we set T_ε = ∞, and for points within the ε-neighborhood we set T_ε = 0. Note that we do not specify the type of the attractor, i.e. it may be a steady state, a limit cycle or even a strange attractor [32-36]. The definition of the return time is also easily generalized to discrete-time dynamical systems [37, 38].
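To make definition (2) concrete, the sketch below computes the return time numerically for a hypothetical one-dimensional system ẋ = −x with attractor A = {0} (an illustrative toy, not one of the examples studied in this paper); for this system the exact value T_ε(x₀) = ln(x₀/ε) is known and serves as a check. All function and parameter names here are our own choices.

```python
import numpy as np

def rk4_step(f, x, dt):
    # one classical Runge-Kutta step for x' = f(x)
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def return_time(f, x0, dist_to_attractor, eps, t_max, dt=1e-3):
    """T_eps(x0): minimal t with dist(x(t), A) <= eps; inf if not reached by t_max."""
    x, t = np.asarray(x0, dtype=float), 0.0
    if dist_to_attractor(x) <= eps:
        return 0.0                       # already inside the eps-neighborhood
    while t < t_max:
        x = rk4_step(f, x, dt)
        t += dt
        if dist_to_attractor(x) <= eps:
            return t
    return np.inf                        # not returned within t_max

# hypothetical example: x' = -x, A = {0}, exact return time ln(x0 / eps)
f = lambda x: -x
T = return_time(f, np.array([1.0]), lambda x: abs(x[0]), eps=0.1, t_max=10.0)
```

For these parameters T should be close to ln(10) ≈ 2.30, up to the step size dt of the crossing detection.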
In figure 1, the concept of the return time is illustrated for the classical pendulum governed by the system

φ̇ = y, K ẏ = −α y + P − sin φ. (3)

Here, φ is the angle of deflection, α is the dissipation coefficient, K characterizes the inertness and P denotes the external torque. For the parameter values used in figure 1, the system is bistable with the coexistence of a stable fixed point O and a stable limit cycle. Here the fixed point is treated as the desirable state, and its stability is studied. The areas with different return times corresponding to ε = 0.4 are plotted in different colors. The union of all the colored areas constitutes the attraction basin of the steady state.

Now we can define the IBS and IST. Consider perturbations of (1) which define the subset X_P ⊂ ℝ^N of the perturbed states in the system phase space. Suppose that the maximal admissible return time is τ. Then, IBS is the fraction of the perturbed states whose return time is not larger than τ:

B_τ = V({x ∈ X_P: T_ε(x) ≤ τ}) / V(X_P). (4)

Here, V(X) denotes the volume of the subset X in the phase space. From a physical viewpoint, IBS defines the probability that the system returns to the attractor within time τ after a perturbation of the given class.
IST is defined as the minimal magnitude of the perturbation whose return time exceeds τ:

σ_τ = min{dist(x, A): T_ε(x) > τ}. (5)

The physical meaning of the IST is the following. If the magnitude of the perturbation is less than σ_τ, the system is guaranteed to return to the attractor in time τ at the latest. For stronger perturbations the return time may be larger. The corresponding points a ∈ A and b with dist(a, b) = σ_τ define the 'thinnest site' of the attraction basin and the most 'dangerous' direction of the perturbation.
IBS and IST naturally depend on the interval τ. Its value should be selected depending on the particular dynamical system and the particular application. As illustrated in figure 1, the larger the return time τ is, the closer the corresponding areas lie to the border of the attraction basin. Therefore, in the limit of large τ the interval stability characteristics approach the corresponding asymptotic values. Namely, the IBS tends to the (asymptotic) BS, and the IST converges to the (asymptotic) ST.

Numerical methods
Let us now address the issue of numerical estimation of the interval stability characteristics. The Monte-Carlo method for the estimation of the asymptotic BS is easily generalized to the IBS. The corresponding algorithm suggests testing numerous perturbed states randomly picked from a predefined class and integrating the system during the time τ. The convergence to the ε-neighborhood of the attractor is checked and the number of points with a return time less than τ is counted. Finally, IBS is estimated as

B_τ ≈ M_τ / M, (6)

where M is the total number of tested perturbations (numerical experiments) and M_τ the number of states with return time less than τ.

Now let us address the problem of numerical estimation of the IST. In fact, the IST is the solution of the constrained optimization problem

σ_τ = min g(x) subject to h(x) > τ, (7)

where g(x) = dist(x, A) is the distance from x to the attractor and h(x) ≡ T_ε(x). The literature devoted to constrained optimization offers a variety of numerical methods [39-42] which are not in the focus of the present paper. Here we will just discuss a specific feature of the return time function h(x) which may be a serious obstacle for the numerical solution of the constrained optimization problem (7). Namely, this function may be discontinuous with respect to the system state x.
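The Monte-Carlo estimate (6) can be sketched in a few lines. The example below uses a hypothetical one-dimensional bistable system ẋ = x − x³ with attractor x* = 1 (basin x > 0), not a system from the paper; perturbed states are drawn uniformly from an assumed interval and the ratio M_τ/M is accumulated.

```python
import numpy as np

def return_time(f, x0, dist, eps, tau, dt=1e-2):
    # returns T_eps(x0) if it does not exceed tau, otherwise inf
    x, t = float(x0), 0.0
    while t <= tau:
        if dist(x) <= eps:
            return t
        x += dt * f(x)                    # explicit Euler step (enough for a sketch)
        t += dt
    return np.inf

# hypothetical bistable system x' = x - x^3, attractor x* = 1, basin x > 0
f = lambda x: x - x**3
dist = lambda x: abs(x - 1.0)

rng = np.random.default_rng(0)
M, tau, eps = 400, 5.0, 0.05
samples = rng.uniform(-1.5, 1.5, size=M)  # predefined class of perturbed states
M_tau = sum(return_time(f, x0, dist, eps, tau) <= tau for x0 in samples)
B_tau = M_tau / M                          # Monte-Carlo estimate (6) of the IBS
```

Since only states with x₀ > 0 (and not too close to the basin boundary) return within τ, B_τ comes out close to one half for this symmetric sampling interval.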
To understand this unexpected phenomenon, we analyze an exemplary two-dimensional system depicted in figure 2(a). The system possesses a single steady state A, and we select the ε-neighborhood surrounding it, depicted by the red circle. Note that at points D and E the trajectories touch the circle, and the trajectories starting from the arc DE leave the circle. However, since A is the attractor and the neighborhood belongs to its basin, these trajectories return to the circle later. So, the trajectories from the gray sector in figure 2(a) cross the circle three times: at arc EF, at arc DE and at arc CD. The other trajectories cross the circle only once.
Let us now construct the set of points in the phase space whose return time equals τ. A naive solution would be to start from the points on the circle and integrate them backwards during the time τ. Then the circle is mapped to some closed curve T, and each point of T gets back to the circle in time τ. This seems to guarantee that the return time of each point on T equals τ. However, this is true only for trajectories out of the gray sector which cross the circle only once. The situation inside the gray sector is more complex, since points of T may have a smaller return time. Indeed, the trajectories starting from the intervals C 1 D 1 and D 1 E 1 cross the circle for the first time at moments earlier than τ. Thus, the return time equals τ only for the points in the interval E 1 F 1 . This means that the set of the points with return time τ is only a part of the closed curve T, as depicted in figure 2(b).
Thus, the return time may indeed be discontinuous in the phase space. Discontinuity of the constraint function causes significant difficulties for the constrained optimization and therefore should be avoided. An obvious way to avoid the discontinuity of the return time is to select a neighborhood of the attractor which is strictly absorbing, which would exclude contacts of the trajectories with its border. However, this is practically impossible for complex dynamical systems.
Here we suggest another approach, namely selecting another constraint function which is closely related, but not equal, to the return time. Namely, we introduce the return distance R_τ of the point x as

R_τ(x) = min_{0 ≤ t ≤ τ} dist(x(t; x), A),

i.e. the minimal distance to the attractor achieved during the time interval τ. Note that the return distance depends on the value of τ. It is also naturally related to the return time: it is easy to show that T_ε(x) = τ implies R_τ(x) = ε. However, the opposite is not always true, as illustrated in figure 2. Indeed, the trajectory starting from point E₁ contacts the circle at point E and later crosses it at point C. This implies that if the starting point x resides on this trajectory between points E₁ and C₁, the minimal distance to the attractor during the time interval τ is reached at point E and equals ε. Therefore the return distance R_τ equals ε for all these points, and the set of points with R_τ = ε consists of the set of points with T_ε = τ complemented by the piece of the trajectory E₁C₁, as depicted in figure 2(c). Thus, in contrast to the return time, the return distance is continuous in the phase space. An interesting question is the relation between the return distance R_τ and the return time T_ε in the areas where the latter is discontinuous. On the curve C₁E₁ in figure 2 the return time is discontinuous. However, it is easy to see that this curve separates areas with T_ε(x) > τ (above the curve) and T_ε(x) < τ (below the curve). It is possible to show that the following is true for an arbitrary dynamical system: (i) the return distance is a continuous function in the phase space, and (ii) near each point with R_τ(x) = ε, points with T_ε(x) > τ and with T_ε(x) < τ can be found (see appendix for the proof). These features suggest that the return distance is a more convenient choice of the constraint function in the constrained optimization problem associated to the IST.
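The return distance is straightforward to compute along a simulated trajectory: one simply tracks the running minimum of the distance to the attractor. The sketch below again uses the hypothetical linear system ẋ = −x with A = {0}, for which R_τ(x₀) = x₀e^(−τ), and checks the relation 'T_ε(x₀) = τ implies R_τ(x₀) = ε' stated above. Names and parameters are illustrative.

```python
import numpy as np

def return_distance(f, x0, dist, tau, dt=1e-3):
    """R_tau(x0): minimal distance to the attractor along the trajectory for t in [0, tau]."""
    x, t = float(x0), 0.0
    r = dist(x)
    while t < tau:
        x += dt * f(x)        # explicit Euler step (enough for a sketch)
        t += dt
        r = min(r, dist(x))   # running minimum of the distance
    return r

# hypothetical example: x' = -x, A = {0}, so R_tau(x0) = x0 * exp(-tau)
f = lambda x: -x
dist = lambda x: abs(x)
r = return_distance(f, 1.0, dist, tau=1.0)

# consistency with the return time: T_eps(x0) = ln(x0/eps), so the point
# x0 = eps * exp(tau) has T_eps = tau and hence R_tau = eps
eps, tau = 0.1, 2.0
r2 = return_distance(f, eps * np.exp(tau), dist, tau=tau)
```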
Namely, instead of solving (7) one can determine the IST by solving the modified problem

σ_τ = min g(x) subject to R_τ(x) ≥ ε. (8)

In this version the constraint function is continuous, which makes the problem much easier for a numerical solution. Since R_τ(x) is continuous and g(x) has no local minima outside the attractor, the solution of (8) is a point on the border of the constrained area, i.e. with R_τ(x) = ε. The existence of points with T_ε > τ in an arbitrarily small vicinity of this point guarantees, by definition (5), that the found solution corresponds to the IST.
We have developed an algorithm for the quantification of IST. The algorithm is an extension of the one developed for the asymptotic ST [26]. It includes two stages. (i) First, it searches for a point with R_τ(x) = ε, starting from a random point in the vicinity of the attractor and moving away from it. (ii) Once the point is found, the gradient ∇R_τ is calculated and the normal hyper-plane is constructed in the phase space. This hyper-plane is a local approximation of the isosurface R_τ = ε. On this hyper-plane a step in the direction towards the attractor is taken. The procedure is repeated until no further progress is achieved, which corresponds to arrival at a local minimum of the distance g(x). Starting from different points, the algorithm converges to various local minima, and their comparison provides an estimate for the IST.
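The gradient-based descent along the isosurface R_τ = ε is too long for a short sketch, but the quantity it estimates can be illustrated in one dimension, where no minimization over directions is needed. For the hypothetical bistable system ẋ = x − x³ with attractor x* = 1 (an illustrative toy, not a model from the paper), a bisection along the direction towards the basin boundary locates the closest state whose return time exceeds τ; its distance to the attractor is the IST restricted to that direction.

```python
def is_safe(x0, eps, tau, dt=1e-2):
    """True if the trajectory of x' = x - x^3 enters the eps-neighborhood of x* = 1 by time tau."""
    x, t = x0, 0.0
    while t <= tau:
        if abs(x - 1.0) <= eps:
            return True
        x += dt * (x - x**3)       # explicit Euler step
        t += dt
    return False

eps, tau = 0.05, 5.0
# bisection for the boundary between safe and unsafe initial states on [0, 1]:
# x0 = 0 is the unstable fixed point (never returns), x0 = 1 lies on the attractor
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if is_safe(mid, eps, tau):
        hi = mid
    else:
        lo = mid
sigma_tau = 1.0 - hi               # IST restricted to this direction
```

As τ grows, the threshold point approaches the basin boundary x = 0, so σ_τ approaches the asymptotic ST in this direction (equal to 1 here).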

Results
In this section we apply the described methods to study the interval stability of different dynamical systems. First, for illustrative purposes, we consider the classical pendulum (3) whose phase portrait is depicted in figure 1.
The numerical estimation of IBS for (3) is illustrated in figure 3. The class of the perturbed states X_P was defined as a region of the (φ, y) phase plane. The estimate (6) saturates as both the number of trials M and the maximal return time τ grow. This 'double saturation' feature of the estimate (6) provides a simple method for the fast and accurate estimation of the (asymptotic) BS. Namely, one should consecutively increase the return time τ and the number of trials M, calculating the ratio M_τ/M on each step. When saturation is achieved with respect to both M and τ, the obtained estimate is close to the real value of the BS.
In figure 4 we illustrate the performance of the algorithm for quantification of the IST. The algorithm starts from random points (of which only two are shown), moves along the isosurface R_τ = ε and converges to local minima of the distance to the attractor. Of the found local minima, the one which is closer to the attractor corresponds to the IST. In figure 5, the obtained σ_τ is plotted versus the interval length τ. Note that for large enough τ the value of σ_τ saturates and converges to the (asymptotic) ST: lim_{τ→∞} σ_τ = σ.

The next example we consider is a network of coupled pendulums

φ̇_i = y_i, ẏ_i = −α y_i + P_i + Σ_j K_ij sin(φ_j − φ_i). (9)

This system is often regarded as a conceptual model of power grids, i.e. networks of connected generators and consumers of electrical power [5, 43]. In this context, φ_i is the angular variable describing the rotor position, α is the dissipation parameter, P_i is the electrical power produced or consumed by the given node i, while K_ij characterizes the capacity of the transmission line between the nodes i and j. Here, we analyze a prototypical network of N = 10 nodes, each one randomly assigned to be a generator or a consumer. The nodes are organized in a regular ring, each connected with two neighbors on each side (figure 6(a)). Further, we set P_i = 1 for generators and P_i = −1 for consumers, and equal transmission line capacities K_ij = K = 0.55 for all connected nodes. For these parameters and all values of α > 0, the network demonstrates bistability with two coexisting dynamical regimes. The first one is synchronization between all oscillators, which in this case rotate with the same frequency and fixed phase lags.
In the context of power grids, this synchronized regime of the network operation is the desirable one. The second regime is the asynchronous one corresponding to the mutual rotation of the oscillators and malfunction of the power grid. The two regimes of the network are illustrated in figures 6(b) and (c).
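The synchronized regime can be written down explicitly for a particular generator/consumer placement. The sketch below assumes an alternating pattern around the ring (the paper uses a random assignment) and a standard swing-equation form of the coupled-pendulum model; with these assumptions the synchronized state has only two distinct phases, and the phase lag δ follows from the power balance 2K sin δ = 1, which requires K > 0.5 and is consistent with K = 0.55. The value of α below is illustrative.

```python
import numpy as np

N, K, alpha = 10, 0.55, 0.1
P = np.array([1.0 if i % 2 == 0 else -1.0 for i in range(N)])  # alternating generators/consumers

# ring coupling: two neighbours on each side, equal line capacities K
A = np.zeros((N, N))
for i in range(N):
    for d in (1, 2):
        A[i, (i + d) % N] = A[i, (i - d) % N] = K

def rhs(phi, y):
    """Right-hand side of the coupled-pendulum (swing) equations."""
    coupling = (A * np.sin(phi[None, :] - phi[:, None])).sum(axis=1)
    return y, -alpha * y + P + coupling

# candidate synchronized state: generators at phase delta, consumers at phase 0
delta = np.arcsin(1.0 / (2.0 * K))
phi0 = np.where(P > 0, delta, 0.0)
_, acc = rhs(phi0, np.zeros(N))             # accelerations must vanish at a fixed point
```

Each generator injects P = 1, which is carried away by its two consumer neighbours (2K sin δ = 1), so all accelerations vanish and this state is a fixed point of the model.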
We study the interval stability of the synchronized state of the network. In figure 7(a), IST is plotted versus the parameter α for different values of the return time τ. Note that for large τ the value of σ_τ saturates to a constant value equal to the (asymptotic) ST σ (for small α the saturation takes place at larger τ, data not shown). This feature demonstrates how σ_τ can be used to determine σ: σ_τ should be computed for increasing values of τ until the saturation is reached.
Thus, for large return times the interval stability transfers to the asymptotic stability. In contrast, for small return times the interval stability is related to the linear stability, since the dynamics near the attractor is linear. Namely, in a small vicinity of the attractor the convergence of the trajectories is governed by the largest non-zero Lyapunov exponent λ < 0, so that a perturbation of magnitude σ decays to σe^(λτ) within the time τ. This yields the estimate σ_τ ≈ ε e^(−λτ) for small enough ε and τ. The lines corresponding to these estimates are plotted in figure 7(b) by dashed lines. The exact value of σ_τ depends on the shape of the selected neighborhood and the directions associated with different Lyapunov exponents.
It is important to analyze the points in the global phase space associated with the IST. By definition, these are the closest points to the attractor for which the return time exceeds τ. The perturbations directed from the attractor towards these points are the most dangerous ones for the system. In the case of a dynamical network, the coordinates of this point correspond to the magnitude of partial perturbations applied to each node of the network.
The critical perturbations are illustrated in figure 8, where the magnitude of the partial perturbation is plotted versus the node number. In each panel, several critical perturbations found from different starting points are plotted in different colors. Note that the situation differs for small versus large return times. For the large return time τ = 30 the algorithm starting from different points converges to close points, suggesting the existence of a single well-pronounced minimum in the global phase space. The critical perturbation involves mostly nodes 9 and 10, which are perturbed the most strongly. In contrast, for the small return time τ = 10 the algorithm converges to multiple different points with close distances to the attractor. This means that many different directions of perturbation are equally dangerous for the network. The presumable reason for this may be understood from consideration of the linear dynamics near the attractor, which, as was shown above, determines the IST for small τ. In the case of the network (9) the spectrum of the synchronized state includes many equal Lyapunov exponents due to the high symmetry of the network. This suggests the presence of many directions in the phase space in which the trajectories converge at equal rates.
The last example we consider is a network of bistable units governed by the system

ẋ_i = x_i (x_i − a_i)(1 − x_i) + Σ_j K_ij x_j. (10)

Here, x_i characterizes the activity of the unit i, 0 < a_i < 1 is its excitation threshold, and K_ij is the coupling between the jth and the ith units. In the absence of interactions (K_ij = 0), each unit settles either at the low (x_i = 0) or at the high (x_i = 1) stable level. These states are separated by the unstable steady state x_i = a_i serving as the excitation threshold (not to be confused with the ST, although these notions are related). When the coupling is introduced, the stationary levels persist, although they may shift due to the input from the peers. As shown in [44], the mean-field dynamics of a homogeneous neural network may sometimes be approximated by a one-dimensional equation similar to the one describing a single node of (10). Therefore the network (10) may be interpreted as a population of connected sub-networks, or an inhomogeneous neural network with incorporated clusters [45] (see also [46, 47]). For illustrative purposes we again consider a network with a ring structure where each of N = 10 nodes is connected to its nearest neighbors on each side with the same coupling strength K_ij = K (see figure 9(a)). To avoid excessive symmetry and to study the influence of parameter mismatches, the thresholds of the nodes were set different, a_i = a_0 + iΔ with a_0 = 0.45 and Δ = 0.01.
We study the interval stability of the 'silent' state of the network for which all the nodes are low (x_i = 0 for all i). We select ε = 0.004 and τ = 20, which means that a perturbation is 'safe' if the network returns to the ε-neighborhood of the steady state in time τ at the latest. Figure 9 illustrates the results obtained by the algorithm for the numerical estimate of the IST depending on the coupling strength K. Different lines depict the distances to the local minima obtained by the algorithm starting from various random points. We see N + 1 = 11 different lines, some of which terminate at certain values of K. Different lines correspond to different local minima in the phase space, i.e. various directions of the perturbations associated with different nodes of the network.
One of the minima is associated with negative perturbations of the nodes (dashed line). This minimum is specific to the interval stability only and disappears in the asymptotic limit τ → ∞. The rest of the minima are associated with positive perturbations of the nodes (solid lines). In the following we concentrate on them only.
For K=0 the dynamics of the nodes is independent, and the destabilization of any node leads to the destabilization of the whole network. Let us analyze the 'partial' IST of each node. The dynamics of the single node is one-dimensional, i.e. the partial IST corresponds to such a perturbation of the variable x i that it returns back to zero and reaches the level x i =ε in time τ after the excitation. Thus, the partial IST σ τ i is just a solution of the initial value problem associated with the dynamics of the single node in reverse time. Namely, if Obliviously, the higher the excitation threshold a i is, the larger is the partial IST σ τ i . The excitation threshold also serves as the partial (asymptotic) ST which implies that σ τ i →a i for t  ¥.
For K=0, the partial IST of each unit corresponds to the local minimum of the distance between the silent steady state and the 'unsafe' area of the global phase space. Since different nodes possess unequal excitation thresholds a i the partial ISTs are also different. Therefore the lines corresponding to the local minima start from various levels. The IST of the entire network is defined by the local minimum with the smallest distance to the attractor, i.e. the lowest one.
When the coupling strength K grows, the structure of the global phase space transforms, and the coordinates of the local minima change. The general tendency is a decrease of the distances to all the minima. Note that the perturbations associated with the local thresholds involve several nodes rather than one; however, one of the nodes is perturbed predominantly. At certain parameter values some of the local minima disappear. For strong enough coupling only one local minimum survives (except the one associated with negative perturbations), and this remaining minimum defines the IST of the network. It is associated with the predominant perturbation of the three most excitable nodes of the network. As the coupling strength grows further, the IST decreases significantly because of the strong positive feedback which destabilizes the network after even a small perturbation.

Conclusions
In the present paper we have suggested a novel concept of stability against large perturbations: the interval stability. In this framework a perturbation of the dynamical system is classified as 'safe' if the perturbed system returns to a small neighborhood of the attractor within a given interval of time. Naturally, the interval stability depends on the size of the neighborhood ε and the time interval τ. These parameters should be selected in view of the practical context to which the study is applied. For example, the choice of ε may be based on what is considered as normal operation of the system. The choice of τ may then be related to the maximal allowable duration of the malfunction period.
We have introduced quantitative measures to characterize the interval stability. The IBS is the probability that a perturbation from the predefined class is safe. The IST defines the minimal magnitude of the unsafe perturbation. From the phase space viewpoint, the IST corresponds to the minimal distance between the attractor and the set of 'unsafe' points with the return time larger than τ. The direction associated with this point also plays an important role being the most dangerous for the system.
The interval stability is a natural bridge filling the gap between linear and asymptotic stability. Namely, for small ε and τ the dynamics of the perturbations is linear, and the rate of their decay is determined by the maximal non-zero Lyapunov exponent of the attractor. Therefore the IST depends on the linear stability of the attractor. In contrast, for large τ the class of safe perturbations comprises the whole basin of attraction, and therefore the IST converges to the (asymptotic) ST.
We have suggested numerical algorithms for the estimation of the IBS and IST of an arbitrary dynamical system. Since IBS is a probabilistic measure, it is natural to use Monte-Carlo methods for its estimation. In contrast, IST is a deterministic measure, namely the solution of a constrained optimization problem. Solving this problem led us to the unexpected issue of discontinuity of the constraint function; however, reformulating the constraint in terms of the return distance allowed us to avoid this discontinuity.
In contrast to the case of asymptotic stability, numerical estimation of the interval stability measures relies on integration of the dynamical system over the finite time interval τ, which is beneficial for the performance. We have demonstrated the potential of the developed algorithms for various dynamical systems. The most promising direction is the study of the interval stability of dynamical networks. In this context, the IST is associated with the most dangerous 'combined' perturbation applied to several nodes simultaneously.