LOCAL STABILIZATION AND NETWORK SYNCHRONIZATION: THE CASE OF STATIONARY REGIMES

Abstract. Relationships between local stability and synchronization in networks of identical dynamical systems are established through the Master Stability Function approach. First, it is shown that stable equilibria of the local dynamics correspond to stable stationary synchronous regimes of the entire network if the coupling among the systems is sufficiently weak or balanced (in other words, stationary synchronous regimes can be unstable only if the coupling is sufficiently large and unbalanced). Then, it is shown that [de]stabilizing factors at the local scale are [de]synchronizing at the global scale, again provided the coupling is sufficiently weak or balanced. These results allow one to transfer, with almost no effort, what is known for simple prototypical models in biology and engineering to complex networks composed of these models. This is shown through a series of applications ranging from networks of electrical circuits to various problems in ecology and sociology involving migrations of plant, animal, and human populations.

1. Introduction. We focus in this paper on the relationships between local stability and synchronization in networks of identical dynamical systems. No constraints are imposed on the topology of the networks and on the dimension and dynamics of the systems. By contrast, the interactions among the systems are assumed to be linear and symmetric not only because this simplifies the analysis but also because this is the case of interest in many applications of population dynamics where the interactions among the systems represent migration flows among various regions or patches.
In such networks complete synchronization is always possible because if all systems are in the same initial state, then all mutual interactions are fully balanced and all systems remain synchronous forever, since they are governed by the same equations. This implies that the network can remain forever in a synchronous regime where all systems evolve on the same local attractor, repellor, or saddle, as if they were in isolation. However, it is not guaranteed that these synchronous regimes are stable. In fact, even when all systems are evolving on the same attractor, a small perturbation of the state of one or more systems can give rise to a transient that brings the network toward a completely different regime. For example, in-phase synchronous mechanical or electrical oscillators can lose their synchrony after a small perturbation and end up in a chaotic regime or in a periodic but out-of-phase regime ([1], [2]).
Various methods exist for the analysis of the stability of the synchronous regimes, but two of them are particularly effective. The first, known as the Master Stability Function (MSF) approach [3], provides, for generic networks, a necessary and sufficient condition for local stability, while the second, known as the Connection Graph Method [4], is based on Liapunov functions and provides a sufficient condition for global stability.
Here we use the MSF approach that can be applied to all possible synchronous regimes (stationary, periodic, quasi-periodic, and chaotic).
The aim of the paper is to establish a bridge between the local dynamics of the systems and the rate of convergence toward synchronous regimes. This is conceptually important but also very useful, because the dynamics of many simple prototypical models in biology and engineering are already well understood, while almost nothing is known on the synchrony of networks composed of these models. More precisely, we show that the factors stabilizing [destabilizing] local attractors accelerate [delay] the convergence to synchronous regimes. In other words, we show that stabilizing [destabilizing] factors at the local scale have the property of enhancing [attenuating] synchrony at the global scale. This sort of principle, which can be stated briefly by saying that "[de]stabilizing factors are [de]synchronizing", is valid for weak coupling but holds true also for large coupling provided all state variables have roughly the same weight in forming the interactions among the systems. Most likely the principle is valid for any type of attractor (equilibria, limit cycles, tori, and strange attractors) but, in any case, its proof is rather different in each case because there is no unique way of defining a stabilizing factor for each kind of attractor. For this reason we consider in this paper only the case of stationary synchronous regimes, where all systems are exactly at (or close to) the same stable equilibrium, while the extension of the principle to the other synchronous regimes will be discussed elsewhere.
The paper is organized as follows. In the next section we define the class of networks we consider and recall the basic features of the MSF approach. Then, in the third section we derive the principle "[de]stabilizing factors are [de]synchronizing" and in the fourth one we prove its usefulness through the analysis of six problems taken from biology, engineering, and sociology. The value and limitations of the principle, as well as some open problems, are shortly discussed in the last section.
2. Network synchronization and master stability function. The networks we consider in this paper are composed of N identical and linearly interacting systems. Each system i (i = 1, 2, . . . , N) has n state variables, which are the components of an n-dimensional vector x^(i). The network is described by

ẋ^(i)(t) = f(x^(i)(t)) + d Σ_{j∈S_i} H (x^(j)(t) − x^(i)(t)),    i = 1, 2, . . . , N,    (1)

where f(·) is the local model, namely the equation governing the dynamics of an isolated system; S_i is the set of systems directly connected to system i (i.e. the neighborhood of i); d is the coupling strength; and H = diag[h1, h2, . . . , hn] (with hl ≥ 0 for all l, and Σ_{l=1}^n hl = 1) is the coupling profile, i.e. a constant diagonal matrix specifying the relative weights of the state variables in the interaction term. This form of the matrix H is the one most frequently used in population dynamics ([5], [6], [7], [8]). More complex H matrices could be considered, but the analysis would become more difficult.
Solutions of (1) characterized by

x^(1)(t) = x^(2)(t) = . . . = x^(N)(t)    for all t    (2)

are called synchronous solutions. As a matter of fact, when (2) holds at time t = 0, the interaction terms in (1) vanish for all t, and the dynamics of each system is simply governed by ẋ(t) = f(x(t)). Notice that this is possible thanks to the assumption that all systems are identical. When (2) holds, the trajectory of system (1) is confined to an n-dimensional linear manifold Ω called the synchrony manifold. It is of utmost importance to assess whether the synchronous solution (2) is stable, i.e. whether system (1) converges to (2) from any nearby initial state (local synchronization) or from any state (global synchronization). Through straightforward manipulations, equation (1) can be rewritten as

ẋ^(i)(t) = f(x^(i)(t)) − d Σ_{j=1}^N g_ij H x^(j)(t),    i = 1, 2, . . . , N,    (3)

where the N × N connectivity matrix G = [g_ij] describes the topology of the network. More precisely, for i ≠ j, g_ij = g_ji = −1 if systems i, j are directly connected and g_ij = g_ji = 0 otherwise, whereas g_ii = −Σ_{j≠i} g_ij is the degree of system i, i.e. the number of systems directly connected with i. To avoid degeneracies, we assume that any pair of systems is connected either directly or through a chain of systems. Thus G is a real, symmetric and irreducible matrix ([1], [9]). In addition, all off-diagonal elements are non-positive, and each row has zero sum. As a consequence, the eigenvalues µ_i of G are real and

0 = µ_1 < µ_2 ≤ . . . ≤ µ_N.

Given a network, i.e. a connectivity matrix G, the local stability of the synchrony manifold Ω can be ascertained by looking at the evolution of the differences (x^(i)(t) − x^(1)(t)), i = 2, 3, . . . , N, which are described, after neglecting the higher-order terms in the Taylor expansion, by an (n × (N − 1))-dimensional linear system with time-varying Jacobian matrix. Through a suitable change of coordinates based on the eigenvectors of the matrix G, it can be shown ([3], [6]) that this system decouples into N − 1 independent n-dimensional systems with matrices

∂f/∂x − µ_i d H,    i = 2, 3, . . . , N,

where ∂f/∂x is evaluated along a solution of ẋ = f(x). These matrices depend upon the local model f, the coupling profile H, and the product µ_i d.
Given f and H, one can therefore consider the parameterized family of n × n matrices

A(ε) = ∂f/∂x − εH,    ε > 0,    (4)

and denote by L(ε) the largest Liapunov exponent of each element of the family. The function L(ε) is known as the Master Stability Function (MSF) [3] and is very useful for discussing the impact of the local characteristics on the synchronization of the network. In particular, if (ε1, ε2) is an interval where L(ε) < 0, the synchronous solution is stable for all coupling strengths d such that ε1 < µ_2 d and µ_N d < ε2; such values of d exist provided

µ_N / µ_2 < ε2 / ε1,    (5)

i.e. synchrony can be obtained for suitable values of the coupling strength. It is worth noticing that (5) is always satisfied in fully connected networks, because in that case all positive eigenvalues of G coincide. The most interesting synchronous solutions one can study with the MSF approach are those in which x^(i)(t) in (2) is an attractor of the isolated system ẋ = f(x). In such cases, the negativity of L(ε) reveals that after any small perturbation the network returns to the synchronous solution at the exponential rate exp(L(ε)t). Thus, lower (negative) values of L(ε) imply a faster convergence to the synchronous regime.
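In the stationary case treated in the next section, L(ε) reduces to the real part of the dominant eigenvalue of ∂f/∂x − εH, which makes the whole machinery easy to check numerically. A minimal sketch, with an arbitrary 4-node ring topology and an illustrative stable local Jacobian (neither taken from the paper):

```python
import numpy as np

def laplacian(edges, N):
    """Connectivity matrix G: g_ij = -1 for connected pairs, g_ii = degree of i."""
    G = np.zeros((N, N))
    for i, j in edges:
        G[i, j] = G[j, i] = -1.0
    for i in range(N):
        G[i, i] = -G[i].sum()  # zero row sums
    return G

def msf(J, H, eps):
    """L(eps): real part of the dominant eigenvalue of A(eps) = J - eps*H."""
    return np.linalg.eigvals(J - eps * H).real.max()

G = laplacian([(0, 1), (1, 2), (2, 3), (3, 0)], 4)  # ring of 4 systems
mu = np.sort(np.linalg.eigvalsh(G))                 # eigenvalues 0, 2, 2, 4

J = np.array([[-1.0, 2.0], [-2.0, -1.0]])  # local Jacobian: eigenvalues -1 +/- 2i
H = np.diag([0.5, 0.5])                    # balanced coupling profile (n = 2)
d = 1.0                                    # coupling strength

# the synchronous equilibrium is stable iff L(mu_i * d) < 0 for i = 2, ..., N
L_vals = [msf(J, H, m * d) for m in mu[1:]]
print(L_vals)  # all negative: for this balanced H, L(eps) = -1 - eps/2
```

For this balanced profile L(ε) = λ − ε/2, so every L(µ_i d) is negative and the stationary synchronous regime is stable for any coupling strength d.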
3. Stabilizing factors enhance synchronization. From now on we limit the discussion to networks in stationary synchronous regimes, namely to networks in which all systems are at the same equilibrium x satisfying f(x) = 0. The MSF approach is particularly simple in this case because the matrix A in (4) is time-invariant and L(ε) is nothing but the real part of the dominant eigenvalue of A. Moreover, we restrict our attention to hyperbolic stable equilibria x, i.e. we assume that the Jacobian matrix (∂f/∂x)_x has eigenvalues with negative real parts, and denote by λ the real part of the dominant eigenvalue.
Under these assumptions, for small values of ε the eigenvalues of A are close to the eigenvalues of (∂f/∂x)_x, so that L(ε) is close to λ and hence L(ε) < 0. By contrast, for large values of ε, if h_i ≠ 0 for all i, i.e. if all state variables appear in the interaction term, the eigenvalues of A are close to −h_i ε, so that L(ε) is approximately equal to −(min_i h_i)ε and hence L(ε) < 0. Finally, if the coupling profile is sufficiently balanced, i.e. if the weights h_i of the state variables in the interaction term are roughly equal to 1/n, then the eigenvalues of A are approximately equal to the eigenvalues of (∂f/∂x)_x shifted to the left by an amount ε/n, so that L(ε) ≅ λ − ε/n; hence L(ε) is negative for all ε (recall that λ < 0) and decays linearly with ε. All we have said is summarized by the following obvious result.

Theorem 3.1. Let x be a hyperbolic stable equilibrium of ẋ = f(x). Then:
i) L(ε) < 0 for ε sufficiently small;
ii) if h_i ≠ 0 for all i, L(ε) < 0 for ε sufficiently large;
iii) if h_i ≅ 1/n for all i, L(ε) ≅ λ − ε/n < 0 for all ε.

Point i) of Theorem 3.1 says that any stable equilibrium x of the isolated system ẋ = f(x) is associated to a stable synchronous stationary regime in the network if the coupling strength is sufficiently small. Although point iii) suggests that L(ε) decreases with ε, i.e. that coupling strength reinforces synchronization, this is not always true, as shown in Figure 1, which reports the MSF L(ε, h) for three small values of h. Figure 1 actually points out that the stationary synchronous regime can become unstable when the coupling strength is sufficiently increased. This is a well known phenomenon, often called Turing instability ([11], [12], [13]) in the context of PDE models, where the change of sign of the MSF is associated with the loss of stability of the constant and uniform solution of the PDE and with the emergence of spatio-temporal patterns including standing, traveling, and spiral waves [14].
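This dichotomy can be checked numerically. In the sketch below (illustrative matrices, not those of Figure 1), the same stable Jacobian gives L(ε) < 0 for all ε under a balanced profile, while an unbalanced profile coupling only the second variable makes L(ε) positive at large ε, i.e. a Turing-type loss of synchrony:

```python
import numpy as np

def msf(J, H, eps):
    """L(eps) for a stationary regime: dominant real part of J - eps*H."""
    return np.linalg.eigvals(J - eps * H).real.max()

J = np.array([[1.0, -2.0], [3.0, -4.0]])  # stable: eigenvalues -1 and -2
H_bal = np.diag([0.5, 0.5])               # balanced coupling profile
H_unb = np.diag([0.0, 1.0])               # only the second variable is coupled

print(msf(J, H_bal, 10.0))  # = -1 - 10/2 = -6: stable for any eps
print(msf(J, H_unb, 10.0))  # > 0: synchrony lost at large unbalanced coupling
```

With the unbalanced profile the determinant of J − εH changes sign at ε = 2, after which the matrix is a saddle and L(ε) > 0, exactly the mechanism behind the change of sign of the MSF described above.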
Let us now consider systems with dynamics depending continuously upon a parameter p, i.e. ẋ = f(x, p), so that the equilibrium x is a function of p, and assume that the equilibrium x(p) does not bifurcate in a given parameter range and therefore remains hyperbolic. Under this assumption, we can identify the subintervals where p is a stabilizing factor, as those where ∂λ/∂p < 0, while in the remaining subintervals the parameter is a destabilizing factor because ∂λ/∂p > 0. The sense of this definition is that any small increase of a stabilizing [destabilizing] factor p accelerates [delays] the convergence toward the equilibrium x(p) in the isolated system ẋ = f(x, p) (recall that in the vicinity of the equilibrium the convergence is described by the exponential function exp(λt)). In order to avoid possible misunderstandings, it is important to notice that when an isolated system has multiple stable equilibria a parameter can be stabilizing for some of them and destabilizing for the others.
Since the parameter p also has an influence on the MSF, it is natural to define p as a synchronizing [desynchronizing] factor where ∂L/∂p < 0 [> 0]. The relationship between the two notions is established by the following result.

Theorem 3.2. If x(p) is a stable hyperbolic equilibrium of ẋ = f(x, p) (i.e. the Jacobian matrix (∂f/∂x)_x has a dominant eigenvalue with negative real part (λ(p) < 0)), and if p is a stabilizing [destabilizing] factor (i.e. ∂λ/∂p < 0 [> 0]), then p is a synchronizing [desynchronizing] factor for ε sufficiently small (i.e. for low coupling strength) and for any ε if the coupling profile is almost balanced (i.e. if h_i ≅ 1/n for all i).
Proof. If p is increased by a small amount ∆p, then λ(p) varies by an amount approximately equal to (∂λ/∂p)∆p. But for small ε, L(ε, p) is in an ε-neighborhood of λ(p), while L(ε, p + ∆p) ≅ L(ε, p) + (∂L/∂p)∆p is in an ε-neighborhood of λ(p + ∆p). Since for ε sufficiently small the two neighborhoods have no point in common, ∂L/∂p and ∂λ/∂p have the same sign. Moreover, if the coupling profile is perfectly balanced (h_i = 1/n for all i), then the eigenvalues of A in (4) are the eigenvalues of (∂f/∂x)_x shifted to the left by an amount ε/n. Thus, the eigenvalues of A vary with respect to p exactly as the eigenvalues of (∂f/∂x)_x. By continuity, one can conclude that ∂L/∂p is approximately equal to ∂λ/∂p if the coupling profile is almost balanced.
Theorem 3.2 suggests that stabilizing factors might be desynchronizing for high coupling strengths when some of the state variables are dominant in the interaction term. Figure 2 shows this possibility for a system in which p is stabilizing but not synchronizing for large values of ε.
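This decoupling between the local and the network scale at large unbalanced coupling is easy to reproduce numerically. In the illustrative example below (matrices chosen for convenience, not those of Figure 2), increasing p destabilizes the local equilibrium (∂λ/∂p > 0), yet at ε = 10 under an unbalanced profile it decreases the MSF (∂L/∂p < 0); with a balanced profile the two variations coincide, as in the proof above:

```python
import numpy as np

def J(p):
    """Local Jacobian depending on the parameter p (stable for p < 1.5)."""
    return np.array([[p, -2.0], [3.0, -4.0]])

def lam(p):
    return np.linalg.eigvals(J(p)).real.max()        # lambda(p)

def msf(p, H, eps):
    return np.linalg.eigvals(J(p) - eps * H).real.max()  # L(eps, p)

H_unb = np.diag([1.0, 0.0])  # only the first variable is coupled
H_bal = np.diag([0.5, 0.5])  # balanced profile

dlam = lam(1.1) - lam(1.0)                              # > 0: p destabilizes locally
dL = msf(1.1, H_unb, 10.0) - msf(1.0, H_unb, 10.0)      # < 0: opposite sign at eps = 10
dL_bal = msf(1.1, H_bal, 10.0) - msf(1.0, H_bal, 10.0)  # equals dlam (pure shift)
print(dlam, dL, dL_bal)
```

The balanced case merely shifts all eigenvalues by ε/2, so the p-dependence of L is identical to that of λ; the unbalanced case distorts the spectrum enough to reverse the sign.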

4. Examples. We show in this section how Theorem 3.2 can be effectively applied to detect the role that some parameters have on the dynamics of the entire network when their influence on the local dynamics is known. One example is taken from electrical engineering, while the others refer to plant, animal, and human populations migrating among various regions.
In two examples out of six the coupling is small, while in the remaining ones it is balanced. We anticipate that in most cases the principle "[de]stabilizing factors are [de]synchronizing" holds true even when it is not guaranteed by Theorem 3.2, i.e. when the coupling is large and unbalanced.
4.1. Example 1 (antagonism and fanaticism). Increased human migratory flows among various regions of the world have stressed the interest in the role played by migration in the evolution of antagonistic cultures. Particularly significant, in this context, is the case of cultures characterized by some degree of fanaticism, which in extreme cases can give rise to religious wars and terrorism. Some of these cultures are supported by groups of fanatics that would smoothly go extinct if they had no opportunities to fight against each other. Actually, the stronger the opponents of group i, the higher the recruitment of new members in the i-th group. A naïve model that captures this perverse mechanism of interaction between two groups of dimensions x1 and x2 in a given country is the following:

ẋ1 = −α1 x1 + β1 x2
ẋ2 = −α2 x2 + β2 x1    (6)

where α_i is the rate at which the i-th group decays (as exp(−α_i t)) in the absence of opponents and β_i is a measure of the fear for group i. Model (6) is linear and has a unique equilibrium x, namely the origin (absence of fanaticism). The corresponding Jacobian matrix is stable if and only if α1 α2 > β1 β2: thus the product of the fears β = β1 β2 is a destabilizing factor. Under the assumption that the migration rates of the two groups are the same, we can immediately deduce from Theorem 3.2 that β is a desynchronizing factor, i.e. the convergence to the state of no fanaticism in any network of connected countries is slower if the fears are higher. It is interesting to note that the same result holds true even if the coupling profile is not balanced. In fact the matrix A given by (4), namely

A = [ −α1 − εh1      β1        ]
    [ β2             −α2 − εh2 ]

is a Metzler matrix and has two real eigenvalues, say L and l. Moreover L, being an eigenvalue of A, must satisfy the equation

(L + α1 + εh1)(L + α2 + εh2) = β1 β2 = β.

Since L > l, it follows that L is always increasing with β, i.e. β is always a desynchronizing factor.
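The monotonicity argument can be verified numerically. A minimal sketch (parameter values arbitrary, profile deliberately unbalanced) showing that the dominant eigenvalue L of the matrix A grows with the product of the fears:

```python
import numpy as np

def dominant(a1, a2, b1, b2, eps, h1, h2):
    """Dominant eigenvalue of the Metzler matrix A of Example 1."""
    A = np.array([[-a1 - eps * h1, b1],
                  [b2, -a2 - eps * h2]])
    return np.linalg.eigvals(A).real.max()

a1, a2, eps, h1, h2 = 1.0, 2.0, 3.0, 0.9, 0.1   # strongly unbalanced profile
Ls = [dominant(a1, a2, b, b, eps, h1, h2) for b in (0.5, 1.0, 1.5)]
print(Ls)  # increasing with b: higher fears slow the decay of differences
```

The off-diagonal entries are positive, so the two eigenvalues are real and the dominant one increases with β even though h1 ≠ h2, in agreement with the argument above.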

4.2. Example 2 (competition). The most common model in competition theory is the Lotka-Volterra bilinear model

ẋ1 = r1 x1 (1 − (x1 + q1 x2)/K1)
ẋ2 = r2 x2 (1 − (x2 + q2 x1)/K2)    (7)

where x1 and x2 are the densities of two competing populations, K1 and K2 are their carrying capacities (i.e. their equilibria in the absence of competitors), and q1 and q2 are the so-called competition coefficients. Since model (7) cannot have limit cycles (this follows immediately from the Bendixson-Dulac Theorem ([15], [16])), its attractors are only equilibria. In particular, for q_i < 1, i = 1, 2, and

q1 K2 < K1 < K2/q2    (8)

the model has a unique and globally stable equilibrium x in the positive quadrant [17]. Under condition (8) the non-trivial isoclines of the model intersect at the positive equilibrium x while the two trivial equilibria (K1, 0) and (0, K2) are saddles, as shown in Figure 3. Thus for K1 approaching q1 K2 (left extreme of the range (8)) the stable equilibrium x tends toward the trivial equilibrium on the x2 axis, while for K1 approaching K2/q2 (right extreme of the range (8)) the stable equilibrium x tends toward the saddle point on the x1 axis. Thus, the two extremes of the range (8) correspond to two transcritical bifurcations, so that K1 first stabilizes and then destabilizes the equilibrium x. If we assume that the two competing populations have the same tendency to migrate (h = 1/2 in the H matrix), then we can conclude from Theorem 3.2 that K1 is first a synchronizing and then a desynchronizing factor. This means that in any network of connected regions a small increase of K1 first accelerates and then delays the convergence to the stationary synchronous regime.

4.3. Example 3 (harvesting). A standard prey-predator model with harvested predator is

ẋ1 = r x1 (1 − x1/K) − a x1 x2/(b + x1)
ẋ2 = x2 (e a x1/(b + x1) − m_o − m_h)    (9)

where x1 and x2 are prey and predator densities, r and K are net growth rate and carrying capacity of the prey, a and b are maximum predation rate and half-saturation constant of a Holling type II predator, e is predator efficiency, namely the number of predator newborns for each unit of predation flow, and (m_o + m_h) is total predator mortality, i.e. the sum of basic mortality m_o and mortality due to harvesting m_h.
Both prey and predator can migrate among neighboring patches, and even if the migration rates of the two populations are not the same, they are both important, so that the coupling profile can be assumed to be relatively balanced. Under these assumptions we can apply Theorem 3.2 and immediately conclude that, no matter what the structure of the network is, harvesting is [de]synchronizing if it is [de]stabilizing the equilibrium of model (9). But the stability properties of model (9) and the role played by harvesting can easily be established (see, for example, [19]). In fact, as shown in Figure 4, the non-trivial prey isocline ẋ1 = 0 is a parabola intersecting the x1-axis at points −b and K, while the predator isocline ẋ2 = 0 is a vertical straight line shifting to the right when the harvesting m_h is increased and intersecting the x1-axis at the point b(m_o + m_h)/(ea − m_o − m_h). Thus, there is a unique positive equilibrium x if the vertical isocline intersects the parabola in the positive quadrant, and the analysis of its Jacobian matrix shows that x is unstable if the predator isocline is on the left of the vertex of the parabola and stable otherwise. Moreover, a more detailed analysis ([20]) allows one to check that the transition of x through the vertex of the parabola is a supercritical Hopf bifurcation, while the collision of the equilibrium x with the saddle (K, 0) is a transcritical bifurcation. Thus, the harvesting effort m_h has first a stabilizing and then a destabilizing effect on the equilibrium x in the range

ea(K − b)/(K + b) − m_o < m_h < eaK/(K + b) − m_o

and the conclusion is that in the same range harvesting is first synchronizing and then desynchronizing.
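The conclusion can be checked numerically. The sketch below assumes the prey-predator equations consistent with the isoclines described in the text (logistic prey, Holling type II predation, total predator mortality m = m_o + m_h); the parameter values r = K = a = e = 1, b = 0.2 are arbitrary, and for them the Hopf bifurcation occurs at m = 2/3 and the transcritical at m = 5/6:

```python
import numpy as np

def lam(m, r=1.0, K=1.0, a=1.0, b=0.2, e=1.0):
    """Dominant real part of the Jacobian at the positive equilibrium of (9)."""
    x1 = b * m / (e * a - m)                  # vertical predator isocline
    x2 = (r / a) * (1 - x1 / K) * (b + x1)    # prey isocline (parabola)
    J = np.array([[r * (1 - 2 * x1 / K) - a * b * x2 / (b + x1) ** 2,
                   -a * x1 / (b + x1)],
                  [e * a * b * x2 / (b + x1) ** 2, 0.0]])  # J22 = 0 at equilibrium
    return np.linalg.eigvals(J).real.max()

# sweep m = m_o + m_h across the stability range (2/3, 5/6)
vals = [lam(m) for m in (0.68, 0.75, 0.82)]
print(vals)  # first decreasing, then increasing: m_h first stabilizes
```

The dominant real part is close to zero near both ends of the range (Hopf on the left, transcritical on the right) and most negative in between, which is exactly the "first synchronizing, then desynchronizing" pattern.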

4.4. Example 4 (mutualism). Mutually beneficial interactions between members of different species play a central role in ecology ([21]). Here we consider the case of a spatially extended two-species obligate mutualism where both species can migrate through corridors connecting various patches. Each species has a phenotypic trait u_i that measures the rate at which commodities (i.e. a reward like nectar or a service like pollination) are provided to the partner, and provision of commodities is assumed to be costly in terms of reproduction or survival. Since commodities provided by either species represent a limited resource for the partner species, there is intraspecific competition for commodities [22]. Thus, the model of each isolated patch is the following (see [23] and [24] for more details; the parameter values used in Figure 5 below are those of [23], namely r_i(u_i) = 0.1(u_i + u_i^2), i = 1, 2, u_2 = 40, c_1 = 1, c_2 = 2, α_1 = α_2 = 0.035, with curves plotted for ε equal to 0, 200, and 400):

ẋ_i = x_i (−r_i(u_i) + u_j x_j (1 − α_i x_i) − c_i x_i),    i, j = 1, 2,  j ≠ i,    (10)
where u_i x_i is the probability per unit time that an individual of species j ≠ i receives a benefit from the mutualistic interaction, while the terms (1 − α_i x_i) and −c_i x_i measure the detrimental effect of intraspecific competition for commodities provided by the partner species and for other resources. The mutualism being obligate, the intrinsic rate of increase −r_i(u_i) is negative, and r_i(u_i) is increasing with u_i to reflect the cost of producing commodities. A standard analysis of model (10) shows that the extinction state x_1 = x_2 = 0 is always a locally stable equilibrium. Depending on the trait values u_1 and u_2, there may also exist two positive equilibria, one being stable (a node) and the other unstable (a saddle). The transition between the two cases (zero or two equilibria apart from the extinction state) is caused by a saddle-node bifurcation. Thus, both u_1 and u_2 first stabilize and then destabilize the stable equilibrium x corresponding to persistent mutualistic interactions. This implies, through Theorem 3.2, that u_1 and u_2 first accelerate and then delay the convergence to the synchronous permanent regime of the entire network of patches if the coupling is small and/or balanced.
In some important cases of mutualistic interactions, like those between insects and plants, the coupling can be quite large and mainly due to one of the two partner species. In such cases the conditions of Theorem 3.2 are not satisfied and some extra work is needed to check whether [de]stabilization implies [de]synchronization also for high and unbalanced couplings. Figure 5 shows a typical result obtained in this case for the particular system analyzed in [23]. In this figure the horizontal axis spans the range of u_1 values for which the stable permanent regime exists, and the curve corresponding to ε = 0 shows the dependence of the dominant eigenvalue of the Jacobian matrix (∂f/∂x)_x upon the parameter. Since all curves are first decreasing and then increasing, the figure clearly points out that in this case "[de]stabilization implies [de]synchronization" even if the coupling is large and unbalanced.

4.5. Example 5 (pest control). Pest control is a considerable problem not only in agriculture but also in forest management and conservation, where insect pest outbreaks can have devastating consequences. In this context, the systematic use of insecticides is the most easily conceivable control action. A natural question concerning spatially extended forests, where insects disperse at a small rate, is whether the use of insecticides has a synchronizing effect on the entire forest. This question can be easily answered through Theorem 3.2 by imagining that the forest is composed of many identical sites with insects migrating among neighboring sites. Many models have been proposed for single sites ([25], [19], [26], [27], [28], [29], [30]), but spatially extended forests have never been studied through mathematical models, although insect pest outbreaks have been observed to be synchronous in many forests around the world ([31], [32]).
Here we present a first simple study of the problem by assuming (see [19]) that each single isolated site is described by

ẋ1 = r x1 (1 − x1/K) − a x1 x2/(b + x1)
ẋ2 = x2 (c a x1/(b + x1) − d − e x2) − z x2/(f + x2)    (11)

where x1 and x2 are trees and insects, r and K are net growth rate and carrying capacity of the trees, a is maximum insect consumption rate, b is the half-saturation constant of the insect functional response, c is the tree/insect conversion factor, d is basic insect mortality, e is insect intraspecific competition, z is the predation pressure of insect enemies (assumed constant) and f is the half-saturation constant of their functional response, i.e. the density of the insects at which the damage produced by their enemies is half maximum. For suitable parameter values, system (11) has four equilibria, as qualitatively sketched in Figure 6 (which shows the non-trivial isoclines of model (11) and the corresponding four equilibria, open circles being unstable; x = (K, 0) is the stable pest-free equilibrium). One of these equilibria, namely the pest-free equilibrium x = (K, 0), is stable in the case shown in Figure 6, namely when x_o > K, where x_o is the intersection of the non-trivial isocline ẋ2 = 0 with the x1 axis. Trivial computations show that

x_o = b(d + z/f)/(ca − d − z/f),

so that we can immediately conclude that all factors increasing [decreasing] x_o are stabilizing [destabilizing] the pest-free equilibrium at the local scale. Thus, increasing the predation pressure z of insectivores (biological control) or increasing insect mortality d through the use of insecticides stabilizes the pest-free equilibrium. Since, in general, insects disperse at a small rate [33], we can conclude from Theorem 3.2 that the use of insecticides accelerates the convergence of any spatially extended forest toward the pest-free stationary regime. Obviously, this simple but remarkable result might change if the model is modified in order to include some strategic phenomenon that is not considered in (11).

4.6. Example 6 (electrical circuit).
Consider the electrical circuit in Figure 7, where x1 is the current in the inductor and x2 is the voltage on the non-linear resistor NL described by an instantaneous input-output characteristic function i = ϕ(x2). This and similar circuits are known as Van der Pol circuits when the characteristic is cubic, i.e. ϕ(x2) = −A x2 + B x2^3 with A, B > 0. In such a case, the system is described by

ẋ1 = (x2 − R x1)/L
ẋ2 = −(x1 + ϕ(x2))/C    (12)

and it is very easy to show that the circuit oscillates on a stable limit cycle if R < AL/C (in such conditions the circuit is called a Van der Pol oscillator), while for

AL/C < R < 1/A    (13)

the circuit has a unique attractor, namely the equilibrium x = (0, 0). More precisely, at R = AL/C system (12) has a supercritical Hopf bifurcation. Thus, the resistance R is first a stabilizing and then a destabilizing factor in the range (13). The synchrony of networks composed of identical Van der Pol oscillators has been studied extensively in the past (see, for example, [34] and [35] for early contributions), in particular in the case of weak coupling. The results in that context are rather complex, but if we limit the analysis to the range (13) they become very simple. In fact, under the assumption of weak coupling, from Theorem 3.2 we can immediately derive that the resistance R is a synchronizing [desynchronizing] factor for any network of identical Van der Pol circuits close to the left [right] extreme of range (13). By contrast, for sufficiently high coupling strengths the result is not guaranteed if the coupling profile is unbalanced, as shown in Figure 8.
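A numerical check, assuming (as a reconstruction consistent with the Hopf condition R = AL/C stated in the text) that the Jacobian of the circuit at the origin is [[−R/L, 1/L], [−1/C, A/C]]; the values A = 0.5, L = C = 1 are arbitrary and give the stability range (0.5, 2) for R:

```python
import numpy as np

def lam(R, A=0.5, L=1.0, C=1.0):
    """Dominant real part of the assumed circuit Jacobian at the origin."""
    J = np.array([[-R / L, 1.0 / L],
                  [-1.0 / C, A / C]])
    return np.linalg.eigvals(J).real.max()

# sweep R across the assumed stability range (A*L/C, 1/A) = (0.5, 2)
vals = [lam(R) for R in (0.6, 1.2, 1.9)]
print(vals)  # first decreasing, then increasing: R first stabilizes
```

Under this assumption the dominant real part approaches zero at both ends of the range (Hopf on the left, loss of hyperbolicity on the right), reproducing the "first stabilizing, then destabilizing" role of R.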
5. Concluding remarks. A general class of networks composed of identical dynamical systems has been considered in this paper with the aim of establishing relationships between local dynamics and network synchronization. The first result (Theorem 3.1) is that stable equilibria of the local dynamics correspond to stable stationary synchronous regimes of the entire network if the coupling is sufficiently weak or balanced, i.e. if all state variables participate with roughly the same strength in forming the interactions among the systems. By contrast, the stationary synchronous regime can be unstable (Turing instability [11]) if the coupling is sufficiently large and unbalanced. The second result (Theorem 3.2) is that [de]stabilizing factors at the local scale are [de]synchronizing at the global scale for weak and/or balanced couplings, while for sufficiently high and unbalanced couplings stabilizing [destabilizing] factors can be desynchronizing [synchronizing]. Although these or similar results are likely valid for any kind of synchronous regime, they have been discussed only with reference to stationary synchronous regimes, because the cases of periodic, quasi-periodic, and chaotic regimes are technically quite different and will be described elsewhere.
Besides being conceptually important, our results can also be very useful, as shown in the section devoted to a series of examples, because many simple prototypical models in biology and engineering are already well understood, while almost nothing is known on the synchrony of networks composed of these models. But the same results can also be used to establish, with almost no effort, conclusions that would be difficult to derive through experimental studies. For example, in a recent paper [36] Trzcinski et al. describe a rather complex experimental set-up concerning a small protist, Bodo, that inhabits pitcher plant leaves and occupies a central place in the food web (feeding on bacteria and being preyed upon by mosquito larvae). That study reaches two conclusions, namely i) Bodo populations with a higher maximum rate of increase are less stable, and ii) they give rise to metapopulations which are less synchronous. Obviously, in view of Theorem 3.2, the second conclusion is just a consequence of the first one.
The results presented in this paper can certainly be extended in various directions. In particular, networks of identical discrete-time dynamical systems, as well as distributed parameter systems described by PDE models, should enjoy the same properties pointed out in Theorems 3.1 and 3.2. Less obvious are the extensions to networks in which the interactions among the systems are asymmetric and/or time-varying (periodic or aperiodic), as in ecological systems where migration of various species can occur only in specific seasons or during rare and random environmental events. Finally, extensions to cases in which migration is an active process associated with various kinds of risk (including death) are worth considering but certainly not easy.