Amplification without instability: applying fluid dynamical insights in chemistry and biology

While amplification of small perturbations often arises from instability, transient amplification is possible locally even in asymptotically stable systems. That is, knowledge of a system's stability properties can mislead one's intuition for its transient behaviors. This insight, which has an interesting history in fluid dynamics, has more recently been rediscovered in ecology. Surprisingly, many nonlinear fluid dynamical and ecological systems share linear features associated with transient amplification of noise. This paper aims to establish that these features are widespread in many other disciplines concerned with noisy systems, especially chemistry, cell biology and molecular biology. Here, using classic nonlinear systems and the graphical language of network science, we explore how the noise amplification problem can be reframed in terms of activatory and inhibitory interactions between dynamical variables. The interaction patterns considered here are found in a great variety of systems, ranging from autocatalytic reactions and activator–inhibitor systems to influential models of nerve conduction, glycolysis, cell signaling and circadian rhythms.


Introduction
The study of transients critically impacts our understanding of noisy systems subject to frequent perturbations. Across many disciplines, there is a growing awareness that stability analysis techniques can mislead one's expectations about transients and their consequences [1][2][3][4][5][6][7][8][9][10][11][12][13][14]. This provocative development has deep roots in fluid dynamics, where stability theory has on the whole been enormously successful. A variety of characteristic behaviors, such as the emergence of thermal convection patterns in fluids heated from below, arise from hydrodynamic instabilities that trigger transitions from one flow state to another [15,16]. Such transitions have long been understood as a natural consequence of the amplification of small fluctuations around an unstable state. There are systems that do not conform to this paradigm, however. Turbulence in simple shear flows, for example, often emerges in parameter regimes for which the flow is asymptotically stable [1][2][3][4][5]. One accepted solution to this riddle recognizes that, despite linear stability, small perturbations in these systems may be amplified locally for short periods of time. If these perturbations grow to sufficient size, they can excite nonlinear interactions that lead the system even further from its expected behavior. Thus, transient behaviors can substantially increase noise levels and, even in an asymptotically stable system, can mimic instability. These insights have a parallel history in atmospheric and oceanographic fluid dynamics [6][7][8][9].
More recently, it was discovered that transient amplification is common in standard ecological models as well [10][11][12][13][14]. One of the real surprises, for both the fluid dynamics and ecology communities, was the discovery that transient amplification is initiated locally by linear features found in many different systems. Local amplification of noise and small perturbations is possible, even in the absence of any instabilities or nonlinear growth mechanisms, because short-term and long-term dynamics are governed by different eigenvalue problems. When every eigenvalue of the system's Jacobian matrix M has a negative real part, asymptotic stability is guaranteed [17][18][19]. The eigenvalue λ_max(M) with the largest real part describes the system's maximum long-term growth rate. The maximum instantaneous growth rate, a quantity known as reactivity [10], provides a short-term complement to λ_max(M). A system's reactivity is equal to the largest eigenvalue of the symmetric part of its Jacobian, i.e. λ_max(H(M)), where H(M) = (M + M^T)/2. Negative reactivity indicates immediate decay following any small perturbation, while positive reactivity indicates that transient growth is possible. Note that short-term growth and long-term decay are separate questions governed by different matrices. Note also that λ_max(H(M)) ≥ Re(λ_max(M)) [20]. Thus, H(M) can have a positive eigenvalue even if M itself has no eigenvalues with positive real part; this is the analytical fingerprint of a system capable of transient local amplification near an attracting steady state [4-7, 10-13, 20]. This linear framework unites many dynamical systems that may otherwise appear to have little in common.
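The two eigenvalue problems are easy to compare numerically. The short sketch below (plain NumPy; the matrix is a hypothetical example, not taken from any model in this paper) builds a Jacobian whose eigenvalues all have negative real parts, confirms that λ_max(H(M)) is nonetheless positive, and propagates a small perturbation to show its norm growing transiently before the eventual decay:

```python
import numpy as np

# Hypothetical 2x2 Jacobian: eigenvalues -1 and -2 (asymptotically stable),
# but the large off-diagonal entry makes the matrix strongly non-normal.
M = np.array([[-1.0, 10.0],
              [ 0.0, -2.0]])
H = (M + M.T) / 2                                # symmetric part

long_term = np.max(np.linalg.eigvals(M).real)    # Re(lambda_max(M)) = -1
reactivity = np.max(np.linalg.eigvalsh(H))       # lambda_max(H(M)) > 0

# Propagate a unit perturbation x(t) = exp(M t) x0 along the direction of
# fastest initial growth (the leading eigenvector of H).
w, V = np.linalg.eig(M)
x0 = np.linalg.eigh(H)[1][:, -1]
def norm_at(t):
    return np.linalg.norm((V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V) @ x0).real)

peak = max(norm_at(t) for t in np.linspace(0.0, 5.0, 501))
print(long_term, reactivity, peak)   # stable, yet reactive; peak norm exceeds 1
```

The perturbation is ultimately damped, but its norm first grows well past its initial size: amplification without instability.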
In the fluid dynamics community, there has been a long-standing emphasis on nonnormality of M as a key feature of the transient amplification problem [1][2][3][4][5][6][7][8][9]. A normal matrix commutes with its conjugate transpose or, equivalently, has a complete basis of orthonormal eigenvectors [21]. To physicists raised on a steady diet of classical and quantum mechanics such matrices are indeed the norm, making non-normal matrices sound exotic and peculiar. Non-normal matrices have special properties, especially non-orthogonal eigenvectors, which provide helpful geometric intuition for the transient amplification problem. However, the concept of non-normality can sometimes be difficult to translate across disciplinary boundaries. In particular, while this concept compellingly draws together a wide range of topics drawn from applied mathematics, engineering and fluid dynamics, the definitive text on the subject makes almost no mention of applications to chemistry and biology [1]. The ecological discoveries noted earlier are the only notable exception. To the author's knowledge, the only other work crossing these boundaries is a remarkable paper connecting positive reactivity to Turing instability in chemical and biological systems [12]. This unexpected connection naturally raises questions about broader applications elsewhere in nonlinear science.
This paper aims to establish, using classic nonlinear systems as instructive test cases, that positive reactivity is widespread in chemistry, cell biology and molecular biology. The approach used here departs from earlier work in two important ways. Firstly, by emphasizing simple dynamical systems that have been studied for over four decades, the phenomenon of amplification without instability is presented as interwoven with the broader history of nonlinear science. This helps normalize non-normality and positive reactivity for readers who are unfamiliar with these concepts. Secondly, to facilitate this translation effort, this paper reframes the transient amplification problem using the more familiar concepts of activation, inhibition and feedback. Near a steady state, these concepts are encoded in the structure of the Jacobian matrix M. A positive entry Mij indicates an activatory interaction in which the jth variable promotes growth of the ith variable. Likewise, a negative entry indicates an inhibitory interaction in which the jth variable suppresses growth of the ith variable. This web of interactions can be represented graphically, with each variable assigned to a node and each nonzero entry Mij defining a directed link from node j to node i. In this representation, positive feedback loops are directed cycles containing an even number of inhibitory links or none at all, while negative feedback loops contain an odd number. Figures 1(A)-(C) show three feedback structures that are found in a great variety of chemical and biological systems. Here, I show that any nonlinear system with these structures is capable of transient amplification of noise and small perturbations.

Systems with self-activation loops
The structures shown in figures 1(A) and (B) contain self-activation loops, an example of positive feedback. Positive feedback, which plays an important role in many chemical and biological systems, is often understood to have a destabilizing influence on system dynamics. In a one-node system, a self-activation loop always leads to exponential growth. In two-node systems, however, stability is still possible for certain structures. The Jacobian for a stable two-node system must have negative trace and positive determinant. Assuming we have a self-activation loop, M11 > 0, the negative trace requirement cannot be satisfied unless M22 < −M11 < 0. For a system with negative trace, positive reactivity occurs precisely when

det(H(M)) = M11 M22 − (M12 + M21)²/4 < 0.   (1)

Since M11 M22 < 0, this is guaranteed, and one can generally find a parameter regime in which the transient growth condition, det(H(M)) < 0 < det(M), is satisfied. Therefore, transient growth near a stable steady state follows naturally from the feedback structures shown in figures 1(A) and (B). This simple proof provides a direct link between generic structural features and a system's capacity for amplification of small fluctuations around a stable steady state.
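For two-node systems, the whole argument reduces to a pair of determinant and trace checks. A minimal sketch (the numerical Jacobian is a hypothetical example with the sign pattern of figure 1(A)):

```python
import numpy as np

def two_node_diagnosis(M):
    """Stability (negative trace, positive determinant) and the transient
    growth condition det(H(M)) < 0 < det(M) for a 2x2 Jacobian."""
    H = (M + M.T) / 2
    stable = np.trace(M) < 0 and np.linalg.det(M) > 0
    reactive = np.linalg.det(H) < 0   # valid reactivity test when trace(M) < 0
    return stable, reactive

# Self-activation loop M11 > 0 with the figure 1(A) sign pattern:
# u1 activates itself and u2 inhibits it; u2 inhibits itself.
M = np.array([[ 1.0,  2.0],
              [-3.0, -2.0]])
print(two_node_diagnosis(M))   # (True, True): stable yet reactive
```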
To make this result more concrete, consider as an instructive example the Brusselator model of chemical pattern formation [17][18][19][22][23][24][25]. In one spatial dimension, the model has the form

∂u1/∂t = a − (b + 1)u1 + u1²u2 + D1 ∂²u1/∂x²,
∂u2/∂t = bu1 − u1²u2 + D2 ∂²u2/∂x²,   (2)

where the uj are chemical concentrations and the Dj are diffusion constants. This model has a single spatially uniform steady state, u1* = a and u2* = b/a, whose local dynamics are governed by the Jacobian,

M = ( b − 1    a² )
    (  −b    −a² ).   (3)

For sufficiently small values of the parameter b, the uniform state is stable. Patterns emerge as one of two well-known parameter space boundaries is crossed, a Hopf bifurcation at b_H = 1 + a² and a Turing instability at b_T = (1 + a√(D1/D2))². Positive reactivity emerges at b = b_R (figure 2, solid blue curve). Thus, for parameter combinations falling in the gray shaded region of figure 2, one can expect transient amplification of noise and small disturbances even though the steady state is asymptotically stable.
When b > 1, the Brusselator acquires a positive feedback loop, M11 = b − 1 > 0. In this case, the Jacobian (3) has the interaction pattern represented graphically in figure 1(A). That is, u1 promotes its own growth and inhibits the growth of u2, while u2 inhibits its own growth and promotes the growth of u1. As discussed above, this alone is enough to guarantee positive reactivity. In fact, positive reactivity is common even for b < 1. We can extract from det(H(M)) < 0 a critical value, b_R = 2a − a², above which transient growth is found. This transition curve, along with those associated with the Hopf bifurcation and Turing instability, are plotted in figure 2. Note that b_R ≤ 1 for all a and, thus, the b_R transition curve always falls below the b_H and b_T curves in figure 2. As a result, transient amplification is found in a significant portion of the parameter space, emerging before instability as b is increased. Note that, for parameter regimes in which the uniform steady state is unstable, there is no question about the role played by feedback or transients. Amplification of small perturbations by symmetry-breaking instabilities has long been at the heart of our understanding of pattern formation [16]. What a linear analysis of transient growth teaches us is that feedback-driven amplification phenomena are found below onset as well, i.e. in parameter regimes where pattern-forming instabilities are not in play.
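The threshold b_R is easy to verify numerically from the Jacobian (3). A short check (plain NumPy; a = 0.5 is an arbitrary illustrative choice):

```python
import numpy as np

def brusselator_jacobian(a, b):
    # Jacobian at the uniform steady state (u1*, u2*) = (a, b/a)
    return np.array([[b - 1.0,  a**2],
                     [-b,      -a**2]])

def reactivity(M):
    return np.max(np.linalg.eigvalsh((M + M.T) / 2))

a = 0.5
b_R = 2*a - a**2        # reactivity threshold derived in the text: 0.75
b_H = 1 + a**2          # Hopf threshold: 1.25

for b in (b_R - 1e-3, b_R + 1e-3):
    M = brusselator_jacobian(a, b)
    stable = np.max(np.linalg.eigvals(M).real) < 0
    print(b, stable, reactivity(M) > 0)
# Just below b_R: stable and non-reactive.
# Just above b_R (but still below b_H): stable yet reactive.
```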
The Brusselator serves as a prototype for many other Turing pattern-forming systems, such as the influential Gierer-Meinhardt model [26]. The instability mechanism in all of these systems conforms to Gierer and Meinhardt's well-known principle of short-range activation and long-range inhibition [16][17][18]: one chemical species activates its own growth, the other inhibits its own growth, and the diffusion length of the inhibitor must exceed that of the activator. As discussed earlier, the existence of a self-activation loop leads directly to the two-node feedback structures shown in figure 1 and, thus, every activator-inhibitor system must have one of these structures [17,18]. It follows that transient local growth is a general prerequisite for Turing instability, as established in [12] using a different argument. The approach presented here reframes this result in terms of activation and inhibition, familiar concepts that may facilitate its translation across disciplinary boundaries.
The Brusselator is instructive in another way as well, since its positive feedback loop arises through autocatalysis in the underlying kinetic model. For sufficiently strong feedback, any autocatalytic system will develop a self-activation loop in its Jacobian. Here again, stability is impossible without one of the two-node structures shown in figure 1 and transient local growth follows naturally. With this in mind, it seems reasonable to expect transient local growth near attracting states in real autocatalytic chemical systems, including the Belousov-Zhabotinsky (BZ) reaction [16,[27][28][29][30] and other pattern-forming systems associated with chemical waves rather than Turing patterns. In real systems, positive feedback rarely takes the form of direct autocatalysis. Instead, the feedback structures are generally far larger and more complicated. When just a few species dominate the dynamics, however, larger models can be reduced and autocatalysis emerges when a positive feedback loop collapses to a single link. An excellent example of this collapse is the Oregonator model of BZ oscillations [16,29,30]. Positive reactivity is found across a broad range of parameter values in this model, confirming the basic intuition presented here.

Excitable systems
Stable systems with positive reactivity have the remarkable property that even infinitesimal perturbations can be amplified locally for short periods of time. No threshold needs to be crossed. This needs to be emphasized because nonlinear science has long been familiar with excitable systems in which finite perturbations of sufficient size can trigger more dramatic transient amplification phenomena [17,18]. What is the fate of infinitesimal perturbations in such systems? This question is interesting to ask because the theory of reactivity is linear, while excitability is fundamentally nonlinear and nonlocal. This distinction suggests that reactivity is, in principle, totally unrelated to excitability. Certainly, there are many systems with positive reactivity that are not excitable. As we discuss below, however, many of the classic systems used to illustrate excitable dynamics are in fact capable of positive reactivity in the same general regions of parameter space. The reasons for this intertwining can, once again, be understood in terms of simple interaction patterns like those shown in figure 1.
One of the oldest and best-known examples of excitability is found in nerve conduction. External signals can shift the membrane potential of a nerve cell. This shift activates voltage-gated ion channels on a nerve cell's membrane that allow sodium ions to enter the cell. This influx causes the membrane potential to shift further and this, in turn, activates more ion channels. When the trigger signal is sufficiently large, this positive feedback loop drives the explosive firing dynamics associated with nerve cells [18,31,32]. The essential features of this phenomenon are captured in FitzHugh and Nagumo's classic model, which can be written in the following form [17,18,33,34]:

du1/dt = f(u1) − u2 + I,
du2/dt = ε(u1 + a − bu2),   (4)

where f(u1) = u1 − u1³/3 and b < 1. Using a = 0.7, b = 0.8 and small ε, FitzHugh showed that this simple system gives rise to many of the established properties of nerve conduction, including the existence of a threshold for excitation, the refractory period following a firing event and the possibility of repetitive firing, along with well-known neurophysiological phenomena such as anodal break excitation [33,34]. FitzHugh's parameter choices ensure that equations (4) have exactly one steady state, which is asymptotically stable for I = 0. Pulses and steps in I can cause the system to fire once or, if the system passes through a nearby Hopf bifurcation, to fire repeatedly. Large steps can block further excitation. Significantly, this system's excitability has a clear origin: a sufficiently large nudge in the correct direction moves the system beyond a hump along one of its nullclines, into a region of phase space where the flow initially pushes the system even farther from its steady state. This behavior serves as a model for similar threshold phenomena in many other two-variable nonlinear systems [17,18].
Reactivity offers a new perspective on the FitzHugh-Nagumo system's capacity for transient amplification. While the model is famous for its response to finite perturbations, infinitesimal perturbations around the stable steady state can be amplified locally as well. This is evident from the structure of the system's Jacobian,

M = ( 1 − u1*²    −1 )
    (     ε      −εb ),   (5)

where u1* is the steady state value of u1. Changing I shifts the value of u1*. When u1*² < 1, a self-activation loop, M11 = 1 − u1*² > 0, forms and the system acquires the structure shown in figure 1(B). Given the results of the preceding section, this alone is enough to guarantee positive reactivity. Since the system's Hopf bifurcation occurs when u1*² = 1 − εb, positive reactivity emerges before stability is lost. Indeed, as we saw with the Brusselator, positive reactivity emerges well before the system develops a self-activation loop. Figure 3 maps out these regimes: in the narrow band bounded by the two red curves, the steady state is unstable and surrounded by an attracting limit cycle. Outside this band but between the two blue curves, the steady state is asymptotically stable but has positive reactivity. Thus, as this figure clearly shows, there is a wide range of parameter combinations for which one finds transient amplification of small, sub-threshold perturbations.
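This prediction is simple to check numerically from the Jacobian (5). The sketch below uses FitzHugh's b = 0.8 together with a representative small ε and a resting state near u1* = −1.2; the specific values of ε and u1* are illustrative assumptions, not FitzHugh's exact choices:

```python
import numpy as np

def fhn_jacobian(u1_star, b=0.8, eps=0.1):
    # Jacobian (5) of the FitzHugh-Nagumo system at a steady state u1 = u1_star
    return np.array([[1.0 - u1_star**2, -1.0],
                     [eps,              -eps*b]])

M = fhn_jacobian(-1.2)
H = (M + M.T) / 2
print(np.max(np.linalg.eigvals(M).real) < 0)   # True: asymptotically stable
print(np.max(np.linalg.eigvalsh(H)) > 0)       # True: sub-threshold perturbations
                                               # can still grow transiently
```

Note that M11 = 1 − (−1.2)² < 0 here: the steady state is stable and carries no self-activation loop, yet the system is still reactive.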
The excitable regime first highlighted by FitzHugh is located deep inside the positive reactivity regime and close to the Hopf bifurcation (figure 3, black dot). These two features of the model help us understand this intertwining of amplification phenomena. First, proximity to Hopf bifurcation alone suggests a capacity for positive reactivity. To see this, recall that a two-variable Hopf bifurcation occurs when a complex conjugate pair of eigenvalues crosses the imaginary axis, flipping the trace of M from negative to positive without changing the sign of its determinant. Positive trace requires M to have at least one positive diagonal entry and negative trace requires M to have at least one negative diagonal entry. Thus, close to the transition, we must either have one positive entry M11 > 0 and one negative entry M22 < 0 or have M22 = cM11 where c is a fixed real number (possibly zero). The latter case is possible in highly structured systems but clearly the former case is more generic. This leads us into territory considered earlier: if M11 and M22 have opposite signs, M12 and M21 must also have opposite signs or stability is impossible. Thus, the interaction patterns shown in figure 1 are typical of any two-variable system close to a Hopf bifurcation. This can be deduced using other general arguments as well [17]. Combining this insight with the results of the last section, we learn to expect transient growth as a precursor to any two-variable Hopf bifurcation.
The other important feature, for our purposes here, is small ε. Small ε ensures that u2 evolves much more slowly than u1. Due to this separation of time scales, trajectories associated with a firing event show a rapid collapse toward an 'excited' branch, followed by a rapid collapse toward a 'refractory' branch on its way back to the steady state. These small-ε behaviors capture the essence of excitability, which is often idealized in terms of instantaneous transitions between susceptible, excited and refractory states. Now, noting the factors of ε in the Jacobian (5), consider how small ε affects the determinant of H(M): the product M11 M22 is of order ε or smaller, while the term (M12 + M21)² is of order 1. Thus, using equation (1), we find that small ε guarantees that H(M) has a positive eigenvalue. This provides a second general argument that positive reactivity is deeply intertwined with excitability.
These results are in no way confined to this particular system. The BZ reaction, mentioned at the end of the preceding section, has well-known excitable dynamics [16]. Another classic excitable system is found in the cell-to-cell signaling dynamics of the social amoeba Dictyostelium discoideum [32,35,36,37]. Two-variable models of these systems share many key features with the FitzHugh-Nagumo model, including curved nullclines with local extrema, Hopf bifurcations and separation of time scales in the governing equations. Sub-threshold amplification of noise and small perturbations can therefore be expected in all of these systems. Returning to our original question, it is still the case that excitability and positive reactivity are independent properties. In the FitzHugh-Nagumo system, for example, it is possible, by moving u1* far from the Hopf bifurcation, to create a situation in which infinitesimal perturbations decay but a sufficiently large perturbation still excites a circuitous return to the steady state. This situation, however, involves little amplification and, from a biological standpoint, is far less interesting or useful. The location of the black dot in figure 3 is no accident. Moving u1* close to −1 minimizes the threshold for excitation, maximizing the total possible amplification associated with a firing event. This also moves the system close to its Hopf bifurcation. As discussed above, this, together with small ε, creates precisely the conditions under which we can expect positive reactivity as well. Similar arguments can be made for Dictyostelium's ability to relay small pulses of cyclic AMP.

Broader applications in cell and molecular biology
While direct autocatalysis is rare in cell and molecular biology, positive feedback involving two or more species is of central importance in these disciplines [31,37,38]. In reduced dynamical models, however, positive feedback often takes the form of a self-activation loop. As discussed in section 2, where the focus was on two-variable systems, the presence of a self-activation loop leads naturally to positive reactivity. We can easily extend this result to larger, more complicated systems. Using an inclusion theorem from linear algebra [21], we learn that the leading eigenvalue of any symmetric matrix must be at least as large as its largest diagonal entry. Thus the presence of a single self-activation loop, M11 > 0, in any system of any size is sufficient for positive reactivity. This is a remarkably strong result suggesting, in effect, that none of the other interactions can be adjusted to restore negative reactivity to the system. Transient amplification of noise and small perturbations cannot be completely avoided once a single self-activation loop has formed.
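The inclusion theorem makes this easy to demonstrate at any system size. In the sketch below, a randomly generated 10-variable Jacobian (purely illustrative) carries a single self-activation loop, and the reactivity bound holds no matter how the remaining interactions are drawn:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random 10-variable Jacobian: self-inhibition everywhere except a single
# self-activation loop M[0, 0] > 0 (all values are illustrative).
M = -np.eye(10) + 0.1 * rng.standard_normal((10, 10))
M[0, 0] = 0.5

H = (M + M.T) / 2
# The leading eigenvalue of a symmetric matrix is at least its largest
# diagonal entry, and diag(H) = diag(M), so reactivity is guaranteed.
print(np.max(np.linalg.eigvalsh(H)) >= M[0, 0])   # True
```

Rerunning with any other seed, or any other off-diagonal interaction strengths, cannot change the printed result: the bound depends only on the positive diagonal entry.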
The two-variable case explored earlier is interesting because only two structures are compatible with asymptotic stability. These structures are frequently encountered in biochemical regulatory processes and are involved in many classic examples of biochemical oscillation. During glycolysis, for example, the enzyme phosphofructokinase (PFK) catalyzes the conversion of fructose-6-phosphate and adenosine triphosphate (ATP) to fructose-1,6-bisphosphate and adenosine diphosphate (ADP). ADP is itself one of the activators of PFK, however, forming a positive feedback loop that helps activate the glycolytic pathway when the cell needs energy. This loop can also drive oscillations in ATP and ADP concentrations [32,36,37,39,40]. Likewise, many cells use Ca2+ ions, which are usually confined to specific storage organelles, to relay signals from cell membrane receptors to targets inside the cell. The release of Ca2+ ions into the cytoplasm is coordinated by calcium channels activated by specific trigger molecules. These channels are also activated by the Ca2+ ions already released, however, creating a positive feedback loop known as calcium-induced calcium release (CICR). CICR can drive calcium ion pulses, oscillations, and other dynamical behaviors [32,36,41,42]. Examples like these have a history of exploration rooted in two-variable models, which necessarily exhibit both self-activation loops and Hopf bifurcations. There are multiple reasons, therefore, to expect positive reactivity and its consequences in such systems.
Moving to biochemical models with three or more variables opens many other doors for further exploration. Even in three-variable systems, oscillatory dynamics are compatible with a far wider range of structures [37]. An important example, which contains only negative feedback loops, is shown in figure 1(C). While negative feedback is often understood as playing a stabilizing role in system dynamics, it can drive both transient amplification and instability. To see this, consider a general three-node system whose Jacobian has the sign pattern represented in figure 1(C). To simplify the algebra, let us assume that the diagonal entries all have equal weight. Since we can rescale the entries of M uniformly without affecting stability, we need only solve the problem for diagonal entries equal to −1. In this case, we find asymptotic stability if and only if −M21 M32 M13 < 8; note that, because one of these links is inhibitory, the term on the left is positive. Examining H(M), we find that a stable system has negative reactivity if and only if M21² + M32² + M13² < 4 − M21 M32 M13. These conditions are only satisfied if the system-scale negative feedback loop is sufficiently weak; with equal link weights, for example, none of the |Mij| can exceed 2. Therefore, transient amplification and instability are natural consequences of sufficiently strong, system-scale negative feedback.
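Both conditions are easy to test numerically. A minimal sketch with illustrative link weights (diagonal entries fixed at −1, as above):

```python
import numpy as np

def three_node(m21, m32, m13):
    # Sign pattern of figure 1(C): diagonal entries -1 and a single directed
    # three-link loop whose product m21*m32*m13 is negative.
    M = -np.eye(3)
    M[1, 0], M[2, 1], M[0, 2] = m21, m32, m13
    return M

def reactivity(M):
    return np.max(np.linalg.eigvalsh((M + M.T) / 2))

# Weak loop: stable and non-reactive.
M = three_node(1.0, 1.0, -1.0)
print(np.max(np.linalg.eigvals(M).real) < 0, reactivity(M) < 0)

# Stronger loop: still stable (-m21*m32*m13 = 5.808 < 8) but now reactive.
M = three_node(2.2, 2.2, -1.2)
print(np.max(np.linalg.eigvals(M).real) < 0, reactivity(M) > 0)
```

Pushing the loop product past 8 (e.g. weights of 2, 2 and −2.1) tips the second case over into instability, matching the stability condition above.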
To make this result more concrete, consider as another instructive example the Goodwin model of biochemical feedback inhibition [17,18,25,43,44,45], written here in a standard dimensionless form:

du1/dt = 1/(1 + un^p) − α1 u1,
duj/dt = uj−1 − αj uj,   j = 2, …, n.   (6)

These equations describe a simple and very common regulatory structure in which a downstream end product acts as an inhibitor along the pathway that produces it. Here, the uj represent chemical concentrations, the αj are dimensionless rate constants and p controls the nonlinearity of the inhibitory feedback link. This classic model offers a simple paradigm for Hopf bifurcations driven entirely by negative feedback. For fixed n, a Hopf bifurcation is only possible if p is greater than sec^n(π/n), while the αj must be sufficiently close to a common value [44,45]. Setting αj = α for our analysis, the model equations (6) have a unique steady state and the Jacobian has the form

M = ( −α    0    ⋯    −β )
    (  1   −α          0 )
    (       ⋱    ⋱       )
    (  0         1   −α  ),

where β > 0. Note that the off-diagonal entries form a negative feedback loop of length n and, for n = 3, this system has precisely the structure mapped out in figure 1(C). This negative feedback paradigm has been influential in the study of circadian rhythms [32,36,46,47,48].
If α is very large, stability is guaranteed and small perturbations never trigger growth. Decreasing α has the effect of increasing the relative strength of the system-scale feedback loop. For sufficiently small α, the system becomes capable of supporting transient growth. To show this, we numerically calculate the leading eigenvalues of M and H(M) for a range of n values. For p = 2, we find that transient growth emerges as α drops below a critical value α_R. The steady state loses stability via a Hopf bifurcation as α drops below α_H. Table 1 shows how these critical parameters vary with n. As expected, there are values of n for which the steady state never loses stability; sec^n(π/n) > 2 in these cases. A transient growth regime is observed for all values of n, however, including those for which the Hopf bifurcation does not occur. This is true for larger values of p as well. Here, once again, we see transient amplification driven by feedback emerging as a precursor to instability.
There is no question that, after over four decades of study, the classic nonlinear systems considered here are well understood. Nevertheless, the discovery that all of these have systems-level features associated with positive reactivity threads these different systems together in new ways. Along with the connections highlighted above, our understanding of the relationship between transient growth and instability shifts as well. Note that, as emphasized in figures 2 and 3 and table 1, transient amplification emerges as a natural precursor to instability. In fact, transient growth below onset and linear instability above onset can be understood as driven by the very same feedback mechanisms, governed by the same parameters.
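The numerical procedure behind results like table 1 can be sketched in a few lines. The code below assumes a standard dimensionless Goodwin form (production 1/(1 + un^p) for u1, linear conversion downstream; an assumption where the original equations are not reproduced here), solves for the steady state by bisection, and tests stability and reactivity at two values of α for n = 3 and p = 2:

```python
import numpy as np

def goodwin_jacobian(n, p, alpha):
    # Steady state: u_n* solves u * alpha**n * (1 + u**p) = 1 (by bisection;
    # the left-hand side increases monotonically from 0).
    lo, hi = 0.0, 1.0 / alpha**n
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * alpha**n * (1 + mid**p) < 1.0:
            lo = mid
        else:
            hi = mid
    un = 0.5 * (lo + hi)
    beta = p * un**(p - 1) / (1 + un**p)**2    # feedback strength at steady state
    M = -alpha * np.eye(n) + np.diag(np.ones(n - 1), -1)
    M[0, n - 1] = -beta
    return M

for alpha in (2.0, 0.4):
    M = goodwin_jacobian(3, 2, alpha)
    stable = np.max(np.linalg.eigvals(M).real) < 0
    reactive = np.max(np.linalg.eigvalsh((M + M.T) / 2)) > 0
    print(alpha, stable, reactive)
# Large alpha: stable and non-reactive. Small alpha: still stable, now
# reactive; for n = 3, p = 2 the Hopf bifurcation never occurs, yet a
# transient growth threshold alpha_R still exists.
```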
Starting with a system with negative reactivity, increasing or decreasing the appropriate control parameter beyond a critical value causes H(M) to develop a positive eigenvalue and, beyond another critical value, causes M to develop an eigenvalue with a positive real part. It is possible for these transitions to occur at identical parameter values, but this happens if and only if M is normal [20]. These cases may turn out to be the exception and not the norm. That is, the discovery of positive reactivity hidden in these venerable systems provides a strong suggestion that amplification without instability may not be all that unusual.
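The normal-matrix case can be checked directly. For the hypothetical rotation-plus-decay matrix below, which commutes with its transpose, reactivity coincides exactly with the leading real part, so the two transitions cannot separate:

```python
import numpy as np

# A normal matrix (M M^T = M^T M): uniform decay plus pure rotation.
M = np.array([[-0.5, -2.0],
              [ 2.0, -0.5]])
assert np.allclose(M @ M.T, M.T @ M)

print(np.max(np.linalg.eigvalsh((M + M.T) / 2)),   # reactivity: -0.5
      np.max(np.linalg.eigvals(M).real))           # Re(lambda_max): -0.5
```

Perturbing any single entry breaks normality and immediately opens a gap between the two quantities, with reactivity the larger of the pair.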
Finally, it is important to remember that the global, long-term consequences of local amplification generally depend on nonlinear features of a system. In some systems, transient amplification may simply delay the return to steady state conditions following a small perturbation. This alone can seriously impact our ability to correctly interpret short-term dynamical observations. Moreover, in a noisy system capable of transient amplification, fluctuations can be greatly enhanced and this complicates our understanding of observed noise levels. In other systems, the nonlinear consequences of amplification can be more dramatic. Note, in particular, that nonlinearities can introduce a threshold where the linear part of the problem has none. None of this diminishes the importance of understanding the linear problem, however. The broader lesson here is the widespread relevance of this problem, particularly in chemistry and biology, where noisy systems subject to frequent perturbations are a fundamental concern. Local amplification near asymptotically stable steady states presents new opportunities for collaboration across traditional disciplinary boundaries, as part of an ongoing effort by many researchers to advance our understanding of transient dynamics and its consequences. Simple classic systems like the Brusselator, which are standard fare in many nonlinear science textbooks [17][18][19], may serve as a useful, broadly accessible introduction to this interdisciplinary problem.