Stability and Stabilization of the Constrained Runs Schemes for Equation-free Projection to a Slow Manifold

ZAGARIS, VANDEKERCKHOVE, GEAR, KAPER AND KEVREKIDIS
Abstract. In [C. W. Gear, T. J. Kaper, I. G. Kevrekidis and A. Zagaris, Projecting to a slow manifold: Singularly perturbed systems and legacy codes, SIAM J. Appl. Dyn. Syst. 4 (2005), 711-732], we developed the family of constrained runs algorithms to find points on low-dimensional, attracting, slow manifolds in systems of nonlinear differential equations with multiple time scales. For user-specified values of a subset of the system variables parametrizing the slow manifold (which we term observables and denote collectively by u), these iterative algorithms return values of the remaining system variables v so that the point (u, v) approximates a point on a slow manifold. In particular, the m-th constrained runs algorithm (m = 0, 1, ...) approximates a point (u, v_m) that is the appropriate zero of the (m+1)-st time derivative of v. The accuracy with which (u, v_m) approximates the corresponding point on the slow manifold with the same value of the observables has been established in [A. Zagaris, C. W. Gear, T. J. Kaper and I. G. Kevrekidis, Analysis of the accuracy and convergence of equation-free projection to a slow manifold, ESAIM: M2AN 43(4) (2009) 757-784] for systems for which the observables u evolve exclusively on the slow time scale. There, we also determined explicit conditions under which the m-th constrained runs scheme converges to the fixed point (u, v_m) and identified conditions under which it fails to converge. Here, we consider the questions of stability and stabilization of these iterative algorithms for the case in which the observables u are also allowed to evolve on a fast time scale. The stability question in this case is more complicated, since it involves a generalized eigenvalue problem for a pair of matrices encoding geometric and dynamical characteristics of the system of differential equations. We determine the conditions under which these schemes converge or diverge in a series of cases in which this problem is explicitly solvable. We illustrate our main stability and stabilization results for the constrained runs schemes on certain planar systems with multiple time scales, and also on a more realistic sixth-order system with multiple time scales that models a network of coupled enzymatic reactions. Finally, we consider the issue of stabilization of the m-th constrained runs algorithm when the functional iteration scheme is divergent or converges slowly. In that case, we demonstrate on concrete examples how Newton's method and Broyden's method may replace functional iteration to yield stable iterative schemes.
1. Introduction. The long-term dynamics of many complex chemical, physical, and biological systems simplify when a low-dimensional, attracting, invariant slow manifold is present. Such a slow manifold attracts all nearby initial data at an exponential rate, and the reduced dynamics on it govern the long-term evolution of the full system. More specifically, a slow manifold is parametrized by observables, which are typically slow variables or functions of variables. All nearby system trajectories decompose naturally into a fast component, which contracts exponentially toward this slow manifold, and a slow component, which obeys the reduced system dynamics on the manifold. In this sense, the fast variables become slaved to the observables, and knowledge of the manifold and of the reduced dynamics on it suffices to determine the full long-term system dynamics.
In a computational setting, it is often more efficient to work solely with the observables (or macroscopic variables) u. Typically, though, one only has access to a simulator for the full, microscopic system dynamics, instead of for the coarse-grained, macroscopic dynamics. At the same time, such simulators may be used to investigate accurately the system behaviour over short, fine-grained time scales, but they are inefficient for simulations over the much longer, coarse-grained time scales. To circumvent this problem, Gear, Kevrekidis, et al. developed the so-called projective and coarse projective approaches, in which short bursts of simulation over fine-grained time scales are used to extrapolate the macroscopic solution over coarse-grained time scales (see, for example, [7,5,12,15,23]). To perform such a burst of computation in an equation-free or a legacy code environment, one needs to compute the values of all system variables on the slow manifold and corresponding to specified values of the observables.
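The burst-then-extrapolate idea described above can be sketched in a few lines. The following is a minimal illustration of projective forward Euler, not the implementation of [7,5,12,15,23]; the inner integrator, the step sizes, and the toy fast-slow right-hand side at the bottom are our own illustrative choices:

```python
import numpy as np

def projective_euler(f, w0, dt, k, Dt, n_outer):
    """Projective forward Euler (sketch).

    f       : right-hand side of the microscopic ODE w' = f(w)
    dt, k   : fine inner step and number of inner steps per burst
    Dt      : large projective (outer) step
    n_outer : number of outer steps
    Returns the states recorded after each outer step.
    """
    w = np.asarray(w0, dtype=float)
    traj = [w.copy()]
    for _ in range(n_outer):
        prev = w.copy()
        # short burst of fine-scale simulation (forward Euler inner integrator)
        for _ in range(k):
            prev, w = w, w + dt * f(w)
        # estimate the (slow) time derivative from the last two inner states
        slope = (w - prev) / dt
        # extrapolate over the coarse time scale
        w = w + Dt * slope
        traj.append(w.copy())
    return traj

# Toy fast-slow system: u' = -u (slow), v' = -(v - u)/eps (fast);
# trajectories collapse quickly onto v ~ u and then drift slowly.
eps = 1e-3
f = lambda w: np.array([-w[0], -(w[1] - w[0]) / eps])
traj = projective_euler(f, [1.0, 0.3], dt=2e-4, k=20, Dt=0.05, n_outer=20)
```

The inner burst must damp the fast modes enough that the projective step (here, fifty times larger than the inner step) does not amplify them; k and Dt above were chosen with that balance in mind.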
In earlier papers [6,28], we considered Ordinary Differential Equations (ODEs) with slow manifolds, where the dimension N ≡ N_s + N_f is arbitrary. We assumed that these systems (1) can be transformed into the explicit slow-fast form (2) by means of a coordinate change z = z(w), where z = (x, y) and w = (u, v). Here, f and g are smooth functions of their arguments and 0 < ε ≪ 1 is a small parameter measuring the separation of time scales. In general, we have no knowledge of the transformation z = z(w), nor of its inverse w = w(z), other than that they both satisfy modest restrictions.
In this setting, we developed a family of functional iteration schemes that locate accurate approximations to the points (u*, v) on slow manifolds for each value u* of the observables. Indexed by m = 0, 1, 2, ..., this family of schemes is based on finding zeros, (u*, v_m), of the functions d^{m+1}v/dt^{m+1}(u*, v). Here, the time derivatives are calculated using equation (1). The schemes are reviewed in section 1.1 below. Additionally, for explicit slow-fast systems (2), we showed that these zeros are O(ε^m) close to the corresponding points on the slow invariant manifold, with the same value of the observables. This accuracy result is briefly reviewed in section 1.2. Moreover, we showed that these results extend naturally to the more useful case in which finite difference approximations to the (m+1)-st derivatives are used instead of analytical expressions.
In this article, we address the stability of the functional iteration schemes. We show that a generalized eigenvalue problem determines whether the iteration converges or diverges. We identify the principal geometric constructs that enter into this generalized eigenvalue problem, including the relative orientations of the subspace of the observables, the slow manifold, and the fast fibers, together with their tangent planes. Then, in a series of cases in which this problem is explicitly solvable, we determine criteria for stability and instability of the iteration schemes. Moreover, in cases of instability, we show how the methods of Broyden and of Newton can each stabilize the iteration procedure. Finally, the analysis of the schemes is illustrated on a series of examples, including a sixth-order system that models a network of coupled enzymatic reactions. A detailed statement of our main results is presented in section 1.3 below.
1.1. The constrained runs schemes. The constrained runs algorithms developed in [6] locate approximations to points on a slow manifold for systems of the form (1), as stated above. In particular, we assume explicitly that there exists an N_s-dimensional, attracting, invariant, slow manifold L and that this manifold is given locally by the graph of a function v = s(u) over a compact set K in which the observables u take values.
To leading order in ε, L is obtained by setting dv/dt = 0, i.e., by solving q(u, v) = 0 for v; this is the so-called Quasi-Steady-State Approximation (QSSA). The constrained runs algorithms are based on the observation that we may filter out the fast dynamics to any order, and thus also obtain approximations to L of a successively higher order, by demanding that a time derivative of successively higher order vanishes (see [28] for details). This is the zero-derivative principle (ZDP) introduced in [16,17] and independently in [19]. In particular, the algorithms are indexed by m = 0, 1, ... and, for each m and any fixed value of the observables u*, the m-th algorithm locates an appropriate solution v = s_m(u*) of the (m+1)-st derivative condition (3). Here, the time derivatives are evaluated along solutions of (1), that is, d(·)/dt = (D_u ·) p + (D_v ·) q. Also, the zero in the right member of this equation could be replaced by any O(1) value.
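To make the ZDP concrete, the sketch below (our illustration, not taken from [6,28]) applies the zero-derivative principle symbolically to a toy explicit slow-fast system whose slow manifold is exactly y = x/(1 − ε), and recovers approximations whose error improves by one power of ε per unit increase of m:

```python
import sympy as sp

x, y, eps = sp.symbols('x y epsilon', positive=True)

# Toy explicit slow-fast system (a stand-in for system (2)):
#   dx/dt = -x,   dy/dt = -(y - x)/eps
# Its slow manifold is exactly y = h(x) = x/(1 - eps).
fx = -x
gy = -(y - x) / eps

def ddt(expr):
    # time derivative along solutions: d(.)/dt = (D_x .) x' + (D_y .) y'
    return sp.diff(expr, x) * fx + sp.diff(expr, y) * gy

h_exact = x / (1 - eps)

h_m, d = [], gy
for m in range(3):
    # ZDP_m point: the zero (in y) of the (m+1)-st time derivative of y
    h_m.append(sp.expand(sp.solve(sp.Eq(d, 0), y)[0]))
    d = ddt(d)

for m, approx in enumerate(h_m):
    # the discrepancy from the exact slow manifold is O(eps**(m+1))
    print(m, approx, sp.series(approx - h_exact, eps, 0, 5))
```

For this system, m = 0 returns the QSSA point y = x, m = 1 returns y = x(1 + ε), and m = 2 returns y = x(1 + ε + ε²), i.e., the partial sums of the expansion of x/(1 − ε).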
In general, condition (3) constitutes a system of N_f nonlinear algebraic equations in the (u, v)-space. In addition, the explicit form of equation (1), and thus also an analytic formula for the (m+1)-st time derivative in equation (3), may be unavailable (as is the case in equation-free or legacy code applications). For these reasons, a solution s_m(u*) cannot, in general, be computed explicitly. The m-th constrained runs algorithm generates an approximation v_m^# of s_m(u*), rather than s_m(u*) itself, using either an analytic formula for the time derivative or a finite difference approximation to it. In either case, the approximation v_m^# to s_m(u*) is determined through an explicit functional iteration scheme, which we now summarize.
The m-th constrained runs algorithm is defined by the map (4), where the iterative step size H is an arbitrary positive number whose magnitude we fix below for stability reasons. We initialize the iteration with some value v^(1) and generate the sequence {v^(ℓ)}. For a prescribed tolerance tol_m, the iteration procedure is terminated when ‖v^(ℓ+1) − v^(ℓ)‖ < tol_m, for some ℓ ≥ 1. The output v_m^# of this m-th algorithm is the last member of the sequence {v^(ℓ+1)}.
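For the simplest member of the family, m = 0, the iteration admits a particularly transparent realization: with u frozen at u*, one forward-Euler step of length H is taken in the v-equation, and the update is repeated until the increments fall below the tolerance. The sketch below is our illustration of this pattern, with a hypothetical right-hand side q; it is not the authors' code:

```python
import numpy as np

def constrained_runs_m0(q, u_star, v0, H, tol, max_iter=1000):
    """m = 0 constrained runs iteration (sketch): step v with u frozen at u*.

    q : callable q(u, v) giving dv/dt; H : iterative step size;
    stops when the update falls below tol (cf. the stopping rule in the text).
    """
    v = np.asarray(v0, dtype=float)
    for _ in range(max_iter):
        v_new = v + H * q(u_star, v)   # one forward-Euler step of v' = q, u fixed
        if np.linalg.norm(v_new - v) < tol:
            return v_new
        v = v_new
    raise RuntimeError("functional iteration did not converge")

# Hypothetical derivative condition: q(u, v) = -(v - u**2), zero at v = u*^2.
v_sharp = constrained_runs_m0(lambda u, v: -(v - u**2),
                              u_star=0.5, v0=np.array([2.0]), H=0.5, tol=1e-12)
```

For this q, the iteration multiplier is 1 − H, so the scheme contracts for 0 < H < 2; this is the scalar prototype of the stability bounds discussed below.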

1.2. Asymptotic accuracy of the solutions to condition (3). In [28], we worked directly on the explicit slow-fast system (2). In that setting, the slow manifold L is the graph of a function h : K → R^{N_f}, where K ⊂ R^{N_s} is a set assumed to be compact and where h admits an asymptotic expansion in ε, h(·) = Σ_{i≥0} ε^i h_i(·). For these systems we proved that, for each x* ∈ K, the (m+1)-st derivative condition d^{m+1}y/dt^{m+1}(x*, y) = 0 has a solution y = h_m(x*). Moreover, we showed that h_m : K → R^{N_f} is a smooth function admitting an asymptotic expansion in ε which agrees with that of h up to and including terms of O(ε^m). The asymptotic accuracy results for explicit slow-fast systems reported in [28] are generalized to all of the nonlinear systems (1) under consideration in [29]. In particular, we assumed that there exists a smooth and invertible coordinate change z = z(w) with inverse w = w(z) which puts the system (1) into the explicit slow-fast form (2). (As we emphasized already, in general we have no knowledge whatsoever of this transformation.) The main questions addressed in [29] concern the existence of a solution s_m(u*) to the (m+1)-st derivative condition (3) (equivalently, of a fixed point for the m-th constrained runs algorithm (4)) and its proximity to s(u*), the ideal point on a slow manifold. The results on the existence of this fixed point and on its proximity to s(u*) are similar to those for explicit slow-fast systems. In particular, we proved the following theorem:

Theorem 1.1. If D_y v is non-singular in a neighborhood of L_0, then, for each m = 0, 1, ..., the (m+1)-st derivative condition (3) can be solved for v to yield an N_s-dimensional manifold L_m, termed the ZDP_m manifold, which is the graph of a function s_m : K → R^{N_f} over u. Moreover, s_m(·) = s(·) + O(ε^{m+1}).

This theorem guarantees that, for each u* ∈ K, there exists an isolated fixed point v = s_m(u*) of the m-th constrained runs algorithm. Moreover, this fixed point varies smoothly with u*, and the approximation (u*, s_m(u*)) of the point (u*, s(u*)) on the actual invariant slow manifold is valid up to O(ε^{m+1}). We remark that a similar theorem holds when a forward difference approximation is used for the (m+1)-st derivative instead of an analytic formula.
1.3. Statement of the main results on stability and stabilization. The main focus of this article is on the stability and stabilization of the constrained runs schemes for the general systems (1). In order to describe the results we present here, it is useful to review briefly the existing stability analysis for these schemes in the special case of explicit slow-fast systems (2). In [28], we analyzed the stability properties of the fixed point h_m(x*) under the functional iteration scheme using the (m+1)-st derivative described earlier. We found that, if an analytic formula for the (m+1)-st derivative is used and if, additionally, the spectrum of the Jacobian block D_y g(x*, h_0(x*), 0) (fast spectrum) is real and negative, then the fixed point is stable for all positive values of H smaller than a certain upper bound. On the other hand, if the fast spectrum contains complex eigenvalue pairs, instabilities may arise depending on the value of m and on the magnitude of the imaginary parts of these eigenvalues; see theorem 3.2 below for details. Similar results hold when forward differencing is used to approximate the (m+1)-st derivative: in that case, the fixed point is unconditionally stable if the fast spectrum is real and negative, but complex eigenvalue pairs may lead, here also, to instability (see also theorem 4.1). We reemphasize that these results from [28] are exclusively for explicit slow-fast systems (2).
In this article, we turn our attention away from the explicit slow-fast systems (2) and study the general systems (1). First, we address the stability and stabilization questions for the fixed point s_m(u*) of the m-th algorithm (4) in the context of (1). In particular, we show that the stability of s_m(u*) is governed by the spectrum of a product of two matrices which we identify explicitly. The first of these matrices contains information on the dynamics on the fast fibration F over the slow manifold L, while the second matrix encodes the relevant geometric characteristics of the original system (1), namely, the relative orientations of the affine subspace u = u* corresponding to the observables, of the slow manifold L, and of the fast fibration F over it. Then, we use this result to derive generalized eigenvalue conditions for convergence of the iterations; see sections 3.1-3.3.
As is well known (see, for example, [9]), generalized eigenvalue problems cannot necessarily be solved in closed form. In section 3.4, we identify a series of cases in which the generalized eigenvalue problem that arises here can be solved, and we analyze them fully. In particular, these generalized eigenvalue conditions can be reduced to specific conditions on the geometric and dynamical characteristics of the system (1) described above. For example, in the prototypical case where N_f = 1 and N_s is arbitrary, we find that our functional iteration schemes are stable provided that, first, the fast (i.e., most negative) eigenvalue lies in a wedge-like subset of the complex plane that is symmetric about the negative horizontal semi-axis, and, second, the left and right eigenvectors corresponding to this fast eigenvalue form an angle θ ∈ (−π/2, π/2) (measured in a specific direction). A quantitative classification is also possible in the special case that the aforementioned matrix pair is symmetric definite, but it does not appear to be possible in general. In the general case, instead, we first interpret the action of the matrix whose spectrum determines the stability type of the fixed point in terms of the system characteristics described above. Then, we show that, in addition to the potential instabilities due to the presence of complex eigenvalues in the fast spectrum, there exist geometric configurations for which s_m(u*) is unstable irrespective of the iterative step size and of the nature of the fast spectrum.
Second, we extend all of the above analysis to the case where a forward difference approximation of the (m+1)-st derivative is used by the m-th algorithm at each iteration; see section 4. We examine the stability of the fixed point v = ŝ_m(u*) of that algorithm. In this case also, we find that there are instabilities due to the presence of complex eigenvalues in the fast spectrum, as well as geometric configurations for which the fixed point is unstable under the iterative scheme irrespective of the size of H.
Third, we illustrate these stability results by applying the first several constrained runs algorithms to a collection of two-dimensional models exhibiting multiscale dynamics; see section 5.1. The first model is chosen to be in explicit slow-fast form (2), so that our results from [28] apply and predict convergence of the algorithms. The results of the application of the algorithms to this model serve as a reference. In the second model, both variables evolve on both the fast and slow time scales, so that the model is of the form (1). These variables are chosen so that the algorithms are expected, theoretically, to converge. Lastly, the third model is also not in explicit slow-fast form, and our theoretical results predict that the algorithms diverge. In this last case, we verify this theoretical prediction and stabilize the constrained runs algorithms by means of Newton's and Broyden's methods.
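As a concrete illustration of the last point, the sketch below replaces the functional iteration by a one-dimensional Broyden (secant) iteration on a derivative condition G(v) = 0. The scalar G here is a hypothetical stand-in of our own devising; its positive slope makes the plain iteration v ← v + H G(v) diverge for every H > 0, which is exactly the situation in which a quasi-Newton solve remains viable:

```python
def broyden_scalar(G, v0, v1, tol=1e-12, max_iter=100):
    """Secant (one-dimensional Broyden) iteration for G(v) = 0 (sketch).

    Intended as a drop-in replacement for the functional iteration when the
    latter diverges; G plays the role of the derivative condition.
    """
    g0, g1 = G(v0), G(v1)
    for _ in range(max_iter):
        denom = g1 - g0
        if denom == 0:
            break
        v2 = v1 - g1 * (v1 - v0) / denom   # secant update
        v0, g0 = v1, g1
        v1, g1 = v2, G(v2)
        if abs(v1 - v0) < tol:
            return v1
    return v1

# Hypothetical derivative condition with positive slope: the functional
# iteration v <- v + H*G(v) has multiplier 1 + 3H > 1 and diverges for all
# H > 0, yet the secant iteration converges to the root v = 0.25.
G = lambda v: 3.0 * (v - 0.25)
v_fix = broyden_scalar(G, v0=2.0, v1=1.5)
```

On a linear G, the secant step lands on the root in a single update; for the nonlinear conditions in the text, Broyden's method trades the Jacobian evaluations of Newton's method for such secant approximations.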
Fourth, we apply the m-th constrained runs schemes to a sixth-order system of the form (1) that models a network of coupled enzymatic reactions [3]. This model exhibits slow and fast time scales, and there are multiple slow and fast variables.
As a result, it illustrates a number of important geometric aspects of our stability and stabilization results; see section 5.2.
Finally, we conclude this article with an extension of the constrained runs schemes. In particular, the map F_m employed in the constrained runs schemes is an explicit map for each m. It turns out to be interesting, at least from a theoretical point of view, to also include implicitly defined maps, F_m^I, in the family of constrained runs schemes. Setting aside the questions concerning computational implementation, we show that implicitly defined maps lead to more stable schemes; see section 6.
2. Mathematical background. In this section, we recall certain results that also fix the notation used in the rest of this article. First, we define L_0 = {z | g(z, 0) = 0} to be the ε = 0 critical manifold of (2), and we assume without loss of generality that it is the graph of a function h_0 : K → R^{N_f} (cf. (5)). Also, we assume explicitly that L_0 is normally attracting, i.e., that the spectrum associated with D_y g(·, 0) is contained in the (complex) left half-plane uniformly over L_0. Here, σ denotes the spectrum. With each point z = (x*, h_0(x*)) ∈ L_0 is associated a fast fiber F_0^z, which is simply an affine copy of R^{N_f} based at z. In the slow time formulation of the system (2), each fast fiber is invariant and offers leading order information on how orbits approach the slow manifold; see, e.g., [13] for details. For 0 < ε ≪ 1, L_0 is no longer necessarily invariant. Nevertheless, there is an invariant manifold L which is O(ε) close to L_0 and is the graph of a function h. The function h : K → R^{N_f} satisfies the invariance equation, g(x, h(x), ε) − ε Dh(x) f(x, h(x), ε) = 0, and, as discussed earlier, it admits an asymptotic expansion in ε, where h_0 here is identical to that in (5). Moreover, for 0 < ε ≪ 1, the fibers F_0^z are no longer individually invariant. Instead, there exists an invariant family F = ∪_{z∈L} F^z of fast fibers foliating a neighborhood of L.
The manifolds and fibers introduced above for system (2) have their counterparts in system (1), and these counterparts are obtained by applying the inverse of the coordinate change z = z(w) between (1) and (2). Under mild assumptions, one can prove that L_0 can be expressed locally as the graph of a function s_0 : K → R^{N_f}, for some compact set K. Similarly, L is the graph of a function s : K → R^{N_f}. Under the same assumptions, and for each u* ∈ K, the fast fiber F_0^{w_0} with basepoint w_0 = (u*, s_0(u*)) ∈ L_0 is a graph over the variables v locally around that basepoint. Here, the domain K ⊂ R^{N_f} depends on u* and necessarily includes s_0(u*), while r_0(·; u*) : K → R^{N_s} is a one-parameter family of functions.
In what follows, we write T_w L_0 and N_w L_0 for the tangent and normal spaces, respectively, of the manifold L_0 at the point w ∈ L_0. Similarly, we write T_w F_0^{w_0} and N_w F_0^{w_0} for the tangent and normal spaces, respectively, of the fast fiber F_0^{w_0} at the point w ∈ F_0^{w_0}.
3. The stability of the fixed point s_m(u*). In this section, we derive a formula for the matrix whose spectrum determines the linearized stability type of the fixed point v = s_m(u*) of the functional iteration scheme induced by F_m. Then, we show that, in the application of the constrained runs schemes to general problems (1), the calculation of the spectrum corresponds to a generalized eigenvalue problem and that no explicit information can be obtained without specific information about the system (1). We identify those cases in which the generalized eigenvalue problem can be solved analytically, and in which, hence, the spectrum can be calculated explicitly. In these cases, we determine when the constrained runs schemes converge and when they diverge.
Here, for each m = 0, 1, ..., the function L_m : R^N → R^{N_f} is given by (11); equation (10) then yields (12). For the stability analysis, it is sufficient to work with the leading order (in the small parameter ε) term in the formula for D_v L_m(u*, v). Let w_0 = (u*, s_0(u*)) ∈ L_0 and let P_{w_0,1} : R^{N_f} → T_{w_0} F_0^{w_0} be the projection defined in (13). This projection lifts an arbitrary vector v ∈ R^{N_f} to the vector (0, v^T)^T on the v-subspace {0}^{N_s} × R^{N_f}, and then it projects this latter vector to P_{w_0,1} v ∈ T_{w_0} F_0^{w_0} along the u-subspace R^{N_s} × {0}^{N_f}. Also, let P_{w_0,2} : T_{w_0} F_0^{w_0} → R^{N_f} be the projection operator P_{w_0,2} = (−Ds_0(u*), I_{N_f}).
Using P_{w_0}, we can now state the leading order term in the asymptotics for D_v L_m and obtain a geometric understanding of the action of DF_m. Lemma 3.1. Let w_m = (u*, s_m(u*)), for m = 0, 1, .... Also, let the non-singular matrix D be given by (14), where (·)_0 = (·)(w_0, 0). Then, for all H = O(ε), formula (15) holds to leading order in ε.
The proof of this lemma is rather technical and thus has been relegated to Appendix A. Its central ingredients are, first, an interpretation of D_v L_m(w_m) as the restriction to T_{w_0} F_0^{w_0} of a linear map defined over R^N, whose slow and fast eigenspaces coincide with T_{w_0} L_0 and T_{w_0} F_0^{w_0}; and, second, the explicit decomposition of the column vectors in (D_v z)_0 into unique components along these eigenspaces. When combined with equation (12), this lemma yields the formula (16). This is the desired leading order formula for DF_m(s_m(u*)), and the analysis in the rest of this section is based on it. We remark that, in the lemma above, H is taken to be O(ε) to ensure that σ(DF_m(s_m(u*))) lies an O(1) distance away from the value one (which corresponds to the case of neutral stability); see also equation (12). Note, also, that P_{w_0} is invertible (cf. (15)) by transversality of the intersection between T_{w_0} L_0 and T_{w_0} F_0^{w_0}. Indeed, recalling (8), and that gradients are normal to level sets, we immediately obtain the expression for N_{w_0} L_0, where the spans here are meant row-wise. Similarly, the fast fiber with basepoint w_0 is parameterized by v, and hence T_{w_0} F_0^{w_0} is spanned column-wise. Since T_{w_0} L_0 and T_{w_0} F_0^{w_0} intersect transversally, N_{w_0} L_0 and T_{w_0} F_0^{w_0} intersect non-perpendicularly, and hence the product P_{w_0,2} P_{w_0,1} is indeed invertible.

Figure: The action of P_{w_0} on a vector v_I that lies on the affine subspace u = u*. Here, u* ∈ K is a fixed value of the observables u, while s_0 and r_0(·; u*) are the functions whose graphs form the leading order slow manifold L_0 and fast fiber F_0^{w_0}, respectively. Also, P_{w_0,1}, P_{w_0,2} are the projections introduced in (13).

3.2. A geometric interpretation of lemma 3.1. In this section, we analyze lemma 3.1 and, in particular, formula (16) for DF_m(s_m(u*)). First, for transparency of presentation, we offer a heuristic derivation of this formula, limiting our discussion to the case where equation (1) is linear. Then, we interpret the formula in terms of the system geometry for general nonlinear systems. We show that, if all eigenvalues associated with the fast dynamics are real and the magnitude of H satisfies certain conditions, F_m approximately maps the v-coordinate of any initial condition to the v-coordinate of the basepoint of the fast fiber through that initial condition.
As already mentioned, strictly for the purpose of discussion in this section, we analyze the case ẇ = Mw, where M is a matrix depending on ε but not on w. In this case, the transformation between w and z may be chosen to be linear, w = Tz, and such that the explicit slow-fast form, equation (2), becomes (19). Here, Λ_s and Λ_f are N_s × N_s and N_f × N_f matrices, respectively. We remark that the spectra of these blocks depend on ε, in general, but they remain bounded as ε ↓ 0. Further, the spectrum of Λ_f has positive real parts by local attractivity. In terms of the z-coordinates, the slow manifold L coincides with the eigenspace of Λ associated with the eigenvalues of Λ_s (slow eigenspace), and it is spanned by the column vectors of (I_{N_s}, 0)^T. The fast fibration F over L consists of all affine copies of the eigenspace associated with the eigenvalues in ε^{-1}Λ_f (fast eigenspace), and it is spanned by the column vectors of (0, I_{N_f})^T. In terms of the w-coordinates associated with (1), the slow and fast eigenspaces are spanned by the corresponding columns of T, respectively. Since we already assumed that L and F are graphs over the subspaces v = 0 and u = 0, respectively, we may set T in block form, where P and P̃ are independent of w. The formula for T, in turn, yields (20). Alternatively, L and F^{w_I} are described by the equations (21). Next, we show that F_m(v) depends linearly on v, F_m(v) = (DF_m)v + c(u*) for some c, and we derive an explicit, leading order formula for DF_m. First, the initial condition w_I generates the solution w(·; w_I), a formula for which can be derived using the transformation w = Tz and integrating the explicit slow-fast system (19); in particular, w(t; w_I) = T e^{Λt} T^{-1} w_I. Combining this formula with (20) and recalling that w_I = (u_*^T, v_I^T)^T, we find (22). Combining this equation with equation (11), we calculate L_m, where we have also used that w(0; w_I) = w_I. Hence, L_m depends linearly on v_I, and thus linear dependence of F_m on v follows from equation (10). Substituting
this result into equation (12), expanding Λ_f and Λ_s asymptotically, and recalling that H = O(ε), we obtain (23) to leading order. Here, Λ_{f,0} is the leading order term in the asymptotic expansion of Λ_f(ε). This formula matches equation (16), as expected, since, in the current setting, D_y v = T_22 = I_{N_f} and D_y g = −Λ_f. We now rewrite equation (23) in a way that will allow us to interpret the action of DF_m in terms of the system dynamics. First, we obtain a formula for the basepoint w_b of the fast fiber F^{w_b} going through w_I. By definition, w_b is the intersection of L and F^{w_I}, as these are given in (21). The equation for L is satisfied by w_b, while the one for F^{w_I} is satisfied by w_I. Hence, we obtain the system (25), which may be solved for u_b. Substituting this result into (24) and recalling that SR = I − P, we obtain (26). Using equations (25)-(26), we rewrite equation (22) in an equivalent form which shows that the motion is decomposed into a fast contracting motion along the fast fibration and a slow drifting motion of the basepoint on the slow manifold.
Using this formula, we calculate the needed derivative; substituting this result into equation (10) and rearranging terms, we obtain (28). This last expression suggests that, in the absence of fast eigenvalues with nonzero imaginary parts and under certain conditions on H, F_m approximately maps (the v-coordinate of) the initial condition w_I to (the v-coordinate of) the basepoint of the fast fiber going through w_I.
The above result for DF_m of linear systems generalizes to the fully nonlinear setting described by equation (1). The reader can check that, in that case, DF_m(s_m(u*)) v_I is the solution to the corresponding linear system. Here, u and v_I are affine coordinates, and thus the origin (u, v_I) = (0, 0) corresponds to the point w_0 = (u*, s_0(u*)) ∈ L_0. Therefore, the first of these equations describes an affine copy of T_{w_0} F_0^{w_0} which goes through the point (u*, (s_0(u*) + v_I)^T)^T. In other words, it describes a local, linear approximation of the fast fiber through that point. The second equation, in turn, describes an affine copy of T_{w_0} L_0 through the point (u*, (s_0(u*) + ṽ_I)^T)^T. Here, ṽ_I corresponds to the initial condition v_I projected forward in time under the (fast) dynamics associated with the (m+1)-st time derivative d^{m+1}v/dt^{m+1}. With this in mind, we conclude that DF_m(s_m(u*)) v_I is the v-coordinate of the intersection of the linearized fast fiber through the initial guess v_I with the copy of the linearized slow manifold which goes through the point ṽ_I.
Now, if all of the fast eigenvalues are real and under certain conditions on H, formula (28) suggests that ṽ_I is small. In particular, then, a single iteration of the map F_m tends to identify (the v-coordinate of) the basepoint of the fiber going through the initial guess, rather than (the v-coordinate of) the point on the slow manifold sharing the same u-coordinate. These two points only coincide if the fast fibration is vertical to leading order; see figure 2. This is the case for the explicit slow-fast systems (2). Now, let Q_0 and E be the invertible N_f × N_f matrices which put (−D_y g)_0 (associated with (2)) and P_{w_0}, respectively, in their canonical (e.g., Jordan) forms. Then formula (16) becomes (29). Equation (29) states that the spectrum of DF_m(s_m(u*)) may be calculated in terms of the generalized eigenvalues of the matrix pair (EQ D^{m+1} (EQ)^{-1}, P). In general, there exists no matrix that puts D and P_{w_0}^{-1} into canonical form simultaneously [9]; in other words, EQ ≠ I_{N_f} and, as a result, one cannot derive explicit information on these generalized eigenvalues.
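Numerically, the generalized eigenvalues of a pair (A, B) are computed with a QZ-type solver such as scipy.linalg.eig(A, B); when B is invertible, they coincide with the ordinary eigenvalues of B^{-1}A. The sketch below uses illustrative stand-in matrices of our own choosing, not matrices derived from a particular system (1):

```python
import numpy as np

# Stand-ins for the pair whose generalized eigenvalues govern stability:
A = np.array([[0.2, 0.1],
              [0.0, 0.3]])   # plays the role of E Q D^{m+1} (E Q)^{-1}
B = np.array([[1.0, 0.4],
              [0.4, 1.0]])   # plays the role of the geometric factor P

# Generalized eigenvalues of (A, B): the solutions of det(A - lam*B) = 0.
# B is invertible here, so they equal the eigenvalues of inv(B) @ A; for a
# singular B one would use a QZ solver, e.g. scipy.linalg.eig(A, B).
lam = np.linalg.eigvals(np.linalg.solve(B, A))
rho = max(abs(lam))
print(rho)   # spectral radius; < 1 would indicate a stable fixed point
```

For these particular matrices the generalized eigenvalues are real and of modulus less than one, so the corresponding iteration would converge; replacing B by a matrix with a negative eigenvalue can push a generalized eigenvalue outside the unit disk, mirroring the geometric instabilities discussed in the text.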
3.4. Analysis of cases in which the generalized eigenvalue problem may be solved explicitly. In this section, we identify a series of cases in which this generalized eigenvalue problem can be solved analytically. Specifically, we analyze the cases (i) Ds_0 = 0 or Dr_0 = 0, (ii) N_s = N_f = 1, (iii) N_s > 1 and N_f = 1, and (iv) the pair (D, P_{w_0}) is symmetric-definite. We derive explicit stability conditions in these cases. Then, recalling our formulas for σ((−D_y g)_0) from [28] and using information on σ(P_{w_0}) which we derive independently in Appendix B, we show how the presence of complex eigenvalues in the former and of negative eigenvalues in the latter may lead to instabilities.
3.4.1. The cases Ds_0 = 0 and Dr_0 = 0, respectively. First, we consider the case where either the leading order slow manifold L_0 or the leading order fast fiber F^0_{w_0} for the general nonlinear system (1) is, respectively, horizontal (equivalently, s_0 is constant) or vertical (equivalently, r_0(·; u*) = const. for each u* ∈ K). In either case, P_{w_0} reduces to the identity, since Ds_0 = 0 or Dr_0 = 0, respectively, and equation (29) simplifies accordingly. In this case, then, the following theorem applies and offers a full classification of the stability type of the fixed point. Theorem 3.2. ([28, theorem 3.1]) For each m = 0, 1, ..., the functional iteration scheme defined by F_m is stable if and only if the following two conditions are satisfied for all ℓ = 1, ..., N_f. In particular, if λ_1, ..., λ_{N_f} are real, then the functional iteration is stable for all positive H up to a maximum value H_max. We refer the reader to figure 1 in [28] for an illustration of H_max as a function of θ_ℓ on (π/2, 3π/2).
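To see the role of such a threshold H_max concretely in the simplest real-eigenvalue situation, consider a caricature: a damped update v ← v + H dv/dt for the scalar fast equation ε dv/dt = λ(v − s), whose amplification factor is 1 + Hλ/ε, so that stability requires H < 2ε/|λ|. The sketch below illustrates this threshold only; it is not the constrained runs map (4) itself, whose precise normalization differs:

```python
# Toy illustration: fixed-point iteration v <- v + H * dv/dt for the fast
# equation eps * dv/dt = lam * (v - s_star); amplification factor 1 + H*lam/eps.
# This is a caricature of the role of H, not the constrained runs map (4).
eps, lam, s_star = 0.01, -2.0, 0.3

def iterate(H, v0=1.0, steps=200):
    v = v0
    for _ in range(steps):
        v = v + H * lam * (v - s_star) / eps   # v + H * dv/dt
        if abs(v) > 1e8:                        # declare divergence
            return None
    return v

v_stable = iterate(H=0.5 * eps)    # factor 1 + 0.5*(-2) = 0: converges
v_unstable = iterate(H=2.0 * eps)  # factor 1 + 2*(-2) = -3: diverges
```

Below the threshold the iterates contract geometrically toward s_star; above it they alternate in sign and blow up.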
3.4.2. The case N_s = N_f = 1. Next, we treat the case N_s = N_f = 1, both because the spectrum is explicitly computable and because it offers significant insight for higher-dimensional cases. Here, (D_y g)_0 ≡ λ < 0 and P_{w_0} = 1 − s_0 r_0 are scalars, and formula (29) simplifies accordingly. Note that 1 − s_0 r_0 = (−s_0, 1)·(r_0, 1), where (−s_0, 1) ∈ N_{w_0} L_0 and (r_0, 1) ∈ T_{w_0} F^0_{w_0}, see also figure 3. For each w_0 ∈ L_0, we define φ_σ(w_0) to be the angle that T_{w_0} L_0 forms with the positive u-axis, as measured from that axis (that is, measured counterclockwise), and we similarly define φ_ρ(w_0) to be the angle that T_{w_0} F^0_{w_0} forms with the positive v-axis, as measured from that axis (that is, measured clockwise). Without loss of generality, we restrict φ_σ(w_0) and φ_ρ(w_0) to take values in (−π/2, π/2) for each w_0 ∈ L_0, and we remark that the endpoints of this interval are excluded by the assumptions that L_0 and F^0_{w_0} are graphs over u and v, respectively. Formula (31) may now be rewritten in terms of these angles as (32). Observing that the cases where φ_σ + φ_ρ equals −π/2 or π/2 must be ruled out because T_{w_0} F^0_{w_0} and T_{w_0} L_0 may not coincide, we conclude that there are three cases to consider (see also figure 4): (α) −π < φ_σ + φ_ρ < −π/2, in which the functional iteration is unstable for all positive H; (β) −π/2 < φ_σ + φ_ρ < π/2, in which the functional iteration is stable for suitably small positive H; and (γ) π/2 < φ_σ + φ_ρ < π, in which the functional iteration is unstable for all positive H.
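The scalar quantity 1 − s_0 r_0 controlling stability here can be expressed through the two angles: writing the relevant slopes as tan φ_σ and tan φ_ρ gives 1 − tan φ_σ tan φ_ρ = cos(φ_σ + φ_ρ)/(cos φ_σ cos φ_ρ), which makes the sign changes at φ_σ + φ_ρ = ±π/2 transparent. A quick numerical check of this identity and of the resulting three-case classification (the sample angles are arbitrary):

```python
import math

def P_scalar(phi_sigma, phi_rho):
    # 1 - s0*r0 with slopes s0 = tan(phi_sigma), r0 = tan(phi_rho)
    return 1.0 - math.tan(phi_sigma) * math.tan(phi_rho)

def P_angles(phi_sigma, phi_rho):
    # Equivalent form cos(phi_sigma + phi_rho) / (cos(phi_sigma) * cos(phi_rho))
    return math.cos(phi_sigma + phi_rho) / (math.cos(phi_sigma) * math.cos(phi_rho))

def case(phi_sigma, phi_rho):
    s = phi_sigma + phi_rho
    if -math.pi < s < -math.pi / 2:
        return "alpha"   # unstable for all H > 0
    if -math.pi / 2 < s < math.pi / 2:
        return "beta"    # stable for suitable H
    return "gamma"       # unstable for all H > 0

for ps, pr in [(0.3, -0.2), (-1.2, -0.9), (1.0, 1.2)]:
    assert abs(P_scalar(ps, pr) - P_angles(ps, pr)) < 1e-12
```

The identity follows from the addition formula cos(a + b) = cos a cos b − sin a sin b after dividing through by cos φ_σ cos φ_ρ.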

3.4.3. The case N_s > 1 and N_f = 1. In this case, r_0 maps the scalar variable v to an N_s-dimensional vector, s_0 is a scalar function of N_s variables, and (D_y g)_0 ≡ λ < 0 is a scalar. Thus, P_{w_0} = (−Ds_0, 1)·((r_0)^T, 1), with (−Ds_0, 1) and ((r_0)^T, 1) being (N_s + 1)-dimensional vectors spanning the one-dimensional subspaces N_{w_0} L_0 and T_{w_0} F^0_{w_0}, respectively, and formula (29) simplifies accordingly. For each w_0 ∈ L_0, we define φ_σ(w_0) and φ_ρ(w_0) to be the angles that the one-dimensional affine subspaces N_{w_0} L_0 and T_{w_0} F^0_{w_0}, respectively, form with the positive v-axis. We remark that these definitions agree with those in the two-dimensional case, see also figure 3. Here also, we restrict φ_σ(w_0) and φ_ρ(w_0) to take values in (−π/2, π/2) for each w_0 ∈ L_0.
Using these definitions, we may rewrite equation (34) in the form (32). The rest of the stability analysis then carries over from the two-dimensional case.

3.4.4. The case of a symmetric-definite pair (D, P_{w_0}). Another important case in which the spectrum is explicitly computable is that of the pair (D, P_{w_0}) being symmetric-definite, that is, the case of D being a symmetric positive definite matrix, D^T = D > 0, and of P_{w_0} being symmetric, P^T_{w_0} = P_{w_0}. This is the case, for example, if D corresponds to an appropriate spatial discretization of a self-adjoint differential operator and the fast fibration is vertical to the slow manifold (T_{w_0} F^0_{w_0} ⊥ T_{w_0} L_0). In that case, there exists a matrix which diagonalizes both D and P_{w_0}; that is, the matrix EQ may be made to be the identity matrix, see [9]. The identity D = D^T implies that σ((D_y g)_0) ⊂ R, and thus λ_1, ..., λ_{N_f} < 0 by normal attractivity. Thus, if ν_ℓ > 0, the fixed point is stable for all sufficiently small positive H.

4. Stability of the algorithms using forward differencing. In a numerical setting, the time derivatives of v are approximated at each iteration by a differencing scheme (36), where w ≡ (u*, v), ψ(w; t) = ((ψ_u(w; t))^T, (ψ_v(w; t))^T)^T is a (numerically generated) solution with initial condition w, and Ĥ is a positive, O(ε) quantity. In this section, we examine how the stability results of the previous section are affected by the use of forward differencing. For the m-th algorithm, the approximation of d^{m+1}v/dt^{m+1} by this scheme corresponds to generating the sequence {v^{(r)} | r = 1, 2, ...} using the map F̂_m. In [29], we established that the map F̂_m has an isolated fixed point v = ŝ_m(u*) which differs from s_m(u*) (and thus also from s(u*), by virtue of theorem 1.1) only by terms of O(ε^{m+1}). Here also, the spectrum of DF̂_m(ŝ_m(u*)) governs the stability of this fixed point. In the same work, we also derived a leading order formula for D_v L̂_m(ŵ_m); combining the two yields a formula for DF̂_m(ŝ_m(u*)). Next, we observe that the matrix Q that puts D in canonical form (see section 3.3) also puts D̂^{m+1} in canonical form. Then, working as in section 3.3, we find that the spectrum of DF̂_m(ŝ_m(u*)) is determined, here also, by the set of generalized eigenvalues corresponding to the pair (EQ D̂^{m+1} (EQ)^{-1}, P). As explained in section 3.3, explicit information on these generalized eigenvalues can be obtained only in special cases. In what follows, we identify the modifications that forward differencing induces on the results of sections 3.4.1–3.4.4.

4.1. The cases Ds_0 = 0 and Dr_0 = 0, respectively. In this case, P_{w_0} = I_{N_f}, and thus equation (40) simplifies accordingly. In other words, the geometry is trivial, and the dynamics of (2) are simply represented in the formula for DF̂_m. This is the setting we worked with in [28]; the following theorem characterizes the stability type of the fixed point. Theorem 4.1. ([28, theorem 6.2]) Fix η > 0. For each m = 0, 1, 2, ..., the functional iteration scheme defined by F̂_m is stable if and only if, for each ℓ = 1, ..., N_f, the pair (Ĥ_ℓ, θ_ℓ) lies in the stability region whose boundary is given by an implicit equation, where Ĥ_ℓ = −λ_{ℓ,R} Ĥ/ε > 0. Here, the branch of arctan is chosen so that θ_ℓ ∈ (π/2, 3π/2). In particular: (i) Assume that Im(λ_ℓ) = 0 for all ℓ = 1, ..., N_f. If 0 < η < 2^{1/(m+1)}, then the functional iteration is unconditionally stable. If η > 2^{1/(m+1)}, then the functional iteration is only conditionally stable.

4.2. We conclude that, here also, the fixed point is unstable in cases (α) and (γ) of section 3.4.2. In case (β), the functional iteration is unconditionally stable under certain parameter conditions, and otherwise stable for all sufficiently small Ĥ.

4.3. The case of a symmetric-definite pair (D, P_{w_0}). In this case, the analogue of equation (35) holds, where we recall that λ_1, ..., λ_{N_f} < 0. Thus, if ν_ℓ > 0, the fixed point is stable for small enough H (the exact conditions can be derived by working as in case II upon replacing cos(φ_σ + φ_ρ)/(cos φ_σ cos φ_ρ) by ν_ℓ). If ν_ℓ < 0, on the other hand, then µ_ℓ ∉ B(0; 1) for all positive H and Ĥ, and the fixed point is unstable by the lemmas of Appendix B. The interpretation of these two equations is directly analogous to that in section 3.2.
In particular, for η = 1 and large H, equation (38) yields D̂ ≈ I_{N_f} and thus ṽ_I ≈ 0.
In that case, then, (DF̂_m(ŝ_m(u*))) v_I corresponds, to good approximation, to the intersection of the (linearized) fast fiber through the initial guess (u*, v_I) with the slow manifold. That is, a single iteration using F̂_m tends to identify the basepoint of the fast fiber through the initial guess, see also figure 5.

5.1. Michaelis–Menten kinetics.
As in [6], we consider the Michaelis–Menten–Henri (MMH) model describing the kinetics of substrate–enzyme interaction (see, e.g., [1,20]). If there is much less enzyme than substrate at t = 0, then 0 < ε ≪ 1 and system (41) has a slow manifold y = h(x). For any x, the value h(x) can be approximated using an asymptotic expansion in terms of the small parameter ε. Choosing κ = 1 and λ = 0.5, as in [6], we obtain the expansion (42), whose first term corresponds to the QSSA. In what follows, we fix ε = 0.01, set x = x* = 1, and use the constrained runs schemes with various values of m to approximate the value y* = h(x*). We remark that, although no closed formula for h(x*) exists, we may use a truncated version of the asymptotic expansion (42) to approximate y*. In particular, a truncated asymptotic expansion including all terms up to and including those of O(ε^{10}) yields, to 20 significant digits, the value y* = 0.50031152780809838151. A comparison with the results obtained with asymptotic expansions of higher order indicates that all 20 digits are indeed exact. (Note here that the value reported in [6] is incorrect at the eighth decimal place due to a transcription error.) In what follows, this value will be used as the exact solution.
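The first two terms of the expansion (42) can be checked by hand. Assuming the standard nondimensionalized MMH form ε dy/dt = x − (x + κ)y, dx/dt = −x + (x + κ − λ)y (an assumption on the exact form of (41)), the invariance equation gives h(x) = x/(x + κ) + ε κλx/(x + κ)^4 + O(ε²); evaluating at x* = 1, κ = 1, λ = 0.5, ε = 0.01 reproduces the quoted y* up to an O(ε²) discrepancy:

```python
# Two-term asymptotic approximation of the MMH slow manifold y = h(x),
# assuming the form eps*dy/dt = x - (x+kappa)*y, dx/dt = -x + (x+kappa-lam)*y;
# then h0 = x/(x+kappa) and, from the invariance equation,
# h1 = kappa*lam*x/(x+kappa)**4.
eps, kappa, lam, x_star = 0.01, 1.0, 0.5, 1.0

h0 = x_star / (x_star + kappa)                    # QSSA term: 0.5
h1 = kappa * lam * x_star / (x_star + kappa) ** 4 # O(eps) coefficient: 0.03125
y_approx = h0 + eps * h1

y_star = 0.50031152780809838151  # high-accuracy value quoted in the text
discrepancy = abs(y_approx - y_star)             # expected to be O(eps**2)
```

The two-term value 0.5003125 agrees with the 20-digit reference to about 10^{-6}, consistent with the neglected O(ε²) term.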
In this section, we apply the class of constrained runs schemes of section 4 to three versions of the MMH model. In particular, in section 5.1.1 below, we state the initial conditions. Then, in section 5.1.2, we work with the explicit slow-fast system (41), which is of the form (2). In section 5.1.3, we define new dependent variables, both of which evolve on a fast time scale, so that we work with a system of the general form (1); these variables are chosen so that the class of schemes remains convergent according to the theory in section 4. In section 5.1.4, we use a second set of new variables, which also evolve on a fast time scale and are chosen so that the class of schemes becomes divergent according to the theory in section 4. In section 5.1.5, we employ Newton's method to solve the (m+1)-st time derivative condition (3) (or rather its discrete counterpart, see equations (36)–(37)) for each of the three versions of the MMH model. Finally, in section 5.1.6, we apply Broyden's method to each one of the three systems.
In all cases, we use the Dormand–Prince embedded Runge–Kutta (4, 5) ODE time integrator [4] to generate a numerical solution of the ODE system. The time step is chosen sufficiently small to ensure that the time integration is virtually exact, that is, that the estimated error drops below machine accuracy. Then, we estimate the (m+1)-st time derivative using equations (36)–(37) with Ĥ = ε. The iterative step size H is also chosen to be equal to ε. Further, we iterate until the nonlinear residual becomes smaller than the absolute tolerance tol. The parameter tol is taken to be 10^{-14} throughout our experiments, because we are interested solely in the accuracy of the zero of the (m+1)-st derivative and thus wish to avoid coloring the results by premature termination. Remark 1. In practice, larger values of tol may be used, depending also on the value of m, and hence the computational effort values below should be understood as mere upper bounds. Indeed, it can be shown that, for every m, ‖v#_m − s(u*)‖ = O(ε^{m+1}) whenever tol = O(ε^{m+1}) and as long as the fixed point ŝ_m(u*) is stable; that is, the value returned by the functional iteration is within the tolerance of the point on the true slow manifold for sufficiently small values of the tolerance. The proof of this fact is exactly analogous to that in [28]: first, it suffices to establish that v#_m and ŝ_m(u*) are O(ε^{m+1}) close for these small tolerances, since the desired result is then immediately obtained by the triangle inequality, ‖v#_m − s(u*)‖ ≤ ‖v#_m − ŝ_m(u*)‖ + ‖ŝ_m(u*) − s(u*)‖, and theorem 1.1. This estimate on ‖v#_m − ŝ_m(u*)‖, in turn, may be established as in [28], by noting that D_v L̂_m(u*, ŝ_m(u*)) is nonsingular and has an O(1) norm, see equations (6) and (14)–(15). Larger values of the tolerance tol, on the other hand, compromise the accuracy with which v#_m approximates ŝ_m(u*) and thus, also, the desired value s(u*). 5.1.1. The constrained runs functional iteration. We use the
same initial guess in all three coordinate systems, (x, y), (u, v), and (ū, v̄), namely, y^{(0)} = v^{(0)} = v̄^{(0)} = 0.625. This value was chosen because, first, it has approximately the same absolute error in all three coordinate systems and, second, because it is close enough to the exact solution to ensure that our local stability analysis applies. 5.1.2. Numerical results for the coordinate system (x, y). Table 1 summarizes the accuracy and efficiency results for the functional iteration schemes. The error shown is equal to |y#_m − y*|, and the reported work is equal to the number of evaluations of L̂_m needed to reach the tolerance (in this case, this number is also equal to the number of functional iteration steps). Note that the evaluation of L̂_m becomes more costly as m increases, as the time integration dominates the overall cost at each iteration step and the length of the integration interval scales with m + 1. Further, the work increases with m, since the convergence factor of the functional iteration also increases with m. For all values of m that we considered, the m-th functional iteration scheme converges to ŝ_m(x*). This result is in agreement with theorem 4.1: indeed, the sole fast eigenvalue is real, and the fast fibers foliate the manifold in an approximately vertical manner, so that φ_ρ = 0 to leading order. Further, the converged values y#_m limit to the exact solution h(x*) = y*, with the absolute error decreasing exponentially with m, as predicted by theorem 1.1.
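The forward-difference estimate of the (m+1)-st time derivative used in these computations (equations (36)–(37)) amounts to an (m+1)-st divided difference of m + 2 equispaced samples along a short trajectory. The stencil below is the standard one and is shown purely as an illustration, on v(t) = e^{−t} with m = 1, for which the true second derivative at t = 0 is 1:

```python
import math

def forward_diff(samples, h):
    """n-th forward difference of n+1 equispaced samples, divided by h**n."""
    n = len(samples) - 1  # order of the derivative being estimated
    acc = 0.0
    for j in range(n + 1):
        acc += (-1) ** (n - j) * math.comb(n, j) * samples[j]
    return acc / h ** n

h = 1e-3
v = [math.exp(-j * h) for j in range(3)]   # m = 1: three samples of v(t) = e^{-t}
d2 = forward_diff(v, h)                    # estimates d^2 v / dt^2 at t = 0
```

The one-sided stencil is first-order accurate, so the estimate here differs from the exact value 1 by roughly h.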

5.1.3.
Numerical results for the coordinate system (u, v). To unravel the implications of a non-vertical fast fibration and to verify our findings in section 4, we also apply the class of constrained runs schemes to a transformed version of the explicit slow-fast system (41). In particular, we use new variables (u, v) chosen so that both u and v evolve on a fast time scale and the slow manifold is a graph over u. As the notation suggests, we take u to denote the observables (it is straightforward to check that the slow manifold is, indeed, parametrizable by this variable, see also figure 6) and seek to approximate the point (u*, v*) corresponding to (x*, y*) under this transformation. Using the highly accurate value for y* reported earlier, we obtain u* = 1.250155763904049190755 and v* = 0.750155763904049190755. The results for the functional iteration schemes are shown in table 1. First, the m-th functional iteration scheme converges to ŝ_m(u*). Indeed, in this case, we have φ_σ = arctan(5/9) ≈ 0.507 and φ_ρ = π/4 to leading order; therefore, φ_σ + φ_ρ ≈ 0.411π, and our results in section 4 imply that each functional iteration scheme is locally stable. Second, the absolute error decreases again exponentially in m. In this case, though, the convergence factor decreases with increasing m, and therefore the work also decreases with m. For the case m = 0, in particular, the work becomes quite large. 5.1.4. Numerical results for the coordinate system (ū, v̄). Finally, we consider a third set of variables (43). Here also, the variable ū denotes the observables, and the point (x*, y*) maps to (ū*, v̄*) under the transformation (43), where ū* = 1.00031152780809838151 and v̄* = 0.750155763904049190755. The slow manifold is, here also, a graph over the new variable ū. In this case, φ_σ = arctan(5/6) ≈ 0.695 and φ_ρ = arctan(2) ≈ 1.107 to leading order. Thus, φ_σ + φ_ρ ≈ 0.574π, and ŝ_m(ū*) is locally unstable under functional iteration according to section 4.
Indeed, the sequence of iterates {v̄^{(i)}} becomes unbounded in the case m = 0 (we declare divergence when v̄^{(n)} exceeds 10^4 for some n). For m = 1, we also fail to reach the tolerance. In particular, after 3071 iterations and due to round-off errors, the iteration sequence becomes trapped in a period-2 cycle around the fixed point, with the nonlinear residual leveling off at 10^{-13} > tol. In all of the cases m ≥ 2 that we examined, the residual does reach the tolerance after a number of iteration steps, an indication that functional iteration converges to a point which lies close to a root of the (m+1)-st derivative. However, this root is non-physical, in the sense that it is O(1) away from v̄* and thus also from s(ū*). (Due to the nonlinearity of the problem, the (m+1)-st derivative condition has several zeroes, but only one of them is asymptotically close to a point on the slow manifold.) Moreover, the iterates converge to this fixed point very slowly. The local instability issue near v̄* is resolved in the next two sections, where we apply Newton's method and Broyden's method [14], instead of functional iteration, to obtain the appropriate zero of the (m+1)-st derivative. 5.1.5. Application of Newton's method. A natural alternative to using functional iteration to locate the zero of the discrete counterpart of the (m+1)-st derivative is to use Newton's method. For our two-dimensional model problem, we have N_f = 1, and the Newton iteration takes a simple form in the original variables x and y. The derivative D_y L̂_m(x*, y_n) can be estimated numerically, for instance using a first-order finite difference with step size γ, with γ a small number (typically chosen equal to the square root of the machine precision in order to balance truncation error and round-off error). Note that each Newton iteration then requires only two evaluations of L̂_m, since the value L̂_m(x*, y_n) is reused in the difference quotient. The error shown is defined as before, depending on the coordinate system we use, and the work is equal to the number of evaluations of L̂_m before the tolerance is
reached (in this case, this number is twice the number of Newton steps, as explained above). In all cases, the method converges to the appropriate fixed point even if the functional iteration scheme does not. Moreover, when both the functional iteration and Newton's method converge, the latter typically converges much faster, since it is second order (i.e., it converges quadratically).
For a general N_f-dimensional system (1), the construction of the Jacobian matrix requires N_f + 1 evaluations of L̂_m. In addition, one must solve a linear system of size N_f × N_f at each iteration step to compute (D_y L̂_m(x*, y_n))^{-1} L̂_m(x*, y_n). For large values of N_f, these tasks may become prohibitively expensive. In that case, one should resort to a more efficient scheme, such as Broyden's method (see below) or a Newton–Krylov method [26]. 5.1.6. Application of Broyden's method. Broyden's method offers yet another alternative for solving the (m+1)-st derivative condition. In general, Broyden's method may be characterized as a least change secant update (also known as quasi-Newton) method [14]. In the case of our two-dimensional model problem, Broyden's method reduces to the classical secant method. Note that each iteration except the first one requires a single evaluation of L̂_m and is thus half as expensive as one iteration using Newton's method. This difference by a factor of two makes the secant method more efficient overall, even though it needs about 44% more iterations than Newton's method because its convergence order is p_s ≈ 1.618 (whereas that of Newton's method is p_n = 2). This may be seen in table 3, which reports our results from applying Broyden's method. The error shown is defined as before, while the work shown is equal to the number of evaluations of L̂_m before the tolerance is reached (as explained above, this number equals the number of Broyden iterations plus one). For general, N_f-dimensional systems of the form (1), a single evaluation of L̂_m is required per Broyden iteration. Furthermore, at each iteration step, one can update directly the inverse of the approximate Jacobian by using efficient techniques such as the Sherman–Morrison formula [14].
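The bookkeeping above (two evaluations per Newton step with a finite-difference derivative, one new evaluation per secant step) can be illustrated on a toy scalar root-finding problem. The function f below is only a stand-in for the residual L̂_m, and the evaluation counts are specific to this example:

```python
import math

evals = 0
def f(y):                       # hypothetical stand-in residual, not L_m
    global evals
    evals += 1
    return math.tanh(y) - 0.25

def newton_fd(y, tol=1e-12, gamma=1.5e-8):
    # Newton's method with a forward-difference derivative: 2 evaluations/step
    while True:
        fy = f(y)
        if abs(fy) < tol:
            return y
        dfy = (f(y + gamma) - fy) / gamma
        y -= fy / dfy

def secant(y0, y1, tol=1e-12):
    # Classical secant method: 1 new evaluation per step after the startup pair
    f0, f1 = f(y0), f(y1)
    while abs(f1) >= tol:
        y0, y1, f0 = y1, y1 - f1 * (y1 - y0) / (f1 - f0), f1
        f1 = f(y1)
    return y1

evals = 0; root_n = newton_fd(1.0); work_newton = evals
evals = 0; root_s = secant(1.0, 0.9); work_secant = evals
```

Both methods locate the root of tanh(y) = 0.25; comparing work_newton and work_secant for a given problem shows how the per-step cost and the convergence order trade off against each other.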

5.2.
A network of coupled enzymatic reactions. In this section, we treat a six-dimensional, fully nonlinear protein interaction network with a two-dimensional slow manifold, which models mutual antagonism between two kinases S and E. This system is composed of four interacting enzymatic networks, each of which describes the chemical transformation of a substrate through the action of an appropriate enzyme and the formation of an intermediate complex, see figure 7 for a schematic representation. (A colon joining a substrate and an enzyme denotes a complex formed by these two constituents; also, the complexes E:S and S:E are not chemically equivalent.) The two subnetworks, transforming S into S* and back and E into E* and back, are called Goldbeter–Koshland modules (or switches) [8], and they play an important role in many cellular functions such as sensing, signaling, and gene regulation. The particular network above describes the antagonism between two regulators, S and E, of the G2-to-mitosis (G2/M) transition in the eukaryotic cell cycle [3]. The antagonism is evident in that each kinase catalyzes the phosphorylation of the other, i.e., E catalyzes the reaction transforming S into S* and S catalyzes the reaction transforming E into E*. These phosphorylated forms S* and E* can be transformed back into their unphosphorylated counterparts S and E by following two distinct reaction paths catalyzed by the enzymes D and F, respectively. The ten state variables of the model correspond to the molecular concentrations of the network constituents. Four of these can be readily eliminated by employing the four exact protein conservation laws present in the system (47). Here, S_T, E_T, D_T and F_T correspond to the total protein concentrations and are determined by the initial conditions. Following [3], we choose to eliminate the inactivated substrates S* and E* and the free enzymes D and F. We thus obtain the system (48) of six ODEs for the remaining concentrations. The values of the reaction rate constants and of the total protein concentrations are identical to those used in [3]; see (49). In particular, no small parameter ε has been explicitly identified for this system, although this is in principle possible [18]. Reductions of this model to two-dimensional systems by applications of the QSSA and its variant, total QSSA (tQSSA), have been previously considered in [3,18]. Following that work, we group E:S, S:E, D:S* and F:E* into the vector v (cf. (3)) and apply the first- (m = 0) and second-derivative (m = 1) conditions to approximate the slow manifold. Then, we employ each one of these two approximations to reduce the original, six-dimensional ODE system to a two-dimensional one. Finally, we simulate these reduced models and compare our numerical results to those obtained by a direct simulation of the full six-dimensional model (48).
Our results are presented in section 5.2.1 below. Our interest here lies in the application of the (m+1)-st derivative condition (3) to the model (48) to glean information on its long-term dynamics. We solve (3) using Newton's method for this model and compute the first and second derivatives of v analytically. The absolute tolerance is set to a value below which no appreciable difference arises in either the approximate manifolds or the reduced dynamics on these manifolds; in particular, tol = 10^{-9} for both m = 0 and m = 1. The full model is simulated using the stiff solver ode23s embedded in MATLAB [25], while numerical integration of the two reduced models is carried out using the RK4 method with fixed time step. Here also, this time step is chosen sufficiently small to ensure that the time integration is virtually exact.

5.2.1.
Results.We tabulated the two-dimensional ZDP 0 and ZDP 1 manifolds over a uniform grid on the region [0, S T ]×[0, E T ] of the (S, E)−plane.This rectangle is the projection of the biologically meaningful region on the (S, E)−plane, see (47) and recall that all concentrations must be non-negative.The grid is organized around the projection on the (S, E)−plane of the globally attracting steady state.This steady state is found by Newton's method on the full, six-dimensional vector field, and it is the first point on the manifold that we tabulate (it is plain to see that fixed points of the full system solve (3) for any value of m).Next, we tabulate the manifold over the four immediate grid neighbors of the steady state, using the value of the vector v at that steady state to initialize Newton's method.These five points are then used to seed a three-point linear extrapolation procedure which produces rather accurate initializations for Newton's method over the entire grid, provided the grid step size is chosen small enough-see the supplement to [10] for a detailed description of this continuation procedure.Executing Newton's method at every grid point using these initializations, we tabulate the manifold over the entire grid.The tabulated manifolds for m = 0 and m = 1 are shown in figure 8.Each manifold is plotted over the subset of the rectangle [0, S T ] × [0, E T ] where all concentrations remain positive, as negative concentrations are biochemically meaningless.
We used these tabulations to construct a timestepper which maps out the reduced dynamics on these approximate slow manifolds. To obtain a trajectory of the reduced dynamics on the ZDP_0 or ZDP_1 manifold, we start by specifying an initial condition (S_0, E_0) on the (S, E)-plane. This initial condition must be pushed forward in time, and in order to do so we need to obtain the (S, E)-component of the vector field at the point (S_0, E_0, E:S, S:E, D:S*, F:E*) on the ZDP_0 or ZDP_1 manifold. To lift (S_0, E_0) onto the manifold, we interpolate from the tabulated v-values at its three closest grid neighbors. This lift is further improved by an application of Newton's method with the tolerance value reported earlier. Once the (S, E)-component of the vector field has been computed, the initial condition can be pushed forward in time, and the same procedure can be applied sequentially to yield an entire (discrete) trajectory.

Figure 8. Each panel shows the projection of these manifolds to a three-dimensional space. The horizontal axes in each plot correspond to the concentrations S and E parameterizing the manifolds, while the vertical one corresponds to one of the four concentrations comprising v. Note that the two manifolds are practically indistinguishable in panel 2, yet the QSSA manifold extends over a larger domain. The steady state is denoted by a solid dot in each panel.

We have plotted several such trajectories in figure 9, both for the QSSA- and the ZDP_1-reduced dynamics. The initial conditions for all of these lie on the boundary of a curvilinear triangle outlining the region where all four components of the ZDP_1 manifold are positive. This region is an approximation of the exact region where slow dynamics is biologically meaningful, i.e., of the forward-invariant region where all four components of the exact slow manifold are positive.
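The lift-and-evolve procedure can be caricatured with one slow and one fast variable: tabulate v = h(u) on a grid, then step the reduced variable using the u-component of the vector field evaluated at the interpolated lift. Everything below (the toy manifold, vector field, grid, and step size) is illustrative only:

```python
import bisect

# "Tabulated" manifold v = h(u), a stand-in for a ZDP tabulation
ugrid = [0.01 * k for k in range(201)]          # u in [0, 2]
vtab = [x ** 2 for x in ugrid]                  # pretend these came from Newton

def lift(u):
    """Piecewise-linear interpolation of the tabulated v-values."""
    i = min(max(bisect.bisect_left(ugrid, u), 1), len(ugrid) - 1)
    u0, u1 = ugrid[i - 1], ugrid[i]
    w = (u - u0) / (u1 - u0)
    return (1 - w) * vtab[i - 1] + w * vtab[i]

def f_u(u, v):                                  # illustrative slow vector field
    return -u * v

# Reduced timestepper: interpolate the lift, then take an explicit Euler step
u, dt = 1.0, 1e-3
for _ in range(1000):
    u += dt * f_u(u, lift(u))
```

On the toy manifold v = u², the reduced dynamics is du/dt = −u³, whose exact solution at t = 1 from u(0) = 1 is 1/√3; the tabulate-lift-step loop reproduces it to the accuracy of the Euler step and the interpolation.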
The two sets of trajectories exhibit large differences. To assess their quality, we have also plotted the corresponding trajectories of the full, six-dimensional system. To obtain these trajectories, we first need to map the initial conditions on the (S, E)-plane (used to generate the trajectories for the reduced systems) to initial conditions in the full state space; these can then be integrated forward in time using the vector field (48). This mapping was achieved by lifting the reduced initial conditions to the ZDP_1 manifold. (Lifting to the QSSA manifold yielded similar results modulo an initial fast transient (data not shown).) The results are plotted in the same figure; as can be readily seen, the ZDP_1-reduced model yields trajectories which remain much closer to their full model counterparts than the QSSA-reduced model.

Figure 9. Projection of the slow, reduced dynamics on the (S, E)-plane corresponding to the QSSA manifold (solid dots), the ZDP_1 manifold (blank dots), and the full system (48) (thin curves) for the first parameter set (49). The projections on the (S, E)-plane of all initial conditions lie on the boundary of the curvilinearly triangular region where the ZDP_1 manifold is positive. The steady state lies approximately at (0.47, 0.02).

The first striking discrepancy between the data generated by the QSSA-reduced model and the full model is evident in that QSSA severely overestimates the extent to which S decreases as the solution approaches the steady state: QSSA predicts a substantial decrease, whereas the actual one is barely noticeable, see figure 9. The second discrepancy concerns the rate at which the concentrations approach their steady state values: it is plain to see from the same figure that QSSA predicts a rate of decrease for E which is approximately 50% higher than the actual one.
An additional point of interest becomes apparent once we plot the reduced dynamics for a different set of parameters, namely, the second parameter set (50). The total protein concentrations are set to the values S_T = E_T = 1 and D_T = F_T = 2. Note that, for these values of the reaction rate constants, the state space exhibits the mirror symmetry (S, E, S:E, E:S, D:S*, F:E*) ↔ (E, S, E:S, S:E, F:E*, D:S*), which induces the symmetry S ↔ E in the reduced dynamics. Representative trajectories of this reduced dynamics are plotted in figure 10. Here also, the trajectories corresponding to the ZDP_1-reduced model describe the evolution of the system during this slow phase much better than their QSSA-generated counterparts. These latter trajectories approach, here also, their steady state values substantially faster than in reality and exhibit large excursions (overshoots) of S and E where there is none; see the solutions near the top-left and bottom-right corners of the figure. The novel feature here, a direct consequence of these overshoots, is that the QSSA-reduced model yields trajectories which do not remain within the biologically relevant domain. This results in certain concentrations (among those slaved to S and E) assuming negative values on their way to equilibrium; recall the significance of this domain. The ZDP_1-reduced model, on the other hand, yields trajectories which remain within the domain, except very close to the cusps at the top-left and bottom-right corners.

Figure 10. Projection of the slow, reduced dynamics on the (S, E)-plane corresponding to the QSSA manifold (solid dots), the ZDP_1 manifold (blank dots), and the full system (48) (thin curves) for the second parameter set (50). As in figure 9, the projections on the (S, E)-plane of the initial conditions lie on the boundary of the region where the ZDP_1 manifold is positive. The steady state here lies approximately at (0.57, 0.57).
6. Constrained runs schemes defined by implicit maps. In section 3, we saw that, for any m = 0, 1, ..., the m-th algorithm in our class of algorithms may have a number of eigenvalues that either lie outside the unit disk or have modulus only slightly less than one. In this section, we demonstrate how the use of implicitly defined algorithms (instead of explicit ones) stabilizes the algorithm (either the one using F_m or the one using F̂_m) when all of its eigenvalues are contained in (−∞, 1).
For each m = 0, 1, ..., we define the maps F^I_m : R^{N_f} → R^{N_f} and F̂^I_m : R^{N_f} → R^{N_f} (where "I" stands for "implicit") as the simplest implicit versions of the maps F_m and F̂_m, respectively, for any given u* ∈ K; see (51). It is easy to prove that the fixed points of F_m and F̂_m are also fixed points of F^I_m and F̂^I_m, respectively. Indeed, substituting the fixed point condition into (51), we arrive at the desired result v = s_m(u*). That ŝ_m(u*) is a fixed point of F̂^I_m may be proved in the same way.
Next, we look at the spectra σ(DF^I_m(u*)) and σ(DF̂^I_m(u*)). For concreteness, we focus on σ(DF^I_m(u*)) and remark that similar formulas may be obtained for σ(DF̂^I_m(u*)) in an entirely analogous manner. The definitions (51) yield, upon differentiation with respect to v, a formula for DF^I_m(s_m(u*)), in which we have used equation (12) to replace D_v L_m by I_{N_f} − DF_m. This formula shows that, if σ(DF_m(s_m(u*))) ⊂ (−∞, 1), then σ(DF^I_m(s_m(u*))) ⊂ (0, 1), and thus the (implicit) functional iteration defined by F^I_m is locally stable. The exact same conclusions may be drawn for the implicit functional iteration defined by F̂^I_m.
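The effect of this implicit stabilization can be checked on a scalar caricature (not the maps (51) themselves): take a linear explicit map F(v) = s + μ(v − s) with μ < −1, so that explicit iteration diverges, write it as F(v) = v − G(v) with G(v) = (1 − μ)(v − s), and define the simplest implicit version by solving w = v − G(w). Under this linear model, the implicit amplification factor works out to 1/(2 − μ), which lies in (0, 1) whenever μ ∈ (−∞, 1):

```python
# Scalar caricature of section 6: explicit map F(v) = s + mu*(v - s) with
# mu < -1 (divergent), versus the implicit update solving w = v - G(w),
# G(v) = (1 - mu)*(v - s), whose amplification factor is 1/(2 - mu).
s, mu = 0.3, -2.5          # mu in (-inf, 1), but |mu| > 1: explicit diverges

def explicit_step(v):
    return s + mu * (v - s)

def implicit_step(v):
    # Linear solve of w = v - (1 - mu)*(w - s) for w
    return (v + (1 - mu) * s) / (2 - mu)

v_exp = v_imp = 1.0
for _ in range(50):
    v_exp = explicit_step(v_exp)
    v_imp = implicit_step(v_imp)
```

After 50 steps, the explicit iterates have blown up, while the implicit iterates have contracted to the fixed point s with factor 1/4.5 per step.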
7. Summary and discussion. The main object of study in this article was the constrained runs algorithm (4), with m arbitrary but fixed and H a scalar control parameter. This algorithm was proposed in [6] as a computationally inexpensive way of approximating an arbitrary point (u*, s(u*)) on the slow, invariant, normally attracting manifold L. More specifically, the algorithm was designed to solve the (m+1)-st derivative condition (3): all of its fixed points constitute solutions to this condition and vice versa. The existence of a solution (u*, s_m(u*)) to (3) was established in [29], where we also showed that, first, this solution is ε^{m+1}-close to the point (u*, s(u*)) ∈ L; and second, it is the only solution in an O(1) neighborhood of that point. (Note that spurious solutions may exist outside such an O(1) neighborhood, depending on the particulars of the dynamical system at hand.) It follows directly that (u*, s_m(u*)) is a fixed point for the constrained runs algorithm (4) sharing the same closeness and isolation properties.
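As a concrete illustration of the (m+1)-st derivative condition and the ε^{m+1}-closeness just recalled, consider the hypothetical linear fast-slow system u' = −u, ε v' = u − v (chosen for transparency; it is not one of the models treated in this article). Its slow manifold is v = s(u) = u/(1 − ε), and solving the condition d^{m+1}v/dt^{m+1} = 0 by hand gives v_m = u(1 + ε + ... + ε^m), so that s(u) − v_m = ε^{m+1} u/(1 − ε). The sketch below verifies the convergence order numerically:

```python
import numpy as np

# Hypothetical linear fast-slow system (illustration only):
#   u' = -u,   eps * v' = u - v.
# Exact slow manifold: v = s(u) = u / (1 - eps).
# Zeroing the (m+1)-st time derivative of v and solving for v yields
#   v_m = u * (1 + eps + ... + eps^m),
# so the error s(u) - v_m equals eps^{m+1} * u / (1 - eps) = O(eps^{m+1}).
u = 1.0
for m in range(3):
    errs = []
    for eps in (1e-2, 1e-3):
        v_m = u * sum(eps**k for k in range(m + 1))
        s_u = u / (1.0 - eps)
        errs.append(abs(s_u - v_m))
    # Observed order of the error in eps, estimated from the two samples.
    order = np.log(errs[0] / errs[1]) / np.log(10.0)
    assert abs(order - (m + 1)) < 0.05
```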
The main question asked at present was whether (and how) the control parameter H can be tuned for the constrained runs algorithm to converge to the aforementioned fixed point, when applied to a nonlinear multiscale problem (1). The same question was previously asked in [28] in a restricted setting, namely, for a multiscale system of the form (2), i.e., with slowly evolving observables. In that special setting, we showed that H can be tuned to make the fixed point (locally) exponentially attracting, provided that the fast spectrum σ((D_y g)(u*, s_0(u*), 0)) is real. (Here, s_0 is the leading order approximation of s.) The presence of complex eigenvalues in this spectrum was found to complicate the situation considerably and to raise the possibility of the constrained runs algorithm diverging for certain values of m irrespective of the choice of H; explicit convergence criteria were derived in that same work and reviewed in theorems 3.2 and 4.1 in this article.
To answer the question above in the more general setting described by (1), this article dealt with, and partially quantified, the effects that the absence of explicit slow variables has on our earlier stability results. As a starting point, we established that local stability is now governed by the geometry of the multiscale system (1) as much as by its normal dynamics (encoded in D_y g, see above). By geometry, here, we understand the relative orientations of the affine subspace u = u* and of the tangent spaces to the (leading order) slow manifold and fast fiber through (u*, s_0(u*)); this information is encoded in the matrix P_{w0} defined in (13). In particular, the stability problem was found to correspond to a generalized eigenvalue problem involving the matrices D_y g and P_{w0}.
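In the scalar setting of figure 4, the linearization of the algorithm reads DF_m = I_{N_f} − D^{m+1} P^{-1}_{w0}, coupling the damped normal dynamics (encoded in D) with the geometric factor P^{-1}_{w0}. The sketch below evaluates this expression for sample matrices and tests the spectral radius against one; both D and P^{-1}_{w0} are arbitrary stand-ins rather than matrices derived from a concrete model, and the matrix form of the expression is used purely schematically:

```python
import numpy as np

def stable(D, P_inv, m):
    """Spectral-radius test for the schematic linearization
    DF_m = I - D^{m+1} P_{w0}^{-1} (stand-in matrices, illustration only)."""
    n = D.shape[0]
    DF = np.eye(n) - np.linalg.matrix_power(D, m + 1) @ P_inv
    return np.max(np.abs(np.linalg.eigvals(DF))) < 1.0

# Trivial geometry (P_{w0} = I): stability is governed by D^{m+1} alone.
D = np.diag([0.5, 0.8])
assert stable(D, np.eye(2), m=0)

# A nontrivial geometric factor can destabilize the same normal dynamics:
# a sign flip along one direction pushes an eigenvalue of DF_m outside
# the unit disk.
P_inv = np.array([[-3.0, 0.0], [0.0, 1.0]])
assert not stable(D, P_inv, m=0)
```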
Generalized eigenvalue problems are not explicitly solvable, in general, and hence a full theoretical characterization of the stability of the fixed point in this general setting remains out of reach. Nevertheless, we succeeded in fully analyzing certain special cases; in particular, those where (a) either the fast fibration is vertical or the slow manifold horizontal (both at leading order), as well as (b) the toy scenario N_f = 1 (a single fast direction) and (c) the considerably more significant case of a fast fibration orthogonal to L_0 and of symmetric, positive definite normal dynamics. In case (a), the problem reduces to that considered in [28], and theorems 3.2 and 4.1 in this article quantify the local stability properties of the fixed point of the algorithm; see also our discussion above. In case (b), the algorithm becomes a map on the real line, and the fixed point is stable for trivial geometry of the type considered in case (a) and H below a certain value (cf. (33)). We showed that a nontrivial geometry can preserve the stability of the fixed point but also destabilize it, depending on the sign of (the scalar) P_{w0}. In case (c), finally, we demonstrated that the geometry can work both towards stabilizing unstable directions and destabilizing stable ones, as well as leave the stability types of other directions unaffected. A supplementary theoretical result applicable to systems of this type concerns the number of unstable directions: we found that there are at most as many of these as the dimensionality of L_0. For large multiscale systems (N_s ≪ N_f) of type (c), this implies that one must stabilize relatively few directions, and thus the constrained runs algorithm can be stabilized efficiently through, e.g., an application of the recursive projection method [24]. Unfortunately, no analogous result holds for more general classes of systems (1). This fact, together with the intricacy of the stability boundaries whenever complex eigenvalues are present in the normal spectrum, essentially precludes a practically useful theoretical characterization of the convergence properties of the constrained runs algorithm in the setting of (1).
These stability results were demonstrated concretely on an ensemble of two-dimensional models deriving from the classic Michaelis-Menten enzymatic reaction. The stability of the constrained runs algorithms, as applied to these models, can be fully predicted using the aforementioned eigenvalue problem. The results of our numerical simulations were in perfect agreement with these theoretical predictions. Additionally, we showed how Newton's method can be employed to stabilize the constrained runs algorithm when it is unstable. In this context, this approach is equivalent to the aforementioned recursive projection method; note also that the implicitly defined maps discussed in section 6 achieve similar stabilization. A second set of simulations was carried out for a six-dimensional, fully nonlinear biochemical model with a two-dimensional slow manifold, which corresponds to a network of coupled enzymatic reactions. The focus in this case shifted from investigating algorithmic stability and implementing efficient stabilizations to employing the (stabilized) constrained runs algorithms for mapping out the long-term system dynamics. We verified that the m = 0 algorithm, known as the Quasi-Steady State Approximation (QSSA), represents these dynamics inaccurately, as Geometric Singular Perturbation Theory (GSPT) predicts whenever the observables evolve on a fast time scale. (See [11] for discussion of a similar phenomenon in a different model.) The m = 1 algorithm, on the other hand, reproduces the actual long-term dynamics essentially flawlessly; this is also in agreement with GSPT, which additionally predicts that higher-order algorithms enjoy even greater accuracy.
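The stabilization just described can be illustrated on a toy scalar fixed-point problem (an arbitrary map, unrelated to the Michaelis-Menten models): when |F'| > 1 at the fixed point, plain functional iteration diverges, whereas Newton's method applied to G(v) = F(v) − v still converges locally.

```python
# Arbitrary toy map with fixed point v* = 0 and F'(0) = -2, so plain
# functional iteration v <- F(v) diverges (illustration only).
F = lambda v: -2.0 * v + 0.1 * v**2
G = lambda v: F(v) - v            # roots of G are fixed points of F
dG = lambda v: -3.0 + 0.2 * v     # G'(v), computed by hand for this map

# Functional iteration moves away from the fixed point: |F'(0)| = 2 > 1.
v = 0.1
for _ in range(10):
    v = F(v)
assert abs(v) > 1.0

# Newton's method on G converges to v* = 0 regardless of |F'(0)|.
v = 0.1
for _ in range(20):
    v = v - G(v) / dG(v)
assert abs(v) < 1e-12
```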
Appendix A. Proof of lemma 3.1 in the general case. To establish the validity of the lemma in question for a general nonlinear system (1), we need two auxiliary results. These are stated and proved in section A.1 directly below. The proof of lemma 3.1 is given in section A.2.
A.1. A formula for L_m and slow-fast decompositions. Our first auxiliary result yields a leading order formula for the function L_m introduced in (11).
Lemma A.1. For m = 0, 1, ..., the function L_m satisfies

L_m(z) = (D_y v)(D_y g)_0^m g_0(z) + R_m(g_0(z), g_0(z)) + O(ε),

where R_m is a (z-dependent) bilinear form, the notation (•)_0 denotes evaluation of the parenthesized quantity at ε = 0, and z ∈ R^N is arbitrary.
Proof. The proof is by induction on m. For m = 0, we show that

L_0(z) = (D_y v) g_0(z) + O(ε).    (52)

Equation (11) readily yields

L_0 = ε [(D_x v) f + ε^{-1} (D_y v) g].

Factoring out the factor of ε^{-1}, expanding g in powers of ε, and grouping all higher order terms in that expansion, together with ε (D_x v) f, into an O(ε) remainder, we obtain the desired formula (52).
Similarly, for m = 1, we have

L_1 = [ε (D_x •) f + (D_y •) g] L_0,    (53)

cf. the definition (11). An expression for L_0 is reported in (52) above and can be readily substituted into this equation. We note that an application of the differential operator ε (D_x •) f + (D_y •) g to the O(ε) terms does not alter their asymptotic magnitude. Additionally, the term ε D_x[(D_y v) g_0] f can similarly be absorbed within the O(ε) remainder. The same holds for all higher order terms in the asymptotic expansion of g, so that

L_1 = (D_y v)(D_y g)_0 g_0 + R_1(g_0, g_0) + O(ε).

We now carry out the induction step. We assume that

L_m = (D_y v)(D_y g)_0^m g_0 + R_m(g_0, g_0) + O(ε)    (54)

and use the definition of L_{m+1} to write

L_{m+1} = [ε (D_x •) f + (D_y •) g] L_m,    (55)

cf. (53). Here also, an explicit expression for L_m is available through the induction hypothesis (54) and can be substituted into (55). Moreover, as we saw above already, an application of ε (D_x •) f + (D_y •) g to the O(ε) remainder conserves its asymptotic magnitude, while the term ε D_x[(D_y v)(D_y g)_0^m g_0 + R_m(g_0, g_0)] f can similarly be absorbed within that remainder. Thus,

L_{m+1} = D_y[(D_y v)(D_y g)_0^m g_0] g_0 + D_y(R_m(g_0, g_0)) g_0 + O(ε),

where we have also expanded g asymptotically. Next,

D_y[(D_y v)(D_y g)_0^m g_0] g_0 = (D_y v)(D_y g)_0^{m+1} g_0 + D_y[(D_y v)(D_y g)_0^m](g_0, g_0).

Note that the last two terms in the right member of this pair of equations are quadratic in g_0. Finally,

D_y(R_m(g_0, g_0)) g_0 = (D_y R_m)(g_0, g_0, g_0) + R_m((D_y g)_0 g_0, g_0) + R_m(g_0, (D_y g)_0 g_0),

and hence

L_{m+1} = (D_y v)(D_y g)_0^{m+1} g_0 + R_{m+1}(g_0, g_0) + O(ε),

with R_{m+1} again a (z-dependent) bilinear form. This completes the proof of the lemma.
The second auxiliary result enables us to decompose vector fields defined on L_0 into leading order slow and fast components.
Lemma A.2. Let v : L_0 → R^N be a vector field defined on L_0. Define the matrices B and A through formulas (56) and (57), respectively, where (•)_0 = (•)(w, 0) for any point w ∈ L_0. Then the decomposition (A.2) of v into the components A_s v_s and A_f v_f is a leading order slow-fast decomposition for v, in the sense of (58), where the spans are meant column-wise.

Proof. The span conditions collected in (58) directly establish that the decomposition (A.2) is leading order slow-fast, since the columns of A_s v_s and A_f v_f are linear combinations of the columns of A_s and A_f, respectively. We start by noting that

span B_{s⊥}(w) = N_w L_0,    (59)

see (18). Additionally, the leading order fast fiber F_0^w is the level set through w of the function x. Since gradients are normal to level sets, we immediately obtain

span B_{f⊥}(w) = N_w F_0^w.    (60)

Further, the identity BA = BB^{-1} = I, written block-wise, combines with (59)-(60) to yield the desired result (58). It remains to establish formula (57) for A. First, we define the auxiliary matrix Â in (61) and decompose A as

A = (D_z w)_0 Â C.    (62)

Geometrically, (D_z w)_0 Â is the push-forward of the (column) vectors collected in Â from the x- to the w-coordinate system, and C is the non-singular coefficient matrix expressing A in terms of (D_z w)_0 Â; see the remark below.
To produce an explicit formula for C, and thus also for A, we rewrite (62) as C^{-1} = B (D_z w)_0 Â. Substituting for B and Â from Eqs. (56) and (61), respectively, we obtain explicit expressions for the blocks of C^{-1}. To establish the last equality, for the block (C^{-1})_{21}, we combine (5) and (17) and take the total derivative with respect to x of both members of the resulting identity. Similarly, to prove that the block (C^{-1})_{12} vanishes, we have used that D r_0 = (D_y u)_0 (D_y v)_0^{-1}. Indeed, fixing the basepoint w_0 = (u*, s_0(u*)) = w(x*, h_0(x*)) and combining (7) and (9), we find that u(x*, y) − r_0(v(x*, y); u(x*, h_0(x*))) = 0 along the fiber F_0^{w0}. The result now follows by taking the total derivative of both members with respect to y. Since C^{-1} is block-diagonal, it can be inverted block-wise, and formula (57) for A now follows by combining the resulting expression for C with (62).
Remark 2. The crux of the derivation of (57) is that C^{-1} is block-diagonal and thus explicitly invertible. In fact, we engineered the decomposition (62) specifically to guarantee this fact. To see this, note first that span B̂_{s⊥}(z(w)) = N_{z(w)} L_0 and span B̂_{f⊥}(z(w)) = N_{z(w)} F_0^{z(w)}.
Equivalently, in terms of the w-coordinate system, span((D_z w)_0 Â_s(z(w))) = T_w L_0 and span((D_z w)_0 Â_f(z(w))) = T_w F_0^w, since the prefactor (D_z w)_0 transforms (pushes forward) the (column) vectors in Â_s and Â_f from the x- to the w-coordinate system (see [27] for more details). Since A_s(w) and (D_z w)_0 Â_s(z(w)) span the same space, there exists an invertible N_s × N_s matrix relating the two, and similarly for the fast components; this establishes that C is block-diagonal, cf. (62).
A.2. Proof of lemma 3.1. We are now in a position to prove lemma 3.1. We establish the validity of formula (15) for D_v L_m in four steps.
Step 1. In this first step, we derive an intermediate formula for D_v L_m. Recall that lemma A.1 provides the explicit leading order expression for L_m,

L_m = (D_y v)(D_y g)_0^m g_0 + R_m(g_0, g_0) + O(ε).

Differentiating it with respect to v and observing that differentiation of the O(ε) terms does not change their asymptotic magnitude, we obtain

D_v L_m = D_v[(D_y v)(D_y g)_0^m g_0] + R̃_m g_0 + O(ε).

(Here, R̃_m is a (z-dependent) matrix; the term R̃_m g_0 arises from differentiation of R_m(g_0, g_0).) Applying the product rule in the right member and absorbing into R̃_m g_0 the resulting terms proportional to g_0, we write, with a slight abuse of notation,

D_v L_m = (D_y v)(D_y g)_0^m (D_v g_0) + R̃_m g_0 + O(ε).

Next, we evaluate this formula at w̃_m. Recalling the estimate w̃_m = w_0 + O(ε) (see theorem 1.1) and that w_0 ∈ L_0 = {z | g_0(z) = 0} (see section 2), we conclude that g_0(z(w̃_m)) = O(ε). Thus, to leading order,

D_v L_m = ((D_y v)(D_y g)^m (D_v g))_0 + O(ε),    (64)

where (•)_0 here carries the same meaning as in lemma 3.1.
Step 2. In this second step, we rewrite our formula (64) for D_v L_m as the restriction on T_{w0} F_0^{w0} of a map involving the full Jacobian D_z G. In particular, we establish the leading order identity

D_v L_m = ((D_z v)(D_z G)^{m+1} (D_v z))_0 + O(ε).    (65)

We remark that the matrices D_z v, D_z G, and D_v z are of dimensions N_f × N, N × N, and N × N_f, respectively. We first note that a straightforward induction using the definition of G establishes the leading order result that the right member of (64) can be recast in terms of powers of (D_z G)_0. This, in conjunction with (64), yields the desired result (65).
Step 3. To utilize formula (65) derived above, we need to compute the action of (D_z G)_0 on (D_v z)_0 in an effective manner. To that end, we obtain in this step a leading order slow-fast decomposition for (D_v z)_0 (cf. lemma A.2). Since the matrices A and B introduced in that lemma are the inverse of each other, AB = I. Hence, we may write

(D_v z)_0 = A B (D_v z)_0.    (67)

Employing the formula for B from lemma A.2, we find an explicit expression for B (D_v z)_0, and hence (67) becomes a leading order slow-fast decomposition of (D_v z)_0. Next, we employ the formula for A_f from the same lemma and use the chain rule to obtain the fast component explicitly; the slow component is obtained similarly.

Appendix B. The spectrum of P_{w0}^{-1}.

B.1. The case N_f ≤ N_s. In this case, we can prove the following result:

Lemma B.1. The eigenvalues of P_{w0}^{-1} may assume any nonzero complex values. Moreover, these eigenvalues may have any multiplicities compatible with the fact that R and S are real matrices.
Proof. First, 0 ∉ σ(P_{w0}^{-1}), because P_{w0}^{-1} is invertible. Next, let {(λ_ℓ, k_ℓ, g_ℓ) | 1 ≤ ℓ ≤ n}, for some n ≤ N_f, be the set of (desired) nonzero eigenvalues together with their (desired) geometric and algebraic multiplicities, respectively. Then, we may construct an N_f × N_f matrix S̄ which is in Jordan canonical form and has eigenvalues 1 − 1/λ_1, ..., 1 − 1/λ_n with multiplicities as above. Letting S = (S̄, 0) and R = (I_{N_f}, 0)^T, where the zero blocks are of appropriate dimensions, we obtain that P_{w0} = I_{N_f} − SR = I_{N_f} − S̄. Thus, σ(P_{w0}) = {1/λ_1, ..., 1/λ_n}, where the eigenvalues have the desired multiplicities. As a result, σ(P_{w0}^{-1}) = {λ_1, ..., λ_n}, and these eigenvalues have the preassigned multiplicities by the identity σ(P_{w0}^{-1}) = (σ(P_{w0}))^{-1} and the fact that inversion does not affect these multiplicities.

B.2. The case N_f > N_s. Next, we assume that N_f and N_s can take any values such that N_f > N_s. The first case we consider is the special (but generic; see below) case

dim N(R) = N_f − N_s,   N(S) = {0},   and   R(S) ∩ N(R) = {0}.    (70)

In this setting, we can prove the following result:

Lemma B.2. Let σ(Â) = {λ_1, ..., λ_n}, where the eigenvalues are reported without repetition (i.e., without counting their multiplicities), and assume that the equations collected in (70) hold. Then, σ(P_{w0}^{-1}) consists of the eigenvalue one and of the eigenvalues 1/(1 − λ_ℓ), ℓ = 1, ..., n. For any such ℓ, 1/(1 − λ_ℓ) is neither zero nor one but is otherwise arbitrary. Additionally, one is of algebraic and geometric multiplicity equal to N_f − N_s, while the geometric and algebraic multiplicities of 1/(1 − λ_ℓ) as an eigenvalue of P_{w0}^{-1} are equal to those of λ_ℓ as an eigenvalue of Â.

If at least one of the equations collected in (70) does not hold, then we can prove the following more general, but less concrete, result.
Lemma B.3. Write σ(Â) \ {0} = {λ_1, ..., λ_n} without repetition. Then, σ(P_{w0}^{-1}) consists of the eigenvalue one and of the eigenvalues 1/(1 − λ_ℓ), ℓ = 1, ..., n. For any ℓ = 1, ..., n, 1/(1 − λ_ℓ) is neither zero nor one but is otherwise arbitrary. Additionally, one is of algebraic and geometric multiplicity at least equal to dim N(R) ≥ N_f − N_s. The geometric and algebraic multiplicities of 1/(1 − λ_ℓ) as an eigenvalue of P_{w0}^{-1} are at least equal to those of λ_ℓ as an eigenvalue of Â. The precise values of the eigenvalue multiplicities in the lemma above cannot be determined without recourse to more specific information about R and S.
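The construction in the proof of lemma B.1 is easy to verify numerically. In the sketch below, the target eigenvalues are arbitrary real choices, S̄ is taken diagonal (a special case of Jordan form), and S and R are padded with zero blocks exactly as in that proof:

```python
import numpy as np

# Target nonzero eigenvalues for P_{w0}^{-1} (arbitrary real choices).
targets = np.array([2.0, -0.5, 4.0, -3.0])
Nf, Ns = len(targets), len(targets) + 1      # case N_f <= N_s

# Following the proof of lemma B.1: a diagonal matrix with eigenvalues
# 1 - 1/lambda, padded with zero blocks to form S (N_f x N_s) and
# R (N_s x N_f).
Sbar = np.diag(1.0 - 1.0 / targets)
S = np.hstack([Sbar, np.zeros((Nf, Ns - Nf))])
R = np.vstack([np.eye(Nf), np.zeros((Ns - Nf, Nf))])

P = np.eye(Nf) - S @ R                       # P_{w0} = I - S R = I - Sbar
lam = np.linalg.eigvals(np.linalg.inv(P))
assert np.allclose(np.sort(lam.real), np.sort(targets))
```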
As we remarked above, the setting assumed in lemma B.2 is generic; that is, the set of matrix pairs (R, S) satisfying the equations collected in (70) is of full measure. Indeed, generically speaking, and since R and S are N_s × N_f and N_f × N_s matrices, respectively, dim N(R) = N_f − N_s and dim N(S) = 0 by the rank-nullity theorem. The first of these equations is identical to the first equation in (70). The second equation, in turn, implies that N(S) = {0}, which is the second equation in (70). Also, this equation combined with the rank-nullity theorem yields that dim R(S) = N_s, and thus the dimensions of R(S) and N(R) are complementary with respect to that of the carrier space C^{N_f}. Therefore, generically speaking again, the two spaces intersect transversally, and hence R(S) ∩ N(R) = {0}, which is the desired third equation in (70).
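Lemma B.2 can likewise be checked on a concrete pair (R, S). The matrices below are hand-picked so that the genericity conditions (70) hold (verified by inspection), and we take Â = RS, which is our reading of the notation:

```python
import numpy as np

# Hand-picked matrices in the generic setting (70), case N_f > N_s:
# R is N_s x N_f with dim N(R) = N_f - N_s, S is N_f x N_s with N(S) = {0},
# and R(S) meets N(R) trivially (checked by inspection for this pair).
Ns, Nf = 2, 5
R = np.array([[2.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 3.0, 0.0, 1.0, 0.0]])
S = np.zeros((Nf, Ns))
S[0, 0] = 1.0
S[1, 1] = 1.0

P = np.eye(Nf) - S @ R                 # P_{w0} = I_{N_f} - S R
lam = np.linalg.eigvals(np.linalg.inv(P))

# Eigenvalue one with multiplicity N_f - N_s ...
ones = np.isclose(lam, 1.0)
assert ones.sum() == Nf - Ns

# ... and 1/(1 - mu) for each eigenvalue mu of A-hat = R S (here 2 and 3).
mu = np.linalg.eigvals(R @ S)
assert np.allclose(np.sort(lam[~ones]), np.sort(1.0 / (1.0 - mu)))
```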

Figure 1. The action of P_{w0} on a vector v_I that lies on the affine subspace u = u*. Here, u* ∈ K is a fixed value of the observables u, while s_0 and r_0(•; u*) are the functions whose graphs form the leading order slow manifold L_0 and fast fiber F_0^{w0}, respectively. Also, P_{w0,1} and P_{w0,2} are the projections introduced in (13).

Figure 2. The action of DF_m on a vector v_I sitting on the affine subspace u = u*.

Figure 3. A schematic, in the special case N_s = N_f = 1, class (β), of the relative configurations of the spaces T_{w0} L_0, N_{w0} L_0, and T_{w0} F_0^{w0} and of the angles φ_σ and φ_ρ.

Figure 4. A schematic, in the special case N_s = N_f = 1, of two relative configurations of the spaces T_{w0} L_0 and T_{w0} F_0^{w0} leading to a stable (converging) algorithm (left panel, case (β)) and an unstable (diverging) algorithm (right panel, case (γ)). Here, v_I is the initial guess, v_A = P_{w0}^{-1} v_I, v_B = −D^{m+1} v_A, and (DF_m) v_I = (I_{N_f} − D^{m+1} P_{w0}^{-1}) v_I = v_I + v_B.

Figure 5. Application of DF̂_m to an initial guess v^{(1)} in the special case N_s = N_f = 1. Two relative configurations of T_{w0} L_0 and T_{w0} F_0^{w0} are shown, one for which the algorithm is stable (left panel) and one for which it is unstable (right panel). Here, v^{(ℓ+1)} = (DF̂_m) v^{(ℓ)}, where ℓ = 1, 2, 3.

Figure 7. A protein interaction network modeling antagonism between two kinases, S and E. Note that E:S, S:E, D:S*, and F:E* denote complexes formed by a substrate and an enzyme.

Figure 8. The QSSA (black) and ZDP_1 (grey) manifolds for the ODE system (48) with v = [E:S, S:E, D:S*, F:E*]. Each panel shows the projection of these manifolds to a three-dimensional space. The horizontal axes in each plot correspond to the concentrations S and E parameterizing the manifolds, while the vertical one corresponds to one of the four concentrations comprising v. Note that the two manifolds are practically indistinguishable in panel 2, yet the QSSA manifold extends over a larger domain. The steady state is denoted by a solid dot in each panel.

Figure 9. Projection of the slow, reduced dynamics on the (S, E)-plane corresponding to the QSSA manifold (solid dots), the ZDP_1 manifold (blank dots), and the full system (48) (thin curves) for the first parameter set (49). The projections on the (S, E)-plane of all initial conditions lie on the boundary of the curvilinearly triangular region where the ZDP_1 manifold is positive. The steady state lies approximately at (0.47, 0.02).

Table 1. Accuracy of and computational effort associated with the functional iteration schemes on the model problem (41) in all three coordinate systems.

Table 2. Accuracy of and computational effort associated with

Table 3. Accuracy of and computational effort associated with Broyden's method for all three coordinate systems.