A LIE–DEPRIT PERTURBATION ALGORITHM FOR LINEAR DIFFERENTIAL EQUATIONS WITH PERIODIC COEFFICIENTS

Abstract. A perturbative procedure based on the Lie–Deprit algorithm of classical mechanics is proposed to compute analytic approximations to the fundamental matrix of linear differential equations with periodic coefficients. These approximations reproduce the structure assured by the Floquet theorem. Alternatively, the algorithm provides explicit approximations to the Lyapunov transformation reducing the original periodic problem to an autonomous system, as well as to its characteristic exponents. The procedure is computationally well adapted and converges for sufficiently small values of the perturbation parameter. Moreover, when the system evolves in a Lie group, the approximations also belong to the same Lie group, thus preserving qualitative properties of the exact solution.

1. Introduction. Given the linear system of differential equations

(1)    dY/dt = A(t) Y,

where A is a complex n × n matrix valued function whose entries are integrable periodic functions of t with period T, the much celebrated Floquet theorem [8] establishes that the fundamental matrix Y(t) can be factorized as [6]

(2)    Y(t) = P(t) exp(tF).
Here F and P are n × n matrices, P(t) = P(t + T) for all t and F is constant. As a matter of fact, this theorem combines two propositions. On the one hand, it provides a general statement about the structure of the solution, as eq. (2) establishes. On the other hand, it also includes Lyapunov's reducibility theorem [16]: the matrix P(t), provided it is invertible, performs a transformation to a different set of variables in such a way that the corresponding differential equation in the new variables is formulated in terms of a constant coefficient matrix F. Specifically, let us consider the transformation Y = P(t)Z with a periodic nonsingular matrix P(t). Substituting into (1) we get Ż = FZ, where F = P^{-1}(t)A(t)P(t) − P^{-1}(t)Ṗ(t) happens to be a constant matrix. In this way, exp(tF) in (2) is the exact solution of (1) previously transformed, by means of the periodic matrix P(t), to a representation where the new coefficient matrix is the constant matrix F. Thus, the periodic system (1) is reducible to an autonomous system by means of a Lyapunov transformation P(t) [1]. It is clear from (2) that stability properties of the solution Y(t) depend only on the matrix F, and more specifically, on the eigenvalues of F, the so-called characteristic exponents of (1), whose real parts are uniquely defined [9]. Thus, the trivial solution of (1) is asymptotically stable if and only if all the characteristic exponents of the system have negative real parts, whereas it is stable if and only if the characteristic exponents have nonpositive real parts, with pure imaginary or zero characteristic exponents corresponding to simple elementary divisors of the matrix F − λI, with λ ∈ C [16]. It is then most interesting to determine the matrix F or the monodromy matrix Y(T) = exp(TF).
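These notions can be checked numerically before any perturbative machinery is introduced. The following sketch (Python with NumPy/SciPy; the decoupled test system is a hypothetical example of ours, chosen so that the exponents ±μ are known in closed form) approximates the monodromy matrix Y(T) by direct integration of (1) and extracts the characteristic exponents from a matrix logarithm:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import logm

def characteristic_exponents(A, T, n):
    """Integrate dY/dt = A(t) Y, Y(0) = I, over one period to obtain the
    monodromy matrix Y(T); then F = (1/T) log Y(T), and the characteristic
    exponents are the eigenvalues of F (real parts uniquely defined)."""
    rhs = lambda t, y: (A(t) @ y.reshape(n, n)).ravel()
    sol = solve_ivp(rhs, (0.0, T), np.eye(n).ravel(), rtol=1e-10, atol=1e-12)
    YT = sol.y[:, -1].reshape(n, n)          # monodromy matrix
    F = logm(YT) / T
    return np.linalg.eigvals(F)

# Decoupled test system with exact exponents +mu and -mu
mu = 0.3
A = lambda t: np.array([[mu + np.cos(t), 0.0], [0.0, -mu + np.sin(t)]])
exponents = characteristic_exponents(A, 2 * np.pi, 2)
```

Since only Y(T) is needed, this numerical route yields the exponents but, unlike a perturbative scheme, gives no analytic dependence on the parameters of the problem.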
Unfortunately, the Floquet theorem does not provide practical methods to construct F , and thus one has to resort to approximate methods, either numerical or perturbative.
One possible technique for obtaining explicit approximations to both F and the Lyapunov transformation P(t) is the so-called Floquet–Magnus expansion [4]. With this procedure one can construct approximations of the form

(3)    Y(t) = exp(Λ(t)) exp(tF),

with

(4)    Λ(t) = Σ_{k≥1} Λ_k(t),    F = Σ_{k≥1} F_k,

in such a way that: (a) the terms Λ_k and F_k are determined by a recursive procedure, (b) the (truncated) series Λ(t) is a periodic function, so the structure (2) of the solution prescribed by the Floquet theorem is assured at every order of approximation, and (c) the series (4) are convergent at least under a condition (5) requiring ∫_0^T ‖A(t)‖ dt to be sufficiently small. This expansion has been recently used in the context of solid-state nuclear magnetic resonance (NMR) and compared with other averaging approaches [13].
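The structure (2) itself is easy to verify numerically: with F taken as any logarithm of the monodromy matrix, the matrix P(t) = Y(t) exp(−tF) must be T-periodic. A minimal sketch, assuming SciPy and a Hill-type test equation of our own choosing (not taken from the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm, logm

T = 2 * np.pi
# Hill-type equation x'' + (2 + 0.1 cos t) x = 0 written as a first-order system
A = lambda t: np.array([[0.0, 1.0], [-(2.0 + 0.1 * np.cos(t)), 0.0]])

def Y(t):
    """Fundamental matrix of dY/dt = A(t) Y with Y(0) = I."""
    rhs = lambda s, y: (A(s) @ y.reshape(2, 2)).ravel()
    sol = solve_ivp(rhs, (0.0, t), np.eye(2).ravel(), rtol=1e-11, atol=1e-13)
    return sol.y[:, -1].reshape(2, 2)

F = logm(Y(T)) / T               # any logarithm of the monodromy matrix works
P = lambda t: Y(t) @ expm(-t * F)

# Floquet structure: P(t + T) = P(t) even though A(t) is not constant
t0 = 0.7
err = np.linalg.norm(P(t0 + T) - P(t0))
```

The check relies only on Y(t + T) = Y(t) Y(T) and on exp(−tF) commuting with Y(T) = exp(TF), so it holds for whichever branch of the logarithm is returned.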
In this paper we focus on linear systems involving a (small) parameter ε of the form

(6)    dY/dt = A(t, ε) Y,

i.e., A(t) in (1) is given by

(7)    A(t, ε) = A_0 + Σ_{j≥1} ε^j A_j(t).

Here A_0 is a constant n × n matrix, A_j(t) = A_j(t + T) (j = 1, 2, . . .) are piecewise continuous functions integrable on the interval (0, T), and the series (7) is assumed to be convergent for |ε| < r_0, where r_0 is independent of t.
A variety of problems arising in mechanics and engineering are modeled by equation (6), often with matrices of large dimension (see e.g. [16], chapter 6). Moreover, the coefficient matrix A(t) frequently depends on additional parameters, so that stability (or instability) conditions lead to relations between these parameters which have to be determined. To do so one has to evaluate the characteristic exponents of (6) for small ε > 0. For other problems, computing approximations to the Lyapunov transformation P(t) allows one to compare with real measurements in a non-stroboscopic way [13]. Perturbative (analytic) methods are then very useful in this setting, especially when the dimension of the problem is large.
For equation (6) and in the spirit of the Lyapunov transformation leading to an autonomous system, we present a new perturbative algorithm designed to build recursively approximations of the solution as given by the Floquet theorem. It is computationally well adapted, even for high dimension, and convergent for sufficiently small values of ε. In addition, if the system evolves in a Lie group (as happens, for instance, with Hamiltonian systems), then the approximation also belongs to this Lie group and consequently preserves the qualitative properties of the exact solution. The scheme allows one not only to get approximate expressions for the characteristic exponents up to a high order in ε (and thus it provides estimates on the stability domain in the parameter space), but also explicit expressions for the periodic matrix carrying out the corresponding Lyapunov transformation.
The procedure is based on the unitary perturbation formalism recently introduced for time-dependent problems in quantum mechanics [5] by following the approach of the Lie-Deprit method of classical mechanics [7]. Essentially, the Lyapunov transformation to the new variables is generated by solving a linear differential equation with 'time' ε involving a new operator L in such a way that the dynamics in the new variables is easier to solve, at least to a certain order in the ε parameter. In this sense it also differs from the Floquet-Magnus expansion introduced in [4].
The plan of the paper is the following. The algorithm is first presented in section 2 for general linear systems and then applied to the specific case of constructing the Lyapunov transformation for the periodic case in section 3, where the convergence problem is also discussed. In section 4 we consider an alternative procedure to compute the characteristic exponents and illustrate the technique with a pair of examples. Finally, section 5 contains some concluding remarks.
2. Lie-Deprit perturbative algorithm for linear systems. Consider the equation

(8)    dY/dt = A(t, ε) Y,    Y(t_0, ε) = I,

with A(t, ε) given by (7), and a nonsingular transformation P(t, ε) such that

(9)    Y(t, ε) = P(t, ε) Z(t, ε).

Applying (9) to (8) we have again a linear system for the transformed matrix Z(t, ε):

(10)    dZ/dt = K(t, ε) Z,

with

(11)    K = P^{-1} A P − P^{-1} ∂P/∂t.

The transformation defined by (9) is called a Lyapunov transformation. The goal here is to determine P(t, ε) so that (10) is, in some sense, easier to solve than the original equation (8). When A(t, ε) is periodic, an obvious choice is to determine P(t, ε) in such a way that K is constant, but other alternatives are possible [5]. In the following, for simplicity, we will assume t_0 = 0. Since we are dealing with a small parameter ε, it is natural to take P as a near-identity transformation, i.e., P(t, ε) = I + O(ε). Furthermore, we assume that P(t, ε) satisfies an evolution law with respect to the ε parameter similar to the usual t-evolution of Y(t, ε), namely

(12)    ∂P/∂ε = L(t, ε) P(t, ε),

in terms of the (unknown) generator L(t, ε). Alternatively,

(13)    ∂P^{-1}/∂ε = −P^{-1}(t, ε) L(t, ε).

This assumption can be justified as follows. Since the perturbed problem defined by A(t, ε) can be interpreted as the evolution with respect to the parameter ε of the unperturbed problem (defined by the matrix A_0), the idea is to express this ε-evolution in a similar way as the t-evolution. This evolution law for P(t, ε) is analogous to that proposed by Deprit in [7] to construct explicitly canonical transformations in the perturbative treatment of Hamiltonian dynamical systems.
Once the generator L has been determined, it is possible to obtain P and P^{-1} by formally applying the Magnus expansion [14] to the linear equation (12). As is well known, this procedure allows one to write the solution of a general operator linear equation dU(t)/dt = H(t)U(t) as the exponential of an infinite series [10], each term being a linear combination of integrals and nested commutators involving H(t) at different times (see [2] for a review). For equation (12) we can thus write

(14)    P(t, ε) = exp(Ω(t, ε)),

with Ω(t, ε) = Σ_{m≥1} Ω_m(t, ε). Several procedures exist in the literature to construct the terms Ω_m [11]. In particular, the recurrence (15) has been shown to be computationally efficient. In (15), the B_j represent Bernoulli numbers, [·, ·] stands for the commutator, [A, B] ≡ AB − BA, and the dependence on ε has not been made explicit for clarity. Differentiating equation (11) with respect to ε and taking into account (12) and (13) leads to equation (16). Notice that (16) involves both the transformation P and its generator L. A more convenient expression in terms only of L can be obtained by inserting (14) into (16), which results in equation (17), where we have used the identity

(18)    e^{Ω} (∂A/∂ε) e^{−Ω} = e^{ad_Ω} (∂A/∂ε) = Σ_{k≥0} (1/k!) ad_Ω^k (∂A/∂ε),

with ad_Ω^0 B = B and ad_Ω^k B = [Ω, ad_Ω^{k−1} B]. It is worth remarking that (17) involves only the original matrix A(t, ε), the transformed matrix K(t, ε) and the generator L(t, ε) of the transformation. In this respect, notice from (15) that Ω in (17) depends on L(t) only through linear combinations of integrals and nested commutators of L evaluated at different times.
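The identity e^{Ω} X e^{−Ω} = e^{ad_Ω} X used above can be verified numerically by truncating the adjoint series. The following sketch uses arbitrary random matrices, purely for illustration (they play the roles of Ω and ∂A/∂ε):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(42)
Om = 0.5 * rng.standard_normal((3, 3))   # plays the role of Omega
X = rng.standard_normal((3, 3))          # plays the role of dA/d(eps)

ad = lambda A, B: A @ B - B @ A          # commutator [A, B]

# Right-hand side: sum_{k>=0} ad_Om^k(X) / k!, truncated at 40 terms
rhs, term = np.zeros((3, 3)), X.copy()
for k in range(1, 40):
    rhs = rhs + term
    term = ad(Om, term) / k              # next term: ad_Om^k(X) / k!

# Left-hand side: similarity transformation by the matrix exponential
lhs = expm(Om) @ X @ expm(-Om)
```

With ‖Ω‖ of order one the adjoint series converges factorially, so 40 terms already reproduce the similarity transformation to machine accuracy.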
We next introduce, in addition to expression (7) for the matrix A(t, ε), similar series expansions for K(t, ε) and the (still unknown) generator L(t, ε):

(19)    K(t, ε) = Σ_{n≥0} ε^n K_n(t),    L(t, ε) = Σ_{n≥0} ε^n L_{n+1}(t),

and work out equation (17) to get the corresponding relationship linking K_n, L_n and A_n at each power of ε. To do that we proceed by successive stages. First, we obtain a similar series for Ω(t, ε):

(20)    Ω(t, ε) = Σ_{j≥1} ε^j v_j(t),

where v_j(t) depends on L_k(t), k ≤ j. This can be accomplished by applying the recursive procedure (15) to the series defining L in (19) and reordering the series Ω(t, ε) according to powers of ε. A simple calculation leads to the first terms of Ω:

(21)    v_1 = L_1,    v_2 = (1/2) L_2,    v_3 = (1/3) L_3 + (1/12) [L_2, L_1],

whereas terms up to n = 10 have been explicitly obtained by implementing the recurrence (15) with a symbolic algebra package [5]. By working out expression (15) it is possible to derive a recursive algorithm, (22), for the v_n. Second, we have to express e^{ad_Ω} ∂A/∂ε in (17) also as a power series in ε,

(23)    e^{ad_Ω} (∂A/∂ε) = Σ_{n≥0} ε^n w_n(t).

This is done by inserting the series (20) into (18). After some algebra, the successive terms w_n(t) can be determined with the algorithm (24); the first terms read explicitly

(25)    w_0 = A_1,    w_1 = 2 A_2 + [L_1, A_1].

Finally, we insert the series (19) and (23) into equation (17) and collect terms of the same power in ε. In this way we arrive at

(26)    ∂L_n/∂t = [A_0, L_n] + n K_n − F_n(t),    n ≥ 1,

where F_n(t) involves only quantities already determined at previous orders. For a specific problem, it is computationally more efficient to apply recurrences (22)–(24) directly to the matrix A than to substitute the expression of A into the explicit formulae (21), (25) obtained in [5].
3. Constructing the Lyapunov transformation for periodic linear systems. So far the treatment is generally valid for any differential equation (8) with A(t, ε) given by (7) and it allows one to construct perturbatively a transformation P (t, ε) generated by L(t, ε) such that the new variable Z(t, ε) satisfies the equation (10) with a new coefficient matrix K(t, ε). Notice that neither K nor L have been determined at this point. Of course, once K is fixed according to some appropriate criterion, then L is obtained by solving iteratively (26) and then the transformation P is constructed by exponentiating the series (20).
In the specific case of a periodic system, A_j(t) = A_j(t + T), the obvious choice is to construct a Lyapunov (periodic) transformation so that K is a constant matrix. In other words, we solve (26) with the following requirements: (i) the generator L is periodic, i.e., L_n(t + T) = L_n(t) and L_n(0) = 0 for all n ≥ 1, so that Ω(t + T, ε) = Ω(t, ε), Ω(0, ε) = 0 and therefore P(0, ε) = I; (ii) the new matrix K in (10) is independent of time. In consequence, Z(t, ε) = exp(tK(ε)) and

(29)    Y(t, ε) = exp(Ω(t, ε)) exp(tK(ε)).

By satisfying these conditions we recover the structure assured by the Floquet theorem, and, provided equation (26) can be solved, we are able to construct explicitly K(ε) and the transformation P(t, ε) as power series in the parameter ε. This constitutes in consequence our next goal. To proceed, we integrate (26) over one period T. Since we are assuming that L_n(T) = L_n(0) = 0, then 0 = [A_0, L̄_n] + nK_n − F̄_n, or

(30)    K_n = (1/n) ( F̄_n − [A_0, L̄_n] ),

where

(31)    F̄_n = (1/T) ∫_0^T F_n(t) dt,    L̄_n = (1/T) ∫_0^T L_n(t) dt

denote the average values of F_n and L_n over the period, respectively. On the other hand, equation (26) can be written as

(32)    ∂L_n/∂t = ad_{A_0} L_n + nK_n − F_n(t)

in terms of the linear operator ad_{A_0} (with ad_{A_0} L ≡ [A_0, L]), so that its formal solution with L_n(0) = 0 reads

(33)    L_n(t) = e^{t ad_{A_0}} ∫_0^t e^{−s ad_{A_0}} ( nK_n − F_n(s) ) ds.

Inserting (30) in this expression and integrating we get an explicit expression for L_n(t) involving L̄_n. Now, since L_n(T) = 0, it is clear that L̄_n may be chosen as any solution C_n of the matrix equation (34) (the solution is not unique [16]). In summary, the problem is solved by taking K_n and L_n(t) as in (35), where C_n is a particular solution of (34) or, alternatively, of (36), and F̄_n is given by (31). In this way we guarantee that L_n(t + T) = L_n(t) and L_n(0) = 0. If A_0 is a multiple of the identity matrix, then clearly the integral on the r.h.s. of (36) vanishes, so that we may take C_n = 0 and K_n = (1/n) F̄_n. In any case, equations (35)–(36) allow one to construct the series for the new constant coefficient matrix K(ε) and the generator L(t, ε) up to any order n, and finally Y(t, ε) as given by (29), with Ω(t, ε) computed through the series (20).
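At each order one thus has to solve a singular linear matrix equation involving the operator ad_{A_0}, whose solution is not unique. As a generic illustration of how such equations can be handled in practice (this is our own model problem, not the paper's specific equations (34)–(36)), a particular solution of [A_0, C] = M can be obtained by vectorization and least squares:

```python
import numpy as np

def solve_commutator(A0, M):
    """Particular (minimum-norm) solution C of [A0, C] = M.
    With column-stacking vec, vec([A0, C]) = (I kron A0 - A0^T kron I) vec(C),
    so least squares on the Kronecker matrix yields a particular solution
    even though ad_{A0} is singular (the solution is never unique)."""
    n = A0.shape[0]
    K = np.kron(np.eye(n), A0) - np.kron(A0.T, np.eye(n))
    c, *_ = np.linalg.lstsq(K, M.reshape(-1, order="F"), rcond=None)
    return c.reshape(n, n, order="F")

A0 = np.diag([1.0, 2.0])
M = np.array([[0.0, 1.0], [1.0, 0.0]])   # off-diagonal, hence in the range of ad_{A0}
C = solve_commutator(A0, M)
residual = np.linalg.norm(A0 @ C - C @ A0 - M)
```

For diagonal A_0 the operator acts entrywise as ([A_0, C])_{ij} = (a_i − a_j) C_{ij}, which makes the example easy to check by hand; the least-squares route simply sets the null-space (diagonal) components to zero.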

3.1. Example: Quantum harmonic oscillator. At this stage it is appropriate to illustrate the main features of the previous algorithm by applying it to a simple example. To this end we consider the quantum mechanical description of a linearly driven harmonic oscillator with Hamiltonian (ℏ = 1)

H(t) = (ω_0/2) (p̂² + q̂²) + β q̂ cos ωt,

where ω_0, β and ω are real parameters, with ω ≠ ω_0. The position and momentum operators q̂ and p̂ obey the commutation relation [q̂, p̂] = i. Alternatively, in terms of the ladder operators a = (q̂ + ip̂)/√2, a† = (q̂ − ip̂)/√2, satisfying [a, a†] = 1,

H(t) = ω_0 (a†a + 1/2) + (β/√2) (a + a†) cos ωt.

The time evolution of this system is described by a unitary operator U(t) verifying the Schrödinger equation

(37)    i dU/dt = H(t) U(t),    U(0) = I,

which is a particular operator realization of equation (1) with A(t) = −iH(t). We introduce a parameter ε so that A(t, ε) = A_0 + εA_1(t), with

A_0 = −i ω_0 (a†a + 1/2),    A_1(t) = −i (β/√2) (a + a†) cos ωt.

In this way we can apply the previous perturbative algorithm and eventually recover the exact solution of the problem. This is so even when equation (37) is formulated in terms of operators. The important point here, however, is the fact that a and a† generate a Lie algebra of finite dimension. The starting point is, as usual, to take K_0 = A_0. Next we determine the average value F̄_1 of F_1 = A_1 over the period T = 2π/ω, which is obviously zero. Then we fix C_1 so as to satisfy (36) and obtain K_1 and the generator L_1(t) from equation (35). Proceeding in the same way for n = 2 we get K_2 and L_2(t), whereas K_n = 0 and L_n = 0 for n ≥ 3. Therefore K(ε) = K_0 + εK_1 + ε²K_2, which can be expressed in terms of (q̂, p̂) with ρ ≡ ω/ω_0 after taking ε = 1. As for the transformation, we have Ω(t, ε) = εL_1(t) + ε²L_2(t)/2. The corresponding expressions agree with those obtained by applying the Floquet–Magnus expansion (3)–(4) in [4] and reproduce the exact unitary evolution operator. An important difference exists, however, between both schemes.
Whereas in the Floquet-Magnus expansion the whole series (4) has to be computed to get the exact solution, the new perturbative scheme (35) needs only to be carried out up to n = 2. In other words, the term K n in (19) is in general different from F n in (4), n ≥ 1.

3.2. On the convergence of the procedure. With the previous formalism we have formally constructed the fundamental matrix of the periodic linear system (8) in the form (29), with P(t, ε) = exp(Ω(t, ε)) periodic of period T. The convergence of the procedure when the parameter ε takes values in the complex plane can be established in two steps. In the first, the analyticity of P(t, ε) and K(ε) in a disk |ε| < |ε_0| ≤ r_0, where r_0 is the radius of convergence of the series defining the coefficient matrix A(t, ε), is established. In the second, P is expressed as the exponential of an analytic function Ω(t, ε).

4. A simplified procedure to compute the characteristic exponents.
4.1. The procedure. The algorithm outlined in section 3 to determine explicitly the matrices Ω(t, ε) and K(ε) as power series in the factorization (29) requires fixing at each order a constant matrix C_n satisfying equation (36) and then computing the new coefficient matrix K_n and the generator L_n(t) with equation (35). This task can be simplified in some situations by relaxing the requirements one imposes on the generator L_n(t). We next analyze one case which is relevant in applications, in particular when one is only interested in computing the characteristic exponents of the system. It is clear that the general (formal) solution of equation (32) is given by

(44)    L_n(t) = e^{t ad_{A_0}} ( L_n(0) + G_n(t) − G_n(0) ),

where G_n(t) is an antiderivative of e^{−t ad_{A_0}} (nK_n − F_n(t)). Now we make the choice (45) and impose L_n(t) to be periodic. Then, evaluating (44) at t = T, we have L_n(T) = L_n(0) = e^{T ad_{A_0}} ( L_n(0) + G_n(T) − G_n(0) ), or equivalently

( e^{−T ad_{A_0}} − I ) L_n(0) = G_n(T) − G_n(0),

where we have used (45). In consequence, by choosing L_n(0) = −C_n we get the generator of a periodic transformation to a new representation whose constant coefficient matrix is modified accordingly. Notice that, since the terms K_n in the new representation have changed, the generators L_n of the corresponding transformation are also different. Are there simpler ways of choosing L_n(0) satisfying these requirements? Suppose we have determined the terms F_n and that the function G_n(t) verifies condition (46). Then it makes sense to choose L_n(0) = G_n(0) and finally, from (44), we have L_n(t) = e^{t ad_{A_0}} G_n(t). Unfortunately, it is not evident how to know in advance whether eq. (46) is satisfied for a given problem. In summary, for systems verifying (46), with G_n(t) denoting the indefinite integral

(47)    G_n(t) = ∫^t e^{−s ad_{A_0}} ( nK_n − F_n(s) ) ds,

the algorithm (35)–(36) may be replaced by the simpler procedure

(48)    L_n(t) = e^{t ad_{A_0}} G_n(t).
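The formal solution (44) can be checked numerically: computing G_n by quadrature, the matrix L(t) = e^{t ad_{A_0}} G_n(t) (the case L_n(0) = G_n(0) = 0 of (44)) satisfies the differential equation (32). A sketch with arbitrary illustrative data (the matrices A_0, K and the function F below are our own choices, not taken from the examples of the paper):

```python
import numpy as np
from scipy.integrate import quad_vec
from scipy.linalg import expm

# Arbitrary data, for illustration only
A0 = np.array([[0.0, 1.0], [-1.0, 0.0]])
K = np.diag([0.1, 0.2])
F = lambda t: np.array([[np.sin(t), 0.0], [0.0, np.cos(2 * t)]])
n = 1

conj = lambda s, X: expm(s * A0) @ X @ expm(-s * A0)   # e^{s ad_{A0}} X

def G(t):
    """Antiderivative (from 0) of e^{-s ad_{A0}} (n K - F(s))."""
    g, _ = quad_vec(lambda s: conj(-s, n * K - F(s)), 0.0, t)
    return g

L = lambda t: conj(t, G(t))        # formal solution with L(0) = G(0) = 0

# Check dL/dt = [A0, L] + n K - F(t) by a central difference
t, h = 0.8, 1e-3
lhs = (L(t + h) - L(t - h)) / (2 * h)
rhs = A0 @ L(t) - L(t) @ A0 + n * K - F(t)
```

The identity follows because G'(t) = e^{−t ad_{A_0}} (nK − F(t)), so conjugating by e^{tA_0} restores exactly the inhomogeneous term of (32).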
In other words, although the structure given by the Floquet theorem is no longer reproduced, M ≡ e^{−Ω(0,ε)} e^{T K(ε)} e^{Ω(0,ε)} is a monodromy matrix, with the same eigenvalues as e^{T K(ε)}. In consequence, the eigenvalues of the new matrix K(ε) obtained with the procedure (48) are precisely the characteristic exponents of the system, so that their computation can be simplified a good deal.

4.2. Illustrative examples. We next illustrate how this simpler procedure can be carried out up to a very high order in the expansion parameter by computing the characteristic exponents of two 2 × 2 systems. It is worth stressing that the whole process involves the algebraic structure of the problem rather than its particular dimension.

Example 1. First we consider the system (50) worked out by Malkin [1]. Here ε is a real parameter and T = 2π. Using the method of small parameters, he showed that the characteristic exponents of the system are negative at least for ε < 1/9, whereas in [16] the domain of values of ε that ensure asymptotic stability is extended up to ε < 2/3. The fundamental matrix Y(t, ε) corresponding to system (50) verifies equation (8), with A_0 and A_1(t) given in (51). In this case (46) holds, and thus the perturbation scheme (48) can be applied. We have carried out the procedure up to n = 22 with a symbolic algebra package, thus obtaining K(ε) in terms of the identity I and certain constant matrices, with the series defining k_1(ε) and k_2(ε) written up to terms of order ε^8. We then compute the eigenvalues of K(ε) as a function of ε obtained with our perturbative approach for different orders of ε and compare with the exact result (as determined by the numerical integration of equation (50) with 25 digits of accuracy). One of the eigenvalues turns out to be always negative, whereas the second one is negative only for ε < 0.745023, so that it is this value which determines the stability region of the system. In Figure 1, top panel, we represent this exact eigenvalue (solid line) together with the results rendered by the perturbative algorithm of order ε^10 (dot-dashed line), ε^15 (dashed line) and ε^22 (dotted line) for different values of ε, whereas in the bottom diagram we show, in logarithmic scale, the corresponding error with respect to the exact value (with the same code for the lines). Here we also collect the result given by the approximation of order ε^6 (solid line).
Notice that higher order approximations provide results that are indistinguishable from the exact value for increasingly larger values of the perturbation parameter ε. This is so even though convergence is not assured for these values of ε.

Example 2. As a second illustration we take the Mathieu equation

(52)    ẍ + (a + ε cos 2t) x = 0,

with period T = π. Expressing this equation as a first order system of the form (8), we get

(53)    A_0 = ( 0  1 ; −a  0 ),    A_1(t) = ( 0  0 ; −cos 2t  0 ),

with all other A_j = 0. Floquet theory shows that the (complex) number ν is a characteristic exponent of the system if and only if there exists a nontrivial solution of the form e^{νt} p(t), where p(t + T) = p(t), and that the transition curves (also called characteristic curves) in the a–ε plane separating stable from unstable solutions correspond to periodic solutions of (52) [9]. Our purpose here is to determine the first of such transition curves by computing approximately the characteristic exponents with the previous perturbative procedure. It is worth remarking that the approximate solutions (29) and (49) obtained in this way are both symplectic by construction, as is the exact fundamental matrix Y(t, ε) corresponding to (53).
The eigenvalues of the matrix A_0 are α_1 = −√a i, α_2 = √a i, and thus α_2 − α_1 ≠ (2πi/T) m (where m is an integer) as long as a ≠ m², which we herewith assume. This is consistent with condition (41).
Here, for clarity in the presentation, we have omitted terms of order ε^8. The characteristic exponents are just ν_{1,2} = ±√(−det K(ε)) = ±√(k_{12}(ε) k_{21}(ε)). To determine the first transition curve we have to find the a, ε values for which ν_{1,2} = 0 or, equivalently, det K^{[8]}(ε) = 0. Series expansions for this curve can be found in the literature (see e.g. [15]) in terms of the parameters a and q ≡ −ε/2. Thus, up to O(ε^8), it is given by

(56)    a_0 = −(1/2) q² + (7/128) q⁴ − (29/2304) q⁶.

We have substituted expression (56) into det K^{[8]}(ε) and verified that, as required by consistency of the approximation, det K^{[8]}(ε) = O(ε^{10}). We mention in passing that by repeating this procedure with the approximation for det K(ε) obtained in [16, p. 363], one already has a non-vanishing coefficient multiplying ε^6. This seems to indicate the existence of some misprints in the formulae collected in [16, p. 363], which is consistent with the observed differences with our own results. As a second test, we compute the first transition curve with the Mathematica function MathieuCharacteristicA[0, −ε/2], and compare with the power series (56) and our perturbative scheme, displaying graphically the points where det K(ε) = 0 for different orders of ε. The corresponding results are collected in Figure 2. Dotted, dot-dashed and dashed lines correspond, respectively, to the 4th, 6th and 8th order approximations obtained from (55), whereas the black solid line (lowest curve in the right hand side of the figure) stands for the exact result rendered by Mathematica. The 8th-order polynomial approximation for a_0(ε) is clearly visible in the figure as the gray solid line. Notice how higher order approximations in the perturbative scheme render more accurate results over increasingly large intervals of the parameter ε. This is so although the considered values of ε are such that convergence of the procedure is not guaranteed.
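For readers without access to Mathematica, SciPy's `mathieu_a` provides the same reference values for the characteristic curve; the following sketch compares it with the classical small-q series for a_0 (coefficients as quoted in standard references, e.g. [15]):

```python
import numpy as np
from scipy.special import mathieu_a

# Classical small-q expansion of the first characteristic value a_0(q)
a0_series = lambda q: -q**2 / 2 + 7 * q**4 / 128 - 29 * q**6 / 2304

for eps in (0.1, 0.3, 0.5):
    q = eps / 2          # a_0(q) is even in q, so the sign of q = -eps/2 is immaterial
    gap = abs(mathieu_a(0, q) - a0_series(q))
    # gap = O(eps^8): the truncated series tracks the exact curve
```

The discrepancy behaves as O(ε^8), consistent with the order at which the series (56) is truncated.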
The same procedure can be repeated, of course, to determine other transition curves. In that case it is convenient to consider instead a = m² + µ (m = 1, 2, . . .) and work with the new parameter µ.

5. Concluding remarks. We have adapted a variant of the Lie-Deprit algorithm to construct analytic approximations for both the constant matrix K(ε) and the periodic matrix P(t, ε) in the Floquet factorization (29) of the fundamental matrix of a linear differential equation with periodic coefficients involving a small parameter ε. It constitutes in fact a novel way to obtain, order by order in ε, the Lyapunov transformation leading to an autonomous system. The procedure can be easily implemented in a symbolic algebra package and is convergent for sufficiently small values of ε.
Furthermore, when the exact solution of the problem evolves in a Lie group, the successive approximations rendered by the scheme also belong to the same Lie group, since all the equations one has to solve within this framework to get the generator L of the Lyapunov transformation (i.e., equations (26)) evolve in the corresponding Lie algebra, and the transformation itself is obtained by exponentiating a linear combination of the L_k and their nested commutators. This feature has important consequences regarding the preservation of qualitative properties of the exact solution by the resulting approximations.
Stability analysis of problems appearing in mechanics and engineering leads naturally to the study of equation (6), very often with the coefficient matrix belonging to a Lie algebra, so that the solution evolves in the corresponding Lie group. Thus, in classical Hamiltonian dynamics Y(t) is a symplectic matrix, in rigid body mechanics Y(t) is orthogonal, whereas in quantum mechanical problems Y(t) is a unitary matrix. In all cases this leads to the preservation of some relevant qualitative properties along the exact evolution: the symplectic form in Hamiltonian dynamics (in particular, the volume in phase space), transition probabilities in quantum mechanics, etc. Moreover, in practical applications the coefficient matrix involves several parameters, so that it is important to have analytic approximations available to determine perturbatively the stability domain in the space of parameters. It makes sense, then, to determine the transformation P(t, ε) as an element of the same Lie group, and this is automatically accomplished if L(t, ε) in (12) belongs to the corresponding Lie algebra. Our formalism not only determines the monodromy matrix but is also able to approximate the solution at intermediate times t < T.
Although the convergence of the algorithm is assured for sufficiently small values of ε, the examples we have included seem to indicate that the actual convergence domain is larger, and that by considering higher order approximations we are able to reproduce the exact solution in an increasingly large range of values of ε.
The algorithm we propose here can also be useful to compute the Lyapunov transformation for non-homogeneous periodic systems of the form

(57)    ẏ = A(t, ε) y + f(y, t),

with a periodic function f [3]. In that case the periodic transformation y(t) = P(t, ε) z(t) changes (57) into

(58)    ż = K(ε) z + P^{-1}(t, ε) f(P(t, ε) z, t),

and the problem of analyzing the stability of the trivial solution of (57) is equivalent to that of the trivial solution of (58). In this respect, the computation of P^{-1} is particularly easy in this approach: since P = exp(Ω(t, ε)), we need only exponentiate −Ω(t, ε).