Spectral analysis for linear differential-algebraic equations

In this paper, the spectral theory of linear differential-algebraic equations 
(DAEs) is discussed. Lyapunov and Sacker-Sell spectra, which are well 
known for ordinary differential equations (ODEs), are studied for linear DAEs 
and their adjoint equations. The spectral properties of the DAEs are investigated 
via the so-called essentially underlying ODEs.

1. Introduction. In this paper, we study the spectral analysis for linear systems of differential-algebraic equations (DAEs)

  E(t)ẋ = A(t)x + f(t),   t ∈ I,   (1)

on the half-line I = [0, ∞), together with an initial condition x(0) = x₀. Here we assume that E, A ∈ C(I, R^{n×n}) and f ∈ C(I, R^n) are sufficiently smooth, and that the leading coefficient E(t) is singular for all t. We use the notation C(I, R^{n×n}) to denote the space of continuous functions from I to R^{n×n}. DAEs are an important and convenient modeling concept in many different application areas such as multibody mechanics, electrical circuit design, optimal control, chemical reactions, and fluid dynamics, see [4,16,17,18,19,24,25] and the references therein. Systems of the form (1) arise when one linearizes a general implicit nonlinear system of DAEs F(t, x, ẋ) = 0, t ∈ I, along a particular non-stationary solution [6]. However, many difficulties arise in both the qualitative and the quantitative study of DAEs, since the dynamics is constrained to a manifold which is often given only implicitly.
The spectral theory for ODEs, initiated by Lyapunov in [23], is now well established, see [10,26] and the reference list of [1]. In recent years the numerical computation of spectral intervals has also become feasible, see [3,14,15] and the references therein. Recently, in [21,22], the classical spectral theory for ODEs was extended to linear DAEs, showing that there are substantial differences in the theory and that most results of the classical ODE theory hold for DAEs only under further restrictions. Numerical methods for computing spectral quantities of DAEs were introduced in [21,22].
In this paper, the spectral theory for DAEs initiated in [21] is revisited and improved. In contrast to [21], we carry out the spectral analysis via rather general essentially underlying ODEs (EUODEs), which require less restrictive assumptions and simultaneously allow more flexibility in the construction of numerical methods, via smooth QR or singular value decompositions of a fundamental solution matrix.
The outline of the paper is as follows. In the following section, we briefly recall the strangeness-free form of DAEs developed in [19] and propose the construction of EUODEs. In Section 3 we extend the spectral theory of DAEs that was initiated in [21]. The spectral relation between a DAE and its adjoint equation is investigated as well. In Section 4, the stability of Lyapunov exponents is discussed. In Section 5 we will then discuss the concept of exponential dichotomy and the Sacker-Sell spectrum for DAEs. We close the paper with some conclusions and suggestions for future work.
2. Strangeness-free DAEs and EUODEs. General linear DAEs with variable coefficients have been studied in detail during the last twenty years, see [19] and the references therein. In order to understand the solution behavior and to obtain numerical solutions, the necessary information about derivatives of the equations has to be used. This has led to the concept of the strangeness index, which under very mild assumptions allows the DAE, together with (some of) its derivatives, to be reformulated as a system with the same solutions that is strangeness-free, i.e., for which the algebraic and differential parts of the system are easily separated. In this paper, for the discussion of spectral intervals, we restrict ourselves to regular DAEs with sufficiently smooth coefficients, i.e., we require that (1) (or the original nonlinear DAE locally) has a unique solution for appropriately chosen (consistent) initial conditions; see [19] for a discussion of more general nonregular DAEs.
With the theory and appropriate numerical methods available, for regular DAEs we may assume that the homogeneous DAE under consideration is already strangeness-free and has the form

  E₁(t)ẋ = A₁(t)x,
  0 = A₂(t)x,   t ∈ I,   (2)

with E₁ ∈ C(I, R^{d×n}) and A₂ ∈ C(I, R^{(n−d)×n}), where the stacked matrix [E₁; A₂] is invertible for all t; in particular, E₁ and A₂ are of full row-rank. Under the discussed assumptions it is easy to see that an initial vector x₀ ∈ R^n is consistent for (2) if and only if A₂(0)x₀ = 0.
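The consistency condition can be illustrated numerically. The following small sketch uses hypothetical constant coefficients (the scalar ODE ẋ₁ = x₁ coupled with the constraint 0 = x₂, so E₁ = [1 0] and A₂ = [0 1]); it checks strangeness-freeness via the stacked matrix and tests initial vectors for consistency.

```python
import numpy as np

# Hypothetical strangeness-free DAE of the form (2) with n = 2, d = 1:
#   x1' = x1  (differential part, E1 = [1 0])
#   0   = x2  (algebraic part,    A2 = [0 1])
E1 = np.array([[1.0, 0.0]])
A2 = np.array([[0.0, 1.0]])

# Strangeness-freeness: the stacked matrix [E1; A2] must be invertible.
stacked = np.vstack([E1, A2])
assert np.linalg.matrix_rank(stacked) == 2

# An initial vector x0 is consistent iff A2(0) x0 = 0.
def is_consistent(x0):
    return np.allclose(A2 @ x0, 0.0)

print(is_consistent(np.array([3.0, 0.0])))  # lies in the constraint set
print(is_consistent(np.array([3.0, 1.0])))  # violates 0 = x2
```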
The following modification of [21, Lemma 7] is the key to the theory and to the numerical methods for the computation of spectral intervals for DAEs.

Lemma 2.1. Consider a regular strangeness-free DAE system of the form (2) with (sufficiently smooth) coefficients E, A. Let U ∈ C¹(I, R^{n×d}) be an arbitrary orthonormal basis of the solution subspace of (2). Then there exists a matrix function V ∈ C(I, R^{n×d}) with pointwise orthonormal columns such that the change of variables x = Uz and multiplication of both sides of (2) from the left by V^T yield the system

  ℰ(t)ż = 𝒜(t)z,   t ∈ I,   (3)

where ℰ := V^T EU, 𝒜 := V^T AU − V^T EU̇, and ℰ is upper triangular.

Proof. Considering an arbitrary solution x and substituting x = Uz into equation (2), we obtain

  EUż = (AU − EU̇)z.   (4)

Since (2) is strangeness-free and since A₂U = 0, the matrix EU must have full column-rank. Thus, see [11], there exists a smooth QR-decomposition EU = Vℰ, where the columns of V form an orthonormal set and ℰ is nonsingular and upper triangular. This decomposition is unique if the diagonal elements of ℰ are chosen positive. Multiplying both sides of (4) by V^T, we arrive at (3).

System (3) is an implicitly given ODE, since ℰ is nonsingular. It is called the essentially underlying implicit ODE system (EUODE) of (2), and it can be made explicit by multiplying with ℰ⁻¹ from the left; see also [2] for the construction of EUODEs of so-called properly-stated DAEs. In numerical methods it is necessary to construct the coefficients of the EUODE pointwise. Note, however, that in (4), for a fixed given U, the matrix function V is not unique. In fact, any V for which V^T EU is invertible yields an implicit EUODE, but obviously ℰ⁻¹𝒜 is unique. Thus, for a given U, the correspondence between the solutions of (2) and those of (3) is one-to-one, i.e., x is a solution of (2) if and only if z = U^T x is a solution of (3). Note that UU^T is just the orthogonal projection onto the solution subspace of (2), hence x = UU^T x for every solution x of (2).

Remark 1. Different special choices of the basis U lead to different methods for approximating Lyapunov exponents. If we take U to be the Q-factor of the QR factorization of a minimal fundamental matrix X, then we obtain the continuous QR method for the numerical approximation of spectral intervals for (2) [21,22]. Alternatively, we may use a smooth singular value decomposition (if it exists) of X, which allows us to extend the SVD-based methods of [12,13] to DAEs.
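The pointwise construction in Lemma 2.1 can be carried out with standard numerical linear algebra. The sketch below uses a hypothetical 2×2 strangeness-free DAE with d = 1 (all coefficient functions are illustrative assumptions, not taken from the paper): it forms ℰ = V^T EU and 𝒜 = V^T AU − V^T EU̇ at a given t via a QR decomposition of E(t)U(t), approximating U̇ by central differences.

```python
import numpy as np

# Pointwise EUODE construction as in Lemma 2.1 for a hypothetical
# 2x2 strangeness-free DAE with d = 1 (illustrative coefficients).
def E(t):
    return np.array([[1.0, t], [0.0, 0.0]])    # E1(t) = [1, t], second row zero

def A(t):
    return np.array([[2.0, 0.0], [0.0, 1.0]])  # A2 = [0, 1]: constraint 0 = x2

def U(t):
    # Orthonormal basis of the solution subspace {x : A2(t) x = 0}.
    return np.array([[1.0], [0.0]])

def Udot(t, h=1e-6):
    # Central-difference approximation of the derivative of U.
    return (U(t + h) - U(t - h)) / (2.0 * h)

def euode_coefficients(t):
    # A (pointwise) QR decomposition of E(t)U(t) gives V and the
    # nonsingular upper triangular factor calE.
    V, calE = np.linalg.qr(E(t) @ U(t))
    calA = V.T @ A(t) @ U(t) - V.T @ E(t) @ Udot(t)
    return calE, calA

calE, calA = euode_coefficients(1.0)
# calE^{-1} calA is independent of the QR sign convention; on the
# constraint x2 = 0 the DAE reduces to x1' = 2 x1, so the explicit
# EUODE is the scalar ODE z' = 2 z.
print(np.linalg.solve(calE, calA))
```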

3. Lyapunov exponents and Lyapunov spectral intervals.
In this section, we briefly recall basic concepts that are needed for the spectral theory of DAEs.

Definition 3.1. A matrix function X ∈ C¹(I, R^{n×k}), d ≤ k ≤ n, is called a fundamental solution matrix of the strangeness-free DAE (2) if each of its columns is a solution of (2) and rank X(t) = d for all t ≥ 0. A fundamental solution matrix is said to be minimal if k = d.
One may construct a minimal fundamental matrix solution by solving initial value problems for (2) with d linearly independent, consistent initial vectors, see [21,22].
Let f : [0, ∞) → R be a non-vanishing function. The quantities

  λ^u(f) = limsup_{t→∞} (1/t) ln |f(t)|,   λ^l(f) = liminf_{t→∞} (1/t) ln |f(t)|

are called the upper and lower Lyapunov exponents of f, respectively. In a similar way we define upper and lower Lyapunov exponents for vector-valued functions, where the absolute values are replaced by norms.
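These exponents can be estimated numerically on a simple test case. The sketch below uses the made-up function f(t) = exp(2t + sin t), for which both exponents equal 2; it evaluates (1/t) ln|f(t)| on a grid and reads off tail estimates of the limsup and liminf.

```python
import numpy as np

# Tail estimates of the Lyapunov exponents of the (made-up) function
# f(t) = exp(2 t + sin t); here lambda^u(f) = lambda^l(f) = 2.
t = np.linspace(1.0, 1.0e5, 200_000)
growth = (2.0 * t + np.sin(t)) / t   # (1/t) ln|f(t)|, computed without overflow
lam_upper = growth[-1000:].max()     # tail proxy for the lim sup
lam_lower = growth[-1000:].min()     # tail proxy for the lim inf
print(round(lam_upper, 3), round(lam_lower, 3))
```

Of course, for finite t these are only approximations of the limit superior and inferior; the sin t term contributes O(1/t) and disappears in the tail.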
Definition 3.2. For a given fundamental solution matrix X of a strangeness-free DAE system of the form (2), and for 1 ≤ i ≤ d, we introduce

  λ^u_i = λ^u(Xe_i),   λ^l_i = λ^l(Xe_i),

where e_i denotes the i-th unit vector and ‖·‖ denotes the Euclidean norm. The columns of a minimal fundamental solution matrix form a normal basis if Σ_{i=1}^d λ^u_i is minimal. The λ^u_i, i = 1, 2, ..., d, belonging to a normal basis are called (upper) Lyapunov exponents, the intervals [λ^l_i, λ^u_i], i = 1, 2, ..., d, are called Lyapunov spectral intervals, and their union is called the Lyapunov spectrum of (2).

Definition 3.3. Suppose that P ∈ C(I, R^{n×n}) and Q ∈ C¹(I, R^{n×n}) are nonsingular matrix functions such that Q and Q⁻¹ are bounded. Then the transformed DAE with coefficients Ẽ = PEQ, Ã = PAQ − PEQ̇, and new variable x̃ given by x = Qx̃, is called globally kinematically equivalent to (2), and the transformation is called a global kinematic equivalence transformation. If P ∈ C¹(I, R^{n×n}) and, furthermore, also P and P⁻¹ are bounded, then we call this a strong global kinematic equivalence transformation.
It is clear that the Lyapunov exponents of a DAE system as well as the normality of a basis formed by the columns of a fundamental solution matrix are preserved under global kinematic equivalence transformations.
A normal basis for (2) exists and it can be constructed from any (minimal) fundamental matrix solution.
Proposition 1. For any minimal fundamental matrix X of (2) for which the Lyapunov exponents of the columns are ordered decreasingly, there exists a constant, nonsingular, upper triangular matrix C ∈ R^{d×d} such that the columns of XC form a normal basis for (2).
Proof. Since orthonormal changes of basis leave the Euclidean norm invariant, the spectral analysis of (2) can be carried out via its EUODE. Thus, let Z be the corresponding fundamental matrix of (3), i.e., X = UZ. Due to the existence result for normal bases of ODEs [23] (see also [1,14]), there exists a matrix C with the properties listed in the assertion such that the columns of ZC form a normal basis for (3). Thus, XC = UZC is a normal basis for (2).
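Proposition 1 can be illustrated numerically. In the sketch below (the diagonal ODE ż = diag(2, 1)z and the particular fundamental matrix are hypothetical choices), the second column of Z carries a dominant e^{2t} component, so both columns have exponent 2; the constant upper triangular C removes this component and lowers the exponent sum from 4 to the minimal value 3.

```python
import numpy as np

# Fundamental matrix of the hypothetical ODE z' = diag(2, 1) z with
# columns z1 = e^{2t} e1 and z2 = e^{2t} e1 + e^{t} e2.
def column_exponent(col_fn, T=50.0):
    # Finite-time proxy for the Lyapunov exponent of a column.
    return np.log(np.linalg.norm(col_fn(T))) / T

z1 = lambda t: np.array([np.exp(2 * t), 0.0])
z2 = lambda t: np.array([np.exp(2 * t), np.exp(t)])
print(round(column_exponent(z1), 2), round(column_exponent(z2), 2))  # both near 2

# Constant, nonsingular, upper triangular C as in Proposition 1:
# the first column of Z C is z1, the second is z2 - z1 = e^{t} e2.
C = np.array([[1.0, -1.0], [0.0, 1.0]])
w2 = lambda t: z1(t) * C[0, 1] + z2(t) * C[1, 1]  # second column of Z C
print(round(column_exponent(w2), 2))              # exponent drops to 1
```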
As a direct consequence we have the following important theorem.
Theorem 3.4. Let X be a normal basis for (2). Then the Lyapunov spectrum of the DAE (2) and that of the ODE (3) are the same. If ℰ, 𝒜 are as in (3) and if ℰ⁻¹𝒜 is bounded, then all the Lyapunov exponents of (2) are finite. Furthermore, the spectrum of (3) does not depend on the choice of the basis U and the matrix function V.
Following the classical theory [23], we call the strangeness-free DAE (2) Lyapunov regular if

  Σ_{i=1}^d λ^u_i = liminf_{t→∞} (1/t) ln |det Z(t)|,

where Z(t) is a fundamental matrix solution of (3).
Remark 2. The Lyapunov regularity of a strangeness-free DAE system (2) is well-defined, since it does not depend on the choice of U, V. Furthermore, the Lyapunov regularity of (2) implies that for any nontrivial solution x, the limit lim_{t→∞} (1/t) ln ‖x(t)‖ exists, i.e., we have λ^l_i = λ^u_i. We note that, unlike in [9], where certain inherent ODEs of the same size as the original DAE are used, our spectral analysis is based on the EUODE, which can be constructed numerically.
In the following we consider the adjoint equation of (2), given by

  d/dt (E^T y) = −A^T y,   t ∈ I,   (7)

see, e.g., [7,20], and also a slightly different formulation in [2]. The following statement shows the relation between the EUODEs of (2) and (7).
Proposition 2. Let the orthonormal columns of the matrix U form a basis of the solution subspace of (2). Then there exists V ∈ C¹(I, R^{n×d}) such that the columns of V form an orthonormal basis of the solution subspace of (7). Furthermore, by the change of variables y = Vw and multiplication of both sides of (7) by U^T, we obtain the EUODE for the adjoint system (7), given by

  d/dt (ℰ^T w) = −𝒜^T w,   (8)

which is exactly the adjoint of (3). If U is such that the matrix ℰ is upper triangular with positive diagonal elements, then the corresponding V is unique.
Proof. We first prove uniqueness. Suppose that there exist matrix functions V and V̂ with orthonormal columns leading to EUODEs ℰż = 𝒜z and ℰ̂ż = 𝒜̂z, respectively, where both ℰ and ℰ̂ are upper triangular with positive diagonal elements. Since the columns of V and V̂ form bases of the same subspace, there exists a matrix function S ∈ C(I, R^{d×d}) such that V = V̂S; because VV^T = V̂V̂^T is the orthogonal projection onto this subspace (i.e., onto range V), S is pointwise orthogonal. On the other hand, by the construction of the EUODE we have S^Tℰ̂ = ℰ and ℰ̂ is invertible, which implies that S^T is upper triangular. Hence, S is a diagonal matrix with diagonal elements +1 or −1. But since ℰ and ℰ̂ have positive diagonals, S = I. For the existence we give a constructive proof. From the proof of Lemma 2.1, V is determined via EU = Vℰ. Due to the special block structure of E, we can first determine an auxiliary pair Ṽ, ℰ̃ such that EU = [E₁U; 0] = Ṽℰ̃ (in block row notation), which implies that Ṽ = [Ṽ₁; 0]. Here Ṽ₁ and ℰ̃ are determined by, e.g., a smooth QR decomposition of E₁U.
Unfortunately, in general such a Ṽ is not yet a basis of the solution subspace of (7). We observe that we may replace the zero block by any Ṽ₂, and the relation Ṽ^T EU = ℰ̃ still holds. Hence, for the construction we look for an appropriate block Ṽ₂ so that Ṽ = [Ṽ₁; Ṽ₂] satisfies the algebraic constraint of (7). If we then orthonormalize the columns of Ṽ, we are done. Since the adjoint of (2) is again strangeness-free, see [20], we can reorder the equations of the adjoint DAE (7) so that the upper left d×d block of the coefficient matrix on the left-hand side is nonsingular and then eliminate the lower left block. The algebraic part of the resulting equivalent DAE determines Ṽ₂ in terms of Ṽ₁ (involving the blocks Ã₂₂ and Ã₁₂ of the reordered system). Finally, applying a Gram-Schmidt orthogonalization to Ṽ, we obtain a basis of the solution subspace of (7), denoted by V, which also fulfils V^T EU = ℰ, where ℰ is upper triangular. One easily verifies that the obtained EUODE of the adjoint equation (7) is exactly the adjoint of the EUODE (3) of the original DAE (2).
With these preparations we obtain a generalization of [21, Theorem 19] on the relation between the Lyapunov exponents of (2) and those of (7).

Theorem 3.6. Suppose that the matrix function ℰ = V^T EU and its inverse are bounded on I, where the columns of U, V form bases of the solution subspaces as in Proposition 2. If λ^l_i are the lower Lyapunov exponents of (2) and −μ^u_i are the upper Lyapunov exponents of the adjoint system (7), both in increasing order, then λ^l_i = μ^u_i, i = 1, 2, ..., d. Furthermore, system (2) is Lyapunov regular if and only if (7) is Lyapunov regular, and in this case we have the Perron identity λ^u_i = λ^l_i = μ^u_i = μ^l_i, i = 1, 2, ..., d.

Proof. Due to Proposition 2, it suffices to consider two implicit EUODEs which are adjoint to each other. The assertion then follows by the same argument as in the proof of [21, Theorem 19], which is based on the Lagrange identity W^T(t)ℰ(t)Z(t) = W^T(0)ℰ(0)Z(0), where Z and W are fundamental solutions of the EUODE (3) and its adjoint (8), respectively.
Since we have used the assumption that ℰ and its inverse are bounded, it would be more practical to check this property in terms of the original data. This is easily possible for the class of so-called semi-implicit DAEs (i.e., those with E₁₂ = 0), see [21, Theorem 19].

4. Stability of Lyapunov exponents. Lyapunov exponents may be very sensitive to small changes in the system. In order to study this sensitivity for DAEs, we consider the specially perturbed system

  (E(t) + F(t))ẋ = (A(t) + H(t))x,   t ∈ I,   (10)

where F = [F₁; 0] and H = [H₁; H₂] (in block row notation), and where F₁ and H₁, H₂ are assumed to have the same order of smoothness as E₁ and A₁, A₂, respectively. Perturbations of this structure are called admissible, generalizing the concept for the constant-coefficient DAEs studied in [5]. The DAE (2) is said to be robustly strangeness-free if it stays strangeness-free under all sufficiently small admissible perturbations. It is easy to see that the DAE (2) is robustly strangeness-free under admissible perturbations if and only if the matrix function Ē := [E₁; A₂], i.e., the stacked coefficient matrix from (2), is boundedly invertible.
In the following we restrict ourselves to robustly strangeness-free DAE systems under admissible perturbations.

Definition 4.1. The upper Lyapunov exponents λ^u_1 ≥ ... ≥ λ^u_d of (2) are said to be stable if for any ε > 0 there exists δ > 0 such that the conditions sup_t ‖F(t)‖ < δ, sup_t ‖H(t)‖ < δ, and sup_t ‖Ḣ₂(t)‖ < δ on the perturbations imply that the perturbed DAE system (10) is strangeness-free and |λ^u_i − γ^u_i| < ε for all i = 1, 2, ..., d, where the γ^u_i are the ordered upper Lyapunov exponents of (10).

It is clear that the stability of the upper Lyapunov exponents is invariant under strong global kinematic equivalence transformations. Compared with the ODE case, the boundedness condition on Ḣ₂, which is obviously satisfied in the time-invariant setting [5], is an extra condition and may seem somewhat unusual. However, the following simple example shows its necessity.
Example 4.2. Consider the system ẋ₁ = x₁, 0 = x₂, which is easily seen to be robustly strangeness-free and Lyapunov regular with Lyapunov exponent λ = 1. Now consider a perturbed DAE (11), where ε is a small perturbation parameter and n is a given integer. From the second equation of (11), we obtain x₂ = 2ε sin(nt) x₁. Differentiating this expression for x₂ and inserting the result into the first equation, after some elementary calculations we obtain ẋ₁ = (1 + nε² + nε² cos 2nt) x₁. Explicit integration yields x₁(t) = e^{(1+nε²)t + ε² sin(2nt)/2}, from which the only Lyapunov exponent λ̃ = 1 + nε² is obtained. Clearly, although ε is small (hence the perturbations in the coefficient matrices are small), the difference between the two Lyapunov exponents can be made arbitrarily large by choosing n sufficiently large.
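The explicit solution in Example 4.2 can be checked directly. The sketch below evaluates (1/T) ln x₁(T) for the formula x₁(t) = exp((1 + nε²)t + ε² sin(2nt)/2) at an (arbitrarily chosen) large time T, confirming that the perturbed exponent 1 + nε² moves arbitrarily far from 1 as n grows while ε stays fixed and small.

```python
import math

# Finite-time estimate of the Lyapunov exponent of the explicit solution
# x1(t) = exp((1 + n eps^2) t + eps^2 sin(2 n t) / 2) from Example 4.2.
def exponent_estimate(eps, n, T=1.0e6):
    ln_x1 = (1.0 + n * eps**2) * T + eps**2 * math.sin(2.0 * n * T) / 2.0
    return ln_x1 / T   # the bounded sin term is negligible for large T

eps = 0.01
for n in (1, 10_000, 1_000_000):
    # exponent approaches 1 + n eps^2: far from 1 once n >> 1/eps^2
    print(n, round(exponent_estimate(eps, n), 6))
```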
As our next topic, we discuss the concept of integral separation for DAEs.

Definition 4.3. A minimal fundamental solution matrix X = [x₁, ..., x_d] of (2) is called integrally separated if for i = 1, ..., d−1 there exist constants c > 0 and a > 0 such that

  (‖x_i(t)‖ / ‖x_i(s)‖) · (‖x_{i+1}(s)‖ / ‖x_{i+1}(t)‖) ≥ c e^{a(t−s)}

for all t, s with t ≥ s ≥ 0. If a DAE system has an integrally separated minimal fundamental solution matrix, then we say it has the integral separation property.
The integral separation property is invariant under strong global kinematic equivalence transformations. Furthermore, if a fundamental solution X of (2) is integrally separated, then so is the corresponding fundamental solution Z of (3) and vice versa.
Note that by using a global kinematic equivalence transformation, see [21, Remark 13], (2) can be transformed to a specially structured form in which the block A₂₁ becomes zero. The advantage of this form is that the associated EUODE then reads E₁₁ẋ₁ = A₁₁x₁. Therefore, for the perturbation analysis we may assume in the following that (2) is already given with A₂₁ = 0, i.e., the perturbed DAE has the form

  (E₁₁ + F₁₁)ẋ₁ + (E₁₂ + F₁₂)ẋ₂ = (A₁₁ + H₁₁)x₁ + (A₁₂ + H₁₂)x₂,
  0 = H₂₁x₁ + (A₂₂ + H₂₂)x₂.

We then have the following sufficient conditions for the upper Lyapunov exponents of (2) to be stable.
The last equation is equivalent to (A₂₂ + H₂₂)x₂ = −H₂₁x₁, i.e., x₂ = −(A₂₂ + H₂₂)⁻¹H₂₁x₁. Inserting this expression into the first equation, we obtain a perturbed ODE of the form (E₁₁ + F̃₁₁)ẋ₁ = (A₁₁ + H̃₁₁)x₁. Under the boundedness assumptions of Definition 4.1, it is easy to check that the terms E₁₁⁻¹F̃₁₁ and E₁₁⁻¹H̃₁₁ are arbitrarily small in norm, provided that the norms of the original perturbations F, H are sufficiently small and (d/dt)H₂₂ is bounded. Using the well-known result on the stability of Lyapunov exponents for ODEs, see [1] and also [14], the assertion follows.
Remark 3. Example 4.2 and Theorem 4.4 demonstrate that, unlike the perturbation analysis of time-invariant DAEs [5], that of time-varying DAEs requires more restrictive conditions. However, for some classes of structured problems, see [8] and [21, Section 3.2], parts of these conditions can be relaxed.

5. Sacker-Sell spectrum. An alternative concept to describe the growth rates of solutions of ODEs is that of the Sacker-Sell, or exponential dichotomy, spectrum.
Definition 5.1. The DAE (2) is said to have an exponential dichotomy if for any minimal fundamental solution X there exist a projection P ∈ R^{d×d} and positive constants K and α such that

  ‖X(t)PX⁺(s)‖ ≤ K e^{−α(t−s)},   t ≥ s,
  ‖X(t)(I − P)X⁺(s)‖ ≤ K e^{−α(s−t)},   s ≥ t,

where X⁺ denotes the generalized Moore-Penrose inverse of X.
Note that here (2) is not required to be transformed into a semi-implicit form as in [21]. Let X be a fundamental solution matrix of (2) and let the columns of U form an orthonormal basis of the solution subspace; then we have X = UZ, where Z is the fundamental solution matrix of the corresponding EUODE (3). Observing that X⁺ = Z⁻¹U^T, we have

  X(t)PX⁺(s) = U(t)Z(t)PZ⁻¹(s)U^T(s),   hence   ‖X(t)PX⁺(s)‖ = ‖Z(t)PZ⁻¹(s)‖,

and similarly with I − P in place of P. Thus, the following statement is obvious.

Proposition 3. The DAE (2) has an exponential dichotomy if and only if its EUODE (3) has an exponential dichotomy.

Furthermore, as has been remarked in [12,14], the projection P can be chosen to be orthogonal, i.e., P = P^T. The projector P determines a subspace of the solution subspace in which all solutions are uniformly exponentially decreasing, while the solutions belonging to a complementary subspace are uniformly exponentially increasing.
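Via X = UZ, checking the dichotomy estimates reduces to the EUODE. The sketch below verifies them for a hypothetical constant-coefficient EUODE ż = diag(−1, 1)z, whose fundamental matrix and dichotomy data (orthogonal projection P = diag(1, 0), constants K = α = 1) are known in closed form.

```python
import numpy as np

# Dichotomy check for the hypothetical EUODE z' = diag(-1, 1) z:
# Z(t) = diag(e^{-t}, e^{t}), orthogonal projection P = diag(1, 0).
def Z(t):
    return np.diag([np.exp(-t), np.exp(t)])

P = np.diag([1.0, 0.0])
I2 = np.eye(2)
K, alpha = 1.0, 1.0

ok = True
for s in np.linspace(0.0, 5.0, 6):
    for tau in np.linspace(0.0, 5.0, 6):   # tau = t - s >= 0
        t = s + tau
        # decaying directions: ||Z(t) P Z(s)^{-1}|| <= K e^{-alpha (t-s)}, t >= s
        lhs1 = np.linalg.norm(Z(t) @ P @ np.linalg.inv(Z(s)), 2)
        ok &= lhs1 <= K * np.exp(-alpha * tau) + 1e-12
        # growing directions: ||Z(s) (I - P) Z(t)^{-1}|| <= K e^{-alpha (t-s)}
        lhs2 = np.linalg.norm(Z(s) @ (I2 - P) @ np.linalg.inv(Z(t)), 2)
        ok &= lhs2 <= K * np.exp(-alpha * tau) + 1e-12
print(bool(ok))
```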
In order to extend the concept of exponential dichotomy spectrum to DAEs, we need the shifted DAE systems

  E(t)ẋ = (A(t) − λE(t))x,   t ∈ I,

where λ ∈ R. By using the same transformation as in Lemma 2.1, we obtain the corresponding shifted EUODE ℰż = (𝒜 − λℰ)z. The Sacker-Sell (or exponential dichotomy) spectrum of (2) is the set of all λ ∈ R for which the shifted DAE does not have an exponential dichotomy.
From Proposition 3 and the result for the ODE case [26], we have the following theorem.
Theorem 5.3. The Sacker-Sell spectrum of (2) is exactly the Sacker-Sell spectrum of its EUODE (3). Furthermore, the Sacker-Sell spectrum of (2) consists of at most d closed intervals.
Using the same arguments as in [21,Section 3.4], one can show that under the boundedness conditions of Theorem 4.4, unlike the Lyapunov spectrum, the Sacker-Sell spectrum of the DAE (2) is stable with respect to admissible perturbations introduced in Definition 4.1.

Remark 4. A third, closely related spectral concept is that of Bohl exponents, see [10]. We do not discuss this concept here, but its extension to strangeness-free DAEs is straightforward, and it has been shown in [21] that if X is an integrally separated fundamental matrix of (2), then the Sacker-Sell spectrum of the system is exactly given by the d (not necessarily disjoint) Bohl intervals associated with the columns of X.
6. Conclusion. In this paper we have improved the spectral analysis for linear DAEs that was introduced in [21]. The key idea is to construct an implicit EUODE which has the same spectral properties as the original DAE. This approach provides a unified insight into different kinds of computational techniques for approximating spectral intervals of DAEs. By using appropriately constructed EUODEs, an extension of the continuous SVD method of [12,13] for computing spectral intervals and their associated leading directions to the case of DAEs seems straightforward, just as has been done with the QR method, see [22].
As future work, the spectral analysis for nonlinear DAEs remains an important problem, namely, a rigorous study of the relation between the Sacker-Sell spectrum of a linearized system and the growth rates of attracting (repelling) solutions in a neighborhood of the particular solution.