From Morse Triangular Form of ODE Control Systems to Feedback Canonical Form of DAE Control Systems

In this paper, we relate the feedback canonical form \textbf{FBCF} of differential-algebraic control systems (DACSs) with the famous Morse canonical form \textbf{MCF} of ordinary differential equation control systems (ODECSs). First, a procedure called explicitation (with driving variables) is proposed to connect the two above categories of control systems by attaching to a DACS a class of ODECSs with two kinds of inputs (the original control input $u$ and a vector of driving variables $v$). Then, we show that any ODECS with two kinds of inputs can be transformed into its extended \textbf{MCF} via two intermediate forms: the extended Morse triangular form and the extended Morse normal form. Next, we show that the \textbf{FBCF} of a DACS and the extended \textbf{MCF} of its explicitation are in a perfect one-to-one correspondence. Finally, an algorithm is proposed to transform a given DACS into its \textbf{FBCF} via the explicitation procedure, and a numerical example is given to show the efficiency of the proposed algorithm.


Introduction
Consider a linear differential-algebraic control system (DACS) of the form
Eẋ = Hx + Lu, (1)
where x ∈ X ≅ R n is called the "generalized" state, u ∈ R m is the vector of control inputs, and where E ∈ R l×n , H ∈ R l×n and L ∈ R l×m . A linear DACS of the form (1) will be denoted by ∆ u l,n,m = (E, H, L) or, simply, ∆ u . In the case of the control u being absent, the system becomes a linear differential-algebraic equation (DAE) Eẋ = Hx, which is called regular if l = n and det(sE − H) is not the zero polynomial, i.e., det(sE − H) ∈ R[s]\{0}. A detailed exposition of the theory of linear DAEs and DACSs can be found in the textbooks [16], [13] and the survey paper [22]. Early results on linear DAEs can be traced back to two famous canonical forms of the matrix pencil sE − H given by Weierstrass [34] and Kronecker [21]. The following literature discusses normal forms and canonical forms of linear DAE systems. The authors of [20] proposed a canonical form for controllable and regular DACSs. Several forms for regular systems based on their controllability and impulse controllability were given in [19]. In [31], a canonical form of general DACSs was discussed. More recently, a normal form based on impulse-controllability and impulse-observability of DACSs was proposed in [32], and a quasi-Weierstrass and a quasi-Kronecker triangular/normal form of DAEs were given in [6] and [9], respectively. In the present paper, we discuss the feedback canonical form FBCF obtained in [24] (restated as Theorem 4.4 of the present paper) for general linear DACSs, which plays an important role in, e.g., controllability analysis [7], regularization problems [12], [8], pole assignment [25], [10] and stabilization [4]. The FBCF of DACSs is actually an extension of the Kronecker canonical form of general linear DAEs. Some methods (mostly numerical) of transforming a DAE into its Kronecker canonical form can be found in [17], [33], [3].
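As a small numerical aside (not part of the original development), regularity of a square pencil sE − H can be tested probabilistically: det(sE − H) is a polynomial in s, so a single nonzero evaluation at a random point certifies that it is not the zero polynomial. A minimal sketch in Python/NumPy, with all names hypothetical:

```python
import numpy as np

def pencil_is_regular(E, H, trials=5, tol=1e-9, seed=0):
    """Probabilistic regularity test for a square pencil sE - H:
    det(sE - H) is a polynomial in s, so one nonzero evaluation at a
    random sample point certifies it is not identically zero."""
    E, H = np.asarray(E, float), np.asarray(H, float)
    assert E.shape == H.shape and E.shape[0] == E.shape[1]
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        s = rng.standard_normal()
        if abs(np.linalg.det(s * E - H)) > tol:
            return True
    return False  # det vanished at every sample: almost surely singular

# Regular: det(sE - H) = -(s - 1) is a nonzero polynomial
assert pencil_is_regular([[1., 0.], [0., 0.]], np.eye(2))
# Singular: here sE - H = (s - 1)E, so det(sE - H) ≡ 0
assert not pencil_is_regular([[1., 0.], [0., 0.]], [[1., 0.], [0., 0.]])
```

A vanishing determinant at finitely many random points is not a proof of singularity, but for a fixed pencil the failure probability is zero for generic samples.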
In [15], we proposed a notion, called explicitation, to connect DAEs with control systems. In the present paper, we will propose a new explicitation procedure called explicitation with driving variables (see Definition 2.2); the differences and relations between the two explicitation methods are discussed in Remark 2.5. Since the vector of driving variables v enters statically into the system (similarly to the control input u), we can regard it as another kind of input. More specifically, the explicitation with driving variables of a DACS is a class of ODECSs with two kinds of inputs of the form
ẋ = Ax + B u u + B v v, y = Cx + D u u, (2)
where A ∈ R n×n , B u ∈ R n×m , B v ∈ R n×s , C ∈ R p×n and D u ∈ R p×m , and where u ∈ R m is the vector of control variables and v ∈ R s is the vector of driving variables. An ODECS of the form (2) will be denoted by Λ uv n,m,s,p = (A, B u , B v , C, D u ) or, simply, Λ uv . Note that although both u and v may be considered as inputs of system (2), we distinguish them because they play different roles for the system and, as a consequence, their feedback transformation rules are different (see Remark 2.8). Observe that we can express an ODECS Λ uv of the form (2) as a classical ODECS Λ w = (A, B w , C, D w ) of the form (3) by denoting w = [u T , v T ] T , B w = [B u  B v ] and D w = [D u  0]. Throughout the paper, depending on the context, we will use either Λ uv or Λ w to denote an ODECS with two kinds of inputs.
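The rewriting of Λ uv as Λ w is a pure matrix-stacking operation. A minimal sketch (hypothetical dimensions, Python/NumPy):

```python
import numpy as np

# Hypothetical sizes: n = 3 states, m = 2 controls u, s = 1 driving variable v, p = 2 outputs
n, m, s, p = 3, 2, 1, 2
rng = np.random.default_rng(0)
A  = rng.standard_normal((n, n))
Bu = rng.standard_normal((n, m))
Bv = rng.standard_normal((n, s))
C  = rng.standard_normal((p, n))
Du = rng.standard_normal((p, m))

# Stack w = [u; v]: B_w = [B_u  B_v], D_w = [D_u  0]
Bw = np.hstack([Bu, Bv])                  # the driving variables enter the dynamics
Dw = np.hstack([Du, np.zeros((p, s))])    # but never enter the output map directly

assert Bw.shape == (n, m + s) and Dw.shape == (p, m + s)
```

The zero block in D_w encodes exactly the statement above that v, unlike u, does not appear in the output y.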
We use Figure 1 to show the relations among the results of the paper. The purpose of this paper is to find an efficient geometric way to transform a DACS ∆ u into its feedback canonical form FBCF via the explicitation procedure. As we have pointed out, the FBCF is a generalization, on the one hand, of the classical Kronecker form (because a DACS is a differential-algebraic equation) and, on the other hand, of the Brunovský canonical form [11] (because a DACS is a control system). The explicitation procedure allows us to attach to a DACS a control system Λ uv with an output y (defining the algebraic constraint as y = 0) and to study the double nature of a DACS (differential-algebraic and control-theoretic) simultaneously by analyzing Λ uv . More specifically, instead of using transformations directly on a DACS, we will first transform an ODECS Λ uv , given by the explicitation of our DACS, into its canonical form (called the extended Morse canonical form EMCF, see Theorem 4.1). Then, by the relation between DACSs and ODECSs given in Section 2, we can easily get the FBCF from the EMCF. Moreover, inspired by the quasi-Kronecker triangular form of [9], we will propose a Morse triangular form MTF (see Proposition 3.1) to transform an ODECS (with one kind of inputs) into its Morse normal form MNF (see Proposition 3.2). Note that a procedure of transforming an ODECS Λ u into its MCF was given by Morse [28] for D u = 0 and by Molinari [27] for the general case D u ≠ 0. We propose to do it via two intermediate forms, the MTF and the MNF.
This paper is organized as follows. In Section 2, we introduce the explicitation with driving variables procedure and build geometric connections between DACSs and ODECSs. In Section 3, we show a method of constructing the MTF and the MNF for classical ODECSs of the form (3), and then we extend them to the EMTF and the EMNF for ODECSs (with two kinds of inputs) of the form (2).
In Section 4, we propose the EMCF for ODECSs of the form (2), which allows us to construct the FBCF of DACSs as a corollary, and we formulate the construction of the FBCF via the explicitation procedure as an algorithm. In Section 5, we give a numerical example to show the efficiency of the algorithm. Sections 6 and 7 contain the proofs and the conclusions of the paper, respectively.
The definitions of geometric invariant subspaces for ODECSs and DACSs are given in the Appendix.
Throughout, we will use the following notations:

Explicitation with driving variables for linear DACSs
A solution of ∆ u is a map (x(t), u(t)) : R → X × R m with x(t) ∈ C 1 and u(t) ∈ C 0 satisfying Eẋ(t) = Hx(t) + Lu(t). Notice that for some C 0 -controls u(t), there may not exist a corresponding C 1 -solution x(t) because of algebraic relations between the u i 's and x j 's present in ∆ u of the form (1).
Now we introduce the explicitation with driving variables procedure for ∆ u as follows.
• Denote the rank of E by q ∈ N and define s = n − q and p = l − q. Then there exists an invertible matrix Q ∈ Gl(l, R) such that QE = [E 1 ; 0], where E 1 ∈ R q×n is of full row rank; denote QH = [H 1 ; H 2 ] and QL = [L 1 ; L 2 ], where H 1 ∈ R q×n , H 2 ∈ R (l−q)×n , L 1 ∈ R q×m , L 2 ∈ R (l−q)×m , and [M 1 ; M 2 ] denotes the matrix obtained by stacking M 1 over M 2 .
• Consider the differential part (5a) of (5), namely E 1 ẋ = H 1 x + L 1 u. The matrix E 1 is of full row rank q, so let E † 1 ∈ R n×q denote a right inverse of it, i.e., E 1 E † 1 = I q .
• Choose a full column rank matrix B v ∈ R n×s such that Im B v = ker E 1 = ker E (note that the kernels of E 1 and E coincide since any invertible Q preserves the kernel). Then the vector v ∈ R s of driving variables (see Remark 2.5 for a control-theoretic interpretation of v) parameterizes the subspace ker E 1 = Im B v via B v v, and the solutions of the differential inclusion (6), and thus of (5a), correspond to the solutions of
ẋ = Ax + B u u + B v v, where A = E † 1 H 1 and B u = E † 1 L 1 . (7)
• We claim, see Proposition 2.4 below, that all solutions of (5) (and thus of the original DACS ∆ u ) are in one-to-one correspondence with all solutions (corresponding to all choices of the driving variables v(t)) of
ẋ = Ax + B u u + B v v, 0 = Cx + D u u, (8)
where C = H 2 ∈ R p×n and D u = L 2 ∈ R p×m . Recall that a control system of the form (2) is denoted by Λ uv n,m,s,p = (A, B u , B v , C, D u ). It is immediate to see that equation (8) can be obtained from the ODECS Λ uv by setting the output y = 0. In the above way, we attach an ODECS Λ uv to a DACS ∆ u .
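The steps above admit a direct numerical sketch: an SVD of E yields, at once, a suitable Q, the full-row-rank block E 1 , a right inverse E † 1 and a basis B v of ker E. This is only one of many valid choices of (Q, E † 1 , B v ); all helper names are ours:

```python
import numpy as np

def explicitation(E, H, L, tol=1e-10):
    """One representative (Q, v)-explicitation of the DACS E x' = H x + L u.
    Returns (A, Bu, Bv, C, Du) with s = n - rank(E) driving variables.
    Sketch only: Q, the right inverse of E1 and Bv come from an SVD of E,
    which is just one admissible choice among many."""
    E, H, L = map(np.atleast_2d, (E, H, L))
    l, n = E.shape
    U, sv, Vt = np.linalg.svd(E)
    q = int(np.sum(sv > tol))               # q = rank E
    Q = U.T                                 # row transformation: QE = [E1; 0]
    E1 = (Q @ E)[:q, :]                     # full row rank q
    H1, H2 = (Q @ H)[:q, :], (Q @ H)[q:, :]
    L1, L2 = (Q @ L)[:q, :], (Q @ L)[q:, :]
    E1p = np.linalg.pinv(E1)                # a right inverse: E1 @ E1p = I_q
    Bv = Vt[q:, :].T                        # Im Bv = ker E1 = ker E
    A, Bu = E1p @ H1, E1p @ L1
    C, Du = H2, L2
    return A, Bu, Bv, C, Du

# Tiny example: E has a one-dimensional kernel, so s = 1 driving variable
E = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
H = np.eye(3)
L = np.array([[0.], [1.], [1.]])
A, Bu, Bv, C, Du = explicitation(E, H, L)
assert Bv.shape == (3, 1) and np.allclose(E @ Bv, 0)   # Im Bv = ker E
```

Proposition 2.3 below describes how all the explicitations obtained from different choices of (Q, E † 1 , B v ) are related.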
The above procedure of attaching a control system Λ uv to a DACS ∆ u will be called explicitation with driving variables and is formalized as follows.
Definition 2.2. Given a DACS ∆ u l,n,m = (E, H, L), by a (Q, v)-explicitation we will call a control system Λ uv n,m,s,p = (A, B u , B v , C, D u ) constructed by the above procedure (for some choice of Q, E † 1 , and B v ). The class of all (Q, v)-explicitations will be called the explicitation with driving variables class or, shortly, the explicitation class of ∆ u , denoted by Expl(∆ u ). If a particular ODECS Λ uv belongs to the explicitation class Expl(∆ u ), we will write Λ uv ∈ Expl(∆ u ).
The definition of the explicitation class Expl(∆ u ) suggests that a given ∆ u has many (Q, v)-explicitations. Indeed, the construction of Λ uv ∈ Expl(∆ u ) is not unique at three stages: there is a freedom in choosing Q, E † 1 , and B v . We show in the following proposition that Expl(∆ u ) is actually an ODECS defined up to a v-feedback transformation, an output injection and an output transformation, that is, a class of ODECSs.
Proposition 2.3. Suppose that Λ uv n,m,s,p = (A, B u , B v , C, D u ) is a (Q, v)-explicitation of ∆ u corresponding to a choice of invertible matrix Q, right inverse E † 1 , and matrix B v . Then Λ uṽ n,m,s,p = (Ã, B̃ u , B̃ ṽ , C̃, D̃ u ) is a (Q̃, ṽ)-explicitation of ∆ u corresponding to a choice of invertible matrix Q̃, right inverse Ẽ † 1 , and matrix B̃ ṽ if and only if Λ uv and Λ uṽ are equivalent via a v-feedback transformation of the form v = F v x + Ru + T −1 v ṽ, an output injection Ky = K(Cx + D u u) and an output multiplication ỹ = T y y, which map the system matrices of Λ uv to those of Λ uṽ as in (9), where F v , K, R, T v , T y are matrices of appropriate sizes, and T v and T y are invertible.
The following proposition shows that solutions of any DACS are in one-to-one correspondence with solutions of its (Q, v)-explicitations.
Proposition 2.4. A pair (x(t), u(t)), with x(t) ∈ C 1 and u(t) ∈ C 0 , is a solution of ∆ u if and only if there exists v(t) ∈ C 0 such that (x(t), u(t), v(t)) is a solution of Λ uv respecting the output constraint y = 0, i.e., a solution of (8).
The proofs of Proposition 2.3 and Proposition 2.4 will be given in Section 6.1.

Remark 2.5.
Notice that the definition of (Q, v)-explicitation in the present paper differs in two aspects from the (Q, P )-explicitation of [15] (or see Chapter II of [14]). First, in this paper we consider the explicitation of DACSs, while in [15] we dealt with DAEs (with no controls). The second difference is that in the (Q, v)-explicitation, we keep the original generalized state variables x and add new driving variables v, while in the (Q, P )-explicitation of [15], we look for a partition (z 1 , z 2 ) = z = P x into state- and control-variables. More specifically, consider a DACS ∆ u l,n,m = (E, H, L); then via two invertible matrices Q and P , the system ∆ u is ex-fb-equivalent with F = 0 and G = I m (or ex-equivalent, according to the terminology of [15], since here we do not use feedback transformations for ∆ u ) to a pure semi-explicit PSE DACS ∆ u P SE , where P is any invertible map such that ker P 1 = ker E. We attach to ∆ u P SE the control system Λ uz 2 , where z 2 ∈ Z 2 = ker E is the vector of free variables (which act like inputs), z 1 ∈ Z 1 is the state such that Z 1 ⊕ Z 2 = X ≅ R n , and y is the output. The system Λ uz 2 is called a (Q, P )-explicitation of ∆ u and we will write Λ uz 2 ∈ Expl(∆ u ), where Expl(∆ u ) is the explicitation class consisting of all (Q, P )-explicitations of ∆ u (clearly, for a given ∆ u , its (Q, P )-explicitation is not unique). Now, by adding the equation ż 2 = v, we obtain the (dynamical) prolongation Λ uv of Λ uz 2 , which is actually an (I l , v)-explicitation of ∆ u P SE . We can summarize the relations between the notions of (Q, P )-explicitation and (Q, v)-explicitation by the following diagram.
The systems ∆ u and ∆ u P SE above are DACSs and their ex-equivalence is the (Q, P )-equivalence of DACSs. The systems Λ uṽ and Λ uv at the bottom are control systems and their EM-equivalence is the extended Morse equivalence given in Definition 2.7. Note that the claim that the (Q, ṽ)-explicitation Λ uṽ of ∆ u is EM-equivalent to the prolongation system Λ uv is a corollary of Theorem 2.9 below, since Λ uv ∈ Expl(∆ u P SE ), Λ uṽ ∈ Expl(∆ u ), and ∆ u P SE is ex-equivalent to ∆ u .
Remark 2.6. The above explicitation (via driving variables) procedure can also be applied to more general DAE systems, such as DACSs with time delays (see e.g., [1]) and external disturbances (see e.g., [5]). For example, take a DACS of the form (12), where τ represents a time delay and d(t) is a vector of external disturbances. It is always possible to find an invertible matrix Q such that E 1 of QE = [E 1 ; 0] is of full row rank. Then we denote the blocks of the transformed system matrices accordingly, choose B v such that Im B v = ker E 1 and a right inverse E † 1 of E 1 , and define the corresponding matrices. With the above defined matrices, we can attach to (12) the ODECS (13) with time delays and external disturbances. It is clear that if the DACS (12) is not time-delayed, i.e., T = 0 (hence M = 0) and thus x(t − τ ) is absent, then the results of Proposition 2.4 still hold for (12) and (13), meaning that solutions (x(·), d(·), u(·)) of (12) are in a one-to-one correspondence with solutions (x(·), u(·), d(·), v(·)) of (13) with output y = 0. If, however, a delayed term is present, the analysis of solutions is more complicated, because for delayed DAE systems the existence of solutions depends on the initial condition for t ∈ [−τ, 0] (see some studies on solutions of regular delay DAEs in [13, 18]). A particular case occurs when the matrices E and T of (12) satisfy ker E ⊆ ker T , implying that there are no delayed free variables in the generalized state x; then it is clear that solutions of (12) and those of (13) still have a one-to-one correspondence. We will not discuss solutions of delayed DAEs/DACSs further, since the purpose of this paper is to study canonical forms, but the application of the explicitation method to such systems seems to be an interesting subject for further research.
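The inclusion ker E ⊆ ker T used at the end of the remark can be checked by a simple rank test: the inclusion holds if and only if stacking T under E does not increase the rank. A minimal sketch (our own helper, Python/NumPy):

```python
import numpy as np

def kernel_contained(E, T, tol=1e-10):
    """Check ker E ⊆ ker T: every x with Ex = 0 also satisfies Tx = 0
    iff the rows of T add no information, i.e. rank([E; T]) == rank(E)."""
    E, T = np.atleast_2d(E), np.atleast_2d(T)
    return np.linalg.matrix_rank(np.vstack([E, T]), tol) == np.linalg.matrix_rank(E, tol)

E = np.array([[1., 0., 0.], [0., 1., 0.]])          # ker E = span(e3)
assert kernel_contained(E, np.array([[2., 3., 0.]]))     # T e3 = 0: inclusion holds
assert not kernel_contained(E, np.array([[0., 0., 1.]])) # T e3 ≠ 0: inclusion fails
```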
Since the explicitation of ∆ u is a class of ODECSs of the form (2), we give the following definition of equivalence for ODECSs of the form (2). This definition is a natural extension of the Morse equivalence ([28], extended by Molinari [27], see also [15]) of classical ODECSs of the form (3).
Definition 2.7. Two ODECSs Λ uv n,m,s,p = (A, B u , B v , C, D u ) and Λ ũṽ n,m,s,p = (Ã, B̃ ũ , B̃ ṽ , C̃, D̃ ũ ) are called extended Morse equivalent, shortly EM-equivalent, if there exist invertible matrices T x ∈ R n×n , T u ∈ R m×m , T v ∈ R s×s , T y ∈ R p×p and matrices F u ∈ R m×n , F v ∈ R s×n , R ∈ R s×m , K ∈ R n×p such that the system matrices of Λ uv and Λ ũṽ satisfy (14). An 8-tuple (T x , T u , T v , T y , F u , F v , R, K), acting on the system according to (14), will be called an extended Morse transformation and denoted by EM tran .
The matrices T x , T u , T v and T y are coordinate transformations in, respectively, the state space X , the input spaces U u and U v , and the output space Y ; F u defines a state feedback of u, F v and R define a feedback of v, and K defines an output injection.
Remark 2.8. (i) An extended Morse transformation, whose action is given by (14), includes two kinds of feedback transformations, see (15). The vector of driving variables v is "stronger" than the original control vector u since, when transforming v, we can use both u and x in the feedback, but when transforming u we are not allowed to use v. This is expressed by the triangular form of the matrix multiplying on the right in (14).
(ii) Recall the definition of the Morse equivalence and the Morse transformation [28] (and their generalization by Molinari [27] to the case D u ≠ 0, see also [15]) for two ODECSs Λ u = (A, B u , C, D u ) and Λ ũ = (Ã, B̃ ũ , C̃, D̃ ũ ).
(iii) Recall that we can express an ODECS Λ uv of the form (2) as a classical ODECS Λ w of the form (3); then we conclude the following equation from (14) (notice that T w has a block-triangular structure), which is exactly the expression of the M-equivalence for systems Λ w (compare Remark 2.8(ii) above).
This implies that the EM-equivalence can be expressed as a form of the M-equivalence with a triangular input-coordinates transformation matrix T w . This triangular form is a consequence of the two kinds of feedback transformations shown in equation (15). The proof will be given in Section 6.1. In the Appendix, we recall the definitions of geometric subspaces for DACSs and ODECSs. More specifically, for a DACS ∆ u , we recall the augmented Wong sequences V i and W i , together with Ŵ i (see [7], [23]); for an ODECS Λ w , we recall the subspace sequences V i and W i (see [36], [35], [2]), whose limits are controlled and conditioned invariant subspaces, respectively, and we introduce a subspace sequence Ŵ i .
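To make the triangular structure of T w concrete, here is a small illustration of our own (the exact block placement of R follows our reading of Remark 2.8): the stacked input transformation on w = [u T , v T ] T is block lower-triangular, and it is invertible precisely when its diagonal blocks T u and T v are.

```python
import numpy as np

def make_Tw(Tu, Tv, R):
    """Block lower-triangular input transformation on w = [u; v]:
    the u-channel sees only Tu, while the v-channel may also mix in u
    through R (block placement per our reading of Remark 2.8)."""
    m, s = Tu.shape[0], Tv.shape[0]
    return np.block([[Tu, np.zeros((m, s))], [R, Tv]])

Tu = np.array([[1., 2.], [0., 1.]])
Tv = np.array([[3.]])
R  = np.array([[5., 7.]])
Tw = make_Tw(Tu, Tv, R)

assert np.allclose(Tw[:2, 2:], 0)   # u is never transformed using v
# det Tw = det Tu * det Tv, so Tw is invertible iff both diagonal blocks are
assert np.allclose(np.linalg.det(Tw), np.linalg.det(Tu) * np.linalg.det(Tv))
```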
Proposition 2.10. Consider the augmented Wong sequences V i , W i , Ŵ i of a DACS ∆ u , given by Definition 7.2, and the subspaces V i , W i , Ŵ i of Λ w , given by Lemma 7.4 in the Appendix. Assume that Λ uv ∈ Expl(∆ u ).
Then we have, for i ∈ N, V i (∆ u ) = V i (Λ w ), W i (∆ u ) = W i (Λ w ) and Ŵ i (∆ u ) = Ŵ i (Λ w ). The proof will be given in Section 6.2. Note that Theorem 2.9 and Proposition 2.10 are fundamental results for the remainder of the paper. The above proposition shows the importance of the notion of (Q, v)-explicitation: the augmented Wong sequences of any DACS ∆ u and the invariant subspaces of its (Q, v)-explicitation Λ w coincide (in particular, they are subspaces of the same generalized state space X ). If we used the (Q, P )-explicitation, we would need to establish relations between subspaces of the different spaces X and Z 1 (see Remark 2.5). Our purpose is to find the FBCF of DACSs via explicitation. We have proven in Theorem 2.9 that the ex-fb-equivalence of DACSs corresponds to the EM-equivalence of their explicitations. Thus, rather than transforming a DACS ∆ u directly into its FBCF under ex-fb-equivalence, we will look for the canonical form of its explicitations under EM-equivalence.
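Proposition 2.10 suggests computing the augmented Wong sequences numerically. The sketch below uses one standard form of the recursions, V i+1 = H −1 (EV i + Im L) with V 0 = R n and W i+1 = E −1 (HW i + Im L) with W 0 = {0} (our reading of Definition 7.2, cf. [7]); subspaces are represented by orthonormal column bases:

```python
import numpy as np

def _orth(M, tol=1e-10):
    """Orthonormal basis of the column span of M (possibly 0 columns)."""
    if M.size == 0:
        return np.zeros((M.shape[0], 0))
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, s > tol]

def _null(M, tol=1e-10):
    """Orthonormal basis of the kernel of M."""
    U, s, Vt = np.linalg.svd(M)
    r = int(np.sum(s > tol))
    return Vt[r:].T

def preimage(M, S):
    """Basis of M^{-1}(Im S) = {x : Mx ∈ Im S}."""
    N = _null(np.hstack([M, -S]))
    return _orth(N[:M.shape[1], :])

def augmented_wong(E, H, L):
    """Limits V*, W* of the augmented Wong sequences (hedged reading):
    V_{i+1} = H^{-1}(E V_i + Im L), V_0 = R^n;
    W_{i+1} = E^{-1}(H W_i + Im L), W_0 = {0}."""
    n = E.shape[1]
    V = np.eye(n)
    for _ in range(n + 1):                # stabilizes within n steps
        V = preimage(H, _orth(np.hstack([E @ V, L])))
    W = np.zeros((n, 0))
    for _ in range(n + 1):
        W = preimage(E, _orth(np.hstack([H @ W, L])))
    return V, W

# Sanity check: E invertible and L = 0 gives V* = R^n and W* = {0}
E, H, L = np.eye(2), np.eye(2), np.zeros((2, 1))
V, W = augmented_wong(E, H, L)
assert V.shape[1] == 2 and W.shape[1] == 0
```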

The Morse triangular form and its extension
At the beginning of this section, we show that the normal form given in [27] (called the Morse normal form MNF in the present paper) for the 4-tuple ODECS Λ u , given by equation (3), can be constructed through a Morse triangular form MTF that we propose. Although the constructed normal form is the same as the one in [27], we will provide explicit transformations with the help of the invariant subspaces given in Lemma 7.4 of the Appendix, which makes the normalizing procedure simple and transparent.
Moreover, there exist matrices F M T ∈ R m×n and K M T ∈ R n×p such that the Morse transformation brings Λ u into the MTF. In the above MTF, the pair (Ã 1 , B̃ 1 ) is controllable, the pair (C̃ 4 , Ã 4 ) is observable and the 4-tuple (Ã 3 , B̃ 3 , C̃ 3 , D̃ 3 ) is prime. The proof is given in Section 6.3. In the next proposition, we describe a way to transform the above MTF into the Morse normal form MNF, which is a further simplification of the MTF. We will use the same notations as in Proposition 3.1.
In the above MNF, the pair (Ā 1 , B̄ 1 ) is controllable, the pair (C̄ 4 , Ā 4 ) is observable, and the 4-tuple (Ā 3 , B̄ 3 , C̄ 3 , D̄ 3 ) is prime. The proof of Proposition 3.2 will be given in Section 6.4; in that proof, we will use the construction of the transformation matrices F M N , K M N and T M N , which is formulated in the following algorithm.

MNF Algorithm 3.3.
Step 1: Given the matrix (18), choose F M N and K M N such that the spectra of Ā 1 , Ā 2 , Ā 3 and Ā 4 , defined by the equation below, are mutually disjoint (notice that F M N and K M N preserve the zero blocks of Λ̃ ũ = (Ã, B̃ ũ , C̃, D̃ ũ )).
Step 3: Set
Remark 3.4. It is not surprising that Propositions 3.1 and 3.2 describe results similar to those of Theorem 2.3 and Theorem 2.6 of [9], as we have shown in [15] that there are direct connections between the geometric subspaces (the Wong sequences) of a DAE ∆ : Eẋ = Hx and the invariant subspaces of a control system Λ = (A, B, C, D) ∈ Expl(∆). There are, however, differences between Propositions 3.1 and 3.2 and the results of [9]. In particular, in Theorem 2.6 of [9], one has to solve generalized Sylvester equations, while in Proposition 3.2 we use (constrained) Sylvester equations.
In addition, our transformations differ from those proposed in the original papers [29] and [27] for the MNF and seem to be more transparent and explicit.
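The role of the Sylvester equations mentioned in Remark 3.4 can be illustrated on the simplest unconstrained case: once the diagonal blocks of a block-triangular matrix have disjoint spectra (as arranged in Step 1 of Algorithm 3.3), the off-diagonal coupling can be removed by solving a Sylvester equation. A hedged sketch with SciPy, on toy matrices of our own (not those of the paper):

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Hypothetical block-triangular matrix; diagonal blocks have disjoint spectra
A1  = np.array([[1., 1.], [0., 1.]])   # spectrum {1}
A4  = np.array([[-2.]])                # spectrum {-2}
A12 = np.array([[3.], [4.]])
n1, n4 = A1.shape[0], A4.shape[0]

# Seek X with A1 X - X A4 = -A12; then T = [[I, X], [0, I]] removes the coupling,
# since T^{-1} M T has off-diagonal block A1 X + A12 - X A4 = 0.
X = solve_sylvester(A1, -A4, -A12)     # solves A1 X + X(-A4) = -A12
T = np.block([[np.eye(n1), X], [np.zeros((n4, n1)), np.eye(n4)]])
M = np.block([[A1, A12], [np.zeros((n4, n1)), A4]])
Md = np.linalg.inv(T) @ M @ T

assert np.allclose(Md[:n1, n1:], 0)    # coupling block removed
assert np.allclose(Md[:n1, :n1], A1) and np.allclose(Md[n1:, n1:], A4)
```

Disjointness of the spectra is exactly what makes the Sylvester equation uniquely solvable; the constrained variants used in Proposition 3.2 additionally preserve the zero blocks of the form.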
Recall that the explicitation of a DACS ∆ u is a class of ODECSs with two kinds of inputs of the form (2). In the following theorems, we extend the results of Propositions 3.1 and 3.2 to ODECSs with two kinds of inputs, obtaining the extended Morse triangular form EMTF (Theorem 3.5) and the extended Morse normal form EMNF (Theorem 3.6); in both forms, the corresponding pairs are controllable and observable, as in Propositions 3.1 and 3.2. The proofs of Theorem 3.5 and Theorem 3.6 are given in Section 6.5.
with the matrices and their invariants of the following form: the integers ε 1 , . . . , ε a ∈ N + are the controllability indices of (A cu , B cu ), and the integers ε̄ 1 , . . . , ε̄ b ∈ N + are the controllability indices of (A cv , B cv ).
(ii) A nn ∈ R n2×n2 is unique up to similarity and can always be put in the real Jordan form.
(iii) Both the 4-tuple (A pu , B pu , C pu , D pu ) and the triple (A pv , B pv , C pv ) are prime, and thus controllable and observable. That is, the matrix [Â pu , B̂ pu ; Ĉ pu , 0] is square and invertible and δ = rank D̂ pu ∈ N. The integers σ 1 , . . . , σ c ∈ N + are the controllability indices of the pair (Â pu , B̂ pu ) and they are equal to the observability indices of the pair (Ĉ pu , Â pu ). The integers σ̄ 1 , . . . , σ̄ d ∈ N + are the controllability indices of the pair (A pv , B pv ) and they are equal to the observability indices of the pair (C pv , A pv ).
The integers η 1 , ..., η e ∈ N + are the observability indices of the pair (C o , A o ).
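The controllability indices appearing throughout this section can be computed by the classical column-selection procedure on [B, AB, A 2 B, . . . ]. A small sketch (our own helper, not from the paper):

```python
import numpy as np

def controllability_indices(A, B, tol=1e-10):
    """Controllability indices of (A, B): scan the columns of
    [B, AB, A^2 B, ...] left to right, keeping only columns that enlarge
    the span; index j counts how many powers contribute column j of B."""
    n, m = B.shape
    basis = np.zeros((n, 0))
    counts = [0] * m
    for k in range(n):                          # powers A^k B, k = 0..n-1
        blk = np.linalg.matrix_power(A, k) @ B
        for j in range(m):
            cand = np.hstack([basis, blk[:, [j]]])
            if np.linalg.matrix_rank(cand, tol) > basis.shape[1]:
                basis = cand
                counts[j] += 1
    return sorted(counts, reverse=True)

# Brunovský-type example: one integrator chain of length 2 and one of length 1
A = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
B = np.array([[0., 0.], [1., 0.], [0., 1.]])
assert controllability_indices(A, B) == [2, 1]
```

For a controllable pair the indices sum to n, which gives a quick sanity check on the computation.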
Theorem 4.1. For any ODECS Λ uv of the form (2), there exists an extended Morse transformation EM tran bringing Λ uv into the form represented by the extended Morse canonical form EMCF.
The proof will be given in Section 6.6. Throughout, if we consider only the differential equation of (2) (meaning (2) without the output y), we denote it by Λ uv n,m,s = (A, B u , B v ). Now we introduce the v-reduction of the driving variables and the implicitation (compare [15]) to remove the driving variables v and to convert the EMCF back into a DACS.
In the algorithm below, we summarize how to construct the FBCF for a given DACS ∆ u l,n,m = (E, H, L) based on the explicitation procedure.
Algorithm 4.6.
Step 1: Construct an ODECS Λ uv such that Λ uv ∈ Expl(∆ u ) by Definition 2.2.
Step 2: Find EM tran such that Λ ũṽ = EM tran (Λ uv ) is in the EMTF by Theorem 3.5.
Step 3: Find EM tran such that Λ ūv̄ = EM tran (Λ ũṽ ) is in the EMNF by Theorem 3.6.
Step 4: By the procedure shown in the proof of Theorem 4.1, bring Λ ūv̄ into the EMCF.
Step 5: By Definition 4.2, find the implicitation of the v-reduction of Λ ūv̄ , denoted by ∆ ū .

Example
In this section, we illustrate Algorithm 4.6 by an example taken from [9].
Consider the following mathematical model of an electrical circuit (see Fig. 1), where u = [I, V ] T is the control vector and L, Ca, R, R G , R F are real scalars (all assumed to be nonzero).
In [9], only the matrix pencil sE − H is transformed into a quasi-Kronecker form. Below, we will transform the whole DACS into its FBCF via Algorithm 4.6.
Step 1: Find an ODECS Λ uv ∈ Expl(∆ u ), which we take as Λ uv .
Step 2: Calculate the subspaces V * , U * w , U * v , W * , Y * of Λ w = (A, B w , C, D w ) by Lemma 7.4 of the Appendix. They are W * = X = R 14 , Y * = Y = R 11 , and, by the proof of Theorem 3.5 and Proposition 3.1, we can choose the following transformation.
Step 3: By MNF Algorithm 3.3, set the transformation matrices and then find T 2 M N via the following constrained Sylvester equation, where Ā = Ã + K M N C̃ and B̄ w = B̃ w + K M N D̃ w ; the equation is solvable.
Step 4: Transform each subsystem of Λ̄ w̄ into its canonical form as in Theorem 4.1 to obtain the EMCF. The EMCF indices are ε̄ 1 = 2, ε̄ 2 = 2, ε̄ 3 = 1, δ = 2, σ̄ 1 = σ̄ 2 = · · · = σ̄ 9 = 1. Note that n 2 , a, c, e are all zero and we have only 3 subsystems.

Proofs of Proposition 2.3, Proposition 2.4 and Theorem 2.9
Proof of Proposition 2.3. If. Suppose that Λ uv and Λ uṽ are equivalent via a transformation given by (9). First, Im B̃ ṽ = ker E 1 . Then pre-multiply the differential part of Λ uṽ by E 1 . Thus Λ uṽ is an (I l , ṽ)-explicitation of the following DACS. Since the above DACS can be obtained from ∆ u via Q̄ = Q ′ Q, where Q ′ = [I q , E 1 K; 0, T y ], this proves that Λ uṽ is a (Q̄, ṽ)-explicitation of ∆ u corresponding to the choice of the invertible matrix Q̄. Only if. Since E 1 is of full row rank, any other Q̃ such that Ẽ 1 of Q̃E = [Ẽ 1 ; 0] is of full row rank must be of the form Q̃ = Q̄Q, where Q̄ = [Q 1 , Q 2 ; 0, Q 4 ]. Thus, via Q̃, the system ∆ u is ex-equivalent to the DACS below. We obtain the following equations, using Ẽ † 1 and B̃ ṽ , based on the right-hand side of the above. Thus the explicitation of ∆ u via Q̃, Ẽ † 1 and B̃ ṽ is Λ uṽ , where K = E † 1 Q −1 1 Q 2 and T y = Q 4 . Now we can see that Λ uv and Λ uṽ are equivalent via the transformations listed in (9).
Proof of Proposition 2.4. Consider equation (5) of the (Q, v)-explicitation procedure. Since Q-transformations preserve solutions of ∆ u , equation (5), resulting from a Q-transformation of ∆ u , has the same solutions as ∆ u . Thus we need to prove that equations (5) and (8) have corresponding solutions for any choices of E † 1 and B v . Moreover, the second equation 0 = H 2 x + L 2 u of (5) coincides with 0 = Cx + D u u of (8) (since C = H 2 and D u = L 2 ). So we only need to prove that (x(t), u(t)), with x(t) ∈ C 1 and u(t) ∈ C 0 , is a solution of (5a) if and only if there exists v(t) ∈ C 0 such that (x(t), u(t), v(t)) is a solution of (7), independently of the choice of E † 1 , defining A = E † 1 H 1 and B u = E † 1 L 1 , and of the choice of B v satisfying Im B v = ker E 1 . If. Suppose that (x(t), u(t), v(t)) is a solution of (7). Then we have ẋ(t) = Ax(t) + B u u(t) + B v v(t); pre-multiplying by E 1 and using E 1 A = H 1 , E 1 B u = L 1 and E 1 B v = 0 gives E 1 ẋ(t) = H 1 x(t) + L 1 u(t), which proves that (x(t), u(t)) is a solution of (5a).
Only if. Suppose that (x(t), u(t)) is a solution of (5a). Write E 1 = [E 1 1  E 2 1 ]. Then, without loss of generality, we assume that the matrix E 1 1 is invertible (if not, we permute the components of x such that the first q columns of E 1 are independent). Thus a choice of the right inverse of E 1 gives the matrices A, B u , B v of (7) to be, respectively,

Proof of Theorem 2.9. We use the above system matrices to represent ∆ u and ∆ ũ in the remaining part of the proof.
By the assumption that Λ uv ∈ Expl(∆ u ) and Λ ũṽ ∈ Expl(∆ ũ ), we have (25). We have chosen Λ uv and Λ ũṽ as above for convenience; any other choice based on the explicitation procedure could have been made, since any two ODECSs in an explicitation class are EM-equivalent, so the choice of a (Q, v)-explicitation makes no difference when proving EM-equivalence. Therefore, we will use the system matrices in (25) in the following proof.
If. Suppose Λ uv EM ∼ Λ ũṽ . Then there exist transformation matrices such that (14) holds. Substituting the system matrices of (25) into (14), we obtain (26). Represent T x in block form; then, by the invertibility of T x , the block T 1 x is invertible as well. Subsequently, pre-multiply equation (26) to conclude. Only if. Suppose ∆ u ex−f b ∼ ∆ ũ . Then there exist invertible matrices Q, P and matrices F , G of appropriate sizes such that equation (4) holds. Represent Q = [Q 1 , Q 2 ; Q 3 , Q 4 ], where Q 1 ∈ R q×q , and P −1 = [P 1 , P 2 ; P 3 , P 4 ], where P 1 ∈ R q×q . Then we immediately get q = q̃ and Q 1 P 1 = I, Q 1 P 2 = 0, Q 3 P 1 = 0, which implies that Q 1 and P 1 are invertible matrices, P 2 = 0, and Q 3 = 0. Thus, by the invertibility of Q and P , the blocks Q 4 and P 4 are invertible as well. Then, by equation (4), we get expressions involving F , P −1 and G, which imply that the following equation holds:

Proof of Proposition 2.10
Proof. Without loss of generality, we may assume that ∆ u l,n,m = (E, H, L) is of the following form. Indeed, if not, we can always find Q ∈ Gl(l, R) and P ∈ Gl(n, R) such that ∆ ũ = (QEP −1 , QHP −1 , QL) is of the above form. Then it is not hard to check that the subspaces of ∆ ũ and those of ∆ u are related by P . Therefore, in order to show that the relations of the subspaces claimed in Proposition 2.10 hold, replacing ∆ u by ∆ ũ makes no difference, and thus we will assume that ∆ u is of the above form in what follows.
The following system, denoted Λ w = Λ uv , is a (Q, v)-explicitation of ∆ u . First, we calculate V i (Λ w ) through equation (45) of the Appendix. Comparing the resulting expression with equation (42) of the Appendix, it is easily seen that the subspace sequences V i+1 (Λ w ) and V i+1 (∆ u ) are calculated in the same way.
In the above formula, according to the special form of E, we directly calculate the preimage.

It follows that
It is seen from the above equation and (47) of the Appendix that the subspace sequences W i+1 (Λ w ) and W i+1 (∆ u ) are calculated in the same way. Since the initial conditions satisfy W 0 (Λ w ) = W 0 (∆ u ) = {0}, the two sequences coincide. Then, from (43) and (44), it is seen that the subspace sequences W i and Ŵ i are calculated in the same form; their difference comes from their initial conditions only. Similarly, from (47) and (49), it is seen that W i and Ŵ i have different initial conditions but evolve in the same way.

6.3. Proof of Proposition 3.1

Proof. Observe that the transformation matrix T s decomposes the state space X of Λ u into X = X 1 ⊕ X 2 ⊕ X 3 ⊕ X 4 . Then consider the following equation and subspaces. Now, applying (46), for i = n, to both Λ and the dual system of Λ (see the Appendix), it follows that B 1 3 , B 1 4 , C 1 4 , C 3 4 , D 1 3 , D 1 4 , D 4 2 are all zero. Then, applying (45), for i = n, to both Λ and its dual system, we obtain equations (28) and (29). The lower parts of equations (28) and (29) give C V * ⊆ Im D and (B) T (W * ) ⊥ ⊆ Im (D) T , which implies that B 1 2 and C 4 2 are zero. On the other hand, equation (28) implies that there exist matrices F 1 ∈ R m3×n1 and F 2 ∈ R m3×n2 such that the corresponding relation holds. Then, setting F = [0, 0, 0, 0; F 1 , F 2 , 0, 0], and since W * is feedback invariant, equation (29) also holds for the transformed system. Thus the upper part of (29) implies that there exist K 1 ∈ R n2×p3 and K 2 ∈ R n4×p3 such that the corresponding relation holds. The system matrices of Λ̃ ũ are given in (18). Now we will show that (Ã 1 , B̃ 1 ) is controllable. By Lemma 4 of [27] applied to Λ̃ ũ , we get W n (Λ̃ ũ | U * u ) = W n (Λ̃ ũ ) ∩ V * (Λ̃ ũ ), where W i (Λ̃ ũ | U * u ) denotes the subspace W i when the input is restricted to U * u . Using the system matrices (18) to calculate W i (Λ̃ ũ | U * u ) and W i (Λ̃ ũ ) ∩ V * (Λ̃ ũ ) gives W n (Λ̃ ũ | U * u ) = B 1 + Ã 1 B 1 + · · · + (Ã 1 ) n−1 B 1 , where B 1 = Im [B̃ 1 , 0, 0, 0] T .
We can see from the above equation that the reachability space of (Ã 1 ,B 1 ) is W * (Λũ) ∩ V * (Λũ) = X 1 , which implies that (Ã 1 ,B 1 ) is controllable. Since the proof of the observability of (C 4 ,Ã 4 ) is completely dual to the above proof, we omit that part.
(i) We will prove that any controllable Λ uv n,m,s = (A, B u , B v ) can be transformed into the Brunovský canonical form with indices (ε 1 , . . . , ε m ) and (ε̄ 1 , . . . , ε̄ s ); the remaining part of the transformation is then straightforward to see. Since Λ uv = (A, B u , B v ) is a control system without output, in view of the extended Morse equivalence of Definition 2.7, we just need to prove that there exist transformation matrices T x , T u , T v , F u , F v , R such that the transformed system matrices are in the Brunovský canonical form (notice the triangular form of the input transformation acting on w = [u T , v T ] T ). First, from classical linear system theory (see, e.g., [11]), using only a state coordinates transformation and state feedback, i.e., choosing suitable T x , F v , F u and setting T u = I m , T v = I s , R = 0, we can transform Λ uv into the following form. Moreover, without loss of generality, we assume rank B w = m + s (if not, we can always permute the variables of u and v such that the first m 1 columns of B u and the first s 1 columns of B v are independent, where m 1 = rank B u and s 1 = rank B v ; then we work with the matrices formed by these independent columns only, the remaining ones being made zero by suitable transformations T u and T v ). Thus the matrix Γ = [Γ u Γ v ], where Γ u = (b l i ) and Γ v = (b̄ l̄ i ), with 1 ≤ i ≤ m + s, 1 ≤ l ≤ m and 1 ≤ l̄ ≤ s, is invertible. Then we suppose that the controllability indices κ i satisfy κ 1 ≥ κ 2 ≥ · · · ≥ κ m+s ≥ 1.
Note that in the case of the Brunovský form for a classical ODECS (with one kind of inputs), we could use T w = Γ as the input coordinates transformation matrix. However, Λ uv has two kinds of inputs and the input coordinates transformation matrix should have a triangular form (see Remark 2.8(ii)). In order to obtain such an input coordinates transformation matrix, we implement the following procedure.
to get (we delete the "tildes" over x i , u j and v j ) x (κ i ) i = b 1 i u 1 + · · · + b m i u m + 0 + b̄ 2 i v 2 + · · · + b̄ s i v s , for 2 ≤ i ≤ m + s. Step i = k + 1: Assume that after k steps, we have defined ℓ k and ε i , for 1 ≤ i ≤ ℓ k , as well as ℓ̄ k and ε̄ i , for 1 ≤ i ≤ ℓ̄ k , such that ℓ k + ℓ̄ k = k, and the system reads (the term "0" indicates that v 1 , . . . , v ℓ̄ k are missing) x (κ i ) i = b 1 i u 1 + · · · + b m i u m + 0 + b̄ ℓ̄ k +1 i v ℓ̄ k +1 + · · · + b̄ s i v s , for k + 1 ≤ i ≤ m + s. Then two cases are possible: either for all ℓ̄ k + 1 ≤ j ≤ s we have b̄ j k+1 = 0, or there exists ℓ̄ k + 1 ≤ j ≤ s such that b̄ j k+1 ≠ 0. In the first case, set ℓ k+1 = ℓ k + 1, ε ℓ k+1 = κ k+1 , ℓ̄ k+1 = ℓ̄ k and define the transformation below, which is well-defined because, by controllability, at least one b j k+1 ≠ 0 for j > ℓ k . We get (we delete the "tildes" over x i , u j and v j ) x (κ i ) i = b 1 i u 1 + · · · + b m i u m + 0 + b̄ ℓ̄ k +1 i v ℓ̄ k +1 + · · · + b̄ s i v s , for k + 2 ≤ i ≤ m + s.
Consider an ODECS Λ uv n,m,s,p = (A, B u , B v , C, D u ) of the form (2). The state, input and output spaces of Λ uv will be denoted by X , U uv and Y , respectively. The input subspaces of u and v will be denoted by U u and U v , respectively. Thus we have U uv = U u ⊕ U v .
Recall that Λ uv can be expressed as a classical ODECS Λ w n,m+s,p = (A, B w , C, D w ) of the form (3). The input space of Λ w is denoted by U w and, clearly, U w = U uv = U u ⊕ U v . We now recall the invariant subspaces V and W defined in [26] and [27] for Λ w (generalizing the classical invariant subspaces [2, 35, 36] given for D u = 0). Denote by V * (respectively, U * w ) the largest null-output (A, B w )-controlled invariant subspace (respectively, input subspace).
Correspondingly, a subspace W ⊆ R n is called an unknown-input (C, A)-conditioned invariant subspace if there exists K ∈ R n×p such that (A + KC)W + (B w + KD w )U w = W and a subspace Y ⊆ R p is called an unknown-input (C, A)-conditioned invariant output subspace if Denote by W * (respectively Y * ) the smallest unknown-input (C, A)-conditioned invariant subspace (respectively output subspace).
Lemma 7.4 ([26]). Initialize V 0 = X = R n and, for i ∈ N, define V i+1 inductively, with the subspaces U i ⊆ U w for i ∈ N given correspondingly. Then V * = V n and U * w = U n . Correspondingly, initialize W 0 = {0} and, for i ∈ N, define W i+1 inductively, with the subspaces Y i ⊆ Y for i ∈ N given correspondingly. Additionally, define a sequence Ŵ i of subspaces. Then W * = W n = Ŵ n and Y * = Y n .
Note that when considering the above defined invariant subspaces for the dual system (Λ w ) d of Λ w , given by (Λ w ) d = (A T , C T , (B w ) T , (D w ) T ), we have the following results [28], [27]:
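For the special case D w = 0, the limit V * of Lemma 7.4 reduces to the classical invariant subspace algorithm V 0 = ker C, V i+1 = ker C ∩ A −1 (V i + Im B). The sketch below implements that restricted case only, as a hedged stand-in for the general recursion:

```python
import numpy as np

def _orth(M, tol=1e-10):
    """Orthonormal basis of the column span of M (possibly 0 columns)."""
    if M.size == 0:
        return np.zeros((M.shape[0], 0))
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, s > tol]

def _null(M, tol=1e-10):
    """Orthonormal basis of the kernel of M."""
    U, s, Vt = np.linalg.svd(M)
    r = int(np.sum(s > tol))
    return Vt[r:].T

def intersect(S1, S2):
    """Basis of Im S1 ∩ Im S2."""
    N = _null(np.hstack([S1, -S2]))
    return _orth(S1 @ N[:S1.shape[1], :])

def preimage(M, S):
    """Basis of {x : Mx ∈ Im S}."""
    N = _null(np.hstack([M, -S]))
    return _orth(N[:M.shape[1], :])

def v_star(A, B, C):
    """Largest (A, B)-controlled invariant subspace contained in ker C
    (classical D = 0 case): V_0 = ker C, V_{i+1} = ker C ∩ A^{-1}(V_i + Im B)."""
    n = A.shape[0]
    V = _null(C)
    for _ in range(n):                        # stabilizes within n steps
        V = intersect(_null(C), preimage(A, _orth(np.hstack([V, B]))))
    return V

# Double integrator with position output: relative degree 2, so V* = {0}
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
assert v_star(A, B, C).shape[1] == 0
```

The general recursion of Lemma 7.4 additionally carries the feedthrough D w and produces the input and output sequences U i and Y i ; the restriction above is only meant to make the geometric iteration tangible.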