Remarks on the realization of time-varying systems

The realization problem of nonlinear time-varying input–output equations is considered. Differentials of the state coordinates, necessary for realization, are determined by the vector space of differential one-forms, spanned over the field of meromorphic functions. Formulas for computing the basis one-forms are given, based on the Euclidean division of noncommutative polynomials. Moreover, it is shown that in the case of a reducible system, the subspace admits a basis with certain structure, explicitly related to reduced input–output equations.


INTRODUCTION
Transforming the input–output (i/o) equation describing a control system into the state-space form is known as the realization problem. Note that the problem for the time-invariant nonlinear case has been studied extensively [3,4,7–9,15,19]. In particular, the papers [4] and [3] suggest two alternative ways of finding the basis of the vector space that defines the state coordinates. In both cases explicit formulas for the basis vectors are given. The first method relies on the Euclidean division of non-commutative polynomials. The second method is based on the concept of adjoint polynomials. The first method is, up to the computations involved, similar to that for commutative polynomials and is therefore intuitively understandable by engineers familiar with linear systems. The concept of adjoint polynomial is far less popular. For this reason, the first method is chosen for the extension to the time-varying case.
The main goal is to find the state coordinates necessary for deriving a realization. The algebraic approach of differential 1-forms is used. It is extended from the time-invariant case in [7] to the time-varying case in [16]. The solution is given in Theorem 2, generalizing the polynomial formulas from [4] to time-varying systems. The difference which makes this extension non-obvious is that a time-invariant system is described by two non-commutative polynomials, while a time-varying system requires three polynomials, and the third one has to be incorporated into the analysis. The proof of Theorem 2 is completely different from that of its counterpart in [4].
As a byproduct, Theorem 3 shows that in the case of a reducible i/o equation the subspace determining the state coordinates admits a basis with a certain structure, explicitly related to the reduced i/o equation. At first sight, it may seem that this result also allows finding the state coordinates for the reduced system, but this is actually not the case, since in practice the equation of the reduced system is usually not known. The main value of this structure is rather its theoretical insight: it enables one to prove other results, for instance on the minimal realization [12], and it is completely new even for time-invariant systems.
In this paper, the interest is in generic properties that hold on some open and dense subsets of suitable domains, exactly like in [7]. In what follows, our theorems hold generically, which means that for almost every point of the domain there is an open neighbourhood on which the statement holds or the object is defined. If one assumes that some rank condition holds generically, the solution exists around almost every point, though it is not necessarily global (that is, defined almost everywhere).
Finally, note that the realization problem studied in this paper is a special case of a more general problem, examined in [17] for explicit and in [18] for implicit time-varying systems. The papers [17,18] address the problem of eliminating the input derivatives in the so-called generalized state equations, which depend, besides the input, upon the input derivatives as well. The conditions are given in terms of the existence of a generalized state transformation such that the transformed equations are in the classical state-space form. It has been known for a long time (see, for instance, [14,15]) that the i/o equation can be easily converted into a special and very simple form of the generalized state equations. So, in principle, it is possible to derive the results of Theorem 1 below from the results of [17], taking this special form into account. However, note that the alternative and direct proof, based on the i/o equations, is simpler, and the main results of this paper are rather in Theorems 2 and 3, not in Theorem 1. Moreover, the polynomial formulas based on the Euclidean division algorithm, presented in Theorem 2, cannot be extended to arbitrary generalized state equations, since they depend on the simple structure obtained from converting the i/o equations.
The paper is organized as follows. In Section 2 we give a brief summary of the algebraic framework of differential 1-forms and also recall some basic facts about the theory of non-commutative polynomial rings. In Section 3 the system reduction and realization are explained. Section 4 presents the main results and Section 5 gives an example. Finally, Section 6 draws the conclusions.

ALGEBRAIC FRAMEWORK
In this paper, two types of single-input single-output nonlinear time-varying system representations are considered: first, the i/o equation in the form

y^(n) = ϕ(t, y, ẏ, . . ., y^(n−1), u, u̇, . . ., u^(r)),    (1)

and second, the state equations

ẋ = f(t, x, u),  y = h(t, x),    (2)

where u(t) ∈ R is the input, y(t) ∈ R is the output, x(t) ∈ R^n is the state variable, and ϕ, f, and h are meromorphic functions of their arguments. For the sake of compactness, the argument t in the functions y^(i)(t), u^(j)(t), and x^(k)(t) for i, j, k ≥ 0 will be omitted from now on. The special case where the input u is missing in system (1) or (2) is not treated in this paper. Sometimes the i/o equation is also considered in the implicit form

ψ(t, y, ẏ, . . ., y^(n), u, u̇, . . ., u^(r)) = 0,    (3)

where ψ(·) = y^(n) − ϕ(·). Additionally, assume that ψ(·) and f(t, x, u) (in their expanded form) do not include terms depending on t only. This requirement is consistent with the linear theory. Below we briefly recall the algebraic framework of differential 1-forms from [7], extending it to the time-varying case, i.e. to the case when the system equations depend explicitly on time t. The approach of 1-forms is based on the idea of working with the differentials of the nonlinear system equations rather than with the original equations themselves. This allows linearization of the intermediate computations.

Differential forms
Let K be the field of meromorphic functions in a finite number of independent variables from the set {t, y^(ℓ), ℓ < n, u^(k), k ≥ 0}. The variable y^(n) is considered a dependent variable according to (1) and as such is substituted by ϕ(t, y, . . ., y^(n−1), u, . . ., u^(r)). The variables y^(n+ℓ), ℓ ≥ 1, are substituted by ϕ^(ℓ)(·); if necessary, repeated substitutions are applied. The derivative operator d/dt : K → K is defined accordingly, using these substitutions. The pair (K, d/dt) is a differential field, see [11].
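The substitution rule above can be sketched in sympy. This is a minimal illustration, not code from the paper: the equation `phi`, the symbol names `y0, y1, u0, u1`, and the helper `ddt` are all hypothetical, chosen for a second-order example y″ = ϕ(t, y, ẏ, u) (so n = 2, r = 0).

```python
import sympy as sp

# Independent variables of K: t, y, y', u, u' (y'' is dependent by (1))
t, y0, y1, u0, u1 = sp.symbols('t y0 y1 u0 u1')

# a hypothetical right-hand side phi(t, y, y', u)
phi = t*y1*u0 + sp.sin(y0)

def ddt(f):
    """Total derivative d/dt on K: by (1), y'' is replaced by phi, so the
    result again depends on independent variables only.  To differentiate
    expressions containing u', the variable list is extended with u''."""
    out = sp.diff(f, t)
    for var, var_dot in [(y0, y1), (y1, phi), (u0, u1)]:
        out += sp.diff(f, var) * var_dot
    return sp.expand(out)

assert ddt(y0) == y1                         # d/dt y = y'
assert sp.simplify(ddt(y1) - phi) == 0       # d/dt y' = phi, not y''
```

The key point the sketch shows is that d/dt never produces the dependent variable y″: it is substituted by ϕ on the fly, so K is closed under d/dt.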
Denote by E := sp_K{dζ : ζ ∈ K} the vector space of differential 1-forms over K. The space E is closed under the operation d/dt. One says that ω ∈ E is an exact 1-form if ω = dα for some α ∈ K. A 1-form ω for which dω = 0 is said to be closed (locally exact). A subspace V ⊆ E is said to be closed or completely integrable if it admits locally an exact basis, V = sp_K{dζ_1, . . ., dζ_r} [6]. The integrability of V = sp_K{ω_1, . . ., ω_r} can be checked by the Frobenius theorem: V is completely integrable if and only if dω_i ∧ ω_1 ∧ . . . ∧ ω_r = 0 for i = 1, . . ., r. Here d is the exterior differential operator and ∧ denotes the wedge product, see [6].
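For a single 1-form in three commuting variables, the Frobenius condition ω ∧ dω = 0 reduces to the classical test ω · curl ω = 0. The following sketch (the helper name `frobenius_1form` and the examples are hypothetical, not from the paper) illustrates the test; in the paper the variables would be t, y^(ℓ), u^(k) instead of x, y, z.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def frobenius_1form(a, b, c):
    """Integrability test for omega = a dx + b dy + c dz:
    omega is completely integrable iff omega ^ d(omega) = 0,
    which for one 1-form in R^3 means omega . curl(omega) == 0."""
    curl = (sp.diff(c, y) - sp.diff(b, z),
            sp.diff(a, z) - sp.diff(c, x),
            sp.diff(b, x) - sp.diff(a, y))
    return sp.simplify(a*curl[0] + b*curl[1] + c*curl[2]) == 0

# omega = d(x*y*z) = yz dx + xz dy + xy dz is exact, hence integrable
assert frobenius_1form(y*z, x*z, x*y)
# omega = y dx + dz (a contact form) admits no integrating factor
assert not frobenius_1form(y, sp.Integer(0), sp.Integer(1))
```

The second example shows why integrability is a genuine restriction: not every 1-form, even a non-vanishing one, can be scaled into an exact differential.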

Polynomial framework
Next, the algebraic approach of 1-forms is supplemented by the theory of non-commutative polynomial rings. Polynomials allow representing the 1-forms, as well as the operations with them, in a compact form; such tools have been used to address many problems for nonlinear time-invariant systems [3,4,10,20].
The field K and the operator d/dt induce a non-commutative ring of left differential polynomials K[s]. A polynomial p ∈ K[s] can be uniquely written in the form

p = p_k s^k + p_{k−1} s^{k−1} + . . . + p_1 s + p_0,

where s is a formal variable and p_i ∈ K for i = 0, . . ., k. A polynomial p ≠ 0 if and only if at least one of the functions p_i is non-zero. If p_k ≢ 0, then the integer k is called the degree of p and denoted by deg p. Additionally, we set deg 0 := −∞. The polynomial p is called monic if p_k = 1. The addition of polynomials is defined in the standard way, but for a ∈ K ⊂ K[s] the multiplication is defined by the commutation rule s · a := a s + ȧ. The application of the (left) Euclidean division algorithm to polynomials p, q (q ≢ 0) allows one to find a, b ∈ K[s] with deg b < deg q such that p = q a + b, see [5]. Then a is called the left quotient and b the left remainder of p and q.
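The commutation rule and the left Euclidean division can be sketched concretely. The snippet below is a minimal illustration, not an implementation from the paper: the helper names `pmul`, `deg`, `left_divide` are hypothetical, polynomials are coefficient lists (index i holds the coefficient of s^i), and for simplicity the coefficients depend on t only.

```python
import sympy as sp

t = sp.symbols('t')

def pmul(p, q):
    """Product in K[s] under the commutation rule s*a = a*s + da/dt.
    Iterating the rule gives s**i * f = sum_m binom(i, m) f^(i-m) s**m."""
    out = [sp.Integer(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            for m in range(i + 1):
                out[m + j] += pi * sp.binomial(i, m) * sp.diff(sp.sympify(qj), t, i - m)
    return [sp.expand(c) for c in out]

def deg(p):
    """Degree of a coefficient list; deg 0 = -1 here (standing in for -oo)."""
    d = len(p) - 1
    while d >= 0 and sp.simplify(p[d]) == 0:
        d -= 1
    return d

def left_divide(p, q):
    """Left Euclidean division: find a, b with p = q*a + b, deg b < deg q.
    Assumes q is non-zero and given without trailing zero coefficients."""
    p = [sp.sympify(c) for c in p]
    dq = deg(q)
    a = [sp.Integer(0)] * max(len(p) - dq, 1)
    while deg(p) >= dq:
        k = deg(p) - dq
        c = sp.cancel(p[deg(p)] / q[dq])     # kill the current leading term
        a[k] += c
        mono = [sp.Integer(0)] * k + [c]
        for i, coef in enumerate(pmul(q, mono)):
            p[i] = sp.expand(p[i] - coef)
    return a, p

assert pmul([0, 1], [t]) == [1, t]           # s * t = t*s + 1
a, b = left_divide([0, t, 1], [t, 1])        # s**2 + t*s = (s + t)*s + 0
assert a == [0, 1] and deg(b) < 1
```

Note how the commutation rule makes the quotient depend on derivatives of the coefficients; in the commutative case `pmul` would reduce to ordinary polynomial multiplication.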
A left differential polynomial a ∈ K[s] may be interpreted as an operator a(s) : E → E, acting on the differentials of the elements of K by

a(s)dζ := ∑_{i=0}^{k} a_i dζ^(i), ζ ∈ K.    (4)

It is natural to extend (4) for a = ∑_{i=0}^{k} a_i s^i as a(s)(α dζ) := ∑_{i=0}^{k} a_i (α dζ)^(i). By applying the operator d to equation (1) one obtains

ω := dy^(n) − ∑_{i=0}^{n−1} (∂ϕ/∂y^(i)) dy^(i) − ∑_{j=0}^{r} (∂ϕ/∂u^(j)) du^(j) − (∂ϕ/∂t) dt = 0,    (5)

called the globally linearized i/o equation. Moreover, ω is called the differential form of system (1). The 1-form ω can be represented in terms of three non-commutative polynomials from the ring K[s] by rewriting (5) as

ω ≡ p(s)dy + q(s)du + ρ dt,    (6)

where

p(s) = s^n − ∑_{i=0}^{n−1} (∂ϕ/∂y^(i)) s^i,  q(s) = −∑_{j=0}^{r} (∂ϕ/∂u^(j)) s^j,  ρ = −∂ϕ/∂t.    (7)

Note that ρ is a polynomial of degree 0 (a function in K); thus ρ in (6) does not depend on the polynomial variable s.
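Assuming the expressions in (7) take the standard form p(s) = s^n − ∑ (∂ϕ/∂y^(i)) s^i, q(s) = −∑ (∂ϕ/∂u^(j)) s^j, ρ = −∂ϕ/∂t, the three polynomials are just partial derivatives of ϕ and can be read off mechanically. A sketch for a hypothetical second-order example y″ = ϕ(t, y, ẏ, u) (the symbols `y0, y1, u0` and the particular `phi` are invented for illustration):

```python
import sympy as sp

t, y0, y1, u0 = sp.symbols('t y0 y1 u0')     # t, y, y', u
phi = t*y1*u0 + sp.sin(y0)                   # hypothetical phi(t, y, y', u)

# coefficient lists over K: index i holds the coefficient of s**i
p = [-sp.diff(phi, y0), -sp.diff(phi, y1), sp.Integer(1)]   # deg p = n = 2
q = [-sp.diff(phi, u0)]                                     # deg q = r = 0
rho = -sp.diff(phi, t)                                      # function in K

assert p == [-sp.cos(y0), -t*u0, 1]          # monic, as required below
assert q == [-t*y1]
assert rho == -y1*u0
```

The example also makes the remark after (7) concrete: ρ is an ordinary function, while p and q genuinely involve the polynomial variable s.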

Reduction
If the 1-form ω satisfies ω = γ(s)π for some γ ∈ K[s] with deg γ ≥ 1, then π ∈ E is called a reduced 1-form of ω. As claimed in [12], the 1-form π is exact or can be made exact by multiplying it by an integrating factor α ∈ K; see also [2] for the time-invariant case. Thus we may write

απ = dφ.    (8)

If (8) holds, then there exist a function F and a non-zero function k = k(t, u, u̇, . . .) such that

ψ(·) = k F(φ, φ̇, . . ., φ^(ν))    (9)

(see [19], Lemma 6.2). In order to construct a new reduced system from φ, we make the following technical assumption.

Assumption 1. F(0, 0, . . ., 0) = 0.
This assumption has been made in most papers in the literature; it means that zero is a solution of the autonomous differential equation F(φ, φ̇, . . ., φ^(ν)) = 0. If a function φ in (9) (or, equivalently, in (8)) satisfies Assumption 1, then φ is called a reduced variable of (1), and the system φ = 0 is a reduced i/o equation.

Realization
The set of equations (2) is called the nth-order state-space realization of the nth-order i/o equation (1) if both equations have the same solution sets {u(t), y(t), t ≥ 0}. Observe that this definition covers only realizations with state dimension equal to the order of the i/o equation. In principle, realizations of lower (or higher) dimension are also allowed. If the i/o equation is reducible, then a realization of the reduced equation is also considered a realization of the original system.
To find a realization of the i/o equation (1), we follow the approach from [7] and generalize some notions and constructions from the time-invariant to the time-varying case, as done in [13].
The relative degree ξ of a 1-form ω ∈ E is defined as the least integer such that ω^(ξ) ∉ sp_K{dt, dy, . . ., dy^(n−1), du, . . ., du^(r)}. If such an integer does not exist, we set ξ := ∞. Next, we define for equation (1) the sequence of subspaces H_k of E by the recursive formula

H_1 = sp_K{dt, dy, . . ., dy^(n−1), du, . . ., du^(r)},
H_{k+1} = {ω ∈ H_k | ω̇ ∈ H_k}, k ≥ 1.    (10)

It is obvious that the sequence (10) is decreasing. Denote its limit by H_∞, so that H_∞ = ∩_{k≥1} H_k.

Lemma 1. Each H_k contains the 1-forms whose relative degrees are equal to k or higher than k (with respect to the input u^(r+1)).
Proof. By the definitions of the relative degree and of the vector space H_1, the claim of the lemma is equivalent to the following statement: ω ∈ H_k ⇒ ω^(k−1) ∈ H_1. This claim can be shown by mathematical induction. The statement is obvious for k = 1; assuming that it holds for k, we prove it for k + 1, i.e. we show that ω ∈ H_{k+1} ⇒ ω^(k) ∈ H_1. By (10), a 1-form ω ∈ H_{k+1} ⊂ H_k satisfies ω̇ ∈ H_k; applying the induction assumption to ω̇ gives (ω̇)^(k−1) = ω^(k) ∈ H_1. □

The subspace H_∞ contains the 1-forms with infinite relative degree, i.e. the 1-forms that are never influenced by the input of the system. Note that the 1-form dt always belongs to H_∞, since it has infinite relative degree. As in the time-invariant case, the subspaces H_k are, in general, not integrable, i.e. they do not admit an exact basis.
Proposition 1. The subspace H_∞ is completely integrable.

Proof. We first form the extended system of (1), as done in [7] for time-invariant systems, defining the state variables z_1 := y, . . ., z_n := y^(n−1), z_{n+1} := u, . . ., z_{n+s+1} := u^(s) and the new input variable v := u^(s+1). We additionally introduce a state variable z_{n+s+2} := t. In these variables system (1) can be rewritten in the form of the time-invariant state equations

ż_1 = z_2, . . ., ż_{n−1} = z_n, ż_n = ϕ(z_{n+s+2}, z_1, . . ., z_{n+s+1}),
ż_{n+1} = z_{n+2}, . . ., ż_{n+s+1} = v, ż_{n+s+2} = 1, y = z_1.    (11)

The subspace H_∞, computed for (11), is integrable by Proposition 3.3 of [1]. Thus H_∞ computed for (1) is also integrable, since the subspace H_∞ can be identified for system (1) and its equivalent representation (11).
The reduced 1-form of (2) is defined as the reduced 1-form of its i/o equation.

Lemma 2. A 1-form ω ∈ H_∞ if and only if ω is a linear combination (over K) of reduced 1-form(s) of (1) and dt.
Proof. As in the proof of Proposition 1, we can rewrite equation (1) in the form of the time-invariant state equations (11) and apply Proposition 3.12 of [7]. □
From (10) it follows that the subspaces H_k, k = 1, . . ., r + 1, have the following structure:

H_k = sp_K{dt, dy, . . ., dy^(n−1), du, . . ., du^(r+1−k)}.    (12)

Theorem 1. The i/o equation (1) has a state-space realization in the form (2) if and only if the subspace H_{r+2}, computed for equation (1), is completely integrable. Moreover, the state coordinates for a realization can be found by integrating the basis 1-forms of H_{r+2}.
The proof of the Theorem is given in [12].

MAIN RESULTS
Introduce the 1-forms which simplify the computation of the subspace H_{r+2} for system (1). These 1-forms help to construct the nth-order state-space realization of the i/o equation (1). Let

ω_ℓ := p_ℓ(s)dy + q_ℓ(s)du, ℓ = 1, . . ., n,    (13)

where p_ℓ and q_ℓ are polynomials which can be recursively calculated, by the left Euclidean division, from the equalities (14), with the initial polynomials p_0 := p, q_0 := q given by (7). Note that the 1-forms ω_ℓ, ℓ = 1, . . ., n, do not include a term analogous to ρ dt in (6). The reason is that the 1-forms ω_ℓ will be used below as basis vectors of the subspace H_{r+2}. Since the 1-form dt is present in all the subspaces, and linear transformations (over K) of the basis 1-forms are naturally allowed, there is no need to include such a term in ω_ℓ.
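The display containing the recursion (14) is not reproduced here. One plausible reading, consistent with the facts used in the proof below (deg p_ℓ = n − ℓ and all p_ℓ monic), is that p_ℓ and q_ℓ are the left quotients of p_{ℓ−1} and q_{ℓ−1} upon division by s. Under that assumption, division by s admits a simple coefficient recursion, sketched below; the helper name `divide_by_s` is hypothetical, and the coefficients are taken to depend on t only for simplicity.

```python
import sympy as sp

t = sp.symbols('t')

def divide_by_s(p):
    """Left division by s: write p = s*a + b with b in K (deg b < 1).
    Since s*f = f*s + f', comparing coefficients of p = s*a + b gives
    a[i-1] = p[i] - a[i]' (from the top down) and b = p[0] - a[0]'.
    Assumes deg p >= 1; p[i] is the coefficient of s**i."""
    a = [sp.Integer(0)] * (len(p) - 1)
    for i in range(len(p) - 1, 0, -1):
        adot = sp.diff(a[i], t) if i < len(a) else sp.Integer(0)
        a[i - 1] = sp.simplify(sp.sympify(p[i]) - adot)
    b = sp.simplify(sp.sympify(p[0]) - sp.diff(a[0], t))
    return a, b

# s**2 + t*s + sin(t) = s*(s + t) + (sin(t) - 1):
# the remainder picks up a derivative term absent in the commutative case
a, b = divide_by_s([sp.sin(t), t, sp.Integer(1)])
assert a == [t, 1] and b == sp.sin(t) - 1
```

Note that each division step strips exactly one degree and preserves monicity, which matches the properties of p_ℓ invoked in the proof.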
Second, we show that in H_{r+2} there is no other 1-form that is linearly independent of dt, ω_1, . . ., ω_n over K. On the one hand, ω_1, . . ., ω_n are linearly independent over K due to the construction (14), where deg p_ℓ = n − ℓ and all p_ℓ are monic. On the other hand, H_{r+2} has a basis consisting of exactly n basis 1-forms in addition to dt. This can be deduced from the fact that H_1 has, by (10), n + r + 2 basis 1-forms, while by the structure (12) the number of basis 1-forms always decreases by 1 when stepping from H_k to H_{k+1}, k = 1, . . ., k*. The latter follows from the fact that the input u is scalar.
Second, we show that there are no other 1-forms in H_{r+2} that are linearly independent of dt, dφ, ω̄_1, . . ., ω̄_{n−1} over K. This can be done as in the proof of Theorem 2. Additionally, we have to show that the 1-form dφ is linearly independent of ω̄_1, . . ., ω̄_{n−1} over K. This indeed holds, since deg p̄ = n − 1 and p̄ is monic.
Any 1-form ω = ∑_{α=0}^{k} a_α s^α (dy) + ∑_{β=0}^{ℓ} b_β s^β (du) + c_0 dt, where a_α, b_β, c_0 ∈ K, may be expressed in terms of the left differential polynomials as ω = a(s)dy + b(s)du + c dt, where a = ∑_{α=0}^{k} a_α s^α, b = ∑_{β=0}^{ℓ} b_β s^β ∈ K[s] and c = c_0 ∈ K. It is easy to see that s(ω) = ω̇ for ω ∈ E.