A polynomial approach to a nonlinear model matching problem

The paper studies the model matching problem for nonlinear systems described by a higher-order input–output differential equation, not necessarily realizable in the state-space form. Only the feedforward solution is sought. The problem statement and solution rely on the recently introduced concept of a generalized transfer function for nonlinear systems. We require that the transfer function of the compensated system and that of the prespecified model be equal, as in the linear case. However, in the nonlinear transfer function formalism one does not work with equations but with differential one-forms, and the existence of the compensator is restricted by the integrability of the one-form corresponding to the compensator. Necessary and sufficient but nonconstructive solvability conditions are given. The second theorem lists a number of different (constructive) conditions under which the one-form is integrable. Additional freedom is sometimes obtained by forcing the conditions of the second theorem to hold via introducing assumptions suggested by the Euclidean division algorithm.


INTRODUCTION
The nonlinear model matching problem (MMP) is a typical control problem, which plays an important role in various other control problems, such as input-output (i/o) linearization, disturbance decoupling, etc. The problem has been addressed by many authors, using somewhat different problem formulations. In general, the goal of the MMP is to compensate a given nonlinear system in order to make its behaviour similar to that of a given nonlinear model. Within the (dominating) state space approach, the typical requirement is that the difference between the outputs of the model and the compensated system be independent of input or even zero (in the strong version of the MMP), see [3,4,12,14,19]. The interested reader is referred to [15] for a more detailed review on the exact model matching of linear systems by state feedback.
The less studied i/o approach requires that the compensated system and the model admit the same differential polynomial [7], or that the differential (over)fields associated to the compensated system and, respectively, to the model be isomorphic [24]. The case of the MMP for linear time-varying systems was addressed in [17,18]. Some recent results for nonlinear discrete-time systems include [25], where a very specific class of i/o equations is considered, and [1] and the references therein.
The compensators found within these approaches are not compatible. The state-space approach sometimes excludes a trivial compensator. The following example from [19] is intended to illustrate the phenomenon. Consider the system F and the model G, described by the equations ẋ = xu, y = x and ξ̇ = ξv, y = ξ, respectively.
The trivial (identity) compensator R, given by u = v, does not give a solution to the MMP because the difference between the output of the model and the output of the compensated system, (x₀ − ξ₀) exp(∫₀ᵗ v(τ)dτ), is independent of v only if the initial state of the system and that of the model coincide, i.e., x₀ = ξ₀. The i/o approach usually does not exclude the trivial compensator as a solution.
The i/o approach involves other difficulties in the construction of a feedforward solution to the MMP. As an example, consider the system F and the model G given as ẏ = −y² + u and ẏ = v, respectively. Clearly, the compensator u = y² + v gives a solution, but this is not a feedforward solution, as it depends on the output y of the system. To find a feedforward solution, one typically takes a number of derivatives of the feedback solution and uses the relations defined by F and G until the feedforward solution can be found. In our example, differentiation gives u̇ = 2yẏ + v̇.
Since, by F and G, ẏ can be replaced by v, u̇ = 2yv + v̇ can be considered as a feedforward compensator (1). However, the compensated system FR, which under (1) reads as ÿ = −2yẏ + 2yv + v̇, is not anymore equivalent to the model G. Indeed, FR, although irreducible, is of higher order than G. The issues discussed above motivated the study reported in this paper. Our aim was to find a feedforward solution to the MMP that does not exclude the trivial compensator (when the model and the system are, up to notation, identical) and that is able to find a lower-order solution than the technique based on taking derivatives of the feedback solution. We were looking for a solution for systems described by higher-order i/o differential equations, not necessarily realizable in the state-space form. Of course, the approach is still applicable to systems described by state equations, as the state elimination algorithm [2] ensures that the i/o representation of the form (3) below, at least locally, always exists for such systems. The problem statement and solution rely on the concept of a generalized transfer function for nonlinear systems [8,26]. We require that the transfer function of the compensated system and that of the prespecified model be equal, as in the linear case. However, in the nonlinear transfer function formalism one does not work with equations but with differential one-forms, and the existence of the compensator is restricted by the integrability of the one-form corresponding to the compensator. Such a problem statement has not been used yet.
Note that the transfer functions can be easily computed from the i/o equation. In doing so, one associates with the system two polynomials, as in [27], defined over the field of meromorphic functions. Then, after fractions of such polynomials are defined [20,21], the transfer function of a nonlinear system is obtained. This can be interpreted as associating with a nonlinear system the so-called tangent linear system, see [6], by using Kähler differentials [13]. Then ideas similar to those applied to linear time-varying systems in [5] can be used. Hence, the linearized system description resembles the time-varying linear system description, except that now the time-varying coefficients of the polynomials are not necessarily independent, see [16]. The preliminary ideas of this paper were discussed in the conference paper [11]. The transfer function formalism of nonlinear systems has already been used, for instance, to study the realization problem [10] and to solve the MMP in the discrete-time case [1]. Herein, the continuous-time case is considered.
Two types of solutions, feedforward and feedback compensators, are typically looked for within the MMP. The paper [1], which focuses on the discrete-time case, addresses both feedforward and feedback solutions. It is shown that the existence of the feedforward solution depends critically on the integrability of a certain one-form, whereas the feedback solution always exists. For this reason, we decided to consider in this paper only the feedforward solution, as the more difficult one, and to provide constructive conditions for checking whether the solution exists or not. Finally, note that the feedback solution is briefly addressed in the conference paper [11]; this solution is similar to that from [1], although the computations differ because the noncommutative polynomial rings associated to continuous- and discrete-time systems are different.
The paper is organized as follows. The transfer function formalism of nonlinear systems, with the properties relevant to the MMP, is discussed in Section 2. The problem statement and the main results are given in Section 3; the respective results are illustrated by examples. Finally, a discussion is presented in Section 4.

TRANSFER FUNCTIONS OF NONLINEAR SYSTEMS
Note that throughout the paper we use abridged notations. First, in order to simplify the exposition, we leave out the time argument t, so ξ := ξ(t). Next, we apply Newton's notation for the first and second time derivatives, i.e., ξ̇ := dξ/dt, ξ̈ := d²ξ/dt², and the more general notation ξ^(k) := d^k ξ/dt^k for the time derivative of arbitrary order. Recall briefly the mathematical tools from [2] and [8] that are used in this paper. Consider the single-input single-output nonlinear system Σ, described by the i/o equation of the form

y^(n) = φ(y, ẏ, . . . , y^(n−1), u, u̇, . . . , u^(m)), (3)

where φ is assumed to be an element of the field of meromorphic functions K_Σ of variables from the set C_Σ = {y, ẏ, . . . , y^(n−1), u^(i); i ≥ 0}. We want to avoid the situation when two different systems have the same transfer function. Since the definition of the transfer function is based on the globally linearized system description dy^(n) − dφ(·) = 0 and the differential of a constant c is equal to zero, we assume that φ in (3) cannot be written as ψ + c for some nonzero c and ψ ∈ K_Σ. Consider the set of symbols dC_Σ = {dy, dẏ, . . . , dy^(n−1), du^(i); i ≥ 0} and define the formal vector space of differential one-forms E_Σ = span_{K_Σ}{dC_Σ}. Let δ_Σ denote the derivative operator that acts on K_Σ and E_Σ. In particular, δ_Σ(y^(k)) = y^(k+1), k = 0, . . . , n − 2, δ_Σ(y^(n−1)) = φ(y, ẏ, . . . , y^(n−1), u, u̇, . . . , u^(m)), and δ_Σ(u^(k)) = u^(k+1), k ≥ 0. The derivative operator δ_Σ induces the left skew polynomial ring K_Σ[s] of polynomials in s over K_Σ with the usual addition and the (noncommutative) multiplication defined by the commutation rule s · ξ = ξ · s + δ_Σ(ξ) for any ξ ∈ K_Σ. We define s · v := δ_Σ(v) for any v ∈ E_Σ, extended to higher powers of s by composition. Thus, K_Σ[s] represents the ring of linear ordinary differential operators that act on E_Σ. Note that s dψ = dψ̇ for any dψ ∈ E_Σ. The skew polynomial ring K_Σ[s] is proved to satisfy the following left Ore condition.
Proposition 1 (left Ore condition). For all nonzero a, b ∈ K_Σ[s], there exist nonzero α, β ∈ K_Σ[s] such that αb = βa.

If the condition of the above proposition holds, the skew polynomial ring is called a left Ore ring. Thus, the ring K_Σ[s] can be embedded into the field of fractions F_Σ by defining fractions as b⁻¹ · a, see [20,21]. The addition and multiplication in F_Σ are defined as

b₁⁻¹ · a₁ + b₂⁻¹ · a₂ = (β₁b₁)⁻¹ · (β₁a₁ + β₂a₂), (5)

where β₁b₁ = β₂b₂ by the Ore condition, and

b₁⁻¹ · a₁ · b₂⁻¹ · a₂ = (β₂b₁)⁻¹ · (α₁a₂), (6)

where β₂a₁ = α₁b₂, again by the Ore condition. Due to the noncommutative multiplication rule, (5) and (6) differ from the familiar rules. In particular, in the case of the multiplication (6) one, in general, cannot simply multiply numerators and denominators or cancel them in the standard manner.
The addition operation can be performed, according to (5), as s⁻¹ + (s − y)⁻¹ = (β₁s)⁻¹ · (β₁ + β₂), where the Ore condition β₁s = β₂(s − y) is satisfied, for instance, for β₂ = s − ẏ/y and β₁ = s − y − ẏ/y. The multiplication operation can be performed, according to (6), as (s − y)⁻¹ · s⁻¹ = (s(s − y))⁻¹ = (s² − ys − ẏ)⁻¹, where again the Ore condition β₂ = α₁s is satisfied for α₁ = 1 and β₂ = s. Observe that s⁻¹ · (s − y)⁻¹ = (s² − ys)⁻¹, which confirms that the multiplication is noncommutative.
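The noncommutativity observed above can be checked mechanically. The following sketch (illustrative, not part of the paper; the helper names s_times and skew_mul are ours) represents a skew polynomial Σᵢ cᵢ sⁱ as its coefficient list over functions of t, implements the commutation rule s·ξ = ξ·s + ξ̇ with sympy, and verifies that s · (s − y) and (s − y) · s differ by ẏ.

```python
# Sketch (illustrative): the skew polynomial ring K[s] with the
# commutation rule s*xi = xi*s + dxi/dt for a function xi of t.
# A polynomial sum_i c_i s^i is stored as the list [c_0, c_1, ...].
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')(t)

def s_times(p):
    """Left-multiply a skew polynomial by s: s*(c s^i) = c s^(i+1) + c' s^i."""
    r = [sp.S(0)] * (len(p) + 1)
    for i, c in enumerate(p):
        r[i + 1] += c              # c s^(i+1)
        r[i] += sp.diff(c, t)      # c' s^i (the noncommutative correction)
    return r

def skew_mul(p, q):
    """Product p*q in K[s], computed as sum_j p_j * (s^j * q)."""
    acc = [sp.S(0)] * (len(p) + len(q) - 1)
    sjq = list(q)                  # s^0 * q
    for j, pj in enumerate(p):
        for i, c in enumerate(sjq):
            acc[i] += pj * c
        sjq = s_times(sjq)         # advance to s^(j+1) * q
    return [sp.simplify(c) for c in acc]

left = skew_mul([sp.S(0), sp.S(1)], [-y, sp.S(1)])   # s * (s - y)
right = skew_mul([-y, sp.S(1)], [sp.S(0), sp.S(1)])  # (s - y) * s
```

With this convention, `left` is s² − ys − ẏ while `right` is s² − ys, mirroring the observation that the two products of s and (s − y) differ by the derivative term.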
Once the fraction of two skew polynomials is defined, we can introduce the so-called (generalized) transfer function of system (3) as an element F(s) ∈ F_Σ such that dy = F(s)du. Differentiating (3) yields

a(s)dy = b(s)du, (7)

where a(s) = s^n − Σ_{k=0}^{n−1} (∂φ/∂y^(k)) s^k and b(s) = Σ_{k=0}^{m} (∂φ/∂u^(k)) s^k. Hence, the transfer function of system (3) is

F(s) = a⁻¹(s)b(s). (8)

Example 2. Consider the system

ÿ = yu² + ẏu̇. (9)
Differentiating (9) yields dÿ − u̇dẏ − u²dy = ẏdu̇ + 2yudu, which, according to (7), can be written as (s² − u̇s − u²)dy = (ẏs + 2yu)du. Then, using (8), the transfer function is F(s) = (s² − u̇s − u²)⁻¹(ẏs + 2yu). We recall now the properties of the transfer functions of nonlinear systems that are relevant for the MMP.
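The extraction of a(s) and b(s) amounts to taking partial derivatives of φ, which is easy to script. The sketch below (illustrative, not from the paper; Example 2 is read here as ÿ = yu² + ẏu̇, with yd and ud standing for ẏ and u̇) reproduces the polynomials of the transfer function of Example 2.

```python
# Sketch (illustrative): build a(s) and b(s) of (7) from the i/o equation
# y^(n) = phi(.) by partial differentiation of phi. Coefficient lists are
# stored lowest degree first.
import sympy as sp

y, yd, u, ud = sp.symbols('y yd u ud')

# Example 2, read as: y-ddot = y*u**2 + yd*ud
phi = y*u**2 + yd*ud

# a(s) = s^2 - (dphi/d yd) s - dphi/dy
a = [-sp.diff(phi, y), -sp.diff(phi, yd), sp.S(1)]
# b(s) = (dphi/d ud) s + dphi/du
b = [sp.diff(phi, u), sp.diff(phi, ud)]
```

This reproduces (s² − u̇s − u²)dy = (ẏs + 2yu)du, hence F(s) = (s² − u̇s − u²)⁻¹(ẏs + 2yu).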

Properness
Analogously to the linear case, let us introduce the definitions. In what follows, we denote the relative degree of F(s) by rel deg F(s).

Integrability
In the linear case, an i/o differential equation of a control system can be associated with each (proper) rational function F(s) = a⁻¹(s)b(s), where a(s) and b(s) are from the commutative polynomial ring R[s]. However, things are different in the nonlinear case. Given a fraction of two skew polynomials F(s) = a⁻¹(s)b(s), one can associate with it the one-form ω = a(s)dy − b(s)du. If this one-form is exact, or can be made exact by multiplying it with an integrating factor, then there exists an i/o differential equation of the form (3) with the transfer function F(s). To conclude, not every fraction of skew polynomials necessarily represents a control system, which plays a crucial role in designing compensators. Recall the following.

Definition 5. A one-form ω ∈ E_Σ is said to be exact (integrable) if there exists a function ξ ∈ K_Σ (and λ ∈ K_Σ) such that dξ = ω (λdξ = ω).
A one-form ω is integrable if and only if dω ∧ ω = 0.
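The condition dω ∧ ω = 0 is straightforward to test mechanically for one-forms in finitely many variables. The sketch below (illustrative; the function name is_integrable is ours) expands dω ∧ ω in coordinates and checks that all of its coefficients vanish. The second test case corresponds to the nonintegrable one-form (ẏs + 1)du − ẏdv met at the end of the paper, with w standing for ẏ.

```python
# Sketch (illustrative): check integrability of a one-form
# omega = sum_i f_i dx_i by testing whether dw ^ w = 0.
from itertools import combinations
import sympy as sp

def is_integrable(f, xs):
    """f: coefficient list of the one-form; xs: the corresponding variables."""
    n = len(xs)
    # d(omega) = sum_{i<j} c[i,j] dx_i ^ dx_j
    c = {(i, j): sp.diff(f[j], xs[i]) - sp.diff(f[i], xs[j])
         for i in range(n) for j in range(i + 1, n)}
    # coefficient of dx_i ^ dx_j ^ dx_k in d(omega) ^ omega
    for i, j, k in combinations(range(n), 3):
        coef = c[(i, j)]*f[k] - c[(i, k)]*f[j] + c[(j, k)]*f[i]
        if sp.simplify(coef) != 0:
            return False
    return True

u, ud, v, w = sp.symbols('u ud v w')

# du' + u du - dv is exact (it equals d(u' + u^2/2 - v)), hence integrable:
ok = is_integrable([u, sp.S(1), sp.S(-1)], [u, ud, v])

# w du' + du - w dv (w standing for y-dot) is not integrable:
bad = is_integrable([sp.S(1), w, -w, sp.S(0)], [u, ud, v, w])
```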

Transfer equivalence
In the MMP the focus is on designing a compensator for a nonlinear system (3) under which the i/o equation of the compensated system becomes transfer equivalent to a prespecified model. The necessary notions and their relation to the transfer function formalism are recalled below. The reader is referred to [2,22,23,27] for more details.

Definition 6. A nonconstant function f ∈ K_Σ is said to be an autonomous variable for system (3) if there exist an integer ν ≥ 1 and a nonzero meromorphic function ϕ ∈ K_Σ such that ϕ(f, ḟ, . . . , f^(ν)) = 0.

Definition 7. System (3) is called irreducible if there does not exist any autonomous variable in K_Σ.
Note that irreducibility of system (3) can be easily checked by means of the subspace H_∞ defined in [2]; the system is irreducible if and only if H_∞ = {0}, see [2]. However, within the polynomial approach the condition can be stated alternatively as follows.

Proposition 8. System (3) is irreducible if and only if the polynomials a(s) and b(s) in (7) have no (nontrivial) common left factors.

Definition 9. The irreducible i/o equation φ_ir(·) = 0 of system (3) is defined as follows:
Finally, one can introduce the notion of transfer equivalence of two systems of the form (3) and a condition for checking it, as follows.
Note that F₂ can be obtained by differentiating (10), and F₃ by substituting (10) into (11). Although the set of solutions of F₁ is a subset of those of the systems F₂ and F₃, this does not imply that F₁, F₂, and F₃ are all transfer equivalent. Clearly, the system F₂ is transfer equivalent to the system F₁, for f := ẏ − yu is an autonomous variable for the system F₂: ÿ − ẏu − yu̇ = ḟ. Thus, (10) is an irreducible i/o differential equation for both F₁ and F₂. On the other hand, the system F₃ is not transfer equivalent to the system F₁ (and thereby not to the system F₂ either), as (12) is an irreducible i/o differential equation; that is, there does not exist any autonomous variable f for system (12). In terms of transfer functions, we have F₁(s) = F₂(s) ≠ F₃(s).

Interconnection of systems
As in the linear case, the transfer function of two interconnected nonlinear systems, see Fig. 1, can be computed as the product of the two respective transfer functions [8,26], except that the multiplication is noncommutative. Mathematically, such an interconnection is viewed as follows.
Remark 12. Note that two initially independent variables, namely the output of the system R and the input to the system F, become dependent once the systems are interconnected. By abuse of notation we therefore use the same symbol u for both.
The fact that initially independent variables, the output of the system R and the input to the system F, become dependent once the systems are interconnected plays a crucial role in the solution of the MMP.This aspect is demonstrated by the following example.
Example 4. Consider the systems F and R given, respectively, as ÿ = u̇ + u², u = v, with the transfer functions F(s) = (s²)⁻¹(s + 2u) and R(s) = 1. To compute the transfer function of their interconnection, we first map both R and F to the same overfield. The differential field K_F consists of meromorphic functions of the variables C_F = {y, ẏ, u^(i); i ≥ 0}, and δ_F(ẏ) = u̇ + u². The differential field K_R is trivial, as u = v, and consists of meromorphic functions of the variables C_R = {v^(j); j ≥ 0}. The interconnected systems thus belong to the overfield K_FR with C_FR = {y, ẏ, v^(j); j ≥ 0}, and δ_FR(ẏ) = v̇ + v². Therefore, in F_FR we have F(s) · R(s) = (s²)⁻¹(s + 2u) · 1 = (s²)⁻¹(s + 2v). However, also the pair F and R′ given, respectively, as ÿ = u̇ + u², u̇ + u² = v̇ + v², with the transfer functions F(s) = (s²)⁻¹(s + 2u) and R′(s) = (s + 2u)⁻¹(s + 2v), produce 'the same' interconnected system, i.e., a system with the same irreducible i/o differential equation. Indeed, the differential field K_R′ consists of meromorphic functions of the variables C_R′ = {u, v^(j); j ≥ 0}, and δ_R′(u) = −u² + v̇ + v². Therefore, the interconnected systems belong to the overfield K_FR′ with C_FR′ = {y, ẏ, u, v^(j); j ≥ 0}, and δ_FR′(ẏ) = v̇ + v². Finally, in F_FR′ we again have F(s) · R′(s) = (s²)⁻¹(s + 2u) · (s + 2u)⁻¹(s + 2v) = (s²)⁻¹(s + 2v). However, the systems R and R′ are not transfer equivalent, i.e., R(s) ≠ R′(s), although the respective interconnections are. Observe that both R and R′ are irreducible i/o equations. This is in great contrast to the linear case. The reason is that in the interconnection the initially independent variables (the output of the system R and the input to the system F) become dependent. To conclude, we have found R and R′ that are not transfer equivalent but whose interconnections with F are. The relevant point to notice here is that F(s) · R(s) and F(s) · R′(s) have been computed in two different overfields that are not isomorphic.

FEEDFORWARD SOLUTION OF THE MMP
In Example 4 we can think of the systems R and R′ as two feedforward solutions of the MMP for the given system F and the model G. Note that in a feedforward compensator one does not allow (as depicted in Fig. 1) feedback from the system output y. Finding such a solution to the MMP represents the main scope of this paper. Using the concept of transfer equivalence, the problem can be formulated as follows: for a given system F and a model G, find a (proper) feedforward compensator R such that the compensated system is transfer equivalent to the model. With respect to the results recalled in Section 2.3, it is straightforward to conclude that such a problem formulation results in the equality of the transfer function of the compensated system and that of the model.
Problem statement: Consider a nonlinear system F and a model G, described by the transfer functions

F(s) = a_F⁻¹(s)b_F(s) (13)

and

G(s) = a_G⁻¹(s)b_G(s), (14)

respectively. Find a (proper) feedforward compensator R, described by the transfer function R(s) = a_R⁻¹(s)b_R(s), such that the transfer function of the compensated system coincides with that of the model G:

F(s) · R(s) = G(s).

Solution
Although R(s) = F⁻¹(s)G(s) can be found as in the linear case, the existence of the compensator in the nonlinear case depends on the integrability of the differential one-form associated with R(s).
Theorem 13. Suppose a system F and a model G are given, both of the form (3), with the transfer functions F(s) ≠ 0 and G(s), respectively. There exists a feedforward (proper) compensator R(s) = a_R⁻¹(s)b_R(s) that solves the MMP if and only if there exists a differential overfield K_FR in which the one-form ω_R = a_R(s)du − b_R(s)dv, associated with R(s) = F⁻¹(s) · G(s), is integrable.

Proof. Sufficiency. If ω_R is integrable in some K_FR, it defines an i/o differential equation of a compensator R with the transfer function R(s) = F⁻¹(s) · G(s). Using the transfer function algebra, F(s) · R(s) = F(s) · F⁻¹(s) · G(s) = G(s). Necessity. Suppose there exists a compensator R with the transfer function R(s) = a_R⁻¹(s)b_R(s) such that F(s) · R(s) = G(s). Then R(s) = F⁻¹(s) · G(s) and, being the transfer function of a system of the form (3), it corresponds to an integrable one-form ω_R over the respective overfield K_FR. Finally, since rel deg R(s) = rel deg G(s) − rel deg F(s), properness of the compensator R follows.
Unfortunately, Theorem 13 does not indicate over which differential overfield K FR one needs to look for the compensator.
Example 5 (continuation of Example 4). Recall that the transfer functions of F and G are F(s) = (s²)⁻¹(s + 2u) and G(s) = (s²)⁻¹(s + 2v). Then the compensator reads R(s) = F⁻¹(s) · G(s) = (s + 2u)⁻¹(s + 2v), and the one-form (s + 2u)du − (s + 2v)dv is integrable over both K_FR′ and K_FR. In K_FR′ we directly get R′: u̇ + u² = v̇ + v². In K_FR we have u = v, and thus the transfer function reduces to R(s) = F⁻¹(s) · G(s) = (s + 2u)⁻¹(s + 2v) = 1, with the one-form du − dv again being integrable, and R: u = v. So F⁻¹(s) · G(s) reduces to the transfer functions of the respective compensators over the respective overfields.
Note that it is not true that whenever R′ solves the MMP, R does as well. Consider, for instance, the system F: y = u and the model G: ẏ + y² = v̇ + v². Then the only solution is R′: u̇ + u² = v̇ + v². Clearly, the trivial compensator R: u = v combined with the trivial system F cannot yield the model G, as G is an irreducible system.
Checking integrability over a differential overfield K_FR in practice presupposes that we already know the compensator we are looking for. However, this difficulty can be addressed by applying the Euclidean division algorithm to check for the possible existence of common factors of the denominators and numerators of the respective transfer functions. We first give a theorem that sheds some light on the problem to be solved.

Then R(s) = F⁻¹(s) · G(s) corresponds to an integrable differential one-form if any of the following holds:
(i) a_G(s) = q(s)a_F(s) for some q(s) ∈ K[s]; (ii) a_F(s) = q(s)a_G(s) for some q(s) ∈ K[s]; (iii) a_F(s) = ρ_F(s)q(s) and a_G(s) = ρ_G(s)q(s) for some q(s) ∈ K[s] and ρ_F(s), ρ_G(s) in the commutative polynomial ring R[s].
Proof. From F and G we have ω_F := a_F(s)dy − b_F(s)du = 0 and ω_G := a_G(s)dy − b_G(s)dv = 0, which both are integrable one-forms.
Part (i): Assume that a_G(s) = q(s)a_F(s) for some q(s); then the left-hand sides of the resulting relations, being equal to zero, are integrable one-forms.
Part (ii): Assume now that a_F(s) = q(s)a_G(s) for some q(s); again, the left-hand sides of the resulting relations, being equal to zero, are integrable one-forms. Part (iii): Assume a_F(s) = ρ_F(s)q(s) and a_G(s) = ρ_G(s)q(s) for some q(s) ∈ K[s] and ρ_F(s), ρ_G(s) ∈ R[s]. Since ρ_F(s)ρ_G(s) = ρ_G(s)ρ_F(s), the corresponding one-forms can be brought to a common leading term. Now any R[s]-linear combination of integrable one-forms is an integrable one-form too, which completes the proof.

Note that it is still an open problem whether the conditions of Theorem 14 are necessary or not. The conditions of Theorem 14 suggest that, when looking for an integrable one-form associated with R(s), one has to look for possible common factors of the denominators of F(s) and G(s). This is demonstrated by the example below.
Note that in (any) K_FR we have ẏ = yu. Thus, taking this into account, G(s) = [s(s − u)]⁻¹y, and the conditions of Theorem 14 are satisfied. After integrating the one-form du̇ + udu − dv = 0, we get the compensator R described by the equation u̇ = −u²/2 + v. Example 7. Consider a system F and a model G whose respective transfer functions have the denominators a_F(s) = s − u and a_G(s) = s(s − v). The conditions of Theorem 14 are not satisfied, since the transfer function R(s) = F⁻¹(s) · G(s) corresponds to a nonintegrable one-form. However, one may try to search for a different overfield in which the conditions are possibly fulfilled. Since a_F(s) = s − u and a_G(s) = s(s − v), by assuming u = v, the conditions are fulfilled. Next, one has to check whether such an overfield, where u = v, exists. To do so, we apply Theorem 13 and check the integrability of R(s) under the assumption u = v, which really corresponds to an integrable one-form sdu − dv. Thus, the compensator R that solves the MMP is described by the equation u = v.
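The integration step in Example 6 can be confirmed symbolically. In the sketch below (illustrative; u, ud, v are treated as independent plain symbols, with ud standing for u̇), xi is the proposed integral of the one-form du̇ + u du − dv, so that xi = 0 gives the compensator u̇ = −u²/2 + v.

```python
# Sketch (illustrative): confirm that du' + u du - dv is exact, with the
# integral xi = u' + u^2/2 - v; setting xi = const (here zero) yields the
# compensator u' = -u^2/2 + v of Example 6.
import sympy as sp

u, ud, v = sp.symbols('u ud v')

omega = {u: u, ud: sp.S(1), v: sp.S(-1)}      # coefficients of du, du', dv
xi = ud + u**2/2 - v

# the gradient of xi must reproduce the coefficients of the one-form
grad = {x: sp.diff(xi, x) for x in (u, ud, v)}
```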

Application of the Euclidean division algorithm
Recall that the system F and the model G can be represented by the transfer functions as in (13) and (14).
Let n_aF, n_bF, n_aG, and n_bG be the degrees of the polynomials a_F(s), b_F(s), a_G(s), and b_G(s), respectively. First, check the integrability of the one-form ω_R = a_R(s)du − b_R(s)dv over 'the most general' overfield K_FR, where we assume all u^(k), k = 0, . . . , n_bF + n_aG, to be independent variables. That is, we are looking for a compensator of order n_bF + n_aG (or one which is transfer equivalent to it). However, Examples 5 and 7 demonstrated that, regardless of whether ω_R is integrable over K_FR or not, there may exist another field K′_FR over which ω_R is (also) integrable. The compensators found may even not be transfer equivalent (see Example 4). Hence, to find a lower-order compensator, if it exists, we need to look for possible common factors of the respective polynomials of F(s) and G(s), and then check the existence of a solution by applying Theorem 13; in particular, we check the integrability of the respective one-form. Naturally, we are interested in finding the greatest common right divisors (factors) of a_G(s) and a_F(s), which can be computed by the Euclidean division algorithm summarized below.

Procedure:
Step 1. If deg a_G(s) ≥ deg a_F(s), set r₀(s) := a_G(s), r₁(s) := a_F(s); otherwise, set r₀(s) := a_F(s), r₁(s) := a_G(s). By the Euclidean division algorithm compute recursively

r_{k−1}(s) = q_k(s)r_k(s) + r_{k+1}(s), k ≥ 1, (15)

until some r_{k+1}(s) is zero. Then r_k(s) is the greatest common right divisor of a_G(s) and a_F(s). Note that the algorithm always terminates and that the remainders are made monic before proceeding to the next round of division.
Step 2. Check the conditions of Theorem 14 and, if fulfilled, stop.If not, or if we are interested in finding a lower-order compensator than the one found, set i := k and continue.
Step 3. Force the last nonzero remainder in (15) to become zero, which, if possible, will introduce some (differential) relations between the u and v variables. This in turn makes r_{i−1}(s) the greatest common (right) divisor of a_G(s) and a_F(s). Check the conditions of Theorem 14 and, if fulfilled, check the integrability of R(s) = F⁻¹(s) · G(s) by applying Theorem 13. If integrable, stop. If not, or if one is interested in finding a lower-order compensator than the one just found, then set i := i − 1 and repeat Step 3, or, in case i = 0, stop.

Step 1: Set r₀(s) := s² − vs, r₁(s) := s² − us, and compute the sequence
r₀(s) = q₁(s)r₁(s) + r₂(s) = 1 · r₁(s) + (u − v)s,
r₁(s) = q₂(s)r₂(s) + r₃(s) = (s − u) · s.
Hence, the greatest common right divisor of a G (s) and a F (s) is s.
Step 2: The conditions of Theorem 14 are not fulfilled; the one-form corresponding to R(s) = F⁻¹(s) · G(s) is not integrable. Unfortunately, the one-form (ẏs + 1)du − ẏdv is not integrable either, meaning that there does not exist a K_FR in which u − v = 0. Thus, a feedforward compensator that solves the MMP does not exist.
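Step 1 of the procedure can be mechanized. The sketch below (illustrative, not from the paper; helper names are ours) performs right division in the skew ring, with u and v as unspecified functions of t, and reproduces the worked computation above: the greatest common right divisor of s² − vs and s² − us is s.

```python
# Sketch (illustrative): greatest common right divisor of skew polynomials
# over functions of t, via the right Euclidean division
# r_{k-1} = q_k r_k + r_{k+1}. Polynomials are coefficient lists
# [c_0, c_1, ...], lowest degree first.
import sympy as sp

t = sp.symbols('t')
u = sp.Function('u')(t)
v = sp.Function('v')(t)

def s_times(p):
    """Left-multiply by s, using the commutation rule s*c = c*s + c'."""
    r = [sp.S(0)] * (len(p) + 1)
    for i, c in enumerate(p):
        r[i + 1] += c
        r[i] += sp.diff(c, t)
    return r

def trim(p):
    p = [sp.simplify(c) for c in p]
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return p

def is_zero(p):
    return len(p) == 1 and p[0] == 0

def right_rem(a, b):
    """Remainder of the right division a = q*b + r."""
    a, b = trim(a), trim(b)
    while not is_zero(a) and len(a) >= len(b):
        d = len(a) - len(b)
        c = sp.cancel(a[-1] / b[-1])
        sb = b
        for _ in range(d):
            sb = s_times(sb)        # s^d * b (leading coefficient unchanged)
        a = trim([ai - c * bi for ai, bi in zip(a, sb)])
    return a

def right_gcd(a, b):
    r0, r1 = trim(a), trim(b)
    if len(r0) < len(r1):
        r0, r1 = r1, r0
    while not is_zero(r1):
        r2 = right_rem(r0, r1)
        if not is_zero(r2):
            r2 = trim([c / r2[-1] for c in r2])   # make the remainder monic
        r0, r1 = r1, r2
    return r0

# a_G = s^2 - v s and a_F = s^2 - u s  -->  the gcrd is s
g = right_gcd([sp.S(0), -v, sp.S(1)], [sp.S(0), -u, sp.S(1)])
```

Forcing the last nonzero remainder, here (u − v)s, to zero corresponds to Step 3 of the procedure.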

Definition 2. System (3) is said to be proper if n ≥ m.

Definition 3. The relative degree r of system (3) is defined as r = n − m.

It is, therefore, straightforward to conclude the following.

Theorem 4. Let F(s) = a⁻¹(s)b(s) be the transfer function of system (3). Then the relative degree of the system is r = deg a(s) − deg b(s).

Theorem 11. Two systems of the form (3) are transfer equivalent if and only if they have the same transfer function.

Example 3. Consider three systems F₁, F₂, F₃ given, respectively, as
ẏ = yu, (10)
ÿ = ẏu + yu̇, (11)
ÿ = yu² + yu̇. (12)

Theorem 14. Suppose a system F and a model G are given, both of the form (3), with the transfer functions F(s) = a_F⁻¹(s)b_F(s) ≠ 0 and G(s) = a_G⁻¹(s)b_G(s), respectively. Assume that both F(s) and G(s) are irreducible representations of the transfer functions.

Example 6. Consider the system F and the model G given, respectively, as ẏ = yu,