Adaptive Observer of State and Disturbances for Linear Overparameterized Systems

The problem of state reconstruction is considered for a class of linear systems with time-invariant unknown parameters and overparameterization that are affected by external perturbations generated by a known exosystem with unknown initial conditions. An extended adaptive observer is proposed, which, in contrast to existing approaches, solves state and perturbation adaptive estimation problems for systems that are not represented in the observer canonical form. The obtained theoretical results are validated via mathematical modeling.


INTRODUCTION
One of the problems of automatic control theory is the reconstruction of the unmeasured state of completely observable linear systems:

ẋ(t) = Ax(t) + Bu(t), y(t) = C^T x(t). (1.1)

To solve it, various observers based on the invariant ellipsoid technique [1], high-gain methods [2,3], the sliding mode approach [3,4], and parametric identification theory [5,6] have been proposed.
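For the known-parameter case, state reconstruction for (1.1) is solved by the classical Luenberger observer x̂̇ = Ax̂ + Bu + L(y − ŷ). The sketch below illustrates this baseline; the matrices, gain, and input signal are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal Luenberger observer sketch for x' = A x + B u, y = C^T x.
# All numbers below are illustrative assumptions.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0], [0.0]])          # y = C^T x
L = np.array([[5.0], [4.0]])          # chosen so A - L C^T is Hurwitz

def simulate(T=10.0, dt=1e-3):
    """Euler-integrate plant and observer; return final estimation error norm."""
    x = np.array([[1.0], [-1.0]])     # unknown plant initial state
    xh = np.zeros((2, 1))             # observer starts from zero
    for k in range(int(T / dt)):
        u = np.sin(0.5 * k * dt)      # arbitrary bounded input
        y = C.T @ x                   # measured output
        yh = C.T @ xh                 # predicted output
        x = x + dt * (A @ x + B * u)
        xh = xh + dt * (A @ xh + B * u + L @ (y - yh))
    return float(np.linalg.norm(x - xh))
```

Since the error dynamics are ė = (A − LC^T)e with a Hurwitz matrix, the estimation error decays exponentially regardless of the input.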
In contrast to other approaches, observers based on the methods of identification theory [5,6] use parameter adaptation algorithms and, therefore, usually require less a priori information about the system parameters. However, since the baseline solutions [7-10], the class of systems for which adaptive observers can be designed is traditionally restricted to models in the observer canonical form (1.2), where ψ_a and ψ_b are the parameters of the characteristic polynomials of the linear operator

W_uy(s) = (b_{n-1} s^{n-1} + b_{n-2} s^{n-2} + ... + b_0) / (s^n + a_{n-1} s^{n-1} + ... + a_0),

and they are related to the matrices of the model (1.1) via a transformation matrix T. The point is that the measurable control u(t) and output y(t) signals allow one to uniquely identify the parameters of such a canonical state-space form only [5, p. 269]. The states ξ(t) ∈ R^n of the model (1.2) are virtual and related to the plant physical states x(t) ∈ R^n via a non-singular transformation ξ(t) = T x(t).
Therefore, the estimates ξ̂(t) obtained by classical adaptive state observers [5,6] of the following form (ψ̂_a(t), ψ̂_b(t) are estimates of the parameters (1.3), L stands for the correction matrix, and the specific structures of the functions f_a(.), f_b(.), f_v(.) are defined in [5,6]):

d/dt ξ̂(t) = A_0 ξ̂(t) + ψ̂_a(t)y(t) + ψ̂_b(t)u(t) + L(ŷ(t) − y(t)) + v(t), (1.4)

not only do not coincide with x(t), but also turn out to be useless, for example, for failure diagnostics, monitoring and storage of unmeasured variables of technological processes, design and online adjustment of digital twins, and other practical scenarios. The solution to this problem is to identify the linear transformation matrix T together with the parameters ψ_a and ψ_b. For one specific class of linear systems, an algorithm is proposed in [7] that forms an estimate of T(t) on the basis of the estimates ψ̂_a(t) and ψ̂_b(t). In the general case, the mapping T(t) = f_T(ψ̂_a(t), ψ̂_b(t)) can be singular for certain values of the estimates ψ̂_a(t), ψ̂_b(t) (see Section VIII of [7]). In more recent papers [10-12] devoted to the development of methods to design adaptive observers (and even in the fundamental books on adaptive observers for linear systems [5,6]), to the best of the authors' knowledge, the problem of reconstructing the physical state x(t) and estimating the linear transformation matrix T with the help of adaptive observers was no longer touched upon.
In a recent paper [13], a new approach to adaptive reconstruction of the linear system physical state is presented instead of identification of the linear transformation matrix. It is proposed to overparameterize the matrices of the system (1.1) with respect to some physical parameters θ ∈ R^{n_θ} (such overparameterization is always possible if the model is obtained directly on the basis of the laws of mathematical physics: Kirchhoff, Euler-Lagrange, etc.) and, using the change of notation ψ_a := ψ_a(θ), ψ_b := ψ_b(θ), to take into account the dependence of the model (1.1) parameters on θ.
The fact that the overparameterization is considered allows one to link the matrices of the models (1.1) and (1.2) not via the above-mentioned transformation, but by means of new functional transformations of the following form (θ = F(ψ_ab) is an inverse function, and L_ab ∈ R^{n_θ×2n} stands for a matrix that defines a linear transformation ensuring dim{ψ_ab} = dim{θ}), which provides much room to design adaptive observers of the physical states x(t).
In [13] it is shown that, if the condition of existence of the inverse function F: R^{n_θ} → R^{n_θ} holds and ψ_ab(θ) and Θ_AB(θ) depend on θ in a polynomial fashion, then, using only the measurable signals y(t), u(t) and the known vector C, the following regression equations can be obtained without identification of the parameters ψ_ab(θ) and θ (where Y_AB(t), Y_L(t), M_AB(t), M_L(t) stand for measurable signals), and, as a consequence, an adaptive observer of the system (1.5) state can be implemented in the following form, where L̂(t) is an estimate of the matrix L(θ) such that A(θ) − L(θ)C^T is a Hurwitz matrix.
In other words, owing to the fact that Θ_AB(θ) and ψ_ab(θ) are related to each other via the physical parameters θ, if the condition (1.6) is met, then, in accordance with [13], Θ_AB(θ) and L(θ) can be identified without direct estimation of θ or ψ_ab(θ). In contrast to (1.4), the observer (1.7) allows one to obtain estimates of the physical state x(t), and, contrary to [7], it is applicable to a wider class of systems and does not require direct identification of the parameters ψ_ab(θ).
The aim of this paper is to extend the results of [13] to the class of linear systems with overparameterization that are affected by the external perturbations generated by a known exosystem with unknown initial conditions.

Main definitions
The definitions of a heterogeneous mapping, the regressor persistent excitation condition, and the property of Kreisselmeier filtering [10] given below will be used throughout this paper.
and a mapping T_F: R^{∆_F} → R^{n_F×m_F} such that for all ω(t) ∈ R and θ ∈ R^{n_θ} the functional equation (1.8) has a solution. For example, using a known function Y_θ(t) = ω(t)θ, the main property Ξ_F(ω)θ = Ξ_F(ω)ω(t)θ from Definition 1 allows one to obtain a linear regression equation with respect to F(θ). The elements of a mapping F(θ) satisfy Definition 1 if they are polynomials or monomials of θ, as well as some irrational functions.

Definition 2. A regressor ϕ(t) ∈ R^n is persistently exciting (ϕ(t) ∈ PE) if there exist T > 0 and α > 0 such that for all t ≥ t_0 ≥ 0 the following inequality holds:

∫_t^{t+T} ϕ(τ)ϕ^T(τ) dτ ≥ α I_n, (1.9)

where α > 0 is the excitation level and I_n stands for the identity matrix.
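The persistent-excitation inequality of Definition 2 can be checked numerically on a grid. The sketch below (an illustration, not part of the paper's machinery) approximates the Gram integral over a window and reports its smallest eigenvalue, which plays the role of the excitation level α; the two example regressors are assumptions.

```python
import numpy as np

# Approximate the PE Gram matrix  G = ∫_t^{t+T} φ(τ) φ(τ)^T dτ
# on a grid and return its smallest eigenvalue (excitation level).
def excitation_level(phi, t, T, dt=1e-3):
    n = phi(t).size
    gram = np.zeros((n, n))
    for tau in np.arange(t, t + T, dt):
        v = phi(tau).reshape(-1, 1)
        gram += dt * (v @ v.T)        # rectangle-rule quadrature
    return float(np.linalg.eigvalsh(gram).min())

rich = lambda t: np.array([np.sin(t), np.cos(t)])   # persistently exciting
poor = lambda t: np.array([1.0, 1.0])               # rank-deficient, not PE
```

Over one period the first regressor yields a well-conditioned Gram matrix, while the constant rank-one regressor gives a (numerically) zero smallest eigenvalue.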

PROBLEM STATEMENT
We consider the following class of SISO systems with overparameterization affected by a bounded external perturbation (2.1), where x(t) ∈ R^n are the physical states of the system with unknown initial conditions x_0, δ(t) stands for a bounded external perturbation, A(θ), B(θ), D(θ) are unknown matrices, and the vector C ∈ R^n and the mapping Θ_AB: R^{n_θ} → R^{n_Θ} are known. Only the control u(t) ∈ R and output y(t) ∈ R signals are measurable.
The following assumptions are adopted for the control and disturbance signals.
Assumption 1. For all t ≥ t_0 the control signal u(t) ensures the existence and boundedness of all trajectories of the system (2.1).
Assumption 2. The disturbance δ(t) is continuous and generated by a stable exosystem with time-invariant parameters (2.2), where x_δ(t) ∈ R^{n_δ} are the states of the exosystem with unknown initial conditions x_δ0, and h_δ ∈ R^{n_δ}, A_δ ∈ R^{n_δ×n_δ} are a known vector and matrix that form an observable pair (h_δ^T, A_δ).

Taking into account the duality of the observation and control problems and following the results of the generalized pole placement theory [15,16], we adopt the assumption (Assumption 3) that there exists a vector L(θ) ∈ R^n that transforms the algebraic spectrum σ{.} of the matrix A^T(θ) − CL^T(θ) into a desired one.
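A typical instance of the exosystem (2.2) from Assumption 2 is a harmonic generator: a rotation-type A_δ produces a sinusoid whose amplitude and phase are encoded in the unknown initial state x_δ0. The sketch below uses the closed-form state transition matrix; the frequency and initial state are illustrative assumptions.

```python
import numpy as np

# Harmonic exosystem  ẋ_δ = A_δ x_δ,  δ(t) = h_δ^T x_δ(t).
# With the rotation generator below, δ(t) is a pure sinusoid whose
# amplitude/phase depend only on the unknown initial conditions x0.
omega = 2.0                                      # assumed frequency
A_delta = np.array([[0.0, omega], [-omega, 0.0]])
h_delta = np.array([1.0, 0.0])

def disturbance(t, x0=np.array([0.0, 1.0])):
    """Closed-form solution via the transition matrix expm(A_delta * t)."""
    Phi = np.array([[np.cos(omega * t),  np.sin(omega * t)],
                    [-np.sin(omega * t), np.cos(omega * t)]])
    return float(h_delta @ (Phi @ x0))
```

With x0 = (0, 1) this yields δ(t) = sin(ωt); estimating x_δ0 is exactly what recovers the disturbance in the observer.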
If Assumptions 1-3 are met, then the following observer of the state and perturbation can be introduced (2.4). The aim is to augment the observer (2.4) with estimation laws that ensure that the equalities (2.5) hold, where x̃(t) = x̂(t) − x(t) is the state observation error of (2.1), δ̃(t) = δ̂(t) − δ(t) stands for the disturbance observation error, Θ̃_AB(t) = Θ̂_AB(t) − Θ_AB(θ) denotes the parameter estimation error of the system (2.1), x̃_δ0(t) = x̂_δ0(t) − x_δ0 is the estimation error of the exosystem (2.2) initial conditions, and L̃(t) = L̂(t) − L(θ) stands for the estimation error of L(θ).

PREREQUISITES AND PRELIMINARY TRANSFORMATIONS
Before presenting the solution of the problem (2.5), the identifiability of the unknown parameters κ from the measurements of y(t) and u(t) is investigated. For this purpose, using the transformations (1.3), the system (2.1) is represented in the form (1.2), i.e., (3.1), where ψ_d(θ) = T D(θ), ξ(t) ∈ R^n is the unmeasurable virtual state of the observer canonical form, and the vector C_0 ∈ R^n and the mappings ψ_a, ψ_b, ψ_d: R^{n_θ} → R^n are known.
Lemma 1. The unknown parameters η(θ) satisfy the linear regression model (3.3); if ϕ(t) ∈ PE, then for all t ≥ t_0 + T it holds that ∆_max ≥ ∆(t) ≥ ∆_min > 0.
Here ε(t) is an exponentially decaying term, k(t) ≥ k_min > 0 stands for an amplitude modulator (it can be time-varying), k_1 > 0, k_2 > 0 denote filter constants, A_K = A_0 − KC_0^T and G are stable matrices of the respective dimensions, and the vector l ∈ R^{n_δ} is such that the pair (G, l) is controllable. Proof of Lemma 1 is postponed to the Appendix.
In the general case, the goal (2.5) cannot be achieved because only the parameters ψ_a, ψ_b of the characteristic polynomials of the transfer function W_uy(s) = C^T(sI_n − A(θ))^{−1}B(θ) are identifiable on the basis of the measurable signals u(t), y(t) via the parameterization (3.3) if ϕ(t) ∈ PE. However, in the case that is important for practical scenarios, according to the problem statement, the parameters Θ_AB, ψ_d, L depend nonlinearly on the physical parameters θ in a known way. In their turn, the parameters ψ_a, ψ_b of the characteristic polynomials of the transfer function W_uy(s) also depend nonlinearly on θ. Therefore, if the condition (3.7) is met, then, owing to the inverse function theorem [17], there exists an inverse mapping θ = F(ψ_ab), and it becomes possible to: (i) calculate the parameters of the system Θ_AB and of the observer L using ψ_ab; (ii) obtain estimates x̂_δ0(t) of the initial conditions of the exosystem (2.2); (iii) implement the adaptive observer (2.4), from which the estimates x̂(t) and δ̂(t) are obtained.
In this paper, to solve the problem of reconstruction of unmeasurable state x(t) and external perturbation δ(t) when the condition (3.7) is satisfied, the following hypotheses are additionally adopted with respect to ψ ab (θ), Θ AB (θ), and ψ d (θ).
Hypothesis 1. There exist heterogeneous (in the sense of (1.8)) mappings G: R^{n_θ} → R^{n_θ×n_θ} and S: R^{n_θ} → R^{n_θ} such that (3.8) holds, and all mappings are known.

Hypothesis 2. There exist heterogeneous (in the sense of (1.8)) mappings X: R^{n_θ} → R^{...} such that (3.9) holds, and all mappings are known.

Hypothesis 3. There exist heterogeneous (in the sense of (1.8)) mappings W: R^{n_θ} → R^n and R: R^{n_θ} → R^{n×n} such that (3.10) holds, and all mappings are known.
Hypotheses 1-3 are met if the corresponding mappings are defined using elementary algebraic functions in a polynomial form. For example, for the vectors Θ_AB(θ) = col{θ_2^2 θ_1^2 + (θ_2 + θ_1)^3, θ_2} and ψ_ab(θ) = col{θ_1 θ_2 + θ_1^2, θ_2 + θ_1}, the mappings from (3.9) and (3.8) are written as shown below. The essence of Hypotheses 1-3 is that, owing to the property Ξ_(.)(ω)θ = Ξ_(.)(ω)ω(t)θ, linear regression equations with respect to the unknown parameters θ, Θ_AB(θ), ψ_d(θ) can be parameterized on the basis of the measurable signals. For instance, equation (3.11) can be rewritten, and therefore we directly have the measurable linear regression equations, where the signals Y_θ(t) and M_θ(t) are calculated using the second equation. The requirement (3.7) and Hypotheses 1-3, despite seeming mathematically restrictive, are practice-oriented and are met for a large number of models of real technical systems.

MAIN RESULT
The solvability conditions (3.7)-(3.10) are assumed to be met, and the error equations (4.1) for the differences between (2.4) and (2.1) and between δ̂(t) and δ(t) are written, where T is a Hurwitz matrix in accordance with Assumption 3.
In order to achieve the goal (2.5), using equations (4.1), an estimation law is required that ensures exponential convergence to zero of the error κ̃(t) and exponential stability of the equilibrium point of the state observation error x̃(t). Thus, the problem of reconstruction of the perturbation δ(t) and the unmeasurable state x(t) of the system (2.1) is reduced to a problem of parametric identification. Such a problem, in its turn, can be solved if the assumptions (3.7)-(3.10) are met. To design an estimation law that is based on Hypotheses 1-3 and the results of Lemma 1 and ensures the achievement of (2.5), we first parameterize a static regression equation with respect to κ.
Lemma 2. The vector of unknown parameters κ satisfies the linear regression equation (4.2), where:
1) the regressand and regressor of the regression Y_AB(t) = M_AB(t)Θ_AB(θ), using the auxiliary calculations, are defined as shown below;
2) the regressand and regressor of the regression Y_L(t) = M_L(t)L(θ) are calculated as shown below;
3) the regression Y_{x_δ0}(t) = M_{x_δ0}(t)x_δ0, considering the corresponding equations and the filtering (4.3), is defined as shown below;
and, if the conditions ϕ(t) ∈ PE, h_δ^T Φ_δ(t) ⊗ I_n ∈ PE are met, then for all t ≥ t_0 + T the regressor M_κ(t) is bounded away from zero. Proof of Lemma 2 and the definitions of the matrices L_{A^T}, L_B, L_a, L_b are given in the Appendix.
Having at hand the regression equation (4.2) with a scalar regressor M_κ(t), which is bounded away from zero for all t ≥ t_0 + T, and using the results from [13,18], an estimation law is derived that ensures that the goal (2.5) is achieved.
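For a scalar regressor bounded away from zero, already a plain gradient-type law κ̂̇ = γM_κ(Y_κ − M_κκ̂) yields exponential parameter convergence. The sketch below illustrates only this mechanism with synthetic signals; the true parameters, regressor shape, and gain are assumptions, and the paper's actual law (4.4) is richer than this.

```python
import numpy as np

# Gradient estimation law for Y(t) = M(t) κ with a scalar regressor
# M(t) bounded away from zero (synthetic illustration, not law (4.4)).
kappa_true = np.array([2.0, -1.0, 0.5])   # assumed "unknown" parameters

def estimate(gamma=50.0, T=5.0, dt=1e-4):
    kh = np.zeros(3)                       # parameter estimate
    for k in range(int(T / dt)):
        M = 1.0 + 0.5 * np.sin(k * dt)     # scalar regressor, M >= 0.5
        Y = M * kappa_true                 # measurable regressand
        kh = kh + dt * gamma * M * (Y - M * kh)
    return float(np.linalg.norm(kh - kappa_true))
```

The error obeys ė = −γM²(t)e, so with M²(t) ≥ 0.25 it decays at rate at least γ/4, matching the theorem's statement that the rate grows with the gain.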
Theorem 1. Let the vector D_max ∈ R^n be known such that D(θ) ensures the following properties:
2) for all t ≥ t_0 + T the error [x̃^T(t) κ̃^T(t)]^T converges exponentially to zero with a rate whose minimum value is directly proportional to γ_1 > 0.
Proof of the first part of the theorem is similar to the proof of the second part of Theorem 1 from [18]; proof of the second part of the theorem coincides, up to notation, with the proof of Theorem 1 from [13].
Owing to the boundedness of h_δ^T Φ_δ(t), the exponential convergence of the error δ̃(t) follows from the above theorem, which, together with the exponential convergence of [x̃^T(t) κ̃^T(t)]^T, means that the goal (2.5) is achieved.
Remark 2. The results of Lemma 2 describe the procedure to transform the regression equation (3.3), which has a scalar regressor with respect to the numerator and denominator parameters of the transfer function W_uy(s), into a new equation (4.2) with respect to the parameters of the observer (2.4). In this recalculation, no division by time-dependent signals is used, the parameters η(θ), ψ_ab(θ) or θ are not identified, and Y_κ(t) and M_κ(t) are calculated solely from the signals Y(t) and ∆(t), which are measurable according to the results of Lemma 1.
Remark 3. The exponential stability conditions from the theorem are conservative. In practice, the knowledge of D_max ∈ R^n and ρ, as well as the computation of the eigenvalue λ_max{φ_max(t)φ_max^T(t)}, are not required, and the goal (2.5) can be achieved using any sufficiently large constant coefficient γ ≥ γ_min ∼ 1/M_κ^2(t) > 0 that majorizes λ_max{φ_max(t)φ_max^T(t)}.

DISCUSSION
In this section, four additional technical comments on the obtained results are given.

Comment 1. In accordance with the lower bound from (A.48), the regressor M_κ(t) is proportional to the power function ∆^{ℓ_θℓ_Θ n_Θ + ℓ_θℓ_Θ n(n^3+n) + n_δ^2(2ℓ_θℓ_{ψ_d}+2)}(t). Therefore, if ∆(t) ≪ 1 or ∆(t) ≫ 1, then a computational elimination of the regressor excitation may occur inside a software implementation of the proposed approach: M_κ(t) can become so small or so large that it cannot be processed by a computer, since a CPU has a limited register length (for example, in Matlab/Simulink the numbers that are smaller than 10^{−309} or larger than 10^{309} are treated as zero and infinity, respectively). This problem does not concern the theoretical results of the paper, but is related solely to the shortcomings of existing computational devices. To prevent the computational elimination of the regressor excitation, a time-varying amplitude modulator k(t) should be used in accordance with the method of regressor excitation normalization (5.1). Moreover, when implementing the parameterization (4.2) in practice, it is advisable to apply a multiplication by an amplitude modulator similar to (5.1) after each multiplication by the adjoint matrix adj{.}. The problem of computational elimination of the regressor excitation is discussed in more detail in Section 3.3 of [18].
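The normalization idea behind (5.1) can be sketched as a simple rescaling of the scalar regressor into a safe numerical range before it is raised to high powers; the threshold values below are illustrative assumptions, not the paper's formula.

```python
# Sketch of regressor excitation normalization: choose a modulator k
# so that k * delta lies in a numerically safe range [lo, hi].
# Thresholds are illustrative assumptions.
def normalize(delta, lo=1e-3, hi=1e3):
    """Return modulator k such that k * |delta| lies in [lo, hi]."""
    mag = abs(delta)
    if mag == 0.0:
        return 1.0            # nothing to rescale
    if mag < lo:
        return lo / mag       # boost a vanishing regressor
    if mag > hi:
        return hi / mag       # shrink an exploding regressor
    return 1.0
```

Multiplying both the regressand and regressor of a scalar regression by the same k(t) leaves the unknown parameters intact while keeping intermediate powers representable in floating point.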
Comment 2. The existing identification methods with the relaxed regressor excitation requirements do not allow one to ensure parametric convergence if the parameterized regression equation is affected even by an exponentially decaying perturbation [19].
To solve this problem, in [20] it is proposed to use integral filtering with periodic resetting after a given time interval.The method from [20] allows one to reduce the upper bound of the steady-state parametric error iteratively.
An alternative approach is to extend the identification problem via parameterization of the exponentially decaying perturbation as a linear regression with a measurable regressor and unknown parameters, namely the unmeasurable initial conditions [11-13]. This approach allows one to ensure the exponential convergence of the parametric error to zero when the relaxed regressor excitation requirements are met, but it is applicable only to perturbations that can be reduced to a linear regression model. The exponentially decaying perturbation ε(t) of (3.3) cannot be represented in such a way.
Therefore, in contrast to the results of [11-13], in this paper, to achieve the goal (2.5), instead of the relaxed conditions, the stricter condition of regressor persistent excitation (1.9) is required. If this condition is met, then the filters with memory from [11-13] are not required, and the exponential convergence of the parametric error is guaranteed even in the presence of an exponentially decaying perturbation in the parameterization in use.
It is possible to relax the requirement of regressor persistent excitation by applying the filter (5.2) instead of (3.4), where t_ε ≫ t_0 is a known time instant at which the filtering is started. If the time instant t_ε is chosen so as to satisfy the condition ε(t) = o(ϕ(t)η(θ)) from (A.26) for all t ≥ t_ε, and the regressor is finitely exciting over [t_ε, t_e], then the goal (2.5) is achieved under the relaxed regressor excitation requirement. More detailed properties of the extended observer based on the parameterization with the filtering (5.2) are studied in [21].
Comment 3. According to Theorem 1, the proposed observer (2.4) + (4.4) ensures convergence of the state observation error to zero only if the persistent excitation requirements ϕ(t) ∈ PE and h_δ^T Φ_δ(t) ⊗ I_n ∈ PE are met. As the signal h_δ^T Φ_δ(t) ⊗ I_n is known for all t ∈ [t_0, ∞), the condition h_δ^T Φ_δ(t) ⊗ I_n ∈ PE can be validated offline, before the observer implementation. The condition ϕ(t) ∈ PE is, strictly speaking, unverifiable both offline and online, since it depends on all previous and future values of the regressor ϕ(t). Usually, to meet the regressor persistent excitation condition in linear system parameterizations of the form (A.25), the control signal is formed so as to belong to a class of functions that are sufficiently rich of some order [5,6], i.e., functions that include a sufficient number of spectral lines (harmonics) in their Fourier expansion. Considering the parameterization (A.25), (3.4) from this paper, unfortunately, at this stage it is difficult to define the exact number of spectral lines that the control signal must include in order to meet the condition ϕ(t) ∈ PE. This is one of the main disadvantages of the proposed solution, which significantly reduces its practical value.
However, owing to the implication from Proposition 1 the proposed observer can be augmented with the following heuristic procedure to obtain the control signal that ensures ϕ(t) ∈ PE.
Step 1. Choose the control signal (5.3), where u_b(t) is a stabilizing component of the control signal, for example, a P-controller, a_i stands for an arbitrary amplitude of the i-th harmonic, and the frequencies ω_i are such that ω_i ≠ ω_j for all i ≠ j.
Step 2. Apply the control signal u(t) to the system and calculate the value of ∆(t) over [t_{k−1}, t_k], where t_k − t_{k−1} is sufficiently large.

Step 3. If there exists ∆_LB > 0 such that ∆(t) ≥ ∆_LB for all t ∈ [t_{k−1}, t_k] (condition (5.4)), then, assuming that the result obtained over [t_{k−1}, t_k] can be extrapolated to the entire time axis [t_0, ∞), it follows from Proposition 1 that a control signal satisfying the condition ϕ(t) ∈ PE has been found. Otherwise, set m = m + 1 and k = k + 1, and go to Step 1.
The essence of the above algorithm is to increase iteratively the number of harmonics in the control signal until the scalar regressor ∆(t) becomes bounded away from zero over a sufficiently large time interval. Assuming that there exists m_max such that the control signal (5.3) with m = m_max ensures that the condition ϕ(t) ∈ PE is met, it can be claimed that this algorithm generates, in a finite number of iterations m_max, a control signal that meets the condition ϕ(t) ∈ PE. It should be noted that the above procedure is not mathematically rigorous, since it rests on the strong assumption that satisfaction of the condition (5.4) over a sufficiently large time interval implies its satisfaction over the entire time axis. Generally speaking, such a conclusion cannot be made. However, as far as practical scenarios are concerned, this simplification is acceptable, and the described procedure can be efficient.
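The harmonic-enrichment heuristic of Steps 1-3 amounts to adding distinct sinusoids on top of a stabilizing component; the amplitudes and frequencies in the sketch below are illustrative assumptions.

```python
import numpy as np

# Control signal with m distinct harmonics on top of a stabilizing
# component u_b:  u(t) = u_b(t) + Σ_{i=1..m} a_i sin(ω_i t).
# Amplitudes and frequencies are illustrative assumptions.
def rich_control(t, m, u_b=0.0):
    amps = [1.0 / i for i in range(1, m + 1)]        # decaying amplitudes
    freqs = [float(i) for i in range(1, m + 1)]      # ω_i ≠ ω_j for i ≠ j
    return u_b + sum(a * np.sin(w * t) for a, w in zip(amps, freqs))
```

A sum of m sinusoids with distinct frequencies is sufficiently rich of order 2m, so incrementing m enriches the spectrum exactly as the iterative procedure requires.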
Comment 4. If, in addition to Hypotheses 1-3, the pair (C_e^T, A_e) of the extended system is observable, then the extended observer (5.5), which is augmented only with the estimation laws for Θ_AB(θ) and L_e(θ), (i) does not require the parameterization (4.3) of the regression equation Y_{x_δ0}(t) = M_{x_δ0}(t)x_δ0, (ii) does not require identification of the initial conditions x_δ0 of the exosystem (2.2), and (iii) if the condition (1.9) is met when (3.4) is used, or the regressor finite excitation condition is met when (5.2) is used, ensures the exponential convergence to zero of the errors x̃(t), δ̃(t). In (5.5), L̂_e(t) is an estimate of the vector L_e(θ) ∈ R^{n+n_δ}, which ensures that the corresponding extended system matrix has the desired algebraic spectrum. The linear regression equation with respect to L_e(θ) is parameterized in the same way as Y_L(t) = M_L(t)L(θ), but in the space of dimension n + n_δ. More detailed properties of this alternative version of the extended observer are given in [21].

MATHEMATICAL MODELLING
Numerical experiments with the proposed adaptive observer have been conducted in Matlab/Simulink. The simulation was performed using numerical integration by the explicit Euler method with a constant discretization step τ_s = 10^{−4} s.
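The integration scheme used above is the standard forward Euler update x_{k+1} = x_k + τ_s f(t_k, x_k); a generic sketch follows, with a scalar decay problem as an illustrative test case (not the paper's plant).

```python
import numpy as np

# Explicit (forward) Euler integration with a fixed step tau,
# as used for the simulations in this section.
def euler(f, x0, t0, t_end, tau=1e-4):
    x, t = np.asarray(x0, dtype=float), t0
    while t < t_end:
        x = x + tau * f(t, x)   # x_{k+1} = x_k + tau * f(t_k, x_k)
        t += tau
    return x

# Illustrative check:  ẋ = -x, x(0) = 1  gives  x(1) ≈ e^{-1}.
x1 = euler(lambda t, x: -x, [1.0], 0.0, 1.0)
```

Forward Euler is first-order accurate, so with τ_s = 10^{−4} the discretization error is negligible relative to the estimation errors being studied.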
A two-mass elastic mechanical system, shown in Fig. 1, was chosen as the plant. Here c_0 > 0, c_1 > 0 denote the spring stiffnesses, d > 0 is a damping coefficient, and m_1 > 0, m_2 > 0 are the reduced masses of the bodies.
The mathematical model of the system under consideration was written as the system of differential equations (6.1). In the observer canonical form (3.1), the parameters of the system (6.1) were defined as shown below, from which it followed that the condition (3.7) was met.
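Since Fig. 1 is not reproduced here, the sketch below builds state-space matrices for one plausible reading of a two-mass spring-damper chain; the interconnection (c_0 ties body 1 to a wall, c_1 and d couple the bodies, the force u acts on body 1, and the position of body 2 is measured) is an assumption, not necessarily the paper's exact model (6.1).

```python
import numpy as np

# State-space matrices for an assumed two-mass spring-damper chain.
# State x = (p1, v1, p2, v2): positions and velocities of the bodies.
# Equations of motion (assumed topology):
#   m1 p1'' = u - c0 p1 - c1 (p1 - p2) - d (v1 - v2)
#   m2 p2'' =            c1 (p1 - p2) + d (v1 - v2)
def two_mass(m1, m2, c0, c1, d):
    A = np.array([
        [0.0,              1.0,     0.0,      0.0],
        [-(c0 + c1) / m1, -d / m1,  c1 / m1,  d / m1],
        [0.0,              0.0,     0.0,      1.0],
        [c1 / m2,          d / m2, -c1 / m2, -d / m2],
    ])
    B = np.array([[0.0], [1.0 / m1], [0.0], [0.0]])
    C = np.array([[0.0], [0.0], [1.0], [0.0]])   # y = p2
    return A, B, C
```

Every entry of A is a ratio of the physical parameters θ = (m_1, m_2, c_0, c_1, d), which is exactly the overparameterized dependence A(θ), B(θ) exploited by the observer.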
The regression equation (4.2) was parameterised using the transformations introduced in Hypotheses 1-3 and Lemma 2.
Step 1. Derivation of the parameterization Y_θ(t) = M_θ(t)θ. The following set of nonlinear algebraic equations was solved with respect to θ, and, using this solution, the mappings S(ψ_ab) and G(ψ_ab) from (3.8) were obtained. Then the mappings T_S(Ξ_S(∆)Y_ab), T_G(Ξ_G(∆)Y_ab) were defined, which allowed one to compute Y_θ(t) and M_θ(t).
Step 2. Using the above-obtained equation Y θ (t) = M θ (t)θ, the following mappings were obtained therefore, we could calculate Y AB (t) and M AB (t).
Step 3. Having the equation Y_AB(t) = M_AB(t)Θ_AB(θ) at hand and using the equations from the second statement of Lemma 2, the values of Y_L(t) and M_L(t) were computed.
Step 4. Applying the equation Y_θ(t) = M_θ(t)θ from the first step, the following mappings were obtained, which allowed one to calculate Y_{ψ_d}(t), M_{ψ_d}(t) and Y_{x_δ0}(t), M_{x_δ0}(t) on the basis of the equations from the third statement of Lemma 2. At this point, having Y_AB(t), Y_L(t), Y_{x_δ0}(t) at hand, equation (4.2) with the measurable regressand Y_κ(t) and regressor M_κ(t) could be obtained, and the observer (2.4) with the estimation law (4.4) could be implemented.
The unknown parameters of the system (6.1) and the parameters of the disturbance exosystem (2.2) and exosystem (2.3) were picked as shown below. The control signal u(t) was obtained from a P-controller with a reference signal chosen by trial and error so as to ensure ϕ(t) ∈ PE. Figure 2 shows the behavior of the observation errors of the state x(t) and the external perturbation δ(t).
The peaking in x̃(t) over [5, 15] was caused primarily by the fact that the error equation (4.1) with non-zero initial conditions [22] was included in the closed loop with L̂(t)(ŷ(t) − y(t)). The peaking in δ̃(t) can be explained by the fact that the behavior of x̂_δ0(t) was affected by the perturbation ε(t).
Figure 3 depicts the behavior of the parametric errors Θ̃_AB(t), L̃(t), and x̃_δ0(t). The oscillations of the transients of these parametric errors were caused by the influence of the exponentially decaying perturbation ε(t) from the parameterization (3.3). In general, the simulation results validated that the goal (2.5) was achieved.

CONCLUSION
An adaptive observer of the state and perturbations for linear systems with overparameterization has been developed. If the condition of regressor persistent excitation (sufficient richness of the control/reference signal) is met, the solution provides exponential convergence to zero of the observation errors of the system state and of the external perturbation generated by a known exosystem with unknown initial conditions. Unlike the closest analogues [10-12], the proposed observer allows one to reconstruct the physical rather than the virtual state of a system represented in an arbitrary state-space form.
Further research may focus on:
- the application of the proposed observer to solve control problems with dynamic feedback;
- the relaxation of (1.9) by substituting (3.3) with a parameterization that does not include ε(t) (a preliminary result for this problem has been obtained in Comment 2 and [21]);
- the extension of the obtained results to systems with new, possibly nonlinear, models of the external perturbations;
- taking into consideration additive disturbances that affect the measurable output signal y(t) directly;
- following [12], the reduction of the transient peaking amplitude by estimating the state x(t) with the help of an algebraic equation instead of a differential one (a preliminary result for this problem has been obtained in [23]).

FUNDING
Research was in part financially supported by Grants Council of the President of the Russian Federation (project MD-1787.2022.4).

APPENDIX
Proof of Lemma 1. The parameterization (3.3) is obtained as a combination of the results from [12,24] with the dynamic regressor extension and mixing procedure from [14,19]. The proof of Lemma 1 is derived on the basis of Lemma 1 and Theorem 2 from [24]. To make the adopted notation easier to understand and to keep the results of the paper self-contained, we next present the proof of this lemma in accordance with the one in [24]. In contrast to the results of [24], in this paper, owing to Assumption 2, β is known, which allows one to avoid overparameterization in (3.3) (see (A.23)).
The multiplication of (A.42) by M_{ψ_d}(t) and the substitution of (A.43) into the obtained result allow one to write (A.44). Having filtered (A.44) via (4.3) and multiplied the obtained result by adj{V_f(t)}, the regression equation Y_{x_δ0}(t) = M_{x_δ0}(t)x_δ0 is obtained, which completes the proof of the statement that the equations (4.2) can be formed on the basis of measurable signals.
In accordance with Lemma 6.8 from [6], if h_δ^T Φ_δ(t) ⊗ I_n ∈ PE, then the following inequality holds. Then for all t ≥ t_0 + T we have M_{x_δ0} ≥ M̲_{x_δ0} > 0, which allows one to obtain the required lower bound. This completes the proof of Lemma 2.

AUTOMATION AND REMOTE CONTROL Vol. 84 No. 11 2023