On the dynamic reconstruction of sliding controls

The paper is devoted to the problem of dynamic control reconstruction for controlled deterministic affine systems. The reconstruction has to be carried out in real time using known discrete inaccurate measurements of an observed trajectory of the system. This trajectory is generated by an unknown measurable control with values in a given compact set. A correct formulation of the reconstruction problem for the case of a non-convex control restriction set is given, and an approach to solving this problem is suggested. The approach is based on auxiliary variational problems with a non-classical convex-concave Tikhonov-regularized integral cost. A numerical method for solving the dynamic control reconstruction problem is suggested; it reduces the reconstruction problem to solving systems of linear ordinary differential equations. Matching conditions for the approximation parameters (the accuracy and step of the measurements and the Tikhonov regularizing parameter) under which the constructed approximations converge to the solution are obtained. Results of numerical simulation are presented to illustrate the theory.


Inverse problems
Inverse problems play an important role in the theory and applications of dynamical systems. While forward optimal control problems aim at constructing control laws that optimize given quality indexes, inverse problems aim at reconstructing an unknown input control from a known output, namely an observed motion of the system. The term dynamic reconstruction means that the reconstruction is carried out in real time, as new pieces of information are received from the observations. Such problems are relevant to many modern applied control problems (for example, in mechanics, medicine and economics).

Modern approaches to solving inverse problems
There exists a variety of modern approaches to solving inverse problems; we offer a brief review of some of them. A classical approach, based on the least squares method, is suggested in [1]. In [2], inverse problems are solved by analyzing the response of the dynamical system to special controls. In [3], a regularized filtering function is used. Some of the suggested methods are based on algebraic approaches, such as the method described in [4], which relies on Lie group theory. Another method, developed in [5], relies on an optimization approach based on gradient-type methods: inverse problems are formulated as operator equations and then reduced to minimizing the corresponding residual functionals, with adjoint variables used to compute the gradients. The authors of [6] consider effective iterative methods based on generalized Golub-Kahan bidiagonalization, which allows automatic regularization of the solution. Similar problems are considered in [7], where an approach based on Fourier series is proposed.
Let us also note two approaches that rely on the Tikhonov regularization procedures [8]. The first one [9] suggests an optimization method based on discrete linear operator equations that reduces inverse problems to the minimization of Tikhonov functionals in which an additional penalty term enforces smoothness of the minimizer.
Another well-known approach to solving dynamic control reconstruction problems was proposed in [10]. It consists in minimizing a Tikhonov-regularized discrepancy between the dynamics and the measurements of the observed states of the system. It uses the extremal aiming procedure, which has roots in the works of the school of N. N. Krasovskii on the theory of positional optimal control [11].
More detailed reviews of modern methods for solving inverse problems are presented in [12,13].

The new approach
This work develops a new variational approach to solving dynamic control reconstruction problems, which was proposed by N. N. Subbotina, E. A. Krupennikov and T. B. Tokmantsev [14-16]. The approach uses auxiliary variational problems of minimizing a non-classical integral convex-concave Tikhonov-regularized residual functional. A feature of the proposed approach is that it does not require solving the auxiliary problem directly and uses only linearized necessary conditions for an extremum. This reduces the dynamic control reconstruction problem to the integration of Hamiltonian systems of linear ordinary differential equations.
In this paper, we consider deterministic dynamical controlled systems affine in controls. It is supposed that discrete inaccurate measurements of the observed trajectory are known. This trajectory is called the basic trajectory. Reconstruction is carried out in real time as new measurements are received. Admissible controls are bounded measurable functions.
Note that other approaches to solving inverse problems that rely on auxiliary optimization problems use, as a rule, convex residual functionals (see, for example, [9]). The new approach uses convex-concave functionals. This innovation ensures that the obtained approximate motions oscillate around the real motion of the dynamical system. Such behavior of the residual is well-suited for averaging and provides *-weak convergence of the constructed control approximations to the desired solution.
A numerical algorithm based on the new approach is proposed. It reduces the dynamic control reconstruction problem to integrating systems of linear ODEs. The variety of existing numerical methods for solving ODEs ensures the efficiency of the algorithm.

New Results
The justification of the new approach is based on the results obtained in [14-16]. These results were obtained for the case of a convex control restriction set, which describes geometric constraints on the controls. In this work, a generalization of this case is introduced: namely, controls with values in non-convex sets are considered. This type of restrictions, in particular, covers controls with sliding regimes.
Note that the control reconstruction problem is ill-posed because one basic trajectory can be generated by different admissible controls. To regularize the problem, the definition of the normal control is introduced: the admissible control that generates the basic trajectory and has the least possible L2-norm. It has been shown in [16] that, under some general assumptions, there exists a unique normal control in the case of a convex control restriction set. Now we need to complement the set of admissible controls by generalized controls, which are measurable functions with values in the set of regular probability measures on the given set of geometric restrictions. This allows us to include sliding controls in the set of admissible controls (for references on generalized controls see [17]). But it implies that we need to redefine the notion of the unique normal control to make the problem well-posed.
Let us remark that for each generalized control there exists a unique averaged control [17], that is, a measurable function with values in the convex hull of the set of geometric control restrictions. We say that it is equivalent to the corresponding generalized control in the sense that they generate the same trajectory of the dynamical system. One averaged control can be equivalent to a whole set of generalized controls. We will not distinguish generalized controls that are equivalent to the same averaged control; hence, we will consider the set of averaged controls as the set of admissible generalized controls.
So, the set of control restrictions is now convex, as in [14-16]. According to [16], there exists a unique generalized normal control, that is, the admissible generalized control that generates the basic trajectory and has the least possible L2-norm. Thus, we can consider the well-posed problem of control reconstruction as the problem of approximating the generalized averaged normal control, which will be further called the normal control. The results obtained in [15,16] can be applied to the introduced regularization of the problem for the non-convex case.
Note that there may exist many admissible controls that are equivalent to the normal control. There is no way to determine which one of them generated the basic trajectory in reality.

Problem statement

Dynamics
Dynamical controlled systems of the following form are considered:

ẋ(t) = G(t, x(t)) u(t) + f(t, x(t)), t ∈ [0, T], (1)

where u(•) is the vector of controls (the input of the system) and x(•) is the vector of state variables (the output of the system). Admissible controls are measurable functions with values in a compact set U ⊂ R^m:

u(t) ∈ U a.e. on [0, T]. (2)

Note that the restriction set U is not necessarily convex. In the case of non-convex restrictions, controls with sliding regimes can appear. We introduce generalized controls to describe the motions of system (1) generated by such controls. Generalized admissible controls are measurable functions with values in the convex hull of U:

u(t) ∈ co U a.e. on [0, T]. (3)
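To fix intuition for dynamics of form (1), here is a minimal simulation sketch; the concrete G, f and u below, as well as the simple Euler scheme, are hypothetical stand-ins for illustration, not taken from the paper.

```python
import numpy as np

def simulate_affine(G, f, u, x0, T, dt=1e-3):
    """Euler simulation of the affine system x'(t) = G(t, x) u(t) + f(t, x)."""
    ts = np.arange(0.0, T + dt, dt)
    xs = np.empty((len(ts), len(x0)))
    xs[0] = x0
    for k in range(len(ts) - 1):
        t, x = ts[k], xs[k]
        xs[k + 1] = x + dt * (G(t, x) @ u(t) + f(t, x))
    return ts, xs

# Hypothetical scalar example: x' = u with u(t) = sin t, so x(T) ~ 1 - cos T.
G = lambda t, x: np.array([[1.0]])
f = lambda t, x: np.array([0.0])
u = lambda t: np.array([np.sin(t)])
ts, xs = simulate_affine(G, f, u, x0=np.array([0.0]), T=2.0)
```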

Measurements
It is supposed that we observe a trajectory of system (1) that is generated by an unknown generalized admissible control. This trajectory is called the basic trajectory x*(•) : [0, T] → R^n. Discrete inaccurate measurements {y_i^δ} of the basic trajectory have the error δ > 0 and are received with the time step h_δ > 0:

‖y_i^δ − x*(t_i)‖ ≤ δ, t_i = i h_δ, i = 0, 1, . . ., N. (4)

We introduce the following assumptions.

Assumption 1. There exist parameters d_0 > 0, δ_0 > 0, h_0 > 0 and a compact set Ψ ⊂ R^n such that for any δ ∈ (0, δ_0] and h_δ ∈ (0, h_0] the basic trajectory, together with its d_0-neighborhood, lies in Ψ; here B_r(x) is the ball of radius r with the center at x.

Assumption 2. In dynamics (1), the elements of the matrix function G(•) and of the vector function f(•) are Lipschitz continuous in all variables for (t, x) ∈ D_0 = [0, T] × Ψ.

Assumption 3. The rank of the matrix G(t, x) equals min{m, n} for (t, x) ∈ D_0.
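A sketch of how discrete measurements of this kind can be simulated for testing; the trajectory x*(t) = sin t and all parameter values below are illustrative assumptions, not data from the paper.

```python
import numpy as np

def measurements(x_star, T, h, delta, rng=None):
    """Discrete inaccurate measurements y_i with |y_i - x*(t_i)| <= delta."""
    rng = np.random.default_rng(rng)
    ts = np.arange(0.0, T + h / 2, h)          # measurement instants t_i = i*h
    noise = rng.uniform(-delta, delta, size=ts.shape)
    return ts, np.array([x_star(t) for t in ts]) + noise

# Hypothetical basic trajectory x*(t) = sin t observed on [0, 1].
ts, ys = measurements(np.sin, T=1.0, h=1e-2, delta=1e-4, rng=0)
```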

Well-posed dynamic control reconstruction problem
Generally speaking, the problem of reconstruction of the unknown control is ill-posed because the same basic trajectory can be generated by a set of different admissible generalized controls. This can be illustrated by a simple example: the basic trajectory x*(t) ≡ 0 can be generated by any control from the whole set U* of admissible generalized controls generating the basic trajectory x*(•).
To state a well-posed problem, the definition of the normal control is introduced.

Definition. The normal control u*(•) : [0, T] → R^m is the measurable control that generates x*(•) and has the least possible norm in the L2 space:

‖u*(•)‖_{L2} = min { ‖u(•)‖_{L2} : u(•) ∈ U* }. (5)

It was proved in [16] that if Assumptions 1-3 hold for x*(•), then there exists a unique normal control u*(•).
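The minimum-norm idea behind the normal control can be seen in a toy static analogue; the 1×2 system below is a hypothetical illustration, not the paper's dynamics. Among all control pairs (u1, u2) producing the same velocity u1 + u2 = v, the one of least Euclidean norm is the pseudoinverse solution.

```python
import numpy as np

# For x' = u1 + u2 every pair with u1 + u2 = v produces the same trajectory;
# the "normal" choice is the pair of least L2 norm, i.e. the minimum-norm
# solution G^+ v (hypothetical 1x2 example).
G = np.array([[1.0, 1.0]])
v = np.array([3.0])
u_normal = np.linalg.pinv(G) @ v
```

Any other solution, such as (3, 0), generates the same output but has a strictly larger norm, which is exactly the ambiguity the normality condition removes.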
Assumption 4. The normal control satisfies condition (3); therefore, it is a generalized admissible control.
Thus, we can approximate in this way a real admissible sliding control that is equivalent to the generalized normal control. Since a sliding regime cannot be approximated in the L2 space, *-weak convergence is used in the problem statement.
A step-by-step algorithm for solving the dynamic control reconstruction problem is suggested. Each i-th step (i = 1, . . ., N) is performed when the measurement point y_i^δ from (4) is received. On each i-th step, the solution is constructed on the interval [t_{i−1}, t_i] = [(i−1)h_δ, i h_δ], i = 1, . . ., N.
The construction consists of 3 sub-steps:

Interpolation
The function y^δ(•) : [0, T] → R^n is a third-degree spline of first-order smoothness which satisfies the interpolation conditions y^δ(t_i) = y_i^δ, i = 0, . . ., N.
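A first-order-smooth (C^1) cubic interpolant of the measurement points can be sketched with classical Hermite basis polynomials; estimating the node slopes by finite differences is an assumption here, since the paper does not specify how the spline's derivative data are chosen.

```python
import numpy as np

def hermite_c1(ts, ys):
    """C^1 piecewise-cubic interpolant of the points (ts[i], ys[i])."""
    ds = np.gradient(ys, ts)  # finite-difference slope estimates at the nodes
    def y_delta(t):
        i = int(np.clip(np.searchsorted(ts, t) - 1, 0, len(ts) - 2))
        h = ts[i + 1] - ts[i]
        s = (t - ts[i]) / h
        # Cubic Hermite basis polynomials on [0, 1].
        h00 = 2 * s**3 - 3 * s**2 + 1
        h10 = s**3 - 2 * s**2 + s
        h01 = -2 * s**3 + 3 * s**2
        h11 = s**3 - s**2
        return h00 * ys[i] + h * h10 * ds[i] + h01 * ys[i + 1] + h * h11 * ds[i + 1]
    return y_delta

ts = np.linspace(0.0, 1.0, 11)
ys = np.sin(ts)                # stand-in for the measurements y_i^delta
y_delta = hermite_c1(ts, ys)
```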

Solving auxiliary variational problem
The constructed interpolation y^δ(•) is used in the following auxiliary variational problem on the minimum of the functional

I(x(•), u(•)) = ∫_{t_{i−1}}^{t_i} [ −‖x(t) − y^δ(t)‖²/2 + α²‖u(t)‖²/2 ] dt, (6)

where α is a small Tikhonov regularizing parameter [8]. The functional is considered on the set of continuously differentiable functions satisfying dynamics (1). The necessary optimality conditions can be written in the form of a Hamiltonian system of ODEs [18]. The equations in this system are non-linear; they are linearized by "freezing" the coefficients, which yields a system of linear ODEs (7), where s_i(•) is the vector of adjoint variables corresponding to the i-th step.
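The key computational property of the linearized system (7) is that it is a linear ODE system, so any standard scheme applies. Below is a generic fourth-order Runge-Kutta integrator for a linear system s' = A(t)s + b(t); the concrete A, b and the check problem are hypothetical placeholders, not the frozen-coefficient matrices of (7).

```python
import numpy as np

def rk4_linear(A, b, s0, t0, t1, steps=100):
    """RK4 integration of the linear system s'(t) = A(t) s + b(t) on [t0, t1]."""
    dt = (t1 - t0) / steps
    t, s = t0, np.asarray(s0, dtype=float)
    rhs = lambda t, s: A(t) @ s + b(t)
    for _ in range(steps):
        k1 = rhs(t, s)
        k2 = rhs(t + dt / 2, s + dt / 2 * k1)
        k3 = rhs(t + dt / 2, s + dt / 2 * k2)
        k4 = rhs(t + dt, s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return s

# Hypothetical check: s' = -s, s(0) = 1, so s(1) ~ e^{-1}.
A = lambda t: np.array([[-1.0]])
b = lambda t: np.array([0.0])
s1 = rk4_linear(A, b, [1.0], 0.0, 1.0)
```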

Construction of the solution
System (7) is numerically integrated by any suitable method, and its solution s_i(•) : [t_{i−1}, t_i] → R^n is used to construct the approximation u_i^δ(t) of the solution of the dynamic control reconstruction problem on this step; the formula expressing u_i^δ(t) through s_i(t) is one of the Euler equations in the auxiliary variational problem (6). The piecewise-continuous function u^δ(•) composed of the pieces u_i^δ(•), i = 1, . . ., N, is the output (8) of the algorithm. A more detailed description and justification of the algorithm is given in [16].
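The three sub-steps can be summarized in a step-loop skeleton; all callables below (receive, interpolate, solve_adjoint, control_from_adjoint) are hypothetical placeholders for the operations described above, not functions defined in the paper.

```python
def reconstruct_stepwise(receive, interpolate, solve_adjoint,
                         control_from_adjoint, N):
    """Skeleton of the step-by-step reconstruction: on each step the new
    measurement is interpolated, the linear adjoint system is integrated,
    and a control piece is produced; the pieces form u^delta."""
    pieces = []
    for i in range(1, N + 1):
        y_i = receive(i)                          # measurement y_i^delta arrives
        y_interp = interpolate(y_i)               # sub-step 1: spline interpolation
        s_i = solve_adjoint(y_interp)             # sub-step 2: integrate linear ODEs
        pieces.append(control_from_adjoint(s_i))  # sub-step 3: control piece
    return pieces

# Trivial smoke run with identity-like placeholders.
pieces = reconstruct_stepwise(
    receive=lambda i: float(i),
    interpolate=lambda y: y,
    solve_adjoint=lambda y: y,
    control_from_adjoint=lambda s: 2.0 * s,
    N=3,
)
```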
The following theorem is proved in [15,16].

Theorem. If Assumptions 1-4 hold for the input data (1), (4), then the functions (8) are an approximation of the normal control, which is the solution to the dynamic control reconstruction problem, provided that the matching conditions on the parameters δ, h_δ and α obtained in [15,16] hold.

Example
As an example, let us consider a model of controlled flight used in the problem of maintaining the flight altitude [19] (system (10)). The state variables are the vertical velocity v(•) and the mass m(•). The unknown control parameters are the fuel consumption β(•) and the aerodynamic drag coefficient q(•). Note that the bounding set {0, 1} for β in restrictions (11) is not convex.
To simulate the measurements that serve as the input data for the dynamic control reconstruction problem, the following generalized controls have been considered: a generalized control β*(•) and q*(t) = 0.1 + 0.2 sin t.
Since the generalized admissible control β*(•) does not satisfy the restriction (11), we introduce an admissible control that approximates a sliding control equivalent to β*(•), in the sense that it generates the same trajectory as β*(•). This approximation has the form of a function that switches between the bounds {0, 1} with the step 10^−3. The obtained admissible control and q*(•) were substituted into the dynamics (10) to calculate the basic trajectory. Then it was perturbed at discrete points with the step h_δ = 10^−3 by a random error not exceeding δ = 10^−4. The obtained set of "measurements" is considered as the input data for the dynamic control reconstruction problem.
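The equivalence between a fast-switching admissible control and its averaged counterpart can be illustrated on a toy mass equation; the dynamics m' = -c·β, the duty-cycle construction and all constants below are assumptions for illustration, not the flight model (10).

```python
import numpy as np

def chattering(beta_avg, t, period=1e-3):
    """Bang-bang control in {0, 1} whose duty cycle over each switching
    period equals the averaged control beta_avg(t)."""
    phase = (t % period) / period
    return 1.0 if phase < beta_avg(t) else 0.0

def integrate(beta, T=1.0, dt=1e-5, c=0.1, m0=1.0):
    """Euler integration of the toy mass equation m' = -c * beta(t)."""
    m, t = m0, 0.0
    while t < T:
        m -= dt * c * beta(t)
        t += dt
    return m

beta_avg = lambda t: 0.5                 # averaged control in co{0,1} = [0,1]
m_sliding = integrate(lambda t: chattering(beta_avg, t))
m_averaged = integrate(beta_avg)
```

The two trajectories nearly coincide, which is the sense in which the chattering admissible control and the averaged (generalized) control are equivalent.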
The results of averaging are presented in Figure 3 (the reconstruction against the generalized unknown control).

Conclusion
A new approach for solving dynamic control reconstruction problems, suggested in [14-16], is developed. A generalization to the case of a non-convex control restriction set has been suggested, and a new definition of the generalized normal control is introduced. In particular, it allows one to consider dynamical systems with controls that have sliding regimes.
In the future, it is planned to develop and justify a procedure that allows one to construct numerical approximations of admissible sliding controls with the help of measurable controls with values in non-convex sets of geometric restrictions. The approximations have to converge *-weakly to the generalized normal control.