Time optimal control for a nonholonomic system with state constraint

The aim of this paper is to tackle the time optimal controllability of an (n + 1)-dimensional nonholonomic integrator. In the optimal control problem we consider, the state variables are subject to a bound constraint. We give a full description of the optimal control, and the optimal trajectories are obtained explicitly. The optimal trajectories we construct lie in a 2-dimensional plane and are composed of arcs of circles.


Introduction
This paper is devoted to the study of the time optimal controllability of an (n + 1)-dimensional nonholonomic differential system with a constraint on the state variables. The optimal control problem we are interested in is bilinear with respect to the state and control variables. Nonholonomic systems have been intensively studied in numerous works and we only refer to Bloch [5], where a comprehensive survey is given in connection with control theory. Minimum time control problems for nonholonomic systems have also been considered in the literature, and explicit optimal solutions have been computed when no constraint is imposed on the state variables. The case n = 2, i.e. when only two controls are considered, is studied in Bloch [5] with a Lagrangian approach to compute optimal controls without state constraint. The n-dimensional control case we consider in this paper is a generalization of the Brockett integrator. Our generalization is different from the one originally given in Brockett [7] but corresponds to the (2n + 1)-dimensional Heisenberg systems studied in Beals, Gaveau and Greiner [4] and Agrachev, Barilari and Boscain [1] in the framework of sub-Riemannian geometry. The minimal time problem for the nonholonomic system can be interpreted as a sub-Riemannian geodesic problem. The nonholonomic system is equivalently described by the action law of the Heisenberg group in R^{n+1}. In this context, the minimum time needed to steer the origin to a point in R^{n+1} is equal to the sub-Riemannian distance from the origin to that point. A study of the geodesics of the sub-Riemannian manifold induced by the Heisenberg group can be found in Beals, Gaveau and Greiner [4]; see also Prieur and Trélat [17] or Agrachev, Barilari and Boscain [1]. All these studies are performed without any state constraint. As far as we know, there are no explicit optimal solutions when constraints are imposed on the state variables.
The aim of this paper is to give explicit optimal solutions to the minimal time control problem for a general nonholonomic system with a state constraint. This will be achieved with the use of Pontryagin's maximum principle extended to the state-constrained case (see Bonnans and Hermant [6], Hartl, Sethi and Vickson [13] or Ioffe and Tihomirov [15]).
Before introducing in detail the general nonholonomic system studied in this work, we give some notation used throughout this paper. Let n ≥ 2 be an integer and let M denote an n × n non-zero real skew-symmetric matrix. By ⟨·, ·⟩ we denote the inner product in R^n and |·| is the corresponding Euclidean norm. The null vector in R^n is denoted by 0_n. We fix two values c ∈ (0, +∞] (possibly c = +∞) and ȳ ∈ R, ȳ ≠ 0, which stand respectively for the bound of the state constraint and the nontrivial target to be reached by one of the state components. Then, the optimal control problem we consider reads as follows.

Time optimal control problem. Find the minimal time T⋆ > 0 such that there exist functions (y, x) : [0, T⋆] → R × R^n and a control variable u : [0, T⋆] → R^n satisfying the differential system

ẏ(t) = ⟨M x(t), u(t)⟩ (t ∈ [0, T⋆]), (1.1)
ẋ(t) = u(t) (t ∈ [0, T⋆]), (1.2)

with the initial and final states

y(0) = 0, x(0) = 0_n, (1.3)
y(T⋆) = ȳ, x(T⋆) = 0_n, (1.4)

and subject to the constraints

|u(t)| ≤ 1 (t ∈ [0, T⋆]), (1.5)
|x(t)| ≤ c (t ∈ [0, T⋆]). (1.6)

In the optimal control problem (1.1)-(1.6), the functions (y, x) : R₊ → R × R^n are the state variables of the system whereas u : R₊ → R^n is the control variable. Equation (1.3) is the initial condition for the state variables and (1.4) is the final state to be reached in minimal time. The constraint (1.5) on the control variable is necessary to make system (1.1)-(1.4) relevant. Without any boundedness constraint on the control variable, the minimum time control problem does not make sense. Indeed, the control variable can be seen as a velocity variable, and if we do not require the control u to be bounded, the minimal time tends to zero. The constraint (1.6) on the state variable x is mainly motivated by the work of Lohéac, Scheid and Tucsnak [16]. In that paper, the authors tackle a time optimal problem arising from fluid dynamics and the self-propulsion of a deformable body immersed in a Stokes fluid. The body is able to move in the fluid by changing its shape. In this context, the state variable y stands for the position of the mass center of
the body in the fluid, whereas x corresponds to the magnitudes of radial deformations of a sphere. Small magnitudes are required to ensure that the deformations are invertible. In order to deal with small deformations, a state constraint of type (1.6) is imposed on the magnitudes x. The choice of the Euclidean norm for the state constraint (1.6) is mainly motivated by the structure of the control problem. If we choose another norm for the state constraint (for instance, the norm |x|₁ or |x|∞), or more generally a compact set for x, then the analysis done in this article becomes much more difficult to perform. In particular, in the version of the Pontryagin maximum principle we use, the state constraint has to involve a regular (i.e. differentiable) function of the state variable x.
In this paper, we deal with a skew-symmetric matrix M. All the results we obtain can easily be extended to the more general case of a non-symmetric matrix M (i.e. M⊤ ≠ M). With a non-symmetric matrix M, the results remain valid with the skew-symmetric part of M in place of M. Indeed, for a non-symmetric matrix M, it suffices to make the change of variable z = y − (1/4)⟨(M + M⊤)x, x⟩; then z satisfies (1.1) with the skew-symmetric part of M in place of M. In addition, z fulfills the initial condition z(0) = 0 and the final condition z(T⋆) = ȳ. As a result, the minimal time and the optimal controls corresponding to a non-symmetric matrix M are the same as the ones associated with the skew-symmetric part of the matrix M.
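The algebraic identity behind this change of variable can be checked numerically. The sketch below (with an arbitrary random matrix, purely illustrative) verifies that ⟨Mx, u⟩ − (1/2)⟨(M + M⊤)x, u⟩ = ⟨(1/2)(M − M⊤)x, u⟩, i.e. that after the substitution only the skew-symmetric part of M drives z:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
M = rng.standard_normal((n, n))        # generic non-symmetric matrix
Ms = 0.5 * (M - M.T)                   # skew-symmetric part

# The change of variable z = y - (1/4)<(M + M^T)x, x> removes the symmetric
# part of M from the dynamics dy/dt = <Mx, u>, dx/dt = u, since
# dz/dt = <Mx, u> - (1/2)<(M + M^T)x, u> = <Ms x, u>.
for _ in range(100):
    x = rng.standard_normal(n)
    u = rng.standard_normal(n)
    lhs = (M @ x) @ u - 0.5 * ((M + M.T) @ x) @ u   # dz/dt along (1.1)-(1.2)
    rhs = (Ms @ x) @ u                               # dynamics with skew part
    assert abs(lhs - rhs) < 1e-12
print("reduction to the skew-symmetric part verified")
```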
The paper is organized as follows. In Section 2, after giving some basic results on the controllability of the nonholonomic differential system (1.1)-(1.6), we state the main result of this paper. Optimal trajectories are fully described and optimal solutions to the constrained nonholonomic problem (1.1)-(1.6) are explicitly given. In Section 3, general properties of the optimal control problem are obtained. In particular, we show that the time optimal control saturates its constraints for every time. Next, in Section 4, we apply a Hamiltonian approach taking into account the state constraint and we make use of Pontryagin's maximum principle in order to get fine properties of the optimal control. Section 5 is devoted to the 2-dimensional control case (n = 2), where explicit optimal solutions are obtained. Finally, in Section 6, we prove the main result in the n-dimensional control case by using the explicit solutions built in Section 5.

Controllability results and statement of the main result
In this section we first establish some controllability results for the nonholonomic system (1.1)-(1.6). The main result of this paper is given at the end of the section.
For a given control variable u, the Cauchy problem (1.1)-(1.3) admits a unique solution. More precisely, one can easily check that the following existence result holds.
Theorem 2.1. Let u ∈ L∞(R₊)^n be given. There exists a unique solution (y, x) of the Cauchy problem (1.1)-(1.3).

For every u ∈ L∞(R₊)^n, according to Theorem 2.1, we can define, for t ≥ 0, the state (Y_u(t), X_u(t)) as the value at time t of the solution of (1.1)-(1.3) associated with the control variable u.
We now address the controllability problem associated to the differential system (1.1)-(1.3).

Controllability problem.
Find a time T > 0 and a control variable u ∈ L∞(0, T)^n such that

Y_u(T) = ȳ, X_u(T) = 0_n, (2.7)

and which satisfies the control constraint

|u(t)| ≤ 1 (a.e. t ∈ (0, T)), (2.8)

together with the state constraint

|X_u(t)| ≤ c (t ∈ [0, T]). (2.9)

This problem can be solved by using tools coming from geometric control theory. The controllability result for problem (2.7)-(2.9) reads as follows.
Proposition 2.2. There exist a time T > 0 and a control variable u ∈ L∞(0, T)^n such that (2.7) is satisfied with the constraints (2.8) and (2.9).

Proposition 2.2 can be proved by using the Chow theorem, based on the computation of Lie brackets (see for instance [2, 11]). This result can be obtained by a slight modification of the controllability result proved in [16, Theorem 4.1] (see also [11, Example 3.20] and [5]), so we do not give the proof of this proposition. We just point out that we obtain a non-null Lie bracket due to the crucial fact that the matrix M is not symmetric, i.e. M ≠ M⊤.
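The non-null bracket can be made concrete. In the sketch below (our reconstruction of the standard computation, with an illustrative skew-symmetric matrix), the state is (y, x) ∈ R^{1+n} and the control vector fields are g_i(y, x) = (⟨Mx, e_i⟩, e_i); since the fields are affine, their Jacobians are constant and the bracket [g_i, g_j] = Dg_j g_i − Dg_i g_j can be evaluated exactly:

```python
import numpy as np

# State (y, x) in R^{1+n}; the driftless system dy = <Mx,u>dt, dx = u dt is
# spanned by the control vector fields g_i(y, x) = (<Mx, e_i>, e_i).
n = 3
M = np.array([[0., 2., 0.],
              [-2., 0., 1.],
              [0., -1., 0.]])          # an illustrative non-zero skew matrix

def g(i, state):
    x = state[1:]
    e = np.eye(n)[i]
    return np.concatenate(([(M @ x) @ e], e))

def jacobian(i):
    # g_i is affine in the state, so its Jacobian is constant:
    # d/dx <Mx, e_i> = M^T e_i, all other derivatives vanish.
    J = np.zeros((n + 1, n + 1))
    J[0, 1:] = M.T @ np.eye(n)[i]
    return J

def lie_bracket(i, j, state):
    # [g_i, g_j] = Dg_j g_i - Dg_i g_j
    return jacobian(j) @ g(i, state) - jacobian(i) @ g(j, state)

state = np.concatenate(([0.5], np.array([1., -2., 3.])))
br = lie_bracket(0, 1, state)
# For skew-symmetric M the bracket is the constant "vertical" field
# [g_i, g_j] = (-2 M_ij, 0_n), non-zero exactly when M is not symmetric.
assert np.allclose(br, np.concatenate(([-2 * M[0, 1]], np.zeros(n))))
print(br)
```

The first component −2 M_ij is the direction in y generated by the bracket, which is what Chow's theorem uses to reach the target ȳ.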
The controllability result of Proposition 2.2 allows us to define, for every T > 0, the set U_T(M) of controls u solving (2.7)-(2.9) in time T, namely U_T(M) = {u ∈ L∞(0, T)^n : u satisfies (2.7)-(2.9)}. When no confusion occurs, we will simply write U_T instead of U_T(M).
Applying the classical Filippov theorem (see for instance [2, 9, 13]) to the time optimal control problem, we can easily obtain the following result. For the proof, we refer to [16, Proposition 4.5], where a similar time optimal control result is proved.

Proposition 2.3. There exists a minimal time T⋆(M) > 0 such that U_{T⋆(M)}(M) is non-empty.
We will say that T⋆(M) defined in Proposition 2.3 is the optimal time and that a corresponding control u ∈ U_{T⋆(M)} is a time optimal control. The optimal time T⋆(M) depends on the matrix M, but when there is no possible confusion, we do not mention this dependency and simply write T⋆. The pair (T⋆, u), where u ∈ U_{T⋆}, is called an optimal solution of (2.7)-(2.9).
We are now in position to give the main result of this paper.

Theorem 2.4. Let λ⋆ denote the largest modulus of the eigenvalues of M and set d⋆ = √(2|ȳ|/(λ⋆π)). Then the optimal time T⋆ of the problem (2.7)-(2.9) is given by:

T⋆ = πd⋆ if c ≥ d⋆,  T⋆ = |ȳ|/(λ⋆c) + cπ/2 if c < d⋆. (2.11)

Moreover, the problem (2.7)-(2.9) admits a time optimal control u ∈ C⁰([0, T⋆])^n given by (2.12), where w₁, w₂ ∈ R^n are two orthonormal vectors spanning an invariant plane of M associated with λ⋆, and where the control is defined case by case as follows.
• if c ≥ d⋆, the control is given by (2.13); if c < d⋆, it is given by (2.14). In the above, R stands for the rotation matrix.

From Theorem 2.4, one can give a complete description of the optimal trajectory t → X_u(t) associated with the optimal control u given by (2.12). Indeed, integrating (2.12) shows that the optimal trajectory t → X_u(t) associated with u lies in the plane spanned by {w₁, w₂}. In the case where c ≥ d⋆, i.e. when the bound of the state constraint is large enough, the optimal trajectory t → X_u(t) describes a circle of diameter d⋆ ≤ c passing through the origin 0_n. When c < d⋆, the optimal trajectory t → X_u(t) is a C¹-curve formed by arcs of circles. In particular, in that case one can easily check that |X_u(t)| = c for all t ∈ [τ, T⋆ − τ] and |X_u(t)| < c for all t ∈ [0, τ) ∪ (T⋆ − τ, T⋆]. An example of an optimal trajectory is shown in Figure 1 in the case where c < d⋆. In Figure 1, the trajectory t → X_u(t) is displayed in the plane spanned by the two orthonormal vectors w₁ and w₂.
Remark 2.5. If we consider a non-symmetric matrix M (i.e. M ≠ M⊤), not necessarily skew-symmetric, the results of Theorem 2.4 hold true with the skew-symmetric part of M in place of M. In particular, the result involves the largest modulus λ⋆ of the eigenvalues of the skew-symmetric part of M, i.e. λ⋆ = max{|λ| : λ ∈ sp((1/2)(M − M⊤))}.
Remark 2.6. We point out that the minimal time is a decreasing function of the largest modulus λ⋆. More precisely, for any couple of skew-symmetric matrices whose largest moduli are ordered, the corresponding minimal times are ordered in the reverse way, whatever the sizes of the matrices are.
Remark 2.7. In the case where the bound of the state constraint is large enough, i.e. when c ≥ d⋆, the state constraint does not play any role, since the optimal trajectory never reaches the constraint. For the (2n + 1)-dimensional system (n ≥ 2), explicit formulae for the minimal time and for the optimal solutions have already been obtained in [1, §5.2] for the case c = +∞, i.e. when no state constraint is considered. The minimal time was also computed in [4, Cor. 3.101]. These formulae coincide with (2.11)-(2.13) in the case c ≥ d⋆. In [1], the authors study the geodesics on the Heisenberg group viewed as a step 1, corank 1 nilpotent contact sub-Riemannian manifold. We also mention [4, Th. 1.41] and [17], where all the optimal solutions are obtained for the 2-dimensional control case n = 2.
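The unconstrained geometry can be checked numerically for n = 2, M = λJ. The candidate optimal trajectory is a circle of diameter d through the origin traversed at unit speed, so T = πd, and the transfer in y equals λ times twice the enclosed area; the closed form d⋆ = √(2|ȳ|/(λπ)) used below is our reading of the critical diameter in (2.11):

```python
import numpy as np

# Unconstrained case (c = +infinity), n = 2, M = lam*J with J = [[0,-1],[1,0]].
lam, ybar = 1.5, 1.0
d = np.sqrt(2 * ybar / (lam * np.pi))        # candidate optimal diameter d_*
T = np.pi * d                                 # candidate minimal time pi*d_*
J = np.array([[0., -1.], [1., 0.]])

ts = np.linspace(0.0, T, 200_001)
u = np.stack([np.cos(2 * ts / d), np.sin(2 * ts / d)], axis=1)      # |u| = 1
x = 0.5 * d * np.stack([np.sin(2 * ts / d), 1 - np.cos(2 * ts / d)], axis=1)
ydot = lam * np.einsum('ti,ti->t', x @ J.T, u)                      # <M x, u>
y_final = np.sum(0.5 * (ydot[1:] + ydot[:-1]) * np.diff(ts))        # trapezoid

assert abs(y_final - ybar) < 1e-6        # the target ybar is reached at T
assert np.linalg.norm(x[-1]) < 1e-9      # and x returns to the origin 0_2
print("T* =", T)
```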

Preliminary properties of the optimal solution
In this section we establish some properties of the time optimal solutions of the control problem (2.7)-(2.9). These properties will be very useful when integrating the maximum principle in Section 4 in order to obtain fine properties of the optimal trajectories. We first mention a result about the reversibility property of the control problem.

Proposition 3.1. Let T > 0 and u ∈ U_T be a solution of the control problem (2.7)-(2.9). We define the function ũ ∈ L∞(0, T)^n by ũ(t) = u(T − t) for almost every t ∈ (0, T). Then, if u is a control steering the state trajectory from (0, 0_n) to (ȳ, 0_n) in time T, the control ũ = u(T − ·) steers the state trajectory from (0, 0_n) to (−ȳ, 0_n) in the same time T.
The next proposition shows that any time optimal control saturates its constraint, that is to say, the bound of the constraint (2.8) is reached by any optimal control for almost every time.

Proposition 3.2. Let T⋆ > 0 and u ∈ U_{T⋆} be an optimal solution of the control problem (2.7)-(2.9). Then u satisfies, for almost every t ∈ (0, T⋆), |u(t)| = 1 (3.1) and u(t) ∈ Ker(M)⊥ (3.2).

Proof. To prove (3.1), we argue by contradiction. Let us assume that there exists a time optimal control u which does not satisfy (3.1). We define the function s ∈ W^{1,∞}(0, T⋆) by s(t) = ∫₀ᵗ |u(σ)| dσ (3.3) and we denote T̃ = s(T⋆). Since ȳ ≠ 0, the optimal control u is necessarily a non-null function, so the function s is non-constant on [0, T⋆]. As a result, we have T̃ > 0. In addition, from the fact that |u(t)| ≤ 1 for almost every t ∈ (0, T⋆), and since we assume that u does not satisfy (3.1), we deduce that T̃ = s(T⋆) < T⋆. Moreover, since s is a nondecreasing function, there exists a right inverse function s_r : [0, T̃] → [0, T⋆] such that s(s_r(σ)) = σ for every σ ∈ [0, T̃]. In addition, s_r is a nondecreasing function. This fact implies that s_r is almost everywhere differentiable in [0, T̃] and we have ṡ(s_r) ṡ_r = 1 a.e. in [0, T̃]. Using (3.3), we deduce that |u(s_r)| ṡ_r = 1 a.e. in [0, T̃]. Therefore, we obtain u(s_r) ≠ 0 a.e. in [0, T̃]. Now, we introduce the functions defined for almost every σ ∈ [0, T̃] by ỹ(σ) = y(s_r(σ)), x̃(σ) = x(s_r(σ)) and ũ(σ) = u(s_r(σ))/|u(s_r(σ))|. We are going to prove that ũ is a control function providing a solution to (2.7)-(2.9) in the smaller time T̃ < T⋆. This will lead to a contradiction with the fact that T⋆ is the minimal time.
We first notice that ũ ∈ L∞(0, T̃)^n and that (ỹ, x̃) solves the differential system (1.1)-(1.2) with the control ũ. Thus, we deduce that ỹ = Y_ũ and x̃ = X_ũ. The constraints (2.8) and (2.9) are fulfilled by the control ũ. In addition, we have ỹ(0) = 0 and x̃(0) = 0_n, ỹ(T̃) = y(T⋆) = ȳ and x̃(T̃) = x(T⋆) = 0_n. Then (T̃, ũ) is a solution to the control system (2.7)-(2.9) and, since T̃ < T⋆, we obtain a contradiction. Property (3.1) is proved.

Now, we turn to the proof of (3.2). Using the orthogonal space decomposition R^n = Ker(M) ⊕ Ker(M)⊥, we split u = u₀ + u₁ with u₀(t) ∈ Ker(M) and u₁(t) ∈ Ker(M)⊥. Expanding the scalar product in (1.1) with the splitting (3.7) yields ⟨Mx, u⟩ = ⟨Mx, u₁⟩, since M⊤u₀ = −Mu₀ = 0_n. Thus, we conclude that (T⋆, u₁) is an optimal solution, that is u₁ ∈ U_{T⋆}. Since u and u₁ are time optimal controls, they both satisfy property (3.1), and consequently we have u₀ = 0_n and u = u₁. This ends the proof.
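The reparameterization step of this proof can be illustrated numerically. The sketch below (illustrative parameters, forward Euler discretization) takes a control with |u| = a < 1, rescales time by s(t) = a t, normalizes the control to unit norm, and checks that the endpoint of (y, x) is reached in the shorter time aT:

```python
import numpy as np

# Numerical sketch of the reparameterization argument in Proposition 3.2.
lam = 1.0
M = lam * np.array([[0., -1.], [1., 0.]])

def endpoint(u_vals, T):
    # forward Euler for dy = <Mx,u> dt, dx = u dt, starting from (0, 0_2)
    dt = T / (len(u_vals) - 1)
    x = np.vstack([np.zeros(2), dt * np.cumsum(u_vals[:-1], axis=0)])
    y = dt * np.einsum('ti,ti->', x[:-1] @ M.T, u_vals[:-1])
    return y, x[-1]

N, a, T = 200_001, 0.5, 2 * np.pi
t = np.linspace(0.0, T, N)
u_slow = a * np.stack([np.cos(t), np.sin(t)], axis=1)          # |u| = a < 1
# s(t) = a*t, right inverse s_r(sigma) = sigma/a, normalized control:
s = np.linspace(0.0, a * T, N)
u_fast = np.stack([np.cos(s / a), np.sin(s / a)], axis=1)      # |u| = 1

y1, x1 = endpoint(u_slow, T)
y2, x2 = endpoint(u_fast, a * T)       # same endpoint, shorter time a*T < T

assert np.linalg.norm(x1 - x2) < 1e-3 and abs(y1 - y2) < 1e-3
print(y1, y2)
```

The invariance holds because the system (1.1)-(1.2) is driftless: rescaling time and rescaling the control compensate exactly along the same geometric path.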

Maximum principle
Before considering the Hamiltonian approach to the optimal problem, we recall some basic facts about Radon measures and functions of bounded variation. We say that η is a Radon measure on [0, T] if it is a regular Borel measure which is finite on compact sets. For more details on Radon measures, we refer to [12]. The space BV(0, T) of functions of bounded variation is defined through a supremum taken over all finite partitions 0 = t₀ < t₁ < ... < t_n = T. For a complete description and properties of functions of bounded variation, we refer to [3] (see also [12]). We only recall the basic fact that, for every function μ ∈ BV(0, T) and t ∈ (0, T), one can define the left and right limits μ(t−) and μ(t+); the jump value is μ(t+) − μ(t−).

Now, we consider the Hamiltonian approach so as to obtain necessary optimality conditions for the time optimal problem. In order to take the state constraint into consideration, we follow a classical procedure, as described for instance in [15, 6, 10, 8]. The Hamiltonian of the system (2.7)-(2.9) for the minimal time problem is the function H. In order to take into account the state constraint (2.9), we introduce the function g_c ∈ C∞(R^n, R) defined, for every ξ ∈ R^n, by g_c(ξ) = (|ξ|² − c²)/2, (4.2) so that ∇g_c(ξ) = ξ. Let T > 0 be the minimal time corresponding to an optimal control u ∈ L∞(0, T)^n with the associated state trajectory (y, x) = (Y_u, X_u) ∈ W^{1,∞}(0, T)^{n+1}. We define the set E_c = {t ∈ [0, T] : |x(t)| = c}. The Pontryagin maximum principle (see [15, Section 5.2.1, Theorem 1] or [6, Theorem 2.2]) asserts that there exist s₀ ≤ 0, q₀ ∈ R^n, q₁ ∈ R^n, p₀ ∈ R, p ∈ BV(0, T)^n and a non-negative Radon measure η with support in E_c, not all null, such that (2.7)-(2.9) hold together with the co-state equations (4.3)-(4.5) and, in addition, the maximality condition H(x, u, p₀, p, s₀) = max_{|v| ≤ 1} H(x, v, p₀, p, s₀). (4.6) We emphasize that in the presence of state constraints, the function p can be discontinuous, because the co-state equation (4.3) involves an integral with respect to the measure η.
In the next proposition, we give some properties of the optimal control u satisfying (2.7)-(2.9), together with the adjoint variables satisfying (4.3)-(4.6).
Proposition 4.1. Let u be a time optimal control satisfying the control problem (2.7)-(2.9). We also consider s₀ ≤ 0, q₀, q₁, p₀, p and η given by the Pontryagin maximum principle. Then we necessarily have s₀ < 0, the control u is continuous on [0, T] and of class C∞ on [0, T] \ E_c, and (4.7)-(4.8) hold. In addition, we have q₀ ≠ 0_n and q₁ ≠ 0_n.

Proof. To begin with, we give some properties of the set E_c. Since η is a regular measure, we obtain from (4.3) that p ∈ BV(0, T)^n and p is continuous from the left. Notice that since x ∈ C⁰([0, T])^n and x(0) = x(T) = 0_n, the set E_c is a closed subset of (0, T) and [0, T] \ E_c is a set of positive measure. Since the support of η is included in E_c, we obtain from (4.3) that p is almost everywhere differentiable on the open set [0, T] \ E_c, where it satisfies (4.10).

• We start by proving that s₀ < 0. To this end, we assume by contradiction that s₀ = 0. Then q₀, q₁, p₀, p and η are not all zero and, due to (4.6), equation (4.12) holds. This equation yields p ∈ W^{1,∞}(0, T)^n. Differentiating (4.12) and comparing the result with equation (4.10), we obtain (4.13). Now, under the assumption s₀ = 0, we are going to prove that p₀ = 0. Arguing by contradiction, we assume that p₀ ≠ 0. Using (4.12), we obtain that p = 0_n in [0, T], hence q₀ = 0_n. Since T ∉ E_c, we have η({T}) = 0 and then q₁ = 0_n, and for all t ∈ [0, T] we obtain (4.14). If c = +∞, we clearly have η = 0 and hence s₀, p₀, q₀, q₁, p and η are all zero, leading to a contradiction. Thus, in the case where c = +∞, we have p₀ = 0. For the case 0 < c < +∞, we will show that η = 0, and the same contradiction as above will hold. Since ∇g_c(x) = x, equation (4.14) implies that, for every t₀ and t₁ with 0 ≤ t₀ ≤ t₁ ≤ T, the integral of x with respect to η over [t₀, t₁] vanishes. Let t₀ be chosen in the support of η. Then, for every ε > 0, the measure of the interval [t₀ − ε, t₀ + ε] ∩ [0, T] with respect to η is positive. Since the support of η is contained in E_c, we have t₀ ∈ E_c and hence |x(t₀)| = c. Using the fact that x is continuous, we obtain that there exists ε > 0 such that ⟨x(t₀), x(t)⟩ > 0 for every t ∈ [t₀ − ε, t₀ + ε] ∩ [0, T]. On the other hand, due to (4.14), the integral of ⟨x(t₀), x(t)⟩ with respect to η over [t₀ − ε, t₀ + ε] ∩ [0, T] vanishes. We deduce that η([t₀ − ε, t₀ + ε] ∩ [0, T]) = 0, which contradicts the
fact that t 0 is in the support of η.
Consequently, the support of η is empty and thus η = 0. As a result, s₀, p₀, q₀, q₁, p and η are all zero, which leads to a contradiction. So, under the assumption s₀ = 0, we have proved that p₀ = 0. Now, we deduce from (4.13) that u ∈ Ker(M) in the set [0, T] \ E_c of positive measure. This is in contradiction with property (3.2) in Proposition 3.2. Therefore, we have proved that s₀ < 0.
• Let us prove that u is a C∞ function on the open set [0, T] \ E_c. From the co-state equation (4.3), we infer that p has W^{1,∞}-regularity on the set [0, T] \ E_c. Therefore, according to (4.7), we obtain that u also has W^{1,∞}-regularity on [0, T] \ E_c. Differentiating (4.7) and (4.3), we obtain that u satisfies a linear differential equation on [0, T] \ E_c; a bootstrap argument then ensures that u is a C∞ function on [0, T] \ E_c. Using (4.7), we clearly obtain that p also possesses C∞-regularity on [0, T] \ E_c.
• We are now in position to prove (4.8). In fact, we are going to prove that ⟨u(τ), x(τ)⟩ = 0 for every τ ∈ E_c or, equivalently, that (d|x|²/dt)(τ) = 0 for every τ ∈ E_c. Let us choose τ ∈ E_c. Using the Taylor formula, |x(τ + θ)|² = c² + θ (d|x|²/dt)(τ) + o(θ). Then, in order to satisfy the constraint |x(τ + θ)|² ≤ c² for every θ ∈ R small enough, we necessarily have (d|x|²/dt)(τ) = 0.

• The proof of the properties q₀ ≠ 0_n and q₁ ≠ 0_n is a consequence of the continuity of u. Indeed, we have q₀ = p(0) and q₁ = p(T) and, due to (4.7) and the fact that u ∈ C⁰([0, T])^n, we deduce that u(0) = q₀/(−s₀) and u(T) = q₁/(−s₀). Since u is continuous on [0, T] and saturates its constraint (see (3.1)), we necessarily have q₀ ≠ 0_n and q₁ ≠ 0_n.
We conclude this section by giving the explicit form of the optimal controls in regions where the state constraint is not reached.

Lemma 4.3. Let u ∈ L∞(0, T⋆)^n be a time optimal control. Then, for every t₀, t₁ > 0 with t₀ < t₁ such that |X_u| < c in (t₀, t₁), the control u is given on [t₀, t₁] by (4.18), with δ = 2p₀/(−s₀) ≠ 0.

Proof. The Pontryagin maximum principle ensures that there exist s₀ ≤ 0, q₀ ∈ R^n, q₁ ∈ R^n, p₀ ∈ R, p : [0, T] → R^n and a non-negative regular measure η, with support included in E_c, not all zero, such that (2.7)-(2.9) and (4.3)-(4.6) hold. Since the support of η does not intersect the interval [t₀, t₁], we deduce from the co-state equation (4.3) and the C∞-regularity of p in (t₀, t₁) that a linear differential equation holds for p. Hence, differentiating (4.7), we obtain a linear differential equation for u, whose general solution is given by (4.18).

Optimal solution for the 2-dimensional control case
In this section, we study the case of a 2-dimensional control, i.e. the case n = 2. We shall give the explicit expression of a 2-dimensional optimal control together with the associated optimal trajectory for the state variables. In the case n = 2, the non-zero real skew-symmetric matrix M reads as M = λJ, (5.1) where J = ( 0 −1 ; 1 0 ) and λ is a non-zero real number. We also introduce the rotation matrix R(θ) of angle θ. The first result is devoted to the description of the optimal trajectory for x in a region where the state constraint is not reached.

Proposition 5.1. Let T⋆ > 0 and u ∈ L∞(0, T⋆)² be an optimal solution to the control problem (2.7)-(2.9) with n = 2 and with the matrix M given by (5.1). We denote by y = Y_u and x = X_u the corresponding optimal state variables. Then there exists δ ≠ 0 such that, for every t₀, t₁ > 0 with t₀ < t₁ such that |X_u| < c in (t₀, t₁), the expressions (5.2), (5.3) and (5.4) hold for u, x and y respectively, for every t ∈ [t₀, t₁]. In addition, the optimal state trajectory t ∈ [t₀, t₁] → x(t) ∈ R² is a parameterization of an arc of circle with radius 1/|δλ|.

Proof. The expression (5.2) for u is nothing more than the expression (4.18) in Lemma 4.3 in the 2-dimensional control case. Moreover, integrating (5.2) between t₀ and t ∈ (t₀, t₁) yields the expression (5.3) for x. Let us turn to the proof of the expression (5.4) for y. We integrate (1.1) between t₀ and t ∈ (t₀, t₁); this gives (5.5). Using (5.2) and (5.3) in (5.5), the expression (5.4) follows. Now, we prove that the optimal trajectory t ∈ (t₀, t₁) → x(t) describes an arc of circle. To see this, we define c₀ = x(t₀) − (1/(δλ)) J u(t₀) ∈ R² and we check that, for every t ∈ (t₀, t₁), |x(t) − c₀| = 1/|δλ|. This completes the proof of Proposition 5.1.
In the next lemma, we give a property of the optimal state trajectory which asserts that the trajectory does not pass through the origin except at the initial and final times. This result will be useful for the complete description of the optimal trajectory.

Lemma 5.2. Let n = 2 and let M be given by (5.1). We denote by T⋆ the corresponding optimal time and by u ∈ U_{T⋆} a time optimal control. Then {t ∈ [0, T⋆] : x(t) = 0₂} = {0, T⋆}.

Proof. As usual, we denote by (y, x) = (Y_u, X_u) the optimal state trajectory associated with u. It is clear that {0, T⋆} ⊂ {t ∈ [0, T⋆] : x(t) = 0₂}. We assume by contradiction that there exists σ ∈ (0, T⋆) such that x(σ) = 0₂.
Figure 2: Counterexample used to prove that the optimal trajectory for x does not pass through 0₂, except at the initial and final times (trajectories for t ≤ σ and for t ≥ σ).

This is in contradiction with the regularity of the optimal controls proved in Proposition 4.1. Thus, Lemma 5.2 is proved.
We are now in position to give the full description of the optimal trajectory when the state constraint is imposed. The following result is a particular version of Theorem 2.4 when n = 2, i.e. when considering the 2-dimensional control case.

Proposition 5.3. Let M be a 2 × 2 real skew-symmetric matrix given by M = λJ with λ ≠ 0. We define the two real positive values d⋆ = √(2|ȳ|/(|λ|π)) and τ = cπ/2. Then the optimal time T⋆ for the problem (2.7)-(2.9) with the 2 × 2 matrix M is given by:

T⋆ = πd⋆ if c ≥ d⋆,  T⋆ = |ȳ|/(|λ|c) + cπ/2 if c < d⋆. (5.7)

Moreover, the problem (2.7)-(2.9) admits a time optimal control u ∈ C⁰([0, T⋆])² defined as follows:

• if c ≥ d⋆, the control is given by (5.8). In this case, the optimal trajectory t → X_u(t) for t ∈ [0, T⋆] describes a circle with diameter d⋆ ≤ c.
• if c < d⋆, the control is given by (5.9). In this case, the optimal trajectory t → X_u(t) for t ∈ [0, T⋆] is composed of three arcs of circle. The first arc of circle corresponds to the trajectory starting from x(0) = 0₂ until the state constraint is reached tangentially at time τ = cπ/2 with |x(τ)| = c. Then, the trajectory stays on the constraint, that is |x(t)| = c for times t ∈ [τ, T⋆ − τ]. Finally, at time t = T⋆ − τ, the trajectory leaves the state constraint tangentially and describes an arc of circle in order to reach the final state x(T⋆) = 0₂.
Proof. The proof is organized as follows. First, we prove the result in the case c ≥ d⋆; afterwards, the result is shown for c < d⋆.
Let T ⋆ = T ⋆ (M ) > 0 be the minimal time for the problem (2.7)-(2.9)and let u ∈ U T ⋆ (M ) be a time optimal control.We denote the corresponding state variables y = Y u and x = X u .
• In order to prove the result in the case c ≥ d⋆, we first pay attention to the case c = +∞. In this case, the state constraint (2.9) is never reached and, according to Proposition 5.1, an optimal control has the form (5.2) on the whole interval [0, T⋆], so that the state trajectory (y, x) has the representation (5.3)-(5.4) for every t ∈ [0, T⋆]. In order to reach the final state (2.7), the trajectory must close up on x and attain ȳ in y. Hence, there exists k ∈ N⋆ such that T⋆ satisfies the expression (5.10). Moreover, Proposition 5.1 implies the estimate (5.11) on sup_t |x(t)|. Since we deal with the case c = +∞, the minimal time T⋆ is necessarily obtained with k = 1 and we obtain T⋆ = πd⋆, with the corresponding optimal control u given by (5.8).
One can easily check that the optimal control u previously obtained in the case c = +∞ remains an optimal control in the constrained case d⋆ ≤ c < +∞, with the same minimal time T⋆.
From Proposition 5.1, we deduce that the associated trajectory t ∈ [0, T⋆] → x(t) describes a circle of diameter d⋆ ≤ c, so that the state constraint (2.9) is satisfied.

• Now we investigate the case where c < d⋆. We will first show that the state constraint for x is necessarily reached at some time τ ∈ (0, T⋆). Then, we will compute an optimal control.
1. Let us assume by contradiction that the state constraint (2.9) is never reached. In that case, the optimal time and the time optimal control have the same expressions as the ones already obtained in the case c = +∞. In particular, the expression (5.10) for the minimal time T⋆ and the estimate (5.11) for x hold true for some k ≥ 1. Since c < d⋆, estimate (5.11) implies that k ≥ 2. Hence, at time σ = T⋆/k < T⋆, we have x(σ) = 0₂. This is in contradiction with Lemma 5.2. Consequently, the state constraint (2.9) is necessarily reached and, according to Proposition 4.1 (see also Remark 4.2), the constraint is reached tangentially.
2. We now turn to the computation of the optimal control.
• Firstly, we prove the following characterization of the optimal solution: (a) the trajectory starts with an arc of circle reaching the constraint tangentially, (b) it then stays on the constraint, and (c) it leaves the constraint with a final arc of circle. We recall that, due to Proposition 5.1, the trajectory for x is composed of arcs of circle inside the region where the state constraint is not reached. Obviously, when the trajectory lies on the constraint, x also describes an arc of circle. In addition, from (5.2)-(5.3), one can see that the trajectory can switch from one arc of circle to another only when reaching or leaving the state constraint.
We start by proving the description of the trajectory announced in (a); this yields the value δ = 2ε₀/(λc) with ε₀ = ±1. (5.12) We now prove that the last part of the trajectory, described in (c), is also an arc of circle. We first prove that, if the trajectory leaves the constraint at a time σ, then we necessarily have σ = T⋆ − τ and the state constraint is never reached again. Using (5.2)-(5.3), with δ given by (5.12), we obtain that the trajectory after time σ returns to the origin at time σ + τ. Consequently, due to Lemma 5.2, we obtain that σ + τ = T⋆.
Then we have proved that the optimal trajectory t → x(t) for t ∈ [0, T ], has the characterization given by (a), (b) and (c).
Using (5.14) and (5.15), we obtain ⟨λJx(t), u(t)⟩ = λcε₁ for every t ∈ (τ, T⋆ − τ); hence (5.16) holds. Then, using (5.13) and (5.16), we obtain (5.17). Since the trajectory of x is composed of arcs of circle and since the state constraint is reached and left tangentially, the length of the trajectory t → x(t) for t ∈ [0, T⋆] is necessarily larger than the perimeter of a circle of radius c/2. Since |ẋ| = |u| = 1 on [0, T⋆], we deduce that T⋆ > cπ. Since we assumed that c < d⋆ = √(2|ȳ|/(|λ|π)), we obtain from (5.17) that ε₁ = sign(λȳ) and then, using (5.17), the minimum for T⋆ is obtained for ε₀ = ε₁. Therefore, the minimal time T⋆ is given by T⋆ = |ȳ|/(|λ|c) + cπ/2. A straightforward calculation shows that the control given by (5.9) is optimal.
The proof of Proposition 5.3 is completed.
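The two regimes of the minimal time fit together consistently. The sketch below encodes our reading of the closed forms in (5.7), d⋆ = √(2|ȳ|/(λπ)) and T⋆ = |ȳ|/(λc) + cπ/2 for c < d⋆, and checks continuity across c = d⋆, the length bound T⋆ > cπ from the proof, and the monotonicity of Remark 2.6:

```python
import numpy as np

# Closed forms reconstructed from the two regimes of the proof:
#   T* = pi*d_*                 when c >= d_*   (unconstrained circle),
#   T* = |ybar|/(lam*c) + c*pi/2 when c < d_*   (two semicircular arcs of
#        radius c/2 plus a sliding phase on the constraint |x| = c).
def d_star(lam, ybar):
    return np.sqrt(2 * abs(ybar) / (lam * np.pi))

def T_star(lam, ybar, c):
    ds = d_star(lam, ybar)
    return np.pi * ds if c >= ds else abs(ybar) / (lam * c) + c * np.pi / 2

lam, ybar = 2.0, 1.0
ds = d_star(lam, ybar)

# continuity of c -> T* across the critical bound c = d_*
assert abs(T_star(lam, ybar, ds * (1 - 1e-9)) - np.pi * ds) < 1e-6
# the constrained time exceeds the length bound c*pi used in the proof
c = 0.5 * ds
assert T_star(lam, ybar, c) > c * np.pi
# T* decreases when the largest modulus lam increases (Remark 2.6)
assert T_star(3.0, ybar, c) < T_star(2.0, ybar, c)
print(ds, T_star(lam, ybar, c))
```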
We conclude this section by giving a numerical example of a fluid-structure interaction control problem. We consider a swimming control problem for a deformable body immersed in a Stokes fluid. The body is able to self-propel in the viscous fluid by changing its shape. As already mentioned in the introductory section, the state variables x correspond to the magnitudes of the deformations, and a state constraint is imposed in order to ensure in particular that the deformation map is one-to-one. We refer to [16] for more details. In this context, the minimal time control problem involves the Shapere-Wilczek matrix given by M = M_SW := (3/35) ( 0 2 ; 3 0 ) (see [18, 16]). The Shapere-Wilczek matrix M_SW is not skew-symmetric but, as already mentioned in the introduction of the paper, an optimal control is easy to obtain in the case where M is a non-symmetric matrix (M ≠ M⊤) by considering the skew-symmetric part of M in place of M in all the formulae. For the Shapere-Wilczek matrix M_SW, an optimal trajectory t → x(t) with the state constraint |x| ≤ 1 is depicted in Figure 3. The state variable y stands for the vertical position of the mass center of the body and we choose the target ȳ = 1. In Figure 4, we plot the optimal trajectory t → y(t) obtained with the full Shapere-Wilczek matrix M_SW. We also display in Figure 4 the optimal trajectory obtained by considering only the skew-symmetric part (1/2)(M_SW − M⊤_SW) instead of the full matrix M_SW. For these two matrices, we emphasize that the dynamics of the system are different although the minimal times and the optimal controls are the same. This fact was discussed in the introduction of this paper.

Proof of the main result for the n-dimensional control case

In this section, we consider the general n-dimensional control case with n ≥ 2.
We will construct an n-dimensional optimal control from the 2-dimensional control computed in the previous section. To begin with, we make a reduction on the n × n non-zero skew-symmetric matrix M of the control problem (1.1)-(1.6). Due to property (3.2) in Proposition 3.2, the null space of M does not play any role in the control problem (2.7)-(2.9). Consequently, we can assume that M is an invertible matrix. This means in particular that the dimension n of the matrix M is even. The eigenvalues of the invertible skew-symmetric matrix M are purely imaginary numbers iλ₁, −iλ₁, iλ₂, −iλ₂, ..., iλ_l, −iλ_l, with 2l = n and with λ₁ ≥ ... ≥ λ_l > 0. In addition, the skew-symmetric invertible matrix M can be diagonalised in an orthogonal basis into a block diagonal real matrix. More precisely, there exists a real orthogonal matrix P such that P⊤MP = J(Λ) (see [14, Corollary 2.5.14]), where J = ( 0 −1 ; 1 0 ), Λ = (λ₁, ..., λ_l) and J(Λ) denotes the block diagonal matrix with diagonal blocks λ₁J, ..., λ_lJ. Moreover, the columns of the orthogonal matrix P are composed of real vectors built from the eigenvectors of M. One can easily check that, for every T > 0, (6.3) holds. Consequently, the minimal time for the matrix M is equal to the minimal time for the matrix J(Λ), i.e. T⋆(M) = T⋆(J(Λ)).
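The spectral part of this reduction can be illustrated numerically. The sketch below (a random skew-symmetric matrix, purely illustrative) checks that the spectrum is purely imaginary and that the block diagonal normal form J(Λ) has the same spectrum:

```python
import numpy as np

# An invertible real skew-symmetric matrix has purely imaginary spectrum
# {±i lam_k} and is orthogonally similar to J(Lambda) = diag(lam_1 J, ...);
# here we only check the spectral part of the claim.
rng = np.random.default_rng(1)
n = 6                                   # n must be even for M to be invertible
A = rng.standard_normal((n, n))
M = A - A.T                             # a generic skew-symmetric matrix

eig = np.linalg.eigvals(M)
assert np.allclose(eig.real, 0.0, atol=1e-10)    # purely imaginary spectrum
lams = np.sort(np.abs(eig.imag))[::2][::-1]      # lam_1 >= ... >= lam_l > 0
assert lams[-1] > 0                              # invertibility (generic A)

# rebuild the block diagonal normal form and compare the spectra
Jblk = np.zeros((n, n))
for k, lam in enumerate(lams):
    Jblk[2*k:2*k+2, 2*k:2*k+2] = lam * np.array([[0., -1.], [1., 0.]])
assert np.allclose(np.sort(np.linalg.eigvals(Jblk).imag),
                   np.sort(eig.imag), atol=1e-8)
print(lams)
```

In the notation of this section, lams[0] plays the role of λ⋆ = max Λ, the modulus governing the minimal time.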
For the rest of this section, we will denote the minimal time by T⋆(Λ) instead of T⋆(M), and when no confusion occurs we will simply write T⋆. According to (6.3), it is sufficient to prove Theorem 2.4 for the 2l-dimensional control problem with the block diagonal matrix J(Λ) defined by (6.1) instead of M. Moreover, the vector Λ ∈ R^l is composed of the positive imaginary parts of the eigenvalues of the skew-symmetric matrix M, taken with their multiplicities and arranged in decreasing order. We will obtain a 2l-dimensional optimal control with the 2l × 2l matrix J(Λ), starting from a 2-dimensional optimal control for the 2 × 2 matrix λ⋆J, where λ⋆ = max Λ. This will be made possible thanks to a monotonicity argument and a zero-invariance property of the optimal solution.
Let us introduce some notation that will allow us to link the 2l-dimensional control case with the 2-dimensional one. For every l ∈ N⋆, we define the maps Π^l_k for k = 1, ..., l. The following result is concerned with a monotonicity property of the map Λ → T⋆(Λ).
Using the fact that Π l k q 0 = 0 2 and Π l k x(t) =

Figure 1: Optimal trajectory t → X_u(t) in the case where c < d⋆.
Let τ̃ be the first reaching time of the constraint. According to the first item, we know that τ̃ exists. Since x reaches the constraint in a tangential manner (see Proposition 4.1 and Remark 4.2), t ∈ [0, τ̃] → x(t) is a parameterization of a semicircle of radius c/2. In addition, due to the fact that |u(t)| = 1 for every t ∈ [0, τ̃] and that u is continuous, we deduce that τ̃ = cπ/2 = τ. Using Proposition 5.1, we obtain that the radius of this semicircle is 1/|δλ| = c/2, and then δ = 2ε₀/(λc) with ε₀ = ±1.

Figure 4: Case n = 2. Optimal trajectories t → y(t) with the full Shapere-Wilczek matrix M_SW and with its skew-symmetric part. The state constraint is |x(t)| ≤ 1 and the target is ȳ = 1.
Since the Hamiltonian H and the state constraint function g_c defined by (4.2) do not depend on the time variable t, we deduce that the Hamiltonian is constant along the optimal trajectory (see [15, Section 5.2.2]).