Linear matrix inequality relaxations and their application to data-driven control design for switched affine systems

The problem of data-driven control is addressed here in the context of switched affine systems. This class of nonlinear systems is of particular importance when controlling many types of applications in electronics, biology, medicine, and so forth. Still with a view to practical applications, providing an accurate model for this class of systems can be a hard task, and it might be more relevant to work with data collected from experimental trajectories and to deploy a new branch of tools to stabilize the systems that are compatible with the processed data. Following the recent concept of data-driven control design, this paper first presents a generic equivalence lemma that expresses a matrix constraint in terms of data instead of the system parameters. Then, following the concept of robust hybrid limit cycles for uncertain switched affine systems, robust model-based and then data-driven control laws are designed based on a Lyapunov approach. The proposed results are then illustrated and evaluated on an academic example.


INTRODUCTION
Over the past decades, robust control theory has been developed to solve analysis and control problems for a wide class of dynamical systems. 1-3 The main advantage of this framework lies in the possibility of deriving stability and stabilization tests, of evaluating and measuring the robustness of systems, and of controlling these uncertain systems. These tests are often written in the form of linear matrix inequalities (LMIs) that can be easily and efficiently solved using semi-definite programming. Generally, this direction of research has a common requirement, namely the knowledge of a model, even though some parameters in the model might be subject to uncertainties modeled as norm-bounded or polytopic uncertainties. Recently, due to the emergence of artificial intelligence and the possibility of storing large amounts of data, the area of data-based identification has gained momentum. The main motivation for this new paradigm is to avoid or limit the modeling phase to some extent and to rely instead on a set of experiments providing data in order to compensate for the lack of knowledge of the model. These methods have been developed precisely to remove the modeling requirement described above.
In this direction, reinforcement learning methods have been developed in the literature; see, for instance, References 4 and 5, to cite only a few. They mainly deal with system identification and estimation of the model, but rarely address closed-loop guarantees, and only few papers provide non-conservative constructive stabilization conditions for the closed-loop system using noisy data of finite length, which remains an open problem even if the data are generated by a linear time-invariant (LTI) system. Indeed, the automatic control community has recently shown a growing interest in the problem of data-driven control design for various classical control problems (see, for instance, References 6-8), enhancing the tools arising from robust control theory. In this context, the problem can be summarized as follows: How can a model-based stability or stabilization criterion be translated into a data-driven one, without introducing conservatism?
A catalog of formulas has been provided recently in Reference 9, where several problems related to the design of state/output feedback controllers for linear systems have been considered. It is notably shown therein that, in the case of exact data experiments arising from linear time-invariant systems, that is, without noise or uncertainties, equivalent formulations between model-based and data-based criteria can be obtained. More recently, another elegant solution to the data-driven design problem for linear systems has been provided in References 10 and 11.
In these works, an extension of the S-procedure has been proposed to eliminate from an LMI condition the uncertainties arising, for instance, from the model matrices. The underlying idea is to embed the uncertainties brought by the data experiments into an LMI constraint, which is considered an assumption in the design method. The authors have interestingly demonstrated that this assumption allows eliminating the model matrices at the price of introducing a single additional decision variable into the initial conditions for stability or stabilization. A similar assumption was also made in References 6 and 12. To the best of our knowledge, this trend of research has mainly been considered for linear systems. For the nonlinear case, the reader may look at the case of bilinear input-affine systems, 13 referring to many relevant applications in engineering, medicine, or ecology. 14 The authors demonstrate therein how the formulas provided in Reference 9 for the linear case can be efficiently adapted to this class of nonlinear systems.
In this paper, the objective is to demonstrate that data-driven methods can also be applied to a particular class of nonlinear systems, namely switched affine systems. This class of systems represents a highly relevant theoretical and practical area of research. From a practical point of view, they have been employed to model numerous applications such as embedded systems, electronic power converters, mixing fluids, damping of vibrating structures, and mobile sensor networks, 15 whose behaviors are complex and often non-intuitive. From a theoretical point of view, they represent a particular class of hybrid dynamical systems. 16,17 A particularity of switched affine systems arises from the fact that, in general, it is not possible to stabilize their solution to a single equilibrium point but rather to a (hybrid) limit cycle, as shown in Reference 18 and later refined in Reference 19. In the context of data-driven methods for switched systems (affine or not), the authors of References 20 and 21 provided the first attempts on switched systems (without affine terms). However, there are only a few works considering data-driven control design for switched affine systems, which is the objective of this paper. To the best of our knowledge, the only contribution in this direction has been presented in Reference 22, where the authors consider the stability analysis of this class of systems when the switching signal is assumed to be an arbitrary exogenous input. The authors of Reference 23 use machine learning algorithms, especially regression trees and random forests, to model a switched affine system using only historical data. The model is later used to establish a model predictive control, or more precisely a Data Predictive Control (DPC), to obtain optimal control system trajectories. In this paper, the model under consideration is of the form x + = A σ x + B σ u + f σ , where x is the state (and x + the forward value of x), u the control law, and σ the switching signal.
This system refers to a different class of switched affine systems, since it is composed of a switched linear controlled system (A σ x + B σ u) with affine switching terms (f σ ), where σ is an exogenous signal.
In contrast to these works, here we consider a data-driven control design method for switched affine systems, where the control variable is the switching signal. Following the framework provided in Reference 24, which deals with an uncertain model-based method, this paper presents a robust data-driven control design for this class of systems. To do so, the following objectives will be addressed in the paper:
• Provide a generic technical tool that allows transforming a model-based criterion for discrete-time systems into a data-driven one.
• Solve a problem of model-based design of a stabilizing switching state-feedback control law for switched affine systems subject to an external disturbance.
• Illustrate the potential of the preliminary technical tool through the data-driven design of switching control laws for switched affine systems.
The structure of the paper is the following. Section 2 introduces a novel matrix-constraint relaxation, its relationship with the existing literature, and a short discussion of its potential for the data-driven stabilization of linear systems. Section 3 then deals with the model-based stabilization of switched affine systems subject to a bounded external disturbance. Building on the results of the two previous sections, the main contribution of the paper is presented in Section 4, which addresses the data-driven stabilization of switched affine systems. These results are then illustrated in Section 5, where a numerical application of the theoretical contributions is treated.
Notations: Throughout the article, N denotes the set of natural numbers, R the real numbers, R n the n-dimensional Euclidean space, R n×m the set of all real n × m matrices and S n the set of symmetric matrices in R n×n . For all scalars 0 < a < b, notation [a, b] N stands for [a, b] ⋂ N, which represents the set of integers included in [a, b]. For any n and m in N, matrices I n and 0 n,m (0 n = 0 n,n ) denote the identity matrix of R n×n and the null matrix of R n×m , respectively. For any integer n, 1 n stands for the vector in R n whose entries are all equal to 1. When no confusion is possible, the subscripts of these matrices that specify the dimension will be omitted. For any matrix M of R n×n , the notation M ≻ 0 (M ≺ 0) means that M is symmetric positive (negative) definite, and det(M) represents its determinant. In partitioned symmetric matrices, the symbol ⋆ stands for the blocks deduced by symmetry. ||x|| denotes the Euclidean norm of x. For a matrix M ∈ S n , M ≻ 0, and a vector h ∈ R n , we denote the shifted ellipsoid ℰ(M, h) = {x ∈ R n ∶ (x − h) ⊤ M(x − h) ≤ 1}.
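Assuming the standard definition of the shifted ellipsoid, ℰ(M, h) = {x : (x − h) ⊤ M(x − h) ≤ 1}, membership reduces to evaluating one quadratic form. A minimal sketch (the function name and the test data are illustrative, not from the paper):

```python
import numpy as np

def in_shifted_ellipsoid(x, M, h):
    """Check whether x belongs to E(M, h) = {x : (x - h)^T M (x - h) <= 1},
    with M symmetric positive definite."""
    d = x - h
    return float(d @ M @ d) <= 1.0

# Example: unit ball shifted to h = (1, 0)
M = np.eye(2)
h = np.array([1.0, 0.0])
print(in_shifted_ellipsoid(np.array([1.5, 0.0]), M, h))  # inside: True
print(in_shifted_ellipsoid(np.array([3.0, 0.0]), M, h))  # outside: False
```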

New lemma for data-driven analysis
The following lemma, which is the first contribution of the paper, presents a generic method to transform a matrix inequality depending on parameters that verify a quadratic constraint into a formulation that is independent of these parameters. Several equivalent solutions to this problem, also known as a Quadratic Matrix Inequality problem, are stated below.

Lemma 1. For given positive integers n, m, q, consider matrices
Then, the following statements are equivalent.
(i) Inequality (1) holds, where Σ(Ψ) represents the nonempty set of allowable uncertain matrices characterized by the quadratic constraint defined in (2). (ii) There exist matrices in S n × R q×n and a positive scalar τ > 0 such that the corresponding matrix inequality holds. (iii) There exists a positive scalar τ > 0 such that the corresponding matrix inequality holds. Proof. The proof is divided into three steps.
(i)⇒(ii): The first step of the proof is to find an appropriate expression of the matrices that belong to Σ(Ψ). For any matrix in Σ(Ψ), the quadratic constraint holds for any matrix in S n . In addition, condition Ψ 3 ≺ 0 ensures that there exists a matrix such that the corresponding block is positive definite, which allows applying the Schur complement. Note that this is not the usual way to apply the Schur complement, but this dual way has been considered to keep the (1, 2) block (resp. the (2, 1) block) unchanged (resp. its transpose). Next, pre- and post-multiply the previous inequality by an appropriate matrix in R p×(n+m) and its transpose, respectively. Then, membership in Σ(Ψ) implies the announced bound. The previous calculations ensure that inequality (1) can be rewritten as a positive definiteness condition holding for all uncertain matrices such that (5) holds. Using an S-procedure, this statement is equivalent to the existence of a positive scalar τ > 0 such that the corresponding inequality holds, which, together with the Schur complement, removes the dependence of the condition on the uncertain matrix and concludes the first part of the proof. (iii)⇒(i): Pre- and post-multiplying the condition of item (iii) by the appropriate matrix and its transpose, respectively, yields a positive definite lower bound. This inequality proves that inequality (1) holds for all matrices that belong to Σ(Ψ), since the second term is negative semi-definite. ▪

Comparison with the S-Lemma
Lemma 1 provides an alternative formulation in robust analysis for uncertain matrices subject to quadratic constraints of the form (2) compared to the one presented in References 10 and 11. For the sake of consistency, the S-Lemma provided in Reference 10 is recalled in the following lemma.
Lemma 2 (10, Th.9). Let  s , Ψ s ∈ R (n s +m s )×(n s +m s ) be symmetric matrices, and assume that the regularity condition of (10, Th.9) holds. Then the following statements are equivalent: (i) the uncertain inequality ≻ 0 holds, ∀ s ∈ Σ(Ψ s ) ⊂ R n s ×m s ; (ii) its data-based counterpart holds, where set Σ(Ψ s ) has the same definition as in (2) but with Ψ replaced by Ψ s .
The main similarities and differences with respect to this formulation are described here. Let us first note that both lemmas address the same problem, consisting of the satisfaction of a matrix inequality subject to uncertain matrices characterized by a quadratic constraint. The main interest of both lemmas is to derive equivalent inequalities that are independent of the uncertain matrix. Finally, both lemmas can be seen as applications of the usual manipulations on LMIs, such as the Schur complement, Finsler's lemma, and the S-procedure.
Apart from these similarities, both lemmas have substantial differences. First, Lemma 2 requires that the uncertain matrix has the same size as Ψ s , which characterizes the quadratic constraint. Lemma 1 is more flexible in this sense, as there is no relationship imposed between the matrices defining the inequality and Ψ, which are independent. This flexibility has the benefit of reducing the initial manipulations needed to derive, from usual stability or control problems, the appropriate expressions fitting the framework of Lemma 2. In fact, the relationship between both lemmas can be seen through a particular selection of the matrices. From this selection, it is clear that item (ii) of Lemma 2 is equivalent to item (iii) of Lemma 1, showing that Lemma 1 is a particular case of Lemma 2. Therefore, it is important to understand the advantages of Lemma 1 with respect to Lemma 2, since they can be used for the same purpose. The first one is related to the structure of the matrix inequality in item (i), which appears in many constructive LMI problems for the stabilization of discrete-time systems. As a first illustration, consider the problem of stabilization of a discrete-time linear system x + = Ax + Bu using a state-feedback controller u = Kx, where we adopted the notation x + = x k+1 and x = x k . Using a quadratic Lyapunov function with a positive definite matrix P, the condition for the stabilization of this system writes (A + BK) ⊤ P(A + BK) − P ≺ 0. Then, as usual in this context, the design of the control gain is obtained by introducing W = P −1 and applying the Schur complement to the first term of the previous inequality, leading to the equivalent problem [ W W(A + BK) ⊤ ; (A + BK)W W ] ≻ 0. This usual manipulation leads naturally to the formulation of item (i) in Lemma 1, and it is easy to identify the corresponding matrices for this very simple example.
Then Lemma 1 leads to the following equivalent formulation which can be rewritten as an LMI by introducing a new decision variable Y = KW.
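The derivation above can be checked numerically. The sketch below (system matrices and gain chosen purely for illustration; `scipy` supplies the discrete Lyapunov solver) verifies that the change of variables W = P −1 , Y = KW turns the Schur-complement form into a feasible certificate:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative data (not from the paper): a double integrator with a
# stabilizing gain K, so that A + B K is Schur stable.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-0.5, -1.0]])
Acl = A + B @ K

# Lyapunov certificate: P solves Acl^T P Acl - P = -I, hence
# (A + B K)^T P (A + B K) - P < 0.
P = solve_discrete_lyapunov(Acl.T, np.eye(2))

# Change of variables W = P^{-1}, Y = K W turns the Schur-complement form
# [[W, W Acl^T], [Acl W, W]] > 0 into an LMI in (W, Y).
W = np.linalg.inv(P)
Y = K @ W
M = np.block([[W, W @ Acl.T], [Acl @ W, W]])

assert np.max(np.abs(np.linalg.eigvals(Acl))) < 1        # Schur stability
assert np.min(np.linalg.eigvalsh((M + M.T) / 2)) > 0     # LMI feasibility
assert np.allclose(A @ W + B @ Y, Acl @ W)               # linearization Y = K W
```

In a design (rather than verification) setting, W and Y would be the decision variables of a semi-definite program and K would be recovered as K = YW −1 .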
Keeping the same example and following the development presented in Reference 10 or Reference 6, the LMI stabilization problem cannot be treated directly. First, a manipulation is required, namely considering the dual or transpose problem, that is, replacing A + BK by its transpose. Then, it is possible to apply the equivalence formulation proposed in Lemma 2. This manipulation is the key step in the developments provided in Reference 10. Note that this manipulation is correct when considering this simple linear time-invariant example, since both LMIs are necessary and sufficient conditions for matrix A + BK to be Schur stable, and consequently for the transpose matrix (A + BK) ⊤ to be Schur stable as well. However, this manipulation is no longer permitted, or at least has to be handled carefully, when other classes of systems are considered, such as systems subject to nonlinearities, saturations, etc.
In order to avoid working on the transpose matrix, one may apply the Schur complement and work on the same inequality. However, by doing this, the problem fits exactly the structure of the inequality presented in item (i) of Lemma 1, for which no further manipulations are needed.
Even though this example may seem overly simple, a more complicated stabilization problem will be considered in the next section on switched affine systems. In that situation, one can better appreciate the potential and simplicity of the proposed formulation. This is the main motivation for using Lemma 1.
Another important point is related to item (ii) in Lemma 1, which introduces two slack variables. In light of item (iii), these slack variables are not needed in general. However, there exist at least two situations in which these additional degrees of freedom can be useful. The first refers to the case where the matrices in the inequality of item (i) are subject to uncertainties. A second case of interest appears in the context of an optimization problem, where these slack variables may ease the search for the optimal solution, as we will show later in the example section.

ROBUST HYBRID CYCLES FOR PERTURBED SWITCHED AFFINE SYSTEMS
In this section, the objective is to present constructive stabilization conditions for switched affine systems subject to a bounded external disturbance. After formulating the problem under consideration, several preliminaries on cycles and robust hybrid limit cycles for this class of systems will be documented. Then, the contribution of this paper on the robust model-based stabilization of switched affine systems will be presented.

System data
Consider the discrete-time switched affine system governed by the following dynamics: x + = A σ x + B σ + w, (6)
where x ∈ R n is the state vector. At any time instant k ∈ N, x and x + stand for x(k) and x(k + 1), respectively. In (6), the time argument k ∈ N is omitted for the sake of simplicity. Likewise, σ ∈ K ∶= {1, 2, … , K} characterizes the active mode. The dynamics of the system is affected by an external disturbance vector w ∈ R n that verifies w ⊤ w ≤ δ 2 , (7) for some given positive real number δ. Finally, A j ∈ R n×n and B j ∈ R n×1 are the matrices of mode j ∈ K, which are not necessarily constant or known, but are assumed to belong to a polytopic set of uncertainties. The particularity of this class of systems relies on its control action, which is only performed by selecting the active mode σ and therefore requires particular attention. The objective here is to design a suitable set-valued map u in system (6) that ensures the convergence of the state trajectories to a set to be characterized accurately. Note that the property of u of being a set-valued map comes from the fact that u takes values in the finite set K. A model-driven solution to this problem was provided in Reference 24 and will be summarized hereafter. Interestingly, that paper provides a solution for the robust stabilization of switched affine systems, which consists of proving that the solutions to the closed-loop system converge to a robust limit cycle composed of the union of shifted ellipsoids. This solution will be recalled in the next section.
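The dynamics (6)-(7) can be simulated directly. In the sketch below, the mode matrices, the switching sequence, and the disturbance bound are all illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative modes (not from the paper): contractive A_j, affine terms B_j
A = [np.array([[0.6, 0.2], [0.0, 0.5]]), np.array([[0.5, 0.0], [-0.2, 0.6]])]
B = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
delta = 0.1  # disturbance bound in (7): w^T w <= delta^2

def step(x, sigma, w):
    """One step of (6): x+ = A_sigma x + B_sigma + w."""
    return A[sigma] @ x + B[sigma] + w

x = np.array([5.0, -3.0])
norms = [np.linalg.norm(x)]
for _ in range(200):
    sigma = int(rng.integers(0, 2))        # arbitrary switching signal
    v = rng.standard_normal(2)
    w = delta * v / np.linalg.norm(v)      # ||w|| = delta, so (7) holds
    x = step(x, sigma, w)
    norms.append(np.linalg.norm(x))

# With contractive modes and bounded w, the trajectory stays bounded but does
# not converge to a point: the affine terms B_sigma keep it moving.
assert max(norms) < 10.0
```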
This formulation paves the way for the problem of the data-driven design of stabilizing controllers for switched affine systems. Let us first formulate the problem of data-driven control.

Cycles and robust limit cycles
Following the ideas developed in Reference 24, the notion of limit cycles 25,26 is adapted to the problem under consideration. Before presenting the concept of robust limit cycles, let us first introduce the following definitions.
Definition 1 (Cycle). A cycle, denoted ν, is a periodic function from N to K. More precisely, this means that there exists N in N ⧵ {0} such that ν(k + N) = ν(k) for all k ∈ N. For any cycle ν, notations N ν and D ν stand for the minimum period and the minimum domain of ν, respectively; more formally, N ν is the smallest such period and D ν = [1, N ν ] N . Definition 2 (Set of cycles). The set of cycles from N to K is denoted by 𝒞. To ease readability, we introduce the following modulo notation: ⌊i⌋ = ((i − 1) mod N ν ) + 1, for any i ∈ N ⧵ {0}. In particular, ⌊i⌋ = i for any i = 1, … , N ν and ⌊N ν + 1⌋ = 1. The notion of robust limit cycles, which extends the definition of limit cycles in References 25 and 26, is adapted to discrete-time systems and is defined below.
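The circular indexing ⌊i⌋ = ((i − 1) mod N ν ) + 1 is easy to get wrong by one; a two-line helper makes it explicit (the name `wrap` is ours):

```python
def wrap(i, N):
    """Modulo notation of Definition 2: maps 1..N to itself and N+1 back to 1."""
    return ((i - 1) % N) + 1

# For N = 3: indices 1..7 cycle through 1, 2, 3, 1, 2, 3, 1
assert [wrap(i, 3) for i in range(1, 8)] == [1, 2, 3, 1, 2, 3, 1]
```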
Definition 3 (Robust Limit Cycles). System (6) admits a robust limit cycle associated with a cycle ν ∈ 𝒞 if there exist possibly disjoint subsets  i ⊂ R n , for i ∈ D ν , such that inclusions (8) hold. In inclusion (8), the left-hand side means, with a slight abuse of notation, that, for any i ∈ D ν and for all x ∈  i , vector A ν(i) x + B ν(i) belongs to  ⌊i+1⌋ . Figure 1 illustrates this set of inclusions and their relationship with the cycle. If, in some cases, the subsets composing the robust limit cycle are reduced to singletons, inclusions (8) become the set of equalities x ⌊i+1⌋ = A ν(i) x i + B ν(i) , for all i ∈ D ν , (9) illustrating that inclusions (8) are the natural extension of (9). Note that the idea of studying limit cycles for switched affine systems has been considered in Reference 18 for the case of constant and known matrices A j and B j . In this case, necessary and sufficient constructive conditions for the existence of { i } i∈D ν , for a given cycle ν, have been provided in Reference 24, while it was only an assumption in Reference 18. It is also worth noting that the structure of the control law provided in Reference 18 complicates the study of robust limit cycles, since that control law requires exact knowledge of the system matrices.
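When the matrices of each mode are constant and known, the singleton case (9) can be solved in closed form: composing the affine maps over one period gives x ↦ Mx + v, and the first point of the cycle is the fixed point of that map. A sketch with illustrative matrices (all numerical values are assumptions for the example):

```python
import numpy as np

def singleton_cycle(A, B, cycle):
    """Periodic points x_1, ..., x_N with x_{i+1} = A_{nu(i)} x_i + B_{nu(i)}
    and x_{N+1} = x_1 (assumes I - M is invertible)."""
    n = A[cycle[0]].shape[0]
    M, v = np.eye(n), np.zeros(n)
    for j in cycle:                          # compose affine maps over one period
        M, v = A[j] @ M, A[j] @ v + B[j]
    x = np.linalg.solve(np.eye(n) - M, v)    # fixed point of x -> M x + v
    pts = [x]
    for j in cycle[:-1]:
        pts.append(A[j] @ pts[-1] + B[j])
    return pts

A = {1: 0.5 * np.eye(2), 2: 0.5 * np.eye(2)}
B = {1: np.array([1.0, 0.0]), 2: np.array([0.0, 1.0])}
pts = singleton_cycle(A, B, [1, 2])          # cycle nu = (1, 2), period 2
```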

Robust model-based stabilization of switched affine systems
A solution stabilizing switched affine systems subject to external disturbances toward a robust limit cycle is presented here, following the ideas and concepts borrowed from Reference 24. The main difference with respect to Reference 24 is the addition of the bounded external disturbance in (6). The robust stabilization problem is formalized as follows.
Theorem 1. For a given cycle ν in 𝒞 and for a parameter λ ∈ (0, 1), consider the solution {(W i , ξ i , τ i )} i∈D ν in S n × R n × R to the following problem,
where matrices Φ i are defined for all i in D ν as follows.
Then, the attractor 𝒜 ν is robustly globally exponentially stable for system (6) with the disturbance signal (7) and with the switching control law (13). Proof. The proof of this theorem is largely inspired by Serieye et al. 24 The theorem is demonstrated thanks to the Lyapunov function built with the matrices W i −1 ≻ 0 and the vectors ξ i that are the decision variables of (11). The forward increment of this Lyapunov function is then computed. The last equation holds since the selected index is the value of D ν that minimizes (x − ξ) ⊤ W −1 (x − ξ) according to the control law (13). Furthermore, since the minimum is always lower than or equal to any other term, an upper bound is obtained by considering the particular case i = ⌊ ⋅ + 1⌋. Altogether, an upper bound of the increment of the Lyapunov function is derived. Next, we note that the residual term is not necessarily zero. Let us now introduce the augmented vector gathering W −1 (x − ξ) and 1, so that the increment of the Lyapunov function can be expressed as a quadratic form. In addition to the previous inequality, we need to include the constraint on the external disturbance (7), as well as the fact that we require convergence to the attractor 𝒜 ν , that is, for all x ∈ R n such that V(x) ≤ 1. Using two S-procedures, this means that there exist two positive scalars such that the combined inequality holds. Applying the Schur complement leads to the expression of Φ i (A ν(i) , B ν(i) ). If all matrices Φ i (A ν(i) , B ν(i) ) in (11) are positive definite, then the increment of the Lyapunov function is negative definite outside of 𝒜 ν .
To show that  is also an invariant set of the closed-loop system (6), it suffices to write V(x + ) = V(x) + Δ V(x). Then, enforcing the introduction of the terms employed in the S-procedure during the first step of the proof, we get Following the previous developments, the previous expression writes Since all matrices Φ i (A (i) , B (i) ) in (11) are positive definite, the last term of the previous expression is negative definite (by application of the Schur Complement). This implies that the following inequality holds where the last inequality holds because w ⊤ w ≤ 2 holds by assumption. The proof is concluded by recalling that V(x) ≤ 1 and ∈ (0, 1), which ensure V(x + ) ≤ 1 − + ≤ 1. ▪ Before going further on the optimization procedure or the extension to the data-driven case, several comments should be properly stated to highlight the contributions of Theorem 1 with respect to the existing literature on switched affine systems. First, it should be noted that the LMI conditions presented in Theorem 1 are affine and consequently convex with respect to the system matrices (A j , B j ) j∈K . This structure allows for a direct extension to the case of uncertain mode matrices. The previous theorem ensures that ellipsoids (W −1 i , i ) verify inclusions (8) that characterize robust limit cycles (as shown in Figure 2), whenever matrices (A j , B j ) j∈K are known and constant or not. When comparing the first and recent paper on the convergence of switched affine systems to limit cycles, 18 the control law proposed therein is a time-varying state feedback that requires exact knowledge of the system matrices. Therefore, this method is not applicable to the problem under consideration and, more generally, it is impossible to extend the solution provided in Reference 18 to the case of uncertain systems. 
Furthermore, the added value of the present contribution with respect to Reference 24 refers to the inclusion of an external disturbance w, which is assumed to be bounded but not vanishing as time increases. This external noise affects the size of the ellipsoids composing the attractor in the sense that the larger the amplitude of the noise, the larger the size of the attractor.
In addition, the authors of Reference 18 have also considered the situation of switched affine systems subject to an external disturbance. However, the disturbance is assumed to be an L 2 signal, so that an L 2 performance analysis was performed. This is a stronger assumption than the one made in the present paper. Indeed, having bounded but non-vanishing disturbances prevents stabilization to a limit cycle (i.e., the union of singletons) but still allows convergence to a robust limit cycle (i.e., the union of ellipsoids). The case of L 2 performance analysis can be easily handled using the usual LMI techniques and is therefore not provided in this article.
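The distinction matters because a bounded, non-vanishing disturbance has unbounded energy, so the L 2 framework of Reference 18 does not cover it. A quick numerical illustration (the two signals are arbitrary examples):

```python
import numpy as np

k = np.arange(1, 200001)
vanishing = 1.0 / k                 # an l2 signal: its energy converges
persistent = np.full(k.shape, 0.1)  # bounded, non-vanishing: |w_k| = 0.1

# sum 1/k^2 -> pi^2/6 ~ 1.645: finite energy, l2 analysis applies
assert np.sum(vanishing**2) < 2.0
# sum 0.1^2 = 0.01 k grows without bound: not l2, only the bound (7) holds
assert np.sum(persistent**2) > 1000.0
```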
It should be noted that, if the matrices of all modes are constant and known, the decision variables ξ i can be replaced by the solutions to (9). Then, the off-diagonal blocks (2, 4) and (4, 2) of Φ i (A ν(i) , B ν(i) ) equal zero by definition, and several simplifications of the conditions can be performed. Indeed, by applying the Schur complement twice, condition Φ i (A ν(i) , B ν(i) ) ≻ 0 is equivalent to a reduced inequality, in which a new decision variable stands for (1 − λ) −1 times the original one, and where we recall that λ is a tuning parameter that must be fixed a priori.
Finally, a deeper discussion of the conditions for stabilization to robust limit cycles has been provided in Reference 24, going beyond the results provided in Reference 18, including, for instance, necessary and sufficient conditions for the existence of a limit cycle and a comparison between the time-varying state-feedback controller and the pure state-feedback law (13). In addition, as in Reference 18, a general optimization problem has been presented thanks to the definition of a generic cost function, which aims to characterize the distance of the attractors to a desired reference, the amplitude of the limit cycles, and so forth. This problem allows selecting, among a set of possible cycles, the optimal cycle that minimizes this cost function. This discussion is not repeated in this paper to avoid overlap with Reference 24.
Remark 1. The complexity of the LMI condition of Theorem 1 relies on the number of decision variables ((n(n + 1)∕2 + n + 1)N ν ) and on the dimension of the condition ((3n + 1)N ν ). Noting that the length of the limit cycle under consideration, N ν , is limited to 10 in practice, the complexity of the LMI of Theorem 1 is reasonable.
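The counts in Remark 1 are straightforward to evaluate; for instance, with n = 2 states and a cycle of length N ν = 4 (the function name is ours):

```python
def theorem1_size(n, N):
    """Decision-variable count and LMI dimension from Remark 1:
    (n(n+1)/2 + n + 1) N_nu variables, condition of dimension (3n + 1) N_nu."""
    n_vars = (n * (n + 1) // 2 + n + 1) * N
    lmi_dim = (3 * n + 1) * N
    return n_vars, lmi_dim

assert theorem1_size(2, 4) == (24, 28)
assert theorem1_size(3, 10) == (100, 100)
```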

Measurement noise and modelling of uncertainties
Unlike in Reference 24, the matrices that define the modes of (6) are not assumed to be known here. Only some experimental data are available and will be used to design the control law. Following the method presented in References 6 and 11, we define the following matrices. Matrices X j + ∈ R n×p j and X j ∈ R n×p j collect all the data from the experiments obtained for several initial conditions x j,ℓ . Subscripts "j" and "ℓ" in x j,ℓ refer to the mode under consideration and the index of the experiment, respectively. Notation ℓ does not necessarily refer to time here. In fact, experiments can be built using arbitrary vectors x j,ℓ , or selecting them such that x j,ℓ + = x j,ℓ+1 for all ℓ = 1, … , p j − 1. An experiment (x j,ℓ + , x j,ℓ ) verifies the dynamics of mode j. Differently from the usual linear case, 6,10-12 one has to differentiate the experiments performed for each mode j in K. In the previous equation, the experiments have been corrupted by the measurement noise represented by the additional vector w j,ℓ , for all (j, ℓ) in K × [1, p j ] N , which is also gathered in a noise matrix defined below. Summing up all these ingredients, the experimental data verify Equation (20).
Again, several definitions inspired by Waarde et al. 11 need to be stated to properly formulate the control problem.
Definition 4 (11). Consider the data matrix Y j associated with each mode, defined in (21). Matrices Y j are called data matrices and are of fundamental interest throughout the paper. Their relevance is demonstrated by relation (22), which reflects the relationship between the uncertain matrices of the system, that is, A j , B j , and the measurement noise associated with these experiments. Equation (22) also provides some information on the relevance of the data. Essentially, matrices Y j need to include a sufficiently large number of experiments in order to be used in the control design phase. This intuition is formalized in the next definition, taken from References 10 and 11, on the informative nature of matrices Y j . 6,7 Definition 5. A matrix Y j given in (21) is said to be informative if Y j Y j ⊤ is nonsingular (or, equivalently, positive definite). In the sequel, we follow the same lines as in References 10 and 11, consisting of imposing the following assumption on the data noise.
for the data noise of each mode j satisfying (20).
Following the setup presented in Section 2, inequality (23) in Assumption 2 is rewritten in the form given in (2), that is, as an inequality expressed in terms of matrices A j , B j instead of the noise. Therefore, we now define the set Σ(Ψ j , Y j ) accordingly. As in Section 2, this set represents all the possible pairs of matrices A j and B j in R n×n and R n×1 that verify (20) and whose associated data noises verify Assumption 2.
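To make these ingredients concrete, the sketch below generates noiseless data for one illustrative mode, builds Y j under the assumption (suggested by the affine structure of (22), but not confirmed by the extracted text) that Y j stacks the samples X j over a row of ones so that [A j B j ]Y j = A j X j + B j 1 ⊤ , checks informativity in the sense of Definition 5, and recovers the mode matrices by least squares. With noise, only the uncertainty set Σ(Ψ j , Y j ) can be characterized:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth mode (illustrative), hidden from the design phase
A_j = np.array([[0.7, 0.1], [0.0, 0.8]])
b_j = np.array([0.5, -0.2])

p = 20
X = rng.standard_normal((2, p))        # arbitrary sample states x_{j,l}
Xp = A_j @ X + b_j[:, None]            # noiseless forward samples x+_{j,l}
Y = np.vstack([X, np.ones((1, p))])    # assumed structure of the data matrix Y_j

# Definition 5: Y_j is informative iff Y_j Y_j^T is nonsingular
assert np.linalg.matrix_rank(Y @ Y.T) == 3

# Noiseless case: [A_j b_j] is recovered exactly by least squares
AB = Xp @ Y.T @ np.linalg.inv(Y @ Y.T)
assert np.allclose(AB[:, :2], A_j) and np.allclose(AB[:, 2], b_j)
```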
Summing up the previous ingredients, the problem can be formulated as follows.
This problem fits exactly into the framework presented in Section 2, and a solution is provided in the theorem below. More precisely, if such a problem admits a solution, then it is possible to build a stabilizing control law such that the closed-loop trajectories of the switched affine system (6) globally and asymptotically converge to an outer estimate of the robust limit cycle, composed of the union of shifted ellipsoids characterized by {(W i , ξ i )} i∈D ν , solution to the LMI problem (25).
It is also worth noting that the previous problem refers to a data-driven control design, since matrices A j , B j are not involved in the previous problem, which only requires data experiments.

Main result
This section provides a new contribution to the data-based design of the stabilizing switching control law for switched affine systems. In this situation, the lack of knowledge of the system matrices A j , B j prevents the use of the method provided in Reference 18 since, again, the control law developed therein depends explicitly on A j , B j . Furthermore, since the data are corrupted by the external disturbance (if δ ≠ 0), it is not possible to find an exact expression of A j , B j based on the data, as the data only provide a set of allowable uncertain matrices A j , B j . Therefore, the solution provided in Reference 18 is not applicable to this problem. However, we demonstrate below how Theorem 1 can be easily adapted to the data-driven design thanks to Lemma 1.

Theorem 2.
For a selected cycle, for a given parameter in (0, 1), and for given matrices, under Assumptions 1 and 2, assume that the following problem admits a solution,
which depends only on the decision variables and on the data collected in the augmented data matrices Y j . Then the attractor is robustly globally exponentially stable for system (6) with the disturbance signal (7) and under the associated switching control law.

Proof. The idea of the proof is to show that conditions (27) and (25) are equivalent, thanks to the use of Lemma 1 in this specific context. To do so, we need to understand how inequality (25) verifies the requirements and assumptions of Lemma 1. In particular, the proof is divided into the three following steps: (i) rewrite inequality Φ i (A (i) , B (i) ) ≻ 0 as in (1); (ii) verify that the assumptions of Lemma 1 hold in this context; (iii) apply Lemma 1 to conclude the equivalence.
Proof of (i): To do so, let us note that Φ i (A (i) , B (i) ) from Theorem 1 can be rewritten as follows, from which the terms appearing in (1) can be identified.
Lemma 1 ensures that having Φ i (A (i) , B (i) ) ≻ 0 is equivalent to Φ i (Y (i) ) ≻ 0, for all i ∈ D , which concludes this part of the proof.
Proof of (ii): Compare the definition of Σ(Ψ) in (2) with that of the data-based set, for any j in K. To apply Lemma 1, we need to verify that Ψ j3 ≺ 0, for all j ∈ K. To do so, let us recall that, in Assumption 2, conditions Ψ j3 ⪯ 0 ensure that there exist, for any j in K, a symmetric positive definite matrix R j ∈ S n and a scalar such that the corresponding inequality on Ψ j holds. Pre- and post-multiplying this inequality by Y j and its transpose, respectively, and using the informativity of the data matrices Y j , that is, the non-singularity of Y j Y ⊤ j , the desired strict inequality is finally obtained, which was to be proven. ▪

The previous theorem allows designing a control law that stabilizes system (6) to the attractor defined by the set given in (28), which is a union of shifted ellipsoids characterized by {(W i , i )} i∈D .

Remark 2. The complexity of the LMI condition of Theorem 2 relies on the number of decision variables, (n(n + 1)∕2 + n + 2)N, and on the dimension of the condition, (4n + 2)N. Both numbers are slightly higher than those of Theorem 1. Since the length N of the limit cycle under consideration is limited to 10 in practice, the complexity of the LMI of Theorem 2 remains reasonable.
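The informativity requirement invoked in the proof, namely the non-singularity of Y j Y ⊤ j for the augmented data matrix, can be checked numerically from raw data. The sketch below assumes the augmented matrix stacks the state samples over a row of ones; with state dimension n, at least n + 1 samples are needed.

```python
import numpy as np

def is_informative(X, tol=1e-9):
    """Informativity check used in the proof: the augmented data matrix
    Y = [X; 1^T] must be such that Y Y^T is non-singular (full row rank).
    With state dimension n, this requires at least n + 1 samples."""
    Y = np.vstack([X, np.ones((1, X.shape[1]))])
    G = Y @ Y.T
    return bool(np.linalg.matrix_rank(G, tol) == G.shape[0])

rng = np.random.default_rng(1)
print(is_informative(rng.standard_normal((3, 10))))  # True: 10 >= n + 1 = 4
print(is_informative(rng.standard_normal((3, 2))))   # False: too few samples
```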

OPTIMIZATION
Theorems 1 and 2 present a model-based and a data-based condition for the stabilization and the existence of an attractor for the closed-loop system. Both conditions are expressed in terms of LMIs, whose solutions impact both the control laws and the size of the attractors notably through the matrices W i , i ∈ D . It is then natural to ask for the optimization of these solutions in order to minimize the size of the attractors, which can be achieved by "minimizing" matrices W i .

Optimization of the attractor
There are many ways to perform such a minimization. One of them could be to introduce a cost function consisting of the sum of the traces of these matrices. Here we rather consider the additional constraint that each W i be upper-bounded, in the matrix sense, by a positive scalar multiple of the identity I n . This additional inequality can be interpreted as constraining the ellipsoids (W i , i ) to be all included in the ball whose radius is the square root of this scalar, centered at the same location (see Figure 3). In other words, we impose inclusion (30). An additional feature can then be added to reduce the "size" of this attractor by introducing an optimization problem, as described below.
Optimization Problem 1. For a given value of the parameter fixed in Theorem 1, the optimal solution verifying inclusion (30) with the smallest bound is obtained by minimizing this bound subject to the conditions of Theorem 1, where Φ i (A (i) , B (i) ) is given in (11).
Similarly, an optimization procedure can be added to the conditions of Theorem 2 as provided below.
Optimization Problem 2. For a given value of the parameter fixed in Theorem 2, the optimal solution verifying inclusion (30) with the smallest bound is obtained by minimizing this bound subject to the conditions of Theorem 2, where Φ i (Y (i) ) is given in (27).
Introducing the upper bounds on the matrices W i and minimizing the corresponding scalar is a usual way to reduce the size of the components of the attractors, as for Theorem 1.
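The geometric reading of this constraint can be verified numerically. The sketch below assumes the ellipsoid takes the form E(W, c) = {x : (x − c)ᵀ W⁻¹ (x − c) ≤ 1} (a common convention, assumed here rather than taken from the paper); under that convention the farthest point from the center lies at distance equal to the square root of the largest eigenvalue of W, so bounding W by a scalar multiple of the identity is exactly inclusion in the corresponding ball.

```python
import numpy as np

# Assumed ellipsoid convention: E(W, c) = {x : (x-c)^T W^{-1} (x-c) <= 1},
# whose semi-axis lengths are the square roots of the eigenvalues of W.
# Then W < bound * I holds iff E(W, c) fits in the ball of radius
# sqrt(bound) centered at c.
def fits_in_ball(W, bound):
    return bool(np.max(np.linalg.eigvalsh(W)) < bound)

W = np.diag([0.5, 2.0])
print(fits_in_ball(W, 2.5))  # True: largest semi-axis sqrt(2) < sqrt(2.5)
print(fits_in_ball(W, 1.0))  # False: semi-axis sqrt(2) exceeds radius 1
```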
Apart from the usual causes of conservatism associated with Lyapunov and LMI techniques, two reasons may explain the conservatism in the numerical results. The first is related to the fact that the optimization problem presented in Theorem 1 requires the use of two successive S-procedures; it is well known that this introduces conservatism compared with the situation where only one is performed. Second, the optimal value strongly depends on the parameter that has been fixed a priori. It would be possible to refine the values obtained by tuning this parameter, using a "line-search" algorithm on it.
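Such a line search can be sketched as a simple grid sweep over the fixed scalar parameter, re-solving the optimization problem for each candidate value. The `solve` callable below is a placeholder for the actual SDP solver call (not reproduced here); a stand-in convex cost illustrates the mechanics.

```python
import numpy as np

def line_search(solve, grid):
    """One-dimensional line search over the scalar parameter fixed a priori:
    solve the LMI problem for each candidate value and keep the one yielding
    the smallest optimal value. `solve` is a placeholder for the actual SDP
    solver call, which is not reproduced here."""
    return min(((solve(v), v) for v in grid), key=lambda t: t[0])

# Stand-in cost, convex in the parameter, to illustrate the procedure.
cost = lambda v: (v - 0.3) ** 2 + 1.0
best_value, best_param = line_search(cost, np.linspace(0.05, 0.95, 19))
print(round(best_param, 2))  # 0.3
```

In practice, each call to `solve` would be one full LMI problem, so a coarse grid followed by local refinement keeps the computational cost reasonable.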

Optimal selection of the cycle
Following the idea first presented in Reference 18, it is possible to introduce a cost function that aims at evaluating the performance of a cycle. For instance, this cost function should reflect, for each cycle, the chattering effects, that is, the amplitude of the trajectories within the (robust) limit cycle, or the distance of the (robust) limit cycle to a desired operating point x d in R n selected by the designer. However, the solution proposed in Reference 18 relies on the exact knowledge of the limit cycle, that is, the points { i } i∈D generated by a given cycle in the situation of system (6) without disturbances. When considering robust limit cycles and their outer estimation using the attractor, such a procedure cannot be employed directly, since the estimation of the robust limit cycle is provided by the solution of the LMI optimization problems (31) and (32). Instead, the solution provided in Reference 24 allows formulating such a cost function based on the decision variables of the LMI optimization problem. For the sake of readability and to avoid an overlap with that contribution, such an extension is not presented here; the reader can refer to Reference 24 (Section 6).

NUMERICAL APPLICATIONS

System data
To illustrate the two results presented in this paper, here we will consider an example of switched affine systems (6) borrowed from Reference 27, where the matrices A i and B i are defined as follows.
where T = 1 is the sampling period of the associated continuous-time system ẋ = F j x + g j defined with matrices F j and g j , with j ∈ K = {1, 2} given by
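A standard way to obtain the discrete-time pairs from the continuous-time ones is the exact sampling identity based on an augmented matrix exponential. The sketch below (with a lightweight Taylor-series exponential to stay dependency-free, and a scalar sanity check as illustration) is a generic recipe, not the paper's specific matrices.

```python
import numpy as np

def expm_taylor(M, terms=40):
    """Matrix exponential via a truncated Taylor series (sufficient for the
    small, well-scaled matrices used here; avoids a SciPy dependency)."""
    E = np.eye(M.shape[0])
    P = np.eye(M.shape[0])
    for k in range(1, terms + 1):
        P = P @ M / k
        E = E + P
    return E

def discretize(F, g, T):
    """Exact sampling of dx/dt = F x + g with period T, using the standard
    augmented-matrix identity exp([[F, g], [0, 0]] T) = [[A, b], [0, 1]],
    so that A = e^{F T} and b = int_0^T e^{F s} g ds."""
    n = F.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = F
    M[:n, n:] = g
    E = expm_taylor(M * T)
    return E[:n, :n], E[:n, n:]

# Scalar sanity check: dx/dt = -x + 1 sampled at T = 1 gives
# A = e^{-1} and b = 1 - e^{-1}.
A, b = discretize(np.array([[-1.0]]), np.array([[1.0]]), 1.0)
print(np.allclose(A, np.exp(-1)), np.allclose(b, 1 - np.exp(-1)))
```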

Model-based design: solution and simulations
First, let us consider a model-based solution. To do so, we consider three cycles, {1, 2}, {1, 2, 2, 2}, and {1, 1, 1, 1, 2, 2}, to illustrate the potential of the result. Solving the conditions of Theorem 1 for these three cycles and for three different values of the disturbance amplitude, the simulations of the closed-loop systems are depicted in Figure 4. First, the figure shows that the attractor is indeed composed of a union of ellipsoids, whose number equals the length of the cycle. The state trajectories converge, in all cases, to the attractor. In addition, it can also be seen that the control input u converges to a periodic trajectory, which is a shifted version of the desired cycle. This characteristic has been demonstrated in Reference 24 and is not proven again here, as the proof is very similar. It is worth mentioning that this periodic behavior of the control input is only guaranteed when the intersection of any pair of ellipsoids composing the attractor is empty. The figure also illustrates that the size of the attractor increases with the amplitude of the disturbance, whose effect on the state trajectory is clearly visible.
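The closed-loop behavior just described can be reproduced with a generic simulation loop. In the sketch below, the switching law `sigma` is a naive placeholder (the actual law of Theorem 1 is built from the Lyapunov matrices solving the LMIs), chosen only so that a bounded oscillation appears; the modes and dimensions are illustrative.

```python
import numpy as np

def simulate(modes, sigma, x0, steps):
    """Closed-loop simulation of x+ = A_j x + b_j under a state-feedback
    switching law sigma(x). The law of Theorems 1 and 2 (built from the
    LMI solution) is not reproduced here; sigma is a placeholder."""
    x = np.array(x0, dtype=float)
    traj = []
    for _ in range(steps):
        A, b = modes[sigma(x)]
        x = A @ x + b
        traj.append(x.copy())
    return np.array(traj)

# Two scalar affine modes with equilibria at +2 and -2; a naive law that
# always steers toward the farther equilibrium yields a sustained bounded
# oscillation, reminiscent of convergence to a hybrid limit cycle.
modes = [(np.array([[0.5]]), np.array([1.0])),    # equilibrium at +2
         (np.array([[0.5]]), np.array([-1.0]))]   # equilibrium at -2
sigma = lambda x: 1 if abs(x[0] - 2.0) < abs(x[0] + 2.0) else 0
traj = simulate(modes, sigma, [0.0], 200)
print(bool(np.all(np.abs(traj) <= 2.0)))  # True: the trajectory stays bounded
```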
For the first and second cycles, {1, 2} and {1, 2, 2, 2}, the optimal values obtained by solving the conditions of Theorem 1 are given in Table 1. These solutions show that increasing the disturbance amplitude naturally leads to an increase in the optimal value.

Data-based design: Generation of the data
In order to generate the data, the following procedure was employed. The data have been obtained as the solution to the system in mode j, starting from an arbitrary initial condition x j,0 with magnitude less than 1, with the corresponding matrices X j and X + j given in (19). The maximum number of data considered in this example is 30. Three cases will be used, corresponding to p j = 10, 20, and 30 for all j in K. Note that these three cases use nested sets of data: the first 10 samples of case p j = 20 are the data used when p j = 10, and the first 20 samples of case p j = 30 are the same data as for case p j = 20.
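This data-generation procedure can be sketched as follows; the dynamics, dimensions, and noise model are illustrative assumptions, and the noiseless case (zero noise amplitude) is used as a sanity check.

```python
import numpy as np

def collect_data(A, b, x0, p, eps=0.0, rng=None):
    """Run one mode from an arbitrary initial condition for p steps and
    stack the data matrices X (states) and X+ (measured successors). The
    successor measurements are corrupted by noise of norm at most eps,
    mimicking the measurement-noise setup described in the text."""
    rng = rng if rng is not None else np.random.default_rng()
    x = np.array(x0, dtype=float)
    cols, cols_plus = [], []
    for _ in range(p):
        cols.append(x.copy())
        x = A @ x + b                                  # noiseless dynamics
        w = rng.standard_normal(x.shape)
        w = eps * w / max(np.linalg.norm(w), 1e-12)    # ||w|| <= eps
        cols_plus.append(x + w)                        # corrupted measurement
    return np.array(cols).T, np.array(cols_plus).T    # n x p matrices

A = np.array([[0.9, 0.1], [0.0, 0.8]])
b = np.array([0.1, -0.1])
X, Xplus = collect_data(A, b, [0.5, -0.5], 30)        # eps = 0: noiseless
print(X.shape, bool(np.allclose(Xplus, A @ X + b[:, None])))
```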
In this setup, the measurement noise has been selected as an arbitrary vector that corrupts the solution to the noiseless system. The noise w j is shown in Figure 5, only for mode j = 1 but for three values of the noise amplitude. The norm of the noise vectors is bounded by this amplitude, and it is also assumed that the overall noise matrices verify the quadratic bound of Assumption 2, with the corresponding parameter set to 0.3, for all j ∈ K. Such requirements ensure that the associated inequality holds for all j ∈ K. Note that this assumption is related to the covariance matrix when the noise is a random variable, as mentioned in Reference 10.
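The overall bound on the stacked noise matrix follows directly from the per-sample norm bound, as the following numerical check illustrates (dimensions and values are illustrative).

```python
import numpy as np

# If each of the p noise columns w_k has norm at most eps, then the stacked
# noise matrix Xi satisfies Xi Xi^T = sum_k w_k w_k^T, whose largest
# eigenvalue is at most sum_k ||w_k||^2 <= p * eps^2, so the overall bound
# Xi Xi^T <= p eps^2 I_n holds automatically.
rng = np.random.default_rng(2)
n, p, eps = 3, 20, 0.05
Xi = rng.standard_normal((n, p))
Xi = eps * Xi / np.linalg.norm(Xi, axis=0)       # every column has norm eps
M = p * eps**2 * np.eye(n) - Xi @ Xi.T
print(bool(np.min(np.linalg.eigvalsh(M)) >= 0))  # True: the bound holds
```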

Data-based design: Solution and simulations
Applying Theorem 2 and solving optimization problem (31), we have obtained the results summarized in Table 1 for the first cycle {1, 2}; the associated simulations are depicted in Figure 6. It can be seen from Table 1 that augmenting the disturbance amplitude imposes an increase of the optimal value, and that increasing the number of data leads to a decrease of the optimal value. Compared to the model-based approach, the corresponding optimal values suffer from a notable increase. For the noiseless case, this large increase can be explained by the fact that Theorem 2 requires an additional S-procedure to obtain the stabilization conditions, which unavoidably introduces conservatism, as commented earlier. Nevertheless, the resulting optimal values are still small and the attractors are already accurate, as shown in Figure 6. For the noisy cases, the increase compared to the model-based approach can be explained by the previous argument, but also by the fact that the noise in the data leads to additional uncertainties on the system matrices, which need to be compensated by an increase of the size of the attractor.

F I G U R E 5 Measurement noise affecting the data for mode 1. Similar noise affects the data of the other mode.

Finally, Figure 7 presents the variation of the optimal value with respect to the number of data measurements for various values of the disturbance amplitude. First of all, this figure shows that optimization problem (32) has no solution when the number of data is less than n + 1 = 4. This is consistent with the assumption of informativity of the data matrices Y j in Definition 5. Moreover, as the disturbance amplitude increases, the same optimization problem requires more than this minimal number of data (n + 1 = 4) to obtain a solution: the minimum number of data needed increases with the amplitude of the disturbance. The general tendency is that the optimal solution decreases as the number of data increases, and the optimal values tend to a constant as the number of data grows.
The same example has been treated with a different cycle, {1, 2, 2, 2}. Applying again Theorems 1 and 2 and solving the associated optimization problems, we have obtained the numerical results provided in Table 1. Note that the data-driven conditions deliver attractors composed of larger ellipsoids compared with the previous case. This explains why the amplitudes of the disturbance noise have been reduced to 0, 0.01, and 0.02. Note that the optimal value obtained by solving the model-based optimization problem (31) with a disturbance amplitude of 0.1 is 7.0, which is already a very large value. Figure 8 presents the simulations resulting from the same initial condition as the one used in Figure 4.
In this example, we have compared the numerical results obtained for two cycles, one of short length and another one of larger length. Increasing the length of a cycle makes the attractor larger. Intuitively, this is because the uncertainties on the model matrices propagate from one element of the cycle to the next, in order to guarantee the invariance of the attractor.
As a final comment on the numerical application of the data-driven design, it is worth noting that the numerical solutions highly depend on the noise and are very sensitive to it.

F I G U R E 7 Optimal value of optimization problem (32) for cycle {1, 2} and for various values of the disturbance amplitude from 0 to 0.1.