ADVANCES IN COMPUTATIONAL LYAPUNOV ANALYSIS USING SUM-OF-SQUARES PROGRAMMING

Abstract. The stability of an equilibrium point of a nonlinear dynamical system is typically determined using Lyapunov theory. This requires the construction of an energy-like function, termed a Lyapunov function, which satisfies certain positivity conditions. Unlike linear dynamical systems, there is no algorithmic method for constructing Lyapunov functions for general nonlinear systems. However, if the systems of interest evolve according to polynomial vector fields and the Lyapunov functions are constrained to be sum-of-squares polynomials, then stability verification can be cast as a semidefinite (convex) optimization programme. In this paper we describe recent advances in sum-of-squares programming that facilitate advanced stability analysis and control design.

1. Introduction. One of the most fundamental questions in systems and control theory is that of determining the stability of an equilibrium point of a dynamical system. Consider the linear time invariant (LTI) system ẋ(t) = Ax(t), where x(t) ∈ R^n is the state vector and A ∈ R^{n×n}. The equilibrium point x* = 0 is asymptotically stable if and only if A is Hurwitz, i.e. all eigenvalues of A lie in the open left half plane. In 1892 the Russian mathematician A. M. Lyapunov showed that for a nonlinear system ẋ(t) = f(x(t)) with f(0) = 0 and f locally Lipschitz, if there exists a radially unbounded positive definite function V : R^n → R such that V(0) = 0 and V̇(x) = ⟨∂V/∂x, f(x)⟩ < 0 for all x ≠ 0, then the 0 equilibrium is globally asymptotically stable. For LTI systems this translates to the following: A is Hurwitz if and only if, for a given square, positive definite matrix Q, there exists a positive definite matrix P ∈ R^{n×n} that satisfies the Lyapunov equation A^T P + P A + Q = 0. Equivalently, one may search for a positive definite matrix P such that the inequality A^T P + P A ≺ 0 is satisfied. The inequality follows by considering the quadratic Lyapunov function candidate V(x) = x^T P x: clearly, if the inequality holds then the Lie derivative V̇(x) = x^T(A^T P + P A)x is negative definite. The search for a matrix P subject to A^T P + P A ≺ 0 is a convex optimisation problem [7], specifically a Semidefinite Program (SDP) [6], which can be solved efficiently.
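For the LTI case the construction is fully algorithmic. A minimal sketch (with a hypothetical Hurwitz matrix A, not taken from the paper) that solves the Lyapunov equation A^T P + P A + Q = 0 and checks that the resulting P is positive definite:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical Hurwitz matrix (eigenvalues -1 and -3) and a choice of Q.
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
Q = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q, so to obtain
# A^T P + P A = -Q we pass a = A^T and q = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

# P certifies stability iff it is symmetric positive definite.
assert np.allclose(A.T @ P + P @ A, -Q)
print(np.linalg.eigvalsh(P))  # both eigenvalues positive
```

For a non-Hurwitz A the same call returns a solution P that fails the positive definiteness check, which is exactly the stability test described above.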
For general nonlinear systems, there is no convex optimization method that can be used to search for a Lyapunov function (should one exist) certifying stability of an equilibrium point. In fact there is no general algorithm that can construct a Lyapunov function even when a converse theorem guarantees that such a certificate exists. However, if we restrict our attention to dynamical systems whose vector fields evolve according to polynomial functions and only consider Lyapunov function candidates that are representable as sums of squared polynomial functions, then tractable convex optimization techniques do exist. Although not pursued here, other restrictions on the system, such as exponential stability and f ∈ C^2, also allow for convex-optimization-based methods; see for example [11]. The link between convexity and the sum-of-squares decomposition was established in the thesis of Parrilo [26]. Broadly speaking, sum-of-squares programming can be seen as a nonlinear (polynomial) expansion of the Linear Matrix Inequality (LMI) framework.
Fifteen years after sum-of-squares techniques were first introduced, the field is still rapidly developing and finding new application areas. In this paper we describe recent advances in the area with application to advanced computational stability analysis, particularly with a view to control applications. For a broader introduction to sum-of-squares theory the reader is referred to [3] for an algebraic flavour and [9] for a robust control perspective.
1.1. Notation. R^n denotes the standard Euclidean n-dimensional space, R^{n×m} the set of real matrices of dimension n × m and S^n the space of symmetric n × n matrices. The jth row and jth column of a matrix A ∈ R^{n×n} are denoted by A_{j,:} and A_{:,j} respectively. Given x ∈ R^n, we denote the ring of multivariable polynomials with real coefficients by R[x] and the subset of sum-of-squares polynomials in the variable x by Σ[x]. Sometimes it may be necessary to indicate the maximum degree of a polynomial or sum-of-squares polynomial, in which case we use the subscript notation R_d[x] or Σ_d[x], where d is a positive integer. All other notation will be introduced as and when needed.
2. The sum-of-squares decomposition. We first begin by defining the primary objects of interest.

Definition 2.1 (Monomial). Given an n-tuple of non-negative integers (α_1, . . . , α_n), a monomial in x_1, . . . , x_n takes the form x^α = x_1^{α_1} x_2^{α_2} · · · x_n^{α_n}. The degree of a monomial is given by ∂(α) = Σ_{i=1}^n α_i.

Definition 2.2 (Real Polynomial). A polynomial f in x_1, . . . , x_n with coefficients in R is a finite linear combination of monomials, written in the form f(x) = Σ_α a_α x^α with a_α ∈ R. The degree of f, denoted ∂(f), is the maximum of ∂(α) over all monomials with non-zero coefficient a_α. The set of all such polynomials is denoted R[x].
Definition 2.3 (Sum-of-Squares Polynomial). A polynomial P ∈ R[x] is a sum-of-squares (SOS) if it can be decomposed into N polynomials f_1, . . . , f_N ∈ R[x] and expressed as P(x) = Σ_{i=1}^N f_i(x)^2.

From Definition 2.3 it follows that both the degree ∂(P) and the degree of the minimum-degree monomial of P must be even in order for the polynomial P to admit a sum-of-squares decomposition. It is clear from Definition 2.3 that sum-of-squares polynomials are non-negative on R^n, i.e. P(x) ≥ 0 for all x ∈ R^n. The following theorem by Parrilo [27] is the key to providing computational methods that allow us to construct sum-of-squares decompositions.
Theorem 2.4. The existence of a sum-of-squares decomposition of a polynomial in n variables of degree 2d can be decided by solving a semidefinite programming feasibility problem. Assuming no structure or sparsity, the linear matrix inequality constraint has dimensions (n+d choose d) × (n+d choose d).

We can now relate Definition 2.3 to Theorem 2.4: if P(x) is a sum-of-squares polynomial of degree 2d and Z(x) is a column vector containing all monomials up to degree d, then there exists a positive semidefinite matrix Q such that

P(x) = Z^T(x) Q Z(x). (1)

Furthermore, when the coefficients of P are not known, they can be searched for using (1) subject to the constraint Q ⪰ 0. The significance of this result is that semidefinite programming feasibility problems are convex. For completeness we describe the general structure of a semidefinite programme below; however, as writing out the SDPs explicitly can become cumbersome, several software packages such as SOSTOOLS [23], GloptiPoly [15] and YALMIP [21] have been developed that automate the conversion between sum-of-squares programs and SDPs. Throughout this work SOSTOOLS will be used to formulate and solve all sum-of-squares programmes.
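The Gram-matrix statement can be made concrete on a toy example not taken from the paper: for the hypothetical polynomial p(x, y) = x^4 + 2x^2y^2 + y^4 with monomial vector Z = [x^2, xy, y^2], one positive semidefinite Gram matrix is exhibited and checked numerically below.

```python
import numpy as np

# Hypothetical example: p(x, y) = x^4 + 2 x^2 y^2 + y^4 with monomial
# vector Z(x, y) = [x^2, x*y, y^2].  One valid Gram matrix Q (p = Z'QZ):
Q = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])

# Q is positive semidefinite, so p admits an SOS decomposition; here Q has
# rank one, recovering the single square p = (x^2 + y^2)^2.
assert np.linalg.eigvalsh(Q).min() > -1e-9

# Sampling confirms the Gram identity p(x, y) = Z'QZ.
rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.standard_normal(2)
    Z = np.array([x**2, x*y, y**2])
    p_val = x**4 + 2*x**2*y**2 + y**4
    assert abs(Z @ Q @ Z - p_val) < 1e-8
```

In general the Gram matrix is not unique; the SDP feasibility problem of Theorem 2.4 searches the affine family of matrices Q satisfying (the analogue of) the identity above for a positive semidefinite member.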
A standard semidefinite programme (SDP) takes the primal form

p* = min_X ⟨C, X⟩ subject to ⟨A_i, X⟩ = b_i, i = 1, . . . , m, X ⪰ 0, (2)

where X ∈ S^n is the decision variable, ⟨·,·⟩ defines the inner product ⟨A, B⟩ := trace(A^T B), A_i ∈ S^n for all i = 1, . . . , m, C ∈ S^n, b ∈ R^m and the partial ordering X ⪰ 0 is interpreted as the matrix X being positive semidefinite. Optimization problem (2) is referred to as the primal problem and p* is its optimal value. The dual problem is

d* = max_y b^T y subject to C − Σ_{i=1}^m y_i A_i ⪰ 0, (3)

where y ∈ R^m is the dual decision variable and b^T y denotes the standard Euclidean inner product. From a geometric standpoint the feasible region of (2) is the intersection of an affine subspace and the positive semidefinite cone. As the positive semidefinite cone is self-dual, we can appeal to duality theory to derive (3). Semidefinite programmes of the form given above have a weak duality property: feasible primal solutions give upper bounds on the dual optimal value and feasible dual solutions give lower bounds on the primal optimal value. For a thorough treatment of SDPs and their numerical aspects the reader is referred to [41, 37]. As with other classes of convex optimization problems, a wide variety of free SDP solvers and parsers is available, for example SeDuMi [34], SDPT3 [38] and CVX [12].
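A full SDP requires a dedicated solver, but in the special case where C and every A_i are diagonal the matrix variable may be taken diagonal and the primal-dual pair reduces to a linear programme, which is enough to observe weak (here, strong) duality numerically. A sketch with hypothetical data:

```python
import numpy as np
from scipy.optimize import linprog

# Diagonal case of the SDP pair: with x = diag(X) >= 0,
#   primal:  min c^T x   s.t.  A x = b,  x >= 0
#   dual:    max b^T y   s.t.  A^T y <= c
c = np.array([1.0, 1.0, 3.0])          # diag(C), hypothetical data
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 0.0]])        # rows are diag(A_i)
b = np.array([4.0, 3.0])

primal = linprog(c, A_eq=A, b_eq=b, bounds=(0, None))
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=(None, None))
p_star, d_star = primal.fun, -dual.fun

# Weak duality: d_star <= p_star; LP strong duality gives equality here.
print(p_star, d_star)
```

For genuinely non-diagonal data the same primal-dual structure holds but an SDP solver (SeDuMi, SDPT3, or CVX, as cited above) replaces the LP solver.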
2.1. Real algebraic geometry. One of the main advantages of the sum-of-squares decomposition is that it can be used as a computational relaxation to polynomial non-negativity. It will be shown in Section 3 why this is desirable for the case of nonlinear stability verification. Frequently however, we will not necessarily require a function to be positive on the whole state space, but rather on a bounded domain.
A standard example is that of trying to find a Lyapunov function for a nonlinear system that contains multiple equilibria. In this case it is clear that the Lyapunov inequalities will only hold over a given region around the equilibrium of interest. Similar arguments appear many times in robust control theory. A result from real algebraic geometry [4] called the Positivstellensatz, which we will now introduce, provides a feasibility test for determining the non-negativity of polynomials over semialgebraic sets. The Positivstellensatz establishes a relationship between geometric objects (affine varieties) and the validity of an algebraic relationship (polynomial ideal) [10].
We first define what a semialgebraic set is, which we will use to describe the domain over which we wish to determine the non-negativity of a given polynomial:

Definition 2.5 (Semialgebraic Set). A basic closed semialgebraic set is a subset of R^n defined by a finite number of polynomial inequalities: S = {x ∈ R^n | g_i(x) ≥ 0, i = 1, . . . , m}.

Definition 2.6 (Ideal). Given the multivariate polynomials g_1, . . . , g_m ∈ R[x], the ideal generated by the g_i for i = 1, . . . , m is the set I(g_1, . . . , g_m) := {Σ_{i=1}^m t_i g_i | t_1, . . . , t_m ∈ R[x]}.

The Positivstellensatz also requires the multiplicative monoid M generated by polynomials h_1, . . . , h_r, where r denotes the number of polynomials in M, i.e. the set of all finite products of the h_k (including the empty product 1), and the cone C generated by polynomials f_1, . . . , f_s, i.e. the set of all sums of terms of the form s_0 and s_I · f_{i_1} · · · f_{i_j}, where s_0 and the s_I are sum-of-squares polynomials.
Using the preceding definitions we can now state the Positivstellensatz theorem.

Theorem 2.9 (Positivstellensatz). Let C be the cone generated by f_1, . . . , f_s, I the ideal generated by g_1, . . . , g_m and M the multiplicative monoid generated by h_1, . . . , h_r. Then the following are equivalent:
• The set {x ∈ R^n | f_i(x) ≥ 0, g_j(x) = 0, h_k(x) ≠ 0 for all i, j, k} is empty.
• There exist f ∈ C, g ∈ I and h ∈ M such that f + h^2 + g = 0.

For a more detailed treatment and the proof of the Positivstellensatz (and the underlying algebraic geometry) see [4]. The Positivstellensatz provides an equivalence relationship between sets defined by polynomial inequalities and algebraic objects; thus, through semidefinite programming, we can verify set emptiness. Note that Theorem 2.9 places no bound on the degree of any of the polynomials or sum-of-squares polynomials involved. In order to use the SDP machinery we must place bounds on the maximum degree of all functions.
The following example, taken from [28], illustrates the ideas presented above. Consider the set of polynomial relations

f(x, y) := x − y^2 + 3 ≥ 0, g(x, y) := y + x^2 + 2 = 0. (4)

We wish to prove that there is no (x, y) ∈ R^2 such that (4) is satisfied. Using Theorem 2.9, system (4) has no solution if and only if there exist sum-of-squares polynomials s_1, s_2 ∈ Σ[x, y] and a polynomial t ∈ R[x, y] that satisfy s_1 + s_2 f + tg = −1. If the left hand side of this expression is evaluated at any feasible solution of (4), the result would be non-negative, which is a contradiction. Two polynomials that certify that no solution to (4) exists are t = −6, s_2 = 2 and s_1 = 1/3 + 2(y + 3/2)^2 + 6(x − 1/6)^2; these can be found using SOSTOOLS. Many of the sum-of-squares based theorems in the following sections can be formulated as applications of the Positivstellensatz, but we will not explicitly provide these formulations.
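The certificate can be checked independently of any solver, since s_1 + s_2 f + tg = −1 must hold as a polynomial identity. The sketch below assumes the relations of the example are f = x − y^2 + 3 ≥ 0 and g = y + x^2 + 2 = 0 (the infeasible system from [28]) and tests the identity at random points:

```python
import numpy as np

# Reported certificate (relations assumed: f = x - y^2 + 3, g = y + x^2 + 2).
t  = lambda x, y: -6.0
s2 = lambda x, y: 2.0
s1 = lambda x, y: 1/3 + 2*(y + 1.5)**2 + 6*(x - 1/6)**2

f = lambda x, y: x - y**2 + 3
g = lambda x, y: y + x**2 + 2

# s1 + s2*f + t*g must equal -1 for ALL (x, y); random sampling confirms
# the polynomial identity to floating-point accuracy.
rng = np.random.default_rng(1)
for _ in range(200):
    x, y = rng.uniform(-10, 10, size=2)
    lhs = s1(x, y) + s2(x, y)*f(x, y) + t(x, y)*g(x, y)
    assert abs(lhs + 1.0) < 1e-8
```

Because s_1 and s_2 are sums of squares, any (x, y) satisfying (4) would make the left-hand side non-negative, contradicting the identity; this is exactly the emptiness argument of Theorem 2.9.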
3. Computational Lyapunov stability analysis. In this section we introduce the class of systems on which the rest of the paper focuses. We present several problems related to this class and then introduce a number of theorems that address them. Once these theorems are formulated, a sum-of-squares relaxation that solves the relevant problem is presented. In general, complete proofs of theorems are not provided unless they are of immediate benefit or help the reader generalise a result to a broader class of systems or problems.
The primary system of interest is described by a set of coupled differential equations of the form

ẋ(t) = f(x(t)), x(0) = x_0 ∈ D, (5)

where f : D → R^n is a vector of polynomial functions, x(t) ∈ R^n is the state vector and D ⊆ R^n is a neighbourhood of the origin. Without loss of generality it is assumed that the equilibrium point of interest (which we denote by x*) is at the origin, i.e. f(x*) = 0 with x* = 0, and therefore x* ∈ D. Furthermore, since f is polynomial it is locally Lipschitz on D, and thus solutions to (5) exist and are unique (locally).
Definition 3.1. The equilibrium point x* = 0 of (5) is said to be
• stable if, for each ε > 0, there exists δ = δ(ε) > 0 such that ‖x(0)‖ < δ implies ‖x(t)‖ < ε for all t ≥ 0;
• asymptotically stable if it is stable as defined above, and δ may be chosen such that ‖x(0)‖ < δ implies lim_{t→∞} x(t) = 0.

Definition 3.2. The equilibrium point x* of (5) is said to be locally exponentially stable if there exist positive constants c, K and γ such that ‖x(t)‖ ≤ K‖x(t_0)‖e^{−γ(t−t_0)} for all t ≥ t_0 and all ‖x(t_0)‖ < c, and globally exponentially stable if the conditions above hold for all x(t_0) ∈ R^n.
Throughout this work all analysis questions pertaining to stability will be formulated in the Lyapunov framework, and a computational procedure that uses sum-of-squares will be described for verification purposes.
3.1. Lyapunov function construction. In this section we introduce the central theoretical tool that forms the core of all results described in this review. Two notions of Lyapunov stability will be presented and, using the sum-of-squares decomposition, computational methods for stability verification based upon these stability definitions will be given. In essence, the majority of the control problems considered in this paper follow the same two steps: i) recasting the original problem as a Lyapunov-type problem and then, ii) constructing a sum-of-squares relaxation of that problem.

Theorem 3.3. Let x* = 0 ∈ D ⊂ R^n be an equilibrium point of (5). If there exists a continuously differentiable function V : D → R such that V(0) = 0, V(x) > 0 for all x ∈ D \ {0} and V̇(x) = (∂V/∂x)f(x) ≤ 0 for all x ∈ D, then x* is stable; if, in addition, V̇(x) < 0 for all x ∈ D \ {0}, then x* is asymptotically stable. When such a function exists, x* is said to be (asymptotically) stable in the sense of Lyapunov.
Definition 3.4. If there exists a function V that satisfies the asymptotic stability requirements of Theorem 3.3 with D = R^n and ‖x‖ → ∞ ⇒ V(x) → ∞, then x* is said to be globally asymptotically stable.
A stronger notion of stability, at least in terms of convergence properties, is that of exponential stability.
Theorem 3.5. Let x* = 0 be an equilibrium point of (5) and let D ⊂ R^n be a given domain containing x*. Assume there exists a continuously differentiable function V : D → R and positive constants k_1, k_2, k_3 such that

k_1‖x‖^p ≤ V(x) ≤ k_2‖x‖^p, V̇(x) ≤ −k_3‖x‖^p,

for all t ≥ 0 and x ∈ D, where p is a positive integer. Then x* is exponentially stable. Furthermore, if the assumptions hold with D = R^n, then x* is globally exponentially stable.
The following theorem illustrates how sum-of-squares programming can be used to construct a polynomial stability certificate, in this case a Lyapunov function, for an equilibrium point of (5). In accordance with Theorem 3.3 the domain of interest D must be defined; in particular, D must be representable as a semialgebraic set. In this work we focus on the simplified case of a single polynomial inequality. The simplest example is a ball of radius √r, r > 0, centred at the origin, which we denote by B_r := {x ∈ R^n | ‖x‖_2^2 ≤ r}. The polynomial inequality associated with B_r is β(x) := x^T x − r ≤ 0. For the remainder of this paper we will assume that the domain of interest D is represented by all points that satisfy β(x) ≤ 0.

Theorem 3.6. If there exist a polynomial function V with V(0) = 0, sum-of-squares polynomials r_1, r_2 and positive definite polynomial functions ϕ_1, ϕ_2, all of bounded degree, such that

V − ϕ_1 + r_1β ∈ Σ[x], −(∂V/∂x)f − ϕ_2 + r_2β ∈ Σ[x],

then the equilibrium point x* of (5) is asymptotically stable.
Proof. The first condition ensures that V(x) is positive on D. To see this, note that β(x) ≤ 0 for x ∈ D and r_1(x) ≥ 0 by construction, so the term r_1(x)β(x) is non-positive for x ∈ D. Since the first expression is, by definition, a sum-of-squares and hence non-negative, we have V(x) ≥ ϕ_1(x) − r_1(x)β(x) ≥ ϕ_1(x) > 0 for all x ∈ D \ {0}. Therefore the first condition satisfies the first equality and inequality of Theorem 3.3. An identical argument applied to the second condition enforces the derivative inequality in Theorem 3.3. This concludes the proof.
Note that if the degree of the polynomial and sum-of-squares variables in Theorem 3.6 are not bounded then the optimisation problem is convex but infinite dimensional and thus not computationally tractable.
A few questions immediately come to mind: Do stable polynomial systems always admit a sum-of-squares Lyapunov function? How conservative are we being by limiting ourselves to positive polynomials that admit a sum-of-squares decomposition? Can we determine a priori the degree of the Lyapunov function required? The answer to the first question is simply no. In [1] a globally stable polynomial system is presented that does not admit any polynomial Lyapunov function. Of course there are several subtleties to this result: the counterexample concerns global stability analysis (local stability is not considered), and the authors refer to asymptotic stability. As will be shown in Theorem 3.8, locally exponentially stable systems do admit polynomial Lyapunov functions and it is possible to obtain a degree bound a priori [29, 30]. We first present a sum-of-squares programme for verifying exponential stability.
Theorem 3.7. Let x* = 0 be an equilibrium point of (5) contained in D ⊂ R^n. Suppose there exist a sum-of-squares polynomial V, a sum-of-squares polynomial r and scalars κ_1, κ_2 > 0 such that

V − κ_1 x^T x ∈ Σ[x], −(∂V/∂x)f − κ_2 V + rβ ∈ Σ[x];

then x* is a locally exponentially stable equilibrium point.
In Theorem 3.7 we have used the fact that it is known that the Lyapunov function V can be assumed to be a sum-of-squares polynomial [30], thus the sum-of-squares multiplier used in the first condition of Theorem 3.6 is not required here. The following theorem [30] shows that all systems of the form (5) that are exponentially stable on a bounded domain admit a sum-of-squares polynomial Lyapunov function.
Theorem 3.8. For system (5), suppose that f is polynomial with ∂(f) = q and that x* = 0 is exponentially stable on B_r for some r > 0, with ‖x(t)‖ ≤ K‖x(0)‖e^{−γt}.
Then there exist κ_1, κ_2, κ_3 > 0 and a sum-of-squares polynomial V such that for all x ∈ B_r the following inequalities hold:

κ_1‖x‖^2 ≤ V(x) ≤ κ_2‖x‖^2, (∂V/∂x)f(x) ≤ −κ_3‖x‖^2,

where the degree of V depends on N, L denotes the Lipschitz bound of f on B_{4Kr}, and N(L, γ, K) is any integer such that NT > log(2K^2)/(2γ) and T < 1/(2L).

The proof of this theorem is constructive and is based upon the Picard iteration. The highlights of the theorem are that exponentially stable systems have a sum-of-squares polynomial Lyapunov function, the degree of which is bounded, and furthermore the degree bound is a function of the continuity properties of f and the convergence rate of (5) in the sense of Definition 3.2 (with t_0 = 0).

This section is concluded with an illustrative example taken from [2]. Consider the nonlinear system described by the following set of nonlinear ODEs:

ẋ_1 = x_2, ẋ_2 = −x_1 + (x_1^3)/3 − x_2. (6)

This system has two saddle points at (±√3, 0) as well as a stable focus at the origin. Here we are interested in trajectories of (6) with initial conditions in the circular disc of radius 1 centred on the origin: D = {x ∈ R^2 | x^T x ≤ 1}. Using SOSTOOLS [23], Theorem 3.6 was used to construct a Lyapunov function verifying local asymptotic stability. The phase portrait of the system and the level sets of the Lyapunov function are shown in Figure 1.
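The qualitative claims about the example can be checked numerically. The sketch below assumes the dynamics ẋ_1 = x_2, ẋ_2 = −x_1 + x_1^3/3 − x_2 for system (6), an assumption consistent with the stated equilibrium structure (saddles at (±√3, 0), stable focus at the origin):

```python
import numpy as np

# Assumed dynamics for example (6): x1' = x2, x2' = -x1 + x1^3/3 - x2.
def f(x):
    return np.array([x[1], -x[0] + x[0]**3 / 3 - x[1]])

def jacobian(x):
    return np.array([[0.0, 1.0],
                     [-1.0 + x[0]**2, -1.0]])

# Three equilibria: the origin plus the two saddle points.
for eq in [np.zeros(2), np.array([np.sqrt(3), 0]), np.array([-np.sqrt(3), 0])]:
    assert np.allclose(f(eq), 0, atol=1e-12)

# Origin: complex eigenvalues with negative real part -> stable focus.
lam0 = np.linalg.eigvals(jacobian(np.zeros(2)))
assert np.all(lam0.real < 0) and np.all(abs(lam0.imag) > 0)

# (+/- sqrt(3), 0): one positive, one negative eigenvalue -> saddle.
lam_s = np.linalg.eigvals(jacobian(np.array([np.sqrt(3), 0])))
assert lam_s.real.min() < 0 < lam_s.real.max()
```

The unit disc D lies strictly inside the region bounded by the two saddles (√3 ≈ 1.73), which is why a local certificate on D is plausible in the first place.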
In addition to stability, it is often desirable to determine the Region of Attraction (RoA), denoted R_A, of x* for (5). Let φ(t; x_0) be the solution to (5) with initial state x_0 at t = 0. The region of attraction is the set R_A ⊆ R^n such that x_0 ∈ R_A implies lim_{t→∞} φ(t; x_0) = x*. Determining the set R_A exactly is a nontrivial task and no analytic solutions exist for general nonlinear systems. Most region of attraction algorithms attempt to (under)approximate R_A by simpler, more computationally amenable set descriptions such as ellipsoids and polytopes using the linear differential inclusion framework [5]. More recently, sum-of-squares methods have been applied to RoA analysis [8, 39], although these typically lead to bilinear matrix inequalities, which are non-convex [36]. The most common method for approximating R_A is to determine the largest level curve of a Lyapunov function that is completely contained in some pre-specified subset of R^n.

Figure 1. A: Trajectories of system (6) initiated from various initial conditions (red lines) either converge to the stable equilibrium point at the origin or rapidly diverge if their initial state is close to the two saddle points at (±√3, 0). B: Level curves of the Lyapunov function (blue circles).

Denote such a subset by X and assume that it can be represented by a semialgebraic set of the form X = {x ∈ R^n | g_i(x) ≤ 0, i = 1, . . . , k}. For expositional purposes we will assume that X = D, that is, we search for an approximation of R_A on the same set that was used for stability analysis. For completeness, at the end of the example we provide a general algorithm that attempts the more complicated task of estimating R_A on the whole of R^n. The reader is referred to the references previously mentioned for various implementations of the algorithm. A new method based upon set invariance, which provides an estimate of R_A that is not given by a Lyapunov function level-set, is presented in [40].
Define the point set enclosed by the level curve V(x) = c, c > 0, by Ω_c := {x ∈ R^n | V(x) ≤ c}. When Ω_c ⊂ D and V(x) is a Lyapunov function for (5), it follows that Ω_c is compact and positively invariant and hence a region of attraction for the zero equilibrium point x* of (5). We now propose a sum-of-squares programme based on the Positivstellensatz that computes the largest level-set of V, i.e. the maximum c, such that Ω_c ⊂ D; we denote this set by Ω_{c*}. Given a Lyapunov function V ∈ R_a[x] and non-negative integers κ, b_1, . . . , b_k, search for sum-of-squares polynomials d_i ∈ Σ_{b_i}[x], i = 1, . . . , k, that solve the optimization problem (8), and let c* denote the optimal value of its objective function. The maximum level set of V contained within D is then given by V(x) = c*. Less conservative (although computationally more demanding) solutions can be obtained by increasing κ and the degrees b_i of the sum-of-squares multipliers.

Let us return to the example system (6). Consider now the domain D = {x ∈ R^2 | x^T x − 2.2 ≤ 0}, shown by the green level curve in Figure 2. Following the steps described previously, we obtain the quartic Lyapunov function

V(x) = 3.421x_1^2 + 1.7217x_1x_2 + 2.8584x_2^2 + 0.45219x_1^4 + 1.318x_2x_1^3 + 1.5945x_2^2x_1^2 + 0.20294x_1x_2^3 + 0.86584x_2^4.

Setting κ = 1 and ∂(d_1) = 2 and solving (8), we find that the largest level curve of V contained in D, and thus an approximation of the RoA on D, is {x ∈ R^2 | V(x) ≤ 6.308}, which is shown in Figure 2. Improved estimates of the true region of attraction may be obtained by iterating between adjusting the domain D and computing the largest Lyapunov level curve inside that region. Pseudo-code for such an algorithm is:
1. Define a domain X in the state space described by a semialgebraic set.
2. Construct a Lyapunov function V(x) on X as described earlier.
3. Compute Ω_{c*} using (8).
4. Given a scalar Δ > 0, set X' = Ω_{c*+Δ} and go to Step 2, replacing X with X'.
The above steps are iterated until the optimization problems become infeasible. Note that by employing Theorem 2.9 it is possible to ensure that X' contains X, c.f. [40].
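Since the quartic V and the disc D are given explicitly, the reported level value can be cross-checked numerically without an SDP solver: for a positive definite V whose value grows monotonically along rays, the largest sublevel set contained in D is bounded by the minimum of V over the boundary of D. A rough sampling sketch (a sanity check, not the SOS certificate itself):

```python
import numpy as np

# Quartic Lyapunov function from the example.
def V(x1, x2):
    return (3.421*x1**2 + 1.7217*x1*x2 + 2.8584*x2**2 + 0.45219*x1**4
            + 1.318*x2*x1**3 + 1.5945*x2**2*x1**2 + 0.20294*x1*x2**3
            + 0.86584*x2**4)

# Sample V on the boundary circle x'x = 2.2 and take the minimum; this
# bounds the largest level value c with {V <= c} contained in D.
theta = np.linspace(0, 2*np.pi, 200000)
r = np.sqrt(2.2)
c_est = V(r*np.cos(theta), r*np.sin(theta)).min()
print(c_est)  # close to the reported value 6.308
```

The agreement indicates that, for this example, the degree-2 multiplier in (8) is already nearly tight; increasing κ or ∂(d_1) could close the remaining gap.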
In the following subsection we shall consider the problem of constructing a state feedback controller. One of the goals of feedback control is to enlarge the region of attraction of a desired equilibrium point.

3.2. State feedback control design. The standard control synthesis problem can be stated as follows: given a (possibly unstable) dynamical system of the form ẋ(t) = f(x(t), u(t)), find a control law u(x) such that the closed loop is asymptotically stable. Typically the controlled system should also minimise a given convex cost function. For LTI systems the solutions to such problems are well understood, and there are well-known results giving necessary and sufficient conditions for the existence of a controller together with optimality conditions. The nonlinear counterpart is, however, still an open problem. We describe one approach, based on the work in [32], for the state feedback case; extensions to more complicated synthesis problems including output feedback and estimator design have been investigated [42, 43, 35], and for results related to the general nonlinear control setup see [17, 22]. We first consider a relaxed version of the problem stated above by assuming that the dynamical system can be written in the input-affine form

ẋ(t) = f(x(t)) + g(x(t))u(t), (9)

where f is a vector of polynomial functions such that f(0) = 0, g is a matrix of polynomial functions and the polynomial control law to be constructed is u = h(x). Initially we consider only the stabilisability problem and ignore optimal synthesis. Under these assumptions, state-feedback control amounts to finding a polynomial Lyapunov function V such that V is positive definite (10a) and

(∂V/∂x)[f(x) + g(x)h(x)] ≤ −φ(x), (10b)

where φ(x) is a positive definite polynomial function. Clearly, constructing a sum-of-squares decomposition for the derivative condition (10b) is challenging, as the problem is not jointly convex in V(x) and h(x). Convex methods that avoid this problem include density functions [33] and moment relaxations [20]. However, the method presented here follows the Lyapunov approach but utilises a recasting of system (9) into a form that is reminiscent of the LTI setup.
Let Z(x) be an appropriately chosen N × 1 vector of monomials of degree at least 1, so that (9) can be written as

ẋ = A(x)Z(x) + B(x)u, (11)

with Z(x) ∈ R^N. Theorem 3.9 states that if a symmetric polynomial matrix P and a polynomial matrix H satisfying conditions (12a)-(12b) exist, then the state feedback problem admits a solution; furthermore, one such stabilising controller is u(x) = H(x)P(x)^{-1}Z(x).

Proof. An outline of the proof is provided for completeness. Assume the polynomial matrices P and H defined in Theorem 3.9 exist and that conditions (12a)-(12b) are satisfied. Consider the Lyapunov function V(x) = Z^T(x)P^{-1}(x)Z(x). The proof proceeds by showing that V is indeed a Lyapunov function for the closed loop system

ẋ = [A(x) + B(x)H(x)P^{-1}(x)]Z(x). (13)

Positive definiteness of V is guaranteed by the fact that (12a) ensures P, and thus P^{-1}, is positive definite. It remains to show that the derivative condition is non-positive. Differentiating V along the trajectories of (13) and using the identity ∂(P^{-1})/∂x_j = −P^{-1}(∂P/∂x_j)P^{-1}, it is easily shown that condition (12b) enforces non-positivity and hence that V is a valid Lyapunov function.
When condition (12b) in Theorem 3.9 holds with s(x) > 0 for all x ≠ 0, the origin is asymptotically stable. Furthermore, if P(x) is a constant matrix, the origin of the closed loop system is globally stable.
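In the LTI special case Z(x) = x with constant P and H, one recovers the classical LMI change of variables: find Q ≻ 0 and Y such that AQ + QA^T + BY + Y^TB^T ≺ 0, which yields u = YQ^{-1}x. The sketch below verifies this inequality for hypothetical data (a double integrator with a stabilising gain chosen in advance) rather than solving the LMI:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical data: double integrator stabilised by u = K x, K = [-1, -2].
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-1.0, -2.0]])

Acl = A + B @ K                                   # closed loop, Hurwitz
Q = solve_continuous_lyapunov(Acl, -np.eye(2))    # Acl Q + Q Acl^T = -I
Y = K @ Q                                         # change of variables

M = A @ Q + Q @ A.T + B @ Y + Y.T @ B.T           # equals Acl Q + Q Acl^T
assert np.linalg.eigvalsh(Q).min() > 0            # Q > 0
assert np.linalg.eigvalsh(M).max() < 0            # LMI satisfied
```

The substitution Y = KQ is what removes the bilinearity between the Lyapunov certificate and the controller; the polynomial matrices P and H in Theorem 3.9 play exactly the analogous roles.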
The results of this section are now extended to the optimal control setting; we focus on nonlinear H∞ state-feedback control. Here we consider dynamical systems which can be written in the form (14), where the monomial vector Z(x) takes the same form as in the state-feedback problem and z_1 ∈ R^{m_1}, z_2 ∈ R^{m_2}. The index set I_B is extended to denote the zero rows of the matrix [B_1(x) B_2(x)]. The control objective in this case is to minimise the induced L_2-gain from w to z = [z_1^T, z_2^T]^T by choosing a control law u(x) = F(x)Z(x). Below we present a relaxation of this problem: instead of finding a controller that achieves the minimum L_2-gain, we specify an acceptable gain γ > 0 and then determine whether a controller achieving this bound exists. This is commonly referred to as sub-optimal control.

Theorem 3.10. Given a nonlinear system of the form (14), suppose there exist a symmetric polynomial matrix P ∈ R[x]^{N×N}, a positive definite sum-of-squares polynomial s ∈ Σ[x] and a constant ε > 0 satisfying the two synthesis constraints. (Note that we have dropped the dependence on variables in the second constraint in order to reduce the notational burden.) Then the state feedback law u(x) = F(x)Z(x) renders x* of the closed loop system asymptotically stable and the L_2-gain from w to z is less than γ. If, additionally, P(x) is a constant matrix, then the L_2-gain bound and asymptotic stability hold globally.
Proof. The proof follows from the classic LPV synthesis algorithms c.f. [16] and the obvious differences are covered by Theorem 3.9.
4. Non-polynomial system analysis. In this section we describe how sum-of-squares techniques can be used to verify the stability of dynamical systems that do not evolve according to polynomial vector fields. This overview will be informal and is intended primarily for pedagogical purposes. Two simple examples motivate the problem and then a general algorithm is presented. The method we describe involves taking a non-polynomial system and recasting it as a constrained polynomial system; stability of the new system then implies stability of the original system. The caveat is that the state dimension of the new polynomial system is likely to be larger than that of the original system and to include new state constraints [25].
Consider the dynamical system ż(t) = f(z(t)), (15), where f(0) = 0 and z(t) ∈ R^n. Unlike in (5), f is not necessarily a vector of polynomial functions. For brevity we drop the dependence on time, t, from the notation. Let us now define the augmented state vectors x̃_1, x̃_2, which after applying a recasting algorithm form the system (16), dx̃_1/dt = f_1(x̃_1, x̃_2), dx̃_2/dt = f_2(x̃_1, x̃_2), where f_1, f_2 are polynomial vector functions. In order for the system (16) to accurately capture the behaviour of (15), the recasting process must introduce some constraints on (16) (or, more concretely, on the new state variables x̃_2); essentially this restricts the state evolution of (16) to a manifold. We write these constraints in vector form as x̃_2 = F(x̃_1), (17), with the equality applied in an element-wise manner. Clearly, for the recasting process to be of any use, F must be polynomial in x̃_1. Such a recasting process may indirectly induce some non-polynomial constraints, (18), which are automatically satisfied when (17) holds.
In order to verify stability of an equilibrium point of the non-polynomial system (15) using sum-of-squares programming, we instead construct a Lyapunov function for the constrained, recast system (16). A theorem proving that this condition is sufficient, together with a sum-of-squares programme implementing the result, will be provided after two illustrative examples are presented.
The first example is an exact recasting; exactness refers to the fact that the transformed system has the same state dimension as the original system. Take the non-polynomial scalar system ẋ(t) = ce^{−αx(t)}. Choosing p(t) = ce^{−αx(t)} gives ṗ(t) = −αp(t)ẋ(t) = −αp^2(t), which is polynomial.

Next we present a more involved example taken from [25]. Let Γ be the static saturation function and consider the system (19), which has one equilibrium point, x*, at the origin. To recast this as a polynomial system, new variables u_1, . . . , u_4 are introduced; these variables are the induced equality constraints described by (18). The state equations from (19) can then be written in the polynomial form (20) with the constraints (21). It should be clear that the polynomial constraints (21) encode the variables u_1, . . . , u_4 defined earlier. The Lyapunov theorems presented thus far do not take into account constrained systems of the form (20)-(21). We therefore first define a generalised constrained polynomial system and then present a sum-of-squares programme for stability verification.
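The exact recasting above is easy to validate in simulation: integrating the original non-polynomial system and the recast polynomial system from consistent initial conditions should produce the same trajectory for p. A sketch with hypothetical constants c, α and initial state:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Original system xdot = c*exp(-alpha*x); recast state p = c*exp(-alpha*x)
# obeys the polynomial dynamics pdot = -alpha*p**2.
c, alpha, x0 = 2.0, 0.5, 0.3
p0 = c * np.exp(-alpha * x0)          # consistent initial condition

t_eval = np.linspace(0, 5, 50)
sol_x = solve_ivp(lambda t, x: c*np.exp(-alpha*x), (0, 5), [x0],
                  t_eval=t_eval, rtol=1e-10, atol=1e-12)
sol_p = solve_ivp(lambda t, p: -alpha*p**2, (0, 5), [p0],
                  t_eval=t_eval, rtol=1e-10, atol=1e-12)

# p reconstructed from x(t) matches the recast state at every sample.
assert np.allclose(c*np.exp(-alpha*sol_x.y[0]), sol_p.y[0], atol=1e-6)
```

Here the recasting is exact: no extra states or constraints are introduced, in contrast to the saturation example, where the new variables must be tied together by polynomial equality constraints.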
A constrained polynomial dynamical system takes the form (22), where x(t) ∈ R^n is the usual state vector and θ(t) ∈ R^m represents any needed auxiliary variables. It is assumed that the functions a_i, b_j, c_k are all polynomials. The domain of interest is D ⊂ R^{m+n}, which we assume has a semialgebraic representation. We further assume that f(x, θ) = 0 for x = x* = 0 and θ ∈ D_θ^0, where D_θ^0 denotes the set of auxiliary variables consistent with x = 0. The following Lyapunov theorem from [25] provides a sum-of-squares programme for verifying the stability of the equilibrium point x* of (22).
Theorem 4.1. Consider system (22), where f is allowed to be a rational function with no singularities in D. Suppose there exist polynomials V(x), w(x, θ), p_i(x, θ), q_j(x, θ) and scalars r_k > 0 such that condition (23) holds, or condition (24) if there are no integral constraints; then the equilibrium point x* = 0 is stable.
Proof. This is a simple extension of Theorem 3.3. Integrating either (23) or (24) from 0 to T it is easily verified that V (x(0))−V (x(T )) ≥ 0 on D when the constraints are satisfied, thus ensuring stability.
The conditions in Theorem 4.1 can easily be implemented as sum-of-squares programmes in exactly the same manner as those of Theorems 3.6 and 3.7. The final theorem in this section provides sufficient conditions for verifying the stability of (15) by constructing a Lyapunov function for (16). As elsewhere, functional dependence on variables has been suppressed where possible to improve readability.
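All of these programmes ultimately reduce to semidefinite feasibility problems. As a minimal numerical illustration of the underlying machinery, the LTI Lyapunov condition A^T P + PA ≺ 0 from the introduction can be certified with off-the-shelf tools (the example matrix below is our illustrative choice, not from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A Hurwitz example matrix (illustrative choice; eigenvalues -1 and -2).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Solve A^T P + P A = -Q with Q = I. scipy solves a x + x a^H = q,
# so passing a = A.T and q = -Q yields the desired Lyapunov equation.
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)

# P must be positive definite for V(x) = x^T P x to certify stability,
# and the Lie derivative matrix A^T P + P A must be negative definite.
eig_P = np.linalg.eigvalsh((P + P.T) / 2)
eig_lie = np.linalg.eigvalsh(A.T @ P + P @ A)
print(eig_P.min() > 0, eig_lie.max() < 0)  # True True
```

For polynomial systems the same feasibility structure appears, with the Gram matrix of the sum-of-squares multipliers playing the role of P.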
Replacing the positivity constraints in Theorem 4.2 with sum-of-squares relaxations, and assuming that D_1 and D_2 can be represented as semialgebraic sets, allows the conditions above to be verified algorithmically.
4.1. Absolute stability. An alternative method for analysing non-polynomial systems within the sum-of-squares framework was developed in [14], using the notion of functional sum-of-squares decompositions and absolute stability arguments [18, Ch. 7]. Recalling the definition of a sum-of-squares polynomial (Definition 2.3), consider the functional p(x, ζ(x)), where x ∈ R^n and ζ : D_x → R^p. We would like to determine whether p(x, ζ(x)) ≥ 0 on D_x ⊂ R^n. Extending the idea of a sum-of-squares decomposition from polynomials to functionals, a relaxation of positivity is to search for a decomposition of the form p(x, ζ(x)) = Σ_{i=1}^N g_i²(x, ζ(x)). There is no implication here that the functions g_i should be polynomial. This idea can be incorporated into a semidefinite programme to verify positivity of p computationally. To do so, treat ζ as an independent variable: if p is polynomial in (x, ζ) with ∂(p) = 2d and a decomposition p(x, ζ) = Σ_{i=1}^N g_i²(x, ζ) exists in which the g_i are polynomial in x and ζ, then computing a functional decomposition can be cast in the same manner as the polynomial case. Let p(x, ζ) = Z^T(x, ζ) Q Z(x, ζ) (26), where Z is a vector of monomials in x and ζ of degree less than or equal to d and the decision variable Q is a symmetric matrix of coefficients to be determined. If there exists a positive semidefinite Q then p is said to possess a functional sum-of-squares decomposition and satisfies p(x, ζ) ≥ 0 for all x, ζ. The following example illustrates this idea: the problem is to determine whether the expression p(x) = 2x⁴ + 2x³ sin x − x² sin²x + 5 sin⁴x (27) is non-negative.
Non-negativity is confirmed if a functional sum-of-squares decomposition of p can be constructed. Set ζ(x) = sin x; then (27) can be written as p(x, ζ) = 2x⁴ + 2x³ζ − x²ζ² + 5ζ⁴. Now define Z(x, ζ) = [x², ζ², xζ]^T; matching coefficients in (26) determines Q up to one free parameter, and choosing the positive semidefinite matrix Q = [2 −3 1; −3 5 0; 1 0 5] yields the decomposition p(x, ζ) = ½(2x² − 3ζ² + xζ)² + ½(ζ² + 3xζ)², confirming that (27) is non-negative. If f is a rational function then a simple modification to the definition of the sector inequalities (29), and a minor change to Theorem 4.4, allows stability to be verified using the sum-of-squares decomposition. Note that system (19) is a specific example of a non-polynomial system that can be modelled and analysed using this methodology.
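The Gram-matrix reasoning can be verified numerically; the particular positive semidefinite Q below is one admissible solution of the coefficient-matching conditions:

```python
import numpy as np

# A Gram matrix for p(x, z) = 2x^4 + 2x^3 z - x^2 z^2 + 5z^4 with monomial
# vector Z = [x^2, z^2, x*z]. Coefficient matching leaves one free parameter;
# this choice makes Q positive semidefinite.
Q = np.array([[2.0, -3.0, 1.0],
              [-3.0, 5.0, 0.0],
              [1.0, 0.0, 5.0]])

# Q is PSD: its smallest eigenvalue is zero up to rounding error.
print(np.linalg.eigvalsh(Q).min() >= -1e-9)  # True

# Z^T Q Z reproduces p on random samples, hence p(x, sin(x)) >= 0 for all x.
rng = np.random.default_rng(0)
for x, z in rng.standard_normal((100, 2)):
    Z = np.array([x**2, z**2, x * z])
    p = 2*x**4 + 2*x**3*z - x**2*z**2 + 5*z**4
    assert abs(Z @ Q @ Z - p) < 1e-8
```

Because Q is singular (rank 2), the decomposition uses only two squares, matching the two terms given above.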

4.2. Systems with delays. The final class of systems we consider in this paper is that of systems with time delays. In this case the state is a function rather than a point in R^n, so the time evolution depends on the history of the state. Over the past few years considerable progress has been made in the stability analysis of linear time-delay systems using Lyapunov-Krasovskii (L-K) functionals and frequency-domain methods [13, 19]. In the case of linear time-delay systems, converse Lyapunov theorems exist which provide information on the structure of the candidate Lyapunov functionals one should consider, and the resulting searches can be cast as LMIs and solved efficiently. The analysis of nonlinear time-delay systems, however, is much more difficult.
In this review paper we discuss recent advances that allow the construction of L-K functionals for time-delay systems using sum-of-squares; more details on the methodology can be found in [24, 31]. We build on the notation used so far and borrow additional notation from [13]. For b > a, let C([a, b], R^n) denote the Banach space of continuous functions mapping the interval [a, b] into R^n, equipped with the topology of uniform convergence. For φ ∈ C([a, b], R^n) the norm of φ is defined as ‖φ‖ = sup_{a≤θ≤b} |φ(θ)|, where |·| denotes a norm on R^n. Finally, we let C_γ = {φ ∈ C : ‖φ‖ < γ}.
We consider autonomous Retarded Functional Differential Equations (RFDEs) of the form ẋ(t) = f(x_t). (30)
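To make the function-valued state x_t concrete, here is a minimal fixed-step Euler sketch for the scalar RFDE ẋ(t) = −x(t − τ); the example system and all parameter values are our illustrative choices, not taken from the paper. The "state" at time t is the history segment over [t − τ, t], stored as a rolling buffer.

```python
# Fixed-step Euler simulation of xdot(t) = -x(t - tau) (illustrative example).
tau, dt, T = 0.5, 0.01, 20.0
n_hist = int(round(tau / dt))

# Constant initial history phi(s) = 1 on [-tau, 0].
hist = [1.0] * (n_hist + 1)

for _ in range(int(round(T / dt))):
    x_delayed = hist[-(n_hist + 1)]            # x(t - tau): oldest buffered sample
    hist.append(hist[-1] + dt * (-x_delayed))  # Euler step using the delayed state
    hist = hist[-(n_hist + 1):]                # keep only the segment [t - tau, t]

# For this system the zero solution is asymptotically stable when tau < pi/2,
# so with tau = 0.5 the state decays towards zero.
print(abs(hist[-1]) < 0.05)  # True
```

The rolling buffer is exactly the discretised version of x_t ∈ C([−τ, 0], R^n): the right-hand side depends on the whole stored segment, not just the current point value.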
Let Ω ⊂ C_γ, let V : Ω → R be a continuous function, and let V̇ denote the upper right Dini derivative, defined as V̇(φ) = lim sup_{h→0⁺} (1/h)[V(x_h(φ)) − V(φ)], where x_h(φ) is the solution at time h with initial condition φ. We then have the following theorem [19]: suppose there exist continuous non-decreasing functions u, v, b with u(0) = v(0) = 0 and u(s), v(s) > 0 for s > 0, such that u(|φ(0)|) ≤ V(φ) ≤ v(‖φ‖) and V̇(φ) ≤ −b(|φ(0)|) on Ω. Then the solution x = 0 of (30) is uniformly stable. If, in addition, b(s) is positive definite, then the solution x = 0 of (30) is uniformly asymptotically stable.
Consider now the following functional: V (x t ) = V 0 (x(t)) +