Polynomial Optimization with Applications to Stability Analysis and Control - Alternatives to Sum of Squares

In this paper, we explore the merits of various algorithms for polynomial optimization problems, focusing on alternatives to sum of squares programming. While we refer to advantages and disadvantages of Quantifier Elimination, Reformulation Linear Techniques, Blossoming and Groebner basis methods, our main focus is on algorithms defined by Polya's theorem, Bernstein's theorem and Handelman's theorem. We first formulate polynomial optimization problems as verifying the feasibility of semi-algebraic sets. Then, we discuss how Polya's algorithm, Bernstein's algorithm and Handelman's algorithm reduce the intractable problem of feasibility of semi-algebraic sets to linear and/or semi-definite programming. We apply these algorithms to different problems in robust stability analysis and stability of nonlinear dynamical systems. As one contribution of this paper, we apply Polya's algorithm to the problem of H_infinity control of systems with parametric uncertainty. Numerical examples are provided to compare the accuracy of these algorithms with other polynomial optimization algorithms in the literature.


1.
Introduction. Consider problems such as portfolio optimization, structural design, local stability of nonlinear ordinary differential equations, control of time-delay systems and control of systems with uncertainties. These problems can all be formulated as polynomial optimization or optimization of polynomials. In this paper, we survey how computation can be applied to polynomial optimization and optimization of polynomials. One example of polynomial optimization is β* = min_{x∈R^n} p(x), where p : R^n → R is a multi-variate polynomial. In general, since p(x) is not convex, this is not a convex optimization problem. It is well-known that polynomial optimization is NP-hard [1]. We refer to the dual problem to polynomial optimization as optimization of polynomials; e.g., the dual optimization of polynomials to β* = min_{x∈R^n} p(x) is β* = max_{y∈R} y subject to p(x) − y ≥ 0 for all x ∈ R^n. This problem is convex, yet NP-hard. One approach to find lower bounds on the optimal objective β* is to apply Sum of Squares (SOS) programming [7,8]. A polynomial p is SOS if there exist polynomials q_i such that p(x) = ∑_i q_i(x)^2. If p(x) is SOS, then clearly p(x) ≥ 0 on R^n. While verifying p(x) ≥ 0 on R^n is NP-hard, checking whether p(x) is SOS (and hence non-negative) can be done in polynomial time [7]. It was first shown in [7] that verifying the existence of a SOS decomposition is a Semi-Definite Program (SDP). Fortunately, there exist several algorithms [9,10,11] and solvers [12,13,14] that solve SDPs to arbitrary precision in polynomial time. To find lower bounds on β* = min_{x∈R^n} p(x), consider the SOS program y* = max_{y∈R} y subject to p(x) − y is SOS. Clearly y* ≤ β*. By performing a bisection search on y and using semi-definite programming to verify that p(x) − y is SOS, one can find y*. SOS programming can also be used to find lower bounds on the global minimum of polynomials over a semi-algebraic set S := {x ∈ R^n : g_i(x) ≥ 0, h_j(x) = 0} generated by g_i, h_j ∈ R[x].
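The bisection search described above can be sketched independently of the particular certificate. Below is a minimal Python sketch, assuming a hypothetical feasibility oracle `is_feasible` that stands in for the SOS test of whether p(x) − y admits a certificate; in practice this oracle would call an SDP solver.

```python
def bisect_lower_bound(is_feasible, lo, hi, tol=1e-6):
    """Find (to tolerance tol) the largest y in [lo, hi] with is_feasible(y) True.

    is_feasible(y) should return True iff a certificate exists that
    p(x) - y >= 0 (e.g., p - y is SOS, checked by an SDP solver).
    Assumes monotonicity: feasible below y*, infeasible above.
    """
    assert is_feasible(lo), "lo must be a known feasible lower bound"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_feasible(mid):
            lo = mid          # certificate found: y* >= mid
        else:
            hi = mid          # no certificate found: y* < mid
    return lo

# Toy oracle standing in for an SOS feasibility test; here y* = 3.
y_star = bisect_lower_bound(lambda y: y <= 3.0, 0.0, 10.0)
```

Each oracle call costs one SDP solve, so the bisection adds only a logarithmic factor in the desired accuracy.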
Given problem (1) with x ∈ S, Positivstellensatz results [15,16,17] define a sequence of SOS programs whose objective values form a sequence of lower bounds on the global minimum β * . It is shown that under certain conditions on S [16], the sequence of lower bounds converges to the global minimum. See [18] for a comprehensive discussion on the Positivstellensatz.
In this paper, we explore the merits of some of the alternatives to SOS programming. There exist several results in the literature that can be applied to polynomial optimization; e.g., Quantifier Elimination (QE) algorithms [19] for testing the feasibility of semi-algebraic sets, Reformulation Linear Techniques (RLTs) [20,21] for linearizing polynomial optimization problems, Polya's result [2] for positivity on the positive orthant, Bernstein's [22,23] and Handelman's [24] results for positivity on simplices and convex polytopes, and other results based on Groebner bases [3] and Blossoming [4]. We will discuss Polya's, Bernstein's and Handelman's results in more depth. The discussion of the other results is beyond the scope of this paper; however, the ideas behind these results can be summarized as follows.
QE algorithms apply to First-Order Logic (FOL) formulae, e.g., ∀x ∃y ((f(x, y) ≥ 0) ⇒ ((g(a) < xy) ∧ (a > 2))), to eliminate the quantified variables x and y (preceded by quantifiers ∀, ∃) and construct an equivalent FOL formula in terms of the unquantified variable a. The key result underlying QE algorithms is the Tarski-Seidenberg theorem [5]. The theorem implies that for every formula of the form ∀x ∈ R^n ∃y ∈ R^m (f_i(x, y, a) ≥ 0), where f_i ∈ R[x, y, a], there exists an equivalent quantifier-free formula in terms of a. QE implementations [25,26] combined with a bisection search yield the exact solution to optimization of polynomials; however, the complexity scales doubly exponentially in the dimension of the variables x, y.
RLT was initially developed to find the convex hull of feasible solutions of zero-one linear programs [27]. It was later generalized to address polynomial optimization problems of the form min_x p(x) subject to x ∈ [0, 1]^n ∩ S [20]. RLT constructs a δ-hierarchy of linear programs by performing two steps. In the first step (reformulation), RLT introduces the new constraints ∏_{i∈I} x_i ∏_{j∈J} (1 − x_j) ≥ 0 for all index sets I, J with |I| + |J| = δ. In the second step (linearization), RLT defines a linear program by replacing every product of variables by a new variable. By increasing δ and repeating the two steps, one can construct a δ-hierarchy of lower bounding linear programs. A combination of RLT and branch-and-bound partitioning of [0, 1]^n was developed in [21] to achieve tighter lower bounds on the global minimum. For a survey of different extensions of RLT see [6].
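The two RLT steps can be illustrated on a toy instance, min x1·x2 over [0,1]^2 with δ = 2; this is a sketch under those assumptions, not the general construction. Products of the bound factors are formed (reformulation), and the product x1·x2 is replaced by a new variable X12 (linearization), giving an LP whose optimum lower-bounds the polynomial program.

```python
from scipy.optimize import linprog

# Reformulation: for delta = 2, products of the bound factors
#   x1 >= 0, 1 - x1 >= 0, x2 >= 0, 1 - x2 >= 0
# give x1*x2 >= 0, x1*(1-x2) >= 0, (1-x1)*x2 >= 0, (1-x1)*(1-x2) >= 0.
# Linearization: replace the product x1*x2 by the new variable X12.
# Decision variables: [x1, x2, X12]; objective min X12 (stands for x1*x2).
c = [0.0, 0.0, 1.0]
A_ub = [
    [0.0, 0.0, -1.0],   # -X12           <= 0   (from x1*x2 >= 0)
    [-1.0, 0.0, 1.0],   # X12 - x1       <= 0   (from x1*(1-x2) >= 0)
    [0.0, -1.0, 1.0],   # X12 - x2       <= 0   (from (1-x1)*x2 >= 0)
    [1.0, 1.0, -1.0],   # x1 + x2 - X12  <= 1   (from (1-x1)*(1-x2) >= 0)
]
b_ub = [0.0, 0.0, 0.0, 1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3)
lower_bound = res.fun   # LP lower bound on min x1*x2 over [0,1]^2
```

Here the LP bound coincides with the true minimum 0; in general the bound tightens as δ grows.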
Groebner bases can be used to reduce a polynomial optimization problem to the problem of finding the roots of univariate polynomials [28]. First, one needs to construct the system of polynomial equations ∇L(x, λ, µ) = [f_1(x, λ, µ), · · · , f_N(x, λ, µ)]^T = 0, where L := p(x) + ∑_i λ_i g_i(x) + ∑_i µ_i h_i(x) is the Lagrangian. It is well-known that the set of solutions to ∇L(x, λ, µ) = 0 contains the set of extrema of the polynomial optimization min_{x∈S} p(x). Using the elimination property [3] of Groebner bases, the minimal Groebner basis of the ideal of f_1, · · · , f_N defines a triangular-form system of polynomial equations. This system can be solved by calculating one variable at a time and back-substituting into the other polynomials. The most computationally expensive part is the calculation of the Groebner basis, which in the worst case scales doubly exponentially in the number of decision variables.
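This procedure can be sketched with a computer algebra system. The example polynomial below is hypothetical; for an unconstrained problem the Lagrangian reduces to p itself, and a lexicographic Groebner basis of the gradient system gives the triangular form.

```python
import sympy as sp

x, y = sp.symbols('x y')
p = x**2 + y**2 + x*y - x          # hypothetical unconstrained objective

# Stationarity: grad p = 0 defines the candidate extrema.
grads = [sp.diff(p, v) for v in (x, y)]

# A lex Groebner basis puts the system in triangular form, so it can be
# solved one variable at a time by back-substitution.
G = sp.groebner(grads, y, x, order='lex')
solutions = sp.solve(list(G), [x, y], dict=True)

# Evaluate p at each stationary point; the global minimum is among them
# (here p is coercive, so the minimum is attained).
values = [p.subs(s) for s in solutions]
p_min = min(values)
```

For this quadratic example the gradient system is linear and has a single solution; for higher-degree objectives the triangular system yields univariate polynomials whose roots must be extracted numerically.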
The blossoming approach involves mapping the space of polynomials to the space of multi-affine functions (polynomials that are affine in each variable). By using this map and the diagonal property of blossoms [4], one can reformulate any polynomial optimization min x∈S p(x) as an optimization of multi-affine functions. In [30], it is shown that the dual to optimization of multi-affine functions over a hypercube is a linear program. The optimal objective value of this linear program is a lower bound on the minimum of p(x) over the hypercube.
While the discussed algorithms have advantages and disadvantages, we focus on Polya's, Bernstein's and Handelman's results on parameterization of positive polynomials. Polya's theorem yields a basis to parameterize the cone of polynomials that are positive on the positive orthant. Bernstein's and Handelman's theorems yield bases to parameterize the space of polynomials that are positive on simplices and convex polytopes, respectively. Similar to SOS programming, one can find Polya's, Bernstein's and Handelman's parameterizations by solving a sequence of Linear Programs (LPs) and/or SDPs. However, unlike the SDPs associated with SOS programming, the SDPs associated with these theorems have a block-diagonal structure. This structure has been exploited in [29] to design parallel algorithms for optimization of polynomials with large degrees and numbers of variables. Unfortunately, unlike SOS programming, Bernstein's, Handelman's and the original Polya's theorems do not parameterize polynomials with zeros in the positive orthant. Yet, there exist some variants of Polya's theorem which consider zeros at the corners [31] and edges [32] of simplices. Moreover, there exist other variants of Polya's theorem which provide certificates of positivity on hypercubes [33,34], on intersections of semi-algebraic sets with the positive orthant [35] and on the entire R^n [36], or which apply to polynomials with rational exponents [37].
We organize this paper as follows. In Section 2, we place Polya's, Bernstein's, Handelman's and the Positivstellensatz results in the broader topic of research on polynomial positivity. In Section 3, we first define polynomial optimization and optimization of polynomials. Then, we formulate optimization of polynomials as the problem of verifying the feasibility of semi-algebraic sets. To verify the feasibility of different semi-algebraic sets, we present algorithms based on the different variants of Polya's, Bernstein's, Handelman's and Positivstellensatz results. In Section 4, we discuss how these algorithms apply to robust stability analysis [38,29,39] and nonlinear stability [41,42,43,44]. Finally, one contribution of this paper is to apply Polya's algorithm to the problem of H ∞ control synthesis for systems with parametric uncertainties.
2. Background on positivity of polynomials. In 1900, Hilbert published a list of mathematical problems, one of which was: For every non-negative f ∈ R[x], does there exist some non-zero q ∈ R[x] such that q^2 f is a sum of squares? In other words, is every non-negative polynomial a sum of squares of rational functions? This question was motivated by his earlier works [48,49], in which he proved: 1. Every non-negative bi-variate degree 4 homogeneous polynomial is a SOS of three polynomials. 2. Every bi-variate non-negative polynomial is a SOS of four rational functions. 3. Not every homogeneous polynomial with more than two variables and degree greater than 5 is a SOS of polynomials. Eighty years later, Motzkin constructed a non-negative degree 6 polynomial with three variables which is not SOS [50]: M(x_1, x_2, x_3) = x_1^4 x_2^2 + x_1^2 x_2^4 − 3 x_1^2 x_2^2 x_3^2 + x_3^6. Reference [51] generalized Motzkin's example as follows: polynomials of the form (∏_{i=1}^n x_i^2) f(x_1, · · · , x_n) + 1 are not SOS if the polynomial f, of degree < 2n, is not SOS. Hence, although the non-homogeneous Motzkin polynomial M(x_1, x_2, 1) = x_1^2 x_2^2 (x_1^2 + x_2^2 − 3) + 1 is non-negative, it is not SOS. In 1927, Artin answered Hilbert's problem in the following theorem [52].
Theorem 2.1. (Artin's theorem) A polynomial f ∈ R[x] is non-negative on R^n if and only if f(x) = N(x)/D(x) for some SOS polynomials N and D. Although Artin settled Hilbert's problem, his proof was neither constructive nor gave a characterization of the numerator N and denominator D. In 1939, Habicht [54] showed that if f is positive definite and can be expressed as f(x_1, · · · , x_n) = g(x_1^2, · · · , x_n^2) for some polynomial g, then one can choose the denominator D = (∑_{i=1}^n x_i^2)^e for some e ≥ 0. Moreover, he showed that with this choice of D, the numerator N can be expressed as a sum of squares of monomials. Habicht used Polya's theorem [53] to obtain the above characterizations for N and D.
Theorem 2.2. (Polya's theorem) Suppose a homogeneous polynomial p satisfies p(x) > 0 for all x ∈ {x ∈ R^n : x_i ≥ 0, ∑_{i=1}^n x_i ≠ 0}. Then p(x) can be expressed as p(x) = N(x)/D(x), where N(x) and D(x) are homogeneous polynomials with all positive coefficients. For every such homogeneous p(x) and some e ≥ 0, the denominator D(x) can be chosen as (x_1 + · · · + x_n)^e.
Suppose f is homogeneous, positive on the positive orthant, and can be expressed as f(x_1, · · · , x_n) = g(x_1^2, · · · , x_n^2) for some homogeneous polynomial g. By Polya's theorem, g(y) = N(y)/D(y), where y := (y_1, · · · , y_n) and the polynomials N and D have all positive coefficients. By Theorem 2.2 we may choose D(y) = (y_1 + · · · + y_n)^e, so that f(x) = N(x_1^2, · · · , x_n^2)/(x_1^2 + · · · + x_n^2)^e. Since N has all positive coefficients, the numerator is a sum of squares of monomials. Bernstein's theorem states that if a univariate polynomial f is positive on [−1, 1], then for some d ∈ N it can be written as f(x) = ∑_{i+j≤d} b_{ij} (1 + x)^i (1 − x)^j with b_{ij} ≥ 0. Reference [57] uses Goursat's transform of f to find an upper bound on d. The bound is a function of the minimum of f on [−1, 1]; however, computing the minimum itself is intractable. In 1988, Handelman [58] used products of affine functions as a basis (the Handelman basis) to extend Bernstein's theorem to multivariate polynomials which are positive on convex polytopes. Theorem 2.4. (Handelman's theorem) Given w_i ∈ R^n and u_i ∈ R, define the polytope Γ_K := {x ∈ R^n : w_i^T x + u_i ≥ 0, i = 1, · · · , K}. If a polynomial f(x) > 0 on Γ_K, then there exist b_α ≥ 0, α ∈ N^K, such that for some d ∈ N, f(x) = ∑_{α∈N^K : |α|≤d} b_α (w_1^T x + u_1)^{α_1} · · · (w_K^T x + u_K)^{α_K}. (3) In [22], first a standard triangulation of a simplex (the convex hull of vertices in R^n) is developed to decompose an arbitrary simplex into sub-simplices. Then, an algorithm is proposed to ensure positivity of a polynomial f on the simplex by finding an expression of Form (3) for f on each sub-simplex. An upper bound on the degree d in (3) was provided in [23] as a function of the minimum of f on the simplex, the number of variables of f, the degree of f and the maximum of certain affine combinations of the coefficients [23]. Reference [22] also provides a bound on d as a function of max_α b_α and the minimum of f over the polytope.
An extension of Handelman's theorem was made by Schweighofer [59] to verify non-negativity of polynomials over compact semi-algebraic sets. Schweighofer used the cone of polynomials in (4) to parameterize any polynomial f with the following properties: 1. f is non-negative over the compact semi-algebraic set S; 2. f = q_1 p_1 + q_2 p_2 + · · · for some q_i in the cone (4) and for some polynomials p_i which are positive over S ∩ {x ∈ R^n : f(x) = 0}, where S ∩ {x ∈ R^n : f(x) = 0} is compact. Define the following set of polynomials which are positive on S.
If f ≥ 0 on S and there exist q_i ∈ Θ_d and polynomials p_i as in the conditions above, then f admits a representation in terms of the cone (4). Under the assumption that the g_i are affine functions, the p_i = 1 and the s_λ are constant, Schweighofer's theorem gives the same parameterization of f as Handelman's theorem. Another special case of Schweighofer's theorem is when λ ∈ {0, 1}^K; in this case, Schweighofer's theorem reduces to Schmüdgen's Positivstellensatz [17]. Schmüdgen's Positivstellensatz states that the cone Λ_g := {∑_{λ∈{0,1}^K} s_λ g_1^{λ_1} · · · g_K^{λ_K} : s_λ are SOS} (5) is sufficient to parameterize every f > 0 over the semi-algebraic set S generated by {g_1, · · · , g_K}. Unfortunately, the cone Λ_g contains 2^K products of the g_i; thus, finding a representation of Form (5) for f requires a search for at most 2^K SOS polynomials.
Putinar's Positivstellensatz [16] reduces the complexity of Schmüdgen's parameterization in the case where the quadratic module of the g_i, defined in (6), is Archimedean.
Theorem 2.6. (Putinar's Positivstellensatz) Let S := {x ∈ R^n : g_i(x) ≥ 0, g_i ∈ R[x] for i = 1, · · · , K} and define the quadratic module M_g := {s_0 + ∑_{i=1}^K s_i g_i : s_i are SOS}. (6) If there exists some N > 0 such that N − ∑_{i=1}^n x_i^2 ∈ M_g, then every polynomial f > 0 on S belongs to M_g. Finding a representation of Form (6) for f only requires a search for K + 1 SOS polynomials using SOS programming. Verifying the Archimedean condition N − ∑_{i=1}^n x_i^2 ∈ M_g in Theorem 2.6 is also a SOS program. Observe that the Archimedean condition implies the compactness of S. The following theorem lifts the compactness requirement for the semi-algebraic set S.
Theorem 2.7. (Stengle's Positivstellensatz) Let S := {x ∈ R^n : g_i(x) ≥ 0, g_i ∈ R[x] for i = 1, · · · , K} and define the cone Λ_g as in (5). If f > 0 on S, then there exist p, q ∈ Λ_g such that qf = p + 1.
Notice that the Parameterization (3) in Handelman's theorem is affine in f and the coefficients b_α. Likewise, the parameterizations in Theorems 2.5 and 2.6, i.e., f = ∑_λ s_λ g_1^{λ_1} · · · g_K^{λ_K} and f = s_0 + ∑_i s_i g_i, are affine in f, the s_λ and the s_i. Thus, one can use convex optimization to find b_α, s_λ, s_i and f. Unfortunately, since the parameterization qf = p + 1 in Stengle's Positivstellensatz is non-convex (bilinear in q and f), it is more difficult to verify the feasibility of qf = p + 1 than of Handelman's and Putinar's parameterizations.
For a comprehensive discussion on the Positivstellensatz and other polynomial positivity results in algebraic geometry see [60,61,62].
3. Algorithms for Polynomial Optimization. In this section, we first define polynomial optimization and optimization of polynomials, and formulate the latter as an equivalent feasibility problem over semi-algebraic sets. Then, we introduce some algorithms to verify the feasibility of different semi-algebraic sets. We observe that combining these algorithms with bisection yields lower bounds on the optimal objective values of polynomial optimization problems.

3.1. Polynomial optimization and optimization of polynomials.
We define polynomial optimization problems as β* = min_{x∈R^n} f(x) subject to g_i(x) ≥ 0 and h_j(x) = 0, (7) where f, g_i, h_j ∈ R[x] are given. For example, an integer program with given a_i ∈ R^n, b_i ∈ R and p ∈ R[x] can be formulated as a polynomial optimization problem by encoding its integer constraints as polynomial equality constraints h_j(x) = 0. We define optimization of polynomials problems as in (9), where c ∈ R^n and F_i(y) = ∑_α F_{i,α} y^α, where the F_{i,α} ∈ R^q are either given or are decision variables. Optimization of polynomials can be used to find β* in (7). For example, we can compute the optimal objective value α* of a polynomial by solving Problem (10), which can be expressed in the Form (9) by a suitable choice of c and the F_i. Optimization of polynomials (9) can be formulated as the feasibility problem (11) for a semi-algebraic set S_γ, where c, F, g_i and h_j are given. The question of feasibility of a semi-algebraic set is NP-hard [1]. However, if we have a test to verify S_γ = ∅, we can find γ* by performing bisection on γ. In Section 3.2, we use the results of Section 2 to provide sufficient conditions, in the form of Linear Matrix Inequalities (LMIs), for S_γ = ∅.

3.2.
Algorithms. In this section, we discuss how to find lower bounds on β * for different classes of polynomial optimization problems. The results in this section are primarily expressed as methods for verifying S γ = ∅ and can be used with bisection to solve polynomial optimization problems.

Case 1. Optimization over the standard simplex ∆ n
Define the standard unit simplex as ∆_n := {x ∈ R^n : x_i ≥ 0, ∑_{i=1}^n x_i = 1}. (12) Consider the polynomial optimization problem γ* = min_{x∈∆_n} f(x), where f is a homogeneous polynomial of degree d. If f is not homogeneous, we can homogenize it without changing its values on ∆_n by multiplying each monomial x_1^{α_1} · · · x_n^{α_n} in f by (∑_{i=1}^n x_i)^{d−|α|}, since ∑_{i=1}^n x_i = 1 for all x ∈ ∆_n. To find γ*, one can solve the following optimization of polynomials problem (13). It can be shown that Problem (13) is equivalent to verifying the feasibility of a set S_γ. For a given γ, we use the following version of Polya's theorem to verify S_γ = ∅.
Theorem 3.1. (Polya's theorem, simplex version) S_γ = ∅ if for some e ≥ 0 the product (x_1 + · · · + x_n)^e (f(x) − γ (x_1 + · · · + x_n)^d) (14) has all positive coefficients. We can compute lower bounds on γ* by performing bisection on γ. For each γ in the bisection, if there exists e ≥ 0 such that all of the coefficients of (14) are positive, then γ ≤ γ*.
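This coefficient test can be sketched numerically. The example polynomial p(x, y) = x² − xy + y² below is hypothetical (homogeneous and positive on the simplex); the loop searches for the smallest Polya exponent e.

```python
import sympy as sp

def polya_exponent(p, xs, max_e=20):
    """Smallest e such that (x1+...+xn)^e * p has a positive coefficient
    on every monomial of full degree, certifying p > 0 on the unit
    simplex.  Returns None if no e <= max_e works (inconclusive)."""
    s = sum(xs)
    d = sp.Poly(p, *xs).total_degree()
    for e in range(max_e + 1):
        q = sp.Poly(sp.expand(s**e * p), *xs)
        deg = d + e
        # enumerate all monomials of total degree exactly d + e
        monoms = [m for m in sp.itermonomials(xs, deg)
                  if sp.Poly(m, *xs).total_degree() == deg]
        if all(q.coeff_monomial(m) > 0 for m in monoms):
            return e
    return None

x, y = sp.symbols('x y')
e = polya_exponent(x**2 - x*y + y**2, [x, y])   # homogeneous, > 0 on the simplex
```

For this example, e = 1 and e = 2 fail because some full-degree monomials are missing (zero coefficients), illustrating why the original Polya theorem excludes polynomials with zeros on the simplex boundary.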
Case 2. Optimization over the hypercube Φ_n: Given r_i > 0, define the hypercube Φ_n := {x ∈ R^n : |x_i| ≤ r_i, i = 1, · · · , n}. Define the set of multi-homogeneous polynomials p(x, y) of degree vector d ∈ N^n as those polynomials in which, for each i, every monomial has combined degree d_i in the pair of variables (x_i, y_i). It is shown in [34] that for every polynomial f(z) with z ∈ Φ_n, there exists a multi-homogeneous polynomial p such that positivity of p on the multi-simplex (the product of n unit simplices in the pairs (x_i, y_i)) is equivalent to positivity of f on Φ_n. For example, a non-homogeneous polynomial f(z_1, z_2) can first be associated with a polynomial q; by homogenizing q in each pair of variables we obtain a multi-homogeneous polynomial with degree vector d = (2, 1). See [34] for an algorithm which computes the multi-homogeneous polynomial p for an arbitrary f defined on a hypercube. Now consider the polynomial optimization problem γ* = min_{x∈Φ_n} f(x). To find γ*, one can solve the following feasibility problem.
For a given γ, one can use the following multi-simplex version of Polya's theorem to verify S_γ = ∅. Theorem 3.2. (Polya's theorem, multi-simplex version) S_γ = ∅ if there exists e ≥ 0 such that all of the coefficients of the product in (18) are positive (positive definite in the matrix-valued case).
First, by using the algorithm in [34] we obtain the multi-homogeneous form p of the polynomial f in (17).
The resulting product (18) must have all positive coefficients, where d_i is the degree of x_i in p(x, y). We can compute lower bounds on γ* by performing bisection on γ. For each γ in the bisection, if there exists e ≥ 0 such that all of the coefficients of (18) are positive, then γ ≤ γ*.
Case 3. Optimization over the convex polytope Γ_K: Given w_i ∈ R^n and u_i ∈ R, define the convex polytope Γ_K := {x ∈ R^n : w_i^T x + u_i ≥ 0, i = 1, · · · , K}. Consider the polynomial optimization problem γ* = min_{x∈Γ_K} f(x), where f is a polynomial of degree d_f. To find γ*, one can solve the following feasibility problem.
Given γ, one can use Handelman's theorem (Theorem 2.4) to verify S_γ = ∅ as follows. Consider the Handelman basis associated with the polytope Γ_K, defined as B_s := {(w_1^T x + u_1)^{α_1} · · · (w_K^T x + u_K)^{α_K} : α ∈ N^K, |α| ≤ s}. The basis B_s spans the space of polynomials of degree s or less; however, it is not minimal.
One then searches for coefficients b_α ≥ 0 such that Conditions (19) and (20) hold; such coefficients can be determined using linear programming. If (19) and (20) are infeasible for some d, then one can increase d until (19) and (20) hold. However, computing upper bounds on the required d is difficult [63,23]. Similar to Cases 1 and 2, we can compute lower bounds on γ* by performing bisection on γ. For each γ in the bisection, if there exists d ≥ d_f such that Conditions (19) and (20) hold, then γ ≤ γ*.
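The LP search for the coefficients b_α can be sketched on a one-dimensional instance: the polytope [0, 1] described by the facets x ≥ 0 and 1 − x ≥ 0, and the hypothetical polynomial f(x) = 2x² − 2x + 1, which is positive on [0, 1]. Matching the coefficients of f against the Handelman basis with b_α ≥ 0 is a linear feasibility problem.

```python
import numpy as np
import sympy as sp
from scipy.optimize import linprog

x = sp.symbols('x')
f = 2*x**2 - 2*x + 1                # f > 0 on [0, 1] (minimum 1/2 at x = 1/2)
facets = [x, 1 - x]                 # w_i^T x + u_i >= 0 describing [0, 1]
d = 2                               # Handelman degree bound to try

# Handelman basis: products x^i (1-x)^j with i + j <= d.
basis = [facets[0]**i * facets[1]**j
         for i in range(d + 1) for j in range(d + 1 - i)]

# Match coefficients of f = sum_alpha b_alpha * basis_alpha as equalities.
deg = max(d, sp.Poly(f, x).degree())
A_eq = np.array([[float(sp.Poly(b, x).coeff_monomial(x**k)) for b in basis]
                 for k in range(deg + 1)])
b_eq = np.array([float(sp.Poly(f, x).coeff_monomial(x**k))
                 for k in range(deg + 1)])

# Feasibility LP: any b >= 0 with A_eq b = b_eq is a Handelman certificate.
res = linprog(np.zeros(len(basis)), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * len(basis))
```

Here d = 2 suffices: one feasible certificate is f = x² + (1 − x)². For polynomials whose minimum over the polytope is close to zero, the required d (and hence the LP size) grows.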

Case 4: Optimization over compact semi-algebraic sets:
Recall that we defined a semi-algebraic set as S := {x ∈ R^n : g_i(x) ≥ 0, h_j(x) = 0}. Suppose S is compact. Consider the polynomial optimization problem γ* = min_{x∈S} f(x). Define the cone (21) of polynomials which are positive over S. If the Cone (21) is Archimedean, then the solution to the following SOS program is a lower bound on γ*: given d ∈ N, maximize γ subject to the constraint in (22). For given γ ∈ R and d ∈ N, Problem (22) is the following linear matrix inequality.
where Q_i, P_j ∈ S^N, where S^N is the subspace of symmetric matrices in R^{N×N} and N := (n+d)!/(n! d!), and where z_d(x) is the vector of the monomial basis of degree d or less.
If the Cone (21) is not Archimedean, then we can use Schmüdgen's Positivstellensatz to obtain the following SOS program, whose solution satisfies γ_d ≤ γ*.
Case 5: Tests for non-negativity on R^n: The following theorem [54] gives a test for non-negativity of a class of homogeneous polynomials.
Theorem 3.3. (Habicht's theorem) For every positive definite homogeneous polynomial f that can be expressed as f(x_1, · · · , x_n) = g(x_1^2, · · · , x_n^2) for some polynomial g, there exists e ≥ 0 such that all of the coefficients of (∑_{i=1}^n x_i^2)^e f(x_1, · · · , x_n) (25) are positive. In particular, the product is a sum of squares of monomials.
Based on Habicht's theorem, a test for non-negativity of a homogeneous polynomial f of the form f(x_1, · · · , x_n) = g(x_1^2, · · · , x_n^2) is to multiply it repeatedly by ∑_{i=1}^n x_i^2. If for some e ∈ N the Product (25) has all positive coefficients, then f ≥ 0.
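A sketch of this test on the hypothetical example f(x, y) = x⁴ − x²y² + y⁴ = g(x², y²) with g(a, b) = a² − ab + b². Here we check that every nonzero coefficient of the product is positive; since only even monomials appear, positive coefficients certify that the product is a sum of squares of monomials.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**4 - x**2*y**2 + y**4      # f = g(x^2, y^2) with g(a,b) = a^2 - a*b + b^2

s = x**2 + y**2
for e in range(10 + 1):
    # Multiply repeatedly by x^2 + y^2 and inspect the coefficients.
    q = sp.Poly(sp.expand(s**e * f), x, y)
    if all(c > 0 for c in q.coeffs()):
        break                    # all nonzero coefficients positive: f >= 0
else:
    e = None                     # inconclusive up to the tried bound
```

For this example a single multiplication already clears the negative coefficient: (x² + y²) f = x⁶ + y⁶, which is visibly a sum of squares of monomials.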
An alternative test for non-negativity on R n is given in the following theorem [36].
Based on Theorem 3.4, we can propose the following test for non-negativity of polynomials over the cone Λ_e := {x ∈ R^n : sgn(x_i) = e_i, i = 1, · · · , n} for some e ∈ E_n. Multiply a given polynomial f repeatedly by 1 + e^T x. If there exists λ_e ≥ 0 such that every coefficient c_α of (1 + e^T x)^{λ_e} f(x) satisfies sgn(c_α) = e_1^{α_1} · · · e_n^{α_n}, then f(x) ≥ 0 for all x ∈ Λ_e. Since R^n = ∪_{e∈E_n} Λ_e, we can repeat the test 2^n times to obtain a test for non-negativity of f over R^n.
The second part of Theorem 3.4 gives a solution to Hilbert's problem stated in Section 2. See [36] for an algorithm which computes the polynomials N and D.

4.
Applications of polynomial optimization. In this section, we discuss how the algorithms in Section 3.2 apply to stability analysis and control of dynamical systems. We consider robust stability analysis of linear systems with parametric uncertainty, stability of nonlinear systems, robust controller synthesis for systems with parametric uncertainty and stability of systems with time-delay.
4.1. Robust stability analysis. Consider the linear system ẋ(t) = A(α)x(t), (27) where A(α) ∈ R^{n×n} is a matrix-valued polynomial and α ∈ Q ⊂ R^l is the vector of uncertain parameters, where Q is compact. From converse Lyapunov theory and the existence of polynomial solutions for feasible parameter-dependent LMIs [66], it follows that System (27) is asymptotically stable for all α ∈ Q if and only if there exists a matrix-valued polynomial P(α) ∈ S^n such that P(α) > 0 and A^T(α)P(α) + P(α)A(α) < 0 for all α ∈ Q. (28) If Q is a semi-algebraic set, then asymptotic stability of System (27) is equivalent to positivity of γ* in the following optimization of polynomials problem for some d ∈ N.
where we have denoted α_1^{β_1} · · · α_l^{β_l} by α^β and where the index set E_d is defined in (30). Given stable systems of Form (27) with different classes of polynomials A(α), we discuss different algorithms for solving (29). Solutions to (29) yield Lyapunov functions of the form V = x^T (∑_{β∈E_d} C_β α^β) x proving stability of System (27).
Consider the case where A(α) belongs to the polytope Λ_l := {∑_{i=1}^l A_i α_i : α ∈ ∆_l}, where the A_i are the vertices of the polytope and ∆_l is the standard unit simplex defined as in (12). Given A(α) ∈ Λ_l, we address the problem of stability analysis of System (27) for all α ∈ ∆_l. A sufficient condition for asymptotic stability of System (27) is the existence of a matrix P > 0 such that the Lyapunov inequality A^T(α)P + PA(α) < 0 holds for all α ∈ ∆_l. If A(α) = ∑_{i=1}^l A_i α_i, then by linearity of A(α) in α, the condition A^T(α)P + PA(α) < 0 for all α ∈ ∆_l is equivalent to positivity of γ* in the following semi-definite program.
Any P ∈ S^n that satisfies the LMI in (31) for some γ > 0 yields a Lyapunov function of the form V = x^T P x. However, for many systems this class of Lyapunov functions can be conservative (see Numerical Example 1). More general classes of Lyapunov functions, such as the parameter-dependent functions of the forms V = x^T (∑_{i=1}^l P_i α_i) x [38,67,68] and V = x^T (∑_{β∈E_d} P_β α^β) x [39,29], have been utilized in the literature. As shown in [38], given A_i ∈ R^{n×n}, x^T P(α) x with P(α) = ∑_{i=1}^l P_i α_i is a Lyapunov function for (27) with α ∈ ∆_l if the following LMI conditions hold.
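For the single-P vertex condition above, a numerical sketch (with hypothetical Hurwitz vertices A1 and A2) is to solve the Lyapunov equation at one vertex and check the resulting P at the others; since A(α) is affine in α, negativity at all vertices extends to the whole simplex. This is a heuristic for finding a candidate P, not the SDP (31) itself, which would search over all P.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical Hurwitz vertices of the polytope A(alpha) = a1*A1 + a2*A2.
A1 = np.array([[-2.0, 0.0], [1.0, -3.0]])
A2 = np.array([[-1.0, 1.0], [0.0, -2.0]])

# Solve A1^T P + P A1 = -I for P (a heuristic choice of common candidate).
P = solve_continuous_lyapunov(A1.T, -np.eye(2))

def lyap_derivative_negdef(A, P):
    """True if A^T P + P A is negative definite."""
    M = A.T @ P + P @ A
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

# If the LMI holds at every vertex, linearity in alpha extends it to the
# whole simplex, so V = x^T P x is a common quadratic Lyapunov function.
common = all(lyap_derivative_negdef(A, P) for A in (A1, A2))
```

When this heuristic fails (`common` is False), nothing is concluded; one must fall back on the SDP search over P, or on the richer parameter-dependent classes discussed next.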
In [69], it is shown that given continuous functions A_i, B_i : ∆_l → R^{n×n} and a continuous function R : ∆_l → S^n, if there exists a continuous function X : ∆_l → S^n satisfying (32), then there exists a homogeneous polynomial Y : ∆_l → S^n which also satisfies (32). Motivated by this result, [39] uses the class of homogeneous polynomials of the form (33), with the index set I_d defined in (34), to provide the following necessary and sufficient LMI condition for stability of System (27): System (27) is asymptotically stable for all α ∈ ∆_l if and only if there exist some d ≥ 0 and positive definite P_β ∈ S^n, β ∈ I_d, such that the conditions in (35) hold, where e_i := [0 · · · 0 1 0 · · · 0] ∈ N^l (with the 1 in the i-th position), i = 1, · · · , l, form the canonical basis for R^l.

Numerical Example 1: Consider the system ẋ(t) = A(α, η)x(t) from [40], with (α_1, α_2, α_3) ∈ ∆_3 and η ≥ 0. We would like to find η* = max η such that ẋ(t) = A(α, η)x(t) is asymptotically stable for all η ∈ [0, η*]. By performing bisection on η and verifying the inequalities in (35) for each η in the bisection algorithm, we obtained lower bounds on η* (see Figure 1) using d = 0, 1, 2 and 3. For comparison, we have also plotted the lower bounds computed in [40] using the Complete Square Matricial Representation (CSMR) of the Lyapunov inequalities in (28). Both methods found max η = 2.224; however, the method in [40] used a lower d to find this bound.

Figure 1. Lower bounds for η* computed using the LMIs in (35) and the method in [40].

Case 2. A(α) is a polynomial with α ∈ ∆_l:
Given A_h ∈ R^{n×n} for h ∈ I_d as defined in (34), we address the problem of stability analysis of System (27) with A(α) = ∑_{h∈I_d} A_h α_1^{h_1} · · · α_l^{h_l} for all α ∈ ∆_l. Using Lyapunov theory, this problem can be formulated as the following optimization of polynomials problem.
System (27) is asymptotically stable for all α ∈ ∆_l if and only if γ* > 0. As in Case 1 of Section 3.2, one can apply a bisection algorithm on γ and use Polya's theorem (Theorem 3.1) as a test for feasibility of Constraint (36) to find lower bounds on γ*. Suppose P and A are homogeneous matrix-valued polynomials. Given γ ∈ R, it follows from Theorem 3.1 that the inequality condition in (36) holds for all α ∈ ∆_l if there exists some e ≥ 0 such that the products in (37) and (38) have all positive coefficients, where d_p is the degree of P and d_a is the degree of A. Let P and A be of the forms P(α) = ∑_{h∈I_{d_p}} P_h α_1^{h_1} · · · α_l^{h_l} and A(α) = ∑_{h∈I_{d_a}} A_h α_1^{h_1} · · · α_l^{h_l}. (39) By combining (39) with (37) and (38), it follows that for a given γ ∈ R the inequality condition in (36) holds for all α ∈ ∆_l if there exists some e ≥ 0 such that (40) and (41) have all positive coefficients, i.e., the conditions in (42) hold: ∑_{h∈I_{d_p}} f_{h,g} P_h > 0 for all g ∈ I_{d_p+e}, together with the analogous negativity conditions on the coefficients of (41), where we define f_{h,g} ∈ R as the coefficient of P_h α_1^{g_1} · · · α_l^{g_l} after expanding (40), and M_{h,q} ∈ R^{n×n} as the coefficient of P_h α_1^{q_1} · · · α_l^{q_l} after expanding (41). See [70] for recursive formulae for f_{h,g} and M_{h,q}. Feasibility of Conditions (42) can be verified by the following semi-definite program.
where we have denoted the elements of I_{d_p+e} by g^{(i)} ∈ N^l, i = 1, · · · , L, and the elements of I_{d_p+d_a+e} by q^{(i)}, i = 1, · · · , M. For any γ ∈ R, if there exists e ≥ 0 such that SDP (43) is feasible, then γ ≤ γ*. If for a positive γ there exists e ≥ 0 such that SDP (43) has a solution P_h, h ∈ I_{d_p}, then V = x^T (∑_{h∈I_{d_p}} P_h α_1^{h_1} · · · α_l^{h_l}) x is a Lyapunov function proving stability of ẋ(t) = A(α)x(t), α ∈ ∆_l. SDPs such as (43) can be solved in polynomial time using interior-point algorithms such as the central-path primal-dual algorithms in [9,10,11]. Fortunately, Problem (43) has a block-diagonal structure. Block-diagonal structure in SDP constraints can be used to design massively parallel algorithms, an approach which was applied to Problem (43) in [29].
Numerical Example 2: Consider the system ẋ(t) = A(α)x(t). We would like to solve the following optimization problem.
We first represent T_L using the unit simplex ∆_3 by defining a map f : ∆_3 → T_L. Thus, the following optimization problem is equivalent to Problem (44).
We solved Problem (45) using bisection on L. For each L, we used Theorem 3.1 to verify the inequality in (36), using Polya's exponents e = 1 to 7 and degrees d_p = 1 to 4 for P. Figure 2 shows the computed upper bounds on L* for different e and d_p. The algorithm found min L = −0.504. For comparison, we also solved the same problem using SOSTOOLS [8] and Putinar's Positivstellensatz (see Case 4 of Section 3.2). By computing a Lyapunov function of degree two in x and degree one in β, SOSTOOLS certified L = −0.504 as an upper bound for L*. This is the same as the upper bound computed by Polya's algorithm.
Case 3. A(α) is a polynomial with α ∈ Φ_l: Given A_h ∈ R^{n×n} for h ∈ E_d as defined in (30), we address the problem of stability analysis of System (27) with A(α) = ∑_{h∈E_d} A_h α^h for all α ∈ Φ_l. System (27) is asymptotically stable for all α ∈ Φ_l if and only if γ* > 0 in the following optimization of polynomials problem.
As in Case 2 of Section 3.2, by applying a bisection algorithm on γ and using a multi-simplex version of Polya's theorem (such as Theorem 3.2) as a test for feasibility of Constraint (46), we can compute lower bounds on γ*. Suppose there exists a multi-homogeneous matrix-valued polynomial Q of degree vector d_q ∈ N^l (d_{q,i} is the degree of β_i) satisfying (47). Likewise, suppose there exists a multi-homogeneous matrix-valued polynomial B of degree vector d_b ∈ N^l (d_{b,i} is the degree of β_i) satisfying the analogous condition. Given γ ∈ R, it follows from Theorem 3.2 that the inequality condition in (46) holds for all α ∈ Φ_l if there exists e ≥ 0 such that (48) and (49) have all positive coefficients, where d_{p,i} is the degree of α_i in P(α) and d_{pa,i} is the degree of α_i in P(α)A(α). Suppose Q and B are of the forms (50) and (51). By combining (50) and (51) with (48) and (49), we find that for a given γ ∈ R the inequality condition in (46) holds if the conditions in (52) are satisfied, where 1 ∈ N^l is the vector of ones and where we define f_{{q,r},{h,g}} ∈ R to be the coefficient of Q_{h,g} β^q η^r after expansion of (48). Likewise, we define M_{{s,t},{h,g}} ∈ R^{n×n} to be the coefficient of Q_{h,g} β^s η^t after expansion of (49). See [34] for recursive formulae for these coefficients. If, for a given γ, there exists e ≥ 0 such that Conditions (52) are feasible, then γ ≤ γ* as defined in (46). Furthermore, if this γ is positive, then ẋ(t) = A(α)x(t) is asymptotically stable for all α ∈ Φ_l.

4.2.
Nonlinear stability analysis. Consider nonlinear systems of the form ẋ(t) = f(x(t)), (53) where f : R^n → R^n is a degree d_f polynomial. Suppose the origin is an isolated equilibrium of (53). In this section, we address local stability of the origin in the following sense: if there exist a neighborhood Q of the origin and a polynomial V satisfying the Lyapunov conditions (54) and (55) on Q, then the origin is an asymptotically stable equilibrium of System (53), meaning that trajectories starting sufficiently close to the origin remain close and converge to it. Since the existence of polynomial Lyapunov functions is necessary and sufficient for stability of (53) on any compact set [73], we can formulate the problem of stability analysis of (53) as the following optimization of polynomials problem.
Conditions (54) and (55) hold if and only if there exists d ∈ N such that γ* > 0. In Sections 4.2.1 and 4.2.2, we discuss two alternatives to SOS programming for solving (56). These methods apply Polya's theorem and Handelman's theorem to Problem (56) (as described in Cases 2 and 3 of Section 3.2) to find lower bounds on γ*. See [43] for a different application of Handelman's theorem and interval methods in nonlinear stability. Also, see [41] for a method of computing continuous piecewise affine Lyapunov functions using linear programming and a triangulation scheme for polytopes.

4.2.1. Application of Handelman's theorem in nonlinear stability analysis.
Recall that every convex polytope can be represented as Γ_K := {x ∈ R^n : w_i^T x + u_i ≥ 0, i = 1, · · · , K} for some w_i ∈ R^n and u_i ∈ R. Suppose Γ_K is bounded and the origin is in its interior. In this section, we investigate asymptotic stability of the equilibrium of System (53) by verifying positivity of γ* in Problem (56) with Q = Γ_K. Unfortunately, Handelman's theorem (Theorem 2.4) does not parameterize polynomials which have zeros in the interior of a given polytope. To see this, suppose a polynomial g (not identically zero) satisfies g(a) = 0, where a is in the interior of the polytope Γ_K. Suppose there exist b_α ≥ 0, α ∈ N^K, such that for some d ∈ N, g(x) = Σ_{α∈N^K : ‖α‖₁≤d} b_α (w_1^T x + u_1)^{α_1} · · · (w_K^T x + u_K)^{α_K}. Then 0 = g(a) = Σ_{α∈N^K : ‖α‖₁≤d} b_α (w_1^T a + u_1)^{α_1} · · · (w_K^T a + u_K)^{α_K}. From the assumption a ∈ int(Γ_K) it follows that w_i^T a + u_i > 0 for all i = 1, · · · , K, so every product on the right-hand side is strictly positive. Because g is not identically zero, not all b_α vanish; hence the sum can equal zero only if b_α < 0 for at least one α ∈ {α ∈ N^K : ‖α‖₁ ≤ d}. This contradicts the assumption that all b_α ≥ 0.
Based on the above reasoning, one cannot readily use Handelman's theorem to verify the Lyapunov inequalities in (54). In [44], a combination of Handelman's theorem and a decomposition scheme was applied to Problem (56) with Q = Γ K . Here we outline a modified version of this result. First, consider the following definitions.
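For polynomials that are strictly positive on the polytope, searching for the Handelman coefficients b_α is a linear program. A minimal sketch on the interval [-1, 1] (our own toy instance, not an example from the paper), assuming scipy is available:

```python
import numpy as np
from scipy.optimize import linprog

# Handelman search as an LP: represent p(x) = 2 + x, which is strictly
# positive on the polytope [-1, 1] with facets g1 = 1 - x and g2 = 1 + x,
# as p = b0 + b1*(1 - x) + b2*(1 + x) with b_i >= 0 (degree-1 ansatz).
# Matching coefficients gives:  constant: b0 + b1 + b2 = 2,   x: -b1 + b2 = 1.
A_eq = np.array([[1.0, 1.0, 1.0],
                 [0.0, -1.0, 1.0]])
b_eq = np.array([2.0, 1.0])

# Pure feasibility LP: zero objective; linprog's default bounds are b_i >= 0.
res = linprog(c=np.zeros(3), A_eq=A_eq, b_eq=b_eq)
```

Here `res.success` is True (one certificate is b0 = 1, b1 = 0, b2 = 1), which proves p ≥ 0 on the polytope; the argument above shows this LP must be infeasible whenever p vanishes at an interior point.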

Definition 4.2. Given a bounded polytope of the form Γ
Consider System (53) with f of degree d_f. Given w_i, h_{i,j} ∈ R^n and u_i, g_{i,j} ∈ R, let Γ_K := {x ∈ R^n : w_i^T x + u_i ≥ 0, i = 1, · · · , K}, where N_i is the cardinality of {α ∈ N^{m_i} : ‖α‖₁ ≤ d} and where we have denoted the elements of E_d := {λ ∈ N^n : ‖λ‖₁ ≤ d} by λ^(k), k = 1, · · · , B. For any λ^(k) ∈ E_d, we define p_{λ^(k),α,i} as the coefficient of y^α x^{λ^(k)} in the associated expansion. If there exists d ∈ N such that the maximum γ in the linear program (59), for i, j = 1, · · · , L and k, l ∈ Λ_{i,j}, is positive, then the origin is an asymptotically stable equilibrium of System (53), and the associated piecewise polynomial function is a Lyapunov function proving stability of System (53).
Using the polytope and D-decomposition defined above, we set up the LP in (59) with d = 4. The solution to the LP certified asymptotic stability of the origin and yielded the following piecewise polynomial Lyapunov function.

4.2.2. Application of Polya's theorem in nonlinear stability analysis. Consider systems of the form ẋ(t) = A(x(t))x(t), (62) where A(x) ∈ R^{n×n} is a matrix-valued polynomial and A(0) = 0. Unfortunately, Polya's theorem does not parameterize polynomials which have zeros in the interior of the unit simplex (see [31] for an elementary proof of this). By the same reasoning as in [31], the multi-simplex version of Polya's theorem (Theorem 3.2) does not parameterize polynomials which have zeros in the interior of a multi-simplex. On the other hand, if f(z) in (16) has a zero in the interior of Φ_n, then any multi-homogeneous polynomial p(x, y) that satisfies (16) has a zero in the interior of the multi-simplex ∆_2 × · · · × ∆_2. One way to enforce the condition V(0) = 0 in (54) is to search for the coefficients of a matrix-valued polynomial P which defines a Lyapunov function of the form V(x) = x^T P(x)x. It can be shown that V(x) = x^T P(x)x is a Lyapunov function for System (62) if and only if γ* in the optimization of polynomials problem (63) is positive. As in Case 2 of Section 3.2, by applying a bisection algorithm on γ and using Theorem 3.2 as a test for feasibility of Constraint (63), we can compute lower bounds on γ*. Suppose there exists a multi-homogeneous matrix-valued polynomial S of degree vector d_s ∈ N^n (d_si is the degree of y_i) such that (64) holds. Likewise, suppose there exist multi-homogeneous matrix-valued polynomials B and C of degree vectors d_b ∈ N^n and d_c = d_s ∈ N^n such that {A(x) ∈ R^{n×n} : x ∈ Φ_n} = {B(y, z) ∈ R^{n×n} : (y_i, z_i) ∈ ∆_2, i = 1, · · · , n} and {[∂P(x)/∂x_1 x, · · · , ∂P(x)/∂x_n x] ∈ R^{n×n} : x ∈ Φ_n} = {C(y, z) ∈ R^{n×n} : (y_i, z_i) ∈ ∆_2, i = 1, · · · , n}.
Given γ ∈ R, it follows from Theorem 3.2 that the inequality condition in (63) holds for all x ∈ Φ_n if there exists e ≥ 0 such that Π_{i=1}^{n}(y_i + z_i)^e S(y, z) and Π_{i=1}^{n}(y_i + z_i)^e (B^T(y, z)S(y, z) + S(y, z)B(y, z)) have all positive coefficients, where d_pi is the degree of x_i in P(x) and d_qi is the degree of x_i in Q(x). Suppose S, B and C have the following forms.
S(y, z) = Σ_{h,g∈N^n, h+g=d_s} S_{h,g} y_1^{h_1} z_1^{g_1} · · · y_n^{h_n} z_n^{g_n}, B(y, z) = Σ_{h,g∈N^n, h+g=d_b} B_{h,g} y_1^{h_1} z_1^{g_1} · · · y_n^{h_n} z_n^{g_n}, and C(y, z) = Σ_{h,g∈N^n, h+g=d_c} C_{h,g} y_1^{h_1} z_1^{g_1} · · · y_n^{h_n} z_n^{g_n}. By combining (66), (67) and (68) with (64) and (65), it follows that for a given γ ∈ R, the inequality condition in (63) holds if Conditions (69) hold, where, similar to Case 3 of Section 4.1, we define f_{{q,r},{h,g}} to be the coefficient of S_{h,g} y^q z^r after combining (66) with (64). Likewise, we define M_{{u,v},{h,g}} to be the coefficient of S_{h,g} y^u z^v and N_{{u,v},{h,g}} to be the coefficient of C_{h,g} y^u z^v after combining (67) and (68) with (65). Conditions (69) form an SDP. For any γ ∈ R, if there exist e ≥ 0 and {S_{h,g}} such that Conditions (69) hold, then γ is a lower bound for γ* as defined in (63). Furthermore, if γ is positive, then the origin is an asymptotically stable equilibrium of System (62).
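The positive-coefficient test at the heart of these conditions can be illustrated on the single-simplex version of Polya's theorem. The bivariate polynomial below is our own toy example (strictly positive on the simplex, yet with a negative coefficient), and sympy is assumed:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Homogeneous, strictly positive on {x, y >= 0, x + y = 1}, but its
# positivity is not apparent from the coefficients (-1 appears).
p = x**2 - x*y + y**2

def smallest_polya_exponent(p, max_e=10):
    """Smallest e such that (x + y)^e * p has all positive coefficients.
    Polya's theorem guarantees such an e exists when p > 0 on the simplex."""
    d0 = sp.Poly(p, x, y).total_degree()
    for e in range(max_e + 1):
        q = sp.Poly(sp.expand((x + y)**e * p), x, y)
        d = d0 + e
        # every monomial x^i y^(d-i) must appear with a positive coefficient
        if all(q.coeff_monomial(x**i * y**(d - i)) > 0 for i in range(d + 1)):
            return e
    return None
```

Here (x + y)^3 (x^2 - xy + y^2) expands to x^5 + 2x^4 y + x^3 y^2 + x^2 y^3 + 2x y^4 + y^5, so the smallest exponent is 3; the exponents e = 0, 1, 2 all leave a nonpositive coefficient.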
Numerical Example 5: Consider the reverse-time Van der Pol oscillator. Using the method in Section 4.2.2, we solved Problem (63) using the hypercubes in (70) and d_p = 0, 2, 4, 6 as the degrees of P(x). For each hypercube Φ²_i in (70), we computed a Lyapunov function of the form V_i(x) = x^T P_i(x)x. In Figure 4, we have plotted the largest level set of V_i inscribed in Φ²_i for i = 1, · · · , 4. In all cases, we used the Polya exponent e = 1.
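The level-set computation behind Figure 4 is easy to reproduce in the special case of a constant P (the paper's P_i(x) are polynomial; the quadratic case below is our simplification): the largest c with {x : x^T P x ≤ c} inside the box |x_i| ≤ a_i is c = min_i a_i² / (P⁻¹)_ii, since the maximum of |x_i| over the ellipsoid {x^T P x ≤ c} equals sqrt(c (P⁻¹)_ii).

```python
import numpy as np

def largest_inscribed_levelset(P, a):
    """Largest c such that the ellipsoid {x : x^T P x <= c} lies inside the
    box {|x_i| <= a_i}.  Uses max_{x^T P x <= c} |x_i| = sqrt(c * (P^-1)_ii)."""
    Pinv = np.linalg.inv(P)
    return min(a[i] ** 2 / Pinv[i, i] for i in range(len(a)))
```

For instance, for P = diag(0.25, 1) and the unit box, the ellipse 0.25 x₁² + x₂² ≤ c touches the box faces x₁ = ±1 first, giving c = 0.25.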
We also used the method in Section 4.2.1 to solve the same problem using the polytopes (see Figure 5).

4.3. H∞ control of systems with parametric uncertainty. Consider an uncertain linear system G of the form (71), where α ∈ Q ⊂ R^l, x(t) ∈ R^n, u(t) ∈ R^m, ω(t) ∈ R^p is the external input, and z(t) ∈ R^q is the external output. According to [76], there exists a state-feedback gain K(α) ∈ R^{m×n} such that ‖S(G, K(α))‖_{H∞} ≤ γ for all α ∈ Q if and only if there exist P(α) > 0 and R(α) ∈ R^{m×n} such that K(α) = R(α)P⁻¹(α) and (72) holds for all α ∈ Q, where γ > 0 and S(G, K(α)) is the map from the external input ω to the external output z of the closed-loop system with a static full-state-feedback controller. The symbol ⋆ denotes the symmetric blocks in the matrix inequality.
To find a solution to the robust H∞-optimal static state-feedback controller problem, with optimal feedback gain K(α) = R(α)P⁻¹(α), one can solve the following optimization of polynomials problem.
In Problem (73), if Q = ∆_l as defined in (12), then we can apply Polya's theorem (Theorem 3.1), as in the algorithm in Case 1 of Section 3.2, to find a γ ≤ γ* and P and R which satisfy the inequality in (73). Suppose P, A, B_1, B_2, C_1, D_11 and D_12 are homogeneous polynomials. If any of these polynomials is not homogeneous, use the procedure in Case 1 of Section 3.2 to homogenize it. Let F(P(α), R(α)) be defined as in (74) and denote the degree of F by d_f. Given γ ∈ R, the inequality in (73) holds if F(P(α), R(α)) is negative-definite for all α ∈ ∆_l. Let P and R be of the forms P(α) = Σ_{h∈I_dp} P_h α_1^{h_1} · · · α_l^{h_l}, P_h ∈ S^n, and R(α) = Σ_{h∈I_dr} R_h α_1^{h_1} · · · α_l^{h_l}, R_h ∈ R^{m×n}, (75) where I_dp and I_dr are defined as in (34). By combining (75) with (74), it follows from Theorem 3.1 that, for a given γ, the inequality in (73) holds if there exists e ≥ 0 such that the LMI conditions in (76) hold, where we define M_{h,q} as the coefficient of P_h α_1^{q_1} · · · α_l^{q_l} after combining (75) with (74). Likewise, N_{h,q} is the coefficient of R_h α_1^{q_1} · · · α_l^{q_l} after combining (75) with (74). For a given γ > 0, if there exists e ≥ 0 such that the LMI (76) has a solution, say P_h, h ∈ I_dp and R_g, g ∈ I_dr, then K(α) = R(α)P⁻¹(α), with P and R constructed as in (75), is a feedback gain for an H∞-suboptimal static state-feedback controller for System (71). By performing bisection on γ and solving (76) for each γ in the bisection, one may find an H∞-optimal controller for System (71). In Problem (73), if Q = Φ_l as defined in (15), then by applying the algorithm in Case 2 of Section 3.2 to Problem (73), we can find a solution P, R, γ to (73), where γ ≤ γ*. See Case 3 of Section 4.1 and Section 4.2.2 for similar applications of this theorem.
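The outer loop of the procedure above is again bisection on γ with a feasibility test inside. As a self-contained stand-in for the parameter-dependent LMI (76), the sketch below bisects on γ using the classical Hamiltonian eigenvalue test for the H∞ norm of a fixed system with D = 0 — a standard result, not the paper's method; the scalar system in the usage note is our own example with known norm 1.

```python
import numpy as np

def hinf_norm(A, B, C, tol=1e-6):
    """H-infinity norm of G(s) = C (sI - A)^(-1) B (stable A, D = 0).
    ||G||_inf < gamma iff the Hamiltonian below has no eigenvalues on the
    imaginary axis; bisection on gamma then locates the norm."""
    def gamma_is_upper_bound(gamma):
        H = np.block([[A, B @ B.T / gamma**2],
                      [-C.T @ C, -A.T]])
        # feasible iff no eigenvalue sits on the imaginary axis
        return not np.any(np.abs(np.linalg.eigvals(H).real) < 1e-9)

    lo, hi = 1e-3, 1e3  # assumes the norm lies in this bracket
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gamma_is_upper_bound(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

For G(s) = 1/(s + 1), i.e. A = [-1], B = C = [1], the routine converges to the known norm ‖G‖_∞ = 1. In the paper's setting, the inner test is instead the LMI feasibility check (76), performed once per γ.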
If Q = Γ_l as defined in (57), then we can use Handelman's theorem (Theorem 2.4), as in the algorithm in Case 3 of Section 3.2, to find a solution to Problem (73). We have provided a similar application of Handelman's theorem in Section 4.2.1.
If Q is a compact semi-algebraic set, then for a given d ∈ N, one can apply the Positivstellensatz results in Case 4 of Section 3.2 to the inequality in (73) to obtain an SOS program of the form (22). A solution to the SOS program yields a solution to Problem (73).

5.
Conclusion. SOS programming, the moments approach, and their applications in polynomial optimization are well covered in the literature. To promote diversity in the algorithms commonly used for polynomial optimization, we dedicated this paper to some of the alternatives to SOS programming. In particular, we focused on the algorithms defined by Polya's theorem, Bernstein's theorem and Handelman's theorem. We discussed how these algorithms apply to polynomial optimization problems with decision variables on simplices, hypercubes and arbitrary convex polytopes. Moreover, we demonstrated some of the applications of Polya's and Handelman's algorithms in stability analysis of nonlinear systems and in stability analysis and H∞ control of systems with parametric uncertainty. For most of these applications, we provided numerical examples to compare the conservativeness of Polya's and Handelman's algorithms with other algorithms in the literature, such as SOS programming.