Triangulations and Error Estimates for Interpolating Lyapunov Functions

The CPA method to compute Lyapunov functions depends on a triangulation of the relevant part of the state space. In more detail, a CPA (Continuous and Piecewise Affine) function is affine on each simplex of a given triangulation and is determined by its values at the vertices of the triangulation. Two important aspects in the proof that the CPA method is always able to generate a CPA Lyapunov function if the triangulation is sufficiently fine are (a) the geometry of the simplices of the triangulation and (b) error estimates of CPA interpolations of functions. In this paper aspect (a) is tackled by extending the notion of (h, d)-boundedness, which so far has depended on the order of the vertices in each simplex; it is shown that it is essentially independent of the order and can be expressed in terms of the condition number of the shape matrix. Concerning (b), existing error estimates are generalised to other norms to increase the flexibility of the CPA method, in particular when the CPA method is used to verify Lyapunov function candidates generated by other methods. Parts of the results were presented in Giesl and Hafstein (Uniformly regular triangulations for parameterizing Lyapunov functions. In: Proceedings of the 18th International Conference on Informatics in Control, Automation and Robotics (ICINCO), 549–557, 2021).


Introduction
This paper is concerned with dynamical systems whose dynamics are defined by an ordinary differential equation (ODE), and in particular with the stability of equilibria of such systems. Lyapunov stability theory is of essential importance in dynamical systems and control theory and is studied in practically all textbooks and monographs on linear and nonlinear systems, cf. e.g. [2][3][4] or [5][6][7] for a more modern treatment. The canonical candidate for a Lyapunov function for a physical system is its (free) energy. In particular, a dissipative physical system must approach the state of a local minimum of the energy.
For general dynamical systems, however, there is no analytical method to obtain a Lyapunov function. For this reason, various methods for the numerical generation of Lyapunov functions have emerged. To name a few, in [8,9] the numerical generation of rational Lyapunov functions was studied, in [10][11][12] sum-of-squares (SOS) polynomial Lyapunov functions were parameterized using semi-definite optimization, see also [13,14] for other approaches using polynomials, and in [15] a Zubov type PDE was approximately solved using collocation. For more numerical approaches cf. the review [16].
This article is part of the topical collection "Informatics in Control, Automation and Robotics" guest edited by Kurosh Madani, Oleg Gusikhin and Henk Nijmeijer.
In [17,18] linear programming was used to parameterize continuous and piecewise affine (CPA) Lyapunov functions. In this approach, a subset of the state space is first triangulated, i.e. subdivided into simplices, and then a number of constraints are derived for a given nonlinear system, such that a feasible solution to the resulting linear programming problem allows for the parametrization of a CPA Lyapunov function for the system. In [19][20][21] it was proved that this approach, referred to as the CPA method, always succeeds in computing a Lyapunov function for a general nonlinear system with an exponentially stable equilibrium, if the simplices are sufficiently small and non-degenerate.
The main advantages of the CPA method, apart from the fact that it generates true Lyapunov functions and not approximations, are that it can be combined with faster methods to verify Lyapunov function candidates, see, e.g. [22][23][24][25][26], and that it is easily adaptable to different kinds of systems, e.g. to differential inclusions [27,28] or time-discrete systems [29]. The CPA framework can even be extended to compute or verify so-called contraction metrics [30][31][32], see also the recent review [33].
The proof that the CPA method always succeeds in generating a true Lyapunov function for a system with an exponentially stable equilibrium used the concept of (h, d)-bounded triangulations, see Definition 10, where h > 0 is an upper bound on the diameters of the simplices and d > 0 quantifies the degeneracy of the simplices. For the definition of (h, d)-bounded triangulations one must consider triangulations in which the order of the vertices of each simplex has been fixed.
The first contribution of this paper is to show that if T is an (h, d)-bounded triangulation in ℝ^n, then any triangulation consisting of the same simplices as T, but with a different ordering of the vertices, is an (h, d*)-bounded triangulation with d* = d(1 + d√(n − 1)). Thus, the property that a triangulation is (h, d)-bounded depends essentially on the simplices of the triangulation T, and not on the ordering of the vertices of the simplices.
The second contribution is a characterization of (h, d)-bounded triangulations using the condition number of the shape-matrices of the simplices, cf. Definition 15. The advantage of this characterization is that the condition number of a matrix is a more familiar concept than the degeneracy as defined in Definition 10.
The third contribution is a systematic study of the error estimates used in the CPA algorithm with respect to the norms used.
The paper is organized as follows. In Section "Preliminaries", after introducing some notation, triangulations, CPA functions, shape-matrices of simplices, and (h, d)-bounded triangulations are presented. In Section "Construction of CPA Lyapunov Functions" the algorithm to compute CPA Lyapunov functions is outlined and the relevance of (h, d)-bounded triangulations for the algorithm is shown. Further, the main results of the paper are proved. In Section "Error Estimates in the CPA Algorithm" error estimates for general norms in the CPA algorithm are derived, and in Section "Conclusions and Future Work" some concluding remarks and ideas for future research are discussed.

Preliminaries
In this section the notation for the rest of the paper is introduced and some useful facts from linear algebra that will be used to derive the results of the paper are recalled. Further, triangulations and CPA functions are defined.

Notation
Vectors in ℝ^n are written in bold face and are assumed to be column vectors, e.g. x = (x_1, x_2, …, x_n)ᵀ ∈ ℝ^n. For p ∈ [1, ∞) the p-norm of x is ‖x‖_p := (∑_{i=1}^n |x_i|^p)^{1/p} and ‖x‖_∞ := max_{i=1,…,n} |x_i|; the corresponding induced matrix norm is ‖A‖_p := max_{‖x‖_p=1} ‖Ax‖_p.
With p = 1, p = 2, and p = ∞ there are simple formulas for the induced matrix norms: ‖A‖_1 = max_i ‖a_i‖_1 is the maximum absolute column sum and ‖A‖_∞ is the maximum absolute row sum, where a_i are the column vectors of A, and ‖A‖_2 = max_{‖x‖_2=1} √(xᵀAᵀAx) is the square root of the largest eigenvalue of the symmetric and positive-semidefinite matrix AᵀA. For A ∈ ℝ^{n×n} the norm equivalences
n^{-1/2} ‖A‖_p ≤ ‖A‖_2 ≤ n^{1/2} ‖A‖_p for p ∈ {1, ∞}
will be useful in the following.
The condition number κ(A) of a nonsingular matrix A ∈ ℝ^{n×n} with respect to the norm ‖·‖ is defined as κ(A) := ‖A‖ ‖A^{-1}‖. The identity matrix in ℝ^{n×n} is denoted by I and its column vectors by e_1, e_2, …, e_n, i.e. the standard orthonormal basis of ℝ^n.
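The norm formulas and the condition number above can be checked numerically; the following sketch (our own illustration, using NumPy) computes the induced norms of a small matrix and verifies the norm equivalences:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

norm_1   = np.linalg.norm(A, 1)        # maximum absolute column sum = 3
norm_inf = np.linalg.norm(A, np.inf)   # maximum absolute row sum = 4
norm_2   = np.linalg.norm(A, 2)        # largest singular value

# condition number kappa(A) = ||A|| * ||A^{-1}||
kappa_2 = norm_2 * np.linalg.norm(np.linalg.inv(A), 2)
assert np.isclose(kappa_2, np.linalg.cond(A, 2))

# norm equivalences: n^{-1/2} ||A||_p <= ||A||_2 <= n^{1/2} ||A||_p, p in {1, inf}
n = A.shape[0]
for p in (1, np.inf):
    norm_p = np.linalg.norm(A, p)
    assert norm_p / np.sqrt(n) <= norm_2 + 1e-12
    assert norm_2 <= np.sqrt(n) * norm_p + 1e-12
```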
The set of the permutations of a set C is denoted by Sym(C), i.e. Sym(C) is the set of bijective mappings C → C. For a σ ∈ Sym({1, 2, …, n}) the permutation matrix P_σ ∈ ℝ^{n×n} is defined through P_σ e_k = e_{σ(k)} for k = 1, 2, …, n. One easily verifies that P_σ^{-1} = P_σᵀ and ‖P_σ‖_p = ‖P_σ^{-1}‖_p = 1 for p ∈ {1, 2, ∞}. Note that left-multiplication by P_σ permutes the rows of a vector or a matrix and right-multiplication by P_σ permutes the columns of a vector or a matrix. A function g : G → ℝ^m, G ⊂ ℝ^n, is said to be a C^k function, written g ∈ C^k(G; ℝ^m), if it has continuous derivatives of all orders up to and including k. In particular, if G is compact, then these derivatives are all bounded. If the set G and m are clear from the context or not important, one also writes g ∈ C^k or says g is a C^k function. Note that if n > 1 and m = 1 the gradient ∇g(x) is assumed to be a column vector.
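The stated properties of permutation matrices are easy to verify in code; the helper below is our own (0-based indices, whereas the text uses 1-based indices):

```python
import numpy as np

def permutation_matrix(sigma):
    """P_sigma with P_sigma @ e_k = e_{sigma(k)} (0-based indices)."""
    n = len(sigma)
    P = np.zeros((n, n))
    for k, j in enumerate(sigma):
        P[j, k] = 1.0
    return P

sigma = [2, 0, 1]                      # sigma(0)=2, sigma(1)=0, sigma(2)=1
P = permutation_matrix(sigma)

# P^{-1} = P^T and ||P||_p = 1 for p in {1, 2, inf}
assert np.allclose(P.T @ P, np.eye(3))
for p in (1, 2, np.inf):
    assert np.isclose(np.linalg.norm(P, p), 1.0)

# left-multiplication permutes the entries of a vector
x = np.array([10.0, 20.0, 30.0])
assert np.allclose(P @ x, [20.0, 30.0, 10.0])
```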

Preliminaries
Let us start with a simple result, which will be used later.

Lemma 1
Let X ∈ ℝ^{n×n} and let ‖·‖ be any sub-multiplicative matrix norm on ℝ^{n×n}. Assume P, Q ∈ ℝ^{n×n} are nonsingular matrices. Then
‖PXQ‖ ≤ ‖P‖ ‖X‖ ‖Q‖ and ‖X‖ ≤ ‖P^{-1}‖ ‖PXQ‖ ‖Q^{-1}‖.
In particular, if ‖·‖ = ‖·‖_p with p ∈ {1, 2, ∞} and P = P_σ and Q = P_π are permutation matrices, then ‖P_σ X P_π‖_p = ‖X‖_p.

Proof The first statement follows immediately from the sub-multiplicativity of the norm and X = P^{-1}(PXQ)Q^{-1}, and the second statement is a direct consequence of the comments in Section "Notation". ◻
A well known useful result on rank 1 corrections of matrices is given by the next lemma from [34].

Lemma 2 (Sherman-Morrison) Let
A ∈ ℝ^{n×n} be nonsingular and u, v ∈ ℝ^n be such that 1 + vᵀA^{-1}u ≠ 0. Then
(A + uvᵀ)^{-1} = A^{-1} − (A^{-1}uvᵀA^{-1})/(1 + vᵀA^{-1}u)
and
det(A + uvᵀ) = (1 + vᵀA^{-1}u) det(A).

Proof (sketch) The formula for the inverse can be easily verified by multiplying out, noting that vᵀA^{-1}u ∈ ℝ.
For the determinant formula, first note that det(A + uvᵀ) = det(A) det(I + A^{-1}uvᵀ). Moreover, for a vector x such that vᵀx ≠ 0, one has that w_1 := x and any basis w_2, …, w_n of ker(vᵀ) = {y ∈ ℝ^n : vᵀy = 0} are linearly independent eigenvectors of I + xvᵀ with eigenvalues 1 + vᵀx (once) and 1 (n − 1 times). Since the determinant is the product of all the eigenvalues it follows that det(I + xvᵀ) = 1 + vᵀx, and with x = A^{-1}u one gets det(A + uvᵀ) = (1 + vᵀA^{-1}u) det(A).

◻
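Lemma 2 can be checked numerically on a concrete example; the matrix and vectors below are our own choices:

```python
import numpy as np

A = np.diag([2.0, 3.0, 4.0])
u = np.array([1.0, 1.0, 1.0])
v = np.array([1.0, 2.0, 3.0])

Ainv = np.linalg.inv(A)
alpha = 1.0 + v @ Ainv @ u              # 1 + v^T A^{-1} u = 35/12 != 0

B = A + np.outer(u, v)
# inverse formula of Lemma 2
B_inv = Ainv - np.outer(Ainv @ u, v @ Ainv) / alpha
assert np.allclose(B_inv, np.linalg.inv(B))
# determinant formula of Lemma 2: det(B) = alpha * det(A) = (35/12) * 24 = 70
assert np.isclose(np.linalg.det(B), alpha * np.linalg.det(A))
assert np.isclose(np.linalg.det(B), 70.0)
```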
The convex combination of the vectors x_0, x_1, …, x_m ∈ ℝ^n is defined as the set
co{x_0, x_1, …, x_m} := {∑_{i=0}^m λ_i x_i : λ_i ≥ 0, ∑_{i=0}^m λ_i = 1}.
The vectors x_0, x_1, …, x_m ∈ ℝ^n are said to be affinely independent, if and only if ∑_{i=0}^m λ_i x_i = 0 together with ∑_{i=0}^m λ_i = 0 implies λ_0 = λ_1 = ⋯ = λ_m = 0. This condition is equivalent to the linear independence of the augmented vectors (x_iᵀ, 1)ᵀ ∈ ℝ^{n+1}, i = 0, 1, …, m. For affinely independent vectors x_0, x_1, …, x_m ∈ ℝ^n the set S := co{x_0, x_1, …, x_m} is called an m-simplex and the vectors x_i are said to be its vertices and ve S := {x_0, x_1, …, x_m} is the vertex set of S. For an m-simplex S, its diameter is defined as diam(S) := max_{x,y∈S} ‖x − y‖_2 = max_{i≠j} ‖x_i − x_j‖_2. An n-simplex in ℝ^n is often referred to simply as a simplex.
The so-called shape-matrix of the vertices of a simplex, i.e. an n-tuple of affinely independent vectors, is very important for the CPA algorithm, because it is used to measure the (geometrical) regularity of a simplex. It is defined in terms of an n-tuple containing its vertices and is, therefore, dependent on their order in the tuple. The goal of this paper is to show that it is essentially only dependent on the set of vectors and not on the order. Hence, for an n-tuple C = (x_0, x_1, …, x_n), x_i ∈ ℝ^n, set(C) is defined as the set containing the elements in C, i.e. set(C) := {x_0, x_1, …, x_n}.

Definition 3 Let x_0, x_1, …, x_n ∈ ℝ^n be affinely independent vectors and C = (x_0, x_1, …, x_n) an n-tuple. The shape-matrix of C is defined by
X_C := (x_1 − x_0, x_2 − x_0, …, x_n − x_0)ᵀ ∈ ℝ^{n×n}.
That is, (x_i − x_0)ᵀ is the i-th row vector of X_C. For S = coset(C) the matrix X_C is said to be the shape-matrix of the simplex S.
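Definition 3 translates directly into code; the helper name is ours:

```python
import numpy as np

def shape_matrix(C):
    """Shape-matrix X_C: row i-1 equals (x_i - x_0)^T."""
    C = np.asarray(C, dtype=float)
    return C[1:] - C[0]

# a triangle in the plane (n = 2): the unit simplex has X_C = I
C = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
X = shape_matrix(C)
assert np.allclose(X, np.eye(2))

# the vertices are affinely independent iff X_C is nonsingular
assert abs(np.linalg.det(X)) > 1e-12
```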
Note that an n-simplex S has (n + 1)! potentially different shape-matrices, corresponding to the permutations of its vertices. In the next lemma certain quantities of different permutations are related to each other.
Lemma 4 Let C = (x_0, x_1, …, x_n) be an n-tuple of affinely independent vectors in ℝ^n, σ ∈ Sym({0, 1, …, n}), and C_σ := (x_{σ(0)}, x_{σ(1)}, …, x_{σ(n)}). Then |det X_{C_σ}| = |det X_C| and, if σ(0) = 0, additionally ‖X_{C_σ}^{-1}‖_p = ‖X_C^{-1}‖_p for p ∈ {1, 2, ∞}.

Proof Proof of the second statement: If σ(0) = 0, then the restriction of σ to {1, 2, …, n} is in Sym({1, 2, …, n}). Define the permutation matrix P_σ ∈ ℝ^{n×n} using this restriction. Then X_{C_σ} = P_σ X_C and the statement follows immediately by Lemma 1.
Proof of the first statement: Consider the augmented matrix X^a_C ∈ ℝ^{(n+1)×(n+1)} whose rows are (1, x_iᵀ), i = 0, 1, …, n. Subtracting the first row from the others, which leaves the determinant unchanged, and Laplace expansion of the first column gives |det X^a_C| = |det X_C|. Reordering the vertices permutes the rows of the augmented matrix, so the analogous relation holds true for the augmented matrix of C_σ. Since |det(P_σ)| = 1 it follows that |det X_{C_σ}| = |det X_C|. ◻

Triangulations and CPA Functions
In this section triangulations and the associated CPA functions, which are the motivation for this paper, are introduced. For our purposes it is sometimes advantageous to have the order of the vertices of every simplex in a triangulation fixed, similarly to [25].
Definition 5 (Triangulation) A collection T = {S_ν}_{ν∈L} of n-simplices in ℝ^n is called a triangulation if
S_μ ∩ S_ν = co(ve S_μ ∩ ve S_ν)   (3)
for all μ, ν ∈ L, i.e. the simplices intersect in a common face or not at all. The domain of T is defined as D_T := ∪_{ν∈L} S_ν and its complete set of vertices is denoted by V_T := ∪_{ν∈L} ve S_ν. Further, the diameter of T is defined as diam(T) := sup_{ν∈L} diam(S_ν). If every simplex S_ν ∈ T is uniquely associated to a corresponding n-tuple C_ν = (x_0, x_1, …, x_n) of its vertices, T is said to be a triangulation with ordered vertices.
Example 1 An example of a triangulation of ℝ^n is the standard triangulation T_std, see Fig. 1 and e.g. [35], which consists of the simplices
S_{z,J,σ} := co{x^0, x^1, …, x^n}, x^i := R_J(z + ∑_{j=1}^i e_{σ(j)}),
for all z ∈ ℕ_0^n, all J ⊂ {1, 2, …, n}, and all σ ∈ Sym({1, 2, …, n}). The functions R_J : ℝ^n → ℝ^n, defined for every J ⊂ {1, 2, …, n}, are given by
R_J(x) := ∑_{i=1}^n (−1)^{χ_J(i)} x_i e_i,
where χ_J(i) denotes the characteristic function equal to one if i ∈ J and equal to zero if i ∉ J. It is easy to see that D_{T_std} = ℝ^n and V_{T_std} = ℤ^n.
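The construction of the simplices of T_std can be sketched as follows (function names are ours, and indices are 0-based in code while the text uses 1-based indices):

```python
import numpy as np

def R_J(x, J):
    """Flip the sign of the coordinates whose (0-based) index lies in J."""
    y = np.asarray(x, dtype=float).copy()
    y[list(J)] *= -1.0
    return y

def simplex_vertices(z, J, sigma):
    """Vertices x^0, ..., x^n of S_{z,J,sigma}: x^i = R_J(z + sum of the first i e_{sigma(j)})."""
    n = len(z)
    acc = np.asarray(z, dtype=float)
    pts = [R_J(acc, J)]
    for j in sigma:
        acc = acc + np.eye(n)[j]
        pts.append(R_J(acc, J))
    return np.array(pts)

# in R^2: z = (0, 0), J = {} and sigma = (0, 1) give the triangle (0,0), (1,0), (1,1)
V = simplex_vertices((0, 0), (), (0, 1))
assert np.allclose(V, [(0, 0), (1, 0), (1, 1)])
# all vertices lie on the integer lattice, consistent with V_Tstd = Z^n
assert np.allclose(V, np.round(V))
```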
Given a triangulation T, a continuous and piecewise affine function, i.e. a CPA function, can be defined by fixing its values at the vertices of the simplices, i.e. on V_T.

Definition 6 (CPA function) Let T be a triangulation in ℝ^n. Denote by CPA[T] the set of all continuous functions V : D_T → ℝ that are affine on each simplex S ∈ T, i.e. for each S ∈ T there exist a vector w ∈ ℝ^n and a number a ∈ ℝ such that V(x) = wᵀx + a for all x ∈ S. Define ∇V|_S := w.
Further, x ∈ S = co{x_0, x_1, …, x_n} has a unique representation as a convex combination of the vertices of S, i.e. there are unique numbers λ_0, λ_1, …, λ_n ≥ 0 with ∑_{i=0}^n λ_i = 1 such that
x = ∑_{i=0}^n λ_i x_i.   (4)
It is not difficult to see that V(x) = ∑_{i=0}^n λ_i V(x_i), and from the condition (3) it immediately follows that, even though x ∈ S_μ ∩ S_ν for S_μ, S_ν ∈ T, the representation (4) is unique. Hence, each V ∈ CPA[T] is completely determined by its values on the vertex set V_T. Further, from V(x_0) = ∇V|_S ⋅ x_0 + a and V(x_i) − V(x_0) = ∇V|_S ⋅ (x_i − x_0) for i = 1, 2, …, n one obtains, with the n-tuple C = (x_0, x_1, …, x_n) and the corresponding shape-matrix X_C, that
∇V|_S = X_C^{-1} (V(x_1) − V(x_0), V(x_2) − V(x_0), …, V(x_n) − V(x_0))ᵀ.   (5)
This section is concluded by an example of how the value of a CPA function at a point is determined.
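The determination of the gradient of a CPA function on a simplex from its vertex values, as in formula (5), can be sketched numerically; the helper name is ours:

```python
import numpy as np

def cpa_gradient(C, vals):
    """Gradient w of the affine function with values vals at the vertices C."""
    C = np.asarray(C, dtype=float)
    vals = np.asarray(vals, dtype=float)
    X = C[1:] - C[0]                 # shape-matrix X_C
    v = vals[1:] - vals[0]           # (V(x_1)-V(x_0), ..., V(x_n)-V(x_0))^T
    return np.linalg.solve(X, v)     # w = X_C^{-1} v

# sample the affine function V(x, y) = 2x - y + 3 at the vertices ...
C = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [3.0, 5.0, 2.0]
w = cpa_gradient(C, vals)
# ... and recover its gradient (2, -1) exactly
assert np.allclose(w, [2.0, -1.0])
```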

Construction of CPA Lyapunov Functions
Let us elaborate why ‖X_C^{-1}‖_p for shape-matrices X_C is of so much interest for computing CPA Lyapunov functions. Our reference is [21]. To prove that the CPA method always succeeds in computing a CPA Lyapunov function for the system (1) with f ∈ C²(ℝ^n, ℝ^n), one uses the fact that by converse theorems there exists a C² Lyapunov function W for the system, cf. e.g. [36][37][38][39] or [4][5][6][7] for a more accessible discussion. Such a Lyapunov function W is interpolated over a triangulation T by a CPA function V ∈ CPA[T] by fixing V(x) := W(x) for all x ∈ V_T. The function V is said to be the CPA interpolation of W on T. Thus, on a simplex S := coset(C) ∈ T, C = (x_0, x_1, …, x_n), the unique convex combination x = ∑_{i=0}^n λ_i x_i of the vertices for an x ∈ S is used to set V(x) := ∑_{i=0}^n λ_i W(x_i).

Then, for every triangulation T with D_T ⊂ K, where K ⊂ ℝ^n is compact, one has
max_{x∈D_T} |V(x) − W(x)| ≤ C_K diam(T)²,
where the constant C_K depends only on bounds on the second-order derivatives of W on K. It follows that V approximates W arbitrarily well if T, D_T ⊂ K, is a triangulation consisting of simplices with small enough diameters.
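The quadratic decay of the interpolation error can be illustrated on a 1-simplex (our own toy check, not from the paper):

```python
import numpy as np

def interp_error(h, f=np.sin, x0=0.3):
    """Max deviation between f and its linear interpolation on [x0, x0 + h]."""
    t = np.linspace(0.0, 1.0, 201)
    x = x0 + t * h
    lin = (1.0 - t) * f(x0) + t * f(x0 + h)
    return float(np.max(np.abs(f(x) - lin)))

e1 = interp_error(0.2)
e2 = interp_error(0.1)
# halving the diameter shrinks the error by roughly a factor of 4 (= 2^2)
assert 3.0 < e1 / e2 < 6.0
```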
However, this is not sufficient if one additionally wants ∇V to closely approximate ∇W, i.e. convergence in C¹(D_T; ℝ) and not only in C(D_T; ℝ), cf. e.g. the proof of [21, Theorem 5]. This is demonstrated in the next example, where some shape-matrices for a simplex in the plane (a triangle) are computed and the gradients of two potential Lyapunov functions are compared to the gradients of their CPA interpolations.

Example 9 Define the vectors x_0, x_1, x_2 ∈ ℝ², spanning a triangle with base length h > 0 and height k > 0. The triangle is equilateral if 2k = √3 h. Now consider two different orderings of the vertices.
For C_0 := (x_0, x_1, x_2) one has the shape-matrix X_0 with inverse X_0^{-1}; the matrix X_0^{-1}(X_0^{-1})ᵀ has two positive eigenvalues, the larger of which equals ‖X_0^{-1}‖_2². Now swap the first two vertices and set C_1 := (x_1, x_0, x_2), describing the same simplex, but with a different order of the vertices. The shape-matrix becomes X_1

with inverse X_1^{-1}, and X_1^{-1}(X_1^{-1})ᵀ has different eigenvalues. One clearly sees that ‖X_0^{-1}‖_2 ≠ ‖X_1^{-1}‖_2. Later it will be shown that a convenient sufficient condition for the convergence ‖∇V − ∇W‖_2 → 0 for any W ∈ C²(S; ℝ) and its CPA interpolation V is that for a constant d > 0 the bound (7) holds; Lemma 13 will show that (7) is essentially independent of the chosen ordering of the vertices. In our example, depending on h and k: from the formula for d_0, one sees that condition (7) translates with X := X_0 to the condition (8) for some constants L, H. Alternatively, one can use norm equivalence to see that (7) with X := X_1 delivers again the same condition (8). Let us link these considerations to estimates on the gradient of a Lyapunov function and its CPA interpolation. Consider two potential Lyapunov functions.
As the first potential Lyapunov function consider W_1(x, y) = x² + y² with gradient ∇W_1(x, y) = (2x, 2y)ᵀ. By formula (5) the gradient ∇V_1 of its CPA interpolation V_1 on S can be computed, from which an estimate for ‖∇V_1 − ∇W_1(x, y)‖_2 follows. Similarly, for the potential Lyapunov function W_2(x, y) = (x + y)² with gradient ∇W_2(x, y) = (2x + 2y, 2x + 2y)ᵀ one obtains an analogous estimate for the gradient ∇V_2 of its CPA interpolation V_2 on S.
In both cases one sees that a sufficient condition for ‖∇V_i − ∇W_i(x, y)‖_2 → 0, i = 1, 2, when h → 0 and k → 0, is that kh^{-1} is bounded, which corresponds to the condition L > 0 in (8).
To prove that the gradient ∇V of the CPA interpolation V of the C² Lyapunov function W approximates ∇W arbitrarily well for appropriately small simplices, let us first consider a fixed simplex S := coset(C) ∈ T, C = (x_0, x_1, …, x_n). In the CPA algorithm certain linear constraints are to be fulfilled at the vertices x_i of the simplex S. Note the estimate
‖∇V − ∇W(x_i)‖_p ≤ ‖∇V − ∇W(x_0)‖_p + ‖∇W(x_i) − ∇W(x_0)‖_p
for every i = 0, 1, …, n. Let us first consider the term ‖∇W(x_i) − ∇W(x_0)‖_p. Consider the continuously differentiable function g(t) := ∇W(t(x_i − x_0) + x_0) − ∇W(x_0). Then, by the Mean Value Theorem, one obtains for some s ∈ (0, 1), and with H_W : ℝ^n → ℝ^{n×n} denoting the Hessian matrix of W, that
‖∇W(x_i) − ∇W(x_0)‖_p ≤ ‖H_W(x_0 + s(x_i − x_0))‖_p ‖x_i − x_0‖_p.
Thus, ‖∇W(x_i) − ∇W(x_0)‖_p can be made arbitrarily small by using a simplex S with small enough diameter. The term ‖∇V − ∇W(x_0)‖_p is more problematic and is not necessarily small for a small simplex S, as was demonstrated in Example 9. Let us analyse this in more detail and derive sufficient conditions. By Taylor's Theorem one has for every j = 1, 2, …, n that
W(x_j) − W(x_0) = ∇W(x_0) ⋅ (x_j − x_0) + ½ (x_j − x_0)ᵀ H_W(z_j)(x_j − x_0)   (9)
for some z_j on the line segment between x_0 and x_j. Thus, with
v := (W(x_1) − W(x_0), W(x_2) − W(x_0), …, W(x_n) − W(x_0))ᵀ   (10)
one gets ∇V = X_C^{-1} v. Note that the j-th component of the vector v − X_C ∇W(x_0), see (10), is given by ½ (x_j − x_0)ᵀ H_W(z_j)(x_j − x_0) and can thus be bounded using (9) by ½ max_{z∈S} ‖H_W(z)‖_2 diam(S)². Thus
‖∇V − ∇W(x_0)‖_p ≤ ‖X_C^{-1}‖_p ‖v − X_C ∇W(x_0)‖_p
and, with a constant c > 0 depending only on n and p,
‖∇V − ∇W(x_0)‖_p ≤ c max_{z∈S} ‖H_W(z)‖_2 ‖X_C^{-1}‖_p diam(S)².
Above it was shown that the Lyapunov function W is approximated arbitrarily well in the C¹ norm on the simplex S given that diam(S) and ‖X_C^{-1}‖_p diam(S)² are small. Note in particular that on a compact set K ⊂ ℝ^n one has max_{z∈K} ‖H_W(z)‖_2 < ∞, and the CPA interpolation V ∈ CPA[T] of the Lyapunov function W is arbitrarily close to W in the C¹ norm for any triangulation with ordered vertices T with sufficiently small diam(T) and max_{S_ν∈T} ‖X_{C_ν}^{-1}‖_p diam(T)².
Because of this, the proof in [21] that the CPA method always succeeds in computing a Lyapunov function if one exists uses a sequence of finite triangulations T_k where the simplices become smaller, i.e. diam(T_k) → 0 as k → ∞, but also such that max_{S_ν∈T_k} ‖X_{C_ν}^{-1}‖_p diam(T_k)² → 0 as k → ∞, or, as a sufficient condition, that the degeneracy of the T_k is bounded uniformly in k. Now note that when the simplex S := coset(C) is scaled down, i.e. the vertices of C (or S) are multiplied with a number 0 < s < 1, then X_{sC} = s X_C and diam(sS) = s diam(S), and hence
‖X_{sC}^{-1}‖_p diam(sS)² = s ‖X_C^{-1}‖_p diam(S)².   (11)
This leads to the following strategy of obtaining a suitable sequence of triangulations T_k for proving that the algorithm in [21] succeeds in computing a Lyapunov function on any compact set C that is contained in the basin of attraction of the equilibrium at the origin. For simplicity some adaptations that have to be made close to the equilibrium are ignored, cf. [21] for the details. The starting point is a triangulation T_0 with D_{T_0} = ℝ^n that is uniformly regular as defined in Definition 10 below. Then an adequate sequence of triangulations T_k is generated from the uniformly regular triangulation T_0. For this fix a constant s fulfilling 0 < s < 1 and define T_k := {s^k S_ν : S_ν ∈ T_0}. Then by (11) one has diam(T_k) ≤ s^k diam(T_0) and with d* as in Definition 10 the degeneracy of T_k is upper bounded by d*. It follows that V and ∇V approximate W and ∇W arbitrarily closely on C with increasing k.
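The scaling relation (11) is easy to confirm numerically (toy example, helper name ours):

```python
import numpy as np

def quality(C, p=2):
    """The quantity ||X_C^{-1}||_p * diam(S)^2 for a simplex given by its vertices."""
    C = np.asarray(C, dtype=float)
    X = C[1:] - C[0]
    diam = max(np.linalg.norm(a - b) for a in C for b in C)
    return np.linalg.norm(np.linalg.inv(X), p) * diam ** 2

C = np.array([(0.0, 0.0), (1.0, 0.0), (0.4, 0.7)])
s = 0.5
# scaling the vertices by s scales the quantity by s, as in (11)
assert np.isclose(quality(s * C), s * quality(C))
```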
Definition 10 Let T = {S_ν}_{ν∈L} be a triangulation with ordered vertices, S_ν = coset(C_ν), where the C_ν are the associated n-tuples of vertices.
1. The degeneracy of T is defined to be the value sup_{ν∈L} ‖X_{C_ν}^{-1}‖_2 diam(S_ν). 2. The triangulation with ordered vertices T is said to be (h, d)-bounded for constants 0 < h, d < ∞, if diam(T) < h and the degeneracy of T is bounded from above by d.

3. The triangulation with ordered vertices T is said to be uniformly regular if it is (h, d)-bounded for some constants 0 < h, d < ∞. 4. Let T* be a triangulation (not with ordered vertices) that consists of the same simplices as T and assume that T is uniformly regular. Then T* is said to be uniformly regular.

Remark 11
Some comments on the last definition are in order:
1. p = 2 was used in the definition of the degeneracy to fix the numerical value, but in principle any 1 ≤ p ≤ ∞ can be used to the same effect because of the norm equivalences in ℝ^{n×n}.
2. Obviously all triangulations consisting of a finite number of simplices are uniformly regular and this concept is only interesting for infinite triangulations. The triangulation in (12) is a triangulation of a compact set and therefore finite; however, the algorithm to compute CPA Lyapunov functions uses a sequence of finer and finer triangulations arising from scaling down an infinite triangulation, and this infinite triangulation should be uniformly regular.
3. An equivalent condition for a uniformly regular triangulation with ordered vertices T is that there exist constants 0 < h*, d* < ∞ such that diam(T) ≤ h* and sup_{S_ν∈T} κ_2(X_{C_ν}) ≤ d*, where X_{C_ν} is the shape-matrix corresponding to the simplex S_ν ∈ T. This is shown in Lemma 12 below.
4. A priori it is not obvious that uniform regularity is properly defined for triangulations that do not have ordered vertices. This, however, is proved in Lemma 13, and indeed if T is a triangulation with ordered vertices that has degeneracy d, then all triangulations with ordered vertices consisting of the simplices of T will have degeneracy no larger than d* := d(1 + d√(n − 1)).
5. An example of a uniformly regular triangulation is the standard triangulation from Example 1; see e.g. [35], where it is shown that it is uniformly regular. However, there are many more examples of uniformly regular triangulations with useful approximate symmetries that can be adapted to the system (1) at hand [40,41].
As explained above, the success of the CPA algorithm to compute Lyapunov functions is shown using a sequence of triangulations T k such that each triangulation T k is (h k , d) -bounded, where h k → 0 as k → ∞ and d > 0 is a constant independent of k.
Our first main result is Lemma 12, which shows that the concept of (h, d)-bounded triangulations can equivalently be formulated in terms of the norm and the condition number of the shape-matrices of the triangulation.

Lemma 12
Let C = (x_0, x_1, …, x_n) be an n-tuple of affinely independent vectors in ℝ^n, X_C be its corresponding shape-matrix, and S = coset(C) the corresponding simplex. Then
(1/2) diam(S) ≤ ‖X_C‖_2 ≤ √n diam(S)
and
κ_2(X_C)/√n ≤ ‖X_C^{-1}‖_2 diam(S) ≤ 2 κ_2(X_C). ◻

The next lemma shows that the concept of a uniformly regular triangulation is properly defined for a triangulation (not with ordered vertices). This is shown by demonstrating that if a triangulation with ordered vertices T in ℝ^n is (h, d)-bounded for some particular ordering of the vertices of the simplices, then it is (h, d*)-bounded for any ordering, with d* = d(1 + d√(n − 1)).
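The relations of Lemma 12 can be checked on random triangles; the constants 1/2, √n and 2 hold since the rows of X_C are the differences x_i − x_0 (our own numerical check):

```python
import numpy as np

# Random triangles in R^2: check
#   diam(S)/2 <= ||X_C||_2 <= sqrt(n)*diam(S)
#   kappa_2(X_C)/sqrt(n) <= diam(S)*||X_C^{-1}||_2 <= 2*kappa_2(X_C)
rng = np.random.default_rng(1)
n = 2
checked = 0
for _ in range(100):
    C = rng.standard_normal((n + 1, n))      # vertices x_0, x_1, x_2
    X = C[1:] - C[0]                         # shape-matrix
    if abs(np.linalg.det(X)) < 1e-6:
        continue                             # skip (nearly) degenerate samples
    diam = max(np.linalg.norm(a - b) for a in C for b in C)
    nX = np.linalg.norm(X, 2)
    nXinv = np.linalg.norm(np.linalg.inv(X), 2)
    kappa = nX * nXinv
    assert diam / 2 <= nX + 1e-9
    assert nX <= np.sqrt(n) * diam + 1e-9
    assert kappa / np.sqrt(n) <= diam * nXinv + 1e-9
    assert diam * nXinv <= 2 * kappa + 1e-9
    checked += 1
assert checked > 50
```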

Lemma 13
Let T = {S_ν}_{ν∈L} be an (h, d)-bounded triangulation with ordered vertices, S_ν = coset(C_ν), where the C_ν are the associated n-tuples of vertices. Assume T* = {S_ν}_{ν∈L} is a triangulation consisting of the same simplices as T, but with a (possibly) different ordering of the vertices, i.e. a different set of n-tuples C*_ν associated to the simplices. Then T* is (h, d*)-bounded with d* = d(1 + d√(n − 1)).

Proof The case n = 1 is trivial. Thus assume in the following that n ≥ 2.
Let C := (x_0, x_1, …, x_n) be the n-tuple of vertices associated to the simplex S ∈ T. Its shape-matrix is X_C, and C* = (x_{σ(0)}, x_{σ(1)}, …, x_{σ(n)}) is the n-tuple of vertices associated to the simplex S in T*. If σ(0) = 0, then the shape-matrix X_{C*} has the same rows as the shape-matrix X_C, just in a (possibly) different order. Then it follows immediately by Lemma 1 that ‖X_{C*}^{-1}‖_2 = ‖X_C^{-1}‖_2 and thus the claimed bound holds. If σ(0) ≠ 0, then there is an i ∈ {1, 2, …, n} such that σ(i) = 0. Define π ∈ Sym({1, 2, …, n}) through π(i) = σ(0) and π(k) = σ(k) for k = 1, 2, …, i − 1, i + 1, …, n, i.e. k ≠ i, and denote by P_π the permutation matrix defined through P_π e_k = e_{π(k)}. Then X_{C*} can be written as a rank-one correction of P_π X_C, which shows (13). Now by Lemma 4 |det X_{C*}| = |det X_C| ≠ 0; if 1 + vᵀA^{-1}u = 0 held, then X_{C*} would be singular, which is a contradiction because X_{C*} and A^{-1} are invertible and u ≠ 0. Thus one obtains by Lemma 2 a formula for X_{C*}^{-1} and, again by Lemma 2, a formula for its determinant. It is easy to see that the correction term can be bounded in terms of ‖X_C^{-1}‖_2 diam(S) and √(n − 1). Note that R_i^{-1} = R_i = R_iᵀ and recall that P_π^{-1} = P_πᵀ. Thus ‖R_i P_π‖_2 = ‖(R_i P_π)^{-1}‖_2 = 1 and it follows by Lemma 1 that multiplication by R_i P_π does not change the relevant 2-norms. Hence ‖X_{C*}^{-1}‖_2 ≤ ‖X_C^{-1}‖_2 (1 + ‖X_C^{-1}‖_2 diam(S)√(n − 1)), and then ‖X_{C*}^{-1}‖_2 diam(S) ≤ d(1 + d√(n − 1)) = d* follows.
Since the simplex S ∈ T * was arbitrary, it has been shown that T * is (h, d * )-bounded. ◻ The following proposition is a direct consequence of Lemma 13.

Proposition 14
Assume T_k, k ∈ ℕ_0, is a sequence of triangulations with ordered vertices in ℝ^n, such that T_k is (h_k, d_k)-bounded, h_k → 0 as k → ∞, and d_k ≤ d for all k ∈ ℕ_0. Let T*_k, k ∈ ℕ_0, be a sequence of triangulations with ordered vertices such that T*_k consists of the simplices of T_k for every k ∈ ℕ_0, but with a (possibly) different ordering of the vertices of the simplices. Then there is a constant d* < ∞, e.g. d* = d(1 + d√(n − 1)), such that every T*_k is (h_k, d*)-bounded.

By Lemma 13 one can talk about an (h, d)-bounded triangulation T = {S_ν}_{ν∈L} even though the vertices of the simplices are not ordered. The understanding is then that no matter how the vertices of the simplices are ordered, the resulting triangulation with ordered vertices in ℝ^n is (h, d)-bounded in the sense of Definition 10. Thus, one can define uniform regularity for triangulations of which the vertices of the simplices are not necessarily ordered. Let us put this in a formal definition.
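The bound d* = d(1 + d√(n − 1)) of Lemma 13 can be probed numerically; here the degeneracy value of a single simplex is measured, as in Definition 10, by diam(S)·‖X_C^{-1}‖_2 (toy example, helper name ours):

```python
import numpy as np
from itertools import permutations

def diam_times_inv_norm(C):
    """Degeneracy value diam(S) * ||X_C^{-1}||_2 of a single simplex."""
    C = np.asarray(C, dtype=float)
    X = C[1:] - C[0]
    diam = max(np.linalg.norm(a - b) for a in C for b in C)
    return diam * np.linalg.norm(np.linalg.inv(X), 2)

n = 2
C = [(0.0, 0.0), (1.0, 0.0), (0.3, 0.6)]
d = diam_times_inv_norm(C)
d_star = d * (1.0 + d * np.sqrt(n - 1))

# every reordering of the vertices stays below the bound d* of Lemma 13
for perm in permutations(C):
    assert diam_times_inv_norm(perm) <= d_star + 1e-9
```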
Definition 15 (Uniformly regular triangulations) A triangulation T in ℝ^n (not with ordered vertices) is said to be uniformly regular if any, and then every, triangulation with ordered vertices that consists of the same simplices as T is uniformly regular.

Error Estimates in the CPA Algorithm
In this section, some important error estimates for CPA Lyapunov functions V : D_T → ℝ when using linear constraints in the CPA algorithm are discussed. Here T is a triangulation and D_T its domain. Let c : D_T → ℝ be a function that is convex on every simplex S ∈ T; a sufficient, but not necessary, condition is that c is a convex function on D_T. The essential idea of the CPA algorithm is to state constraints at the vertices of a simplex S = coset(C) ∈ T, C = (x_0, x_1, …, x_n), that are linear in the values of a function V ∈ CPA[T], such that (14) holds. Recall that V is affine on S. Further recall that x ∈ S can be written uniquely as a convex combination of the vertices of S, i.e. x = ∑_{i=0}^n λ_i x_i where λ_i ≥ 0 and ∑_{i=0}^n λ_i = 1. Because c is convex on S one gets c(x) ≤ ∑_{i=0}^n λ_i c(x_i), and by writing V(x) = ∑_{i=0}^n λ_i V(x_i) one sees that a sufficient condition for (14) is that the corresponding inequality holds at every vertex. Assume that for a q ∈ [1, ∞] upper bounds E^f_{i,q} are available; such bounds can be computed for nonlinear systems, see e.g. [21,42]. The values E^f_{i,∞} obviously depend on the simplex S ∈ T and are chosen individually for each simplex S ∈ T. Additionally, the E^f_{i,∞} for a given simplex S = co{x_0, x_1, …, x_n} ∈ T depend on the vertex x_0, which one can choose freely among the vertices of the simplex; the inequality (18) will imply (14) for any choice. This is important because often one is interested in an equilibrium at the origin, i.e. f(0) = 0, and c(x) ≥ 0 with c(x) = 0 if and only if x = 0. In this case 0 ∈ S must imply that 0 is a vertex of S and one must choose x_0 = 0. Note that if x_0 = 0, then E^f_{0,1} = E^f_{0,∞} = 0 and (18) with i = 0 is trivially fulfilled.
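The key convexity inequality c(x) ≤ ∑ λ_i c(x_i) used above can be illustrated numerically (our own toy check):

```python
import numpy as np

rng = np.random.default_rng(2)
C = np.array([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])   # a simplex in R^2
c = lambda x: float(np.dot(x, x))                    # convex function c(x) = ||x||^2
vertex_vals = np.array([c(v) for v in C])

# sample random barycentric coordinates and record the worst violation of
# c(x) <= sum_i lambda_i c(x_i); for a convex c it is never positive
worst = -np.inf
for _ in range(100):
    lam = rng.random(3)
    lam /= lam.sum()
    x = lam @ C
    worst = max(worst, c(x) - float(lam @ vertex_vals))
assert worst <= 1e-12
```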

Conclusions and Future Work
The computation of Lyapunov functions using CPA (continuous and piecewise affine functions) fixes a triangulation of the phase space and determines the values of the function at the vertices. If those values satisfy certain inequalities depending on the system (1) in question, then the (unique) CPA interpolation of these values is a Lyapunov function [19][20][21]. This method can be used in two ways: either the values are determined by solving a linear optimisation problem, or the method is used to verify values that have been found with a different method.
This paper addressed two aspects of the method: the method requires a sequence of simplices that are not degenerate. The degeneracy so far was dependent on the ordering of the vertices in the simplices. The first contribution of this paper is to eliminate the dependence of the degeneracy on the ordering of the vertices of the simplices in the triangulation. Thus, the degeneracy can be defined for the simplices as geometrical objects. Further, a characterization of the degeneracy in terms of the condition number of the shape-matrices was provided.
The second contribution is to generalise the error estimates used in the CPA method to general p-norms. While the cases p = 1 and p = ∞ are particularly useful as they result in linear optimisation problems, any case p ∈ (1, ∞) can be useful to verify Lyapunov function candidates that have been computed with a different method.
For future work, it would be very interesting to investigate if one can use a Lyapunov function candidate computed by a non-exact method, i.e. a numerical approximation to a Lyapunov function that might fail to fulfill the conditions for a Lyapunov function in some areas, as a starting point in solving the linear program to generate a true CPA Lyapunov function for the system in question. Additionally, the localization of the area where the decrease condition of a Lyapunov function holds true for a complete Lyapunov function candidate, generated as in [43][44][45][46], would constitute an important step in algorithmically localizing chain-recurrent sets [47][48][49] in dynamical systems.

Declarations
Conflict of interest The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.