VERIFICATION ESTIMATES FOR THE CONSTRUCTION OF LYAPUNOV FUNCTIONS USING MESHFREE COLLOCATION

Abstract. Lyapunov functions are functions with negative derivative along solutions of a given ordinary differential equation. Moreover, sub-level sets of a Lyapunov function are subsets of the domain of attraction of the equilibrium. One of the numerical construction methods for Lyapunov functions uses meshfree collocation with radial basis functions (RBF). In this paper, we propose two verification estimates combined with this RBF construction method to ensure that the constructed function is a Lyapunov function. We show that this combination of the RBF construction method and the verification estimates always succeeds in constructing and verifying a Lyapunov function for nonlinear ODEs in R^d with an exponentially stable equilibrium.


1.
Introduction. The determination of the domain of attraction of an equilibrium is an important task in the analysis of dynamical systems, arising in many practical applications. Since it is difficult to determine the exact domain of attraction analytically, researchers have been seeking numerical algorithms to determine subsets of the domain of attraction, see [6]. Most of these methods for computing domains of attraction are based on Lyapunov functions, which are functions that decrease along trajectories of the dynamical system. Sublevel sets of Lyapunov functions are positively invariant subsets of the domain of attraction. The construction of such Lyapunov functions, however, is very challenging. One of the numerical methods to compute Lyapunov functions is the RBF (radial basis function) method, a special case of meshfree collocation. It considers a particular Lyapunov function satisfying a linear PDE and approximates it using meshfree collocation [3,7]. For this method, a set of scattered collocation points is used to find an approximation to the solution of the linear PDE, which is computed by solving a linear system of equations; for more details see Section 3. A refinement algorithm for this method, based on Voronoi diagrams, was proposed in [12].
Error estimates show that the RBF method always constructs a Lyapunov function if the collocation points are dense enough and placed in the domain of attraction [7]. However, since neither assumption can be verified directly in a given example, we need a-posteriori estimates to check whether the constructed function is indeed a Lyapunov function.
In [5], such a verification was proposed by interpolating the RBF approximant by a continuous, piecewise affine function (CPA), which is affine on each simplex of a fixed triangulation. While this method provides a rigorous verification of the Lyapunov function properties, the function is not the (smooth) RBF approximant, but its interpolation, which is only continuous. In [8], the CPA method was also used to verify a Lyapunov function, constructed by approximating the Lyapunov function proposed by Yoshizawa at the vertices of a fixed triangulation.
In this paper, we propose a rigorous verification that the function constructed with the RBF method is a Lyapunov function. We will derive two verification estimates which are based on Taylor approximations and rely on the first and second derivatives of the orbital derivative. For the second derivatives, we need to consider a triangulation of R^d. The verification estimate based on the second derivatives has a higher order of convergence and thus requires fewer points at which the orbital derivative needs to be checked. Hence, it is preferable to the first-order estimate, as is also observed in the example section. The results in this paper are partly included in the second author's PhD thesis [11].
The outline of the paper will be as follows: Section 2 will give a short introduction to dynamical systems and Lyapunov functions. In Section 3 we present the radial basis functions (RBF) collocation method for numerical solutions of PDEs. In Sections 4 and 5 we will derive the verification estimates, using the first and second derivative of the orbital derivative, respectively. In Section 6, we will show that the verification always succeeds, if we choose finer and finer grids. We apply the method to a numerical example in Section 7, before we conclude in Section 8.

2.
Lyapunov functions and domain of attraction. Consider the following autonomous system of differential equations

ẋ = f(x), (1)

where f ∈ C^σ(R^d, R^d), σ ≥ 1 and d ∈ N. We denote by S_t ξ := x(t) the solution of the initial value problem ẋ = f(x), x(0) = ξ ∈ R^d, and we assume that the solution exists for all t ≥ 0. Note that S_t defines a dynamical system on R^d.
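As a concrete illustration of the flow S_t, the following sketch integrates a simple damped oscillator (an illustrative choice, not the example used later in the paper) with the classical Runge–Kutta scheme and observes convergence to the exponentially stable equilibrium x_0 = 0:

```python
import math

# Numerical flow S_t(xi) for xdot = f(x); here f is a damped oscillator,
# an illustrative system with an exponentially stable equilibrium at 0
def f(x):
    return (x[1], -x[0] - x[1])

def flow(xi, t, steps=1000):
    # classical RK4 time stepping from x(0) = xi up to time t
    x, dt = list(xi), t / steps
    for _ in range(steps):
        k1 = f(x)
        k2 = f([x[i] + dt / 2 * k1[i] for i in range(2)])
        k3 = f([x[i] + dt / 2 * k2[i] for i in range(2)])
        k4 = f([x[i] + dt * k3[i] for i in range(2)])
        x = [x[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(2)]
    return x

# trajectories approach the equilibrium x_0 = 0 at an exponential rate
x = flow((1.0, 0.0), 20.0)
print(math.hypot(*x))   # small: the solution has nearly reached x_0
```

The eigenvalues of the linearization have real part −1/2, so after t = 20 the distance to the equilibrium is of order e^{−10}.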
A point x 0 ∈ R d is an equilibrium of (1) if f (x 0 ) = 0. Then, x(t) = x 0 is a constant solution of (1) for all t ≥ 0. The stability of an equilibrium is determined by the behaviour of solutions in its neighborhood. An equilibrium is called stable, if all solutions starting near the equilibrium stay near the equilibrium for all future times. Moreover, it is called asymptotically stable if it is stable and the solutions starting near the equilibrium converge to it as time tends to infinity. It is called exponentially stable, if the rate of convergence to the equilibrium is exponential. For an asymptotically stable equilibrium x 0 we are interested in finding the largest set of initial states from which the trajectories of solutions converge to the equilibrium as time tends to infinity. This set is called the domain of attraction.
Definition 2.1 (Domain of attraction). The domain of attraction of an asymptotically stable equilibrium x_0 is defined by A(x_0) := {ξ ∈ R^d : lim_{t→∞} S_t ξ = x_0}.

Remark 1. The domain of attraction A(x_0) of an asymptotically stable equilibrium x_0 is non-empty and open.
2.1. Lyapunov functions. The method of Lyapunov functions enables us to determine subsets of the domain of attraction of an asymptotically stable equilibrium through sublevel sets of the Lyapunov function. A function V ∈ C¹(R^d, R) is called a Lyapunov function for the equilibrium x_0 if it has a local minimum at x_0 and a negative orbital derivative

V'(x) := ⟨∇V(x), f(x)⟩

in a neighborhood of x_0, where ⟨·, ·⟩ denotes the standard scalar product in R^d.

Remark 2.
The orbital derivative is the derivative along solutions: with the chain rule we have

(d/dt) V(x(t)) = ⟨∇V(x(t)), ẋ(t)⟩ = ⟨∇V(x(t)), f(x(t))⟩ = V'(x(t)).

The following theorem shows how Lyapunov functions are used to find subsets of the domain of attraction; note that the requirement of the local minimum at x_0 is a consequence of the assumptions. The theorem states that sublevel sets of a Lyapunov function are positively invariant subsets of the domain of attraction, see e.g. [3, Theorem 2.24].

Theorem 2.3. Let x_0 ∈ R^d be an equilibrium, V ∈ C¹(R^d, R) and K ⊂ R^d be a compact set with neighbourhood B such that x_0 ∈ K°. Moreover, let K be a sublevel set of V within B, and let V'(x) < 0 for all x ∈ K \ {x_0}, i.e. V is decreasing along solutions in K \ {x_0}. Then K ⊂ A(x_0), K is positively invariant and V is called a (strict) Lyapunov function.

2.2.
Existence of Lyapunov functions. There are several results on converse theorems, ensuring the existence of Lyapunov functions in different contexts, e.g. already in 1949 [10]; for a review see [9]. These converse theorems, however, use the explicit solution of (1) and are thus often not useful to construct a Lyapunov function explicitly. We will approximate the Lyapunov function of the following theorem, see [3,Theorem 2.46].
Theorem 2.4. Let x_0 be an exponentially stable equilibrium of (1). Then there exists a Lyapunov function V, defined on A(x_0), satisfying V'(x) = −‖x − x_0‖₂² for all x ∈ A(x_0). Note that we can replace the right-hand side by other functions, including a negative constant, see also [3], but we will restrict ourselves to the above function in this paper.
3. Construction of Lyapunov functions using radial basis functions. The Lyapunov function V introduced in Theorem 2.4 satisfies a partial differential equation. Therefore, it can be approximated using the radial basis functions collocation method. Note that the approximation itself will be a Lyapunov function outside an arbitrarily small neighborhood of the equilibrium, if its orbital derivative is sufficiently close to that of the Lyapunov function V at all points, which can be ensured using error estimates.
Meshfree collocation, in particular by radial basis functions, is used to approximate multivariate functions and approximately solve partial differential equations [13,2,14]. For a general introduction to meshfree collocation and reproducing kernel Hilbert spaces see [16]. For the application of RBF to the construction of Lyapunov functions, see [3], where details for the following overview of the method can be found, as well as [7].
A radial basis function is a real-valued function whose value depends only on the distance from the origin, i.e., Ψ(x) = ψ(‖x‖₂), where ‖·‖₂ denotes the Euclidean norm in R^d. There is a one-to-one correspondence between a radial basis function, or more generally a kernel, and its reproducing kernel Hilbert space (RKHS). The approximate solution of the PDE will be a norm-minimal interpolant in the RKHS; in our brief overview, however, we do not discuss this relation further, the interested reader is referred to [7]. Now let us consider a general linear partial differential equation of the form

Lu(x) = r(x) for all x ∈ Ω, (4)

where Ω is a bounded domain with Lipschitz continuous boundary and L is a linear differential operator of the form

Lu(x) = Σ_{|α|≤m} c_α(x) D^α u(x), (5)

where α ∈ N_0^d is a multi-index. In our case, the differential operator L will be given by the orbital derivative of a function V with respect to system (1), namely

LV(x) = V'(x) = ⟨∇V(x), f(x)⟩. (6)

The operator L in (6) is a first order differential operator of the form (5). The approximant v is of the form

v(x) = Σ_{k=1}^N β_k (δ_{x_k} ∘ L)^y Ψ(x − y), (7)

where Ψ(x) is the fixed radial basis function, δ is Dirac's delta-operator, defined by δ_{y_0} g = g(y_0), and the superscript y denotes the application of the operator with respect to the variable y.
The coefficients β_k are determined by the interpolation conditions Lv(x_j) = r(x_j) for all x_j ∈ X_RBF; this leads to a linear system for β = (β_k)_{k=1,...,N}:

Aβ = α. (8)
The matrix entries of A = (a_{jk})_{j,k=1,...,N} are given by a_{jk} = (δ_{x_j} ∘ L)^x (δ_{x_k} ∘ L)^y Ψ(x − y), and the right-hand side α = (α_j)_{j=1,...,N} is given by α_j = r(x_j). Since the points x_j are pairwise distinct and not equilibria, the symmetric matrix A is positive definite, so in particular non-singular, see [7]. Hence, (8) has a unique solution β.
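To make the construction concrete, the following sketch sets up and solves a small collocation system Aβ = α in pure Python. For simplicity it interpolates function values with a Gaussian kernel rather than applying the orbital-derivative operator L on both sides; the point set, kernel parameter and data are illustrative assumptions, not the paper's setup:

```python
import math

def kernel(x, y, eps=4.0):
    # Psi(x - y) = exp(-eps^2 ||x - y||^2), a positive definite RBF
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-eps ** 2 * d2)

def solve(A, b):
    # Gaussian elimination with partial pivoting; A is symmetric
    # positive definite here, so the system is uniquely solvable
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# collocation points X_RBF and right-hand side values alpha_j
X = [(i / 4.0, j / 4.0) for i in range(5) for j in range(5)]
alpha = [math.sin(x1) * x2 for (x1, x2) in X]

# interpolation matrix a_jk = Psi(x_j - x_k); solve A beta = alpha
A = [[kernel(xj, xk) for xk in X] for xj in X]
beta = solve(A, alpha)

def v(x):
    # v(x) = sum_k beta_k Psi(x - x_k)
    return sum(b * kernel(x, xk) for b, xk in zip(beta, X))

# the interpolant reproduces the data at the collocation points
err = max(abs(v(xj) - aj) for xj, aj in zip(X, alpha))
print(err)
```

In the actual method the matrix entries additionally involve derivatives of Ψ through the operator L, but the linear-algebra step is the same.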
Finally, we calculate the approximant v(x) and its orbital derivative v'(x) by evaluating and taking the orbital derivative of (7):

v(x) = Σ_{k=1}^N β_k ⟨x_k − x, f(x_k)⟩ ψ_1(‖x − x_k‖₂), (9)

v'(x) = Σ_{k=1}^N β_k [⟨x − x_k, f(x)⟩ ⟨x_k − x, f(x_k)⟩ ψ_2(‖x − x_k‖₂) − ⟨f(x), f(x_k)⟩ ψ_1(‖x − x_k‖₂)], (10)
where ψ_1 and ψ_2 are defined as ψ_1(r) := ψ'(r)/r and ψ_2(r) := ψ_1'(r)/r for r > 0, and we assume that ψ_1 and ψ_2 can be continuously extended to 0. In the following we mainly use a Wendland function φ_{l,k} as radial basis function, introduced by Wendland [15] as a compactly supported radial basis function which is polynomial on its support. The corresponding RKHS is a Sobolev space W^τ_2(Ω); the space of functions is identical, while the norms are equivalent. If the smoothness parameter k, defined below, is sufficiently large, then ψ_1 and ψ_2 can be continuously extended to 0. In particular, we will later use ψ(r) := φ_{6,4}(cr) and ψ(r) := φ_{7,5}(cr), see Appendix A.1.
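The following sketch illustrates the functions ψ_i for the simplest Wendland function φ_{3,1}(r) = (1 − r)⁴(4r + 1) on [0, 1] (chosen for brevity; the paper uses the smoother φ_{6,4} and φ_{7,5}). Here ψ_1(r) = φ'(r)/r = −20(1 − r)³ extends continuously to 0, while ψ_2 already does not, which is why a sufficiently large smoothness parameter k is needed:

```python
def phi_3_1(r):
    # Wendland function phi_{3,1}(r) = (1-r)^4 (4r+1) on [0,1], 0 outside;
    # compactly supported and polynomial on its support
    if r >= 1.0:
        return 0.0
    return (1.0 - r) ** 4 * (4.0 * r + 1.0)

def psi_1(r):
    # psi_1(r) = phi'(r)/r; for phi_{3,1}: phi'(r) = -20 r (1-r)^3,
    # so psi_1(r) = -20 (1-r)^3 extends continuously to r = 0
    if r >= 1.0:
        return 0.0
    return -20.0 * (1.0 - r) ** 3

def dphi(r, h=1e-6):
    # central finite-difference quotient of phi, used as a cross-check
    return (phi_3_1(r + h) - phi_3_1(r - h)) / (2 * h)

# psi_1 agrees with phi'(r)/r away from 0 and stays bounded as r -> 0
max_err = max(abs(psi_1(r) - dphi(r) / r) for r in (0.1, 0.3, 0.5, 0.9))
print(phi_3_1(0.0), psi_1(0.0), max_err)
```

Note that ψ_2(r) = ψ_1'(r)/r = 60(1 − r)²/r blows up as r → 0 for this function, so φ_{3,1} would not satisfy the assumptions of the second-derivative estimates below.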
The following error estimate was given in [7, Corollary 4.11]; note that Ω can be a general domain. W^τ_2(Ω) denotes the usual Sobolev space on Ω ⊂ R^d with norm ‖·‖_{W^τ_2(Ω)}. Theorem 3.1. Let k ∈ N if d is odd or k ∈ N\{1} if d is even and fix l := ⌊d/2⌋ + k + 1; we use the radial basis function ψ(r) = φ_{l,k}(cr) with c > 0, where φ_{l,k} denotes the Wendland function. Set τ = k + (d + 1)/2 and σ = ⌈τ⌉. Consider the dynamical system defined by (1), where f ∈ C^σ(R^d, R^d). Let x_0 be an exponentially stable equilibrium of (1) and let f be bounded in A(x_0). Let Ω ⊂ A(x_0) be a bounded domain with Lipschitz continuous boundary.
The reconstruction v of the Lyapunov function V with respect to the operator (6) and a set X_RBF = {x_1, ..., x_N} ⊂ Ω \ {x_0} of pairwise distinct collocation points satisfies

|V'(x) − v'(x)| ≤ C h_RBF^{k−1/2} ‖V‖_{W^τ_2(Ω)} for all x ∈ Ω, (13)

where h_RBF := sup_{x∈Ω} min_{x_j∈X_RBF} ‖x − x_j‖₂ denotes the fill distance.
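The fill distance can be approximated numerically by maximizing the distance to the nearest collocation point over a dense sample of Ω. The sketch below does this for an illustrative 3×3 collocation grid on Ω = [0, 1]²:

```python
import math

def fill_distance(points, samples):
    # h = sup_{x in Omega} min_j ||x - x_j||_2, approximated by taking
    # the sup over a dense finite sample of Omega
    def nearest(x):
        return min(math.dist(x, p) for p in points)
    return max(nearest(s) for s in samples)

# collocation points on a coarse 3x3 grid over Omega = [0,1]^2
X = [(i / 2.0, j / 2.0) for i in range(3) for j in range(3)]

# dense sample of Omega; step 0.01 contains the sub-square centers,
# where the fill distance is attained for this grid
samples = [(i / 100.0, j / 100.0) for i in range(101) for j in range(101)]

h = fill_distance(X, samples)
print(h)   # 0.25 * sqrt(2), attained at the sub-square centers
```

For a grid with spacing 1/2 the fill distance is (1/4)√2, the distance from a sub-square center to its nearest grid point.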
The error estimate (13) implies that the approximant v of the Lyapunov function V is actually a Lyapunov function outside a small neighborhood of the equilibrium, i.e. satisfies v'(x) < 0 for all x ∈ Ω with ‖x − x_0‖₂² > ε, if the collocation points are dense enough. Let us make this more precise: choosing the fill distance h_RBF so small that the right-hand side of (13) is bounded by some ε > 0 yields v'(x) ≤ V'(x) + ε = −‖x − x_0‖₂² + ε < 0 for all x ∈ Ω with ‖x − x_0‖₂² > ε. Remark 3. The approximation may fail to have a negative orbital derivative near the equilibrium x_0, as the error estimate requires ‖x − x_0‖₂² > ε. One can use the Lyapunov function of the linearized system, the so-called local Lyapunov function, to deal with this small neighborhood of x_0, for details see [3], or a modified method, see [4]. In this paper, we will not deal with this local problem in more detail, but we exclude a small neighborhood E ⊃ B_{√ε}(x_0) of x_0 from our considerations.
According to the error estimate (13) for Wendland functions, the approximant v is a Lyapunov function, i.e. has negative orbital derivative, if the RBF collocation points are dense enough. However, we cannot tell in advance how dense the collocation points should be (i.e. how small the fill distance h RBF should be), since this error estimate depends on unknown quantities such as V . Therefore, we need a verification of the negativity of the orbital derivative of a constructed function v on a compact set K. Moreover, these a-posteriori estimates will also be applicable to other radial basis functions.
For our verification estimates we compute the orbital derivative on finitely many points of a checking grid Y od and then use estimates that are based on a Taylor approximation and rely on the first and second derivatives of the orbital derivative. For the derivation of these estimates we make use of the special form of the RBF approximant and its orbital derivative, (9) and (10), respectively.
In the following we consider two point sets:
1. X_RBF = {x_1, x_2, ..., x_N}, the collocation points for the calculation of the approximant v of the solution of V'(x) = −‖x − x_0‖₂², as discussed above.
2. Y_od = {y_1, y_2, ..., y_M}, used to check the sign and value of the orbital derivative of the constructed Lyapunov function on a different (usually, but not necessarily, finer) grid than X_RBF.
In Sections 4 and 5 we will present an estimate based on the first and second derivative of v', respectively.

4.

First Estimate: Using the first derivative of the orbital derivative. We derive the first verification estimate by using the following consequence of the mean value theorem.

Lemma 4.1. Let η ∈ C¹(R^d, R) and let K̃ ⊂ R^d be a compact set. Then for all x, y ∈ K̃ such that the line segment between x and y lies in K̃ we have

η(x) ≤ η(y) + ‖x − y‖₁ max_{z∈K̃} max_{j=1,...,d} |∂η(z)/∂x_j|. (15)
Let us define the sets of discrete points S_{h_1} and C_{h_1}; the latter is chosen such that the ‖·‖₁-balls with centers in C_{h_1} cover a large area.

Definition 4.2. For h_1 > 0 define the square grid S_{h_1} := h_1 Z^d and the centered square grid C_{h_1} := S_{h_1} ∪ (h_1 Z^d + (h_1/2)(1, 1, ..., 1)^T).

Note that C_{h_1} adds the center of each d-dimensional square of S_{h_1}, see Figure 1.

Figure 1. The two point sets S_{h_1} (square grid) and C_{h_1} (centered square grid) in R²; h_1 is the distance between the square grid points in both directions.
For our estimate, we start with a compact set K ⊂ R^d and choose a set of finitely many checking points Y_od ⊂ K. We then choose K̃ as the set of points with a ‖·‖₁-distance of at most h* to one of the points in Y_od. We use Lemma 4.1 and set η = v' in (15), where v' : R^d → R is the orbital derivative of an approximant v. For each x ∈ K̃ we choose a point y ∈ Y_od with ‖x − y‖₁ ≤ h*. Note that by construction of K̃ the line segment between x and y lies in K̃. Then

v'(x) ≤ max_{y∈Y_od} v'(y) + h* max_{z∈K̃} max_{j=1,...,d} |∂v'(z)/∂x_j|. (17)

The sets K̃ and K in Proposition 1 are nearly the same sets: at the boundary of K the sets will be slightly different, but "inside" they are equal. Let us make this precise by showing that the closed 1-norm balls of radius h* = (d/4) h_1 with centers in C_{h_1} cover the hypercube [0, h_1]^d.

Proof. For a 2-dimensional illustration see Figure 2. By symmetry it suffices to consider a point (x_1, x_2, ..., x_d) with 0 ≤ x_i ≤ h_1/2 for all i ∈ {1, ..., d}. We want to show that this point lies either in the closed h*-ball around the grid point at the origin or in the closed h*-ball around the center point (h_1/2)(1, ..., 1)^T; the argumentation is similar for points near the other corners. Assume that Σ_{i=1}^d (h_1/2 − x_i) > (d/4) h_1, i.e. the point (x_1, x_2, ..., x_d) is outside the closed 1-norm ball of the center grid point. Then Σ_{i=1}^d x_i < (d/4) h_1, which means that the point (x_1, x_2, ..., x_d) lies inside the closed 1-norm ball of the grid point at the origin. On the other hand, the point (h_1/4)(1, 1, ..., 1)^T has 1-norm distance (d/4) h_1 to both of these grid points, so h* cannot be chosen smaller.
This shows that the set K̃ in Proposition 1 only differs from K at the boundary; it may be larger or smaller than K, depending on the shape of K.
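The covering radius of Lemma 4.3 can be checked numerically. The sketch below samples the square [0, h₁]² in d = 2 and confirms that no point is farther than h* = (d/4)h₁ in the 1-norm from the nearest point of C_{h₁}:

```python
# Empirical check of the covering radius h* = (d/4) h1 in d = 2:
# every point of [0, h1]^2 lies within 1-norm distance h* of a point
# of C_{h1} (the corners of the square from S_{h1} plus its center)
h1 = 0.2
d = 2
centers = [(0.0, 0.0), (h1, 0.0), (0.0, h1), (h1, h1), (h1 / 2, h1 / 2)]

def dist1(x, y):
    # 1-norm distance
    return sum(abs(a - b) for a, b in zip(x, y))

worst = 0.0
n = 80
for i in range(n + 1):
    for j in range(n + 1):
        p = (h1 * i / n, h1 * j / n)
        worst = max(worst, min(dist1(p, c) for c in centers))

print(worst)  # equals (d/4) * h1, attained e.g. at (h1/4, h1/4)
```

The worst sampled point realizes exactly the radius (d/4)h₁ from the lemma, confirming that h* cannot be chosen smaller.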
To use the estimate (17), we can compute max_{y∈Y_od} v'(y), as Y_od consists of finitely many points. In the following, we will estimate max_{z∈K̃} max_{j=1,...,d} |∂v'(z)/∂x_j|, using the specific form of the RBF approximant, in Proposition 2.
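The following sketch applies an estimate of the form (17) in one dimension to a toy stand-in g for the orbital derivative (both g and its derivative bound are illustrative assumptions): negativity of g on the finite checking grid, plus the gradient term, certifies negativity on the whole interval:

```python
import math

# Toy stand-in for the orbital derivative: g(x) = -1 + 0.5 sin(10 x),
# with |g'(x)| <= 5 =: D1 on [0, 1] (illustrative assumption)
def g(x):
    return -1.0 + 0.5 * math.sin(10.0 * x)

D1 = 5.0            # a priori bound on the first derivative
n = 50
h1 = 1.0 / n        # spacing of the checking grid Y_od
h_star = h1 / 2     # every x in [0, 1] is within h* of a grid point

Y_od = [i * h1 for i in range(n + 1)]

# estimate of the form (17): g(x) <= max_y g(y) + h* * D1 on [0, 1]
bound = max(g(y) for y in Y_od) + h_star * D1
verified = bound < 0
print(bound, verified)
```

Since the bound is negative, g < 0 is verified everywhere, even though g was only evaluated at 51 points.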

Let us define the following functions and quantities. Starting from ψ_0 := ψ, define recursively ψ_{i+1}(r) := ψ_i'(r)/r for r > 0. For i ∈ {0, ..., i_0}, for which lim_{r→0} ψ_i(r) exists, and k ∈ N_0 also define Ψ_{i,k} := max_{r≥0} |ψ_i(r) r^k|. Note that the quantities F and D_1 in Proposition 2 are finite by assumption and can a priori be estimated from above by hand. β, on the other hand, depends on the computed solution v and is only available a posteriori. The assumptions for ψ_i and Ψ_{i,k} are verified for the Wendland functions φ_{6,4} and φ_{7,5} as well as for the Gaussians in the appendix.
Proposition 2 (The first derivative of the orbital derivative).
Let f ∈ C¹(R^d, R^d) and v ∈ C²(R^d, R) be an RBF approximant of the form (9) with a radial basis function ψ := ψ_0(r) ∈ C³(R⁺_0, R). Let K̃ be a compact set and X_RBF = {x_1, ..., x_N} ⊂ K̃ be a set of N pairwise distinct points which are not equilibria.
Then the first derivative of the orbital derivative v' can be bounded for all x ∈ K̃ and all j ∈ {1, ..., d} in terms of F, D_1 and ‖β‖₁.

Proof. By differentiating v' from (10) with respect to x_j, j ∈ {1, ..., d}, and taking absolute values, we obtain the stated bound. This proves the proposition.
For specific RBFs we have the following results, using the estimates in the appendix.
where F and D 1 were defined in Proposition 2.
For the RBF ψ_0(r) = φ_{7,5}(cr), given by Wendland's compactly supported RBF with c > 0, and for the Gaussian RBF ψ_0(r) = exp(−ε²r²) with ε > 0, we obtain analogous bounds. Summarizing, we can use Proposition 1 to estimate v'(x) by fixing the checking grid Y_od as well as the set K̃, which differs from K only at the boundary due to Lemma 4.3. Using Proposition 2 to estimate the first derivative of the orbital derivative, all quantities on the right-hand side of (17) can now be computed.

5.
Second Estimate: Using the second derivative of the orbital derivative. To derive the estimate using the second derivative, we first need to introduce a triangulation of R^d, see e.g. [5]. We will later use two particular triangulations, one where the vertices are given by S_{h_1} and one where they are given by C_{h_1}, see Definition 4.2.
Definition 5.1. A k-simplex is a set of the form co(x_0, ..., x_k) := {Σ_{i=0}^k λ_i x_i : λ_i ≥ 0, Σ_{i=0}^k λ_i = 1}, where x_0, ..., x_k ∈ R^d are affinely independent (i.e. the vectors x_j − x_0, j = 1, ..., k, are linearly independent) and are called the vertices.
A triangulation in R^d is a set T := {T_ν : ν = 1, 2, ..., N} (or N = ∞) of d-simplices T_ν, such that any two simplices in T intersect in a common face or are disjoint. Note that a face of a d-simplex is a k-simplex, 0 ≤ k < d, so this means that the intersection of two simplices in T is either empty or the set of convex combinations of the common vertices of the two simplices.
We denote the set of all vertices of all simplices by V_T and we say that T is a triangulation of the set D_T := ∪_{T_ν∈T} T_ν. In Lemmas 5.2 and 5.3 we will define the standard and the centered triangulation, respectively. Figure 3 shows the standard and the centered triangulations in R².
Figure 3. Left: the standard triangulation; the vertices (•) are in S_{h_1} and T_i, i = 1, 2, are simplices of the triangulation. Right: the centered triangulation; the vertices (•) are in C_{h_1} and T_i, i = 1, 2, 3, 4, are simplices of the triangulation.

Lemma 5.2. The standard triangulation T_S(h_1) is a triangulation of the set D_{T_S(h_1)} = R^d, and the set of all vertices of all simplices is V_{T_S(h_1)} = S_{h_1}.

Proof. It is clear that any two simplices either do not intersect at all or intersect in a common face if they share k vertices, and also that the set of all vertices is S_{h_1}. We show that the simplices cover R^d: given x ∈ R^d, we choose the simplex by ordering the components of |x| = (|x_i|)_{i=1,...,d}, so x lies in a simplex.
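The standard triangulation restricted to one cube is the Kuhn triangulation: one simplex per permutation of the coordinate directions. The following sketch constructs it and checks that the d! simplices exactly fill the cube (shown for d = 3):

```python
import itertools, math

def kuhn_simplices(d, h=1.0):
    # Kuhn triangulation of the cube [0, h]^d: one d-simplex per
    # permutation sigma, with vertices x_k = h * sum_{j<=k} e_{sigma(j)}
    simplices = []
    for sigma in itertools.permutations(range(d)):
        v = [0.0] * d
        verts = [tuple(v)]
        for j in sigma:
            v = v[:]
            v[j] += h
            verts.append(tuple(v))
        simplices.append(verts)
    return simplices

def volume(verts):
    # volume = |det(x_1 - x_0, ..., x_d - x_0)| / d!
    d = len(verts) - 1
    M = [[verts[i + 1][j] - verts[0][j] for j in range(d)] for i in range(d)]
    det = 1.0
    for col in range(d):                 # determinant via elimination
        piv = max(range(col, d), key=lambda r: abs(M[r][col]))
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            det = -det
        det *= M[col][col]
        for r in range(col + 1, d):
            fac = M[r][col] / M[col][col]
            for c in range(col, d):
                M[r][c] -= fac * M[col][c]
    return abs(det) / math.factorial(d)

tri = kuhn_simplices(3)
total = sum(volume(s) for s in tri)
print(len(tri), total)   # 3! = 6 simplices of volume 1/6 each; total 1
```

Since the simplices tile each cube and the cubes tile R^d, the simplices cover R^d.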
Lemma 5.3. The centered triangulation T_C(h_1), consisting of the simplices T¹_{zJσ} and T²_{zJσ}, is a triangulation of the set D_{T_C(h_1)} = R^d. The set of all vertices of all simplices is V_{T_C(h_1)} = C_{h_1}.
Proof. We have split each simplex of the standard triangulation into two; hence, it is clear that any two simplices either do not intersect at all or intersect in a common face if they share k vertices, and also that the set of all vertices is C_{h_1}.
If r_{σ_1} + r_{σ_d} ≥ 1, then x ∈ T¹_{zJσ}, otherwise x ∈ T²_{zJσ}.
Case 1: r_{σ_1} + r_{σ_d} ≥ 1. In this case we have r_{σ_1} ≥ 1/2, since if we assume that r_{σ_1} < 1/2, then also r_{σ_d} ≤ r_{σ_1} < 1/2 and thus r_{σ_1} + r_{σ_d} < 1, in contradiction to the assumption of the first case. By definition of σ we have λ_i ≥ 0, and x lies in T¹_{zJσ}, where |x| = (|x_i|)_{i=1,...,d}.
Case 2: r_{σ_1} + r_{σ_d} < 1. In this case we have r_{σ_d} ≤ 1/2, since assuming the contrary, r_{σ_1} ≥ r_{σ_d} > 1/2 and thus r_{σ_1} + r_{σ_d} > 1, in contradiction to the assumption of the second case; hence x lies in T²_{zJσ}.
The following proposition is an improvement of [1, Proposition 4.1(b)], obtained by fixing a vertex x_0 of the simplex and using the ‖·‖₁-norm for the distance between vertices as well as the max norm for the Hessian matrix; the latter is the norm which will be estimated in Proposition 4. By replacing the diameter of the simplex with the largest distance from a fixed vertex, we obtain a sharper estimate.
The proposition compares the value of a general function w on a simplex to the convex combination of the values at the vertices of the simplex. Note that, given a simplex, we can choose one of its vertices as x 0 .
Proof. From Taylor's theorem we have, denoting by H(z) the Hessian of w at z,

w(Σ_{i=0}^k λ_i x_i) = w(x_0) + ⟨∇w(x_0), Σ_{i=0}^k λ_i x_i − x_0⟩ + (1/2)⟨Σ_{i=0}^k λ_i x_i − x_0, H(z)(Σ_{i=0}^k λ_i x_i − x_0)⟩

for some z on the line segment between x_0 and Σ_{i=0}^k λ_i x_i. Again, by Taylor's theorem we have for every i = 0, 1, ..., k

w(x_i) = w(x_0) + ⟨∇w(x_0), x_i − x_0⟩ + (1/2)⟨x_i − x_0, H(z_i)(x_i − x_0)⟩

for some z_i on the line segment between x_0 and x_i. Hence, denoting B_S := max_{z∈T} ‖H(z)‖_max = max_{z∈T} max_{r,s=1,...,d} |∂²w(z)/(∂x_r∂x_s)| and using that a simplex is convex, the statement follows.
Using the proposition for w = v' on a simplex T = co(x_0, ..., x_d), we obtain (31). We will estimate both factors in the last term, and first study the last factor in (31). We consider the two triangulations introduced earlier in this section. Recall that for a simplex with vertices x_0, ..., x_d we have defined h_{‖·‖₁} = max_{j=0,...,d} ‖x_j − x_0‖₁, where x_0 is a fixed vertex. For the standard and the centered triangulation, we will determine the optimal choice of the vertex x_0 within a simplex, in the sense that it minimizes h_{‖·‖₁}. It turns out that for even dimensions d the standard triangulation enables us to choose the vertex x_0 as the "middle vertex" with minimal distance to all other vertices, while for odd dimensions it is better to choose the centered triangulation and to let x_0 be the center vertex. We will determine h_{‖·‖₁} in each of these cases. For a more detailed analysis showing that these choices use fewer points in the grid Y_od, see [11].
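The effect of Proposition 3 and of the vertex choice can be observed numerically. The sketch below uses an illustrative test function w on one simplex of the standard triangulation, computes h_{‖·‖₁} for each choice of the fixed vertex x₀, and checks that the deviation of w from the convex combination of its vertex values stays below a crude bound of the form B_S h² (the proposition's sharper constant is not reproduced here; we only check the order):

```python
import math

# vertices of one simplex of the standard triangulation (h1 = 1, d = 2)
verts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]

def w(x, y):
    # illustrative test function; |second partials| <= 1, so B_S = 1
    return math.sin(x) + x * y

B_S = 1.0

def h_1norm(x0):
    # largest 1-norm distance from the fixed vertex x0 to the others
    return max(abs(v[0] - x0[0]) + abs(v[1] - x0[1]) for v in verts)

# choosing the "middle vertex" (1, 0) gives h = 1 instead of h = 2
h = min(h_1norm(v) for v in verts)

# max over barycentric coordinates of |w(sum l_i x_i) - sum l_i w(x_i)|
n = 60
dev = 0.0
for i in range(n + 1):
    for j in range(n + 1 - i):
        l = (i / n, j / n, 1 - (i + j) / n)
        px = sum(li * v[0] for li, v in zip(l, verts))
        py = sum(li * v[1] for li, v in zip(l, verts))
        interp = sum(li * w(*v) for li, v in zip(l, verts))
        dev = max(dev, abs(w(px, py) - interp))

print(h, dev)   # dev stays well below B_S * h^2
```

The same computation with the worst vertex choice (h = 2) would inflate the bound by a factor of four, which illustrates why the choice of x₀ matters.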
The following lemma gives the formula to determine the optimal choice of the vertex x_0 in a simplex T_{zJσ} of the standard triangulation, as well as the corresponding value of h_{‖·‖₁}, where h_1 is the distance between the equally spaced grid points in all directions of space.
Proof. Fix a vertex x_k = R^J z + Σ_{j=1}^k e_{σ(j)} h_1, k = 1, ..., d. The 1-norm distance between x_k and another vertex x_i, i = 0, ..., d, of T_{zJσ} is |k − i| h_1. The maximum over i is minimal for k = ⌈d/2⌉, and we have max_{i=0,...,d} ‖x_i − x_k‖₁ = ⌈d/2⌉ h_1. For the centered triangulation, using the center point as vertex x_0 is an optimal choice in any dimension.
Lemma 5.5. Let T^k_{zJσ}, k = 1, 2, be a simplex of the centered triangulation T_C(h_1). An optimal choice of the vertex x_0 in Proposition 3 is the center point, which minimizes the maximal 1-norm distance to the other vertices. Proof. For T²_{zJσ} we compute the maximal 1-norm distance from the center point to the other vertices and compare it with the corresponding distances for the other choices of vertex. This shows that the center point is an optimal choice. The argument for T¹_{zJσ} is similar.
In even dimensions, it is better to use T S (h 1 ) instead of T C (h 1 ), as we obtain the same fill distance with fewer points; for a detailed comparison see [11]. Now we study the first factor in (31) by following the arguments used in the proof of Proposition 2 and taking a further derivative.

Proposition 4 (The second derivative of the orbital derivative).
Let f ∈ C²(R^d, R^d) and v ∈ C³(R^d, R) be an RBF approximant of the form (9) with a radial basis function ψ := ψ_0(r) ∈ C⁴(R⁺_0, R). Let K̃ ⊂ R^d be a compact set and X_RBF = {x_1, ..., x_N} ⊂ K̃ be a set of N pairwise distinct points which are not equilibria.
Then the second derivative of the orbital derivative v' can be bounded for all x ∈ K̃ and all i, j ∈ {1, ..., d} in terms of F, D_1 and D_2. Proof. We differentiate the right-hand side of equation (23), stated in the proof of Proposition 2; taking the absolute value and simplifying gives the result.
For specific RBFs we have the following results, using the estimates in the appendix.
Corollary 2. For the RBF ψ(r) = φ_{6,4}(cr) given by Wendland's compactly supported RBF with c > 0, for the RBF ψ(r) = φ_{7,5}(cr) with c > 0, and for the Gaussian RBF ψ(r) = exp(−ε²r²) with ε > 0, we obtain explicit bounds in terms of F, D_1 and D_2, which were defined in Proposition 4.
In the following theorem we summarize the results to obtain an estimate involving the second derivatives. To estimate the second derivatives in (39), we can use Proposition 4. As K̃ ⊂ K, we can also take the maximum in Proposition 4 over K rather than K̃.
Theorem 5.6. Let K be a compact set, v ∈ C 3 (R d , R) and f ∈ C 2 (R d , R d ).
For d even let T = {T ν ∈ T S (h 1 ) | T ν ⊂ K}, i.e. the simplices of the standard triangulation with side length h 1 that are fully contained in K. For d odd let T = {T ν ∈ T C (h 1 ) | T ν ⊂ K}, i.e. the simplices of the centered triangulation with side length h 1 that are fully contained in K.
for all x ∈ K̃.
Proof. By construction, each point x ∈ K̃ = D_T lies in a simplex T_ν. Using Proposition 3, see (31), for that simplex, together with Lemma 5.4 or 5.5, respectively, we obtain the statement.
6. Converse theorem. In this section we present an algorithm to choose the sets X_RBF and Y_od recursively and show that the verification will always succeed. We prove the following theorem based on the second estimate with the second derivative, but a similar result holds for the estimate with the first derivative, replacing the requirement on h_1 with a corresponding bound. The theorem shows that refining the sets X_RBF and Y_od in a particular way provides an algorithm which will always terminate and successfully construct and verify a Lyapunov function.
Theorem 6.1. Let K_0 ⊂ A(x_0) be a compact set. Let K = K_0 \ E, Ω = K°, and assume that Ω has a Lipschitz continuous boundary as well as K = Ω̄.
Choose the sets X_RBF and Y_od in the following way: in the κ-th step, κ ∈ N,
• choose X^κ_RBF ⊂ Ω, consisting of N_κ points, such that the fill distance of X^κ_RBF in Ω satisfies h_RBF ≤ 10^{−κ};
• compute the RBF approximation of the form (7) with collocation points X^κ_RBF and a Wendland function as radial basis function, see Theorem 3.1, and denote it by v_κ;
• choose h_1 ≤ 10^{−κ}, follow Theorem 5.6, using the triangulation T = T_C(h_1) or T_S(h_1), and set Y^κ_od = V_T and K̃^κ = D_T.
Then there exists a fixed constant κ* such that for all κ ≥ κ* the right-hand side of (39) is negative.
Proof. Let v be an RBF approximant constructed by approximating the solution V of V'(x) = −‖x − x_0‖₂² with the RBF method. Define κ_1 such that for all κ ≥ κ_1 the right-hand side of the error estimate (13), with C the constant from (13) and fill distance h_RBF ≤ 10^{−κ}, is smaller than ε, where B_{√ε}(x_0) ⊂ E. For κ ≥ κ_1 we thus have v'(x) < 0 for all x ∈ Ω with ‖x − x_0‖₂² > ε. Since Y_od ⊂ K̃ ⊂ K = Ω̄, we have in particular, with (13) and the continuity of v', that max_{y∈Y_od} v'(y) is negative and bounded away from zero. We will show that the right-hand side of (39) is negative for κ large enough: the second-derivative term is bounded by a constant which depends only on f and K, using K̃ ⊂ K. Define κ_2 accordingly; then (40) holds if κ ≥ κ_2. This shows the statement for κ* = max(κ_1, κ_2).
In Theorem 6.1 we have used an equidistant grid as checking grid Y od . One could, however, alternatively start with a coarse checking grid Y od and use (39) to estimate h 1 locally. This would lead to an adaptive refinement of the checking grid.
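One possible realization of this adaptive refinement, assuming a schematic second-order bound of the form v'(x) ≤ v'(y) + B h₁² with a known constant B (both the toy v' and B below are illustrative assumptions), chooses the local spacing as large as the bound permits:

```python
import math

# Toy stand-in for the orbital derivative on [0, 1]
def vprime(x):
    return -0.2 - 0.8 * (x - 0.5) ** 2

B = 2.0   # assumed bound on the second-derivative term in the estimate

def local_h1(y):
    # largest spacing around y for which the schematic bound
    # v'(x) <= v'(y) + B * h1^2 still certifies v'(x) < 0
    # (the factor 0.9 keeps the certified value strictly negative)
    return 0.9 * math.sqrt(-vprime(y) / B) if vprime(y) < 0 else 0.0

# adaptively place checking points: step forward by the locally allowed h1
y, Y_od = 0.0, [0.0]
while y < 1.0:
    step = local_h1(y)
    if step == 0.0:
        break   # cannot certify here: the construction must be refined
    y = min(y + step, 1.0)
    Y_od.append(y)

print(len(Y_od))  # coarse where v' is very negative, finer elsewhere
```

Compared with a uniform grid at the finest locally required spacing, the adaptive grid uses fewer checking points.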

7.
Examples. This section illustrates how to apply these verification estimates to check that the RBF approximant v is a Lyapunov function. The steps of the verification procedure are:
1. Choose the point sets: X_RBF for the construction method, and Y_od according to the respective estimates, where h_1 will be determined later. Calculate the quantities F, D_1 and D_2.
2. Fix a radial basis function Ψ(x) = ψ(‖x‖₂).
3. Approximate the Lyapunov function V, using the RBF construction method, and calculate β.
4. Evaluate the orbital derivative v' at the points of Y_od and apply the verification estimate.
We choose the collocation points and hence have N = 48 collocation points, see Figure 4. We will use the second estimate and will consider the same example with three different radial basis functions, namely the Wendland functions φ_{6,4} and φ_{7,5} and the Gaussians.
Thus, v'(x) < 0 holds for all x ∈ K̃ = K, i.e., v is a Lyapunov function in K̃ = K_0 \ E. Figure 5 shows the constructed Lyapunov function v(x), right, as well as its orbital derivative v'(x), left, which approximates −‖x‖₂² well. When choosing fewer collocation points, such as N = 24, the function v'(x) does not approximate −‖x‖₂² well, see Figure 6, left, and there is an area where v'(x) > 0. This area is displayed in red in Figure 6, right, together with the collocation points.
Remark 6. When using the first-order estimate, a rule of thumb gives us a worse estimate for h_1, so we would need more points in the checking grid Y_od. Indeed, using (24) with the collocation points (42) and the parameters at the start of this section, and noting that the best distribution of points in the checking grid Y_od is the centered configuration, we determine the density of Y_od from (17), assuming K̃ ≈ K. For the right-hand side of (17) to be negative, we need a much smaller h_1 than for the second estimate above, which was h_1 ≤ 1.1191 · 10^{−3}.
We choose h_1 = 1/1000 = 10^{−3}, which satisfies the required bound. Thus, v'(x) < 0 holds for all x ∈ K̃ = K, i.e., v is a Lyapunov function in K̃ = K_0 \ E. The construction of v requires solving a system of N linear equations for β, where N is the number of collocation points in X_RBF. This is very fast compared to the evaluation of v' at all M points of the checking grid, which has complexity O(NM), mainly because M is very large. However, the evaluation at one point is independent of the evaluation at the other points; hence, this can be parallelized and the performance can be improved.
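The O(NM) evaluation loop and its natural decomposition into independent chunks can be sketched as follows (the point sets, coefficients and kernel are illustrative; in practice the chunks would be handed to worker processes, e.g. via concurrent.futures):

```python
import math

# Toy setup: N collocation points with coefficients beta, M checking points
N, M = 50, 2000
X = [(math.cos(k), math.sin(k)) for k in range(N)]
beta = [1.0 / (k + 1) for k in range(N)]
Y = [(2.0 * m / M - 1.0, 0.3) for m in range(M)]

def psi(r):
    return math.exp(-r * r)

def evaluate(y):
    # one evaluation costs O(N); all M evaluations cost O(N * M)
    return sum(b * psi(math.dist(y, x)) for b, x in zip(beta, X))

# the evaluations are independent, so the checking grid can be split
# into chunks and each chunk evaluated by a separate worker
chunks = [Y[i::4] for i in range(4)]
values = [evaluate(y) for chunk in chunks for y in chunk]

print(len(values))  # M values of the approximant on the checking grid
```

Since no evaluation depends on another, the speedup from parallelization is essentially linear in the number of workers, up to scheduling overhead.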
All three radial basis functions performed well in this example; the Gaussian radial basis function shows a larger error between the orbital derivative and −‖x‖₂² at the outer boundary of K, but the derivative is still negative everywhere. The fastest computation time was achieved for φ_{6,4}, closely followed by the Gaussians, and the slowest was φ_{7,5}. This corresponds to the required number of points in the evaluation grid; note, however, that the evaluation time also depends on the number of terms in the radial basis function, and φ_{7,5} has more terms than the others.

8.

Conclusion. In this paper the RBF construction method for Lyapunov functions was improved by providing rigorous verification estimates for the negativity of the orbital derivative of the constructed function. These estimates rely on the first and second derivatives of the orbital derivative, and they provide an estimate on the density of the checking grid, on which the sign and value of the orbital derivative of the constructed function are checked at finitely many points.
Finally, we have proved that the verification estimates together with the RBF method to construct Lyapunov functions provide a successful technique to construct and verify a Lyapunov function for nonlinear ODEs in R d with exponentially stable equilibria.
While the computation of the Lyapunov function is very fast, the verification is slow due to the high number of evaluations at all points of the checking grid. Since the evaluation of v' at one point is independent of the evaluation at other points, the performance can be improved considerably by employing parallel computing. Further future work includes combining the refinement algorithm for the RBF construction [12] with the verification estimates introduced in this paper.
This general method of using Taylor-like estimates and the particular form of the RBF approximants can also be used for a-posteriori estimates in other applications of meshfree collocation.