Incremental quadratic stability

The concept of incremental quadratic stability ($\delta$QS) is very useful in treating systems with persistently acting inputs. To illustrate, if a time-invariant $\delta$QS system is subject to a constant or $T$-periodic input, then all its trajectories exponentially converge to a unique constant or $T$-periodic trajectory, respectively.
By considering the relationship of $\delta$QS to the usual concept of 
quadratic stability, we obtain a useful necessary and sufficient 
condition for $\delta$QS. 
A main contribution of the paper is to consider nonlinear/uncertain systems whose state dependent nonlinear/uncertain terms satisfy an incremental quadratic constraint which is characterized by a set of symmetric matrices we call incremental multiplier matrices. We obtain linear matrix inequalities whose feasibility guarantees $\delta$QS of these systems.
Frequency domain characterizations of $\delta$QS are then obtained from these conditions. 
By characterizing incremental multiplier matrices for many common classes of nonlinearities, we 
demonstrate the usefulness of our results.


1. Introduction. Boundedness and convergence of solutions of nonlinear systems are important issues in system analysis and control design. The distinctive characteristic of the incremental stability approach to these issues lies in the fact that it considers stability in an incremental fashion, that is, it studies the evolution of the state trajectories of a system with respect to each other, rather than with respect to a given nominal trajectory or equilibrium state. Roughly speaking, we examine whether the state trajectories of a given system converge to one another, and if they do, depending on the nature of the system (for example, autonomous or periodic in time), we can conclude that all trajectories converge to a specific type of bounded trajectory (for example, constant or periodic). Incremental stability is an intrinsic property of systems, and not a property of particular solutions or equilibrium points. Therefore, its application requires no prior knowledge of, or assumptions about, the existence and values of specific solutions. This is particularly important when we have systems with inputs, where the attractive trajectory actually depends on the input.

LUIS D'ALTO AND MARTIN CORLESS
Considerations of incremental stability are made in the theory of contraction analysis for nonlinear systems introduced by Lohmiller and Slotine [22,23]. This theory is derived from consideration of elementary tools from continuum mechanics and differential geometry, and leads to a metric analysis on a generalized Jacobian of the system. Fromion, Scorletti and Ferreres [16] also make use of the notion of incremental stability in an input-output stability context, and they present a sufficient condition for incremental stability of a system that they named quadratic incremental stability, and which involves a Lyapunov condition on the Jacobian of the system. The approach of analyzing the stability of a nonlinear system by replacing it by an equivalent linear time-varying system is called global linearization; see [20,21]. Angeli [4] considers several notions of incremental stability of autonomous systems in an input-to-state stability context, and he presents a result on the existence of globally attractive solutions for incrementally input-to-state systems subject to constant or periodic inputs. Pavlov et al. [26,27] have brought to our attention some early results in the Russian literature related to this paper: Demidovich [13,14,15] pioneered some incremental stability ideas and results using the concept of a convergent system. Some of the results cited in [26] rely on Yacubovich [30]; this work also contains results on the response of a specific class of systems subject to bounded inputs.
In our case, basic Lyapunov stability theory [18,19] serves as the starting point for our results. The usual basic Lyapunov stability theory is based on the construction of radially unbounded functions of the state with a global minimum at the nominal trajectory whose stability we want to analyze. Consideration of quadratic Lyapunov functions leads to the stronger concept of quadratic stability. This concept has turned out to be very useful in system analysis and control design for large classes of uncertain/nonlinear systems; see, for example, [17,10,6,7,28,9,8,2,29] and the references therein. The quadratic stability framework has the advantage that analysis and control problems can be reduced to convex optimization problems involving the solution of linear matrix inequalities. This approach has gained considerable attention in the last two decades [8]. The theoretical framework presented here is built upon consideration of quadratic Lyapunov functions of the increment between any two states of a system, and leads to a concept of stability that we call incremental quadratic stability. This idea is also used in [16,26,27].
In our approach, the state dependent nonlinearities of an uncertain/nonlinear system are described by a quadratic inequality that is characterized by a set of symmetric matrices we call incremental multiplier matrices. The concept of a multiplier matrix and the more general approach of integral quadratic constraints have been recently used in the analysis of nonlinear/time-varying and uncertain systems; see [5,25,1]. In our case, the use of incremental multiplier matrices provides a unifying description for various types of common nonlinearities that is amenable to the analysis of incremental quadratic stability of systems through linear matrix inequalities. In this way, analysis problems are systematically reduced to the solution of linear matrix inequalities in the Lyapunov matrix and the incremental multiplier matrix.
Recent results on necessary and sufficient multiplier conditions for quadratic stability of systems provide frequency domain characterizations for quadratic stability; see [2]. These conditions consist of frequency domain inequalities involving an associated linear system and a multiplier matrix. These results can be readily extended to obtain similar conditions guaranteeing δQS of a system, this time involving incremental multiplier matrices. This paper is organized as follows. In Section 2 we consider incremental quadratic stability of systems with inputs; there we see that if a time-invariant system is subject to a constant input or a periodic input of period T , then all trajectories of the system converge to a unique trajectory which is respectively constant or periodic with period T . Section 3 considers the relationship of incremental quadratic stability to quadratic stability. In particular, we show that, when a system has an associated derivative system then, quadratic stability of the derivative system is both necessary and sufficient for incremental quadratic stability of the original system. In Section 4 we introduce a description of state dependent nonlinearities by means of incremental multiplier matrices. Linear matrix inequalities (LMIs) in the Lyapunov matrix and an incremental multiplier matrix for the system are presented. A main result of the paper is that feasibility of one of these inequalities guarantees δQS. This section also looks at differentiable nonlinearities, and the relationship between incremental multiplier matrices and the derivatives of the nonlinearities is explored. In Section 5 we use the LMIs to obtain frequency domain characterizations of δQS. Section 6 provides simple characterizations of incremental multiplier matrices for many common nonlinear/uncertain terms. Section 7 deals with systems with multiple nonlinear terms and their corresponding incremental multiplier matrices. 
In Section 8 we present an alternative approach to the analysis of incremental quadratic stability of systems with nonlinearities whose derivatives lie in a convex set. Section 9 contains some conclusions. For many examples and simulations illustrating the results of this paper, the reader is referred to [11].
2. Incremental quadratic stability. In this section, we introduce the concept of incremental quadratic stability (δQS) for a system with inputs and present some important properties of systems which are δQS.
In many treatments of the stability of nonlinear systems, one considers stability about a fixed equilibrium state or solution. However, in many applications one has a system with an input and wishes that the system has stable behavior for all inputs in a specific class. The input may be of any nature, such as a control input or an unknown disturbance input. In particular, one may require that the system has the following behavior.
• If the input is bounded, then the system state is bounded.
• If the input is constant, then the system has a unique equilibrium state and all solutions converge to this equilibrium state.
• If the input is periodic with period T, then the system has a unique periodic solution of the same period and all solutions converge to this solution.

As we will shortly demonstrate, it is in this context that the concept of δQS proves very useful. So, consider a system with input described by

ẋ = F(t, x, w) ,    (1)

where t ≥ 0 is the time variable, x(t) ∈ R^n is the state, and w(t) ∈ W is the input, where the set W of input values is some subset of R^m. Throughout this paper, we assume that F is continuous. System (1) is incrementally quadratically stable (δQS) with decay rate α > 0 and Lyapunov matrix P = P^T > 0 if

(x − x̄)^T P [F(t, x, w) − F(t, x̄, w)] ≤ −α (x − x̄)^T P (x − x̄)

for all t ≥ 0, x, x̄ ∈ R^n and w ∈ W.
Example 2.1. The specification of the set W of input values is important in the above definition. To see this, consider the scalar system. If W = [−w̄, w̄] and w̄ < 1, then we have δQS with P = 1 and α = 1 − w̄. If w̄ ≥ 1, we do not have δQS.
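Since the example's defining equation is not reproduced above, the following sketch checks the δQS inequality numerically for a hypothetical scalar system ẋ = −x + wx (an illustrative choice, not necessarily the paper's example) that exhibits the stated behavior with P = 1 and α = 1 − w̄ for W = [−w̄, w̄]:

```python
import numpy as np

# Hypothetical scalar system x' = -x + w*x (illustrative only);
# F(x, w) - F(xb, w) = (w - 1)(x - xb).
def F(x, w):
    return -x + w * x

wbar = 0.5           # input bound: W = [-wbar, wbar], with wbar < 1
P, alpha = 1.0, 1.0 - wbar

# Verify (x - xb) P (F(x,w) - F(xb,w)) <= -alpha (x - xb)^2 P on a grid.
xs = np.linspace(-5, 5, 41)
ws = np.linspace(-wbar, wbar, 21)
ok = all((x - xb) * P * (F(x, w) - F(xb, w)) <= -alpha * P * (x - xb) ** 2 + 1e-12
         for x in xs for xb in xs for w in ws)
print(ok)  # True
```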
Remark 1. Using standard Lyapunov-type arguments, one can readily show that if system (1) is δQS with rate α and Lyapunov matrix P, it has the following property. If x(·) and x̄(·) are any two solutions of system (1) then, for all t ≥ t0 for which both x(t) and x̄(t) exist,

‖x(t) − x̄(t)‖ ≤ κ(P)^{1/2} e^{−α(t−t0)} ‖x(t0) − x̄(t0)‖ ,

where κ(P) = λmax(P)/λmin(P) and λmax(P) and λmin(P) denote the largest and smallest eigenvalues of P. Consider now a system described by

ẋ = f(t, x) + g(t, w) .    (2)

Then it should be clear that, regardless of g, if the "unforced" system ẋ = f(t, x) is δQS, then the "forced" system (2) is δQS for any set W of input values. Considering a time-invariant δQS system described by

ẋ = F(x, w) ,    (3)

we can make the following conclusions using the results in [4,16,26,27,12].
• If w(·) is bounded, then all solutions of (3) are bounded.
• If w(·) is constant, that is, w(t) ≡ w_e, then system (3) has a unique equilibrium state x_e and is GUES about x_e. In general, the equilibrium state depends on the input w_e.
• If w(·) asymptotically approaches a constant input w_e, that is, lim_{t→∞} w(t) = w_e, then every solution x(·) of the system converges to the equilibrium state of (3) corresponding to w(t) ≡ w_e.
• If w(·) is periodic with period T, then system (3) has a unique periodic solution; this solution has period T and all other solutions exponentially converge to this solution. In general, the periodic solution depends on the input.
• Suppose w(·) asymptotically approaches a periodic input w̄(·), that is, lim_{t→∞} w(t) − w̄(t) = 0. Then, every solution x(·) of the system converges to the periodic solution of (3) corresponding to w(t) ≡ w̄(t).

Note that in order to apply δQS to demonstrate the above properties, one does not have to know the value of the equilibrium state or the periodic solution. These specific solutions depend on the input; δQS guarantees that these solutions exist. In many applications, one does not need to know the values of the specific solutions; just knowing that they exist is sufficient.
Remark 2 (Linear time-invariant systems with inputs). Consider a system with input w described by ẋ = Ax + g(t, w), where A is a constant matrix. One can readily show that this system is δQS with decay rate α and Lyapunov matrix P if and only if

P A + A^T P + 2αP ≤ 0 .

Satisfaction of the above inequality for some α > 0 is equivalent to the requirement that P satisfy the Lyapunov equation

P A + A^T P = −Q

for some matrix Q = Q^T > 0. Using Lyapunov theory for linear systems, we now obtain that δQS is equivalent to the requirement that A is Hurwitz. Moreover, a Lyapunov matrix P can be obtained by first choosing any matrix Q = Q^T > 0 and letting P be the unique solution to the Lyapunov equation. In this case, α = λmin(P^{−1}Q)/2.
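A sketch of the recipe in this remark, using an illustrative Hurwitz matrix A and the choice Q = I (these particular matrices are not from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A Hurwitz matrix (eigenvalues -1, -2; chosen for illustration) and Q = I.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Q = np.eye(2)

# Solve P A + A^T P = -Q.  scipy solves a X + X a^H = q, so pass a = A^T, q = -Q.
P = solve_continuous_lyapunov(A.T, -Q)
assert np.allclose(P, P.T) and np.all(np.linalg.eigvalsh(P) > 0)  # P = P^T > 0

# Decay rate alpha = lambda_min(P^{-1} Q) / 2, so that P A + A^T P + 2 alpha P <= 0.
alpha = min(np.linalg.eigvals(np.linalg.solve(P, Q)).real) / 2
L = P @ A + A.T @ P + 2 * alpha * P
print(np.all(np.linalg.eigvalsh((L + L.T) / 2) <= 1e-9))  # True
```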
Application to observers. Consider a system (we will call it the plant) with state x, input w and measured output y described by

ẋ = F(t, x, w) ,  y = H(t, x, w) .

Suppose we wish to construct a state observer which, based on input w and output y, asymptotically produces an estimate x̂ of the plant state x. A general description of an observer is given by

dx̂/dt = F̂(t, x̂, w, y) .

This can be regarded as a system with input (w, y). Requiring that every state trajectory of the plant is also a possible motion of the observer is equivalent to the requirement that

F̂(t, x, w, H(t, x, w)) = F(t, x, w)

for all t, w and x. If this condition is satisfied and the observer is δQS with decay rate α and Lyapunov matrix P then we obtain that, for all plant and observer initial conditions,

‖x̂(t) − x(t)‖ ≤ κ(P)^{1/2} e^{−α(t−t0)} ‖x̂(t0) − x(t0)‖

for all t ≥ t0. Thus the state estimate of the observer always converges exponentially to the plant state. This is basically the approach taken in [3].
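For a concrete instance of this observer construction, the following sketch uses the simplest (linear) special case; the matrices below are illustrative and not from the paper. The observer reproduces every plant trajectory (set x̂ = x) and the estimation error obeys stable dynamics:

```python
import numpy as np

# Illustrative linear plant x' = A x + w, y = C x, and observer
# xh' = A xh + w + L (y - C xh); the error obeys e' = (A - L C) e.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[1.0], [1.0]])
assert np.all(np.linalg.eigvals(A - L @ C).real < 0)   # error dynamics stable

dt, steps = 1e-3, 8000
x = np.array([1.0, -1.0]); xh = np.zeros(2)            # different initial states
for k in range(steps):                                 # forward-Euler integration
    w = np.array([0.0, np.sin(1e-3 * k)])              # some bounded input
    y = C @ x
    x = x + dt * (A @ x + w)
    xh = xh + dt * (A @ xh + w + L @ (y - C @ xh))

print(np.linalg.norm(x - xh) < 1e-2)  # estimate has converged to the plant state
```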
3. Incremental quadratic stability and quadratic stability of the derivative system. In this section, we show that if a system has a derivative system (defined below) then quadratic stability (QS) (defined below) of the derivative system is equivalent to δQS of the original system. This result is very useful. It permits one to apply the considerable body of quadratic stability results in the literature to the derivative system in order to guarantee incremental quadratic stability of the original system; see, for example, [6,7,9,8,2] and the references therein.
Definition 3.1. Consider a system described by (1) for which ∂F/∂x (t, x, w) exists for all t, x, w. The corresponding derivative system is given by

ξ̇ = ∂F/∂x (t, ϕ(t), w(t)) ξ ,    (7)

where ϕ and w are any continuous functions mapping [0, ∞) into R^n and W, respectively.
Definition 3.2. The derivative system (7) is quadratically stable (QS) with decay rate α > 0 and Lyapunov matrix P = P^T > 0 if for all t ≥ 0, w ∈ W and ϕ ∈ R^n,

η^T P ∂F/∂x (t, ϕ, w) η ≤ −α η^T P η  for all η ∈ R^n,    (8)

that is,

P ∂F/∂x (t, ϕ, w) + ∂F/∂x (t, ϕ, w)^T P + 2αP ≤ 0 .    (9)

Remark 3. The results in [16,26] are based on condition (9). Reference [16] also requires a condition on ∂F/∂w.
Lemma 3.3. Consider a system described by (1) with F differentiable with respect to its second argument. This system is incrementally quadratically stable with rate of convergence α and Lyapunov matrix P if and only if the corresponding derivative system (7) is quadratically stable with rate of convergence α and Lyapunov matrix P.
Proof. Consider a system described by (1) and suppose that its derivative system (7) is quadratically stable about the zero state with rate of convergence α and Lyapunov matrix P. Consider any time t ≥ 0, any w ∈ W and any two states x, x̄ ∈ R^n. For any vector z ∈ R^n, it follows from the mean value theorem that there is φ̄ ∈ R^n such that

z^T [F(t, x, w) − F(t, x̄, w)] = z^T ∂F/∂x (t, φ̄, w) (x − x̄) .

Considering z = P(x − x̄) and using the fact that the derivative system is quadratically stable with rate of convergence α and Lyapunov matrix P, we obtain that

(x − x̄)^T P [F(t, x, w) − F(t, x̄, w)] ≤ −α (x − x̄)^T P (x − x̄) .

Since the above holds for all t ≥ 0, w ∈ W and x, x̄ ∈ R^n, it follows that system (1) is incrementally quadratically stable with rate of convergence α and Lyapunov matrix P.
To prove the converse, suppose now that system (1) is incrementally quadratically stable with rate of convergence α and Lyapunov matrix P, that is,

(x − x̄)^T P [F(t, x, w) − F(t, x̄, w)] ≤ −α (x − x̄)^T P (x − x̄)

for all t ≥ 0, w ∈ W and x, x̄ ∈ R^n. Now pick any x̄, η ∈ R^n and let x = x̄ + sη with s > 0. Then, the previous inequality yields

η^T P [F(t, x̄ + sη, w) − F(t, x̄, w)]/s ≤ −α η^T P η ;

letting s → 0 we obtain η^T P ∂F/∂x (t, x̄, w) η ≤ −α η^T P η. Since the above holds for all t ≥ 0, w ∈ W and x̄, η ∈ R^n, it follows that the corresponding derivative system (7) is quadratically stable about the zero state with rate of convergence α and Lyapunov matrix P.
Scalar systems. Consider any scalar system of the form

ẋ = f(t, x, w)    (10)

which, for some α > 0, satisfies

∂f/∂x (t, x, w) ≤ −α

for all t ≥ 0, w ∈ W and x ∈ R. Considering P = 1, we readily obtain that the derivative system is quadratically stable with rate of convergence α. It now follows from the above lemma that system (10) is incrementally quadratically stable with rate of convergence α.
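The scalar criterion can be checked by simulation. Here is a sketch for the illustrative system ẋ = −x − x³ (not an example from the paper), for which ∂f/∂x = −1 − 3x² ≤ −1, comparing two numerically integrated solutions against the incremental exponential bound:

```python
import numpy as np

# Scalar system x' = f(x) with f(x) = -x - x^3 (illustrative), so
# df/dx = -1 - 3 x^2 <= -1 =: -alpha for all x.
def f(x):
    return -x - x**3

alpha, dt, T = 1.0, 1e-3, 5.0
x, xb = 2.0, -1.5                      # two different initial states
d0 = abs(x - xb)
for _ in range(int(T / dt)):           # forward-Euler integration of both solutions
    x += dt * f(x)
    xb += dt * f(xb)

# Incremental exponential bound: |x(T) - xb(T)| <= e^{-alpha T} |x(0) - xb(0)|
print(abs(x - xb) <= np.exp(-alpha * T) * d0)  # True
```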

4. Incremental multiplier matrices and an LMI characterization of δQS.

Systems under consideration.
In this section, we consider systems described by (1) whose state dependent nonlinearities can be characterized via symmetric matrices which we call incremental multiplier matrices. Using these matrices and any structural information on the manner in which the nonlinearities enter the system description, we obtain a sufficient condition for incremental quadratic stability. This condition is in the form of a linear matrix inequality (LMI).
To take into account the manner in which state dependent nonlinearities enter a system description, consider a system described by

ẋ = Ax + Bp(t, x, w) + g(t, w) ,    (11a)

where t ≥ 0 is the time variable, x(t) ∈ R^n is the system state and w(t) ∈ W ⊂ R^m is the system input. All the elements in the system involving nonlinear dependence on the state x are lumped into the term p(t, x, w) ∈ R^{mp} and all the elements not depending on x are lumped into the term g(t, w) ∈ R^n. The matrices A and B are constant and of appropriate dimensions; A describes a nominal linear unforced system while B describes the manner in which the state dependent nonlinearities enter the description. To take into account any information available on the dependency of p on x, we consider

p = φ(t, q, w) ,    (11c)

where q(t, x, w) ∈ R^{mq} is given by

q(t, x, w) = Cx + Dp(t, x, w)    (11b)

with C and D constant matrices of appropriate dimensions. Thus, a system under consideration is described by (11a)–(11c). When D = 0, a system under consideration can be described by

ẋ = Ax + Bφ(t, Cx, w) + g(t, w) .

To see the usefulness of allowing D ≠ 0, consider a system described by (11a) in which p = φ̃(t, Eẋ + C0 x, w).
All the state dependent nonlinearities in the system are now described by the function φ; we characterize this function by its incremental multiplier matrices, which are defined as follows. A symmetric matrix M ∈ R^{(mq+mp)×(mq+mp)} is an incremental multiplier matrix for φ if it satisfies the incremental quadratic constraint

[ q − q̄ ; φ(t, q, w) − φ(t, q̄, w) ]^T M [ q − q̄ ; φ(t, q, w) − φ(t, q̄, w) ] ≥ 0    (12)

for all t ≥ 0, w ∈ W and q, q̄ ∈ R^{mq}.
Remark 4. Returning to the system with p = φ̃(t, Eẋ + C0 x, w) considered above and substituting for ẋ from (11a), we obtain that p = φ(t, q̂, w) where q̂ = Ĉx + D̂p with Ĉ = EA + C0 and D̂ = EB (the term Eg(t, w) is absorbed into the time and input dependence of φ), and φ has the same incremental multiplier matrices as φ̃.
Remark 5. When D ≠ 0, the term p is implicitly defined by

p = φ(t, Cx + Dp, w) .

In this case, we assume that there exists a function ψ such that for all t, z, w, the vector

p = ψ(t, z, w)    (14)

solves the above implicit identity with z = Cx, that is, p = φ(t, z + Dp, w). Note that M is an incremental multiplier matrix for φ if and only if the matrix

N = [ I  D ; 0  I ]^T M [ I  D ; 0  I ]    (15)

is an incremental multiplier matrix for ψ. This follows from the relationship

[ q − q̄ ; p − p̄ ] = [ I  D ; 0  I ] [ z − z̄ ; p − p̄ ] .

Of course, ψ = φ when D is zero.
The following two general scalar examples illustrate the characterization of nonlinearities via incremental multiplier matrices.
Example 4.1. Consider any differentiable scalar valued function φ of a scalar variable. Suppose that φ′, the derivative of φ, is bounded and choose σ1 and σ2 so that σ1 ≤ φ′(q) ≤ σ2 for all q ∈ R. An application of the mean value theorem shows that φ satisfies

[φ(q) − φ(q̄) − σ1(q − q̄)] · [σ2(q − q̄) − φ(q) + φ(q̄)] ≥ 0    (17)

for all q, q̄ ∈ R. We call condition (17) a pointwise sector bounded constraint because, for each q̄ ∈ R, the graph of the function φ lies inside the sector defined by the lines y = φ(q̄) + σ1(q − q̄) and y = φ(q̄) + σ2(q − q̄). Condition (17) is equivalent to

[ q − q̄ ; φ(q) − φ(q̄) ]^T [ −2σ1σ2  σ1+σ2 ; σ1+σ2  −2 ] [ q − q̄ ; φ(q) − φ(q̄) ] ≥ 0 .

Hence, any matrix

M = κ [ −2σ1σ2  σ1+σ2 ; σ1+σ2  −2 ] ,  κ > 0 ,

is an incremental multiplier matrix for φ.
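A quick numerical check of this multiplier construction, using the illustrative choice φ = tanh (so that σ1 = 0 and σ2 = 1):

```python
import numpy as np

# Sector-bounded scalar nonlinearity: phi = tanh has 0 <= phi'(q) <= 1,
# so sigma1 = 0, sigma2 = 1 (tanh is an illustrative choice, not from the paper).
sigma1, sigma2 = 0.0, 1.0
M = np.array([[-2 * sigma1 * sigma2, sigma1 + sigma2],
              [sigma1 + sigma2, -2.0]])       # kappa = 1

qs = np.linspace(-4, 4, 81)
ok = True
for q in qs:
    for qb in qs:
        d = np.array([q - qb, np.tanh(q) - np.tanh(qb)])
        ok &= d @ M @ d >= -1e-12   # incremental quadratic constraint holds
print(ok)  # True
```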
Example 4.2. Consider any monotone scalar valued function φ of a scalar variable, that is, φ(q) ≥ φ(q̄) whenever q ≥ q̄. This is equivalent to

[φ(q) − φ(q̄)] (q − q̄) ≥ 0    (18)

for all q, q̄ ∈ R. Notice that satisfaction of (18) is equivalent to satisfaction of

[ q − q̄ ; φ(q) − φ(q̄) ]^T [ 0  1 ; 1  0 ] [ q − q̄ ; φ(q) − φ(q̄) ] ≥ 0 .

This clearly shows that any matrix

M = κ [ 0  1 ; 1  0 ] ,  κ > 0 ,

is a multiplier matrix for φ.

4.3. A sufficient condition for δQS. In this section we present a matrix inequality which, if satisfied by a system described in the previous section, guarantees incremental quadratic stability.
Theorem 4.2. Consider a system described by (11). Suppose there exist a matrix P = P^T > 0, a scalar α > 0 and an incremental multiplier matrix M for φ such that

[ PA + A^T P + 2αP   PB ; B^T P   0 ] + [ C  D ; 0  I ]^T M [ C  D ; 0  I ] ≤ 0 .    (19)

Then the system is incrementally quadratically stable with decay rate α and Lyapunov matrix P.
Proof. We first note that incremental quadratic stability of a system described by (11a) with rate of convergence α and Lyapunov matrix P means that

(x − x̄)^T P [A(x − x̄) + B(p − p̄)] ≤ −α (x − x̄)^T P (x − x̄)

for all t ≥ 0, x, x̄ ∈ R^n and w ∈ W where p := p(t, x, w) and p̄ := p(t, x̄, w). The above inequality is equivalent to L0(t, x, x̄, w) ≤ 0 where

L0(t, x, x̄, w) := 2(x − x̄)^T P [A(x − x̄) + B(p − p̄)] + 2α (x − x̄)^T P (x − x̄) .

We now show that when p is given by (11c) and (11b) and M is any multiplier matrix for φ then,

L1(t, x, x̄, w) := [ x − x̄ ; p − p̄ ]^T [ C  D ; 0  I ]^T M [ C  D ; 0  I ] [ x − x̄ ; p − p̄ ] ≥ 0    (20)

for all t ≥ 0, x, x̄ ∈ R^n and w ∈ W. To see this, notice that p = φ(t, q, w) and p̄ = φ(t, q̄, w) where q = Cx + Dp and q̄ = Cx̄ + Dp̄. Hence,

[ q − q̄ ; p − p̄ ] = [ C  D ; 0  I ] [ x − x̄ ; p − p̄ ] .

Condition (20) now follows from the definition of the multiplier matrix M.
To prove the theorem, suppose that matrix inequality (19) holds and consider any time t ≥ 0, any two states x, x̄ ∈ R^n and any input w ∈ W. Pre- and post-multiplication of the matrix inequality by [ (x − x̄)^T  (p − p̄)^T ] and its transpose, respectively, yields

L0(t, x, x̄, w) + L1(t, x, x̄, w) ≤ 0 .

Since M is a multiplier matrix for φ, we have L1(t, x, x̄, w) ≥ 0; hence L0(t, x, x̄, w) ≤ 0. Since this holds for all t ≥ 0, x, x̄ ∈ R^n and all w ∈ W, it follows that the system under consideration is incrementally quadratically stable with decay rate α and Lyapunov matrix P.
Remark 6. Note that condition (19) is a linear matrix inequality in P and M . Notice also that maximization of α subject to (19) is a generalized eigenvalue problem.
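As a sketch, the following verifies feasibility of such an LMI numerically for a hypothetical first-order system; the assumed LMI form is stated in the comments and the specific matrices are illustrative, not from the paper:

```python
import numpy as np

# Feasibility check of a dQS LMI for a concrete first-order system
# x' = A x + B phi(t, q, w), q = C x (D = 0), with phi in the incremental
# sector [0, 1].  The LMI form assumed here is
#   [[P A + A^T P + 2 a P, P B], [B^T P, 0]] + [C D; 0 I]^T M [C D; 0 I] <= 0.
A = np.array([[-2.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[0.0]])

# Incremental multiplier matrix for a [0, 1] sector nonlinearity (kappa = 1).
M = np.array([[0.0, 1.0], [1.0, -2.0]])

P = np.array([[1.0]]); alpha = 1.0
top = np.block([[P @ A + A.T @ P + 2 * alpha * P, P @ B],
                [B.T @ P, np.zeros((1, 1))]])
T = np.block([[C, D], [np.zeros((1, 1)), np.eye(1)]])
lmi = top + T.T @ M @ T

# All eigenvalues <= 0  =>  the LMI holds, so the system is dQS with rate alpha.
print(np.all(np.linalg.eigvalsh(lmi) <= 1e-9))  # True
```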
Remark 7. Recalling the definition of N in (15), the matrix inequality (19) can be expressed as

[ PA + A^T P + 2αP   PB ; B^T P   0 ] + [ C  0 ; 0  I ]^T N [ C  0 ; 0  I ] ≤ 0 .    (21)

If we partition M and N as

M = [ M11  M12 ; M21  M22 ] ,  N = [ N11  N12 ; N21  N22 ] ,    (22)

where M22, N22 ∈ R^{mp×mp}, then the inequality (19) can be expressed as

[ PA + A^T P + 2αP + C^T N11 C   PB + C^T N12 ; B^T P + N21 C   N22 ] ≤ 0    (23)

and

N11 = M11 ,  N12 = M11 D + M12 ,  N21 = D^T M11 + M21 ,  N22 = D^T M11 D + D^T M12 + M21 D + M22 .    (24)

Note that in order for inequality (23) to hold we must have

N22 ≤ 0 .    (25)

Thus, we need only consider incremental multiplier matrices for which N22 ≤ 0. In particular, when D = 0, we must have M22 ≤ 0.
Remark 8. In many cases it is difficult to characterize the complete set of incremental multiplier matrices associated with a given function φ. Fortunately, such a characterization is not necessary for our purposes. We need only consider sets of matrices which are sufficiently rich. It should be clear that if M is a sufficiently rich set of incremental multiplier matrices for φ then, satisfaction of the matrix inequality (19) with any incremental multiplier matrix for φ implies satisfaction with some M in M.

4.4. A strict matrix inequality. If the following strict inequality holds then, it can readily be shown that the non-strict inequality (19) holds for α > 0 sufficiently small:

[ PA + A^T P   PB ; B^T P   0 ] + [ C  D ; 0  I ]^T M [ C  D ; 0  I ] < 0 .    (26)

This provides another sufficient condition for δQS. However, there are situations in which the non-strict inequality (19) holds for some α > 0 but the strict inequality (26) does not hold. In fact, when the strict inequality holds, the dependency of p on z = Cx must be globally Lipschitz, that is, the function ψ(t, ·, w) (recall (14)) must be globally Lipschitz. To see this, consider any z, z̄ and recall from Remark 5 that the incremental quadratic constraint (12) is equivalent to

δz^T N11 δz + δz^T N12 δp + δp^T N21 δz + δp^T N22 δp ≥ 0

for all t and w where δz = z − z̄ and δp = ψ(t, z, w) − ψ(t, z̄, w) and Nij are defined in (24). Recalling the discussion in Remark 7, one can see that the strict matrix inequality (26) implies that N22 < 0. The last inequality above now implies that

c1 ‖δz‖² + 2c2 ‖δz‖ ‖δp‖ − c3 ‖δp‖² ≥ 0

where c1 = ‖N11‖, c2 = ‖N12‖ and −c3 < 0 is the maximum eigenvalue of N22. Rearranging the last inequality results in

‖ψ(t, z, w) − ψ(t, z̄, w)‖ ≤ γ ‖z − z̄‖ ,  γ = (c2 + √(c2² + c1c3))/c3 ,    (27)

for all t, w and z, z̄, that is, ψ(t, ·, w) is globally Lipschitz for all t and w.
On the other hand, suppose ψ satisfies the global Lipschitz condition (27). We will show that satisfaction of the non-strict inequality with some incremental multiplier matrix implies satisfaction of the strict inequality with some incremental multiplier matrix.
To see this, suppose that for some α > 0, the non-strict matrix inequality (19) is satisfied with M = M̄ where M̄ is an incremental multiplier matrix for φ. It follows from (27) that the matrix

[ γ²I  0 ; 0  −I ]

is an incremental multiplier matrix for ψ; hence, recalling Remark 5, the matrix

Mγ = [ I  −D ; 0  I ]^T [ γ²I  0 ; 0  −I ] [ I  −D ; 0  I ]

is an incremental multiplier matrix for φ. It now follows that, for any ǫ ≥ 0, the matrix

Mǫ = M̄ + ǫ Mγ

is also an incremental multiplier matrix for φ. Let

Q(α, ǫ) = [ PA + A^T P + 2αP   PB ; B^T P   0 ] + [ C  D ; 0  I ]^T Mǫ [ C  D ; 0  I ] .

To complete the proof, it suffices to show that Q(0, ǫ) < 0 for some ǫ > 0; this is because inequality (26), with M replaced by the new multiplier matrix Mǫ, is equivalent to Q(0, ǫ) < 0. To achieve the above goal, note that

Q(0, ǫ) = Q(α, 0) − S(α, ǫ)  where  S(α, ǫ) = [ 2αP − ǫγ²C^TC   0 ; 0   ǫI ] ,

and the non-strict matrix inequality (19) can be written as Q(α, 0) ≤ 0. Considering ǫ > 0, we obtain that S(α, ǫ) > 0 if and only if 2αP − ǫγ²C^TC > 0. Since P > 0, this last inequality holds for ǫ > 0 sufficiently small; then Q(0, ǫ) ≤ −S(α, ǫ) < 0.

Remark 9. Recalling the definition of N in (15), the strict matrix inequality (26) can be expressed as

[ PA + A^T P   PB ; B^T P   0 ] + [ C  0 ; 0  I ]^T N [ C  0 ; 0  I ] < 0 .

If we partition N as in (22), where N22 ∈ R^{mp×mp}, then the above inequality can be expressed as

[ PA + A^T P + C^T N11 C   PB + C^T N12 ; B^T P + N21 C   N22 ] < 0 .    (30)

Note that in order for inequality (30) to hold we must have N22 < 0. Thus, in utilizing the strict inequality (26), we need only consider incremental multiplier matrices for which N22 < 0. In particular, when D = 0, we must have M22 < 0.
Remark 10. Using a Schur complement result, inequality (30) is equivalent to N22 < 0 and

PA + A^T P + C^T N11 C − (PB + C^T N12) N22^{−1} (B^T P + N21 C) < 0 .

This is a Riccati-type matrix inequality in P.

Differentiable nonlinearities. Here we consider nonlinearities φ which are continuously differentiable with respect to their second argument, that is, ∂φ/∂q (t, q, w) exists for all t, q, w and is continuous with respect to q. We will characterize incremental multiplier matrices M for φ with the condition that

[ I ; ∂φ/∂q (t, q, w) ]^T M [ I ; ∂φ/∂q (t, q, w) ] ≥ 0    (33)

for all t, q, w.
Consider an uncertain/nonlinear element described by (11b) and (11c), that is, p = φ(t, Cx + Dp, w). Recall that, in order to satisfy our condition for δQS, we need only consider incremental multiplier matrices M which satisfy (25). Consider first the case in which D = 0. We claim that a symmetric matrix M which satisfies (25) is an incremental multiplier matrix for φ if and only if (33) holds for all t, q, w. This follows from Lemmas 4.4 and 4.5 which are given below. Consider now the case in which D ≠ 0. Suppose p is well-defined, that is, for each t, z, w, there is a p = ψ(t, z, w) which satisfies p = φ(t, z + Dp, w).
If we assume that, for each t, w, the function ψ(t, ·, w) is continuously differentiable and the mapping z → z +Dψ(t, z, w) is onto then, a symmetric matrix M which satisfies (25) is an incremental multiplier matrix for φ if and only if (33) holds for all t, q, w. This also follows from the following two lemmas.
Lemma 4.4. Suppose h : R^{mq} → R^{mp} is a continuously differentiable function, M is a symmetric matrix and

[ q − q̄ ; h(q) − h(q̄) ]^T M [ q − q̄ ; h(q) − h(q̄) ] ≥ 0    (34)

for all q, q̄ ∈ R^{mq}. Then,

[ I ; Dh(q) ]^T M [ I ; Dh(q) ] ≥ 0    (35)

for all q ∈ R^{mq}.
Proof. A proof is contained in the Appendix.
Lemma 4.5. Suppose h : R^{mq} → R^{mp} is a continuously differentiable function, M is a symmetric matrix and inequality (35) holds for all q ∈ R^{mq}. In addition, suppose there is a matrix D such that, for each z ∈ R^{mq}, the equation

p = h(z + Dp)

has a solution p = ψ(z) where ψ is continuously differentiable and the mapping z → z + Dψ(z) is onto. Then, inequality (34) holds for all q, q̄ ∈ R^{mq}.
Proof. A proof is contained in the Appendix.
The following result is used in the proof of Lemma 4.5; it is also useful later in the paper.
Lemma 4.6. Suppose h : R^{mq} → R^{mp} is a continuously differentiable function with derivative Dh and let Ω be any closed convex set of real matrices such that Dh(q) is in Ω for all q ∈ R^{mq}. Then for every q, q̄ ∈ R^{mq} there is a matrix Θ in Ω such that

h(q) − h(q̄) = Θ(q − q̄) .

Proof. A proof is contained in the Appendix.

5. Frequency domain conditions for δQS. Using results from [2], one can readily obtain frequency domain conditions which guarantee δQS of system (11). These conditions involve the transfer function G defined by

G(s) = C(sI − A)^{−1}B + D .

Using the strict inequality (26) and Lemma 9 of [2], one can obtain the following result.
Lemma 5.1. A system described by (11) is incrementally quadratically stable if there is an incremental multiplier matrix M for φ which satisfies the following conditions.
(a) There is a matrix K ∈ R^{mp×mq} for which A + BKC is Hurwitz and

[ I + DK ; K ]^T M [ I + DK ; K ] ≥ 0 .    (38)

(b) The strict frequency domain inequality

[ G(jω) ; I ]^* M [ G(jω) ; I ] < 0    (39)

holds for 0 ≤ ω ≤ ∞ whenever jω is not an eigenvalue of A.
Note that the frequency domain inequality (39) can also be written as

G(jω)^* M11 G(jω) + G(jω)^* M12 + M21 G(jω) + M22 < 0 .    (40)

Using the non-strict inequality (19) and Lemma 6 of [2], one can also obtain a sufficient condition involving a non-strict frequency domain inequality.
Lemma 5.2. Consider a system described by (11) with (A, B) controllable and (C, A) observable. Suppose that, for some α > 0, there is an incremental multiplier matrix M for φ which satisfies the following conditions.
(a) There is a matrix K ∈ R^{mp×mq} for which A + BKC + αI is Hurwitz, (38) holds, and the matrix M [ I + DK ; K ] has maximum column rank.
(b) The non-strict frequency domain inequality

[ G(jω − α) ; I ]^* M [ G(jω − α) ; I ] ≤ 0    (41)

holds for 0 ≤ ω ≤ ∞ whenever jω − α is not an eigenvalue of A.
Then system (11) is incrementally quadratically stable with decay rate α.
Remark 11. Yacubovich [30] considers systems described by (11) in which p and q are scalars, D = 0, φ(t, q, w) = ϕ(q) and, for some constant µ with 0 < µ ≤ ∞, the function ϕ satisfies the incremental sector condition

0 ≤ (ϕ(q) − ϕ(q̄))/(q − q̄) ≤ µ

for all q ≠ q̄. Incremental multiplier matrices for this nonlinearity are given by

M = κ [ 0  1 ; 1  −2/µ ] ,

where κ > 0 (the matrix reduces to κ [ 0  1 ; 1  0 ] when µ = ∞) and K = 0 satisfies condition (38). Yacubovich shows that if A is Hurwitz and a certain frequency domain condition is satisfied then, the systems under his consideration have the properties claimed here for incrementally quadratically stable systems. His frequency domain condition is basically condition (41) with one of the above multiplier matrices.

6. Incremental multiplier matrices for many common nonlinearities. In this section, we provide incremental multiplier matrices for many commonly encountered nonlinearities.
6.1. Incremental norm bounded nonlinearities. Consider a function φ which, for some symmetric positive definite matrices U and V, satisfies

[φ(t, q, w) − φ(t, q̄, w)]^T U [φ(t, q, w) − φ(t, q̄, w)] ≤ (q − q̄)^T V (q − q̄)    (42)

for all t ≥ 0, w ∈ W and q, q̄ ∈ R^{mq}. Incremental multiplier matrices for φ are given by

M = κ [ V  0 ; 0  −U ] ,  κ > 0 .

Note that incremental norm bounded nonlinearities include nonlinearities which are globally Lipschitz with respect to q, that is, for some scalar γ ≥ 0, they satisfy ‖φ(t, q, w) − φ(t, q̄, w)‖ ≤ γ ‖q − q̄‖ for all t ≥ 0, w ∈ W and q, q̄ ∈ R^{mq}. In this case, condition (42) is satisfied with U = I and V = γ²I. So, incremental multiplier matrices for φ are given by

M = κ [ γ²I  0 ; 0  −I ] ,  κ > 0 .

6.2. Incremental positive real nonlinearities. Consider a function φ which satisfies

[φ(t, q, w) − φ(t, q̄, w)]^T U (q − q̄) ≥ 0

for all t ≥ 0, w ∈ W and q, q̄ ∈ R^{mq}, where U ∈ R^{mp×mq}. Incremental multiplier matrices for φ are given by

M = κ [ 0  U^T ; U  0 ] ,  κ > 0 .

Note that incremental positive real nonlinearities include scalar nonlinearities which satisfy (φ(t, q, w) − φ(t, q̄, w))(q − q̄) ≥ 0 for all t ≥ 0, w ∈ W and q, q̄ ∈ R, or equivalently,

(φ(t, q, w) − φ(t, q̄, w))/(q − q̄) ≥ 0

for all q, q̄ ∈ R with q ≠ q̄. Notice that this condition is equivalent to φ being monotonic with respect to its second argument q. Incremental multiplier matrices for φ are given by

M = κ [ 0  1 ; 1  0 ] ,  κ > 0 .

6.3. Incremental sector bounded nonlinearities. Consider a function φ which satisfies

[φ(t, q, w) − φ(t, q̄, w) − K1(q − q̄)]^T U [K2(q − q̄) − φ(t, q, w) + φ(t, q̄, w)] ≥ 0

for all t ≥ 0, w ∈ W and q, q̄ ∈ R^{mq}, where U = U^T ∈ R^{mp×mp}, and K1, K2 ∈ R^{mp×mq} are fixed matrices. Incremental multiplier matrices for φ are given by

M = κ [ −(K1^T U K2 + K2^T U K1)   (K1 + K2)^T U ; U(K1 + K2)   −2U ] ,  κ > 0 .

Incremental sector bounded nonlinearities include scalar nonlinearities which satisfy

σ1 ≤ (φ(t, q, w) − φ(t, q̄, w))/(q − q̄) ≤ σ2

for all t ≥ 0, q ≠ q̄ ∈ R and w ∈ W where σ1, σ2 ∈ R are constants. This is because the above inequalities are equivalent to

[φ(t, q, w) − φ(t, q̄, w) − σ1(q − q̄)] · [σ2(q − q̄) − φ(t, q, w) + φ(t, q̄, w)] ≥ 0 .

Hence, incremental multiplier matrices for φ are given by

M = κ [ −2σ1σ2   σ1 + σ2 ; σ1 + σ2   −2 ] ,  κ > 0 .

6.4. Nonlinearities with matrix characterizations. In this section, we consider nonlinearities which are characterized by some known set Ω of matrices. Specifically, we assume that there is a known set Ω of real matrices Θ with the following property. For each t ≥ 0, w ∈ W, and q, q̄ ∈ R^{mq}, there is a matrix Θ in Ω such that

φ(t, q, w) − φ(t, q̄, w) = Θ(q − q̄) .    (43)

For example, suppose that φ is continuously differentiable with respect to its second argument, and for each t ≥ 0, w ∈ W, and q ∈ R^{mq} the derivative ∂φ/∂q (t, q, w) lies in some known closed convex set Ω, that is, ∂φ/∂q (t, q, w) ∈ Ω.
Then, it follows from Lemma 4.6 that for each t ≥ 0, w ∈ W, and q, q̄ ∈ R^{mq}, there exists a matrix Θ in Ω such that (43) holds. As a non-differentiable example, consider the absolute value function, φ(t, q, w) = |q|. Here, Ω is the interval [−1, 1].
and satisfies (52) with M22^T = M22 ≤ 0. If the matrix [Θ1 Θ2 · · · Θν] has maximum row rank mp then, the second condition in (52) is simply equivalent to M22 = 0.
The case in which D is nonzero. Recalling the corresponding discussion of the polytopic case, we obtain that a symmetric matrix M is an incremental multiplier matrix for φ if there is a matrix J ∈ R^{mp×mp} such that (49) is satisfied for all Θ ∈ Ω. Clearly, condition (49) is satisfied if (55) holds. Clearly, inequality (54) holds for all Θ ∈ Ω = Cone{Θ1, . . . , Θν}. Using the same reasoning as in the case D = 0, it follows that satisfaction of (55) by all Θ in Ω is equivalent to M11 ≥ 0 and (58). As before, we can take M11 = 0 without loss of generality. It now follows that a symmetric matrix M is a multiplier matrix for φ if it has the form given in (53) and there is a matrix J ∈ R^{mp×mp} such that (56)–(58) hold with M22 = M22^T. If the matrix [Θ1 Θ2 · · · Θν] has maximum row rank mp then, the second condition in (58) is simply equivalent to M22 + J + J^T = 0.
6.4.3. Polytopic/Conic case. One can readily generalize the results of the previous two sections to consider situations in which Θ has a mixed polytopic/conic description. To see this, consider a function φ which for each t, w, q and q̄ satisfies (43) with some matrix Θ in a known fixed set Ω. Suppose that there are fixed matrices Θ1, . . . , Θν with the property that for every Θ in Ω there are scalars λ1, λ2, . . . , λν so that

Θ = Σ_{k=1}^{ν} λk Θk  where  Σ_{k=1}^{ν1} λk = 1 and λk ≥ 0 for k = 1, 2, . . . , ν .
This is a combined polytopic/conic description: if $\nu_1 = \nu$, it reduces to a polytopic description, and if $\nu_1 = 0$, it reduces to a conic description. Considering $M_{22} \leq 0$, one can readily show that any symmetric matrix $M$ satisfying the corresponding condition is an incremental multiplier matrix for $\phi$.
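As a quick numerical check of the matrix characterization (43) for the non-differentiable absolute-value example above (a minimal sketch; the function name is ours):

```python
# For phi(q) = |q|, the scalar difference quotient plays the role of
# Theta in (43):  |q| - |qbar| = Theta * (q - qbar)  with Theta in
# Omega = [-1, 1].
def theta_abs(q, qbar):
    # the (scalar) matrix Theta realizing (43) for phi(q) = |q|
    return (abs(q) - abs(qbar)) / (q - qbar)

for q, qbar in [(-3.0, 2.0), (1.5, 0.5), (-0.25, -4.0), (2.0, -2.0)]:
    th = theta_abs(q, qbar)
    assert -1.0 <= th <= 1.0                               # Theta lies in Omega
    assert abs(abs(q) - abs(qbar) - th * (q - qbar)) < 1e-12   # (43) holds
```

The same pattern verifies (43) for any scalar nonlinearity with bounded difference quotients.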

7. Systems with multiple nonlinear terms.

7.1. General case. Consider a system described by (11a) whose nonlinear term $p$ consists of multiple terms:

$$p(t, x, w) = \begin{bmatrix} p_1(t, x, w) \\ \vdots \\ p_\mu(t, x, w) \end{bmatrix}\,,$$
where each nonlinear term $p_j \in \mathbb{R}^{m_{p_j}}$ can be described by $p_j = \phi_j(t, q_j, w)$ with $q_j(t, x, w) = C_j x + D_j p(t, x, w)$, and $C_j$, $D_j$ are constant matrices of appropriate dimensions. Letting $C$ and $D$ be the matrices obtained by stacking $C_1, \ldots, C_\mu$ and $D_1, \ldots, D_\mu$, the nonlinear term $p$ can be described by $p = \phi(t, q, w)$, where $q(t, x, w) = Cx + Dp(t, x, w)$ and $\phi$ is the corresponding stacking of $\phi_1, \ldots, \phi_\mu$. Suppose that, for $j = 1, 2, \ldots, \mu$,

$$M_j = \begin{bmatrix} M_{j,11} & M_{j,12} \\ M_{j,21} & M_{j,22} \end{bmatrix}$$

is an incremental multiplier matrix for $\phi_j$, where $M_{j,22} \in \mathbb{R}^{m_{p_j} \times m_{p_j}}$. In this case, the function $\phi$ has an incremental multiplier matrix $M$ assembled from the blocks of $M_1, \ldots, M_\mu$. In this way, incremental multiplier matrices $M$ for $\phi$ can be obtained from incremental multiplier matrices $M_1, \ldots, M_\mu$ corresponding to the individual nonlinear terms $\phi_1, \ldots, \phi_\mu$.
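The block assembly of $M$ from $M_1, \ldots, M_\mu$ can be sketched as follows (our own minimal implementation of the standard construction: the four blocks of each $M_j$ are placed block-diagonally, assuming the partition with $M_{j,22} \in \mathbb{R}^{m_{p_j} \times m_{p_j}}$ used in the text):

```python
def combine_multipliers(Ms, nq, np_):
    """Assemble an incremental multiplier matrix M for the stacked
    nonlinearity from multiplier matrices Ms[j] of the individual terms.
    nq[j], np_[j] are the sizes of q_j and p_j, so Ms[j] is a
    (nq[j]+np_[j]) x (nq[j]+np_[j]) matrix with bottom-right block M_{j,22}."""
    Q, P = sum(nq), sum(np_)
    M = [[0.0] * (Q + P) for _ in range(Q + P)]
    oq = op = 0                      # running offsets into q- and p-blocks
    for Mj, q, p in zip(Ms, nq, np_):
        for r in range(q + p):
            for c in range(q + p):
                # rows/cols 0..q-1 of Mj go to the q-part, the rest to the p-part
                R = oq + r if r < q else Q + op + (r - q)
                C = oq + c if c < q else Q + op + (c - q)
                M[R][C] = Mj[r][c]
        oq += q
        op += p
    return M
```

With this ordering, $[\Delta q^T\ \Delta p^T]\, M\, [\Delta q^T\ \Delta p^T]^T = \sum_j [\Delta q_j^T\ \Delta p_j^T]\, M_j\, [\Delta q_j^T\ \Delta p_j^T]^T$, so $M$ inherits the incremental quadratic constraint from the $M_j$.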

7.2. A common special case. In this subsection, we consider an important special case of multiple uncertain/nonlinear terms and provide a richer set of incremental multiplier matrices than would be obtained using the general approach of the previous section. Suppose the functions $\phi_1, \ldots, \phi_\mu$ are scalar-valued and, for $j = 1, \ldots, \mu$ and $q_j \neq \bar{q}_j$, they satisfy condition (60), where $q_1, \ldots, q_\mu$ are scalars. One could use the results of the previous subsection to obtain an incremental multiplier set based on incremental multiplier sets for $\phi_1, \ldots, \phi_\mu$; however, one can obtain a richer set of incremental multiplier matrices by proceeding as follows.
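For reference, the scalar condition (60) invoked here is an incremental sector bound; a sketch in the common form, with $\sigma_{1j} \leq \sigma_{2j}$ denoting the extreme values that appear in the polytopic description which follows:

```latex
\sigma_{1j} \;\le\; \frac{\phi_j(q_j) - \phi_j(\bar q_j)}{q_j - \bar q_j} \;\le\; \sigma_{2j},
\qquad q_j \ne \bar q_j ,
```

equivalently, $\phi_j(q_j) - \phi_j(\bar{q}_j) = \theta_j (q_j - \bar{q}_j)$ for some $\theta_j \in [\sigma_{1j}, \sigma_{2j}]$.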
Using the notation of the preceding section, we obtain a single uncertain/nonlinear term described by $p = \phi(t, q, w)$ with $\Theta = \mathrm{diag}(\theta_1, \ldots, \theta_\mu)$. Thus $\Theta \in \mathrm{Co}\{\Theta_1, \ldots, \Theta_\nu\}$, where the $\nu = 2^\mu$ matrices $\Theta_1, \ldots, \Theta_\nu$ correspond to the extreme values $\sigma_{1j}, \sigma_{2j}$ of the parameters $\theta_j$. We now have a polytopic description of $\phi$ and can obtain a set of incremental multiplier matrices as described in Section 6.4.1.
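The $2^\mu$ vertex matrices can be enumerated directly (a small sketch; `sigma[j]` holds the assumed extreme values $(\sigma_{1j}, \sigma_{2j})$ of $\theta_j$):

```python
from itertools import product

def vertex_matrices(sigma):
    """Return the nu = 2**mu vertices Theta_k = diag(theta_1, ..., theta_mu)
    obtained by letting each theta_j take one of its two extreme values."""
    mu = len(sigma)
    return [[[choice[i] if i == j else 0.0 for j in range(mu)]
             for i in range(mu)]
            for choice in product(*sigma)]

V = vertex_matrices([(-1.0, 1.0), (0.0, 2.0)])   # mu = 2  ->  nu = 4 vertices
```

Any $\Theta = \mathrm{diag}(\theta_1, \ldots, \theta_\mu)$ with $\theta_j \in [\sigma_{1j}, \sigma_{2j}]$ is then a convex combination of these vertices.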
In a similar fashion, one can use the results of Section 6.4.2 to treat the case in which the functions $\phi_1, \ldots, \phi_\mu$ are nondecreasing scalar-valued functions, that is, they satisfy

$$(\phi_j(q_j) - \phi_j(\bar{q}_j))(q_j - \bar{q}_j) \geq 0\,. \qquad (62)$$
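Under this monotonicity condition the difference quotients are nonnegative, so (as a sketch, assuming each $\Theta$ in (43) is diagonal with entries given by these quotients) a natural conic description is

```latex
\Theta = \operatorname{diag}(\theta_1, \dots, \theta_\mu), \quad \theta_j \ge 0,
\qquad \text{so that} \qquad
\Omega = \operatorname{Cone}\{\, e_1 e_1^T,\; \dots,\; e_\mu e_\mu^T \,\},
```

where $e_j$ denotes the $j$-th standard basis vector of $\mathbb{R}^\mu$.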
In this case, we have a conic description of $\Theta$. Finally, one can also use the results of Section 6.4.3 to consider a collection of scalar uncertainties satisfying (60) or (62).
8. An alternative sufficient condition for nonlinearities with matrix characterizations. We present here another sufficient condition for incremental quadratic stability of systems whose nonlinearities are characterized by a set of matrices. Specifically, we consider systems described by (11) with $D = 0$, that is, systems of the form (63), and we assume that there is a set $\Omega$ of matrices such that for each $t \geq 0$, $w \in W$, and $q, \bar{q} \in \mathbb{R}^{m_q}$, there is a matrix $\Theta$ in $\Omega$ such that

$$\phi(t, q, w) - \phi(t, \bar{q}, w) = \Theta(q - \bar{q})\,. \qquad (64)$$
The following lemma provides a sufficient condition for incremental quadratic stability of such systems.
Lemma 8.1. Consider a system described by (63) and suppose there is a set $\Omega$ of matrices such that for each $t \geq 0$, $w \in W$, and $q, \bar{q} \in \mathbb{R}^{m_q}$, there is a matrix $\Theta$ in $\Omega$ such that (64) holds. Suppose also that there is a matrix $P = P^T > 0$ and a scalar $\alpha > 0$ such that

$$P(A + B\Theta C) + (A + B\Theta C)^T P + 2\alpha P \leq 0 \qquad (65)$$

for all $\Theta \in \Omega$. Then system (63) is incrementally quadratically stable with decay rate $\alpha$ and Lyapunov matrix $P$.
Since (66) holds for all $\Theta$ in $\Omega$, we now obtain the desired decay estimate. Since the above holds for all $t \geq 0$, $w \in W$, and $x, \bar{x} \in \mathbb{R}^n$, we can conclude that system (63) is incrementally quadratically stable with decay rate $\alpha$ and Lyapunov matrix $P$.
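The exponential-convergence step behind the lemma follows the standard Lyapunov argument; a sketch, with $V$ an assumed (standard) choice of incremental Lyapunov function:

```latex
% For two solutions x, \bar{x} of (63) driven by the same input, let
% V = (x - \bar{x})^T P (x - \bar{x}).  Using (64) with q = Cx, \bar{q} = C\bar{x},
\dot V
  = (x - \bar{x})^T \left[ P(A + B\Theta C) + (A + B\Theta C)^T P \right] (x - \bar{x})
  \;\le\; -2\alpha V ,
% so V(t) \le e^{-2\alpha t} V(0), i.e., exponential convergence with rate \alpha.
```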
Remark 12. When $\nu = 2$ and $B$, $C$ have rank one, [29] contains easily verifiable conditions which guarantee the existence of a matrix $P$ satisfying the above inequalities.
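As a numerical illustration of the vertex test for (65) (a toy scalar example of our own, not from the text): with $n = 1$ the inequality is scalar and, being affine in $\Theta$, need only be checked at the vertices of $\Omega$.

```python
# Toy scalar check of (65): n = 1, A = -2, B = C = 1, Omega = Co{-1, 1},
# P = 1 > 0, alpha = 0.5 > 0 (all values illustrative assumptions).
def lmi_value(A, B, C, P, alpha, theta):
    # scalar version of P(A + B*Theta*C) + (A + B*Theta*C)^T P + 2*alpha*P
    return 2.0 * P * (A + B * theta * C) + 2.0 * alpha * P

A, B, C, P, alpha = -2.0, 1.0, 1.0, 1.0, 0.5
vertex_values = [lmi_value(A, B, C, P, alpha, th) for th in (-1.0, 1.0)]
# both vertex values are <= 0, so by convexity (65) holds on all of Omega
# and Lemma 8.1 gives incremental quadratic stability with decay rate alpha
assert all(v <= 0.0 for v in vertex_values)
```

For $n > 1$ the same vertex reduction applies, but verifying (65) at each vertex requires a matrix negative-semidefiniteness test (e.g., an eigenvalue computation or an SDP solver) rather than a scalar comparison.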

9.
Conclusions. We introduced and discussed basic properties of incrementally quadratically stable (δQS) systems, and considered the particular cases of (asymptotically) time-invariant systems and (asymptotically) periodic-in-time systems: if a time-invariant system is subject to a constant input or a periodic input of period T, then all trajectories of the system converge to a unique trajectory which is respectively constant or periodic with period T. We showed that incremental quadratic stability of a system is equivalent to quadratic stability of its associated derivative system. We presented a characterization of state-dependent nonlinearities by means of incremental multiplier matrices, and we formulated conditions guaranteeing δQS of a system in terms of linear matrix inequalities in the Lyapunov and incremental multiplier matrices. These conditions allowed us to obtain a characterization of δQS consisting of frequency domain inequalities involving an associated linear system and the original incremental multiplier matrices. For a differentiable nonlinearity, we formulated a necessary and sufficient condition for incremental multiplier matrices in terms of the derivative of the nonlinearity. Several common classes of nonlinearities were then described by means of incremental multiplier matrices. Finally, we presented an alternative approach to the analysis of δQS for systems whose nonlinearities have their derivatives in a convex set.
Rewrite (67) as … Since $N_{22} \leq 0$, this is equivalent to … Since the above inequality is linear in $\frac{\partial \psi}{\partial z}(z)$, we obtain that

$$N_{11} + N_{12} \Theta + \Theta^T N_{21} + \Theta^T N_{22} \Theta \geq 0$$

for all $\Theta \in \Omega$, where $\Omega$ is the convex hull of the set $\left\{ \frac{\partial \psi}{\partial z}(z) : z \in \mathbb{R}^{m_q} \right\}$. Thus, for all $\Theta \in \Omega$,

$$\begin{bmatrix} I \\ \Theta \end{bmatrix}^T N \begin{bmatrix} I \\ \Theta \end{bmatrix} \geq 0\,.$$