Robust dynamic self-triggered control for nonlinear systems using hybrid Lyapunov functions

Self-triggered control (STC) is a resource-efficient approach to determine sampling instants for Networked Control Systems. At each sampling instant, an STC mechanism determines not only the control inputs but also the next sampling instant. In this article, an STC framework for perturbed nonlinear systems is proposed. In the framework, a dynamic variable is used in addition to current state information to determine the next sampling instant, rendering the STC mechanism dynamic. Using dynamic variables has proven to be powerful for increasing sampling intervals for the closely related concept of event-triggered control, but has so far not been exploited for STC. Two variants of the dynamic STC framework are presented. The first variant can be used without further knowledge of the disturbance and leads to guarantees on input-to-state stability. The second variant exploits a known disturbance bound to determine sampling instants and guarantees asymptotic stability of a set containing the origin. In both cases, hybrid Lyapunov function techniques are used to derive the respective stability guarantees. Different choices for the dynamics of the dynamic variable, which lead to different particular STC mechanisms, are presented for both variants of the framework. The resulting dynamic STC mechanisms are illustrated with two numerical examples to emphasize their benefits in comparison to existing static STC approaches.


I. INTRODUCTION
Many modern control applications, e.g., in the field of Networked Control Systems (NCS), involve the implementation of feedback laws on shared hardware or with limited communication resources [1]. Classically, feedback laws are evaluated periodically with a sampling frequency that is determined before the runtime of the controller. However, such periodic sampling may in many situations lead to a waste of resources, as reported, e.g., in [2], [3]. Therefore, event- and self-triggered control have been developed as alternatives to periodic sampling [4]. In event-triggered control (ETC), a state-dependent trigger condition is used to determine sampling instants. The trigger condition is monitored continuously or at periodic time instants, and a transmission of sampled state information is triggered if the trigger condition is fulfilled. While ETC can typically reduce the required amount of transmissions significantly, the continuous/periodic monitoring of the trigger condition may be impractical in many NCS setups, as it requires that the trigger mechanism has access to current state information at each time the trigger condition is to be checked. Moreover, the resulting traffic patterns for ETC are hard to predict, which may result in undesired effects like packet loss or highly time-varying delays [5].

Funded by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2075 - 390740016 and under grant AL 316/13-2 - 285825138. We acknowledge the support by the Stuttgart Center for Simulation Science (SimTech).
The authors are with the University of Stuttgart, Institute for Systems Theory and Automatic Control, 70569 Stuttgart, Germany (email: {hertneck,allgower}@ist.uni-stuttgart.de).
In self-triggered control (STC), the controller determines at each sampling instant, based on sampled state information, when the next sample should be taken, thus lowering the effort for monitoring the plant state. STC can significantly reduce the network load for NCS in comparison to periodic sampling, as has been demonstrated in [6], [7]. Moreover, it allows for efficient scheduling of sampling instants [8]. For linear systems, a variety of STC approaches are available, see, e.g., [4], [9] and the references therein. For nonlinear systems, there are only a few approaches available to date. In [7], [10], [11], a conservative prediction for the trigger condition of ETC is used to determine sampling instants. For an efficient online evaluation, the state space is divided into regions with the same sampling interval. In [12]-[16], Lipschitz continuity properties are used to determine sampling instants such that the decrease of a Lyapunov function can be guaranteed.
In all aforementioned works on nonlinear STC, only the current state of the system is taken into account to determine sampling instants. However, for the closely related approach of ETC, it has been demonstrated that also taking information about the past system behavior into account for the trigger condition can reduce the required amount of sampling significantly [17]. In [17], the past system behavior is encoded in a dynamic variable, such that an averaged version of a purely state-dependent trigger rule can be used to determine sampling instants, leading to significantly increased sampling intervals. Similar benefits as for such dynamic ETC can also be expected for STC if the past system behavior is taken into account, yet no dynamic STC approaches exist to this date.
In a preliminary study for the work at hand [18], we have presented a dynamic STC mechanism based on hybrid Lyapunov functions that bridges this gap. In [18], a dynamic variable is used to encode the past system behavior. In particular, therein the dynamic variable is chosen as a filtered average of the values of a Lyapunov function at previous sampling instants. At each sampling instant, the next sampling instant is selected such that the value of the Lyapunov function at the next sampling instant is bounded by the value of the dynamic variable. Hybrid Lyapunov techniques adapted from [19]-[21] are used in [18] to guarantee this bound and to derive stability guarantees for the dynamic STC mechanism. It is demonstrated in [18] that the proposed dynamic STC mechanism based on hybrid Lyapunov functions can significantly reduce the required amount of samples in comparison to static STC.
In this article, we extend the results from [18] in various directions. The main contributions compared to the preliminary results in [18] are: (1) We present a general framework for dynamic STC based on hybrid Lyapunov functions that includes different particular dynamic STC mechanisms. While in [18] the dynamics of the dynamic variable were fixed to be a finite impulse response (FIR) filter for the Lyapunov function, the general framework in this article allows different choices. We present two particular alternatives, namely choosing the dynamic variable as an infinite impulse response (IIR) filter for the Lyapunov function and choosing it such that it implements a tunable reference function as a bound for the Lyapunov function, and we illustrate these options with numerical examples. Further choices can be included in the general framework as well.
(2) We extend the results from [18] to perturbed nonlinear systems. Note that only few results on STC for perturbed nonlinear systems are available, and all of them require knowledge of a bound on the disturbance (see [11] and the references therein for a literature overview). We present two different variants of the dynamic STC framework such that either input-to-state stability (ISS) or robust asymptotic stability (RAS) of a sublevel set of the Lyapunov function can be guaranteed. The variant with ISS guarantees requires no prior knowledge on the disturbance signal such as a disturbance bound. The variant that ensures RAS of a level set of the Lyapunov function does require knowledge of a bound on the disturbance, but offers two advantages in comparison to the ISS variant: Firstly, it can be used locally, i.e., assumptions on the system dynamics that are required for using hybrid Lyapunov functions only need to hold locally. Secondly, by taking the disturbance bound explicitly into account for determining sampling intervals, less frequent triggering may be required in some situations in comparison to the ISS variant.
(3) For the RAS variant of our framework, we modify the dynamic STC mechanism from [18] such that it can use different parameters for different level sets of the Lyapunov function. This can significantly reduce the required amount of sampling instants, since the employed techniques for hybrid Lyapunov functions may allow for larger sampling intervals when it can be ensured that the system state stays in certain level sets of the Lyapunov function.
(4) We present more compact proofs of the main results compared to [18].
The remainder of this article is organized as follows. The problem setup is described in Section II. In Section III, the variant of the general framework for dynamic STC based on hybrid Lyapunov functions that requires no disturbance bound is presented. Specific dynamic STC mechanisms with ISS guarantees for this variant are introduced in Section IV. The variant of the framework that takes explicitly into account a disturbance bound and respective specific dynamic STC mechanisms with RAS guarantees are presented in Section V. In Section VI, the proposed mechanisms are illustrated with two numerical examples. Section VII concludes the article.
To characterize a hybrid model of the considered NCS, we use the following definitions, which are taken from [22], [23].

Definition 1. [22] A compact hybrid time domain is a set D = ⋃_{j=0}^{J−1} ([t_j, t_{j+1}] × {j}) ⊂ R_{≥0} × N_0 for some J ∈ N and some finite sequence of times 0 = t_0 ≤ t_1 ≤ . . . ≤ t_J. A hybrid time domain is a set D ⊂ R_{≥0} × N_0 such that, for each (T, J) ∈ D, the set D ∩ ([0, T] × {0, . . ., J}) is a compact hybrid time domain.

Definition 2. [23] A hybrid trajectory is a pair (dom ξ, ξ) consisting of the hybrid time domain dom ξ and a function ξ defined on dom ξ that is absolutely continuous in t on (dom ξ) ∩ (R_{≥0} × {j}) for each j ∈ N_0.

Definition 3. [23] For the hybrid system H given by the state space R^{n_ξ}, the input space R^{n_w} and the data (F, G, C, D), where F : R^{n_ξ} × R^{n_w} → R^{n_ξ} and G : R^{n_ξ} → R^{n_ξ} are locally bounded, and C and D are subsets of R^{n_ξ}, a hybrid trajectory (dom ξ, ξ) with ξ : dom ξ → R^{n_ξ} is a solution to H for a locally integrable input function w : R_{≥0} → R^{n_w} if
1) for all j ∈ N_0 and for almost all t ∈ I_j := {t | (t, j) ∈ dom ξ}, we have ξ(t, j) ∈ C and ξ̇(t, j) = F(ξ(t, j), w(t));
2) for all (t, j) ∈ dom ξ such that (t, j + 1) ∈ dom ξ, we have ξ(t, j) ∈ D and ξ(t, j + 1) = G(ξ(t, j)).
II. SETUP

In this section, we first model the considered dynamic STC setup as a hybrid system. Then, we present stability definitions for the considered setup and a precise problem statement.

A. Hybrid system model for dynamic STC
We consider a nonlinear plant that exchanges information with a (possibly) dynamic controller only at discrete sampling instants as depicted in Figure 1. The plant is given by

ẋ_p(t) = f_p(x_p(t), û(t), w(t)),

where x_p(t) ∈ R^{n_{x_p}} is the plant state with initial condition x_p(0) = x_{p,0}, û(t) ∈ R^{n_u} is the last input that has been received by the plant and w(t) ∈ R^{n_w} is a disturbance input. The controller is given by

ẋ_c(t) = f_c(x_c(t), x̂_p(t)),  u(t) = g_c(x_c(t), x̂_p(t)),

where x_c(t) ∈ R^{n_{x_c}} is the state of the controller with some initial condition x_c(0) = x_{c,0} and x̂_p(t) is the last plant state that has been received by the controller. The sampled values of û and x̂_p are updated at discrete sampling instants t_j, j ∈ N_0, generated by a self-triggered sampling mechanism that will be specified later. The updates are based on the current values of u and x_p, i.e., û(t_j) = u(t_j) and x̂_p(t_j) = x_p(t_j). We will subsequently denote the value of the controller state at the last sampling instant as x̂_c, i.e., x̂_c(t_j) = x_c(t_j). Between sampling instants, x̂_p, x̂_c and û are thus constant, which corresponds to a zero-order-hold (ZOH) scenario. We define the combined state x := (x_p^⊤, x_c^⊤)^⊤, its sampled counterpart x̂ := (x̂_p^⊤, x̂_c^⊤)^⊤ and the sampling-induced error as

e(t) := x̂(t) − x(t).

The sampling instants are determined by a dynamic sampling mechanism that can be described as t_{j+1} := t_j + Γ(x(t_j), η(t_j)), where Γ : R^{n_x} × R^{n_η} → [t_min, ∞) for some t_min > 0 to be specified later. Here, η ∈ R^{n_η} is an internal state of the sampling mechanism. It is updated according to η(t_{j+1}) = S(η(t_j), x(t_j)) for some S : R^{n_η} × R^{n_x} → R^{n_η}, which allows the past system behavior to be taken into account for determining transmission times.
We define the timer variable τ with τ(t_j) = 0 and τ̇(t) = 1 for t_j ≤ t < t_{j+1}, which keeps track of the elapsed time since the last sampling instant, and the auxiliary variable τ_max, which encodes the next sampling interval. Using this, we can model the overall networked control system as a hybrid system H_STC according to (2) with

ξ := (x^⊤, e^⊤, η^⊤, τ, τ_max)^⊤,
F(ξ, w) := (f(x, e, w)^⊤, g(x, e, w)^⊤, 0, 1, 0)^⊤ with g(x, e, w) = −f(x, e, w),
G(ξ) := (x^⊤, 0, S(η, x)^⊤, 0, Γ(x, η))^⊤,

and with flow and jump sets C := {ξ | τ ∈ [0, τ_max]} and D := {ξ | τ = τ_max}. Note that jumps of the hybrid system (2) correspond exactly to sampling instants of the self-triggered sampling mechanism, and thus the sampling sequence t_j, j ∈ N_0, corresponds exactly to the time indices when (2) jumps. Thus, we describe the hybrid time before the sampling at time t_j by s_j := (t_j, j) and the hybrid time directly after the sampling at time t_j by s_j^+ := (t_j, j + 1). Using this notation, the dynamic STC mechanism can be described by Algorithm 1.

Algorithm 1: Dynamic STC mechanism at sampling instant t_j
1: Measure x_p(s_j)
2: Set x̂_p(s_j^+) = x_p(s_j)
3: Set û(s_j^+) = g_c(x_c(s_j), x_p(s_j))
4: Set τ_max(s_j^+) = Γ(x(s_j), η(s_j))
5: Set η(s_j^+) = S(η(s_j), x(s_j))
6: Wait until t_{j+1} = t_j + τ_max(s_j^+)
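To make the sampling loop concrete, the following minimal simulation sketch runs the steps of Algorithm 1 for a hypothetical scalar plant with a static feedback law; `f_plant`, `g_c`, `Gamma` and `S` are illustrative placeholders and not taken from the article:

```python
# Hypothetical scalar example: x_p' = -x_p + u_hat + w, static controller u = -2*x_p_hat.
def f_plant(xp, u_hat, w):
    return -xp + u_hat + w

def g_c(xp_hat):
    return -2.0 * xp_hat

# Placeholder STC mechanism: Gamma returns the next sampling interval,
# S updates the dynamic variable (here: a simple filter of V(x) = x^2).
def Gamma(x, eta):
    return 0.1 if x**2 > eta else 0.3   # larger interval while V(x) <= eta

def S(eta, x):
    return 0.5 * eta + 0.5 * x**2

xp, eta, t = 1.0, 1.0, 0.0
dt = 1e-3                                # Euler step for the flow phase
for j in range(50):                      # sampling instants t_j
    xp_hat = xp                          # steps 1-2: measure and store x_p(s_j)
    u_hat = g_c(xp_hat)                  # step 3: ZOH control input
    tau_max = Gamma(xp_hat, eta)         # step 4: next sampling interval
    eta = S(eta, xp_hat)                 # step 5: dynamic-variable update
    for _ in range(int(tau_max / dt)):   # step 6: flow until t_{j+1}
        xp += dt * f_plant(xp, u_hat, 0.0)  # nominal case w = 0
    t += tau_max

print(abs(xp))
```

Between samples the input is held constant (ZOH), so the closed loop is only as good as the sampling mechanism allows; with the placeholder choices above the state converges despite the open-loop flow phases.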
For simplicity, we assume that the self-triggered sampling mechanism is executed at the initial time t_0 = 0, i.e., there is a jump at t = 0 and we have that ξ(0, 1) = G(ξ(0, 0)). This corresponds to a restriction of the initial conditions of the hybrid system for τ(0, 0) and τ_max(0, 0) to τ_max(0, 0) = τ(0, 0). Note that without this assumption, the first sampling instant might not be well-defined.

B. Stability definitions and problem statement
In this article, we will, depending on the disturbance signal w(t), consider different stability notions. In the nominal case, i.e., if w(t) = 0 for all t ≥ 0, we will be interested in guaranteeing uniform global asymptotic stability of the origin of H_STC according to the following definition, which is adapted to our setup from [19].

Definition 4. For the hybrid system H_STC with w(t) = 0 for all t ≥ 0, the set {(x, e, η, τ, τ_max) : x = 0, e = 0, η = 0} is uniformly globally asymptotically stable (UGAS) if there exists β ∈ KLL such that, for each initial condition x(0, 0) ∈ R^{n_x}, η(0, 0) ∈ R^{n_η}, e(0, 0) ∈ R^{n_e}, τ(0, 0) ∈ R_{≥0} and τ_max(0, 0) = τ(0, 0), each corresponding solution satisfies

|(x(t, j), e(t, j), η(t, j))| ≤ β(|(x(0, 0), e(0, 0), η(0, 0))|, t, j)

for all (t, j) in the solution's domain.
Since the disturbance signal will in practice be nonzero, stability notions that take into account the effects of the disturbance need to be considered.An important stability notion for perturbed systems is input-to-state stability, which we define for the hybrid system H ST C as follows.
Here, considering the compact region of attraction R allows us to consider local results, which may be beneficial for nonlinear systems, where global stability results cannot always be obtained.
In this article, our goal is to design functions Γ and S for H ST C , such that for the resulting dynamic STC mechanisms, the time spans between sampling instants are maximized whilst at the same time the stability properties mentioned above are ensured.To this end, the dynamic variable η shall be exploited.

III. HYBRID LYAPUNOV FUNCTIONS FOR DYNAMIC STC
In this section, we present a general framework for dynamic STC based on hybrid Lyapunov functions. This framework includes different particular dynamic STC mechanisms. We assume in this section and in Section IV that neither the disturbance signal nor a bound on it is known. We present in the first subsection some preliminaries on hybrid Lyapunov functions and then discuss in the second subsection how they can be used to determine sampling instants for dynamic STC in a general framework. The framework allows different particular choices for Γ and S, which lead to different dynamic STC mechanisms with different properties of the closed-loop system. Specific choices for Γ and S with guarantees for ISS will be discussed in Section IV.

A. Hybrid Lyapunov functions
We make the following assumption, based on [23, Condition IV.6], which can be used to derive a hybrid Lyapunov function.
Assumption 1. There exist a locally Lipschitz function W : R^{n_e} → R_{≥0}, a locally Lipschitz function V : R^{n_x} → R_{≥0}, a continuous function H : R^{n_x} × R^{n_e} × R^{n_w} → R_{≥0}, constants L ≥ 0, γ > 0, ε ∈ R and a function α_w ∈ K_∞ such that, for all x ∈ R^{n_x}, w ∈ R^{n_w} and almost all e ∈ R^{n_e},

⟨∂W(e)/∂e, g(x, e, w)⟩ ≤ LW(e) + H(x, e, w).

Moreover, for all e ∈ R^{n_e}, w ∈ R^{n_w} and almost all x ∈ R^{n_x},

⟨∂V(x)/∂x, f(x, e, w)⟩ ≤ −εV(x) − H²(x, e, w) + γ²W²(e) + α_w(|w|).    (12)

Note that there are some differences between Assumption 1 and [23, Condition IV.6]. In (12), we add the additional term −εV(x). For ε > 0, this term requires that the system with continuous feedback is exponentially stable. For ε < 0, it potentially allows smaller values for γ to be chosen, which we will exploit subsequently to maximize the time between triggering instants. Furthermore, we use here a K_∞ function α_w(|w|) instead of θ^p|w|^p for some θ > 0. This relaxation is possible since we investigate input-to-state stability instead of particular L_p-gains, as was done in [23]. Approaches for verifying Assumption 1 can, e.g., be adapted from [24]. Note also that Assumption 1 can hold simultaneously for different choices of ε, γ and L. If we can find one parameter set for which the assumption holds, then we will typically also be able to find many different parameter sets.
To determine sampling intervals, the proposed STC framework will employ a bound on the evolution of V(x), which is adapted from [21]. This bound is based on the function

T_max(γ, Λ) :=
  (1/(Λr)) arctan( r(1 − λ) / ( (2λ/(1 + λ))(γ/Λ − 1) + 1 + λ ) )   if γ > Λ,
  (1/Λ) (1 − λ)/(1 + λ)                                             if γ = Λ,
  (1/(Λr)) artanh( r(1 − λ) / ( (2λ/(1 + λ))(γ/Λ − 1) + 1 + λ ) )   if γ < Λ,

where r := sqrt(|(γ/Λ)² − 1|) and λ ∈ (0, 1), that was originally used in [20] to determine the maximum allowable transmission interval. The choice of the parameter Λ will be discussed subsequently. A visualization of T_max(γ, Λ) is given in Figure 2. We adapt from [21, Proposition 12] the following result.
Proposition 1. Consider the hybrid system H_STC at sampling instant s_j^+ for j ∈ N_0. Let Assumption 1 hold for γ, ε and L. Moreover, let 0 < τ_max(s_j^+) ≤ T_max(γ, Λ) for some Λ > 0, where φ : [0, τ_max(s_j^+)] → R is chosen for some sufficiently small λ ∈ (0, 1). Then, for all t_j ≤ t ≤ t_j + τ_max(s_j^+), an upper bound on U(ξ(t, j + 1)), and hence on V(x(t, j + 1)), holds.

Proof. The proof is given in Appendix A1.
Proposition 1 delivers, for parameters ε, γ, L and a K_∞ function α_w that satisfy Assumption 1, an upper bound on the evolution of U(ξ) and thus, since U(ξ) ≥ V(x), also an upper bound on the evolution of V(x). This bound is valid if the time between two sampling instants is bounded by T_max(γ, Λ) for some Λ > 0. The actual bound depends on the parameters from Assumption 1 and on Λ. Particularly, if ε > 0 and Λ > L, then the bound is exponentially decreasing in the nominal case (i.e., for w(t) = 0 for t_j ≤ t ≤ t_j + τ_max(s_j^+)). In contrast, if ε < 0 and Λ < L, then the bound is increasing. However, the admissible time between sampling instants T_max(γ, Λ) decreases when γ and Λ are increased. Note that smaller values for ε require larger values for γ for (12) in Assumption 1 to hold. We thus observe in Proposition 1 a trade-off between the admissible time between sampling instants and the growth of the bound on V(x). Particularly, if the time between sampling instants is small, then we will be able to choose ε and Λ large and thus obtain an exponentially decreasing bound on V(x) for the nominal case. In contrast, if the time between two sampling instants is large, then we need to choose Λ and ε small to be able to derive a bound on V(x), which has the effect that this bound may be increasing. Next, we discuss how the bound on V(x) can be used to determine sampling instants.
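The function T_max(γ, Λ) is adapted from the maximum-allowable-transmission-interval bound of [20]; assuming that bound with the decay parameter Λ in place of L (and an arbitrary illustrative λ = 0.05), the trade-off described above can be explored numerically:

```python
import math

def t_max(gamma, lam_cap, lam=0.05):
    """MATI-type bound in the spirit of [20], with decay parameter Lambda
    (lam_cap) and tuning parameter lam in (0, 1) of the function phi."""
    r = math.sqrt(abs((gamma / lam_cap) ** 2 - 1.0))
    denom = 2.0 * (lam / (1.0 + lam)) * (gamma / lam_cap - 1.0) + 1.0 + lam
    if gamma == lam_cap:                      # limit case gamma -> Lambda
        return (1.0 - lam) / (lam_cap * (1.0 + lam))
    arg = r * (1.0 - lam) / denom
    if gamma > lam_cap:
        return math.atan(arg) / (lam_cap * r)
    return math.atanh(arg) / (lam_cap * r)    # gamma < Lambda

# The admissible inter-sampling time shrinks as gamma grows:
print(t_max(0.5, 1.0), t_max(1.0, 1.0), t_max(2.0, 1.0))
```

The three branches are continuous at γ = Λ, and the returned interval decreases monotonically in γ for fixed Λ, matching the trade-off stated in the text.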

B. Using hybrid Lyapunov functions to determine sampling instants
For periodic time-triggered sampling, to determine sampling instants based on Proposition 1, we would have to choose ε and Λ such that the bound on V(x) is decreasing in order to obtain stability guarantees. The value of ε would determine the smallest possible values of γ and L, and as a consequence also the maximum allowable sampling interval for which we would be able to guarantee stability.
However, the upper bound on V(x) from Proposition 1 is typically conservative, i.e., V(x) often decreases more strongly in the nominal case than guaranteed, and the effect of the disturbance is often not the worst-case effect that is considered in Proposition 1. It may therefore be reasonable to sometimes allow a certain increase of V(x), as long as an average decrease of V(x) can be guaranteed or a comparable condition that ensures the desired stability properties is still satisfied. We will now describe how this can be exploited by a dynamic STC mechanism to maximize sampling intervals.
Suppose the condition that the dynamic STC mechanism shall ensure when choosing the next sampling instant t_{j+1} at time s_j has the form

V(x(t, j + 1)) ≤ e^{−ε_ref(t−t_j)} C(x(s_j), η(s_j)),    (18)

which has to hold in the nominal case for t_j ≤ t ≤ t_{j+1}, a tunable constant ε_ref > 0 and a function C : R^{n_x} × R^{n_η} → R_{≥0} that will be specified later. The condition is stated for the nominal case since we assume in this section that the disturbance signal w(t) may be arbitrary and unknown, and thus the dynamic STC mechanism cannot take it explicitly into account for determining sampling instants. Nevertheless, we will later derive ISS guarantees for the STC mechanism despite the disturbance for different choices of C(x, η).
The dynamic STC mechanism can now use the bound on V(x) from Proposition 1 to choose the next sampling instant, i.e., to determine at time s_j a preferably large value for τ_max(s_j^+) = t_{j+1} − t_j such that (18) holds in the nominal case. For that, suppose the dynamic STC mechanism has access to n_par different parameter sets (ε_i, γ_i, L_i), i ∈ {1, . . ., n_par}, for which Assumption 1 holds for the same functions α_w and V(x). We will later require that at least for one of the parameter sets, to which we assign the index 1, ε_1 > 0 holds. For all other parameter sets, ε_i may be negative. In the nominal case, (18) holds due to Proposition 1 if, for one of the parameter sets,

e^{−ε_i(t−t_j)} V(x(s_j)) ≤ e^{−ε_ref(t−t_j)} C(x(s_j), η(s_j))    (19)

holds for some Λ_i > 0 and

τ_max(s_j^+) ≤ T_max(γ_i, Λ_i).    (20)

Thus, the dynamic STC mechanism needs to search at hybrid time s_j for a preferably large value for τ_max(s_j^+) such that (19) and (20) hold for some i ∈ {1, . . ., n_par} and t_j ≤ t ≤ t_j + τ_max(s_j^+). For an efficient search, we make two simplifications. First, we replace condition (20) by

τ_max(s_j^+) ≤ δT_max(γ_i, Λ_i)    (21)

for some δ ∈ (0, 1). Second, we fix

Λ_i = max{L_i + ε_i/2, 1 − δ}.    (22)

Here, the (typically small) positive value of 1 − δ avoids that Λ_i ≤ 0. Note that Proposition 1 still applies despite the simplifications.
The second simplification allows us to rewrite (19) as

e^{(ε_ref − ε_i)(t−t_j)} V(x(s_j)) ≤ C(x(s_j), η(s_j)).

Now suppose that C(x(s_j), η(s_j)) ≥ V(x(s_j)). In this case, maximizing τ_max(s_j^+) for a given i ∈ {1, . . ., n_par} such that (21) and (22) hold is straightforward. In particular, if −ε_i + ε_ref > 0, then we obtain

τ_max(s_j^+) = min{ δT_max(γ_i, Λ_i), (1/(ε_ref − ε_i)) ln( C(x(s_j), η(s_j)) / V(x(s_j)) ) }

as the maximum value. Otherwise, i.e., if −ε_i + ε_ref ≤ 0, we obtain the maximum value τ_max(s_j^+) = δT_max(γ_i, Λ_i). The case that C(x(s_j), η(s_j)) < V(x(s_j)) is typically not relevant and is therefore omitted by the dynamic STC mechanism for simplicity. In this case, it will use a fall-back strategy detailed subsequently.
An efficient search for a preferably large value of τ_max(s_j^+) is thus possible for any given parameter set that satisfies Assumption 1. The STC mechanism can now simply iterate over all parameter sets for i ∈ {2, . . ., n_par} and use the largest value for τ_max(s_j^+) for which a guarantee can be obtained that (18) holds if t_{j+1} − t_j = τ_max(s_j^+). It may, however, happen that a guarantee that (18) holds cannot be obtained for any i ∈ {2, . . ., n_par}. In this case, the STC mechanism uses a fallback strategy that exploits that ε_1 > 0. In particular, it then chooses τ_max(s_j^+) = δT_max(γ_1, L_1 + ε_1/2), for which it follows in the nominal case from Proposition 1 for t_j ≤ t ≤ t_{j+1} that V(x(t, j + 1)) ≤ e^{−ε_1(t−t_j)} V(x(t_j)), which can be employed to obtain stability guarantees.
In Algorithm 2, the procedure to determine a preferably large value for τ_max(s_j^+) such that (18) holds in the nominal case is summarized. It will later serve for the dynamic STC mechanism as an implicit definition of the function Γ(x, η) for given C(x, η). The algorithm first computes the values of V(x) and C(x, η). Then it sets h to the minimum value for the fallback strategy, which is given by h = δT_max(γ_1, L_1 + ε_1/2). After that, an iteration over all other parameter sets is started. For each parameter set, the algorithm determines a preferably large value for which (18) holds if τ_max(s_j^+) is chosen accordingly. If h is smaller than that value, then h is updated accordingly. Thus, after the iteration, the variable h contains a preferably large value for τ_max(s_j^+) for which it is guaranteed that (18) holds in the nominal case, or it is set according to the fall-back strategy.

Algorithm 2: Computation of Γ(x, η) for the dynamic STC framework for some δ ∈ (0, 1) and given C(x, η).
  h_i ← δT_max(γ_i, Λ_i)
  end if
17: end for
18: Γ(x, η) ← h
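The iteration of Algorithm 2 can be sketched as follows; `t_max` is a decreasing placeholder for T_max(γ, Λ) (the exact formula is not needed to illustrate the search), and the closed-form bound used when −ε_i + ε_ref > 0 follows from comparing the exponential rates ε_i and ε_ref:

```python
import math

def t_max(gamma, lam_cap):
    # placeholder for T_max(gamma, Lambda): positive, decreasing in both arguments
    return 1.0 / (gamma + lam_cap)

def gamma_stc(V_x, C_xeta, params, eps_ref=0.1, delta=0.9):
    """Sketch of Algorithm 2: search over parameter sets (eps_i, gamma_i, L_i)
    for a preferably large tau_max such that (18) holds in the nominal case.
    params[0] must be the fallback set with eps_1 > 0."""
    eps1, gam1, L1 = params[0]
    h = delta * t_max(gam1, L1 + eps1 / 2.0)          # fallback value
    if C_xeta < V_x:                                   # fall-back strategy
        return h
    for eps_i, gam_i, L_i in params[1:]:
        lam_i = max(L_i + eps_i / 2.0, 1.0 - delta)    # simplification (22)
        if -eps_i + eps_ref > 0:
            h_i = min(delta * t_max(gam_i, lam_i),
                      math.log(C_xeta / V_x) / (eps_ref - eps_i))
        else:                                          # decay at least eps_ref
            h_i = delta * t_max(gam_i, lam_i)
        h = max(h, h_i)                                # keep the largest guaranteed value
    return h

tau = gamma_stc(V_x=1.0, C_xeta=2.0,
                params=[(0.5, 4.0, 1.0), (-0.2, 2.0, 1.0)])
print(tau)
```

With the illustrative numbers above, the second parameter set (negative ε, smaller γ) yields a larger admissible interval than the fallback set, which is exactly the benefit the dynamic mechanism exploits.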
In the next section, we will present particular choices for C(x, η) and for S(η, x) that lead, together with the implicit definition of Γ(x, η) in Algorithm 2, to particular dynamic STC mechanisms with different closed-loop properties.

IV. DYNAMIC STC MECHANISMS WITH ISS GUARANTEES
In this section, we present three different dynamic STC mechanisms that are based on the implicit definition of Γ(x, η) in Algorithm 2. The first two mechanisms will use different (time-varying) linear filters for the values of the Lyapunov function V(x) at past sampling instants to determine the next sampling instant. The third one uses instead a time-dependent but state-independent reference function. Based on Assumption 1, we derive for all three mechanisms guarantees for ISS, which also implies UGAS of the origin in the nominal case.

A. Dynamic STC based on an FIR filter for the Lyapunov function
In this subsection, we present a dynamic STC mechanism that uses a time-varying FIR filter for the values of the Lyapunov function V(x) at past sampling instants to determine the next sampling instant. Note that a preliminary version of this mechanism has been presented in [18]. The mechanism uses Algorithm 2 to choose the next sampling instant t_{j+1} at time s_j such that, for some m > 1,

V(x(t_{j+1}, j + 1)) ≤ (1/m) Σ_{k=1}^{m} e^{−ε_ref(t_{j+1}−t_{j+1−k})} V(x(t_{j+1−k}))    (23)

would hold in the nominal case, i.e., such that the Lyapunov function at the next sampling instant would be bounded by a discounted average of the values of the Lyapunov function at the past m sampling instants. To implement this choice in the setup from Section II and with Algorithm 2, we set n_η = m − 1 as the dimension of the dynamic variable η = (η_1, . . ., η_{m−1})^⊤ and define the update rule of the dynamic variable as

η_1(s_j^+) = e^{−ε_ref Γ(x,η)} V(x(s_j)),   η_k(s_j^+) = e^{−ε_ref Γ(x,η)} η_{k−1}(s_j),  k ∈ {2, . . ., m − 1},    (24)

where Γ(x, η) is defined by Algorithm 2 for a function C(x, η) that is still to be determined. Note that t_{j+1} − t_j = τ_max(s_j^+) = Γ(x(s_j), η(s_j)). Hence, for this choice of S(η, x),

η_k(s_j) = e^{−ε_ref(t_j−t_{j−k})} V(x(t_{j−k}))

holds for k ≤ min{j, m − 1}. Then, (18) is equal to (23) if we choose

C(x, η) := (1/m) ( V(x) + Σ_{k=1}^{m−1} η_k ).    (26)

For k > j, the value of η_k is determined by the initial condition η(0, 0) and does not influence the stability properties. Therefore, we define the function Γ(x, η) for the dynamic STC mechanism that is based on a discounted average of the Lyapunov function implicitly by Algorithm 2 with C(x, η) according to (26). This leads us to the following result.
Theorem 1. Assume there are n_par different parameter sets (ε_i, γ_i, L_i), i ∈ {1, . . ., n_par}, for which Assumption 1 holds with the same functions V and α_w. Let ε_1 > 0. Further, consider H_STC with S(η, x) and Γ(x, η) defined according to (24) and by Algorithm 2 with C(x, η) according to (26) and some δ ∈ (0, 1). Then H_STC is ISS.

Proof. The proof is given in Appendix A2.
Remark 2. The combination of (24) and (26) corresponds to a (time-varying) FIR filter for the Lyapunov function V(x) evaluated at sampling instants. The e^{−ε_ref Γ(x,η)} terms are included here to determine the convergence speed of the dynamic STC mechanism in the nominal case. Instead, a constant factor could be used, as was done in the preliminary study [18]. However, then the convergence behavior of the filter would be influenced by the time between sampling instants.
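One plausible shift-register realization of such an FIR update and the corresponding discounted average C(x, η) can be sketched as follows; the concrete form of `S_fir` is an assumption for illustration, not necessarily the article's exact update rule:

```python
import math

def S_fir(eta, V_x, eps_ref, tau):
    """Time-varying FIR update: shift in the current Lyapunov value and
    discount every stored value by exp(-eps_ref * tau), tau = Gamma(x, eta)."""
    d = math.exp(-eps_ref * tau)
    return [d * V_x] + [d * v for v in eta[:-1]]

def C_fir(eta, V_x, m):
    """Discounted average of the current and the past m-1 Lyapunov values."""
    return (V_x + sum(eta)) / m

m, eps_ref = 4, 0.1
eta = [0.0] * (m - 1)                     # eta has dimension m - 1
# feed in Lyapunov values 1.0, 0.8, 0.5 at sampling intervals of length 0.2
for V_x in (1.0, 0.8, 0.5):
    c = C_fir(eta, V_x, m)                # bound used for the trigger decision
    eta = S_fir(eta, V_x, eps_ref, tau=0.2)
print(eta, c)
```

After the loop, `eta` holds the last three Lyapunov values, each discounted once per elapsed sampling interval, so older entries carry more discount, which is the averaging behavior described in the text.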

B. Dynamic STC based on an IIR filter for the Lyapunov function
In this subsection, we present a second approach to choose the dynamics of the dynamic variable. While the approach from the previous subsection was based on an FIR filter, we consider in this subsection an approach that is based on a (time-varying) IIR filter. To implement the IIR filter, we set n_η = 1 and choose

S(η, x) = e^{−ε_ref Γ(x,η)} (r_1 η + r_2 V(x)),    (27)

where r_1 ∈ R_{>0} and r_2 ∈ R_{>0} are constants satisfying r_1 + r_2 ≤ 1 and Γ(x, η) is again defined by Algorithm 2 for a function C(x, η) that will be determined next. The trigger decision is made for this STC mechanism such that (18) holds with C(x, η) := r_1 η + r_2 V(x).

Proof. The proof is given in Appendix A3.

Remark 3. The update of η according to (27) can be interpreted as a time-varying IIR filter for the values of the Lyapunov function at past sampling instants. Similarly as in the previous subsection, the additional term e^{−ε_ref Γ(x,η)} is included to determine the convergence speed of the closed-loop system with the dynamic STC mechanism in the nominal case. This term could also be replaced by a constant term. However, then the convergence behavior of the filter would depend on the time between sampling instants, which may be undesired.
Remark 4. The parameters r_1 and r_2 can be used to tune the behavior of the IIR filter. The condition r_1 + r_2 ≤ 1 ensures that the interconnection of system and filter is stable.
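A minimal sketch of such an IIR update, assuming the convex-combination form η⁺ = e^{−ε_ref Γ}(r_1 η + r_2 V(x)) (an illustrative assumption): with r_1 + r_2 ≤ 1 and bounded Lyapunov values, the filter state remains bounded, consistent with the stability remark above.

```python
import math

def S_iir(eta, V_x, r1, r2, eps_ref, tau):
    """Time-varying IIR update (assumed form): combination of the old filter
    state and the current Lyapunov value, discounted over the interval tau."""
    return math.exp(-eps_ref * tau) * (r1 * eta + r2 * V_x)

r1, r2, eps_ref = 0.7, 0.3, 0.1
eta = 1.0
for V_x in (1.0, 0.9, 0.7, 0.4, 0.2):     # decreasing Lyapunov values
    eta = S_iir(eta, V_x, r1, r2, eps_ref, tau=0.2)
print(eta)
```

Since r_1 + r_2 ≤ 1 and the discount factor is at most one, η can never exceed the maximum of its initial value and the fed-in Lyapunov values, while the infinite memory lets all past samples influence the trigger decision.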

C. Dynamic STC based on a time-dependent reference function
In the two previous subsections, we have presented two dynamic STC mechanisms that are based on a linear filter for the Lyapunov function V(x) at past sampling instants and that are thus based on past system states. In this subsection, we will present a different approach that instead bounds the Lyapunov function V(x) at the next sampling instant by a reference function that depends only on time and on the initial state of the NCS. It is thus independent of the actual state evolution of the system. Such a reference function-based approach may, e.g., be advantageous for setpoint changes. In particular, the goal of the dynamic STC mechanism that we present in this subsection is to ensure for a function V_ref : R_{≥0} → R_{≥0} that

V(x(t, j)) ≤ V_ref(t)    (30)

holds in the nominal case for all (t, j) ∈ dom ξ. We assume for simplicity that η(0, 0) = V(x(0, 0)), which is not restrictive if the initial value of the dynamic variable can be set by the user, and focus on the specific reference function choice

V_ref(t) = e^{−ε_ref t} V(x(0, 0)).

This choice can be implemented by using the dynamic variable with n_η = 1 and

S(η, x) = e^{−ε_ref Γ(x,η)} η,

where Γ(x, η) is again defined by Algorithm 2 for a function C(x, η) that will be determined next. Recall that we want to choose sampling instants such that (30) holds. This can be achieved by using Algorithm 2 with

C(x, η) := η.    (32)

Hence, the function Γ(x, η) is defined for the dynamic STC mechanism that is based on the reference function V_ref by Algorithm 2 with C(x, η) according to (32). This leads us to the following result.
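Assuming the multiplicative update S(η, x) = e^{−ε_ref Γ(x,η)} η and η(0, 0) = V(x(0, 0)), the dynamic variable reproduces V_ref(t) = e^{−ε_ref t} V(x(0, 0)) exactly at the sampling instants, independently of the state trajectory; a quick numerical check:

```python
import math

eps_ref, V0 = 0.2, 3.0
eta, t = V0, 0.0                        # eta(0, 0) = V(x(0, 0))
for tau in (0.3, 0.1, 0.5, 0.25):       # arbitrary sampling intervals Gamma(...)
    eta *= math.exp(-eps_ref * tau)     # update S(eta, x) = exp(-eps_ref*tau)*eta
    t += tau
print(eta, V0 * math.exp(-eps_ref * t))
```

Because the exponential discounts multiply over consecutive intervals, η(t_j) = e^{−ε_ref t_j} V(x(0, 0)) regardless of how the intervals are chosen, which is what makes this variant purely time-dependent.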
Proof. The proof is given in Appendix A4.
A numerical example for the mechanisms presented in this section is given in Section VI-A.

V. LOCAL RESULTS FOR BOUNDED DISTURBANCES
Up to this point, we have not posed any assumptions on the disturbance signal w other than that it is locally integrable. We have derived guarantees on UGAS and ISS for such a setup.
However, these results require Assumption 1 to hold globally, since the disturbance may be arbitrarily large. This may be restrictive in some situations. Moreover, it may lead to unnecessarily conservative results, as the parameters ε, γ and L may be chosen in a less conservative manner if only subsets of the state space need to be considered when verifying Assumption 1. To overcome these restrictions, we present in this section local results for the case that w(t) ∈ W := {w : |w| ≤ w̄} for all t ≥ 0 and some w̄ > 0, for which a local version of Assumption 1 can be exploited.
A. Using disturbance bounds in the general framework for dynamic STC

Subsequently, we will modify the dynamic STC framework from the previous sections such that RAS of a sublevel set of V will be guaranteed for bounded disturbances.
To obtain guarantees for RAS, we will use the following local version of Assumption 1, which is stated for a sublevel set X_c := {x | V(x) ≤ c} of V for some c > 0. We use subsequently the notation x̂ := (x̂_p^⊤, x̂_c^⊤)^⊤.

Assumption 2. There exist a locally Lipschitz function W : R^{n_e} → R_{≥0}, a locally Lipschitz function V : R^{n_x} → R_{≥0}, a continuous function H : R^{n_x} × R^{n_e} × R^{n_w} → R_{≥0}, constants L ≥ 0, γ > 0, ε ∈ R and a function α_w ∈ K_∞ such that, for all x ∈ X_c, w ∈ W and almost all e = x̂ − x with x̂ ∈ X_c,

⟨∂W(e)/∂e, g(x, e, w)⟩ ≤ LW(e) + H(x, e, w).
Moreover, for all e = x̄ − x with x̄ ∈ X_c, all w ∈ W and almost all x ∈ X_c, the corresponding bound on the derivative of V along f(x, e, w) holds. We will later exploit that the modified assumption may become less restrictive in the sense that smaller values for γ and L may be possible as c decreases. Based on the modified assumption, we obtain the following modified version of Proposition 1.
Note that we have already used the simplification Λ = max{L + ε/2, (1 − δ)} in the proposition.
Proposition 2 can be used in order to determine sampling instants such that condition (18) is satisfied for all possible disturbance signals that satisfy the disturbance bound (and not only in the nominal case). Suppose there are n_par parameter sets ε_{i,l}, γ_{i,l}, L_{i,l}, i ∈ {1, . . ., n_par}, for which Assumption 2 holds for some c_l and with ε_{1,l} > 0 and ε_{i,l} < 0 for all i ∈ {2, . . ., n_par}.
If V(x(s_j)) < c_l, C(x(s_j), η(s_j)) ≤ c_l and w(t) ∈ W for t_j ≤ t ≤ t_{j+1}, then (18) can be ensured with Proposition 2 if there is a parameter set with index i_j ∈ {2, . . ., n_par} for Assumption 2 for which the respective bound holds for t_j ≤ t ≤ t_{j+1} and (39) holds for some δ > 0. Note that the former holds if ε_{i_j} ≤ 0.
A checkable sufficient condition for (39), which can be used to determine τ_max(s_j^+), can thus be derived for the case that C(x(s_j), η(s_j)) ≥ V(x(s_j)). In order to reduce potential conservativity when determining sampling instants, different values for c can be used in Assumption 2 at different sampling instants, depending on the current values of V(x(t_j)) and C(x(t_j), η(t_j)). In particular, suppose there are n_c ∈ N values c_l, l ∈ {1, . . ., n_c}, for each of which Assumption 2 has been verified offline for n_par specific parameter sets ε_{i,l}, γ_{i,l} and L_{i,l}. Then, choosing l such that c_l is as small as possible and satisfies c_l ≥ max{V(x(t_j)), C(x(t_j), η(t_j))} leads to reduced conservativity when determining sampling instants.
A modified version of Algorithm 2 that explicitly considers the bound on w and that uses different values for c in Assumption 2 is given by Algorithm 3. Note that this algorithm follows essentially the same main steps as Algorithm 2.
Differences to Algorithm 2 are that, first, a suitable value for l is chosen in Line 2 such that c_l is as small as possible but satisfies c_l ≥ max{V(x(s_j)), C(x(s_j), η(s_j))}. Then, an iteration over all parameter sets is started. For each parameter set, a preferably large value for h_i is determined such that (18) can be guaranteed for the chosen l and for all possible disturbance realizations if t_{j+1} = t_j + h_i. This is ensured by the choice of h_i in Line 8, which is modified in comparison to the respective line of Algorithm 2. Similarly to Algorithm 2, the maximum such h_i is selected as the value h for Γ(x(s_j), η(s_j)). If there is no parameter set for the considered l for which (18) can be guaranteed to hold based on Proposition 2, the fallback strategy based on ε_{1,l} is used. In particular, the algorithm then sets h = δ T_max(γ_{1,l}, L_{1,l} + ε_{1,l}/2).

Algorithm 3: Computation of Γ(x, η) for the dynamic STC framework for some δ ∈ (0, 1) and given C(x, η), taking into account the disturbance bound.
⋮
else
10: h_i ← δ T_max(γ_{i,l}, Λ_{i,l})
end if
⋮
18: end for
19: Γ(x, η) ← h

For this fallback strategy, it is important to note that Proposition 2 delivers in this case a corresponding bound on V(x(s_{j+1})). To guarantee RAS of the set R with ROA X_{c_max}, c_w needs to be such that V(x(s_{j+1})) < V(x(s_j)) holds for the fallback strategy for x(s_j) ∈ X_{c_max} \ R and V(x(s_{j+1})) ≤ c_w holds for all x(s_j) ∈ R. This is ensured by (43) if c_w satisfies the corresponding lower bound. Next, we discuss how the particular dynamic STC mechanisms from Section IV need to be modified in order to guarantee RAS of the set R with ROA X_{c_max}.
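The main steps of Algorithm 3 can be sketched as follows. The function name, the tuple format of the parameter sets, and the callables `Tmax` (the maximum-interval bound from the hybrid Lyapunov analysis) and `cond39` (the check whether a parameter set certifies (18) via (39)) are illustrative assumptions, since the full listing is not reproduced here.

```python
def gamma_of(x, eta, levels, par_sets, V, C, Tmax, cond39, delta=0.5):
    """Sketch of Algorithm 3 (names and signatures are illustrative).

    levels      -- ascending list of sublevel-set values c_l
    par_sets[l] -- list of (eps, gamma, L) tuples; index 0 is the
                   fallback set with eps_1l > 0, the rest have eps < 0
    Tmax        -- bound T_max(gamma, Lambda) on the sampling interval
    cond39      -- True if parameter set i certifies (18) for level l
    """
    target = max(V(x), C(x, eta))
    # Line 2: smallest admissible sublevel set (assumes one exists)
    l = next(i for i, c in enumerate(levels) if c >= target)
    # Lines 3-18: best certified interval over parameter sets 2..n_par,
    # using Lambda_{i,l} = L_{i,l} + eps_{i,l}/2 as in Line 10
    hs = [delta * Tmax(g, L + eps / 2)
          for i, (eps, g, L) in enumerate(par_sets[l][1:], start=2)
          if cond39(i, l)]
    if hs:
        return max(hs)  # Line 19: Gamma(x, eta) <- h
    # Fallback strategy based on the first parameter set (eps_1l > 0)
    eps1, g1, L1 = par_sets[l][0]
    return delta * Tmax(g1, L1 + eps1 / 2)
```

A hypothetical usage with a toy `Tmax` illustrates both branches: when no set passes `cond39`, the fallback interval based on ε_{1,l} is returned.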

B. Modifications for the dynamic STC mechanisms to guarantee RAS
We discuss in this subsection which additional modifications are required for the particular dynamic STC mechanisms from Section IV in order to guarantee RAS of the set R with region of attraction X_{c_max}. For simplicity, we assume that c_max = max_{l∈{1,...,n_c}} c_l and c_w = min_{l∈{1,...,n_c}} c_l.

1) Dynamic STC based on an FIR filter:
To guarantee RAS of the set R with region of attraction X_{c_max}, the dynamic STC mechanism based on an FIR filter for V from Subsection IV-A needs to be modified such that it ensures that the system state stays in the set X_{c_max} for all times, since Assumption 2 is only valid in this set. Keeping the system state in X_{c_max} can be achieved by limiting the maximum value of C(x, η) to c_max. Moreover, in order to enlarge the time between sampling instants, it is beneficial to set the value of C(x, η) to c_w if the filter state would otherwise result in smaller values. Both can be achieved by replacing the definition of C(x, η) from (26) by (44). Using this modification, we obtain the following result.
Theorem 4. Suppose w(t) ∈ W for all t ≥ 0. Assume there are n_c parameters c_l, l ∈ {1, . . ., n_c}, each with n_par different parameter sets ε_{i,l}, γ_{i,l}, L_{i,l}, i ∈ {1, . . ., n_par}, for which Assumption 2 holds with the same functions V and α_w. Let ε_{1,l} > 0 and ε_{i,l} < 0 for i > 1 and each l. Consider H_STC with S(η, x) and Γ(x, η) defined according to (24) and by Algorithm 3 with C(x, η) according to (44) and some δ ∈ (0, 1). Then t_{j+1} − t_j ≥ t_min = min_{l∈{1,...,n_c}} δ T_max(γ_{1,l}, L_{1,l} + ε_{1,l}/2) and the set R is RAS for H_STC with region of attraction X_{c_max}.
Proof. The proof is given in Appendix A6.
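The saturated bound in (44) can be sketched as follows. The weighted FIR combination of past Lyapunov-function values is an assumed stand-in for the filter from (26), which is not reproduced in this excerpt; only the clipping to [c_w, c_max] is the modification discussed above.

```python
def C_fir_saturated(past_V, weights, c_w, c_max):
    """Sketch of the modified bound (44): clip an FIR combination of past
    Lyapunov-function values to [c_w, c_max], so the state is kept in
    X_cmax while sampling can relax once the filter output drops below c_w.
    The FIR form sum(q_k * V_k) is an illustrative stand-in for (26)."""
    raw = sum(q * v for q, v in zip(weights, past_V))
    return min(max(raw, c_w), c_max)
```

For example, a filter output above c_max is clipped to c_max, and one below c_w is lifted to c_w.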
2) Dynamic STC based on an IIR filter: The modification required for the dynamic STC mechanism based on an IIR filter for V from Subsection IV-B to guarantee RAS of R with ROA X_{c_max} is quite similar to that for the FIR mechanism. To ensure that the system state stays in X_{c_max} for all times, the maximum value of C(x, η) can again be limited to c_max. Moreover, similar to the modification for the FIR mechanism, it is beneficial to set the value of C(x, η) to c_w if the filter state would result in smaller values, in order to enlarge the time between sampling instants. Both can be achieved by replacing the definition of C(x, η) from (29) accordingly. With this modification, we obtain the following result.
Theorem 5. Suppose w(t) ∈ W for all t ≥ 0. Assume there are n_c parameters c_l, l ∈ {1, . . ., n_c}, each with n_par different parameter sets ε_{i,l}, γ_{i,l}, L_{i,l}, i ∈ {1, . . ., n_par}, for which Assumption 2 holds with the same functions V and α_w. Let ε_{1,l} > 0 and ε_{i,l} < 0 for i > 1 and each l, and consider H_STC with S(η, x) and Γ(x, η) defined according to (27) and by Algorithm 3 with the modified C(x, η) and some δ ∈ (0, 1). Then t_{j+1} − t_j ≥ t_min and the set R is RAS for H_STC with region of attraction X_{c_max}.
The proof of Theorem 5 is omitted for brevity. It follows from a combination of the proofs of Theorems 2 and 4.
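For the IIR case, a corresponding sketch is given below. The filter update S(η, x) = r_1·η + r_2·V(x) is an assumed concrete form, chosen to be consistent with the condition r_1 + r_2 ≤ 1 from Theorem 2; the clipping mirrors the FIR modification.

```python
def iir_update(eta, Vx, r1=0.9, r2=0.1):
    """Assumed IIR filter update S(eta, x) = r1*eta + r2*V(x) with
    r1 + r2 <= 1, matching the values used in the numerical examples."""
    return r1 * eta + r2 * Vx

def C_iir_saturated(eta, c_w, c_max):
    """Modified bound for the IIR mechanism: the filter state clipped to
    [c_w, c_max], analogous to the FIR case in (44)."""
    return min(max(eta, c_w), c_max)
```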
3) Dynamic STC based on a reference function: To modify the reference function-based dynamic STC mechanism from Subsection IV-C such that it ensures RAS of R with ROA X_{c_max}, the reference function needs to be adapted. To ensure that the system state stays in X_{c_max} for all times, the maximum value of the reference function needs to be bounded. Moreover, instead of choosing the reference function such that it converges to 0, it is beneficial to let it converge to c_w in order to maximize the time between sampling instants. We focus subsequently on a specific choice of the reference function. This reference function can be implemented with n_η = 1 and (31), where Γ(x, η) is defined by Algorithm 3 with a correspondingly modified C(x, η). Note that for these choices, Γ(x, η) is chosen, if possible, such that (30) holds not only in the nominal case but for all disturbance signals that may occur. We obtain the following result for the modified mechanism.
The proof of Theorem 6 is omitted for brevity. It follows from a combination of the proofs of Theorems 3 and 4.
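A natural reference function with the properties described above, assumed here for illustration since the displayed formula is not reproduced in this excerpt, decays from min{V(x(0,0)), c_max} to c_w instead of to 0:

```python
import math

def v_ref_ras(t, V0, c_w, c_max, eps_ref=1.0):
    """Hypothetical RAS reference function: bounded by c_max and converging
    to c_w rather than 0, so that sampling can relax once the sublevel set
    R is reached (an illustrative choice, not the article's exact formula)."""
    start = min(V0, c_max)
    return c_w + math.exp(-eps_ref * t) * (start - c_w)
```

By construction, v_ref_ras(0, ...) equals min{V0, c_max} and the value approaches c_w as t grows.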

VI. NUMERICAL EXAMPLES
In this section, we present two numerical examples to illustrate the dynamic STC mechanisms from Sections IV and V.

A. Example 1
In this subsection, we illustrate the dynamic STC mechanisms from Section IV with the nonlinear example system from [25]. The example system is a perturbed single-link robot arm with a static state feedback controller u and a varying parameter ã ∈ [−a, a] that depends on x_1. Hence, we obtain the closed loop in the form required for our framework. For V(x) = x^⊤Px, α_w(|w|) = θ²w² with θ > 0 and any fixed ã ∈ [−a, a], the LMI-based approach from [24, Section 4] to verify Assumption 1 can easily be adapted to the setup of this article. In particular, (11) is convex in ã and can thus be verified for all ã ∈ [−a, a] by taking the maximum value of L for the extremal values ã = −a and ã = a. Inequality (12) can be factorized such that the result is convex in ã and ã². Then, γ can be minimized with one LMI constraint, similar as in [24, Section 4], for each combination of the extremal values for ã and ã². Subsequently, we consider a = 9.81/2 and b = 2. We have computed n_par = 21 different parameter sets that satisfy Assumption 1 with ε_i ∈ [−20, 0.01] and θ = 10. The maximum sampling interval for which ISS can be guaranteed with periodic sampling is 0.175 s. It also serves as the fall-back strategy for the dynamic STC mechanisms.
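To give a rough impression of the periodic fall-back baseline, the following sketch simulates a single-link robot arm under zero-order-hold sampling with period 0.175 s. The plant model and the feedback law are illustrative assumptions, since the exact dynamics and controller from [25] are not reproduced in this excerpt; only a, b, the sampling period, the initial condition and the disturbance signal are taken from the article.

```python
import math

# Assumed plant: ddot(x1) = a*sin(x1) + b*u + w with a = 9.81/2, b = 2
a, b = 9.81 / 2, 2.0

def f(x, u, w):
    """Plant vector field; x = (angle, angular velocity)."""
    return (x[1], a * math.sin(x[0]) + b * u + w)

def simulate_periodic(x0, T=0.175, t_end=15.0, dt=1e-3):
    """Euler simulation with zero-order-hold inputs updated every T seconds,
    the periodic fall-back interval for which ISS can be guaranteed."""
    x, u = list(x0), 0.0
    steps_per_sample = round(T / dt)
    for k in range(round(t_end / dt)):
        t = k * dt
        if k % steps_per_sample == 0:
            # assumed feedback: cancel the gravity term at the sampled
            # state, then apply a linear stabilizing law (held until the
            # next sampling instant)
            u = -(a / b) * math.sin(x[0]) - x[0] - x[1]
        # disturbance from the article: w(t) = sin(t) on [2*pi, 4*pi]
        w = math.sin(t) if 2 * math.pi <= t <= 4 * math.pi else 0.0
        dx = f(x, u, w)
        x = [x[0] + dt * dx[0], x[1] + dt * dx[1]]
    return x
```

Starting from x(0) = [0.5, 0.5], the simulated state remains bounded and returns toward the origin after the disturbance vanishes.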
In Figures 3 and 4, state trajectories and the evolution of the sampling intervals are plotted for simulations of the three dynamic STC mechanisms from Section IV and of periodic sampling with sampling period 0.175 s. The initial condition for the trajectories is x(0) = [0.5, 0.5]^⊤. The disturbance signal is w(t) = sin(t) for 2π s ≤ t ≤ 4π s and w(t) = 0 otherwise. For the three dynamic STC mechanisms, we have used ε_ref = 0.2. For the FIR mechanism from Subsection IV-A, we have in addition chosen n_η = 20, and for the IIR mechanism from Subsection IV-B, we have chosen r_1 = 0.9 and r_2 = 0.1.
It can be seen that significantly larger sampling intervals are achieved by all dynamic STC mechanisms in comparison to periodic sampling. Nevertheless, the state trajectories are qualitatively similar. All three dynamic STC mechanisms reduce the sampling intervals as soon as the disturbance drives the system states away from the origin. When the influence of the disturbance decreases, the sampling intervals are increased again. Comparing the three dynamic STC mechanisms, the FIR and IIR mechanisms show similar behavior, whereas the reference function mechanism takes longer to enlarge the sampling intervals after the influence of the disturbance on the system state has diminished.
Note that all the dynamic STC mechanisms from Section IV aim at stabilizing the origin, which may be disadvantageous when this is impossible due to the disturbance, since then the sampling interval may even be reduced to that of the fall-back strategy. This potential disadvantage is overcome by instead aiming to stabilize an invariant set containing the origin, as is done by the dynamic STC mechanisms from Section V.

B. Example 2
In this subsection, we use the example from [13] to illustrate the dynamic STC mechanisms from Section V. We assume that |w(t)| ≤ 0.4 for all t. Using again x = (x_1, x_2) and e = (e_1, e_2) = (x̄_1 − x_1, x̄_2 − x_2), we obtain the error dynamics in the required form. We consider V(x) = 1.5 x^⊤x. Note that for any c_l ∈ R, all x ∈ X_{c_l} satisfy |x| ≤ √(c_l/28) and all e = x̄ − x with x ∈ X_{c_l} and x̄ ∈ X_{c_l} satisfy |e| ≤ 2√(c_l/28). Thus, (48) can be rewritten as

f(x, e, w) = [−1 0; 0 −1] x + [0 0; ã_1 ã_2] e + [0; 1] w

for varying parameters ã_1 ∈ [−√(c_l/7), √(c_l/7)] and ã_2 ∈ [−√(3c_l/14), √(3c_l/14)], depending on x and e. Thus, Assumption 2 can be verified for this example for any c_l and any x ∈ X_{c_l} and x̄ ∈ X_{c_l} by using the LMI-based approach from [24, Section 4] for all combinations of extremal values of ã_1 and ã_2.
Similarly as in [13], we aim at stabilizing the set {x | |x| ≤ 0.65} for all initial conditions x(0) that satisfy |x(0)| ≤ 5. For our choice of V(x), this translates to c_w = 0.64 and c_max = 37.87 for R and X_{c_max}. We have selected n_c = 40 values c_l, l ∈ {1, . . ., 40}, and for each c_l computed n_par = 20 different parameter sets that satisfy Assumption 2 with ε_{i,l} ∈ [−15, 1] and θ = 2.
In Figures 5 and 6, state trajectories and the evolution of the sampling intervals are plotted for simulations of the three dynamic STC mechanisms from Section V and of always using the fall-back strategy as sampling interval for the current sublevel set of V(x), i.e., the largest sampling period for which we could guarantee stability with periodic sampling for this sublevel set. The initial condition for the trajectories is x(0) = [4, −3]^⊤. The disturbance signal is w(t) = 0.4 for 5.3 s ≤ t ≤ 8 s and w(t) = 0 otherwise. For the three dynamic STC mechanisms, we have used ε_ref = 1. For the FIR mechanism from Subsection IV-A, we have in addition chosen n_η = 20, and for the IIR mechanism from Subsection IV-B, we have chosen r_1 = 0.9 and r_2 = 0.1. It can be seen that significantly larger sampling intervals are achieved by all three dynamic STC mechanisms in comparison to using the fall-back strategy. Moreover, the disturbance does not force the mechanisms to resort to the sampling interval of the fall-back strategy, since the mechanisms no longer aim to stabilize the origin, which would be impossible due to the disturbance.
In Table I, a comparison of the required number of sampling instants for the three dynamic STC mechanisms from Section V is given. It can be seen that all three mechanisms lead to a comparable number of sampling instants. When comparing the dynamic STC mechanisms to the static STC mechanism from [13], it can be seen that the dynamic STC mechanisms require significantly fewer sampling instants. Note that an additional comparison of a preliminary version of the dynamic STC mechanism based on the FIR filter to the static STC mechanism from [10] can be found in [18].
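Sampling-instant counts such as those in Table I are obtained from simulation loops of the following generic form; `Gamma` and `step` are placeholders for a concrete STC mechanism and plant integrator, so the snippet illustrates the runtime structure of STC rather than any specific mechanism from the article.

```python
def run_stc(x0, Gamma, step, t_end=15.0):
    """Generic STC runtime loop (sketch): at each sampling instant the
    mechanism returns the next sampling interval via Gamma, and 'step'
    advances the plant state over that interval. Returns the number of
    sampling instants used on [0, t_end)."""
    t, x, samples = 0.0, x0, 0
    while t < t_end:
        h = Gamma(x, t)     # self-triggered choice of the next interval
        x = step(x, t, h)   # plant evolution until t + h
        t += h
        samples += 1
    return samples
```

For instance, a constant interval of 0.5 s over a 5 s horizon yields 10 sampling instants, matching a periodic baseline.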

VII. CONCLUSION
This article showed how information about the past system behavior can be exploited to increase sampling intervals for nonlinear self-triggered control. We presented a general framework to encode this past behavior in a dynamic variable. The general framework allowed us to design different particular STC mechanisms and to study ISS and RAS of the resulting systems using hybrid Lyapunov function techniques. The ISS variant of the framework has the advantage that no knowledge of the disturbance signal is required. If a bound on the disturbance is known, then additional benefits can be obtained using the RAS variant of the framework. For this variant, the main assumption then needs to hold only locally, and less frequent triggering may be possible, since a set whose size depends on the disturbance bound is stabilized. Moreover, for the RAS variant, the parameters of the STC mechanism can be adapted online depending on the sublevel set in which the system state is currently located. Both variants were studied extensively in numerical examples. There are still some open points for future research. Currently, information about the entire plant state is required for the STC framework, which may be restrictive. Therefore, modifying the framework to support output feedback, e.g., by using an observer for the plant state, would be beneficial. Moreover, in many NCS setups, sensors are spatially distributed and only one sensor can transmit at a time. Extending the framework to such a setup, e.g., by considering a transmission protocol, would make it applicable to a wider range of NCS. Finally, it would be interesting to extend existing static nonlinear STC approaches from the literature, such as those from [10], [13], such that they incorporate past information of the plant state as well.
Moreover, due to the update of η at sampling instants and due to (52) and (53), the corresponding bound holds for t_{j−1} ≤ t ≤ t_j. Combining these bounds proves ISS of H_STC.
3) Proof of Theorem 2: This proof follows the same lines as the proof of Theorem 1. We thus sketch here only the differences. Similar as in the proof of Theorem 1, we obtain that t_min ≤ t_{j+1} − t_j ≤ t_max := max_{i∈{1,...,n_par}} δT_max(γ_i, L_i + ε_i/2) holds and that there is for each j ∈ N_0 an i_j ∈ {1, . . ., n_par} such that t_{j+1} − t_j ≤ δT_max(γ_{i_j}, L_{i_j} + ε_{i_j}/2). Proposition 1 thus implies for t_j ≤ t ≤ t_{j+1}, with Λ_{i_j} = L_{i_j} + ε_{i_j}/2, that V(x(t, j + 1)) ≤ U_{i_j}(ξ(t, j + 1)) ≤ e^{−ε_{i_j}(t−t_j)} V(x(s_j)) + k_max holds for k_max according to (50), where U_{i_j}(ξ) is the function according to (15) for the parameters γ_{i_j} and Λ_{i_j}.
The next step is, similar as in the proof of Theorem 2, to show by induction that (59) and (60) hold for all (t, j) ∈ dom ξ with ε̄ = min{ε_1, ε_ref}. Both inequalities trivially hold for t = j = 0. Further, suppose the inequalities hold for all s_{j′} with j′ ≤ j for some j ∈ N. Plugging (59) and (60) for (t, j) = s_{j′} into (27), we obtain for t_j ≤ t ≤ t_{j+1}, since r_1 + r_2 ≤ 1, that η(t, j + 1) = S(η(s_j), x(s_j)) satisfies the claimed bound for t_j ≤ t ≤ t_{j+1} and thus also at s_{j+1}. Thus, (59) and (60) hold by induction for all (t, j) ∈ dom ξ. The remainder of this proof is similar to the corresponding part of the proof of Theorem 1 and thus omitted.

4) Proof of Theorem 3: This proof follows the same lines as the proof of Theorem 1. We thus sketch here only the differences. Similar as in the proof of Theorem 1, we obtain that t_min ≤ t_{j+1} − t_j ≤ t_max := max_{i∈{1,...,n_par}} δT_max(γ_i, L_i + ε_i/2) holds and that there is for each j ∈ N_0 an i_j ∈ {1, . . ., n_par} such that t_{j+1} − t_j ≤ δT_max(γ_{i_j}, L_{i_j} + ε_{i_j}/2). Proposition 1 thus implies for t_j ≤ t ≤ t_{j+1}, with Λ_{i_j} = L_{i_j} + ε_{i_j}/2, that V(x(t, j + 1)) ≤ U_{i_j}(ξ(t, j + 1)) ≤ e^{−ε_{i_j}(t−t_j)} V(x(s_j)) + k_max holds for k_max according to (50), where U_{i_j}(ξ) is the function according to (15) for the parameters γ_{i_j} and Λ_{i_j}.
The next step is, similar as in the proof of Theorem 2, to show by induction that, for all (t, j) ∈ dom ξ, V(x(t, j)) ≤ e^{−ε̄t} V(x(0, 0)) + k_max holds. This trivially holds for t = j = 0. Further, suppose it holds for all s_{j′} with j′ ≤ j for some j ∈ N.

…we observe that (37) holds for ε = ε_{1,l_j}, since V(x(s_j)) ≤ c_{l_j} and due to the bound involving α_w(|w̄|). Thus, (68) holds in both cases also for s_{j+1} and hence, by induction, for all (t, j) ∈ dom ξ. RAS of R follows immediately from (68) and the lower bound t_min on the sampling intervals.

Notation and definitions: The nonnegative real numbers are denoted by R_{≥0}. The natural numbers are denoted by N, and we define N_0 := N ∪ {0}. We denote the Euclidean norm by |·| and the infinity norm by |·|_∞. A continuous function α…

e = (e_{x_p}, e_{x_c}) := ((x̄_p − x_p), (x̄_c − x_c)) and the combined state as x := (x_p, x_c). Note that e(t) ∈ R^{n_e} and x(t) ∈ R^{n_x} for n_x = n_e = n_{x_p} + n_{x_c}.

Fig. 6. Comparison of the sampling intervals for dynamic STC mechanisms from Section V and periodic sampling for a simulation with x(0) = [4, −3]^⊤.
…holds in the nominal case, i.e., such that the value of the Lyapunov function at the next sampling instant is bounded by the state of the filter. This can be achieved by choosing C(x, η) according to (29). Thus, we define the function Γ(x, η) for the dynamic STC mechanism based on a time-varying IIR filter implicitly by Algorithm 2 with C(x, η) according to (29). We obtain the following result. Theorem 2. Assume there are n_par different parameter sets ε_i, γ_i, L_i, i ∈ {1, . . ., n_par}, for which Assumption 1 holds with the same functions V and α_w. Let ε_1, r_1, r_2 > 0 and r_1 + r_2 ≤ 1. Further, consider H_STC with S(η, x) and Γ(x, η) defined according to (27) and by Algorithm 2 with C(x, η) according to (29) and some δ ∈ (0, 1). Then t_{j+1} − t_j ≥ t_min := δT_max(γ_1, L_1 + ε_1/2) and H_STC is ISS.