Sparse event‐triggered control of linear systems

In event‐triggered control, a situation often arises in which the control input must be sparse. Therefore, in this study, we propose sparse event‐triggered control, meaning that the control input is sparse and updated in an event‐triggered manner. First, we present a model‐based method for sparse event‐triggered control of linear systems, where the event condition is defined by a Lyapunov function. The resulting control input is proven to be sparse, and the control system is shown to be asymptotically stable. Second, we extend the method to a data‐driven version, where the event condition is adaptively updated from online data on the state trajectory. Finally, we discuss the possibility of extending our framework to two further cases: systems with disturbance and systems with nonlinear dynamics.


INTRODUCTION
There is growing interest in networked control, and event-triggered control is known to be a promising method of reducing the amount of computation and transmission in such applications. Accordingly, many relevant studies have been conducted to date. [1][2][3][4][5][6][7][8][9][10][11][12][13] Moreover, in recent years, data-driven methods have been developed for cases in which a plant model is unavailable. [9][10][11][12]14 On the other hand, a situation often arises in which the control input must be sparse, in the sense that its value is zero on several time intervals. A typical case involves a system featuring the so-called "sleep mode," in which the control input is zero. Furthermore, in recent years, the concept of an energy Internet has been proposed [15][16][17][18][19][20] as an analogy of TCP/IP, where an on-demand energy supply is assumed (see, e.g., the references on power packet networks [17][18][19][20]). If a control system is connected to an energy Internet, the actuator is driven in an intermittent manner. In such a case, the control input must be sparse, as shown in Figure 1. Nevertheless, the existing results for event-triggered control cannot be used in the above situations. In fact, the resulting control inputs are not always sparse in usual event-triggered control. [1][2][3][4][5][6][7][8][9][10][11][12][13][14] In contrast, an event-based control framework with a sparse input has been presented in Reference 21; however, the period during which the control input is applied (i.e., the period τ in Figure 1) cannot be specified.
In this study, we propose sparse event-triggered control, meaning that the control input is sparse and updated in an event-triggered manner. In this framework, state feedback is applied to the plant according to time intervals that start when an event occurs and end after a specified period, as illustrated in Figure 1. First, we present a model-based method for sparse event-triggered control, where the event condition is defined by a Lyapunov function. The resulting control input is proven to be sparse and the control system is confirmed to be asymptotically stable. Second, we extend it to a data-driven version, where the event condition is adaptively updated from online data on the state trajectory.

Notation: (i) Sets: Let R, R_+, R_0+, and Z_0+ denote the sets of real numbers, positive numbers, non-negative numbers, and non-negative integers, respectively. The set B_r ⊂ R^n represents the closed ball in R^n with a radius r ∈ R_+, that is, B_r := {x ∈ R^n | ||x|| ≤ r}, where ||x|| is the Euclidean norm of x. We denote by S^n ⊂ R^{n×n} the set of n × n symmetric matrices. For Ŝ ⊆ S^n, span(Ŝ) represents the space spanned by the linear combinations of the symmetric matrices in Ŝ. Moreover, Λ(A) is the set of all eigenvalues of A ∈ R^{n×n}. (ii) Scalars: For a, b ∈ R, min(a, b) and max(a, b) are the minimum and maximum elements of {a, b} ⊂ R, respectively. (iii) Vectors: The functions rvec(·) and lvec(·) are used to represent the quadratic form x^⊤Px: For P ∈ S^n with (i, j)th element p_ij (where p_ij = p_ji), rvec(P) ∈ R^{(1/2)n(n+1)} denotes the column vector [p_11 | p_12 p_22 | ··· | p_1n p_2n ··· p_nn]^⊤, and lvec(x) ∈ R^{1×(1/2)n(n+1)} is the row vector satisfying x^⊤Px = lvec(x)rvec(P). For example, lvec(x) = [x_1² 2x_1x_2 x_2²] and rvec(P) = [p_11 p_12 p_22]^⊤ for x = [x_1 x_2]^⊤ ∈ R² and P ∈ S². Furthermore, for the matrix A = [a_1 a_2 ··· a_m] ∈ R^{n×m}, vec(A) represents [a_1^⊤ a_2^⊤ ··· a_m^⊤]^⊤ ∈ R^{nm}. (iv) Matrices: The identity matrix of order n is denoted by I_n ∈ S^n. For P ∈ S^n (whose eigenvalues are real numbers), we use λ_min(P) ∈ R and λ_max(P) ∈ R to represent the minimum and maximum eigenvalues, respectively. The Kronecker product of the matrices A and B is denoted by A ⊗ B.
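As a quick illustration of the rvec and lvec maps above, the following sketch implements them in NumPy (the function names mirror the paper's notation; they are not from any library) and checks the defining identity x^⊤Px = lvec(x)rvec(P).

```python
import numpy as np

def rvec(P):
    """Stack the upper-triangular part of symmetric P block-wise:
    [p11 | p12 p22 | ... | p1n p2n ... pnn]^T."""
    n = P.shape[0]
    return np.array([P[i, j] for j in range(n) for i in range(j + 1)])

def lvec(x):
    """Row vector such that x^T P x = lvec(x) @ rvec(P)."""
    n = x.shape[0]
    out = []
    for j in range(n):
        for i in range(j + 1):
            # off-diagonal products appear twice in x^T P x
            out.append(x[i] * x[j] if i == j else 2 * x[i] * x[j])
    return np.array(out)

# Check the defining identity for the n = 2 example in the text
x = np.array([1.0, -2.0])
P = np.array([[3.0, 0.5], [0.5, 2.0]])
assert np.isclose(x @ P @ x, lvec(x) @ rvec(P))
```

For n = 2 this reproduces the example in the text: rvec(P) = [p_11 p_12 p_22]^⊤ and lvec(x) = [x_1² 2x_1x_2 x_2²].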
Fundamental facts: In this article, we use the following mathematical facts.
(II) (Comparison theorem 25 ) Consider a differentiable function f: R_0+ → R satisfying ḟ(t) ≤ Kf(t) for every t ∈ R_0+ and some constant K ∈ R. Then, f(t) ≤ f(0)e^{Kt} holds for every t ∈ R_0+.
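Fact (II) is the standard comparison (Grönwall-type) lemma; the inequalities did not survive extraction in this copy, so the statement assumed here is: ḟ(t) ≤ Kf(t) for all t implies f(t) ≤ f(0)e^{Kt}. A minimal numerical sanity check:

```python
import math

# Euler-integrate f'(t) = K*f(t) - g(t) with g(t) >= 0, so that f'(t) <= K*f(t);
# the comparison theorem then guarantees f(t) <= f(0)*exp(K*t).
K, f0, dt, T = -0.5, 2.0, 1e-4, 5.0
f, t = f0, 0.0
ok = True
while t < T:
    g = 0.3 * (1 + math.sin(t)) ** 2   # nonnegative "slack" term
    f += dt * (K * f - g)
    t += dt
    ok = ok and (f <= f0 * math.exp(K * t) + 1e-6)
print(ok)
```

The small tolerance absorbs the Euler discretization error; the bound itself holds exactly in continuous time.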

PROBLEM FORMULATION
Let us consider the control system Σ, as shown in Figure 2, which is composed of a plant P and a controller K. The plant P is given by

ẋ(t) = Ax(t) + Bu(t), (1)

where x(t) ∈ R^n is the state, u(t) ∈ R^m is the control input, and A ∈ R^{n×n} and B ∈ R^{n×m} are constant matrices, for which the pair (A, B) is stabilizable. The controller K is given by

u(t) = Fx(t) if t ∈ [t_k, t_k + τ) for some k ∈ Z_0+, and u(t) = 0 otherwise, (2)

where F ∈ R^{m×n} is a feedback gain such that A + BF is Hurwitz, t_k ∈ R_0+ (k = 0, 1, …) is the sequence of time instants such that t_0 = 0 and t_0 < t_1 < ···, and τ ∈ R_+ is a constant number. This controller intermittently applies the feedback control u(t) = Fx(t) to P. The time instants t_k are called the start time instants and τ is called the control period. The resulting control input of this controller is illustrated in Figure 1. In (2), the start time instants t_k (k = 0, 1, …) are determined in an event-triggered manner. For a function s: R_0+ × R^n × R_0+ × R^n → R, the start time instant t_{k+1} ∈ [t_k + τ, ∞) is defined as the minimum time instant when

s(t, x(t), t_k, x(t_k)) ≥ 0. (3)

The function s is called the start function. Then, our sparse event-triggered control problem is formulated as follows.
Problem 1. Consider the control system Σ. Suppose that a control period τ ∈ R_+ is given. Then, find a start function s such that (i) the control system Σ is globally asymptotically stable, and (ii) there exists a δ_min ∈ R_+ satisfying

t_{k+1} − t_k > τ + δ_min (4)

for every k ∈ Z_0+.
Four remarks are given for Problem 1. First, a similar problem has been addressed in Reference 22, where the controller is a piecewise constant version of (2), that is, u(t) = Fx(t_k) for t ∈ [t_k, t_k + τ) and u(t) = 0 otherwise. However, it has been clarified for the piecewise constant controller that the control period is limited to a certain range determined by the dynamics of the plant. To overcome this limitation, we employ the controller in (2), which generates piecewise continuous signals. Second, (ii) is concerned with vanishing sparsity. In our setting, the length of the kth time slot of zero input is expressed as d_k := t_{k+1} − t_k − τ. It might be possible that d_k → 0 as k → ∞, which implies that the sparsity vanishes. To avoid this phenomenon, (ii) is imposed on the problem.
Third, (ii) is similar to the requirement that no Zeno behavior exists in event-triggered control, that is, (ii′) there exists a δ_min ∈ R_+ satisfying t_{k+1} − t_k > δ_min for every k ∈ Z_0+. However, (ii) is a strictly stronger condition than (ii′) since the control period τ is a given positive constant. Fourth, the control period τ is a given parameter, which corresponds to the period of time during which the actuator can operate continuously. For example, if the actuator gains energy via a power packet network, [17][18][19][20] which is a kind of time division multiplexing of power supply as described in Section 1, the control period is set to the time length of a power packet.
Next, let us formulate the problem for the case where the mathematical model of the plant is not available.
For t_1, t_2 ∈ R_0+ with t_1 ≤ t_2 and an initial state x(0) ∈ R^n, let X([t_1, t_2], x(0)) denote the data on the segment of the state trajectory x(t) on the time interval [t_1, t_2]. Then, the data-driven version of Problem 1 is given as follows. Problem 2. Consider the control system Σ with unknown A and B. Suppose that a control period τ ∈ R_+ and a stabilizing feedback gain F ∈ R^{m×n} are given, and assume that the state trajectory data X([0, t], x(0)) are available at each t in Σ. Then, find a start function s satisfying (i) and (ii) in Problem 1.
In the above setting, it is assumed that a stabilizing feedback gain F is available although the matrices A and B are unknown. This assumption is reasonable in some cases, for example, when F is a PID controller designed by so-called PID tuning without a mathematical model, or when A and B have been altered by aging-related deterioration but F is still stabilizing.

Solution to Problem 1
In intermittent control, it is reasonable to apply a nonzero control input when the plant is in a "bad" state. This motivates us to design the start function s so as to quantify the badness.
To quantify the badness of the plant, we employ the quadratic function

V(x) := x^⊤P*x (5)

for the unique solution P* to the Lyapunov equation

(A + BF)^⊤P* + P*(A + BF) + Q = 0, (6)

where Q ∈ S^n is a given positive-definite matrix. Moreover, we exploit V(x(t_k))e^{−γ(t−t_k)} (which is equal to the function V(x(0))e^{−γt} at the start time instants, as shown in the proof below) as a threshold of the badness: the plant is regarded as bad when V(x(t)) reaches this threshold. Motivated by this fact, we propose the following start function s:

s(t, x, t_k, x_k) := V(x) − V(x_k)e^{−γ(t−t_k)}. (7)

For this start function, the following result is obtained.
Theorem 1. Consider Problem 1. Suppose that a positive-definite matrix Q ∈ S^n and a constant γ ∈ R_+ satisfying

γ < γ* := λ_min(Q)/λ_max(P*) (8)

are given, where P* is the unique solution to (6). Then, (7) is a solution to Problem 1.
Proof. First, we prove (i). For each k ∈ Z_0+, we consider Σ on the time interval [t_k, t_k + τ), during which the dynamics of Σ is given by

ẋ(t) = (A + BF)x(t). (9)

From V̇(x(t)) = −x^⊤(t)Qx(t) (which is given by (5), (6), and (9)) and Fact (I) in the end of Section 1, we have

V̇(x(t)) ≤ −λ_min(Q)||x(t)||² ≤ −(λ_min(Q)/λ_max(P*))V(x(t)). (10)

This implies

V̇(x(t)) ≤ −γ*V(x(t)) (11)

on [t_k, t_k + τ). By applying Fact (II) to (11), it follows that

V(x(t)) ≤ V(x(t_k))e^{−γ*(t−t_k)} (12)

on [t_k, t_k + τ). Furthermore, this inequality and the continuity of x(t) imply that

V(x(t)) < V(x(t_k))e^{−γ(t−t_k)} (13)

holds on (t_k, t_k + τ] for each γ ∈ (0, γ*). In particular, t_{k+1} is the first time instant in [t_k + τ, ∞) at which (13) is violated, that is, at which (3) holds, which gives

V(x(t)) ≤ V(x(t_k))e^{−γ(t−t_k)} on [t_k, t_{k+1}]. (14)

Moreover, from the definition of t_{k+1}, it follows that

V(x(t_{k+1})) = V(x(t_k))e^{−γ(t_{k+1}−t_k)}, (15)

which gives

V(x(t_k)) = V(x(t_0))e^{−γ(t_k−t_0)} (16)

by considering (15) as a difference equation with respect to the variable t_k. Therefore, from (16) and the fact that (14) holds on [t_k, t_{k+1}], we obtain

V(x(t)) ≤ V(x(0))e^{−γt} (17)

for every t ∈ [t_0, ∞). In (17), 0 ≤ V(x(t)), lim_{t→∞} V(x(0))e^{−γt} = 0, and V(x(0))e^{−γt} < ∞ hold for every t ∈ R_0+. Therefore, (17) implies lim_{t→∞} V(x(t)) = 0, that is, lim_{t→∞} x(t) = 0. This proves (i).

Next, we prove (ii) for

δ_min := ((γ* − γ)τ)/(2(γ + μ)), (18)

where μ ∈ R_0+ is given by

μ := max(0, λ_max(A^⊤P* + P*A)/λ_min(P*)). (19)

Note here that P* > 0. Let us consider the time interval [t_k + τ, t_{k+1}). Then, the dynamics of Σ is written by

ẋ(t) = Ax(t) (20)

on [t_k + τ, t_{k+1}). Hence, the time derivative of V(x(t)) along the state trajectory x(t) is given by

V̇(x(t)) = x^⊤(t)(A^⊤P* + P*A)x(t) (21)

from (5) and (20). In addition, we have

V̇(x(t)) ≤ λ_max(A^⊤P* + P*A)||x(t)||² ≤ (λ_max(A^⊤P* + P*A)/λ_min(P*))V(x(t)) (22)

from (21), (5), and Fact (I). Thus, it follows that

V̇(x(t)) ≤ μV(x(t)) (23)

on the time interval [t_k + τ, t_{k+1}]. Moreover, applying Fact (II) to (23) provides

V(x(t)) ≤ V(x(t_k + τ))e^{μ(t−t_k−τ)}. (24)

On the other hand, since (12) holds for t = t_k + τ because of the continuity of V(x(t)), we have

V(x(t_k + τ)) ≤ V(x(t_k))e^{−γ*τ}. (25)

This fact and (24) imply

V(x(t)) ≤ V(x(t_k))e^{−γ*τ + μ(t−t_k−τ)}. (26)

Then, from (26) and (7), the start function s is bounded on [t_k + τ, t_{k+1}] as follows:

s(t, x(t), t_k, x(t_k)) ≤ V(x(t_k))(e^{−γ*τ + μ(t−t_k−τ)} − e^{−γ(t−t_k)}). (27)

In particular, under t ≤ t_k + τ + δ_min, the right-hand side of (27) is negative, and thus s(t, x(t), t_k, x(t_k)) < 0. This fact indicates (4) because of the definition of t_{k+1}. This completes the proof. ▪

One may consider that (ii) can be obtained in the same manner as the proof of the nonexistence of Zeno behaviors in typical event-triggered control, for example, References 7 and 8; however, the actual situation is different.
This is because the resulting dynamics is distinct from that of the typical framework. In fact, the closed-loop system of the standard event-triggered control is given by

ẋ(t) = Ax(t) + BFx(t_k), t ∈ [t_k, t_{k+1}),

while the closed-loop system in this article has a switching dynamics as in (9) and (20), that is, ẋ(t) = (A + BF)x(t) on [t_k, t_k + τ) and ẋ(t) = Ax(t) on [t_k + τ, t_{k+1}).

Example 1. Consider the system Σ given by (1) together with the start function (7). Figure 3 shows the simulation results. It is observed that the control input is sparse and the state approaches zero over time. In the bottom figure, we see that V(x(t)) decreases when the control input is nonzero but does not always increase when the control input is zero.
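Since the numerical values of Example 1 did not survive extraction, the following is a self-contained sketch of Theorem 1's scheme on a hypothetical unstable second-order plant (A, B, F, Q, τ, and the initial state below are illustrative choices, not the paper's): solve the Lyapunov equation (6) for P*, pick γ in (0, γ*) with γ* = λ_min(Q)/λ_max(P*), and start the next feedback interval when V(x(t)) reaches the threshold V(x(t_k))e^{−γ(t−t_k)}.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical plant and gains (the values of Example 1 are unreadable here)
A = np.array([[0.0, 1.0], [2.0, -1.0]])    # open loop unstable
B = np.array([[0.0], [1.0]])
F = np.array([[-6.0, -2.0]])               # A + BF is Hurwitz
Q = np.eye(2)
tau = 0.5                                  # control period

# Lyapunov equation (6): (A + BF)^T P + P (A + BF) = -Q
P = solve_continuous_lyapunov((A + B @ F).T, -Q)
gam_star = min(np.linalg.eigvalsh(Q)) / max(np.linalg.eigvalsh(P))
gam = 0.5 * gam_star                       # any gamma in (0, gamma*)
V = lambda z: float(z @ P @ z)

dt = 1e-3
x = np.array([2.0, -1.0])
t, tk, Vk = 0.0, 0.0, V(np.array([2.0, -1.0]))
starts, on_time = [0.0], 0.0
while t < 20.0:
    on = (t - tk) < tau                    # feedback active on [tk, tk + tau)
    u = F @ x if on else np.zeros(1)
    on_time += dt if on else 0.0
    x = x + dt * (A @ x + B @ u)           # Euler step of the switching dynamics
    t += dt
    # start function (7): next start when V(x(t)) >= V(x(tk)) e^{-gam (t - tk)}
    if (not on) and V(x) >= Vk * np.exp(-gam * (t - tk)):
        tk, Vk = t, V(x)
        starts.append(t)

print(len(starts), round(np.linalg.norm(x), 4))
```

With these illustrative values the input is zero on a strictly positive slice of every inter-start interval (so the input is sparse), and V(x(t)) decays along the envelope V(x(0))e^{−γt}, mirroring the behavior described for Figure 3.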

Application: Rotor angle regulation of DC motor via a power packet network
In this section, we apply our sparse event-triggered control to rotor angle regulation of a DC motor to which energy is supplied by a power packet network. [17][18][19][20]

F I G U R E 3 Simulation result for model-based sparse event-triggered control in Example 1

F I G U R E 4 The circuit of a DC motor driven by a power packet
Consider the electrical circuit illustrated in Figure 4. The dynamics of this system is given by (1), where θ(t), i(t), and v(t) are the rotor angle, armature current, and input voltage, respectively. 26 On the other hand, R, L, J, K, K_b, and K_e are positive constants that describe the characteristics of the motor.
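The matrices of (1) for this circuit did not survive extraction; as an assumption, the textbook third-order DC-motor model built from these constants takes the form

```latex
x = \begin{bmatrix} \theta \\ \dot{\theta} \\ i \end{bmatrix}, \qquad
A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & -K_b/J & K/J \\ 0 & -K_e/L & -R/L \end{bmatrix}, \qquad
B = \begin{bmatrix} 0 \\ 0 \\ 1/L \end{bmatrix}, \qquad u = v,
```

where K_b is read here as a viscous-friction coefficient, K as the torque constant, and K_e as the back-EMF constant; the actual placement of the constants in the source model may differ.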

F I G U R E 5
Simulation result for the model-based event-triggered control in Section 3.2

Since the power to drive the motor is supplied by a power packet, the control input u(t) has to be sparse. Thus, we regulate the state x(t) of this system to the origin 0 by using the sparse event-triggered control in Theorem 1. For this system, we construct a sparse event-triggered controller by (2) and (7). The simulation result is shown in Figure 5. This figure shows that the rotor angle θ(t) decreases with each intermittent control action and converges to 0 with time. In this way, the proposed method is useful for control under an intermittent power supply.

DATA-DRIVEN SPARSE EVENT-TRIGGERED CONTROL
Now, we address Problem 2, that is, the data-driven version of Problem 1.

Data-driven solution to Lyapunov equations
In solving Problem 2, a data-driven method for constructing a Lyapunov function plays an important role. Thus, we first present such a method based on data on state trajectories.
Consider the linear system

ẋ(t) = Ãx(t), (29)

where x(t) ∈ R^n is the state and Ã ∈ R^{n×n} is a Hurwitz matrix that is assumed to be unknown. We then consider the following problem, which is useful for constructing a quadratic Lyapunov function V: R^n → R_0+ in a data-driven manner. Problem 3. Consider the system in (29). Suppose that a positive-definite matrix Q ∈ S^n and m state trajectory data X([τ_i1, τ_i2], x_i1) (i = 1, 2, …, m) are given. Then, find a solution P ∈ S^n to the Lyapunov equation

Ã^⊤P + PÃ + Q = 0. (30)

Note that (30) has a unique positive-definite solution P because Ã is Hurwitz and Q is positive-definite. Now, we derive a solution to Problem 3. If (30) holds, we obtain

x^⊤(τ_i2)Px(τ_i2) − x^⊤(τ_i1)Px(τ_i1) = −∫_{τ_i1}^{τ_i2} x^⊤(t)Qx(t)dt (31)

for each i ∈ {1, 2, …, m}. In Problem 3, it is assumed that the positive-definite matrix Q and the state trajectory data (and hence the integrals ∫_{τ_i1}^{τ_i2} x^⊤(t)Qx(t)dt) are exactly known. Thus, we can consider (31) as a linear equation with an unknown matrix P ∈ S^n, and the solution to (31) is obtained by solving this linear equation.
This idea is formalized as follows. By using the functions rvec and lvec defined in the end of Section 1, (31) is transformed into

Δrvec(P) = Γ, (32)

where

Δ := [lvec(x(τ_12)) − lvec(x(τ_11)); lvec(x(τ_22)) − lvec(x(τ_21)); … ; lvec(x(τ_m2)) − lvec(x(τ_m1))] ∈ R^{m×l}, (33)

Γ ∈ R^m is the vector whose ith element is −∫_{τ_i1}^{τ_i2} x^⊤(t)Qx(t)dt, and l := (1/2)n(n+1). Note that the equation in (31) is equivalent to the equation in (32). Moreover, (32) is a linear equation with l unknown parameters. Thus, the following result is obtained.
Theorem 2. Consider Problem 3. If

rank(Δ) = l, (34)

then the following two statements hold: 1) There exists a unique symmetric solution P ∈ S^n to the equation in (31). 2) The solution to the Lyapunov equation in (30) is equal to the solution P ∈ S^n to the equation in (31).
Proof. The following three facts prove Theorem 2.
(a) If Ã is Hurwitz, there exists a unique positive-definite solution P to (30). (b) If (34) holds, (31) has a unique symmetric solution P* given by

rvec(P*) = (Δ^⊤Δ)^{−1}Δ^⊤Γ, (35)

which is positive-definite. (c) P* in (35) is a solution to (30).
Fact (a) is a well-known result (see, e.g., Reference 27). Next, we prove (b). The linear equation in (31) is equivalently transformed into the linear equation in (32). Moreover, (32) has a unique solution subject to (34), and this solution is given by (35). These facts indicate that (31) has a unique symmetric solution P under (34). Furthermore, (31) holds for the solution of (30) because we have

(d/dt)(x^⊤(t)Px(t)) = −x^⊤(t)Qx(t) (36)

along every trajectory of (29), for each initial time τ_0 ∈ R_0+, each t ∈ [τ_0, ∞), and each initial state x_0 ∈ R^n. In fact, integrating (36) from τ_i1 to τ_i2 yields (31). By the uniqueness shown above, the solution of (30) therefore coincides with P*, which in particular implies that P* is positive-definite whenever Q is positive-definite. These facts prove (b). Finally, we prove (c). Since P* coincides with the solution of (30), substituting P* for P in the left-hand side of (30) yields zero. This completes the proof. ▪ Theorem 2 shows that the solution to Problem 3 (i.e., to the Lyapunov equation in (30)) is obtained by solving the linear equation in (31) if we have sufficient data in the sense of (34). In other words, a quadratic Lyapunov function V(x) = x^⊤Px for the system in (29) can be derived from the state trajectory data.
Next, we show that (34) is not restrictive, in the sense that it holds for almost all (x_11, x_21, …, x_m1) under reasonable conditions. In the following, we use Δ(X_1, T_1, T_2) ∈ R^{m×l} to denote Δ ∈ R^{m×l} in (33), where X_1 := [x_11 x_21 ··· x_m1] ∈ R^{n×m} and T_j := [τ_1j τ_2j ··· τ_mj]^⊤ ∈ R^m (j = 1, 2). Moreover, we introduce the set

Ŵ(T_1, T_2) := {X_1 ∈ R^{n×m} | rank(Δ(X_1, T_1, T_2)) < l} (37)

for each T_1, T_2 ∈ R^m. This represents the set of the start points of state trajectories that do not satisfy (34).
Proof. Lemma 1 is proved by the following three facts. (a) Ŵ(T_1, T_2) = {X ∈ R^{n×m} | p(X) = 0}, where

p(X) := det(G(X)(G(X))^⊤), G(X) := [(lvec(x_1))^⊤ (lvec(x_2))^⊤ ··· (lvec(x_m))^⊤] ∈ R^{l×m}

for X = [x_1 x_2 ··· x_m] ∈ R^{n×m}. (b) If m ≥ l holds, then 1) p is a polynomial function with nm variables composed of the elements of X and 2) p(X) ≠ 0 holds for some X ∈ R^{n×m}. (c) If 1) and 2) in (b) hold, then Ŵ in (37) is zero-measure in R^{n×m}.
Fact (c) is straightforwardly derived from the fact that {x ∈ R^n | q(x) = 0} is zero-measure in R^n if q: R^n → R is a polynomial function satisfying q(x) ≠ 0 for some x ∈ R^n. 28 In the following, we prove (a) and (b).
First, we show (a). It holds if rank(Δ(X, T_1, T_2)) = rank(G(X)), because rank(G(X)) < l ⇔ G(X)(G(X))^⊤ ∈ R^{l×l} is a singular matrix ⇔ p(X) = 0. Therefore, we prove rank(Δ(X, T_1, T_2)) = rank(G(X)). This fact is obtained by

rank(Δ(X, T_1, T_2)) = rank((Δ(X, T_1, T_2))^⊤) = rank(CG(X)) = rank(G(X)),

where C ∈ R^{l×l} is a nonsingular matrix; the first and third equalities are trivial. To show the second equality, we prove that there exists a nonsingular C ∈ R^{l×l} satisfying

(lvec(x(τ_i2)) − lvec(x(τ_i1)))^⊤ = C(lvec(x_i1))^⊤

for all i ∈ {1, 2, …, m}.
From the definitions of lvec, vec, and rvec, we have

(lvec(x))^⊤ = C_1 vec(xx^⊤), (38)
vec(xx^⊤) = C_2(lvec(x))^⊤, (39)

where C_1 ∈ R^{l×n²} is a full row rank matrix and C_2 ∈ R^{n²×l} is a full column rank matrix. On the other hand, from τ_i2 − τ_i1 = τ̄ (i = 1, 2, …, m) and Fact (III) in the end of Section 1,

vec(x(τ_i2)(x(τ_i2))^⊤) = (e^{Ãτ̄} ⊗ e^{Ãτ̄})vec(x(τ_i1)(x(τ_i1))^⊤) (40)

holds. Thus, (38), (39), and (40) imply

(lvec(x(τ_i2)) − lvec(x(τ_i1)))^⊤ = C(lvec(x(τ_i1)))^⊤,

where we use C := C_1(e^{Ãτ̄} ⊗ e^{Ãτ̄} − I_{n²})C_2 ∈ R^{l×l}. Moreover, C is nonsingular because 1) C_1 is full row rank, 2) (e^{Ãτ̄} ⊗ e^{Ãτ̄} − I_{n²}) ∈ R^{n²×n²} is nonsingular (which is derived from Fact (IV) and the fact that e^{Ãτ̄} has no eigenvalue equal to 1 if Ã is Hurwitz), and 3) C_2 is full column rank. These facts prove (a). Next, we give the proof of (b). It is clear that p is a polynomial function from the definitions of p and vec. Therefore, we prove that there exists X ∈ R^{n×m} satisfying p(X) ≠ 0.
We consider m ≥ l and X = [x_1 x_2 ··· x_m] ∈ R^{n×m} whose first l columns are given by the vectors e_i (i = 1, 2, …, n) and e_i + e_j (1 ≤ i < j ≤ n), where e_i ∈ R^n (i = 1, 2, …, n) are the vectors satisfying [e_1 e_2 ··· e_n] = I_n. Then, we have

span({x_1(x_1)^⊤, x_2(x_2)^⊤, …, x_m(x_m)^⊤}) = S^n. (41)

This is because 1) each (e_i(e_j)^⊤ + e_j(e_i)^⊤) ∈ S^n (i, j = 1, 2, …, n) satisfies e_i(e_j)^⊤ + e_j(e_i)^⊤ = (e_i + e_j)(e_i + e_j)^⊤ − e_i(e_i)^⊤ − e_j(e_j)^⊤ and 2) the set {e_i(e_j)^⊤ + e_j(e_i)^⊤ | i, j = 1, 2, …, n} spans S^n. Moreover, from the definitions of vec and rvec and (41), we have rank(G(X)) = l. Furthermore, rank(G(X)) = l is equivalent to p(X) ≠ 0 as described in the proof of (a). Hence, there exists X ∈ R^{n×m} satisfying p(X) ≠ 0. This completes the proof of (b). ▪

Example 2. Consider the system in (29) with n = 2. We address Problem 3 for a given positive-definite matrix Q ∈ S² and the data on the three state trajectories in Figure 6. In this case, m = l = 3 and we obtain the linear equation (43), which corresponds to (32). Since rank(Δ) = 3, (43) has a unique symmetric solution, and thus the solution P to (31) is obtained. Therefore, we obtain a Lyapunov function of the linear system in (29) for (42) as V(x) = x^⊤Px.
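The matrices of Example 2 are unreadable in this copy, so the following sketch repeats the procedure on a hypothetical Hurwitz Ã: it assembles Δ and Γ of (32)–(33) purely from sampled trajectory data and compares the recovered P with the model-based Lyapunov solution.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

At = np.array([[-1.0, 2.0], [0.0, -3.0]])   # hypothetical Hurwitz matrix ("unknown" to the solver)
Q = np.eye(2)

def lvec(x):                                 # row vector with x^T P x = lvec(x) @ rvec(P)
    return np.array([x[0]**2, 2.0*x[0]*x[1], x[1]**2])

# m = l = 3 trajectories on [0, 1], densely sampled to approximate the integrals in (31)
x_starts = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
ts = np.linspace(0.0, 1.0, 1001)
step = ts[1] - ts[0]
rows, rhs = [], []
for x0 in x_starts:
    xs = np.array([expm(At * t) @ x0 for t in ts])      # trajectory samples
    vals = np.sum((xs @ Q) * xs, axis=1)                # x(t)^T Q x(t)
    integral = step * (vals[0]/2 + vals[1:-1].sum() + vals[-1]/2)  # trapezoidal rule
    rows.append(lvec(xs[-1]) - lvec(xs[0]))             # a row of Delta in (33)
    rhs.append(-integral)                               # the matching entry of Gamma
Delta, Gamma = np.array(rows), np.array(rhs)

assert np.linalg.matrix_rank(Delta) == 3                # rank condition (34)
p11, p12, p22 = np.linalg.solve(Delta, Gamma)           # solve (32) for rvec(P)
P_data = np.array([[p11, p12], [p12, p22]])
P_true = solve_continuous_lyapunov(At.T, -Q)            # model-based solution of (30)
print(np.allclose(P_data, P_true, atol=1e-3))
```

The only model knowledge used on the data side is Q and the sampled trajectories; Ã enters solely through the simulated data, which is the point of Theorem 2.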

Solution to Problem 2
In this section, we provide a solution to Problem 2. As explained above, if A and B are unknown, we cannot directly obtain a Lyapunov function for the system given in (9); consequently, a start function s cannot be constructed for sparse event-triggered control. We therefore consider adaptive construction of the start function s from the online state trajectory data, which is achieved by employing the method derived in the previous section.

The proposed method is based on the sparse event-triggered strategy in (2) and (3), and on an adaptive update rule for the start function s from the online state trajectory data. The update rule is composed of the following three steps for each k ∈ Z_0+: (i) Data collection: In the time interval [t_k, t_k + τ], when the state feedback is applied to the plant, we collect the state trajectory data. (ii) Estimation of P: Using the method in Section 4.1, we estimate the solution P to the Lyapunov equation in (6) from the data collected until the time t_k + τ. The resulting estimate is denoted by P_k. (iii) Update of s: The start function s, which is used in the time interval [t_k + τ, t_{k+1}], is updated according to the estimate P_k by (7).

This idea is formulated as follows. Consider the control system Σ with a start-time sequence t_k (k = 0, 1, …). Let Δ_k and Γ_k denote the counterparts of Δ and Γ in (32) constructed from the trajectory data collected in the feedback intervals [t_0, t_0 + τ], …, [t_k, t_k + τ]. By noting that Δ_k and Γ_k are composed of the data collected until the time t_k + τ, as stated in (ii), we obtain the following linear equation:

Δ_k rvec(P) = Γ_k, (46)

which corresponds to (32). Moreover, we use the conditional start function

s(t, x, t_k, x_k) := V_k(x) − V_k(x_k)e^{−γ_k(t−t_k)} if rank(Δ_k) = l, and s(t, x, t_k, x_k) := t − (t_k + τ + h) otherwise, (47)

where V_k(x) := x^⊤P_kx, γ_k := αλ_min(Q)/λ_max(P_k), α ∈ (0, 1) and h ∈ R_+ are arbitrary given numbers, and P_k is a solution to (46). Then, the solution to Problem 2 is obtained as follows.
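The conditional update rule can be sketched as follows (a sketch under assumptions: the fallback branch of (47) is taken to schedule the next start exactly h seconds after the feedback interval ends, and the decay rate is taken as γ_k = αλ_min(Q)/λ_max(P_k); both details are garbled in this copy):

```python
import numpy as np

def start_function(t, x, tk, xk, Pk, rank_Dk, l, Q, alpha, tau, h):
    """Conditional start function: once the data matrix has full rank l
    (condition (48), so that Pk equals the true Lyapunov solution),
    use the Lyapunov-threshold rule (7); before that, force a fixed
    zero-input period of length h."""
    if rank_Dk == l:
        gam_k = alpha * min(np.linalg.eigvalsh(Q)) / max(np.linalg.eigvalsh(Pk))
        V = lambda z: float(z @ Pk @ z)
        return V(x) - V(xk) * np.exp(-gam_k * (t - tk))
    return t - (tk + tau + h)   # crosses zero exactly h after the feedback interval

# The next start instant t_{k+1} is the first t >= tk + tau with start_function(...) >= 0.
```

The rank gate is what makes the scheme safe before enough data have been collected: the fallback branch guarantees a zero-input slot of length h per cycle regardless of how poor the current estimate P_k is.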

Theorem 3. Consider Problem 2.
Suppose that a positive-definite matrix Q ∈ S^n, α ∈ (0, 1), and h ∈ R_+ are given. If there exists a k̄ ∈ Z_0+ satisfying

rank(Δ_k̄) = l, (48)

then (47) is a solution to Problem 2.
Proof. See Section 4.3. ▪ We make two remarks regarding Theorem 3.
First, the condition expressed in (48) tends to be satisfied as k grows (i.e., with time), because the matrix Δ_k expands as new data are acquired and rank(Δ_k) is nondecreasing with respect to k.
Second, the start function s in (47) is a conditional version of (7). The former case is the same as in (7), while the latter case is newly introduced for the following reason. In our data-driven method, the estimate P_k differs from P* before (48) is satisfied. As a result, the existence of δ_min, which is specified in Problem 1(ii) as the lower bound on the periods of zero input, cannot be guaranteed. The latter case of the start function plays the role of ensuring a period of zero input of a certain length (namely, h) until (48) is satisfied, that is, until P_k = P* holds. This guarantees the existence of δ_min, as proven in Section 4.3.2.
Example 3. Consider the control system Σ in Example 1. For this system, we select h = 0.1 and α = 0.6.
Then, the simulation result for x(0) := [−6 −4]^⊤ is illustrated in Figure 7. It is clear that the resulting control input is sparse and the state approaches zero. Moreover, the solution to the Lyapunov equation given in (6) is obtained in this process. The time evolution of V_k(x(t)) and V_k(x(t_k))e^{−γ_k(t−t_k)} is shown in Figure 8, where we observe the following: (a) in an early stage of the control, V_k(x(t)) and V_k(x(t_k))e^{−γ_k(t−t_k)} are discontinuous because they are updated with insufficient data, and (b) the behavior of the system becomes similar to that of the model-based case in Section 3 in the latter half of the simulation.

F I G U R E 9 Simulation result for data-driven sparse event-triggered control in Example 4

Proof of Theorem 3
We prove that (i) and (ii) in Problem 2 hold for the resulting control system with (47).

Proof that (i) holds
The following three facts prove that (i) holds for (47). (a) rank(Δ_k) = l holds for every k ≥ k̄, where k̄ is the integer in (48). (b) P_k = P* holds for every k ≥ k̄. (c) Under (b), the control system Σ is globally asymptotically stable.

Fact (a) is proved by (48) and the fact that the rank of Δ_k is nondecreasing with respect to k; in fact, rank(Δ_k) ≤ rank(Δ_{k+1}) ≤ l holds for all k ∈ Z_0+. Moreover, (b) is straightforwardly derived from Theorem 2. Next, we prove (c). Without loss of generality, we assume that k̄ = 0, that is, P_0, P_1, … are all equal to P*. Since P* is the solution to the Lyapunov equation in (6) and γ_k = αλ_min(Q)/λ_max(P*) is a positive number satisfying (8), we have P_k = P* and γ_k = αγ* (k = 0, 1, …). These relations imply that the start function s(t, x(t), t_k, x(t_k)) in (47) is equivalent to that in (7). Thus, (c) is directly derived from Theorem 1(i).

4.3.2
Proof that (ii) holds

First, we consider k < K, where K is the minimum k satisfying (48). Then, it follows from the latter case of (47) that t_{k+1} − t_k = τ + h. Next, when k ≥ K, the start function is given by the former case of (47), that is, by (7), because rank(Δ_k) is nondecreasing. This implies that there exists a δ'_min ∈ R_+ such that t_{k+1} − t_k − τ > δ'_min, which is derived in the same manner as in the proof of Theorem 1(ii). Therefore, (4) holds for δ_min = min(h/2, δ'_min).

EXTENSIONS
In this section, we extend the sparse event-triggered control of Section 3 to two cases: systems with disturbance and systems with nonlinear dynamics.

Case with disturbance
Consider the control system Σ in Figure 2. The plant P is given by

ẋ(t) = Ax(t) + Bu(t) + Dw(t), (49)

where D ∈ R^{n×r} is a constant matrix and w(t) ∈ R^r is a disturbance. We assume that w(t) is unknown but bounded by a known constant d ∈ R_+, that is, ||w(t)|| < d holds for every t ∈ R_0+. The controller K is given by (2). In this case, we have the following result for the corresponding sparse event-triggered control problem.
Proof. See Appendix A. ▪ Theorem 4 indicates that x(t) is steered into a neighborhood of the origin by the sparse control input.
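The neighborhood can be seen from the following standard bound (a sketch of the mechanism, not the statement proved in Appendix A): along the disturbed plant dynamics with the feedback u = Fx active, the quadratic function V(x) = x^⊤P*x of Section 3 satisfies

```latex
\dot{V}(x) = x^{\top}\big((A+BF)^{\top}P^{*} + P^{*}(A+BF)\big)x + 2x^{\top}P^{*}Dw
\le -\lambda_{\min}(Q)\|x\|^{2} + 2d\,\|P^{*}D\|\,\|x\|,
```

so V̇ < 0 only where ||x|| > 2d||P*D||/λ_min(Q); the state therefore cannot be driven to the origin itself, only into a ball whose radius scales with the disturbance bound d.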

Case with nonlinear dynamics
Let us consider the control system Σ in Figure 2, where the plant P and controller K are given by ẋ(t) = f(x(t), u(t)) and by the intermittent feedback u(t) = g(x(t)) for t ∈ [t_k, t_k + τ) (with u(t) = 0 otherwise), respectively. We assume that (a) the functions f_0 and f_cl, defined by f_0(x) := f(x, 0) and f_cl(x) := f(x, g(x)) for all x ∈ R^n, are globally Lipschitz continuous, and (b) the origin of the system

ẋ(t) = f_cl(x(t)) (53)

is locally exponentially stable. Note that (a) guarantees the existence and uniqueness of the state trajectory of Σ. Then, there exist positive numbers c_i ∈ R_+ (i = 1, 2, 3, 4) and r ∈ R_+ and a continuous function V: B_r → R_0+ satisfying

c_1||x||² ≤ V(x) ≤ c_2||x||², (54)
(∂V/∂x)(x)f_cl(x) ≤ −c_3||x||², ||(∂V/∂x)(x)|| ≤ c_4||x|| (55)

for all x ∈ B_r. 27 For these r and V, let Ω ⊆ R^n be given by

Ω := {x ∈ B_r | V(x) ≤ c_1r²}, (56)

which is a level set of the function V. Note that Ω is an invariant set of the system in (53); however, it is not clear that the same statement holds for the system Σ.
Then, the following theorem is obtained.
Proof. First, 1) is proved by the following two facts. (a) Ω is an invariant set of the system Σ. (b) Under (a), x(t) converges to 0 exponentially for every x(0) ∈ Ω.
First, in order to prove (a), we show that x(t_k) ∈ Ω implies x(t) ∈ Ω on [t_k, t_{k+1}] for each k ∈ Z_0+, which is sufficient for (a). We consider Σ on [t_k, t_{k+1}]. The dynamics of Σ on [t_k, t_k + τ) is given by (53). Since Ω is an invariant set of (53) contained in B_r, x(t) ∈ B_r holds on [t_k, t_k + τ] if x(t_k) ∈ Ω. In addition, we have (11) for the γ* defined by (57), which is straightforwardly derived from (54) and (55). From this, we obtain (14) on [t_k, t_{k+1}] in the same manner as in the proof of Theorem 1. Therefore, x(t) ∈ Ω holds on [t_k, t_{k+1}] under x(t_k) ∈ Ω because V(x(t)) ≤ V(x(t_k)) ≤ c_1r². This proves (a).
Next, let us prove (b). From (a), we obtain (14) on [t_k, t_{k+1}] for all k ∈ Z_0+. Thus, (17) holds for every t ∈ R_0+. This implies that x(t) converges to 0 exponentially for every x(0) ∈ Ω. This completes the proof of (b).

CONCLUSION
In this article, we established a framework of sparse event-triggered control. First, we presented a model-based method for sparse event-triggered control, where the event condition is defined by a Lyapunov function. The resulting control input was proven to be sparse, and the control system was shown to be asymptotically stable. Second, we extended the method to a data-driven version, where the event condition is adaptively updated from online data on the state trajectory. Finally, we discussed the possibility of extending our framework to the cases of disturbance and nonlinear dynamics. Our proposed data-driven method was derived in the absence of noise. In future work, this approach should be extended to the case where the data are exposed to noise.
First, let us consider the case (b-1). From (A1), we obtain the corresponding bound on [t_k, t_k + τ) for each k. This inequality with k = 0 implies (A11) for every t ∈ [t_0, t_0 + τ). Moreover, from the definition of t_1 and the continuity of V(x(t)), we have (A11) on [t_0, t_1]. Meanwhile, if (A11) holds at the time instant t_k for some k ∈ Z_0+, then (A11) also holds on [t_k, t_{k+1}], which is derived by a discussion similar to the case k = 0. These facts imply that (A11) holds on the time interval [t_k, t_{k+1}] for all k ∈ Z_0+, that is, for every t ∈ R_0+. Next, we assume (b-2). Let p ∈ Z_0+ be the maximum k ∈ Z_0+ satisfying V(x(0))e^{−γt_k} ≥ λ_max(P*)c²d². Then, we have (17) on the time interval [0, t_p] and (A2) on [t_p, t_{p+1}] in a manner similar to the proof of Theorem 1(ii). Furthermore, we obtain (A11) on [t_{p+1}, ∞) from a discussion similar to the case (b-1). These facts establish (A2) for every t ∈ R_0+. Thus, (b) is proved.

A.2 Proof of 2)
Statement 2) is proved by the following two facts.
(a) A bound on the start function corresponding to (27) holds for the disturbed system. (b) Assume (a) and let δ_min be the positive number determined as in (18) and (19). Then, (4) holds for every k ∈ Z_0+.
These facts are proved as follows.