An event-triggered observation scheme for systems with perturbations and data rate constraints

In this paper, an event-triggered observation scheme is considered for a perturbed nonlinear dynamical system connected to a remote location via a communication channel which can only transmit a limited amount of data per unit of time. The dynamical system, which is assumed to be globally Lipschitz, is subject to bounded state perturbations. Moreover, at the system's location, the output is measured with some bounded errors. The objective is to compute estimates of the state at the remote location in real time with a given maximum error, whilst using the communication channel as little as possible. An event-triggered communication strategy is proposed in order to reduce the average number of communications. An important feature of this strategy is to provide an estimate of the relation between the observation error and the communication rate. The observation scheme's efficiency is demonstrated through simulations of unicycle-type robots. © 2022 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).


Introduction
Efficiency has always played a central role in the field of system dynamics and control. In the past twenty years, with the appearance of wireless technologies, efficiency has gained a new meaning. It is not enough to simply observe and control systems. These tasks should in addition be carried out in a way that is efficient in terms of data rates. This quest for efficiency has led to the birth of an entire sub-field: control and estimation over data rate constrained communication channels (Matveev & Savkin, 2009; Yüksel & Başar, 2013). The problems in this sub-field share some common ingredients: one or several dynamical systems, communication channels, or other devices such as controllers, actuators, and sensors, that interact by exchanging messages in the presence of a source of uncertainty. This uncertainty can be in the form of noise, perturbations, parametric uncertainty, or deviations in the initial conditions. The uncertainty can be understood as information in the sense of Shannon's theory (see Shannon (1948)). This information needs to be transmitted via the communication channels, which are generally limited either in the frequency at which they can send messages or in the number of bits they can transmit per unit of time, and can be subject to losses or noise themselves.

✩ This work was elaborated in the UCoCoS project, which has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 675080. Parts of this work were presented at the 21st IFAC World Congress (IFAC 2020), July 12-17, 2020, Berlin, Germany. This paper was recommended for publication in revised form by Associate Editor Hernan Haimovich under the direction of Editor Sophie Tarbouriech.
The earliest work in this sub-field focused on linear systems. For example, these early results include Wong and Brockett (1997), where the problem of state estimation for a stochastic plant over data rate constrained communication is investigated, and Elia and Mitter (2001), where the problem of stabilization of a linear plant with limited communication is considered. Many more results were obtained for linear systems, and broad surveys of these results are available in Nair, Fagnani, Zampieri, and Evans (2007), Baillieul and Antsaklis (2007) and Andrievsky, Matveev, and Fradkov (2010). For nonlinear systems, results followed soon after. These include De Persis (2003), where the problem of the stabilization of a nonlinear system via a constrained channel is posed, and Baillieul (2004), which investigates the data rate requirements for feedback control. Both of these early works assumed a particular structure on the nonlinear system. Results for more general structures were obtained in Nair, Evans, Mareels, and Moran (2004) and Liberzon and Hespanha (2005), which adapted techniques for linear systems from Nair and Evans (2003) and Liberzon (2003) to nonlinear ones. We put the emphasis on Nair et al. (2004) because it is among the first papers to introduce a notion of entropy (in this case, topological feedback entropy) to describe the minimum sufficient data rate to stabilize a system. Several other notions of entropy have since been used to provide bounds on the sufficient and/or necessary data rates allowing for constrained control and/or observation of unperturbed systems. These results include invariance entropy (Kawan, 2013), feedback entropy (Colonius, Kawan, & Nair, 2013), topological entropy (Liberzon & Mitra, 2016; Matveev & Pogromsky, 2016; Matveev & Savkin, 2009; Savkin, 2006; Sibai & Mitra, 2018; Voortman, Pogromsky, Matveev, & Nijmeijer, 2019), estimation entropy/α-entropy (Kawan, 2018; Sibai & Mitra, 2017), and restoration entropy (Matveev & Pogromsky, 2019). As an alternative to notions of entropy, some works such as Fradkov, Andrievsky, and Ananyevskiy (2015) and Fradkov, Andrievsky, and Evans (2008) relied on passivity-based methods to provide bitrate bounds. The aforementioned results on entropy are limited to unperturbed systems, as the entropy of a perturbed system is, generally speaking, infinite (see Matveev and Savkin (2009)) and, as such, entropy is not a useful mathematical tool to analyze uncertain systems.
For similar reasons, attention has turned to another topic: event-based control. Some of the earliest works include Årzén (1999), where an event-triggered PID controller is presented, and Åström and Bernhardsson (1999), where the effects of event-based sampling are compared to those of periodic sampling. An introduction to event-based control can be found in Heemels, Johansson, and Tabuada (2012), and an overview of sampling-related results in Hetel et al. (2017) and Ge, Han, Zhang, Ding, and Yang (2020).
One possible approach to obtain constructive bounds for sampled-data systems relies on LMI-based techniques. Early results include Fridman, Seuret, and Richard (2004), which provides sufficient conditions for the robust sampled-data stabilization of linear systems with delayed input, and Fu and Xie (2005), which considers several quantized feedback designs for linear systems. Recently, LMI-based techniques have been employed for nonlinear systems with specific structures, such as Lur'e-type systems (Seifullaev & Fradkov, 2016; Zhang, Mazo, & van de Wouw, 2017), nonlinear systems with cone-bounded nonlinearities (Tarbouriech, Seuret, Moreira, & Gomes da Silva, 2017) and with cone-bounded nonlinear inputs (Moreira, Tarbouriech, Seuret, & Gomes da Silva, 2019).
In some recent works, both concepts (data rate constraints and event-based operation) have explicitly been used together for control and observation purposes. Among them are Han et al. (2015), which uses an event-triggered sensor schedule for remote estimation of a linear system, Shi, Chen, and Darouach (2016), designing a remote estimator for a linear system with unknown exogenous inputs, Huang, Shi, and Chen (2017), where a remote estimator for a system with an energy harvesting sensor is developed, Trimpe (2017), which tackles distributed state estimation with data rate constraints, Xia, Gupta, and Antsaklis (2017), which considers networked state estimation with a shared communication medium, and Muehlebach and Trimpe (2018), where an LMI approach is used.
Focusing on observation, the problem statement in this paper is motivated by the following practical situation: a unicycle-type robot needs to communicate its position and orientation to a remote location by using Wi-Fi, whilst measuring only its position and using limited computation capacities. Because wireless networks cannot transmit infinite amounts of data, it is necessary to develop an observation scheme that minimizes the data rate usage. The simplest equations describing a unicycle-type robot are

ẋ_1(t) = v_l (1 + d_1(t)) cos(x_3(t)),
ẋ_2(t) = v_l (1 + d_1(t)) sin(x_3(t)),
ẋ_3(t) = v_θ (1 + d_2(t)),                                  (1)
y(t) = [x_1(t) x_2(t)]^⊺ + w(t),

with x(t) = [x_1(t) x_2(t) x_3(t)]^⊺ the state-space vector and y(t) the measured output, where x_1(t) ∈ R, x_2(t) ∈ R are the coordinates of the robot in the xy-plane, x_3(t) ∈ [0, 2π) is the angular orientation of the robot, v_l and v_θ are the linear and angular velocities respectively, d_1(t) and d_2(t) are perturbations such that d_max ≥ d_i(t) ≥ d_min > −1, which correspond to an actuation mismatch, and w(t) ∈ R^2 is a measurement error. Note that the bound d_min > −1 implies that the actual velocity v_l (1 + d_1(t)) has the same sign as v_l, which is a natural condition stemming from experiments (see Guerra, Efimov, Zheng, and Perruquetti (2014, 2017) for more details about this formulation).
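To make the model concrete, the unicycle equations can be simulated with a simple Euler scheme. The sketch below is illustrative only: the default parameter values and the uniform sampling of d_i(t) and w(t) within their bounds are assumptions for the demonstration, not values prescribed by the paper.

```python
import math
import random

def simulate_unicycle(T=10.0, dt=0.01, v_l=0.1, v_th=0.2,
                      d_bound=0.09, w_bound=0.05, seed=0):
    """Euler simulation of the unicycle model: bounded actuation
    mismatch d_i(t) (with |d_i| <= d_bound < 1) and a bounded
    measurement error w(t) on the position only."""
    rng = random.Random(seed)
    x1, x2, x3 = 0.0, 0.0, 0.0
    traj, meas = [], []
    for _ in range(int(T / dt)):
        d1 = rng.uniform(-d_bound, d_bound)  # actuation mismatch, > -1
        d2 = rng.uniform(-d_bound, d_bound)
        x1 += dt * v_l * (1 + d1) * math.cos(x3)
        x2 += dt * v_l * (1 + d1) * math.sin(x3)
        x3 = (x3 + dt * v_th * (1 + d2)) % (2 * math.pi)  # keep in [0, 2*pi)
        w = (rng.uniform(-w_bound, w_bound), rng.uniform(-w_bound, w_bound))
        traj.append((x1, x2, x3))
        meas.append((x1 + w[0], x2 + w[1]))  # only the position is measured
    return traj, meas
```

Such a simulator provides the ground-truth trajectory and noisy measurements against which a remote observation scheme can be evaluated.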
In this paper, an event-triggered scheme is developed for the remote observation of systems with Lipschitz nonlinearities, state perturbations, and measurement noise (as in (1)) via lossless data rate constrained communication channels. A first important feature of the scheme is that the minimum duration between two consecutive messages that are sent via the channel can be chosen.
A second important feature is that the precision of the estimates can be tuned. Additionally, the observation scheme functions on an event-triggered basis, communicating a new estimate when the precision of the current estimate is no longer sufficient. The combination of these features leads to an observation scheme that is very efficient in terms of the transmitted number of bits. Providing analytical bounds on the sufficient transmission capacity is the main contribution of this paper. This paper is a generalization and extension of Voortman, Efimov, Pogromsky, Richard, and Nijmeijer (2020): a generalization because nonlinear systems are considered, as opposed to linear systems, and an extension because the results are developed for continuous-time systems, as opposed to Voortman et al. (2020), which considered discrete-time systems. An observer specific to unicycle robots utilizing a similar communication protocol as this paper has been experimentally validated on Turtlebots, and the results have been accepted for publication in Voortman et al. (2021).
The structure of this work is as follows. First, in Section 2, we specify the problem statement. An observation scheme is developed in Section 3. In Section 4, two results about this observation scheme are presented. The first one provides a bound on the maximum observation error. The second one evaluates a bound on the so-called ''channel transmission capacity'' which is sufficient to implement the observation scheme. Finally, in Section 5, simulations of the observation scheme are provided for the motivating example (1). These simulations illustrate why the observation scheme is particularly efficient and how its different parameters can be tuned to fit the user's preferences in terms of performance.

Problem statement
We consider continuous-time systems of the following form:

ẋ(t) = A x(t) + ϕ(x(t)) + d(t),
y(t) = C x(t) + w(t),                                  (2)

where x(t) ∈ R^n is the state, A and C are the system matrices, ϕ is a nonlinear vector field, d(t) ∈ R^n is a state perturbation, y(t) ∈ R^m is the measured output, and w(t) ∈ R^m is a measurement error. We make the following assumptions about the external signals:

Assumption 1. We have ∥d(t)∥_2 ≤ δ and ∥w(t)∥_2 ≤ ω, ∀t ≥ 0, where ∥·∥_2 is the Euclidean norm, δ is the maximum state perturbation, and ω is the maximum measurement error.
We make the following regularity assumption about the right-hand side of (2).
Assumption 2. ϕ is Lipschitz with constant L.
Note that the previous assumption can be relaxed to local Lipschitz continuity if the perturbations from Assumption 1 are such that the solutions of (2) are uniformly ultimately bounded (Khalil, 2002). The system is equipped with a smart sensor (a sensor admitting some computational capacities, which allows it to perform additional computations on the measured data) and it is connected to a remote location via a data rate constrained communication channel, which can only send messages that are of finite size. For any time interval Δt between two consecutive transmissions, the channel can transmit at most b^+(Δt) bits. The objective is to provide estimates x̂(t) of x(t) at the remote location by sending messages over this communication channel. The sensor and the remote location are aware of an initial estimate x̂(0) such that

∥x(0) − x̂(0)∥_2 ≤ ϵ_0,                                  (3)

where ϵ_0 is a user-specified parameter corresponding to the error of initial conditions. The reason why (3) is assumed will be discussed further in this paper in Remark 2.
In order to generate the estimates, messages m(t_j), where t_j are the transmission times, are sent. Four ingredients interact with these messages: a sampler S, a coder C, an alphabet function A, and a decoder D. The four devices together form a communication protocol. The following constants/parameters are known by all devices: the system matrices A and C, the vector field ϕ, the maximum state perturbation δ, the maximum measurement error ω, the discretization error ϵ (which is induced by the coding/decoding operations), and the initial estimate x̂(0) with its accuracy ϵ_0.
At the system side, the sampler S generates the instants of transmission in the following way:

t_{j+1} = S(t_j, {y(s)}_{t_j ≥ s ≥ 0}, m(t_1), …, m(t_j)),   t_0 = 0.                                  (4)

The coder then generates the messages:

m(t_j) = C(t_j, {y(s)}_{t_j ≥ s ≥ 0}, m(t_1), …, m(t_{j−1})),   ∀t_j : j > 0.                                  (5)

At each communication instant, the messages are encoded into a finite-sized alphabet (the finiteness being due to the data rate constraints). The alphabet function A determines the last index l_j of the messages:

l_j = A(t_j, m(t_1), …, m(t_{j−1})).                                  (6)

The restriction on the choice of messages is then m(t_j) ∈ {1, …, l_j}, ∀t_j : j > 0. After encoding the messages, the number of transmitted bits should not exceed the maximum number of bits that can be sent during the communication. This implies the following constraint on the alphabet length:

⌈log_2(l_j)⌉ ≤ b^+(t_j − t_{j−1}).                                  (7)

At the remote location, the decoder D receives the messages and interprets them to generate a deterministic estimate of the state x(t) in the following way:

x̂(t) = D(t, m(t_1), …, m(t_j)),   ∀t ∈ [t_j, t_{j+1}), ∀j ≥ 0.                                  (8)

Because of the perturbation, measurement error and finite data rate, it is impossible to provide exact estimates at the remote location.
The objectives of this paper are the following: (1) To design an observation scheme that guarantees a maximum observation error ξ, i.e., ∥x(t) − x̂(t)∥_2 ≤ ξ, ∀t ≥ 0; (2) To guarantee that its performance in terms of data rate is better when the perturbations are not the worst-case realizations;
(3) To investigate the relationship between the time interval between subsequent communications Δt_j := t_{j+1} − t_j, the maximum number of bits per time interval b^+(·), and ξ for the proposed communication scheme.

Designing the devices
In this section, we introduce the different devices of the communication protocol. The main mechanism can be described as follows: the sensor emulates the dynamics of the remote estimate on the last sent estimate and forwards a new local estimate whenever that value is ''far away'' from the ''true'' local measurement. More specifically, at the sensor side, a local observer transforms the output into estimates x̄(t) of the state x(t). A copy of the decoder is also simulated by the computational capacity of the sensor, so that the sensor knows the current estimate x̂(t) the decoder currently has. This ''copy'' of the remote estimate which is provided by the smart sensor will be denoted x̂_c(t). Starting at the initial estimate x̂(0) and in the absence of messages, the decoder computes real-time estimates as solutions of (2) without perturbations. When the distance between x̄(t) and x̂_c(t) = x̂(t) becomes larger than the prescribed maximum error (including a margin for the local observation error ē(t) := x̄(t) − x(t)), the sampler decides to communicate and the coder sends a message to the decoder to provide a new estimate x̂(t). Fig. 1 depicts how the different elements interact. Below, each of these algorithms is presented in detail.
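The sensor-side mechanism can be sketched in a few lines. The following is a simplified scalar illustration, not the paper's exact procedure: `predict` stands in for the message-free estimate dynamics (solutions of (2) without perturbations), the transmitted estimate is not quantized here, and the threshold plays the role of the prescribed maximum error including the local-observer margin.

```python
def run_event_triggered(xs, predict, threshold, dt_min, dt):
    """Sketch of the triggering logic: the sensor propagates a copy
    of the remote estimate and sends a new one when the distance to
    the local estimate exceeds the allowed threshold.
    `xs` lists the local estimates, one per step of size `dt`;
    `predict(xc, dt)` emulates the decoder dynamics between messages."""
    xc = xs[0]              # shared initial estimate (accuracy eps_0)
    last_send_t = 0.0
    events = []
    t = 0.0
    for x in xs[1:]:
        t += dt
        xc = predict(xc, dt)
        # trigger only after the minimum inter-message time has elapsed
        if t - last_send_t >= dt_min and abs(x - xc) > threshold:
            xc = x          # message: decoder copy resets to the estimate
            last_send_t = t
            events.append(t)
    return events
```

For instance, with a slowly drifting scalar state and a static predictor, a message is emitted each time the accumulated drift passes the threshold, rather than at every sampling instant.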

The local observer
The local estimate x̄(t) has the following dynamics:

dx̄(t)/dt = A x̄(t) + ϕ(x̄(t)) + K (y(t) − C x̄(t)),                                  (9)

where K ∈ R^{n×m} is a gain matrix. The dynamics of ē(t) are

dē(t)/dt = (A − KC) ē(t) + ϕ(x̄(t)) − ϕ(x(t)) + K w(t) − d(t).                                  (10)

The local observer uses x̄(0) = x̂(0) as an initial point, which implies that ∥ē(0)∥ ≤ ϵ_0. The gain K is computed by using the solutions of the LMI program (11), which yields the following bound on the local observation error.

Proposition 1. Let Assumptions 1 and 2 hold and let K be obtained from the LMI program (11). Then the local observation error satisfies ∥ē(t)∥_2 ≤ η, ∀t ≥ 0, where η is defined in (12).

The proof of this proposition is provided in Appendix A.
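The local observer admits a straightforward Euler implementation. The sketch below assumes the standard Luenberger-type form dx̄/dt = A x̄ + ϕ(x̄) + K (y − C x̄); in the paper, the gain K must come from the LMI program (11), whereas here any stabilizing gain is accepted for illustration. Plain nested lists are used to keep the sketch dependency-free.

```python
def run_observer(A, C, K, phi, ys, x0_hat, dt):
    """Euler integration of a Luenberger-type observer
    x_bar' = A x_bar + phi(x_bar) + K (y - C x_bar).
    A is n x n, C is m x n, K is n x m (nested lists);
    `ys` is the measurement sequence, one sample per step of size dt.
    Returns the final estimate."""
    n = len(x0_hat)
    x = list(x0_hat)
    for y in ys:
        # innovation: measurement minus predicted output
        innov = [y[i] - sum(C[i][j] * x[j] for j in range(n))
                 for i in range(len(C))]
        f = phi(x)
        x = [x[i] + dt * (sum(A[i][j] * x[j] for j in range(n)) + f[i]
                          + sum(K[i][k] * innov[k] for k in range(len(innov))))
             for i in range(n)]
    return x
```

As a sanity check, for a double integrator with ϕ ≡ 0 and a gain placing both error eigenvalues at −1, the estimate converges to the true state from a wrong initial guess.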
Remark 1. The objective function of the LMI (11) simply aims to minimize the size of the attractive region (by maximizing γ*_1 and minimizing γ*_2 and γ*_3). This, in turn, minimizes the bound on the local observation error. Ideally, one would want to minimize η directly, but this leads to a generally intractable nonlinear matrix inequality problem.

The protocol description
We now present the communication procedure, which we will further reference as Procedure 1. It is composed of a sampler, alphabet function, coder, and decoder, as described below. For this particular communication procedure, a minimum time interval between communications is going to be employed. This quantity, denoted as Δt, is known by all devices. It is a user-specified parameter, which is to be chosen finite, and it directly influences the upper bound on the estimation error. To properly describe the communication instants, we will need several quantities. The indexes j of the communication instants are inherently known by all devices. The quantity j̄(t) refers to the index of the last instant of communication (initially, j̄(0) = 0). This quantity is always known by the sampler (because it knows how many communication instants it defined), the coder (because it knows how many messages it sent), as well as the decoder (because it knows how many messages it received). In between messages (i.e., for t ∈ [t_j, t_{j+1})), the estimates x̂(t) and x̂_c(t) are computed as the solutions of

dx̂(t)/dt = A x̂(t) + ϕ(x̂(t)),                                  (13)

with the initial conditions x̂(t_j) coming from the messages m(t_j).
Before we can define the communication protocol, an extra lemma is necessary. The alphabet relies on the assumption that the estimate x̂(t) will lie within a known set V_j when the communications occur. Lemma 1 guarantees this after each communication instant, which makes the procedure repeatable: the bound (14) holds ∀t ∈ [t_j, t_j + Δt], where x(t) is the solution of (2) with x(t_j) as an initial condition, x̂(t) is the solution of (13) with x̂(t_j) as an initial condition, and µ*_1 and µ*_2 are solutions of the LMI program (15). The proof of this lemma is provided in Appendix B. Note that (15) is an LMI program. Because of its formulation, (15) always admits a solution (this can be seen from the fact that the µ_i can be chosen arbitrarily large, which makes the first inequality hold for some µ_i sufficiently large).
Procedure 1. The Sampler: For all t ≥ t_{j̄(t)} + Δt, the sampler verifies whether the condition (16) is satisfied, where µ*_1 and µ*_2 are solutions of (15). If the condition is not met, a message must be sent to provide a new estimate. The sampler then updates j̄(t) and sets t_{j̄(t)} = t (j increases by one and j̄(t) = j).
The Alphabet Function: If t = t_{j̄(t)}, the coder and decoder build a covering of the set V_j, defined in (17), with balls of size ϵ. The balls in the covering are numbered from 1 till l_{j̄(t)}, where l_{j̄(t)} is the length of the alphabet and hence the output of the alphabet function.
The Coder: At the communication instants, the coder finds the index of the ball in the covering whose center is the closest to x̄(t_j) and sends this index over the channel. To get x̂_c(t), the coder computes the solutions of (13) with the center of that ball as an initial condition.
The Decoder: When the decoder receives a message, it computes the solutions of (13) with the center of the corresponding ball as an initial condition. This solution is then used as x̂(t).
The strategy to build the alphabet is based on the following idea. As was previously mentioned, in the absence of messages, new estimates are obtained by computing the solutions of (13). After receiving a message, the state of the system x(t) is contained in a ball of a certain radius whose center is the estimate x̂(t). In the absence of any messages, this ''ball'' of uncertainty is gradually deformed into a larger/smaller uncertain set. The uncertain set evolves due to three factors: first of all, the unknown state perturbation d(t) increases its radius; secondly, the uncertainty set is stretched/compressed by the action of the dynamics of the system (the deformations are proportional to the eigenvalues of A); and thirdly, its radius is increased due to the measurement noise. Given that the communication intervals are chosen to be finite, this uncertain set remains of finite size, estimated by Lemma 1 with t = Δt, in between communications. It can thus be covered by a finite number of balls of size ϵ > 0. The balls in the covering can be indexed from 1 till l_max < ∞. In order to produce such a covering, the only information needed is the initial ball and the different upper bounds on the uncertainties/errors, which implies that both the coder and the decoder can build the set. In order to transmit a new estimate, one can simply send the index of one of the balls, whose center then serves as a new estimate with a precision that will depend on ϵ. The cost of communicating in that fashion depends on how many balls of size ϵ are required to cover the uncertain set.
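The covering-and-indexing idea can be illustrated with a simple grid covering. This is only a sketch: the enclosing hypercube of half-width R stands in for the uncertainty set (the actual sets V_j are shaped by Lemma 1), cells of side 2ϵ/√n are used because each fits inside an ϵ-ball, and all function names below are hypothetical helpers, not the paper's devices.

```python
import math

def build_grid(n, R, eps):
    """Cells per axis when a hypercube of half-width R is covered
    with cubes of side 2*eps/sqrt(n); each cube fits in an eps-ball."""
    side = 2 * eps / math.sqrt(n)
    return math.ceil(2 * R / side)

def encode(x, center, R, eps):
    """Index (the 'message') of the grid cell containing x,
    expressed relative to the previous estimate `center`."""
    n = len(x)
    k = build_grid(n, R, eps)
    side = 2 * R / k
    idx = 0
    for xi, ci in zip(x, center):
        cell = min(int((xi - (ci - R)) / side), k - 1)
        idx = idx * k + cell
    return idx

def decode(idx, n, center, R, eps):
    """Center of the indexed cell; serves as the new estimate."""
    k = build_grid(n, R, eps)
    side = 2 * R / k
    cells = []
    for _ in range(n):
        cells.append(idx % k)
        idx //= k
    cells.reverse()
    return [ci - R + (cell + 0.5) * side for cell, ci in zip(cells, center)]
```

A decoded center is guaranteed to lie within ϵ (in Euclidean norm) of the encoded point, which is exactly the discretization error the protocol budgets for.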

Remark 2.
• The coordinates of the centers of the balls used in the covering are always relative to the previous estimate. By communicating in a relative fashion, it is possible to keep the size of the messages limited even if the system is unstable. If the system is unstable, state-space trajectories can drift arbitrarily far away from the origin, which implies that sending an estimate in an absolute fashion (that is, in a coordinate system that is with respect to a fixed point, e.g., the origin) requires covering all of R^n, which needs infinitely many balls and hence infinitely many bits to be sent. The main drawback of communicating in this fashion is that the channel has to be lossless, since the loss of a single message would bring the communication protocol to a halt. It is possible to make the communication protocol robust towards losses in the communication channel (see e.g. Voortman et al. (2019) for more information on communication protocols that are robust towards losses). This option was not explored, as robustness towards losses lies outside of the scope of the current work.
• Since x̂_c(t) = x̂(t), both devices can build the set V_j according to its definition (17). The covering procedure which determines the alphabet is not demanding from a computational point of view, since it consists of covering a set that always has the same shape, except that the whole set is shifted by a certain vector from the origin. Moreover, since this set is centered around the previous estimate, both the coder and decoder can build a covering for it and thus have access to the alphabet.
• The existence of Δt, the minimum time interval between two consecutive communications, implies that Zeno behavior is automatically avoided, since at least Δt time has to elapse between two triggering instants and Δt is a strictly positive parameter that does not change during the execution of the communication protocol.
• The assumption that (3) holds is made in order to avoid unnecessarily complicating the communication protocol. If (3) does not hold, then an additional initialization step would be required, during which an initial estimate is provided. Because this step does not change the rest of the communication protocol and would have little impact on the overall communication rate, it is omitted and replaced by the assumption that (3) holds, to keep the procedure easy to follow.

Rate and errors
With the observation scheme and its devices fully introduced, we now focus on determining what minimum number of bits per time interval is sufficient to implement the observation scheme. The first result we present, Proposition 2, provides a closed-form expression (18) for the maximum observation error ξ of the observation scheme described in Procedure 1.

Proof. After each communication, x̂(t) is the center of a ball that contains x(t). From Proposition 1, we have that ∥x̄(t) − x(t)∥_2 ≤ η. At the times of communication, we consequently have that the bound holds for s ∈ [0, Δt]. Since after Δt the sampler checks whether the distance is going to be exceeded, and a communication resets the distance to ϵ + η should this bound be reached, (18) holds ∀t ≥ 0.

■
The next result of this section, which is also the main result of the paper, aims to provide a bound on b^+(·) which is sufficient for the designed communication scheme. In the following theorem, the notation ⌈·⌉ refers to the ceiling function (i.e., the smallest integer that is greater than or equal to the argument of the function).
Theorem 1. The observation scheme described in Procedure 1 can be implemented with any transmission capacity b^+(·) satisfying (19).

Proof. In order to implement Procedure 1, (7) should be verified for all j. The size of the alphabet is equal to the number of balls of radius ϵ required to cover V_j. Since the radius of the set is bounded (by Lemma 1), it can evidently be covered by a finite number of hypercubes of side 2ϵ/√n, each of which is itself contained in a sphere of radius ϵ. In order to verify (7), it thus suffices to take the log_2 of the number of hypercubes, which leads to (19) and completes the proof.
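The order of magnitude of the sufficient bit count can be sketched from this covering argument: with cubes of side 2ϵ/√n, roughly ⌈R√n/ϵ⌉ cells per axis cover a hypercube of half-width R, and indexing the covering costs about n·log_2 of that many bits. The function below follows this simple grid bound only; it is in the spirit of (19), and the paper's exact expression may differ.

```python
import math

def sufficient_bits(n, R, eps):
    """Bits sufficient to index a grid covering of a radius-R
    uncertainty set by balls of radius eps: cubes of side
    2*eps/sqrt(n) fit inside eps-balls, so ceil(R*sqrt(n)/eps)
    cells per axis suffice for the enclosing hypercube [-R, R]^n."""
    cells_per_axis = math.ceil(R * math.sqrt(n) / eps)
    # one index ranges over all n axes; keep at least one bit
    return max(1, math.ceil(n * math.log2(cells_per_axis)))
```

As expected from the trade-off discussed later, halving ϵ (or enlarging the set radius R) increases the per-message bit count only logarithmically per axis.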
Remark 3. As will be demonstrated in the simulations section, both error bounds presented in this section may be conservative. This is due to several factors: (1) In the LMI formulation, the Lipschitz nonlinearity is modeled as a perturbation (that has to be compensated). For systems where the Lipschitz nonlinear term plays the most important part in the dynamics, this can be a source of conservatism.
(2) The proof relies on a Lyapunov-like function.As is always the case with Lyapunov functions, better functions can lead to tighter error bounds.
(3) The objective function of (11) is suboptimal in the sense that it is a linear formulation that minimizes the size of the convergence region, which depends on a quotient of the γ_i's, an inherently nonlinear function.
(4) The covering of the set V_j is simple but not optimal. A better covering would result in fewer balls being necessary and would improve the bound of Theorem 1.
Regarding point (3) of the above remark, one could alternatively take the objective function γ_1^{−1} + δ²γ_2 + ω²γ_3 and use Schur's lemma to transform (11) into an LMI again (this approach is used in, e.g., Moreira et al. (2019)). After trying both objective functions on the simulations presented in the next section, the authors noticed no improvement compared with the previous objective function. This alternative formulation was therefore not used.

Case study
In this section, we apply the previously developed observation scheme to the motivating example that was presented in the introduction: a unicycle-type robot with data rate constraints. The goals of this section are:
• To illustrate the validity of the theoretical upper bound (19), but also to show that, in a real situation, our observation strategy can be much more efficient;
• To illustrate how the choices of ϵ and Δt affect the number of communications;
• To show that, with an improved local observer, the performance of the scheme becomes much better.
The unicycle-type model that we consider is of the form (1) with v_θ = 0.2. We rewrite this system such that it fits the form (2). Assumption 1 holds with δ = 0.1, v_l = 0.015 and ω = 0.05. In general, Assumption 2 holds with L = 2v_l. However, (11) is not feasible with L = 2v_l. We can adapt the formulation and solve this issue by using the fact that, if the observation error is small, the Lipschitz constant is smaller than 2v_l. In order to solve the LMI, we thus follow these steps: (1) Pose L = L̄, where L̄ is some arbitrary initial value to be used in solving the LMI (11); (2) Compute the γ_i and K by solving the LMI (11) with L = L̄; (3) Compute η from Eq. (12) and find the Lipschitz constant of the system over the interval [−η, η]; (4) If the Lipschitz constant that we compute is smaller than L̄, the problem is feasible; if not, we adapt L̄ to a different value and start at step (1) again.
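The four-step procedure above is a small fixed-point iteration. The sketch below abstracts the LMI solve into a callable `eta_from_L` (a hypothetical stand-in for solving (11) and evaluating (12)) and the Lipschitz evaluation into `lip_on_ball`; the multiplicative growth of the guess is an arbitrary adaptation rule chosen for illustration.

```python
def validate_lipschitz(L_guess, eta_from_L, lip_on_ball,
                       max_iter=20, growth=1.2):
    """Iterative validation of a Lipschitz constant guess L_bar:
    obtain the error bound eta = eta_from_L(L_bar), then check that
    the true Lipschitz constant over [-eta, eta], lip_on_ball(eta),
    does not exceed L_bar. If it does, enlarge the guess and retry."""
    L_bar = L_guess
    for _ in range(max_iter):
        eta = eta_from_L(L_bar)
        if lip_on_ball(eta) <= L_bar:
            return L_bar, eta      # validated (L_bar, eta) pair
        L_bar *= growth            # adapt the guess and restart
    raise RuntimeError("no validated Lipschitz constant found")
```

With monotone toy callables, the iteration terminates at the first guess dominating the local Lipschitz constant on the resulting error ball, mirroring how L̄ = 0.38 is validated against the interval [−η, η] in the text.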
We begin by computing the gain K of the local observer by solving the LMI program (11). Following the aforementioned steps, we pose L = L̄ = 0.38 and obtain the gain K. To solve the LMI, the MATLAB package YALMIP (Löfberg, 2004) was used, together with the MOSEK solver (MOSEK ApS, 2019).
With these values, we have η = 0.370964, and the Lipschitz constant of the system over [−0.370964, 0.370964] is smaller than 0.38. The value for L̄ is thus validated. We then used a Monte Carlo method with 10,000 different simulations of the communication scheme from t = 0 till t = 100, with initial conditions (0, 0, 0), an initial estimate randomly chosen in a ball of radius η centered around the origin, and random perturbations.
In order to compare the actual number of transmitted bits with the theoretical maximum number of bits, and to show the influence of the choices of ϵ and Δt, we run several simulations. First, we set Δt = 0.1 and simulate for various choices of ϵ. In Table 1, we present the results of the experiments in terms of maximum observation error ξ, number of communications N_com, and number of bits per communication N_bits. We make several observations: (1) The minimum time interval between two communications is set to 0.1, which implies that we could theoretically communicate 1000 times in 100 s. Clearly, the actual number of communications is much lower than that, since it reaches 14.39 when the most precise estimates are sent. In that case, 8 bits need to be sent every time we communicate, on average every 7 s. This shows that the scheme is much more efficient in terms of the actual number of transmitted bits per unit of time, compared to the theoretical sufficient maximum number of bits.
(2) As the precision of the estimates increases, the total error decreases, but the number of transmitted bits increases (more balls are required to cover V_j). There is thus a trade-off between precision and the number of transmitted bits.
Since ξ is smaller, we also communicate more often.
(3) Decreasing ϵ can only affect the precision up to a certain limit, which is largely dictated by the precision of the local observer. Since the bound on the local observation error is η = 0.370964, it is impossible for ξ to go below this bound. Moreover, even approaching this bound requires us to decrease ϵ drastically.
For the next set of simulations, we use only one value of ϵ = 0.05 and simulate the observation scheme for various choices of Δt. The results are displayed in Table 2. We make the following observations about the results: (1) Increasing Δt also increases ξ, which is logical: communicating more often leads to better precision.
(2) As Δt increases, the average time between two consecutive communications drastically increases as well. Even with Δt = 0.5, we communicate on average less than once every 100 time instants. This is mostly due to the fact that the state perturbations and measurement noise are relatively low.
(3) For almost all choices of Δt, the number of bits that need to be transmitted is the same. This is due to the fact that the number of bits is rounded upwards. The unrounded number of bits does increase as Δt increases.
(4) The limiting effect of the local observation error is again present. Even choosing Δt = 0.01, which corresponds to a sampling period of 10 ms, ξ remains large.
In both sets of simulations, the main limiting factor is that the local observer has a limited precision (η = 0.371), and this greatly influences the total error. By using a better local observer (e.g., a nonlinear observer specifically designed for unicycle-type robots), the performance of the observation scheme would be improved. In order to illustrate this fact, we consider the situation where all states of the unicycle robot are observed locally, so that a local observer is not necessary. We thus consider the same system as in the previous example, except that C = I_3 (where I_n is the n×n identity matrix), v_l = 0.1, v_θ = 0.2, δ = 0.099 and ω = 0.05. This value for δ implies that very large perturbations are possible in the state. In particular, the linear velocity of the robot ranges from 0.001 m/s to 0.199 m/s. Since all states are observed, η = ω = 0.05. We again used a Monte Carlo method with 1000 different simulations for 100 s.
Fig. 2 shows how the actual observation error evolves over time, together with ∥x̄(t) − x̂(t)∥_2, which is used for the triggering condition. Note that the distance between both horizontal lines is equal to ω = 0.1. We observe that communication is triggered each time the triggering condition is met. The observation error always remains below ξ. There is some conservatism in the triggering condition, the reasons for which have been discussed in Remark 3. Towards the end, it can be seen that the actual observation error is larger than ∥x̄(t) − x̂(t)∥_2 and the triggering condition, which is due to the measurement noise; this also shows that the protocol is not too conservative.
Fig. 3 shows the trajectory of the unicycle robot in the x_1 − x_2 plane for one particular simulation with ϵ = 0.01 and Δt = 0.5. We can see that the observation scheme follows the actual trajectory of the system but, due to the state perturbations and the measurement noise, the observation scheme regularly resets to a point close to the current local estimate. The full results of the Monte Carlo simulations for relevant pairs of ϵ and Δt are displayed in Table 3. We make the following observations about these results: (1) Although large state perturbations are present, the observation scheme is still more efficient than the theoretical maximum. Even in the case of ϵ = Δt = 0.01, we only communicate 477 times on average, as opposed to the theoretical maximum of 10,000 times.
(2) The effects of ϵ and Δt on ξ and N_com are similar to those in the previous example. It is always possible to trade precision for the number of bits and vice versa.
(3) The choice of Δt has more impact on the error, as well as on the average number of communications, than ϵ.

Conclusion
In this paper, we presented an event-triggered, data rate constrained observation scheme for continuous-time nonlinear systems with perturbations. After posing the problem statement, the design of the devices that form the communication scheme was explained. A theorem evaluating an upper bound on the bit-rate sufficient to implement this communication protocol was then presented. The protocol was tested via simulations of unicycle-type robots. Through these simulations, the following properties of the observation scheme have been highlighted.
• On average, it was observed in simulations that the number of communications is much lower than the theoretical maximum. This is due to some conservatism in the estimates, as well as the fact that the observation scheme functions on an event-triggered basis;
• By properly choosing the parameters of the observation scheme, it is possible to trade off accuracy for a lower number of transmitted bits and vice versa;
• One of the limiting factors of the observation scheme is the usage of a generic local observer. The accuracy of the local observer greatly influences the overall precision of the data rate constrained observation scheme.
The continuation of this work includes:
• Using a local observer more adapted to the structure of the system, to decrease the total observation error and hence the maximum observation error;
• Using the data rate constrained observation scheme on several mobile robots;
• Adapting the observation scheme to a larger class of nonlinear systems.

Appendix A. Proof of Proposition 1
Proof. For brevity, we will drop the dependency on time in the notations of this proof (e.g., ē refers to ē(t)). We start by defining the Lyapunov function V(ē) = ē^⊺ P* ē. The derivative with respect to time of this function is V̇(ē) = (dē/dt)^⊺ P* ē + ē^⊺ P* (dē/dt). Since the local observer is initialized with ∥ē(0)∥ ≤ ϵ_0, we guarantee that ē(0) starts within the attractive region and hence (12) holds for all t ≥ 0. ■

Fig. 2. Evolution over time of the observation error and the quantity used in the triggering function, together with the respective bounds, for one particular simulation with full state measurement, ω = 0.1, ϵ = 0.01, and Δt = 0.5.

Table 2
Results for various Δt with ϵ = 0.05.