Distributed State Estimation under State Inequality Constraints with Random Communication over Multi-Agent Networks

In this paper, we investigate distributed state estimation for multi-agent networks with random communication, where the state is constrained by an inequality. In order to deal with environmental/communication uncertainties and to save energy, we introduce two random schemes: a random sleep scheme and an event-triggered scheme. With the help of the Kalman-consensus filter and projection onto the constraint set, we propose two random distributed estimation algorithms. The estimate of each agent is obtained by projecting the consensus estimate, which is computed by randomly exchanging information with its neighbors. The estimation error is shown to be bounded with probability one when the agents randomly take measurements or communicate with their neighbors. We prove the stability of the proposed algorithms based on the Lyapunov method and projection, and demonstrate their effectiveness via numerical simulations.


Introduction
Recently, state estimation using a multi-agent network has been a hot topic due to its broad range of applications in engineering systems such as robotic networks, area surveillance, and smart grids. Distributed methods have the benefits of robustness and scalability. As one of the popular estimation methods, the distributed Kalman filter is capable of real-time estimation and non-stationary process tracking.
Distributed Kalman-filter-based (DKF) estimation has been widely studied in the literature [1][2][3][4][5][6][7][8][9][10][11]. The idea of [1,2,5,6,9] is to add a consensus term to the traditional Kalman filter structure. For example, in [1], the authors proposed a distributed Kalman filter where the consensus information state and the associated information matrix were used to approximate the global solution. An optimal consensus-based Kalman filter was reported in [2], where all-to-all communication is needed. In order to obtain a practical algorithm, the authors of [2] also reported a scalable distributed Kalman filter, where only the state estimate needs to be transmitted between neighbors. Moreover, the distributed estimation problem with switching communication topology was studied in [7,8]. Another kind of distributed Kalman filter design uses the idea of fusing the local estimates of the agents [3,4,10,11]. In [3], the authors investigated a distributed Kalman filter that fuses the information states of neighbors, and in [4] a diffusion Kalman filter based on the covariance intersection method was proposed. The last kind of DKF is the consensus+innovations approach [12,13], which allows each agent to track the global measurement. It should be noticed that the approaches in [12,13] can approximate the globally optimal solution, which is achieved by knowing the global pseudo-observation matrix in advance.
New estimation problems emerge from unreliable communication channels and random data packet dropout. In a multi-agent network, the Kalman-consensus filter (KCF) was characterized under the effect of lossy sensor networks in [5] by incorporating a Bernoulli random variable into the consensus term. Independent Bernoulli distributions were also used to model the random presence of nonlinear dynamics as well as the quantization effect in sensor communications in [14]. From an energy-saving viewpoint, an agent can be enabled to reduce its communication rate. The random sleep (RS) scheme was introduced into the projected consensus algorithm in [15,16], which allows each agent to choose whether to fall asleep or take action. Different from the RS scheme, there is another well-known scheme, called the event-triggered scheme, under which information is transmitted only when predefined event conditions are satisfied. Assuming Gaussian properties of the prior conditional distribution, a deterministic event-triggered schedule was proposed in [17], and, to overcome the limitation of the Gaussian assumption, a stochastic event-triggered scheme was developed and the corresponding exact MMSE estimator was obtained in [18].
On the other hand, in practice, we may have some information or knowledge about the state variables, which can be used to improve the performance (such as accuracy or convergence rate) of distributed estimators. In some cases, such knowledge can be described as constraints on the state variables, which may be obtained from physical laws, geometric relationships, environment constraints, etc. There are many methods for handling state constraints in the Kalman filter, such as pseudo-observations [19], projection [20,21], and the moving horizon method [22]. A survey of the conventional design of Kalman filters with state constraints was reported in [23], which moreover showed that constraints, as additional information, are useful to improve the estimation performance. In [11], the authors studied distributed estimation with constraint information, where both the state estimate and covariance information are transmitted. In this paper, we concentrate on the problem of distributed estimation under state constraints with random communication, and we use the projection method to deal with the constrained state.
In this paper, we focus on distributed estimation under state inequality constraints with random communication, which is an important problem. We design a distributed Kalman filter incorporating a consensus term and the projection method. Specifically, the constrained estimate at each agent is obtained by projecting the consensus estimate onto the constraint surface. Moreover, in order to reduce communication efficiently, we introduce a stochastic event-triggered scheme to realize a tradeoff between communication rate and performance. We summarize the contributions of the paper as follows: (i) We propose a distributed Kalman filter with state inequality constraints and stochastic communication. (ii) We introduce a random sleep scheme and a stochastic event-triggered scheme to reduce the communication rate. (iii) We analyze the stability properties of the proposed algorithms by the Lyapunov method.
The remainder of the paper is organized as follows. Section 2 provides some necessary preliminaries and formulates the distributed estimation problem with state constraints and random communication. Section 3 gives algorithms based on the random sleep scheme and the stochastic event-triggered scheme. The performance analysis is provided in Section 4, and a numerical simulation is shown in Section 5, which demonstrates the benefit of state constraints. Section 6 gives a discussion of the paper. Finally, some concluding remarks are provided in Section 7.
Notations: The set of real numbers is denoted by R. For a symmetric matrix M, M ≥ 0 (M > 0) means that M is positive semi-definite (positive definite). The maximum and minimum eigenvalues are denoted by λ_max(•) and λ_min(•), respectively. E{•} represents mathematical expectation. P(x) denotes the probability of the event x.

Problem Formulation
In this section, we first provide some preliminaries on convex analysis [24]. Then the problem of distributed estimation with state constraints and random communication is formulated.

Consider a function f : R^n → R. If for any x, y ∈ R^n and 0 ≤ a ≤ 1, f(ax + (1 − a)y) ≤ af(x) + (1 − a)f(y) holds, then f is a convex function. Similarly, if for any x, y ∈ X and 0 ≤ a ≤ 1, ax + (1 − a)y ∈ X holds, then X ⊆ R^n is a convex set. Denote by P_X : R^n → X the projection onto a closed and convex set X, defined by P_X(y) = arg min_{x ∈ X} ||x − y||. In order to analyze the properties of the proposed algorithms, we provide a lemma about projection (see [25]).
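The projection defined above is nonexpansive on convex sets, which is the key property exploited in the analysis. A minimal sketch (the halfspace constraint here is purely illustrative, not from the paper):

```python
import numpy as np

def project_halfspace(y, a, b):
    """Euclidean projection of y onto the convex set {x : a^T x <= b}."""
    viol = a @ y - b
    if viol <= 0:
        return y.copy()              # already feasible: P_X(y) = y
    return y - (viol / (a @ a)) * a  # move orthogonally back to the boundary

a, b = np.array([1.0, 1.0]), 1.0
# nonexpansiveness: ||P_X(x) - P_X(y)|| <= ||x - y|| for any x, y
rng = np.random.default_rng(0)
x, y = rng.normal(size=2), rng.normal(size=2)
px, py = project_halfspace(x, a, b), project_halfspace(y, a, b)
assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12
```

The same nonexpansiveness holds for projection onto any closed convex set, which is why projecting the consensus estimate cannot increase its distance to a feasible true state.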

Problem Formulation
Consider the following linear dynamics: x_(k+1) = A x_k + w_k, (1) where x_k ∈ R^m is the state and w_k is zero-mean Gaussian white noise with covariance Q > 0.
In practice, according to physical laws or design specifications, some additional information is known as prior knowledge, which can be formulated as inequality constraints on the state variables (some engineering applications can be found in [23]). In this paper, we consider state inequality constraints [26][27][28]. Specifically, for the dynamics (1), the inequality constraints on the state are given as follows: q_t(x) ≤ 0, t = 1, . . ., s, (2) where q_t : R^m → R is a convex function and s is the number of constraints. The state x_k is estimated by a network consisting of N agents. The measurement equation of the ith agent is given by y_(i,k) = C_i x_k + v_(i,k), (3) where C_i ∈ R^(q_i×m), and v_(i,k) is the measurement noise of agent i, assumed to be zero-mean white Gaussian with covariance R_i > 0. v_(i,k) is independent of w_k for all k, i, and is independent of v_(j,s) when i ≠ j or k ≠ s.
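The model above can be simulated directly; a minimal sketch, where the particular matrices A, C, Q, R are illustrative placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 2
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # illustrative state-transition matrix
Q = 0.1 * np.eye(m)                 # process-noise covariance (Q > 0)
C = np.array([[1.0, 0.0]])          # one agent's measurement matrix C_i
R = np.array([[0.5]])               # measurement-noise covariance (R_i > 0)

x = np.zeros(m)
traj, meas = [], []
for k in range(50):
    # x_{k+1} = A x_k + w_k,  w_k ~ N(0, Q)
    x = A @ x + rng.multivariate_normal(np.zeros(m), Q)
    # y_{i,k} = C_i x_k + v_{i,k},  v_{i,k} ~ N(0, R_i)
    y = C @ x + rng.multivariate_normal(np.zeros(1), R)
    traj.append(x)
    meas.append(y)
```

Each agent in the network would run such a measurement equation with its own C_i and R_i.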
Graph theory [29] can be used to describe the communication topology of the network, where agent i is treated as node i and each communication link is regarded as an edge. An undirected graph is denoted as G = (V, E), where V = {1, 2, . . ., N} is the node set and E = {(i, j) : i, j ∈ V} is the edge set. If nodes i and j are connected by an edge, these two vertices are called adjacent. The neighbor set of node i is defined by N_i = {j : (i, j) ∈ E}. Denote W = (ω_ij) ∈ R^(N×N) as the weighted adjacency matrix of G, where ω_ii = 0 and ω_ij = ω_ji > 0, and let L be the corresponding Laplacian. It is well known that, for the Laplacian L associated with graph G, if G is connected, then λ_1(L) = 0 and λ_2(L) > 0.
In this paper, we adopt the following two standard assumptions, which have been widely used.
The undirected graph G is connected.
The Kalman-consensus filter, given in [2], is designed using local measurements and information from neighbors, where x*_(i,k) is the estimate of each agent, and K_(i,k) and G_(i,k) are the estimator gain and consensus gain to be designed, respectively.
In order to satisfy the state constraints (2), the estimate obtained by each agent should solve the following optimization problem: min_(x ∈ X) ||x − x̂_(i,k+1)||, where x̂_(i,k+1) is the unconstrained consensus estimate. In this paper, we assume that each agent knows the constraints (2); therefore, the constrained estimate can be obtained by projection at each agent, namely onto X = {x | q_t(x) ≤ 0, t = 1, . . ., s}. The aim of the distributed estimator (4) is to find the optimal gains K_(i,k) and G_(i,k) that minimize the mean-squared estimation error. This is different from the existing works [2,7,8], which did not consider state constraints or random communication; here, we consider the problem of the constrained KCF with random communication. Moreover, due to environmental uncertainty and energy saving, agents may miss measurements or communicate randomly with neighbors. Hence, to solve our problem, we need to show how to design the Kalman filter gain K_(i,k) and the consensus gain G_(i,k) under random communication such that the estimation error in (6) is bounded.
In the following sections, we introduce the random sleep and event-triggered schemes to design distributed estimation algorithms, and then analyze the stability conditions of the proposed algorithms.

Distributed Algorithms
We present two algorithms in the following subsections, respectively.

Random Sleep Algorithm
In practice, the communication cost may be much larger than the computational cost. Hence it is reasonable to reduce the communication rate in order to save energy. Here we introduce a random sleep (RS) scheme, which allows the agents to have their own policies for collecting measurements or sending messages to their neighbors. To be specific, in our RS scheme, the agents may fall asleep during the measurement collection and consensus stages according to independent Bernoulli decisions. When an agent sleeps during measurement collection, it cannot use the measurement to update its local estimate. When it sleeps during the consensus stage, it cannot send any information to its neighbors.
Let 0 < ρ^m_i < 1 and 0 < ρ^c_i < 1 be given constants. Denote σ^m_(i,k) and σ^c_(i,k) as independent Bernoulli random variables with P(σ^m_(i,k) = 1) = ρ^m_i and P(σ^c_(i,k) = 1) = ρ^c_i. Then we propose a distributed random sleep algorithm for the Kalman-filter-based estimation as follows. Step 1 (random sleep on measurement collection): if we ignore the consensus step, the covariance iteration and the gain can be obtained by minimum mean squared error (MMSE) estimation (see [30]). To ensure that the estimate of each agent always satisfies the constraints, we project the unconstrained estimate onto the constraint surface. The proposed Kalman-consensus filter with constraints via the RS scheme (RSKCF) is summarized in Algorithm 1. The constrained estimate is at least as close to the true state as the unconstrained one, which hints that the estimator can achieve better performance by using the constraint information in the design. In [20], the author proved that the projection-based Kalman filter with linear equality constraints performs better than unconstrained estimation.
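One RSKCF-style iteration for a single agent can be sketched as follows. This is a simplified illustration, not the paper's exact recursion: the gains, the treatment of the covariance P under sleeping, and the scalar consensus gain g are assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(2)

def rs_step(x_hat, P, y, A, C, Q, R, rho_m, neighbor_ests, g, project):
    """One random-sleep KCF-style iteration for one agent (illustrative sketch).

    rho_m        : probability the agent is awake for measurement collection.
    neighbor_ests: estimates received from neighbors that were awake (may be empty).
    g            : small scalar consensus gain (assumed; see the stability analysis).
    project      : projection onto the constraint set X.
    """
    n = len(x_hat)
    # prediction
    x_pred = A @ x_hat
    P = A @ P @ A.T + Q
    # random sleep on measurement collection (Bernoulli sigma^m_{i,k})
    if rng.random() < rho_m:
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)   # MMSE gain
        x_pred = x_pred + K @ (y - C @ x_pred)
        P = (np.eye(n) - K @ C) @ P
    # consensus with whichever neighbors were awake (sigma^c gates the arrivals)
    for xj in neighbor_ests:
        x_pred = x_pred + g * (xj - x_pred)
    # projection onto the constraint set gives the constrained estimate x*_{i,k}
    return project(x_pred), P
```

In a full simulation each agent would run this step in parallel, with its own Bernoulli draws deciding whether it measures and whether it broadcasts.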
Notice that P_(i,k) in the algorithm no longer represents the estimation error covariance with respect to x*_(i,k). A poor choice of the consensus gain G_(i,k) may spoil the stability of the error dynamics. In Section 4, we present the stability analysis of the algorithm with an appropriate choice of the consensus gain.

Algorithm 1 KCF with constraints via RS (RSKCF)
Initialization: x*_(i,0), P_(i,0). At time k, each agent performs the random-sleep measurement collection, consensus, and projection steps described above; the error dynamics of Algorithm 1 can then be written as (12).
Remark 1. Different choices of ρ^m_i and ρ^c_i correspond to different network conditions. If ρ^c_i = 0 and 0 < ρ^m_i < 1 for all i ∈ V, the network is not connected, and each agent only computes the estimate by its local Kalman filter with constraints. If ρ^c_i = 1 and 0 < ρ^m_i < 1 for all i ∈ V, the multi-agent system has a perfect communication channel, but each agent may fall asleep during measurement collection. When 0 < ρ^c_i < 1 and ρ^m_i = 1 for all i ∈ V, each agent takes a measurement at each time but may fall asleep during the consensus stage. Since the RS probabilities ρ^c_i and ρ^m_i can be chosen independently, we can take advantage of this formulation to simplify the design in practice.
Remark 2. It should be noticed that the inequality constraints q_t(x) ≤ 0, t = 1, . . ., s in this paper include linear equality constraints. In Section 4, problem (4) will be solved in closed form under linear equality constraints Dx_k = d as a special case, which is consistent with the case in [20]. It should also be noticed that nonlinear equality constraints cannot be handled by the projection method, because {x | q_t(x) = 0, t = 1, . . ., s} may not be a convex set.

Stochastic Event-Triggered Scheme
An event-triggered scheme, based on an event monitor, can also be used to reduce the communication rate and to save energy. The event-triggered idea works as follows: when the associated monitor exceeds a predefined threshold, the agent transmits local information to its neighbors, which provides a tradeoff between the communication rate and the performance. Since the agents only transmit valuable information, the event-triggered scheme may achieve better performance than the proposed RS scheme. In [17,18], the authors dealt with a centralized stochastic event-triggered estimation problem. Here we introduce a distributed stochastic event-triggered scheme by extending the algorithms described in [18].
Denote a binary decision variable γ_(i,k) ∈ {0, 1} for agent i at time k. We use the following strategy: agent i measures the target state and sends information to its neighbors if γ_(i,k) = 1; agent i only receives information from its neighbors if γ_(i,k) = 0. As stated in [18], at each time instant, agent i generates an i.i.d. random variable ϕ_(i,k) and computes γ_(i,k) according to (13). Clearly, in (13), the parameter Y_i introduces a degree of freedom to balance the tradeoff between the communication rate and the estimation performance. From an engineering viewpoint, a larger Y_i means more information is transmitted.
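A decision rule in the spirit of the stochastic trigger of [18] can be sketched as follows. The exact form of (13) is not reproduced here; the exponential-of-quadratic threshold below is an assumption chosen because it matches the Gaussian-closed trigger family used in [18], with ỹ standing for an innovation-like quantity.

```python
import numpy as np

rng = np.random.default_rng(3)

def triggered(y_tilde, Y):
    """Stochastic event-trigger sketch in the style of [18] (exact form assumed).

    y_tilde : innovation-like quantity at agent i.
    Y       : positive-definite weight; larger Y raises the transmit probability.
    Returns gamma = 1 (measure and transmit) or 0 (stay silent, only receive).
    """
    phi = rng.random()                                 # i.i.d. uniform phi_{i,k} on [0, 1)
    threshold = np.exp(-0.5 * y_tilde @ Y @ y_tilde)   # Gaussian-shaped acceptance region
    return int(phi > threshold)
```

With ỹ = 0 the threshold equals 1, so the agent never transmits; large deviations shrink the threshold toward 0 and make transmission almost certain, which is the intended "only valuable information is sent" behavior.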
The local MMSE estimator (without consensus and projection) for agent i incorporating the stochastic event-triggering (13) is given in Theorem 2 of [18]. As stated before, via the consensus idea and projection technique, the constrained estimate can be obtained by x*_(i,k) = P_X(x̂_(i,k)). The stochastic event-triggered KCF with constraints (ETKCF) is described in Algorithm 2.
Algorithm 2 KCF with constraints via the stochastic event-triggered scheme (ETKCF)
Remark 3. Since only important information is broadcast, the stochastic event-triggered scheme may achieve better performance than the RS scheme. The RS scheme, however, has a simpler form and is easier to implement.

Remark 4. Here, we provide some notation which will help to analyze the proposed algorithms. For given A, C_i, Q, R_i and P_(i,k), without loss of generality, there are positive scalars b, c, q, r_i, p_i, and p̄_i bounding the corresponding norms and eigenvalues. The last inequality holds because the solution of (9) is bounded under Assumption 1 and Theorem 1 in [30].

Performance Analysis
In this section, we analyze the convergence of the proposed algorithms in the following three subsections. We first introduce some concepts related to stochastic processes, which are useful in the convergence analysis ([31,32]).
Definition 1. The stochastic process ζ_k is said to be exponentially bounded in mean square if there are real numbers η > 0, ϑ > 0 and 0 < ν < 1 such that E{||ζ_k||^2} ≤ η||ζ_0||^2 ν^k + ϑ holds for every k ≥ 0.
Definition 2. The stochastic process ζ_k is said to be bounded with probability one if sup_(k≥0) ||ζ_k|| < ∞ holds with probability one.
We first give three lemmas, which can be found in [33].
Lemma 2 (Lemma 2.1, [33]). Suppose that there is a stochastic process V_k(ξ_k) as well as positive real numbers θ̲, θ̄, µ > 0 and 0 < α ≤ 1 such that θ̲||ξ_k||^2 ≤ V_k(ξ_k) ≤ θ̄||ξ_k||^2 and E{V_(k+1)(ξ_(k+1)) | ξ_k} − V_k(ξ_k) ≤ µ − αV_k(ξ_k) are fulfilled. Then the stochastic process ξ_k is exponentially bounded in mean square, i.e., E{||ξ_k||^2} ≤ (θ̄/θ̲)E{||ξ_0||^2}(1 − α)^k + (µ/θ̲) Σ_(j=0)^(k−1) (1 − α)^j for every k ≥ 0. Moreover, the stochastic process ξ_k is bounded with probability one.

Random Sleep Scheme
The following theorem shows the stochastic stability of (12).
Theorem 1.Under Assumptions 1 and 2, the error dynamics (12) for Algorithm 1 is exponentially bounded in mean square and bounded with probability one.
To prove Theorem 1, we first give a lemma, whose proof can be found in Appendix A.
Remark 5. From the proof of Theorem 1, the constrained estimate x*_(i,k) is the same as the unconstrained estimate x̂_(i,k) once x̂_(i,k) ∈ X for agent i. Otherwise, ||x*_(i,k) − x_k|| < ||x̂_(i,k) − x_k||. In other words, the constrained estimate x*_(i,k) is closer to the true state x_k than the unconstrained estimate x̂_(i,k). In fact, our schemes first make all agents' estimation errors bounded, and then project each estimate onto the constraint surface.
Remark 6. It should be noticed that if the system is asymptotically stable, i.e., the spectral radius of A is less than 1, there always exist gains that guarantee the convergence of the MSE. When the system is unstable, i.e., the spectral radius of A is greater than 1, we need to design the estimation gain K_(i,k) and the consensus gain G_(i,k) to guarantee the convergence of the MSE. In [13], the authors discussed the maximum degree of instability of A that guarantees the convergence of the distributed estimation algorithm. The maximum degree of instability of A is related to the connectivity and the global measurement matrix, which reflects the tracking ability of the network. Given the maximum degree of instability of A, the problem remains how to design the gain matrices K_(i,k) and G_(i,k).
In what follows, we give the closed-form solution under linear equality constraints, which can be written as Dx_k = d, (32) where D ∈ R^(s×m) is the constraint matrix, d ∈ R^s, and s is the number of constraints. Generally, D should be of full rank; otherwise, we have redundant constraints. In such a case, we can remove linearly dependent rows from D until D is of full rank.
In order to use the projection method, the constraints (32) can be written as X = {x | Dx = d}. Each agent can then obtain its constrained estimate by projecting x̂_(i,k+1), the estimate obtained by the consensus step, onto X. Therefore, we have the following result.
Corollary 1. Under Assumptions 1 and 2, the error dynamics of Algorithm 1 with the constraints (32) is exponentially bounded in mean square and bounded with probability one, and the constrained estimate is given in closed form. Clearly, the linear equality constraints can be extended to linear state inequality constraints Dx_k ≤ d. Specifically, if the consensus estimate satisfies D x̂_(i,k) ≤ d, then the constrained estimate x*_(i,k) and x̂_(i,k) are the same. Otherwise, the constrained estimate can be obtained by projecting x̂_(i,k) onto Dx = d. Therefore, Corollary 1 still holds for the linear inequality constraint case.
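The projection onto {x : Dx = d} in Corollary 1 has a simple closed form. A minimal sketch with the unweighted Euclidean projection (a covariance-weighted variant, as in [20], would replace the identity metric with a weighting matrix):

```python
import numpy as np

def project_affine(x_hat, D, d):
    """Euclidean projection of x_hat onto {x : D x = d}, D full row rank.

    Closed form: x* = x_hat - D^T (D D^T)^{-1} (D x_hat - d).
    """
    DDt = D @ D.T
    return x_hat - D.T @ np.linalg.solve(DDt, D @ x_hat - d)

# example constraint x1 - x2 = 0 (hypothetical, for illustration)
D = np.array([[1.0, -1.0, 0.0]])
d = np.array([0.0])
x = np.array([3.0, 1.0, 5.0])
xs = project_affine(x, D, d)
assert np.allclose(D @ xs, d)   # projected point is feasible
```

`np.linalg.solve` is preferred over forming the inverse explicitly; if D may be rank deficient, the redundant rows should be removed first, as noted above.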

Event-Triggered Scheme
The communication rate γ*_i for agent i can be obtained accordingly. Notice that ψ(ỹ_(i,k)) is proportional to the probability density of a Gaussian variable. If we could obtain the covariance of ỹ_(i,k) in the steady state, we could obtain the communication rate [18]. However, it is difficult to analyze the communication rate in the distributed case, since ỹ_(i,k) depends on the consensus and projection stages, even though a communication rate is determined by the stochastic event-triggered scheme (13).
Although we cannot explicitly obtain the stochastic event-triggered communication rate, we can still observe that the event-triggered scheme performs better than the random sleep scheme at the same communication cost (see the performance comparison between the two schemes by simulations in Section 5).
Actually, for agent i, there exists a stochastic event-triggered communication rate 0 < γ*_(i,k) < 1. The probabilities of collecting measurements and of sending information can be taken as 1 and γ*_(i,k), respectively. Following Theorem 1, there exists a sufficiently small consensus gain that guarantees that the error dynamics is exponentially bounded in mean square and bounded with probability one.

Simulations
In this section, we give some simulations to illustrate the effectiveness of the proposed algorithms. We compare the estimation performance of the proposed estimator with the suboptimal consensus-based Kalman filter (SKCF) in [2]. Moreover, we compare the performance of the two proposed algorithms.
A target moves along a line with constant velocity and is tracked by a network of N = 6 agents. The topology of the network is shown in Figure 1. The dynamics of the target follow (1) with a constant-velocity model, where T is the sampling period, x_k(1) and x_k(2) are the target position coordinates, and x_k(3) and x_k(4) are the velocities along the two directions. In this example, we take T = 1 s and Q = diag(0.1, 0.1, 0.1, 0.1). The measurement of agent i follows (3). In [20], the author stated that the target is constrained if it is traveling along a road; otherwise, it is unconstrained. In this example, we assume the target travels along a given road, and therefore the problem is constrained. Here we consider that the target travels on a road with heading η, which means tan η = x_k(2)/x_k(1) = x_k(4)/x_k(3). The constraint information can then be written as the linear equalities x_k(2) − x_k(1) tan η = 0 and x_k(4) − x_k(3) tan η = 0. In this example, we take η = 60 deg.
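The tracking setup above can be written out explicitly. The constant-velocity transition matrix below is the standard form consistent with the description (an assumption, since the paper's equation is not reproduced here); the constraint matrix D encodes the two heading equalities as Dx = 0.

```python
import numpy as np

T = 1.0
eta = np.deg2rad(60.0)

# standard constant-velocity model (assumed form): positions x(1), x(2),
# velocities x(3), x(4)
A = np.array([[1, 0, T, 0],
              [0, 1, 0, T],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

# heading constraint tan(eta) = x(2)/x(1) = x(4)/x(3), i.e. D x = 0
D = np.array([[np.tan(eta), -1.0, 0.0, 0.0],
              [0.0, 0.0, np.tan(eta), -1.0]])

# a state on the road stays on the road under the noise-free dynamics
feasible = np.array([1.0, np.tan(eta), 2.0, 2.0 * np.tan(eta)])
assert np.allclose(D @ feasible, 0.0)
assert np.allclose(D @ (A @ feasible), 0.0)
```

The last assertion illustrates why the constraint is natural here: the noise-free constant-velocity dynamics keep a feasible state feasible, so only the process noise pushes estimates off the road.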

Simulation Case 1:
In this case, we test the performance of the two proposed algorithms. Denote e_0 = [5, 5, 0.3, 0.3]^T and x_0 = [0, 0, tan η, 1]^T, with the initial condition of agent i set from x_0 and e_0. We consider the total mean square estimation error (TMSEE) of Algorithm 1 and compare its performance with the existing consensus-based Kalman filter algorithm in [9], which we name SKCF. The TMSEE is widely used to indicate the performance of an estimator, and is defined as the Monte Carlo average of the total squared estimation error over all agents. The parameters of the multi-agent system are chosen such that g satisfies the upper bound determined by Theorem 1. In both cases, 1000 independent Monte Carlo simulations are carried out to show the estimation performance. Figure 2 shows the TMSEE of Algorithm 1 and of SKCF [9] at the same communication rate. It can be seen that the constrained estimate of Algorithm 1 is more accurate than the unconstrained KCF. Moreover, according to Figure 2, the RSKCF with constraints converges to a precision of 60 in fewer than 15 time instants, while SKCF without constraints needs almost 20. Therefore, for this example, by exploiting the constraint information, the proposed RSKCF outperforms SKCF in both convergence rate and accuracy. Next, we give further simulation results for Algorithm 1 with the same parameter settings. Figure 3 shows that the estimation errors of the agents reach consensus and remain bounded, which is consistent with Theorem 1. The curve of g* with fixed ρ^m_i = 0.7 for all i ∈ V is shown in Figure 4, while the curve with fixed ρ^c_i = 0.7 for all i ∈ V is shown in Figure 5.
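A TMSEE computation over Monte Carlo runs can be sketched as follows. The normalization (sum over agents, average over runs) is an assumption, since the paper's defining equation is not reproduced here.

```python
import numpy as np

def tmsee(errors):
    """Total mean square estimation error per time step (normalization assumed).

    errors: array of shape (runs, N_agents, steps, m) holding the estimation
            errors x*_{i,k} - x_k for every Monte Carlo run, agent, and step.
    Returns a length-`steps` array: the Monte Carlo average over runs of the
    squared error norm summed over all agents.
    """
    sq = np.sum(errors ** 2, axis=-1)        # squared error norm per (run, agent, step)
    return np.mean(np.sum(sq, axis=1), axis=0)  # sum over agents, average over runs
```

Plotting `tmsee(errors)` against the time step reproduces curves of the kind shown in Figure 2.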
From the proof of Theorem 1, it can be observed that the upper bound on g increases with the probability of measurement collection and decreases with the probability of broadcast. In order to compare the stochastic event-triggered scheme with the random sleep scheme, we choose parameters ensuring γ*_i = ρ^c_i and ρ^m_i = 1. Specifically, we obtain γ*_1 = 0.6027, γ*_2 = 0.5940, γ*_3 = 0.5823, γ*_4 = 0.5928, γ*_5 = 0.6194 and γ*_6 = 0.6248 by simulations. The triggering sequences are shown in Figure 6. The stochastic event-triggered scheme performs better than the random sleep scheme, as shown in Figure 7; an intuitive explanation is that only important information is transmitted under the stochastic event-triggered scheme. The communication rates of Algorithm 2 with Y_i = 0.1 × I_2, 0.3 × I_2, 0.5 × I_2, 0.7 × I_2 for all i are shown in Figure 8, where I_2 = diag{1, 1}. The simulation results show that the communication rate increases as Y_i increases, since a larger Y_i makes an agent more likely to share information with its neighbors under the event-triggered scheme (13).

Simulation Case 2:
In this case, we give an example with a more complex network consisting of 30 agents. The communication topology of the network is shown in Figure 9. The parameter settings are the same as in Case 1. The initial condition of agent i is set to x_(i,0) = x_0 + (−1)^i i e_0, i = 1, . . ., 30, and R_i = diag(r_(1,i), r_(2,i)), where r_(1,i) and r_(2,i) are taken randomly from (50, 80). Figure 10 shows the comparison between Algorithms 1 and 2; we can see that the stochastic event-triggered scheme still performs better than the random sleep scheme. Figures 11 and 12 give the TMSEE of Algorithms 1 and 2, respectively.

Discussion
As shown in Figure 2, the proposed algorithm achieves better performance than the one in [9]. This is because the constraint information can be treated as additional information beyond what is explicitly given in the system model. Therefore, the modified model differs from the standard Kalman filter equations, and the modification helps to improve the estimation performance. In [34], the authors studied estimation with nonlinear equality constraints, where the nonlinear state constraints are linearized locally and the estimate is obtained by projection onto the local linear surface. In a distributed setting, such linearization produces approximation errors and may suffer from a lack of convergence. Our future study may include designing distributed filters with nonlinear state equality constraints.
A consensus-based Kalman filter with stochastic sensor activation was proposed in [5], where all sensors share the same activation probability, and the stability of the algorithm was proved. In [9], the authors extended the results to different activation probabilities for each sensor, and the convergence property was shown under mild conditions. Both [5] and [9] studied the mean-square stability of the algorithm. Differently, we study the stochastic stability of the proposed algorithms, and show that the constraint information helps to improve the estimation performance.
In this paper, we investigate the random communication problem for distributed estimation with state constraints. It should be noticed that a local observability condition is needed. In [11], the authors also studied the distributed estimation problem with state inequality constraints for the deterministic case, where only a global observability condition is needed, i.e., each agent need not be locally observable. The results in [11] rely on sufficient communication between agents so that information spreads through the whole network. However, when agents communicate with their neighbors randomly, the information may not spread sufficiently through the network; therefore, global observability alone is hard pressed to guarantee the stability of the estimation. Indeed, under global observability, it is easy to see that if agents do not communicate with each other, the estimates will diverge. It is worth studying how to design the communication rate such that the estimation error is stable under the global observability condition.
In [35,36], distributed event-triggered estimation was also studied. In [35], a time-varying gain was designed by Riccati-like difference equations in order to adjust the innovation and state-consensus information, while, in [36], an event-triggered scheme was derived by analyzing the stability conditions of the error dynamics (without noise terms). Motivated by [18], we introduce the stochastic event-triggered scheme of [18], which shows how to design a parameter in the event mechanism to achieve a desired tradeoff between the communication rate and the estimation quality. However, in the distributed setting, it is hard to obtain the communication rate due to the correlations between agents. In the future, we may study the communication rate for distributed stochastic event-triggered estimation.
Many works investigate network structures [37][38][39][40]. In [37], the authors presented a metrics suite for evaluating the communication of multi-agent systems, where one agent can choose any agent to communicate with. The paper [38] considered the problem of network-based practical set consensus over multi-agent systems subject to input saturation constraints with quantization and network-induced delays. We consider a distributed estimation problem over multi-agent systems without input, quantization, or delays, which can be addressed in our future work. In [39], the authors considered the problem of permutation routing over wireless networks. The main idea in [39] is to partition the network into several groups; broadcasting is then performed locally in each group, and gateways are used for communication between groups to send the items to their final destinations.

Conclusions
Distributed estimation based on the Kalman filter under state constraints with random communication was studied in this paper. Two stochastic schemes, namely the random sleep and event-triggered schemes, were introduced to deal with environment or communication uncertainties and to save energy. The convergence of the proposed algorithms was verified, and conditions for the corresponding stability were given by choosing a suitable consensus gain. Moreover, it was shown that the additional state-constraint information is useful to improve the performance of distributed estimators.

Appendix A
Moreover, it follows from P_(i,k) > 0 and R_i > 0 that (A6) holds. Combining (A5) and (A6), and noticing that C_i P_(i,k) C_i^T ≥ 0, substituting (A9) into (A8) yields (A10). Taking the inverse of both sides of (A10), multiplying from the left and right with (A − σ^m_i K_(i,k) C_i)^T and (A − σ^m_i K_(i,k) C_i), respectively, and then taking expectations on both sides, we obtain the desired bound.

Figure 1.Topology of the multi-agent network in case 1.

Figure 8. Communication rate for different Y i .