Design of Estimation Algorithms from an Innovation Approach in Linear Discrete-Time Stochastic Systems with Uncertain Observations



Introduction
The least-squares estimation problem in linear discrete-time stochastic systems in which the signal to be estimated is always present in the observations has been widely treated; as is well known, the Kalman filter [12] provides the least-squares estimator when the additive noises and the initial state are Gaussian and mutually independent. Nevertheless, in many real situations, the measurement device or the transmission mechanism can be subject to random failures, generating observations in which the state appears randomly or which may consist of noise only due, for example, to component or interconnection failures, intermittent failures in the observation mechanism, fading phenomena in propagation channels, accidental loss of some measurements or data inaccessibility at certain times. In these situations, where the information concerning the system state vector may or may not be contained in each observation, there is at each sampling time a positive probability (called the false alarm probability) that only noise is observed and, hence, that the observation does not contain the transmitted signal; however, it is not generally known whether the observation used for estimation contains the signal or is only noise. To describe this interrupted observation mechanism (uncertain observations), the observation equation, with the usual additive measurement noise, is formulated by multiplying the signal function at each sampling time by a binary random variable taking the values one and zero (a Bernoulli random variable); the value one indicates that the measurement at that time contains the signal, whereas the value zero reflects the fact that the signal is missing and, hence, the corresponding observation is only noise. So, the observation equation involves both an additive and a multiplicative noise, the latter modeling the uncertainty about the signal being present or missing at each observation.
Due to the multiplicative noise component, even if the additive noises are Gaussian, systems with uncertain observations are always non-Gaussian; hence, as occurs in other kinds of non-Gaussian linear systems, the least-squares estimator is not a linear function of the observations and, generally, it is not easily obtainable by a recursive algorithm. For this reason, research on this kind of systems has focused special attention on the search for suboptimal estimators of the signal (mainly linear ones).
In some cases, the variables modeling the uncertainty in the observations can be assumed to be independent; then, the distribution of the multiplicative noise is fully determined by the probability that each particular observation contains the signal. As shown by Nahi [17] (who was the first to analyze the least-squares linear filtering problem in this kind of systems, assuming that the state and observation additive noises are uncorrelated), knowledge of these probabilities allows the derivation of estimation algorithms with a recursive structure similar to the Kalman filter. Later on, Monzingo [16] completed these results by analyzing the least-squares smoothing problem and, subsequently, [3] and [4] generalized the least-squares linear filtering and smoothing algorithms by considering that the additive noises of the state and the observation are correlated.
However, there exist many real situations where this independence assumption on the Bernoulli variables modeling the uncertainty is not satisfied; for example, in signal transmission models with stand-by sensors, in which any failure in the transmission is detected immediately and the old sensor is then replaced, thus avoiding the possibility of the signal being missing in two successive observations. This different situation was considered in [9] by assuming that the variables modeling the uncertainty are correlated at consecutive time instants, and the proposed least-squares linear filtering algorithm provides the signal estimator at any time from those at the two previous instants. Later on, the state estimation problem in discrete-time systems with uncertain observations has been widely studied under different hypotheses on the additive noises involved in the state and observation equations and, also, under several hypotheses on the multiplicative noise modeling the uncertainty in the observations (see e.g. [22] - [13], among others).
On the other hand, there are many engineering application fields (for example, communication systems) where sensor networks are used to obtain all the available information on the system state, and its estimation must be carried out from the observations provided by all the sensors (see [6] and references therein). Most papers concerning systems with uncertain observations transmitted by multiple sensors assume that all the sensors have the same uncertainty characteristics. In the last years, this situation has been generalized by several authors considering uncertain observations whose statistical properties are not assumed to be the same for all the sensors. This is a realistic assumption in several application fields, for instance, in networked communication systems involving heterogeneous measurement devices (see e.g. [14] and [8], among others). In [7] it is assumed that the uncertainty in each sensor is modeled by a sequence of independent Bernoulli random variables, whose statistical properties are not necessarily the same for all the sensors. Later on, in [10] and [1] the independence restriction is weakened; specifically, different sequences of Bernoulli random variables correlated at consecutive sampling times are considered to model the uncertainty at each sensor. This form of correlation covers practical situations where the signal cannot be missing in two successive observations. In [2] the least-squares linear and quadratic problems are addressed when the Bernoulli variables describing the uncertainty in the observations are correlated at instants that differ by two units of time. This study covers more general practical situations, for example, sensor networks where sensor failures may happen and a failed sensor is replaced not immediately, but two sampling times after having failed. However, even if it is assumed that any failure in the transmission results from sensor failures, the failed sensor may not be replaced immediately but after m instants of time; in such situations, correlation among the random variables modeling the uncertainty in the observations at times k and k + m must be considered and new algorithms must be deduced.
The current chapter is concerned with the state estimation problem for linear discrete-time systems with uncertain observations when the uncertainty at any sampling time k depends only on the uncertainty at the previous time k − m; this form of correlation allows us to consider certain models in which the signal cannot be missing in m + 1 consecutive observations. The random interruptions in the observation process are modeled by a sequence of Bernoulli variables (at each time, the value one of the variable indicates that the measurement is the current system output, whereas the value zero reflects that only noise is available), which are correlated only at the sampling times k − m and k. Recursive algorithms for the filtering and fixed-point smoothing problems are proposed by using an innovation approach; this approach, based on the fact that the innovation process can be obtained from the observation process by a causal and invertible operation, consists of obtaining the estimators as a linear combination of the innovations, and it simplifies considerably the derivation of the estimators because the innovations constitute a white process.
The chapter is organized as follows: in Section 2 the system model is described; more specifically, we introduce the linear state transition model perturbed by a white noise, and the observation model affected by an additive white noise and a multiplicative noise describing the uncertainty. Also, the pertinent hypotheses to address the least-squares linear estimation problem are established. In Section 3 this estimation problem is formulated using an innovation approach. Next, in Section 4, recursive algorithms for the filter and fixed-point smoother are derived, including recursive formulas for the estimation error covariance matrices. Finally, the performance of the proposed estimators is illustrated in Section 5 by a numerical simulation example, where a two-dimensional signal is estimated and the estimation accuracy is analyzed for different values of the uncertainty probability and several values of the time period m.

Model description
Consider linear discrete-time stochastic systems with uncertain observations coming from multiple sensors, whose mathematical modeling is accomplished by the following equations.
The state equation is given by

x_{k+1} = F_k x_k + w_k, k ≥ 0, (1)

where {x_k; k ≥ 0} is an n-dimensional stochastic process representing the system state, {w_k; k ≥ 0} is a white noise process and F_k, for k ≥ 0, are known deterministic matrices. We consider scalar uncertain observations {y^i_k; k ≥ 1}, i = 1, ..., r, coming from r sensors and perturbed by noises whose statistical properties are not necessarily the same for all the sensors. Specifically, we assume that, in each sensor and at any time k, the observation y^i_k, perturbed by an additive noise, can have no information about the state (thus being only noise) with a known probability. That is,

y^i_k = θ^i_k H^i_k x_k + v^i_k, k ≥ 1, i = 1, ..., r, (2)

where, for i = 1, ..., r, {v^i_k; k ≥ 1} is the observation additive noise process of the i-th sensor, H^i_k, for k ≥ 1, are known deterministic matrices of compatible dimensions, and {θ^i_k; k ≥ 1}, i = 1, ..., r, are sequences of Bernoulli random variables with P[θ^i_k = 1] = θ̄^i_k. The aim is to address the state estimation problem considering all the available observations coming from the r sensors. For convenience, denoting y_k = (y^1_k, ..., y^r_k)^T, v_k = (v^1_k, ..., v^r_k)^T, Θ_k = diag(θ^1_k, ..., θ^r_k) and H_k = (H^{1T}_k, ..., H^{rT}_k)^T, Equation (2) is equivalent to the following stacked observation equation:

y_k = Θ_k H_k x_k + v_k, k ≥ 1. (3)
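Under the model above, a trajectory of the state and the uncertain observations can be simulated directly. The sketch below uses hypothetical numerical values for F, H, Q, R and the probabilities θ̄^i (none of them come from the chapter) and takes them time-invariant for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)

n, r, T = 2, 2, 100                         # state dimension, sensors, horizon
F = np.array([[0.95, 0.1], [0.0, 0.9]])     # hypothetical transition matrix F_k
H = np.array([[1.0, 0.0], [0.0, 1.0]])      # hypothetical stacked observation matrix H_k
Q = 0.1 * np.eye(n)                         # state-noise covariance Q_k
R = np.diag([0.5, 0.9])                     # observation-noise covariance R_k
theta_bar = np.array([0.7, 0.8])            # assumed P[theta^i_k = 1]

x = np.zeros((T + 1, n))                    # x_0 = 0 for simplicity
y = np.zeros((T, r))
for k in range(T):
    theta = rng.binomial(1, theta_bar)      # Bernoulli uncertainty, one per sensor
    Theta = np.diag(theta.astype(float))
    # stacked observation equation (3): y_k = Theta_k H_k x_k + v_k
    y[k] = Theta @ H @ x[k] + rng.multivariate_normal(np.zeros(r), R)
    # state equation (1): x_{k+1} = F_k x_k + w_k
    x[k + 1] = F @ x[k] + rng.multivariate_normal(np.zeros(n), Q)
```

Whenever a component of `theta` is zero, the corresponding entry of `y[k]` is noise only, which is exactly the interrupted observation mechanism described above.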

Model hypotheses
In order to analyze the least-squares linear estimation problem of the state x_k from the observations y_1, ..., y_L, with L ≥ k, some considerations must be taken into account. On the one hand, it is known that the linear estimator of x_k is the orthogonal projection of x_k onto the space of n-dimensional random variables obtained as linear transformations of the observations y_1, ..., y_L, which requires the existence of the second-order moments of such observations. On the other hand, we consider that the variables describing the uncertainty in the observations are correlated at instants that differ by m units of time, to cover many practical situations where the independence assumption on such variables is not realistic. Specifically, Hypotheses 1-5, collected at the end of the chapter, are assumed.

Remark 2. For the derivation of the estimation algorithms, a matrix product called the Hadamard product, which is simpler than the conventional product, will be used. Let A, B ∈ M_{m×n}; the Hadamard product (denoted by ∘) of A and B is defined as (A ∘ B)_{ij} = A_{ij} B_{ij}. From this definition, the next property, which will be needed later, is easily deduced (see [7]): for any random matrix G, m×m, independent of {Θ_k; k ≥ 1}, the following equality is satisfied:

E[Θ_k G Θ_k] = E[θ_k θ^T_k] ∘ E[G]. (4)

Remark 3. Several authors assume that the observations available for the estimation come either from multiple sensors with identical uncertainty characteristics or from a single sensor (see [20] for the case when the uncertainty is modeled by independent variables, and [19] for the case when such variables are correlated at consecutive sampling times). Nevertheless, in the last years, this situation has been generalized by some authors considering multiple sensors featuring different uncertainty characteristics (see e.g. [7] for the case of independent uncertainty, and [1] for situations where the uncertainty in each sensor is modeled by variables correlated at consecutive sampling times). We analyze the state estimation problem for the class of linear discrete-time systems with uncertain observations (3) which, as established in Hypothesis 4, are characterized by the fact that the uncertainty at any sampling time k depends only on the uncertainty at the previous time k − m; this form of correlation allows us to consider certain models in which the signal cannot be absent in m + 1 consecutive observations.
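Property (4) rests on the pointwise matrix identity Θ G Θ = (θ θ^T) ∘ G, valid for any diagonal Θ = diag(θ); taking expectations under the independence of Θ_k and G then factorizes the two terms. A quick numerical check of the underlying identity (all values made up):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.binomial(1, 0.6, size=3).astype(float)  # a Bernoulli vector
G = rng.standard_normal((3, 3))                     # an arbitrary 3x3 matrix
Theta = np.diag(theta)

lhs = Theta @ G @ Theta                  # conventional products
rhs = np.outer(theta, theta) * G         # Hadamard product (theta theta^T) o G
assert np.allclose(lhs, rhs)
```

Entrywise, (Θ G Θ)_{ij} = θ_i G_{ij} θ_j = (θ θ^T)_{ij} G_{ij}, which is why the expectation of the left-hand side splits into E[θ_k θ^T_k] ∘ E[G].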

Least-squares linear estimation problem
As mentioned above, our aim in this chapter is to obtain, by recursive formulas, the least-squares linear estimator, x̂_{k/L}, of the signal x_k based on the observations {y_1, ..., y_L}, with L ≥ k. Specifically, the problem is to derive recursive algorithms for the least-squares linear filter (L = k) and fixed-point smoother (fixed k and L > k) of the state using the uncertain observations (3). For this purpose, we use an innovation approach as described in [11].
Since the observations are generally nonorthogonal vectors, we use the Gram-Schmidt orthogonalization procedure to transform the set of observations {y_1, ..., y_L} into an equivalent set of orthogonal vectors {ν_1, ..., ν_L}; equivalent in the sense that they both generate the same linear subspace, that is, L(y_1, ..., y_L) = L(ν_1, ..., ν_L). Let {ν_1, ..., ν_{k−1}} be the set of orthogonal vectors satisfying L(ν_1, ..., ν_{k−1}) = L(y_1, ..., y_{k−1}) = L_{k−1}; the next orthogonal vector, ν_k, corresponding to the new observation y_k, is obtained by subtracting from y_k its projection onto L_{k−1} and, because of the orthogonality of {ν_1, ..., ν_{k−1}}, this projection can be found by projecting y_k along each of the previously found orthogonal vectors ν_i, for i ≤ k − 1. Since the projection of y_k onto L_{k−1} is ŷ_{k/k−1}, the one-stage least-squares linear predictor of y_k, we have that

ν_k = y_k − ŷ_{k/k−1}, where ŷ_{k/k−1} = Σ_{i=1}^{k−1} E[y_k ν^T_i] (E[ν_i ν^T_i])^{−1} ν_i. (5)

Hence, ν_k can be considered as the "new information" or the "innovation" in y_k given {y_1, ..., y_{k−1}}.
In summary, the observation process {y_k; k ≥ 1} has been transformed into an equivalent white process {ν_k; k ≥ 1}, known as the innovation process. Taking into account that both processes satisfy ν_i ∈ L(y_1, ..., y_i) and y_i ∈ L(ν_1, ..., ν_i), we conclude that they are related to each other by a causal and causally invertible linear transformation; thus, the innovation process is uniquely determined by the observations.
This consideration allows us to state that the least-squares linear estimator of the state based on the observations, x̂_{k/L}, is equal to the least-squares linear estimator of the state based on the innovations {ν_1, ..., ν_L}. Thus, projecting x_k separately onto each ν_i, i ≤ L, the following general expression for the estimator x̂_{k/L} is obtained:

x̂_{k/L} = Σ_{i=1}^{L} S_{k,i} Π^{−1}_i ν_i, (6)

where S_{k,i} = E[x_k ν^T_i] and Π_i = E[ν_i ν^T_i]. This expression is the starting point to derive the recursive filtering and fixed-point smoothing algorithms in the next section.
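Expression (6) can be checked numerically: since the innovations come from the observations through a unit-lower-triangular (causal and causally invertible) transformation, summing the projections onto the innovations reproduces the direct least-squares projection onto the observations. A sketch with an arbitrary made-up covariance structure for scalar x and L = 5 observations:

```python
import numpy as np

rng = np.random.default_rng(2)
L_obs = 5
A = rng.standard_normal((L_obs, L_obs))
Sigma_yy = A @ A.T + L_obs * np.eye(L_obs)   # covariance of (y_1, ..., y_L)
Sigma_xy = rng.standard_normal(L_obs)        # cross-covariances E[x y_i]

# Gram-Schmidt on the observations corresponds to the unit-lower-triangular
# factor of Sigma_yy: y = Lu @ nu, with innovation variances Pi_i
C = np.linalg.cholesky(Sigma_yy)
d = np.diag(C)
Lu = C / d                                   # unit lower triangular
Pi = d ** 2                                  # Pi_i = E[nu_i^2]

S = np.linalg.solve(Lu, Sigma_xy)            # S_i = E[x nu_i]
# estimator coefficients on y via (6): sum_i S_i Pi_i^{-1} nu_i
coeff_innov = np.linalg.solve(Lu.T, S / Pi)
# direct projection onto the observations
coeff_direct = np.linalg.solve(Sigma_yy, Sigma_xy)
assert np.allclose(coeff_innov, coeff_direct)
```

The whiteness of the innovations is what makes the coefficients S_i Π_i^{−1} computable one at a time, which is the key simplification exploited in the next section.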

Least-squares linear estimation recursive algorithms
In this section, using an innovation approach, recursive algorithms are proposed for the filter, x k/k , and the fixed-point smoother, x k/L , for fixed k and L > k.

Linear filtering algorithm
In view of the general expression (6) for L = k, it is clear that the state filter, x̂_{k/k}, is obtained from the one-stage state predictor, x̂_{k/k−1}, by

x̂_{k/k} = x̂_{k/k−1} + S_{k,k} Π^{−1}_k ν_k. (7)

Innovation ν_k. We will now obtain an explicit formula for the innovation, ν_k = y_k − ŷ_{k/k−1}, or equivalently for the one-stage predictor of the observation, ŷ_{k/k−1}. For this purpose, taking into account (5), we start by calculating E[y_k ν^T_i]. From the observation equation (3) and Hypotheses 3 and 5, it is clear that E[y_k ν^T_i] = E[Θ_k H_k x_k ν^T_i]. Now, for k ≤ m, or for k > m and i < k − m, Hypotheses 4 and 5 guarantee that Θ_k is independent of the innovations ν_i, and then E[y_k ν^T_i] = E[Θ_k] H_k S_{k,i}. So, after some manipulations, and using (6) for x̂_{k/k−1}, the predictor ŷ_{k/k−1} for k ≤ m is obtained (expression (10)). Next, we determine an expression for the remaining expectations when k > m. Taking into account (9), it follows that, or equivalently, from (12), these expectations can be decomposed into two terms. To calculate the first one, we use again (3) for y_{k−i} and, from Hypotheses 3 and 5, obtain an expression which, using Property (4), yields (14), where D_k = E[x_k x^T_k] can be recursively obtained by

D_k = F_{k−1} D_{k−1} F^T_{k−1} + Q_{k−1}, k ≥ 1; D_0 = P_0 + x̄_0 x̄^T_0.

Taking into account the correlation hypothesis on the variables describing the uncertainty, the right-hand side of (14) is calculated differently for i = m and for i < m, as shown below:

(a) For i = m, K^θ_{k,k−m} is in general nonzero and, from (14), the corresponding term remains.

(b) For i < m, from Hypothesis 4, K^θ_{k,k−i} = 0 and, hence, from (14), the corresponding term vanishes.

Now, from expression (5), and using again that Θ_k is independent of ν_i for i ≠ k − m, it is deduced that, or equivalently, from (12) for i = m, expression (15) holds and, with the notation (16), substituting (15) and (16) into (11) and using (6) for x̂_{k/k−1}, expression (17) for ŷ_{k/k−1} when k > m is concluded. Finally, using (3) and (16) and taking into account (1), the expectations appearing in (17) must be computed; we calculate them next.

I. From Equation (3) and the independence hypothesis, it is clear that E[x_k y^T_k] is given by (13).

II. To calculate E[x_k ŷ^T_{k/k−1}], the correlation hypothesis on the random variables θ_k must be taken into account and two cases must be considered:

(a) For k ≤ m, from (10) and by using the orthogonal projection lemma (twice), the required expectations are obtained; then, substituting them in the expression of S_{k,k} and simplifying, the formula for S_{k,k} follows.

Now, an expression for the prediction error covariance matrix, P_{k/k−1}, is necessary. From Equation (1), it is immediately clear that

P_{k/k−1} = F_{k−1} P_{k−1/k−1} F^T_{k−1} + Q_{k−1},

where P_{k−1/k−1} is the filtering error covariance matrix. From Equation (7), it is concluded that

P_{k/k} = P_{k/k−1} − S_{k,k} Π^{−1}_k S^T_{k,k}. (19)

Covariance matrix of the innovation
From the orthogonal projection lemma, the covariance matrix of the innovation is obtained as Π_k = E[y_k y^T_k] − E[ŷ_{k/k−1} ŷ^T_{k/k−1}]. From (3) and using Property (4), the first expectation follows. To obtain E[ŷ_{k/k−1} ŷ^T_{k/k−1}], two cases must be distinguished again, due to the correlation hypothesis on the Bernoulli variables θ_k:

I. For k ≤ m, Equation (10), Property (4) and the orthogonal projection lemma yield the required expectation.

II. For k > m, an analogous reasoning, but using now Equation (17), applies; next, again from the orthogonal projection lemma and, finally, from Equation (18), the above expectations lead to the expression for the innovation covariance matrices.

All these results are summarized in the following theorem.
Theorem 1. The linear filter, x̂_{k/k}, of the state x_k is obtained as

x̂_{k/k} = x̂_{k/k−1} + S_{k,k} Π^{−1}_k ν_k,

where the state predictor, x̂_{k/k−1}, is given by

x̂_{k/k−1} = F_{k−1} x̂_{k−1/k−1}.

The innovation process satisfies ν_k = y_k − ŷ_{k/k−1}, where the predictor ŷ_{k/k−1} is expressed in terms of the matrices T_{k,k−i} and Ψ_{k,k−m} derived above, and the covariance matrix of the innovation, Π_k, is given by the expressions of the previous subsection. The matrix S_{k,k} is determined by an expression involving P_{k/k−1}, the prediction error covariance matrix, which is obtained by

P_{k/k−1} = F_{k−1} P_{k−1/k−1} F^T_{k−1} + Q_{k−1},

with P_{k/k}, the filtering error covariance matrix, satisfying

P_{k/k} = P_{k/k−1} − S_{k,k} Π^{−1}_k S^T_{k,k}.
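Theorem 1 covers correlation at lag m. As a simpler, runnable illustration of the same innovation-based structure, the sketch below implements the special case of independent uncertainty (a Nahi-type filter, in which the lag-m correction terms vanish), for a single sensor with constant θ̄, zero-mean initial state and time-invariant matrices; all numerical values are made up:

```python
import numpy as np

def uncertain_obs_filter(ys, F, H, Q, R, theta_bar, P0):
    """Innovation-based filter for y_k = theta_k * H x_k + v_k with
    independent Bernoulli theta_k (P[theta_k = 1] = theta_bar) and
    zero-mean initial state; returns filtered estimates and covariances."""
    n = P0.shape[0]
    x_pred, P_pred = np.zeros(n), P0.copy()
    D = P0.copy()                       # D_k = E[x_k x_k^T]
    xs, Ps = [], []
    for yk in ys:
        Sig_hat = D - P_pred            # E[xhat_{k/k-1} xhat_{k/k-1}^T]
        nu = yk - theta_bar * (H @ x_pred)            # innovation
        Pi = (theta_bar * H @ P_pred @ H.T            # innovation covariance
              + theta_bar * (1 - theta_bar) * H @ Sig_hat @ H.T + R)
        S = theta_bar * P_pred @ H.T                  # S_{k,k} = E[x_k nu_k^T]
        K = S @ np.linalg.inv(Pi)
        x_filt = x_pred + K @ nu                      # measurement update, cf. (7)
        P_filt = P_pred - K @ S.T                     # cf. (19)
        xs.append(x_filt); Ps.append(P_filt)
        x_pred = F @ x_filt                           # time update, cf. (8)
        P_pred = F @ P_filt @ F.T + Q
        D = F @ D @ F.T + Q                           # recursion for D_k
    return np.array(xs), np.array(Ps)

# usage on simulated data (made-up parameters)
rng = np.random.default_rng(4)
F = np.array([[0.9, 0.1], [0.0, 0.8]]); Q = 0.1 * np.eye(2)
H = np.array([[1.0, 0.5]]); R = np.array([[0.5]])
theta_bar, T = 0.7, 50
x = rng.multivariate_normal(np.zeros(2), np.eye(2)); ys = []
for _ in range(T):
    th = rng.binomial(1, theta_bar)
    ys.append(th * (H @ x) + rng.multivariate_normal([0.0], R))
    x = F @ x + rng.multivariate_normal(np.zeros(2), Q)
xs, Ps = uncertain_obs_filter(ys, F, H, Q, R, theta_bar, np.eye(2))
```

Note how the extra term θ̄(1 − θ̄) H Σ̂ H^T in the innovation covariance, absent from the standard Kalman filter, accounts for the multiplicative Bernoulli noise.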

Linear fixed-point smoothing algorithm
The following theorem provides a recursive fixed-point smoothing algorithm to obtain the least-squares linear estimator, x̂_{k/k+N}, of the state x_k based on the observations {y_1, ..., y_{k+N}}, for fixed k ≥ 1 and N ≥ 1. Moreover, to measure the estimation accuracy, a recursive formula for the error covariance matrices, P_{k/k+N}, is also derived.

Theorem 2. For each fixed k ≥ 1, the fixed-point smoothers, x̂_{k/k+N}, N ≥ 1, are calculated by

x̂_{k/k+N} = x̂_{k/k+N−1} + S_{k,k+N} Π^{−1}_{k+N} ν_{k+N}, N ≥ 1, (20)

whose initial condition is the filter, x̂_{k/k}, given in (7).
The matrices S_{k,k+N} are calculated from expression (21), where the matrices M_{k,k+N} satisfy the recursive formula (22). The innovations ν_{k+N}, their covariance matrices Π_{k+N}, the matrices T_{k+N,k+N−i}, Ψ_{k+N,k+N−m}, D_k and P_{k/k} are given in Theorem 1.
Finally, the fixed-point smoothing error covariance matrix, P_{k/k+N}, verifies

P_{k/k+N} = P_{k/k+N−1} − S_{k,k+N} Π^{−1}_{k+N} S^T_{k,k+N}, N ≥ 1, (23)

with initial condition the filtering error covariance matrix, P_{k/k}, given by (19).
Proof. From the general expression (6), for each fixed k ≥ 1, the recursive relation (20) is immediately clear.
Moreover, S_{k,k+N} = E[x_k ν^T_{k+N}] = E[x_k y^T_{k+N}] − E[x_k ŷ^T_{k+N/k+N−1}], so both expectations must be calculated.

I. From Equation (3), taking into account that E[x_k x^T_{k+N}] = D_k F^T_{k+N,k} and using that Θ_{k+N} and v_{k+N} are independent of x_k, we obtain the first expectation.

II. Based on expressions (10) and (17) for ŷ_{k+N/k+N−1}, which are different depending on whether k + N ≤ m or k + N > m, two cases must be considered:

(a) For k ≤ m − N, using (10) for ŷ_{k+N/k+N−1} together with (8) for x̂_{k+N/k+N−1}, we have the required expression.

(b) For k > m − N, by following a reasoning similar to the previous one, but starting from (17), we get the corresponding expression.

Then, the replacement of the above expectations in S_{k,k+N} leads to expression (21).
The recursive relation (22) for M_{k,k+N} = E[x_k x̂^T_{k+N/k+N}] is immediately clear from (7) for x̂_{k+N/k+N}, together with its initial condition. Finally, from (20), and taking into account that x̂_{k/k+N−1} is uncorrelated with ν_{k+N}, we have P_{k/k+N} = P_{k/k+N−1} − S_{k,k+N} Π^{−1}_{k+N} S^T_{k,k+N} and, consequently, expression (23) holds.
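The structure of Theorem 2 (relations (20) and (23)) can likewise be illustrated in the independent-uncertainty special case, where the lag-m terms drop out. The sketch below runs the filter forward and, from the fixed instant k onward, accumulates the smoothing corrections for that instant; the cross-covariance E = E[x_k e^T_{k+N/k+N−1}] plays a role analogous to the matrices M_{k,k+N}. All numerical values are made up, and the recursions shown are a plausible simplification rather than the chapter's correlated-case algorithm:

```python
import numpy as np

def fixed_point_smoother(ys, F, H, Q, R, theta_bar, P0, k_fix):
    """Fixed-point smoother for x_{k_fix} in the independent-uncertainty
    case; zero-mean initial state. Returns the smoothed estimate, its
    error covariance, and the trace of that covariance after each update."""
    n = P0.shape[0]
    x_pred, P_pred, D = np.zeros(n), P0.copy(), P0.copy()
    x_sm = P_sm = E = None
    traces = []
    for j, yk in enumerate(ys):
        Sig_hat = D - P_pred
        nu = yk - theta_bar * (H @ x_pred)
        Pi = (theta_bar * H @ P_pred @ H.T
              + theta_bar * (1 - theta_bar) * H @ Sig_hat @ H.T + R)
        Pi_inv = np.linalg.inv(Pi)
        Sf = theta_bar * P_pred @ H.T          # filter S_{j,j}
        if j > k_fix:                          # smoothing update, cf. (20), (23)
            S = theta_bar * E @ H.T            # S_{k,j} = E[x_k nu_j^T]
            x_sm = x_sm + S @ Pi_inv @ nu
            P_sm = P_sm - S @ Pi_inv @ S.T
            E = (E - S @ Pi_inv @ Sf.T) @ F.T  # propagate cross-covariance
            traces.append(np.trace(P_sm))
        K = Sf @ Pi_inv                        # filter measurement/time updates
        x_filt, P_filt = x_pred + K @ nu, P_pred - K @ Sf.T
        if j == k_fix:                         # initial condition: the filter at k
            x_sm, P_sm = x_filt.copy(), P_filt.copy()
            E = P_filt @ F.T                   # E[x_k e_{k+1/k}^T]
            traces.append(np.trace(P_sm))
        x_pred, P_pred = F @ x_filt, F @ P_filt @ F.T + Q
        D = F @ D @ F.T + Q
    return x_sm, P_sm, traces

# usage: each extra observation can only reduce the smoothing error covariance
rng = np.random.default_rng(5)
F = np.array([[0.9, 0.1], [0.0, 0.8]]); Q = 0.1 * np.eye(2)
H = np.array([[1.0, 0.5]]); R = np.array([[0.5]])
theta_bar = 0.7
x = rng.multivariate_normal(np.zeros(2), np.eye(2)); ys = []
for _ in range(40):
    th = rng.binomial(1, theta_bar)
    ys.append(th * (H @ x) + rng.multivariate_normal([0.0], R))
    x = F @ x + rng.multivariate_normal(np.zeros(2), Q)
x_sm, P_sm, traces = fixed_point_smoother(ys, F, H, Q, R, theta_bar,
                                          np.eye(2), k_fix=10)
```

Since each update subtracts a positive semidefinite term, the trace of the smoothing error covariance is non-increasing in N, which matches the numerical behavior reported in Section 5.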

Numerical simulation example
In this section, we present a numerical example to show the performance of the recursive algorithms proposed in this chapter. To illustrate the effectiveness of the proposed estimators, we ran a program in MATLAB which, at each iteration, simulates the state and the observed values and provides the filtering and fixed-point smoothing estimates, as well as the corresponding error covariance matrices, which provide a measure of the estimators' accuracy.
• The process {w_k; k ≥ 0} is a zero-mean white Gaussian noise with covariance matrices Q_k. Suppose that the scalar observations come from two sensors, whose additive noises {v^i_k; k ≥ 1}, i = 1, 2, are zero-mean independent white Gaussian processes with variances R^1_k = 0.5 and R^2_k = 0.9, ∀k ≥ 1, respectively. According to our theoretical model, it is assumed that, for each sensor, the uncertainty at time k depends only on the uncertainty at the previous time k − m. The variables θ^i_k, i = 1, 2, modeling this type of uncertainty correlation in the observation process are built from two independent sequences of independent Bernoulli random variables, {γ^i_k; k ≥ 1}, i = 1, 2, with constant probabilities P[γ^i_k = 1] = γ̄^i, in such a way that, if θ^i_k = 0, then γ^i_{k+m} = 1 and γ^i_k = 0 and, hence, θ^i_{k+m} = 1; this fact guarantees that, if the state is absent at time k, the observation at time k + m necessarily contains the state. Therefore, there cannot be more than m consecutive observations consisting of noise only.
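The extracted text does not display the explicit definition of θ^i_k. One construction consistent with every property stated here (θ^i_k = 0 implies γ^i_k = 0 and γ^i_{k+m} = 1, hence θ^i_{k+m} = 1; a mean symmetric under γ̄^i ↔ 1 − γ̄^i; independence except at lags 0 and m) is the assumed definition θ^i_k = 1 − (1 − γ^i_k) γ^i_{k+m}. The following sketch uses it to verify that the signal can never be missing in more than m consecutive observations:

```python
import numpy as np

rng = np.random.default_rng(3)
m, T, gamma_bar = 3, 500, 0.5
gamma = rng.binomial(1, gamma_bar, size=T + m)

# assumed definition: theta_k = 1 - (1 - gamma_k) * gamma_{k+m};
# theta_k = 0 requires gamma_k = 0 and gamma_{k+m} = 1, forcing theta_{k+m} = 1
theta = 1 - (1 - gamma[:T]) * gamma[m:T + m]

# hence the signal cannot be missing in m + 1 consecutive observations
longest_gap, run = 0, 0
for t in theta:
    run = run + 1 if t == 0 else 0
    longest_gap = max(longest_gap, run)
assert longest_gap <= m
```

Under this construction the false alarm probability is 1 − θ̄^i = γ̄^i(1 − γ̄^i), which is indeed symmetric in γ̄^i ↔ 1 − γ̄^i and increasing for γ̄^i ≤ 0.5, as discussed below.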
Moreover, since the variables γ^i_k and γ^i_s are independent, θ^i_k and θ^i_s are also independent for |k − s| ≠ 0, m; the common mean, θ̄^i, of these variables and their covariance function follow directly from this definition. To illustrate the effectiveness of the respective estimators, two hundred iterations of the proposed algorithms have been performed and the results obtained for different values of the uncertainty probability and several values of m have been analyzed.
Let us observe that the mean of the variables θ^i_k, for i = 1, 2, is the same if 1 − γ̄^i is used instead of γ̄^i; for this reason, only the case γ̄^i ≤ 0.5 is considered here. Note that, in such a case, the false alarm probability at the i-th sensor, 1 − θ̄^i, is an increasing function of γ̄^i.
Firstly, the values of the first component of a simulated state, together with the filtering and fixed-point smoothing estimates for N = 2, obtained from simulated observations of the state for m = 3 and γ̄^1 = γ̄^2 = 0.5, are displayed in Fig. 1. This graph shows that the fixed-point smoothing estimates follow the state evolution better than the filtering ones.

ii) The error variances corresponding to the fixed-point smoothers are smaller than those of the filters and, consequently, agreeing with the comments on the previous figure, the fixed-point smoothing estimates are more accurate.
iii) The accuracy of the smoothers at each fixed point k improves as the number of available observations increases.

On the other hand, in order to show more precisely the dependence of the error variances on the values γ̄^1 and γ̄^2, Fig. 3 displays the filtering and fixed-point smoothing error variances of the first state component at a fixed iteration (namely, k = 200) for m = 3, when both γ̄^1 and γ̄^2 are varied from 0.1 to 0.5, which provides different values of the probabilities θ̄^1 and θ̄^2.
In this figure, both graphs (corresponding to the filtering and fixed-point smoothing error variances) corroborate the previous results, showing again that, as the false alarm probability increases, the filtering and fixed-point smoothing (N = 2) error variances become greater and, consequently, worse estimations are obtained. Also, it is concluded that the smoothing error variances are smaller than the filtering ones.

Conclusions and future research
In this chapter, the least-squares linear filtering and fixed-point smoothing problems have been addressed for linear discrete-time stochastic systems with uncertain observations coming from multiple sensors. The uncertainty in the observations is modeled by a binary variable taking the values one or zero (a Bernoulli variable), depending on whether the signal is present or absent in the corresponding observation, and it has been supposed that the uncertainty at any sampling time k depends only on the uncertainty at the previous time k − m. This situation covers, in particular, those signal transmission models in which any failure in the transmission is detected and the failed sensor is replaced after m instants of time, thus avoiding the possibility of the signal being missing in m + 1 consecutive observations. By applying an innovation technique, recursive algorithms for the linear filtering and fixed-point smoothing estimators have been obtained. This technique consists of obtaining the estimators as a linear combination of the innovations, which simplifies their derivation due to the fact that the innovations constitute a white process.
Finally, the feasibility of the theoretical results has been illustrated by the estimation of a two-dimensional signal from uncertain observations coming from two sensors, for different uncertainty probabilities and different values of m. The results obtained confirm the greater effectiveness of the fixed-point smoothing estimators in contrast to the filtering ones and show that more accurate estimations are obtained for lower values of m.
In recent years, several problems of signal processing, such as signal prediction, detection and control, as well as image restoration problems, have been treated using quadratic estimators and, generally, polynomial estimators of arbitrary degree. Hence, the current chapter can be extended by considering the least-squares polynomial estimation problems of arbitrary degree for such linear systems with uncertain observations correlated at instants that differ by m units of time. On the other hand, in practical engineering, some recent progress on the filtering and control problems for nonlinear stochastic systems with uncertain observations is being achieved. Nonlinearity and stochasticity are two important sources of difficulty that are receiving special attention in research and, therefore, filtering and smoothing problems for nonlinear systems with uncertain observations would be relevant topics for further investigation.

Hypothesis 1. The initial state x_0 is a random vector with E[x_0] = x̄_0 and Cov[x_0] = P_0.

Hypothesis 2. The state noise {w_k; k ≥ 0} is a zero-mean white sequence with Cov[w_k] = Q_k, ∀k ≥ 0.

Hypothesis 3. The observation additive noise {v_k; k ≥ 1} is a zero-mean white process with Cov[v_k] = R_k, ∀k ≥ 1.

Hypothesis 4. For i = 1, ..., r, {θ^i_k; k ≥ 1} is a sequence of Bernoulli random variables with P[θ^i_k = 1] = θ̄^i_k. For i, j = 1, ..., r, the variables θ^i_k and θ^j_s are independent for |k − s| ≠ 0, m, and Cov[θ^i_k, θ^j_s] are known for |k − s| = 0, m. Defining θ_k = (θ^1_k, ..., θ^r_k)^T, the covariance matrix of θ_k and θ_s will be denoted by K^θ_{k,s}.

Hypothesis 5. The initial state x_0 and the noise processes {w_k; k ≥ 0}, {v_k; k ≥ 1} and {θ_k; k ≥ 1} are mutually independent.
Hence, an equation for the predictor x̂_{k/k−1} in terms of the filter x̂_{k−1/k−1}, and expressions for the innovation ν_k, its covariance matrix Π_k and the matrix S_{k,k} are required.

State predictor x̂_{k/k−1}. From Hypotheses 2 and 5, it is immediately clear that the filter of the noise w_{k−1} is ŵ_{k−1/k−1} = E[w_{k−1}] = 0 and hence, taking into account Equation (1), we have

x̂_{k/k−1} = F_{k−1} x̂_{k−1/k−1}. (8)

(b) For k > m, the analogous expression follows from (17).

Figure 1. First component of the simulated state, and filtering and fixed-point smoothing estimates for N = 2, when m = 3 and γ̄^1 = γ̄^2 = 0.5.

Next, assuming again that the Bernoulli variables of the observations are correlated at sampling times that differ by three units of time (m = 3), we compare the effectiveness of the proposed filtering and fixed-point smoothing estimators considering different values of the probabilities γ̄^1 and γ̄^2, which provide different values of the false alarm probabilities 1 − θ̄^i, i = 1, 2; specifically, γ̄^1 = 0.2, γ̄^2 = 0.4 and γ̄^1 = 0.1, γ̄^2 = 0.3. For these values, Fig. 2 shows the filtering and fixed-point smoothing error variances, when N = 2 and N = 5, for the first state component. From this figure it is observed that:

i) As both γ̄^1 and γ̄^2 decrease (which means that the false alarm probability decreases), the error variances are smaller and, consequently, better estimations are obtained.

Figure 3. Filtering error variances and smoothing error variances for N = 2 of the first state component at k = 200, versus γ̄^1, with γ̄^2 varying from 0.1 to 0.5, when m = 3.

Figure 4. Filtering error variances and smoothing error variances for N = 2 of the second state component at k = 200, versus γ̄^1, with γ̄^2 varying from 0.1 to 0.5, when m = 3.


Now, we need to prove (21) for S_{k,k+N}.