Robust Prediction, Filtering and Smoothing


In the H∞ game formulation, nature is viewed as an adversary that conspires for the estimates "to be as erroneous as possible, while trying to minimize the energy it invests in driving the system" [19].
Pertinent state-space H∞ predictors, filters and smoothers are described in [4] - [19]. Some prediction, filtering and smoothing results are summarised in [13], and methods for accommodating model uncertainty are described in [14], [18], [19]. The aforementioned methods for handling model uncertainty can result in conservative designs (which depart far from optimality). This has prompted the use of linear matrix inequality solvers in [20], [23] to search for optimal solutions to model uncertainty problems.
It is explained in [15], [19], [21] that a saddle-point strategy for the games leads to robust estimators, and the resulting robust smoothing, filtering and prediction solutions are summarised below. While the solution structures remain unchanged, designers need to adjust the scalar γ within the underlying Riccati equations. This chapter has two main parts: Section 9.2 describes robust continuous-time solutions and the discrete-time counterparts are presented in Section 9.3. The previously discussed techniques each rely on a trick. The optimum filters and smoothers arise by completing the square. In maximum-likelihood estimation, a function is differentiated with respect to an unknown parameter and then set to zero. The trick behind the described robust estimation techniques is the Bounded Real Lemma, which opens the discussions.

Continuous-Time Bounded Real Lemma
First, consider the unforced system $\dot{x}(t) = A(t)x(t)$ (1) over a time interval t ∈ [0, T], where A(t) ∈ ℝ^{n×n}. For notational convenience, define the stacked vector x = {x(t), t ∈ [0, T]}. From Lyapunov stability theory [36], the system (1) is asymptotically stable if there exists a function V(x(t)) > 0 such that $\dot{V}(x(t)) < 0$. A candidate Lyapunov function is $V(x(t)) = x^T(t)P(t)x(t)$, where P(t) = P^T(t) ∈ ℝ^{n×n} is positive definite. To ensure x ∈ 𝓛₂ it is required to establish that

$$\dot{V}(x(t)) = \dot{x}^T(t)P(t)x(t) + x^T(t)\dot{P}(t)x(t) + x^T(t)P(t)\dot{x}(t) < 0.$$
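As a numerical illustration (a minimal sketch with an illustrative stable A, not a system from the text; numpy assumed), a positive definite P satisfying the Lyapunov equation AᵀP + PA = -Q can be computed by vectorisation, confirming that V(x) = xᵀPx is a valid Lyapunov function:

```python
import numpy as np

def lyapunov_solve(A, Q):
    """Solve A.T P + P A = -Q via vectorisation:
    vec(A.T P + P A) = (I (x) A.T + A.T (x) I) vec(P), column-major vec."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(I, A.T) + np.kron(A.T, I)
    vecP = np.linalg.solve(M, -Q.flatten(order='F'))
    return vecP.reshape((n, n), order='F')

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # stable: eigenvalues -1, -2
Q = np.eye(2)
P = lyapunov_solve(A, Q)
assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)   # P is positive definite
assert np.allclose(A.T @ P + P @ A, -Q)                # Lyapunov equation holds
```

The Kronecker identity turns the matrix equation into a single linear solve; for a Hurwitz A and Q > 0 the resulting P is guaranteed positive definite.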
"Uncertainty is one of the defining features of science. Absolute proof only exists in mathematics. In the real world, it is impossible to prove that theories are right in every circumstance; we can only prove that they are wrong. This provisionality can cause people to lose faith in the conclusions of science, but it shouldn't. The recent history of science is not one of well-established theories being proven wrong. Rather, it is of theories being gradually refined."

Now consider the output of a linear time-varying system, y = 𝒢w, having the state-space representation
$$\dot{x}(t) = A(t)x(t) + B(t)w(t),$$
$$y(t) = C(t)x(t).$$
The Bounded Real Lemma [13], [15], [21] states that w ∈ 𝓛₂ implies y ∈ 𝓛₂ if
$$\dot{V}(x(t)) + y^T(t)y(t) - \gamma^2 w^T(t)w(t) < 0 \qquad (5)$$
for a γ ∈ ℝ. Integrating (5) from t = 0 to t = T gives
$$\int_0^T \dot{V}(x(t))\,dt + \int_0^T y^T(t)y(t)\,dt - \gamma^2 \int_0^T w^T(t)w(t)\,dt < 0, \qquad (6)$$
and noting that $\int_0^T \dot{V}(x(t))\,dt = x^T(T)P(T)x(T) - x^T(0)P(0)x(0)$, another objective is
$$x^T(T)P(T)x(T) - x^T(0)P(0)x(0) + \int_0^T y^T(t)y(t)\,dt \le \gamma^2 \int_0^T w^T(t)w(t)\,dt.$$
Under the assumptions x(0) = 0 and P(T) = 0, the above inequality simplifies to
$$\int_0^T y^T(t)y(t)\,dt \le \gamma^2 \int_0^T w^T(t)w(t)\,dt. \qquad (8)$$
The ∞-norm of 𝒢 is defined as
$$\|\mathcal{G}\|_\infty = \sup_{w \in \mathcal{L}_2,\, w \ne 0} \frac{\|y\|_2}{\|w\|_2}.$$
The Lebesgue ∞-space is the set of systems having a finite ∞-norm and is denoted by 𝓛∞. That is, 𝒢 ∈ 𝓛∞ if there exists a γ ∈ ℝ such that $\|\mathcal{G}\|_\infty \le \gamma$, namely, the supremum (or maximum) ratio of the output and input 2-norms is finite. The conditions under which 𝒢 ∈ 𝓛∞ are specified below. The accompanying sufficiency proof combines the approaches of [15], [31]. A further five proofs of this important result appear in [21].
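The supremum definition of the ∞-norm can be approximated numerically. The following sketch (illustrative only; numpy assumed, with a hypothetical first-order system) sweeps frequency to estimate $\|\mathcal{G}\|_\infty = \sup_\omega \sigma_{\max}(C(j\omega I - A)^{-1}B)$:

```python
import numpy as np

def hinf_norm_sweep(A, B, C, wmax=100.0, npts=2001):
    """Approximate ||G||_inf by the largest singular value of the frequency
    response C (jwI - A)^-1 B over a grid of frequencies w."""
    n = A.shape[0]
    peak = 0.0
    for w in np.linspace(0.0, wmax, npts):
        G = C @ np.linalg.solve(1j * w * np.eye(n) - A, B)
        peak = max(peak, np.linalg.svd(G, compute_uv=False)[0])
    return peak

# G(s) = 1/(s + 1): the supremum ratio of output to input 2-norms is 1 (at w = 0)
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
print(round(hinf_norm_sweep(A, B, C), 3))   # → 1.0
```

For G(s) = 1/(s + 1) the peak gain occurs at ω = 0 and equals 1, so this system lies in 𝓛∞ for any γ ≥ 1.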
In general, where Q(t) = E{w(t)wᵀ(t)} ≠ I, the scaled matrix $B(t)Q^{1/2}(t)$ may be used in place of B(t) above. When the plant 𝒢 has a direct feedthrough matrix, that is, y(t) = C(t)x(t) + D(t)w(t), the Riccati equation within the lemma is modified accordingly; a proof is requested in the problems.
Criterion (8) indicates that the ratio of the system's output and input energies is bounded above by γ² for any w ∈ 𝓛₂, including worst-case inputs. Consequently, solutions satisfying (8) are often called worst-case designs.

Problem Definition
Now that the Bounded Real Lemma has been defined, the H∞ filter can be set out. The general filtering problem is depicted in Fig. 1. It is assumed that the systems 𝒢₁ and 𝒢₂ have the state-space realisations (14) - (17).
Figure 1. The general filtering problem. The objective is to estimate the output of 𝒢₁ from noisy measurements of 𝒢₂.
It is desired to find a causal solution that produces estimates ŷ₁(t|t) of y₁(t) at time t so that the output estimation error (18) is in 𝓛₂. The error signal (18) is generated by a system denoted by e = 𝒢_{ei} i, where $i = [w^T \; v^T]^T$. The objective is to achieve
$$\int_0^T e^T(t)e(t)\,dt - \gamma^2 \int_0^T i^T(t)i(t)\,dt < 0$$
for some γ ∈ ℝ. For convenience, it is assumed here that w(t) ∈ ℝᵐ.

H ∞ Solution
A parameterisation of all solutions for the H∞ filter is developed in [21]. A minimum-entropy filter arises when the contractive operator within [21] is zero and is given by (19) - (20), where (21) is the filter gain and P(t) = Pᵀ(t) > 0 is the solution of the Riccati differential equation (22).

"Uncertainty and expectation are the joys of life. Security is an insipid thing, and the overtaking and possessing of a wish discovers the folly of the chase."

Smoothing, Filtering and Prediction: Estimating the Past, Present and Future

It can be seen that the H∞ filter has a structure akin to the Kalman filter. A point of difference is that the solution of the above Riccati differential equation depends on C₁(t), the linear combination of states being estimated.
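To make the γ-dependence concrete, here is a minimal scalar sketch (not the chapter's code) that integrates a Riccati differential equation of the standard output-estimation form Ṗ = AP + PAᵀ + BQBᵀ - P(C₂ᵀR⁻¹C₂ - γ⁻²C₁ᵀC₁)P, an assumed form that reverts to the Kalman filter Riccati equation as γ → ∞; the parameter values are those of Example 1:

```python
def hinf_riccati_steady(a, b, c2, c1, q, r, gamma2, dt=1e-4, T=20.0):
    """Euler-integrate the scalar H-infinity Riccati ODE
       Pdot = 2*a*P + b*b*q - P*P*(c2*c2/r - c1*c1/gamma2)
    forward from P(0) = 0 until (approximate) steady state."""
    P = 0.0
    for _ in range(int(T / dt)):
        P += dt * (2 * a * P + b * b * q - P * P * (c2 * c2 / r - c1 * c1 / gamma2))
    return P

# Illustrative parameters (those of Example 1): a = -1, b = c2 = c1 = 1, q = 10, r = 0.1
P_robust = hinf_riccati_steady(-1.0, 1.0, 1.0, 1.0, 10.0, 0.1, gamma2=0.2)
P_kalman = hinf_riccati_steady(-1.0, 1.0, 1.0, 1.0, 10.0, 0.1, gamma2=1e12)
print(round(P_robust, 3))        # → 1.228, the positive root of -2P + 10 - 5P^2 = 0
print(P_robust > P_kalman)       # → True
```

Decreasing γ² inflates the steady-state P; that is, the robust design assumes a larger error covariance than the Kalman design.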


Lemma 2 [15], [21]: In respect of the H∞ filtering problem, the filter (19) - (21) achieves the performance
$$\int_0^T e^T(t)e(t)\,dt \le \gamma^2 \int_0^T i^T(t)i(t)\,dt.$$

"Although economists have studied the sensitivity of import and export volumes to changes in the exchange rate, there is still much uncertainty about just how much the dollar must change to bring about any given reduction in our trade deficit." Martin Stuart Feldstein

Proof: Following the approach in [15], [21], by applying Lemma 1 to the adjoint of (23), it is required that there exists a positive definite symmetric solution to (22) on [0, T] for some γ ∈ ℝ, in which τ = T - t is a time-to-go variable. Taking adjoints to address the problem (23) leads to (22), for which the existence of a positive definite solution implies that, under the assumption x(0) = 0, the stated performance objective is achieved.

Trading-Off H ∞ Performance
In a robust filter design it is desired to meet an H∞ performance objective with the minimum possible γ. A minimum γ can be found by conducting a search and checking for the existence of positive definite solutions to the Riccati differential equation (22). This search is tractable because the existence of solutions to (22) varies monotonically with γ. In some applications it may be possible to estimate a priori values for γ. Recall for output estimation problems that the error is generated by (23). From the arguments of Chapters 1 - 2 and [28], for single-input single-output plants with $\|\mathcal{G}_{ei}\|_\infty^2 < \gamma^2$, it follows that an a priori design estimate is γ = σ_v at high signal-to-noise ratios.
When the problem is stationary (or time-invariant), the filter gain is precalculated as (21), where P is the solution of the algebraic Riccati equation (24). Suppose that 𝒢₂ = 𝒢₁ is a time-invariant single-input single-output system and let R_{ei}(s) denote the transfer function of 𝒢_{ei}. Then Parseval's theorem states that the average total energy of the error equals the integral of the error power spectral density. In view of (25) and (26), it follows that the H∞ filter minimises the maximum magnitude of the error spectrum; consequently, it is also called a 'minimax filter'. However, robust designs, which accommodate uncertain inputs, tend to be conservative. Therefore, it is prudent to investigate using a larger γ to achieve a trade-off between H∞ and minimum-mean-square-error performance criteria. The magnitude of the error spectrum exhibited by the optimal filter (designed with γ² = 10⁸) is indicated by the solid line of Fig. 2. From a search, a minimum of γ² = 0.099 was found such that the algebraic Riccati equation (24) has a positive definite solution, which concurs with the a priori estimate of γ² ≈ σ_v². The magnitude of the error spectrum exhibited by the H∞ filter is indicated by the dotted line of Fig. 2, which demonstrates that the filter achieves the designed H∞ bound. Although the H∞ filter reduces the peak of the error spectrum by 10 dB, it can be seen that the area under the curve is larger; that is, the mean-square error increases. Consequently, some intermediate value of γ may need to be considered to trade off peak error (spectrum) and average error performance.
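The γ-search just described can be sketched for the scalar Example 1 parameters (A = -1, B = C₂ = C₁ = 1, σ_w² = 10, σ_v² = 0.1). This is an illustrative bisection (numpy assumed); the scalar ARE form 2AP + B²σ_w² - (C²/σ_v² - γ⁻²)P² = 0 is an assumption consistent with the continuous-time filter Riccati equation:

```python
import numpy as np

def are_has_positive_solution(gamma2, a=-1.0, b=1.0, c=1.0, q=10.0, r=0.1):
    """Check whether the scalar ARE  2*a*P + b^2*q - (c^2/r - 1/gamma2)*P^2 = 0
    admits a real positive root P (the scalar stand-in for positive definiteness)."""
    coef = c * c / r - 1.0 / gamma2
    if abs(coef) < 1e-12:                    # the equation degenerates to linear
        return -b * b * q / (2 * a) > 0
    roots = np.roots([-coef, 2 * a, b * b * q])
    return bool(np.any(np.isreal(roots) & (np.real(roots) > 0)))

# Bisection for the minimum gamma^2 admitting a solution (Example 1 parameters)
lo, hi = 1e-3, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if are_has_positive_solution(mid) else (mid, hi)
print(round(hi, 3))   # → 0.099, agreeing with the a priori estimate gamma^2 ~ sigma_v^2
```

The search converges to γ² ≈ 0.099, matching the value reported for Example 1.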
"If the uncertainty is larger than the effect, the effect itself becomes moot."Patrick Frank

Accommodating Uncertainty
The above filters are designed for situations in which the inputs v(t) and w(t) are uncertain. Next, problems in which model uncertainty is present are discussed. The described approaches involve converting the uncertainty into a fictitious noise source and solving an auxiliary H∞ filtering problem.
Figure 4. Input scaling in lieu of a problem that possesses an uncertainty.

Additive Uncertainty
Consider a time-invariant output estimation problem in which the model comprises a known nominal system 𝒢₂ together with an unknown additive uncertainty Δ, as depicted in Fig. 3. The signal p(t) is a fictitious input that accounts for discrepancies due to the uncertainty. It is argued below that a solution to the H∞ filtering problem can be found by solving an auxiliary problem in which the input is scaled by ε ∈ ℝ, as shown in Fig. 4. In lieu of the filtering problem possessing the uncertainty Δ, an auxiliary problem is defined by (27) - (29), where p(t) is an additional exogenous input satisfying a norm bound. Consider the scaled H∞ filtering problem (31) - (33).
Lemma 3 [26]: Suppose for a γ ≠ 0 that the scaled H∞ problem (31) - (33) is solvable. Then the scaled solution also solves the auxiliary problem (27) - (29).

Multiplicative Uncertainty
Next, consider a filtering problem in which the model is G(I + Δ), as depicted in Fig. 5. It is again assumed that G and Δ are known and unknown transfer function matrices, respectively. This problem may similarly be solved using Lemma 3. Thus, a filter that accommodates additive or multiplicative uncertainty simply requires scaling of an input. The above scaling is only sufficient for an H∞ performance criterion to be met. The design may well be too conservative, and it is worthwhile to explore the merits of using values for δ less than the uncertainty's assumed norm bound.

Parametric Uncertainty
Finally, consider a time-invariant output estimation problem in which the state matrix is uncertain, namely A + Δ_A, as described by (36) and (37), where p(t) = Δ_A x(t) is a fictitious exogenous input. A solution to this problem would achieve (39) for a γ ≠ 0. From the approach of [14], [18], [19], consider the scaled filtering problem corresponding to (36), (37), whose solvability implies (39). Thus, state-matrix parameter uncertainty can be accommodated by including a scaled input in the solution of an auxiliary H∞ filtering problem. Similar solutions to problems in which other state-space parameters are uncertain appear in [14], [18], [19].

"Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful." George Edward Pelham Box

Background
There are three kinds of H∞ smoothers: fixed-point, fixed-lag and fixed-interval (see the tutorial [13]). The next development is concerned with continuous-time H∞ fixed-interval smoothing. The smoother in [10] arises as a combination of forward states from an H∞ filter and adjoint states that evolve according to a Hamiltonian matrix. A different fixed-interval smoothing problem to [10] is solved in [16] by finding saddle conditions within differential games. A summary of some filtering and smoothing results appears in [13].
Robust prediction, filtering and smoothing problems are addressed in [22]; the H∞ predictor and filter require the solution of a Riccati differential equation that evolves forward in time, whereas the smoother additionally requires another to be solved in reverse time. Another approach for combining forward and adjoint estimates is described in [32], where the Fraser-Potter formula is used to construct a smoothed estimate.
Figure 5. Representation of multiplicative model uncertainty.
Figure 6. Robust smoother error structure.
"The purpose of models is not to fit the data but to sharpen the questions." Samuel Karlin

Problem Definition
Once again, it is assumed that the data is generated by (14) - (17). For convenience, attention is confined to output estimation, namely 𝒢₂ = 𝒢₁ within Fig. 1. Input and state estimation problems can be handled similarly using the solution structures described in Chapter 6. It is desired to find a fixed-interval smoother solution that produces estimates ŷ₁(t|T) of y₁(t) so that the output estimation error is in 𝓛₂. As before, the map from the inputs i to the error is denoted by 𝒢_{ei}, and the objective is to achieve $\|\mathcal{G}_{ei}\|_\infty \le \gamma$ for some γ ∈ ℝ.

H ∞ Solution
The following H∞ fixed-interval smoother exploits the structure of the minimum-variance smoother but uses the gain (21) calculated from the solution of the Riccati differential equation (22), akin to the H∞ filter. An approximate Wiener-Hopf factor inverse, $\hat{\Delta}^{-1}$, is given by (43). An inspection reveals that the states within (43) are the same as those calculated by the H∞ filter (19). The adjoint of $\hat{\Delta}^{-1}$, denoted by $\hat{\Delta}^{-H}$, has the realisation (44). Output estimates are obtained as (45). However, an additional condition requires checking in order to guarantee that the smoother actually achieves the above performance objective, namely the existence of a solution P₂(t) = P₂ᵀ(t) > 0 of (46).

"Certainty is the mother of quiet and repose, and uncertainty the cause of variance and contentions." Edward Coke

Performance
It will be shown subsequently that the robust fixed-interval smoother (43) -(45) has the error structure shown in Fig. 6, which is examined below.

Lemma 4 [33]: Consider the arrangement of two linear systems f = 𝒢_{fi} i and u = 𝒢ᴴ_{uj} j shown in Fig. 6. Let 𝒢_{ei} denote the map from i to e, and assume that w and v ∈ 𝓛₂. Then e ∈ 𝓛₂ if and only if: (i) 𝒢_{fi} ∈ 𝓛∞ and (ii) 𝒢ᴴ_{uj} ∈ 𝓛∞. The necessity of (i) follows from the assumption i ∈ 𝓛₂ together with the properties of 𝒢_{fi} (see [21]); similarly, (ii) follows from j ∈ 𝓛₂ together with the properties of 𝒢ᴴ_{uj}.
It is easily shown that the error system 𝒢_{ei} for the model (14) - (15), the data (17) and the smoother (43) - (45) is given by (47), in which the state error involves x̂(t|t). The conditions for the smoother attaining the desired performance objective are described below.
Lemma 5 [33]: In respect of the smoother error system (47), if there exist symmetric positive definite solutions to (22) and (46) for γ, γ₂ > 0, then the smoother (43) - (45) achieves 𝒢_{ei} ∈ 𝓛∞. Proof: The 𝒢_{ei} is equivalent to the arrangement of two systems 𝒢_{fi} and 𝒢ᴴ_{uj} shown in Fig. 6. The 𝒢_{fi} is defined by (23), in which C₂(t) = C(t). From Lemma 2, the existence of a positive definite solution to (22) implies 𝒢_{fi} ∈ 𝓛∞. The 𝒢ᴴ_{uj} is given by the system (48).

"Doubt is uncomfortable, certainty is ridiculous." François-Marie Arouet de Voltaire

For the above system to be in 𝓛∞, from Lemma 4, it is required that there exists a solution to (46); the existence of a positive definite solution implies 𝒢ᴴ_{uj} ∈ 𝓛∞. The H∞ solution can be derived as a solution to a two-point boundary value problem, which involves a trade-off between causal and noncausal processes (see [10], [15], [21]). This suggests that the H∞ performance of the above smoother would not improve on that of the filter. Indeed, from Fig. 6, e = f + u, and the triangle rule yields $\|e\|_2 \le \|f\|_2 + \|u\|_2$, where f is the H∞ filter error. That is, the error upper bound for the H∞ fixed-interval smoother (43) - (45) is greater than that for the H∞ filter (19) - (20). It is observed below that, compared to the minimum-variance case, the H∞ solution exhibits an increased mean-square error.
Thus, the cost of designing for worst-case input conditions is a deterioration in the mean performance. Note that the best possible average performance is attained in problems where there are no uncertainties present, γ⁻² → 0 and the Riccati equation solution has converged, that is, Ṗ(t) = 0.
"We know accurately only when we know little, with knowledge doubt increases."Johann Wolfgang von Goethe

Performance Comparison
It is of interest to compare the performance of (43) - (45) with the H∞ smoother described in [10], [13], [16], namely (53) - (54), together with (22). Substituting (54) and its differential into the first row of (53) together with (21) yields (55), which reverts to the Kalman filter at γ⁻² = 0. Substituting x̂(t) into the second row of (53) yields (56), which reverts to the maximum-likelihood smoother at γ⁻² = 0. Thus, the Hamiltonian form (53) - (54) can be realised by calculating the filtered estimate (55) and then obtaining the smoothed estimate from (56).
Example 2. Let the model parameters be time-invariant for an output estimation problem. Simulations were conducted for the case of T = 100 seconds, dt = 1 millisecond, using 500 realisations of zero-mean Gaussian process noise and measurement noise. The resulting mean-square-error (MSE) versus signal-to-noise ratio (SNR) curves are shown in Fig. 7. The H∞ solutions were calculated using a priori designs for γ² together with (22). It can be seen from trace (vi) of Fig. 7 that the H∞ smoothers exhibit poor performance when the exogenous inputs are in fact Gaussian, which illustrates Lemma 6. The figure demonstrates that the minimum-variance smoother out-performs the maximum-likelihood smoother. However, at high SNR, the difference in smoother performance is inconsequential. Intermediate values for γ² may be selected to realise a smoother design that achieves a trade-off between minimum-variance performance (trace (iii)) and H∞ performance (trace (v)).
Example 3 [35]. Consider a non-Gaussian process noise signal constructed from sin(t), normalised by the sample variance of sin(t). The results of a simulation study appear in Fig. 8. It can be seen that the H∞ solutions, which accommodate input uncertainty, perform better than those relying on Gaussian noise assumptions. In this example, the developed H∞ smoother (43) - (45) exhibits the best mean-square-error performance.

Discrete-Time Bounded Real Lemma
The development of discrete-time H∞ filters and smoothers proceeds analogously to the continuous-time case. From Lyapunov stability theory [36], the unforced system x_{k+1} = A_k x_k is asymptotically stable if there exists a positive definite function V(x_k) = x_kᵀP_k x_k such that V(x_{k+1}) - V(x_k) < 0. Now let y = 𝒢w denote the output of the system
$$x_{k+1} = A_k x_k + B_k w_k,$$
$$y_k = C_k x_k.$$

"Education is the path from cocky ignorance to miserable uncertainty." Samuel Langhorne Clemens, a.k.a. Mark Twain

The Bounded Real Lemma [18] states that w ∈ ℓ₂ implies y ∈ ℓ₂ if
$$V(x_{k+1}) - V(x_k) + y_k^T y_k - \gamma^2 w_k^T w_k < 0 \qquad (61)$$
for a γ ∈ ℝ. Summing (61) from k = 0 to k = N - 1 yields the objective
$$x_N^T P_N x_N - x_0^T P_0 x_0 + \sum_{k=0}^{N-1} y_k^T y_k \le \gamma^2 \sum_{k=0}^{N-1} w_k^T w_k; \qquad (62)$$
that is, assuming that x₀ = 0,
$$\sum_{k=0}^{N-1} y_k^T y_k \le \gamma^2 \sum_{k=0}^{N-1} w_k^T w_k. \qquad (63)$$
Conditions for achieving the above objectives are established below.

Lemma 7:
The discrete-time Bounded Real Lemma [18]: In respect of the above system 𝒢, suppose that the Riccati difference equation (64) has a positive definite solution with $\gamma^2 I - B_k^T P_{k+1} B_k > 0$. Then ‖𝒢‖∞ ≤ γ. Proof: From the approach of Xie et al. [18], define p_k as in (65). It is easily verified that completing the square gives
$$x_{k+1}^T P_{k+1} x_{k+1} - x_k^T P_k x_k + y_k^T y_k - \gamma^2 w_k^T w_k = -(w_k - p_k)^T(\gamma^2 I - B_k^T P_{k+1} B_k)(w_k - p_k) \le 0,$$
which implies (61) - (62), and (63) under the assumption x₀ = 0.
The above lemma relies on the simplifying assumption that the noise covariance is the identity; otherwise the scaled matrix $B_k Q_k^{1/2}$ may be used in place of B_k above. In the case where 𝒢 possesses a direct feedthrough matrix, namely y_k = C_k x_k + D_k w_k, the Riccati difference equation within the above lemma becomes (66); a verification is requested in the problems.

"And as he thus spake for himself, Festus said with a loud voice, Paul, thou art beside thyself; much learning doth make thee mad." Acts 26:24

It will be shown that predictors, filters and smoothers satisfy an H∞ performance objective if there exist solutions to Riccati difference equations arising from the application of Lemma 7 to the corresponding error systems. A summary of the discrete-time results from [5], [11], [13], together with the further details described in [21], [30], is presented below.
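A scalar sketch of how Lemma 7 is used in practice (illustrative code, not from the text): run a Riccati difference recursion backward from P_N = 0 and declare γ feasible only if γ² - BᵀP_{k+1}B stays positive. For x_{k+1} = 0.5x_k + w_k, y_k = x_k, the true H∞ norm is G(1) = 1/(1 - 0.5) = 2, and the check brackets it:

```python
def discrete_brl_feasible(a, b, c, gamma, N=500):
    """Backward recursion of a scalar discrete Bounded Real Lemma Riccati equation,
       P_k = a^2*P + a^2*b^2*P^2/(gamma^2 - b^2*P) + c^2,  starting from P_N = 0.
    Feasible if gamma^2 - b^2*P_k remains positive throughout."""
    P = 0.0
    for _ in range(N):
        if gamma * gamma - b * b * P <= 0:
            return False
        P = a * a * P + (a * b * P) ** 2 / (gamma * gamma - b * b * P) + c * c
    return True

print(discrete_brl_feasible(0.5, 1.0, 1.0, gamma=2.1))   # → True
print(discrete_brl_feasible(0.5, 1.0, 1.0, gamma=1.9))   # → False
```

For γ above the H∞ norm the recursion converges; just below it, P_k grows until the existence condition γ² - b²P_k > 0 is violated.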

Problem Definition
The system 𝒢₂ is realised by (68) - (69), together with a fictitious reference system 𝒢₁ realised by (68) and (70), where A_k, B_k, C_{2,k} and C_{1,k} are of appropriate dimensions. The problem of interest is to find a solution that produces one-step-ahead predictions, ŷ_{1,k/k-1}, given measurements at time k - 1. The prediction error is defined by (72). The error sequence (72) is generated by e = 𝒢_{ei} i, and the objective is to achieve an H∞ performance bound.

H ∞ Solution
The H∞ predictor has the same structure as the optimum minimum-variance (or Kalman) predictor. It is given by (73) - (74), where (75) defines the one-step-ahead predictor gain, and M_k = M_kᵀ > 0 satisfies the Riccati difference equation (76). The above predictor is also known as an a priori filter within [11], [13], [30].

"Why waste time learning when ignorance is instantaneous?" William Boyd Watterson II
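The γ-dependence of the prediction Riccati difference equation can be sketched in the scalar case. The information-style form below is an assumption (one common way such recursions are written, not necessarily the chapter's equation (76)); it reverts to the Kalman predictor recursion as γ → ∞:

```python
def hinf_predictor_covariance(a, b, c2, c1, q, r, gamma2, N=2000):
    """Iterate a scalar information-form H-infinity prediction Riccati recursion,
       M_{k+1} = a^2 / (1/M_k + c2^2/r - c1^2/gamma2) + b^2*q,
    starting from M_0 = b^2*q."""
    M = b * b * q
    for _ in range(N):
        inv = 1.0 / M + c2 * c2 / r - c1 * c1 / gamma2
        assert inv > 0.0, "gamma too small: existence condition violated"
        M = a * a / inv + b * b * q
    return M

M_kalman = hinf_predictor_covariance(0.9, 1.0, 1.0, 1.0, 1.0, 1.0, gamma2=1e12)
M_robust = hinf_predictor_covariance(0.9, 1.0, 1.0, 1.0, 1.0, 1.0, gamma2=5.0)
print(round(M_kalman, 3))      # → 1.484
print(M_robust > M_kalman)     # → True: the robust design employs a larger covariance
```

The hypothetical parameters (a = 0.9, unit noise variances) are illustrative only; the point is that a finite γ inflates the design error covariance, as noted in the text.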

Performance
Following the approach in the continuous-time case, subtracting (73) - (74) from (68), (70) shows that the predictor error system is driven by the state prediction error x̃_{k/k-1}. It is shown below that the prediction error satisfies the desired performance objective.

Lemma 8 [11], [13], [30]: In respect of the H∞ prediction problem, the predictor (73) - (75) achieves ‖𝒢_{ei}‖∞ ≤ γ. Proof: By applying the Bounded Real Lemma to 𝒢ᴴ_{ei} and taking the adjoint to address 𝒢_{ei}, it is required that there exists a positive definite symmetric solution to (78), in which use was made of the Matrix Inversion Lemma. Defining P_k in terms of M_k and applying the Matrix Inversion Lemma gives (79). The change of variable within (76) leads to (80). Applying the Matrix Inversion Lemma within (80) gives (81), and expanding (81) yields (77). The existence of M_k > 0 for the above Riccati difference equation implies P_k > 0 for (79). Thus, it follows from Lemma 7 that the stated performance objective is achieved.

"Give me a fruitful error any time, full of seeds bursting with its own corrections. You can keep your sterile truth for yourself." Vilfredo Federico Damaso Pareto

Problem Definition
Consider again the configuration of Fig. 1. Assume that the systems 𝒢₂ and 𝒢₁ have the realisations (68) - (69) and (68), (70), respectively. It is desired to find a solution that operates on the measurements (71) and produces the filtered estimates ŷ_{1,k/k}. The filtered error sequence (82) is generated by e = 𝒢_{ei} i. The H∞ performance objective is to achieve ‖𝒢_{ei}‖∞ ≤ γ.

"Never interrupt your enemy when he is making a mistake." Napoléon Bonaparte

H ∞ Solution
As explained in Chapter 4, filtered states can be evolved from (83), where L_k ∈ ℝ^{n×p} is a filter gain. The above recursion is called an a posteriori filter in [11], [13], [30]. Output estimates are obtained from (84). The filter gain is calculated as (85), where M_k = M_kᵀ > 0 satisfies the Riccati difference equation (86).

Performance
Subtracting (83) from (68) gives the state error recursion; the filtered error system may then be written as (87) with x̃₀ = 0. It is shown below that the filtered error satisfies the desired performance objective.

"I believe the most solemn duty of the American president is to protect the American people. If America shows uncertainty and weakness in this decade, the world will drift toward tragedy. This will not happen on my watch." George Walker Bush

Lemma 9 [11], [13], [30]: In respect of the H∞ problem (68) - (70), (82), the solution (83) - (84) achieves the performance ‖𝒢_{ei}‖∞ ≤ γ. Proof: By applying the Bounded Real Lemma to 𝒢ᴴ_{ei} and taking the adjoint to address 𝒢_{ei}, it is required that there exists a positive definite symmetric solution to (88), in which use was made of the Matrix Inversion Lemma. Defining (92) using (85) and applying the Matrix Inversion Lemma leads to (91). Substituting (92) into (91) yields (93), which is the same as (86). The existence of M_k > 0 for the above Riccati difference equation implies the existence of a P_k > 0 for (88). Thus, it follows from Lemma 7 that the stated performance objective is achieved.

"Hell, there are no rules here - we're trying to accomplish something." Thomas Alva Edison

Solution to the General Filtering Problem
Limebeer, Green and Walker express Riccati difference equations such as (86) in a compact form using J-factorisation [5], [21]. The solutions for the general filtering problem follow immediately from their results. Consider (94). From the approach of [5], [21], the Riccati difference equation corresponding to the H∞ problem (94) is (95). Suppose in a general filtering problem that 𝒢₂ is realised by (68) with output matrix C_{2,k}. The filter solution is then given by the corresponding recursions.

H ∞ Solution
The following fixed-interval smoother for output estimation [28] employs the gain (100) of the H∞ predictor, calculated from (76) and (77). The gain (100) is used in the minimum-variance smoother structure described in Chapter 7, viz. (101) - (103). It is argued below that this smoother meets the desired H∞ performance objective.

H ∞ Performance
It is easily shown that the smoother error system is given by (104) and (105).

"I have had my results for a long time: but I do not yet know how I am to arrive at them." Karl Friedrich Gauss

Lemma 10: In respect of the smoother error system (104), if there exist symmetric positive definite solutions to (77) for γ > 0, then the smoother (101) - (103) achieves the H∞ performance objective. Outline of Proof: From Lemma 8, x̃ ∈ ℓ₂, since it evolves within the predictor error system. Therefore, the adjoint state is in ℓ₂, since it evolves within the adjoint predictor error system. Then e ∈ ℓ₂, since it is a linear combination of x̃, the adjoint state and i ∈ ℓ₂.

Performance Comparison
Example 4 [28]. A voiced speech utterance "a e i o u" was sampled at 8 kHz for the purpose of comparing smoother performance. Simulations were conducted with the zero-mean, unity-variance speech sample interpolated to a 16 kHz sample rate, to which 200 realisations of Gaussian measurement noise were added, and the signal-to-noise ratio was varied from -5 to 5 dB. The speech sample is modelled as a first-order autoregressive process with parameter A ∈ ℝ, 0 < A < 1. Estimates for σ_w² and A were calculated at 20 dB SNR using an EM algorithm; see Chapter 8. Simulations were conducted in which a minimum-variance filter and a fixed-interval smoother were employed to recover the speech message from noisy measurements. The results are provided in Fig. 9. As expected, the smoother out-performs the filter. Searches were conducted for minimum values of γ such that solutions to the design Riccati difference equations were positive definite for each noise realisation. The performance of the resulting H∞ filter and smoother are indicated by the dashed line and solid line of the figure. It can be seen for this example that the H∞ filter out-performs the Kalman filter. The figure also indicates that the robust smoother provides the best performance and exhibits about a 4 dB reduction in mean-square error compared to the Kalman filter at 0 dB SNR. This performance benefit needs to be reconciled against the extra calculation cost of combining robust forward and backward state predictors within (101) - (103).
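The filter-versus-smoother gap in Example 4 can be reproduced in spirit with a toy experiment (illustrative code only: a hypothetical AR(1) model with A = 0.95 and unit noise variances stands in for the speech data, and a classical Kalman filter with an RTS fixed-interval smoother stands in for the minimum-variance solutions; numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical AR(1) signal model: x_{k+1} = A x_k + w_k, z_k = x_k + v_k
A, q, r, N = 0.95, 1.0, 1.0, 4000
x = np.zeros(N)
for k in range(1, N):
    x[k] = A * x[k - 1] + rng.normal(scale=np.sqrt(q))
z = x + rng.normal(scale=np.sqrt(r), size=N)

# Kalman filter (forward pass), storing predictions for the backward pass
xf = np.zeros(N); Pf = np.zeros(N)
xpred = np.zeros(N); Ppred = np.zeros(N)
xp, Pp = 0.0, q / (1 - A * A)               # stationary prior
for k in range(N):
    xpred[k], Ppred[k] = xp, Pp
    K = Pp / (Pp + r)                       # filter gain
    xf[k] = xp + K * (z[k] - xp)
    Pf[k] = (1 - K) * Pp
    xp, Pp = A * xf[k], A * A * Pf[k] + q   # one-step prediction

# RTS fixed-interval smoother (backward pass)
xs = xf.copy()
for k in range(N - 2, -1, -1):
    G = Pf[k] * A / Ppred[k + 1]
    xs[k] = xf[k] + G * (xs[k + 1] - xpred[k + 1])

mse = lambda e: float(np.mean(e ** 2))
print(mse(z - x) > mse(xf - x) > mse(xs - x))   # → True
```

As in Fig. 9, the fixed-interval smoother improves on the filter, and both improve on the raw measurements; here the backward pass roughly halves the filter's mean-square error.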

High SNR and Low SNR Asymptotes
An understanding of why robust solutions are beneficial in the presence of uncertainties can be gleaned by examining single-input single-output filtering and equalisation. Consider a time-invariant plant having the canonical state-space form; since the plant is time-invariant, the transfer function exists and is denoted by G(z). Some notation is defined prior to stating some observations for output estimation problems. Suppose that an H∞ filter has been constructed for the above plant, and let the H∞ algebraic Riccati equation solution, predictor gain, filter gain, predictor, filter and smoother transfer function matrices be denoted accordingly. The transfer function matrix of the map from the inputs to the filter output estimation error is (106), and the H∞ smoother transfer function matrix can be written in terms of it.
Proposition 1 [28] collects observations (107) - (111) for the above output estimation problem. An interpretation of (107) and (110) is that the maximum magnitudes of the filters and smoothers asymptotically approach a short circuit (or zero impedance) as σ_v² → 0. From (108) and (111), as σ_v² → 0, the maximum magnitudes of the H∞ solutions approach the short-circuit asymptote more closely than the optimal minimum-variance solutions. That is, for low measurement noise, the robust solutions accommodate some uncertainty by giving greater weighting to the data.
For input estimation, the transfer function matrix of the map from the inputs to the input estimation error is (113), and the noncausal H∞ transfer function matrix of the input estimator can be written analogously. Proposition 2 [28] collects the corresponding observations (114) - (118) for the above input estimation problem.

Outline of Proof: (i) and (iv) The high-measurement-noise observations (114) and (117) follow from the limiting behaviour of the Riccati equation solution. (ii) and (v) The observations (115) and (118) follow similarly.

"Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live." Damian Conway

(iii) The observation (116) follows immediately from the application of (114) in (113).
An interpretation of (114) and (117) is that the maximum magnitudes of the equalisers asymptotically approach an open circuit (or infinite impedance) when the measurement noise dominates. Proposition 1 follows intuitively; indeed, the short-circuit asymptote is sometimes referred to as the singular filter. Proposition 2 may appear counter-intuitive and warrants further explanation. When the plant is minimum phase and the measurement noise is negligible, the equaliser inverts the plant. Conversely, when the equalisation problem is dominated by measurement noise, the solution is a low-gain filter; that is, the estimation error is minimised by giving less weighting to the data.

Conclusion
Uncertainties are invariably present within the specification of practical problems. Consequently, robust solutions have arisen to accommodate uncertain inputs and plant models. The H∞ performance objective is to minimise the ratio of the output energy to the input energy of an error system, that is, to achieve
$$\int_0^T e^T(t)e(t)\,dt \le \gamma^2 \int_0^T i^T(t)i(t)\,dt$$
for some γ ∈ ℝ. In the time-invariant case, the objective is equivalent to minimising the maximum magnitude of the error power spectral density.
Predictors, filters and smoothers that satisfy the above performance objective are found by applying the Bounded Real Lemma. The standard solution structures are retained, but larger design error covariances are employed to account for the presence of uncertainty. In continuous-time output estimation, the error covariance is found from the solution of (22). Discrete-time predictors, filters and smoothers for output estimation rely on the solution of (86).

"Your most unhappy customers are your greatest source of learning." William Henry (Bill) Gates III
Problem 1. (i) Consider a system 𝒢 having the state-space representation ẋ(t) = Ax(t) + Bw(t), y(t) = Cx(t). Show that if there exists a matrix P = Pᵀ > 0 satisfying the conditions of the continuous-time Bounded Real Lemma, then ‖𝒢‖∞ ≤ γ. (ii) For a 𝒢 modelled by x_{k+1} = A_k x_k + B_k w_k, y_k = C_k x_k + D_k w_k, show that the existence of a solution to the corresponding Riccati difference equation implies ‖𝒢‖∞ ≤ γ.

 ∞
The Lebesgue ∞-space defined as the set of continuous-time systems having finite ∞-norm.

Figure 2. The magnitude of R_{ei}(s) versus frequency for Example 1: optimal filter (solid line) and H∞ filter (dotted line).
Example 1. Consider a time-invariant output estimation problem where A = -1, B = C₂ = C₁ = 1, σ_w² = 10 and σ_v² = 0.1. The magnitude of the error spectrum exhibited by the optimal filter (designed with γ² = 10⁸) is indicated by the solid line of Fig. 2. From a search, a minimum of γ² = 0.099 was found such that the algebraic Riccati equation (24) has a positive definite solution, which concurs with the a priori estimate of γ² ≈ σ_v².

Figure 3. Representation of additive model uncertainty.
Figure 4. Input scaling in lieu of a problem that possesses an uncertainty.

(33), in which ε² = (1 + δ²)⁻¹.

"A theory has only the alternative of being right or wrong. A model has a third possibility: it may be right but irrelevant." Manfred Eigen
) with P_N = 0, has a positive definite symmetric solution on [0, N]. Then ‖𝒢‖ ≤ γ for any w ∈ ℓ₂.

Let the minimum-variance algebraic Riccati equation solution, predictor gain, filter gain, filter and smoother transfer function matrices be denoted accordingly.

"In computer science, we stand on each other's feet." Brian K. Reid

Proposition 1 [28]: In the above output estimation problem, observations (107) - (111) hold. Observation (109) follows immediately from the application of (107) in (106). (iv) Observation (110) follows from the limiting Riccati equation solution, as does observation (111).

As σ_v² → 0 and the H∞ filter achieves the performance ‖R_{ei}‖∞ < γ, it follows from (109) that an a priori design estimate is γ = σ_v.

"All programmers are optimists." Frederick P. Brooks, Jr

Suppose now that a time-invariant plant has the transfer function G(z) = C(zI - A)⁻¹B + D, where A, B and C are defined above together with D ∈ ℝ. Consider an input estimation (or equalisation) problem in which a causal H∞ solution estimates the input of the plant. As σ_w² → 0, the maximum magnitude of the H∞ solution approaches the open-circuit asymptote more closely than that of the optimum minimum-variance solution. That is, under high measurement noise conditions, robust solutions accommodate some uncertainty by giving less weighting to the data. It follows from (116) that an a priori design estimate is γ = σ_w.

Problem 5.

"A computer once beat me at chess, but it was no match for me at kick boxing." Emo Philips

Now consider the model x_{k+1} = A_k x_k + B_k w_k, y_k = C_k x_k + D_k w_k, and show that the existence of a solution to the corresponding Riccati difference equation implies ‖𝒢‖∞ ≤ γ.

𝒢_{ei} ∈ 𝓛∞: The map 𝒢_{ei} from the inputs i(t) to the estimation error e(t) has a finite ∞-norm; therefore, i ∈ 𝓛₂ implies e ∈ 𝓛₂.
ℓ∞: The Lebesgue ∞-space defined as the set of discrete-time systems having a finite ∞-norm.
𝒢_{ei}: The map from the inputs i_k to the estimation error e_k.

9.3.5 Discrete-Time H∞ Smoothing
9.3.5.1 Problem Definition
Suppose that measurements (72) of a system (68) - (69) are available over an interval k ∈ [1, N]. The problem of interest is to calculate smoothed estimates ŷ_{k/N}.

"If we knew what it is we were doing, it would not be called research, would it?" Albert Einstein

Problem 2. Suppose that M(t) = γ²I - Dᵀ(t)D(t) > 0 and the Riccati differential equation for the case where y(t) = C(t)x(t) + D(t)w(t) has a solution on [0, T]. Show that ‖𝒢‖ ≤ γ for any w ∈ 𝓛₂. (Hint: define V(x(t)) = xᵀ(t)P(t)x(t) and show that $\dot{V}(x(t)) + y^T(t)y(t) - \gamma^2 w^T(t)w(t) < 0$.)
Problem 3. For measurements z(t) = y(t) + v(t) of a system realised by ẋ(t) = A(t)x(t) + B(t)w(t), y(t) = C(t)x(t), show that the map from the inputs i to the estimation error has the stated realisation.
Suppose that a predictor attains an H∞ performance objective, that is, the conditions of Lemma 8 are satisfied. Show that using the predicted states to construct filtered output estimates also attains an H∞ performance bound.