Multiple-Model Adaptive Estimation with a New Weighting Algorithm

State estimation is considered for a complex stochastic dynamic system described by a discrete-time state-space model with large parameter uncertainties, including uncertainty in the covariance matrices of the system and measurement noises. A new weighted multiple-model adaptive estimation (MMAE) scheme is presented, in which the classical weighting algorithm is replaced by a new weighting algorithm that reduces the computational burden and relaxes the convergence conditions. Finally, simulation results verify the effectiveness of the proposed MMAE scheme under each type of parameter uncertainty.


Introduction
The Kalman filter (KF) [1] can be viewed as a sensor fusion or data fusion algorithm. It has many applications in information technology and engineering, such as the guidance, navigation, and control of vehicles, particularly aircraft and spacecraft [2]. It is also a widely applied concept in signal processing and econometrics. Currently, Kalman filters are one of the main topics in the field of robotic motion planning and control.
Practical implementation of the Kalman filter is often difficult due to the uncertainties and nonlinearities in the modeling of dynamic systems. Extensive research has been done to address modeling uncertainty and nonlinearity in state estimation, filtering, and control. Among others, the multiple-model adaptive estimation (MMAE) and multiple-model adaptive control (MMAC) schemes have received much attention. The multiple-model concept follows the logic of "divide and conquer." The idea of using multiple models for adaptive estimation came from Magill [3]. Later on, Lainiotis [4], Athans et al. [5, 6], Anderson and Moore [7], and Li and Bar-Shalom [8] studied MMAE/MMAC for different applications.
There are three main tasks in designing an MMAE system: first, construct a "local" model set to cover the parameter uncertainty or nonlinearity of the system as described in (1); second, design a local KF set according to the local model set, with each local KF designed for one local model; and third, design a weighting algorithm to calculate the weight of each local KF. In operation, each local KF generates its own state estimate and a corresponding output error (residual) that feeds the weighting algorithm. The "global" MMAE state estimate is then a weighted summation of the local KF estimates. We use Figure 1, borrowed from reference [22], to describe the logical structure of an MMAE system. In Figure 1, x_k ∈ ℝ^n is the state of the system, u_k ∈ ℝ^m is the control input, y_k ∈ ℝ^q is the system output, and ω_k ∈ ℝ^q is the measurement noise, which cannot be measured directly. In this paper, a new weighting algorithm adapted from reference [19] is adopted for two purposes: one is to simplify the classical weighting algorithm, which depends on the dynamic hypothesis test concept and the Bayesian formula [3, 16]; the other is to relax the convergence condition of the classical weighting algorithm.
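The weighted-summation step that produces the global MMAE estimate can be sketched as follows (a minimal Python illustration with hypothetical names; the paper's own simulations use MATLAB):

```python
import numpy as np

def mmae_estimate(local_estimates, weights):
    """Combine local Kalman-filter estimates into the global MMAE estimate.

    local_estimates: list of N state vectors (each of shape (n,)), one per local KF.
    weights: N nonnegative weights p_k^i summing to one.
    Returns the weighted summation x_hat_k = sum_i p_k^i * x_hat_k^i.
    """
    X = np.asarray(local_estimates)  # shape (N, n)
    p = np.asarray(weights)          # shape (N,)
    return p @ X                     # weighted sum over the N local filters
```

For example, two local estimates combined with weights 0.25 and 0.75 yield a point three-quarters of the way toward the second filter's estimate.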
It should be noted that a preliminary version of this manuscript was published in the proceedings of the 2018 International Conference on Artificial Life and Robotics [24]. In this augmented version, the following changes have been made: (1) the weighting algorithm was further improved; that is, weighting Algorithm 2 was presented to obtain a faster convergence rate than weighting Algorithm 1; (2) the proof of convergence of the MMAE system was further polished in detail; and (3) simulation results were presented to support the theoretical analysis.
The remainder of this paper is organized as follows. Section 2 provides a brief description of an MMAE system; Section 3 develops and analyzes the convergence of two weighting algorithms; Section 4 gives the main results on the performance (convergence) of the MMAE system; Section 5 demonstrates simulation results in four cases; and finally, Section 6 presents conclusions and future work.
It should also be noted that all the limit operations in this section are in the sense of probability one.

The Multiple-Model Adaptive Estimator
Consider a discrete-time system P_ϑ described by the following state-space equation:

x_{k+1} = A_ϑ x_k + B_ϑ u_k + ξ_k,
y_k = C_ϑ x_k + ω_k,   (1)

where the model matrices and the noise covariance matrices R_ϑ and Q_{k,ϑ} are assumed piecewise continuous, uniformly bounded in time, and contain unknown constant parameters denoted by the vector ϑ ∈ ℝ^l. The initial condition x_0 of (1) is assumed deterministic but unknown. Consider a finite set of candidate parameter values Θ = {ϑ_1, ϑ_2, …, ϑ_N}, which generates the model set M = {M_i, i = 1, 2, …, N}. The MMAE can be described as follows:

x̂_k = Σ_{i=1}^{N} p_k^i x̂_k^i,   (2)

where x̂_k is the estimate of the state x_k at time k, and p_k^i, i = 1, …, N, are time-varying weights generated by the weighting algorithm, which will be given in the next section. In (2), each local state estimate x̂_k^i, i = 1, …, N, is generated by a corresponding local KF, which is described as follows.
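Each local KF runs the standard predict/update recursion with the matrices of its own candidate model. A minimal sketch of one cycle of the ith local filter (an illustration with assumed variable names, not the paper's code) is:

```python
import numpy as np

def kf_step(x, P, u, y, A, B, C, Q, R):
    """One predict/update cycle of a local Kalman filter.

    (A, B, C, Q, R) are the matrices of the i-th candidate model M_i;
    every local filter runs this same recursion with its own parameter set.
    Returns the updated estimate, covariance, and the residual
    r_k = y_k - C x_{k|k-1} (with covariance S) fed to the weighting algorithm.
    """
    # Predict using model M_i
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Residual (output error) and its covariance
    r = y - C @ x_pred
    S = C @ P_pred @ C.T + R
    # Measurement update
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ r
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new, r, S
```

Each call returns both the new local estimate x̂_k^i and the residual r_k^i that the weighting algorithm consumes.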
We expect that if the jth model M_j in the model set is (or is close to) the real plant model, then the corresponding jth KF will generate the optimal state estimate x̂_k^j. In addition, if the jth weight p_k^j converges to 1 and the others to 0, then the state estimate of the MMAE will converge to x̂_k^j. Thus, the key point for an MMAE system is to construct an effective weighting algorithm as well as an appropriate model set that includes the real model or the model closest to the plant. The weighting algorithm is described in the next section.

Weighting Algorithm
First of all, we give the residual (output error) signal of each local KF, r_k^i = y_k − ŷ_k^i, where ŷ_k^i is the output predicted by the ith local KF.
The classical weighting algorithm can be described by the following equations:

p_k^i = f_k^i p_{k−1}^i / Σ_{j=1}^{N} f_k^j p_{k−1}^j,
f_k^i = exp(−(1/2) (r_k^i)^T (S_k^i)^{−1} r_k^i) / ((2π)^{m/2} |S_k^i|^{1/2}),

where f_k^i is the Gaussian likelihood weighting factor, S_k^i is the covariance of the residual r_k^i, and m is the number of measurements. For more details of the design and convergence analysis of the classical weighting algorithm, see references [3, 16].
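One step of this classical (Bayesian) weight update can be sketched as follows (an illustrative Python rendering of the update from [3, 16], with assumed variable names):

```python
import numpy as np

def bayes_weights(p_prev, residuals, S_list):
    """Classical Bayesian weight update for MMAE.

    For each local filter i, the residual r_k^i with covariance S_i is
    scored by the Gaussian likelihood
        f_i = exp(-0.5 r^T S_i^{-1} r) / ((2*pi)^(m/2) * det(S_i)^(1/2)),
    where m is the number of measurements, and the weights are updated by
    Bayes' formula  p_k^i = f_i p_{k-1}^i / sum_j f_j p_{k-1}^j.
    """
    m = len(residuals[0])
    f = np.empty(len(p_prev))
    for i, (r, S) in enumerate(zip(residuals, S_list)):
        quad = r @ np.linalg.solve(S, r)
        f[i] = np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** m * np.linalg.det(S))
    p = f * np.asarray(p_prev)
    return p / p.sum()
```

A filter whose residual is small relative to its residual covariance receives a higher likelihood and therefore a larger posterior weight.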
To relax the convergence condition of the classical weighting algorithm, a novel weighting algorithm was put forward in [19] for MMAC systems, to replace the classical weighting algorithm.
Algorithm 1 consists of equations (6), (7), (8), (9), and (10), where ‖·‖ denotes the Euclidean norm.
According to [19], we have the following convergence result for weighting algorithm (6), (7), (8), (9), and (10). For more details of the proof of the theorem, see Lemma A.1 in the appendix.

Theorem 1. If M_j ∈ M is the model closest to the true plant in the sense of condition (11) with probability one, where σ_j is a constant and σ_i may be a constant or infinity.
Then, the weighting algorithm (6), (7), (8), and (10) leads to

lim_{k→∞} p_k^j = 1, lim_{k→∞} p_k^i = 0, i ≠ j.

It is worth pointing out that the convergence condition for the weighting algorithm (6), (7), (8), and (10) is weaker than that for the classical weighting algorithm. To be specific, convergence condition (11) requires only the discriminability of r_k^i, while the convergence conditions for the classical weighting algorithm include ergodicity, stationarity, and discriminability of r_k^i; for more details, refer to reference [16].
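The precise algorithm is given by the paper's equations (6)–(10); purely as an illustration of a residual-norm-driven weighting of this kind, the following hypothetical sketch (an assumption, not the exact algorithm of [19]) discounts each filter's score according to how much its squared residual norm exceeds the smallest one, so that the weight of the persistently best filter tends to one:

```python
import numpy as np

def update_weights(l_prev, residuals, c=1.0):
    """Illustrative multiplicative residual-norm weighting (hypothetical
    sketch in the spirit of a norm-based scheme, NOT equations (6)-(10)).

    Each filter carries a score that is discounted by how much its squared
    residual norm exceeds the best one at time k:
        l_k^i = l_{k-1}^i * exp(-c * (||r_k^i||^2 - min_j ||r_k^j||^2)).
    The score of the filter with the smallest residuals stays bounded away
    from zero while the others decay, so the normalized weights
        p_k^i = l_k^i / sum_j l_k^j
    converge to an indicator of the closest model.
    """
    sq = np.array([np.dot(r, r) for r in residuals])
    l = np.asarray(l_prev) * np.exp(-c * (sq - sq.min()))
    return l, l / l.sum()
```

Note that, unlike the classical Bayesian update, this kind of scheme needs only that the residual norms be discriminable, which mirrors the weaker convergence condition (11).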
To obtain a faster convergence rate, reference [25] proposed another weighting algorithm.

Main Results
We consider only the situation in which the model set includes the unique real model of system (1). More complicated situations will be considered in future research.
We have the following results on the convergence of the proposed MMAE system.
Theorem 3. If the following conditions are satisfied:

(1) M_j ∈ M is the only real model of system (1), in the sense of condition (22) with probability one, where σ_j is a constant and σ_i may be a constant or infinity.
(2) Each Kalman filter is designed with assurance of stability, that is, the state estimates of each Kalman filter are bounded.
Then, the state estimates of the MMAE with weighting algorithm (6), (7), (8), (9), and (10) will converge to the optimal estimates given by the jth KF corresponding to M_j; that is,

lim_{k→∞} (x̂_k − x̂_k^j) = 0.

Proof. According to Theorem 1, condition (1) of Theorem 3, that is, (22), leads to

lim_{k→∞} p_k^j = 1, lim_{k→∞} p_k^i = 0, i ≠ j.   (24)

Further, condition (2) of Theorem 3 guarantees that each local estimate x̂_k^i is bounded.   (25)

Then, based on (24) and (25), we have

lim_{k→∞} (x̂_k − x̂_k^j) = lim_{k→∞} Σ_{i≠j} p_k^i (x̂_k^i − x̂_k^j) = 0.   (26)

That completes the proof.
Theorem 4. If the following conditions are satisfied: (1) M_j ∈ M is the only real model of system (1) in the following sense with probability one; (2) each Kalman filter is designed with assurance of stability; that is, the state estimates of each Kalman filter are bounded.

Simulation Results
To test the effectiveness of the proposed MMAE scheme, specifically the effectiveness of the weighting algorithms, four cases of simulation were conducted in MATLAB® R2014a. The simulation program was coded as an M-file.
In system (1), we adopt the following settings for all four cases.

Case 3. The covariance matrix of the system noises, that is, Q, is uncertain, and the other parameters are constant. The real model of the plant is in the model set.
In the model set, we have 4 models.
The simulation results of the weight signals p_k^i, i = 1, 2, 3, 4, obtained by the three weighting algorithms are shown in Figure 4.
Case 4. The covariance matrix of the measurement noises, that is, R, is uncertain, and the other parameters are constant. The real model of the plant is in the model set.
In the model set, we have 4 models. From the simulation results, in all cases, the weight signals converge correctly and identify the correct local KF.
Another observation is that the larger the difference between the real (or the closest) model and the other models, the faster the weights converge.
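The qualitative behavior reported above can be reproduced in a self-contained toy example (a Python sketch, not the paper's MATLAB setup; the scalar plant, candidate models, and noise levels are all illustrative assumptions):

```python
import numpy as np

def run_mmae_demo(steps=200, seed=0):
    """Toy MMAE run on a scalar plant x_{k+1} = a x_k + w, y_k = x_k + v,
    with candidate models a in {0.5, 0.9}; the true a = 0.9 is in the set.
    Weights use the classical Bayesian update; the correct model's weight
    should come to dominate as k grows.
    """
    rng = np.random.default_rng(seed)
    a_true, Q, R = 0.9, 0.01, 0.04
    models = [0.5, 0.9]
    x = 1.0                    # true state
    xh = [0.0, 0.0]            # local KF estimates
    P = [1.0, 1.0]             # local KF covariances
    p = np.array([0.5, 0.5])   # model weights
    for _ in range(steps):
        # Simulate the true plant and the measurement
        x = a_true * x + rng.normal(scale=np.sqrt(Q))
        y = x + rng.normal(scale=np.sqrt(R))
        f = np.empty(2)
        for i, a in enumerate(models):
            # Local KF predict/update for candidate model a
            xp = a * xh[i]
            Pp = a * a * P[i] + Q
            r = y - xp
            S = Pp + R
            K = Pp / S
            xh[i] = xp + K * r
            P[i] = (1 - K) * Pp
            # Gaussian likelihood of the residual
            f[i] = np.exp(-0.5 * r * r / S) / np.sqrt(2 * np.pi * S)
        # Bayesian weight update
        p = f * p
        p = p / p.sum()
    return p
```

Running the demo, the weight of the second (true) model dominates, consistent with the observation that weights converge faster when the candidate models are more clearly separated.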

Conclusions
A new MMAE scheme is proposed with improved weighting algorithms adapted from those of MMAC systems. Both theoretical analysis and simulation results verify the effectiveness of the proposed MMAE scheme. In the future, our research will focus on three aspects: (1) improving the weighting algorithm to achieve a faster convergence rate and better disturbance rejection; (2) adapting the weighting algorithm for time-varying and nonlinear systems; and (3) considering the situation in which the true model of the plant is not included in the model set, that is, adding an online self-tuning model based on a machine learning algorithm (traditionally also known as a parameter identification algorithm). It is worth pointing out that the weighting algorithms adopted in this paper are actually online machine learning algorithms.

Appendix
Lemma A.1. Consider weighting algorithm (6), (7), (8), (9), and (10). Suppose M_j is closest in the model set M = {M_i, i = 1, 2, …, N} to the true plant in the sense of condition (A.1) with probability one, where k* is an unknown finite time instant, σ_j is a constant, and σ_i may be a constant or infinity. Then, we have

lim_{k→∞} p_k^j = 1, lim_{k→∞} p_k^i = 0, i ≠ j.

Proof. It is not difficult to see that algorithms (6) to (10), together with (A.1), guarantee with probability one that (A.2) and (A.3) hold. Further, considering (A.2), we have (A.4) and (A.5). Putting (A.4), (A.5), and (9) together, we obtain

lim_{k→∞} l_k^j = l_{k*}^j > 0, lim_{k→∞} l_k^i = 0, i ≠ j.

Then, from (10), we have

lim_{k→∞} p_k^j = 1, lim_{k→∞} p_k^i = 0, i ≠ j.

That completes the proof of Lemma A.1.

Case 1. The parameters of A are uncertain, and the other parameters are constant. The real model of the plant is in the model set.

And the initial state x_0^T = [−100, 2, 200, 20] is known. The simulation results of the weight signals p_k^i, i = 1, 2, 3, 4, obtained by the three weighting algorithms are shown in Figure 2.

Case 2. The parameters of C are uncertain, and the other parameters are constant. The real model of the plant is in the model set.

And the initial state x_0^T = [−100, 2, 200, 20] is known. The simulation results of the weight signals p_k^i, i = 1, 2, 3, 4, obtained by the three weighting algorithms are shown in Figure 3.

Figure 2: The weight signals of three weighting algorithms in Case 1.

Figure 3: The weight signals of three weighting algorithms in Case 2.

And the initial state x_0^T = [−100, 2, 200, 20] is known. The simulation results of the weight signals p_k^i, i = 1, 2, 3, 4, obtained by the three weighting algorithms are shown in Figure 5.

Figure 4: The weight signals of three weighting algorithms in Case 3.

Figure 5: The weight signals of three weighting algorithms in Case 4.