Two-Dimensional Monte Carlo Filter for a Non-Gaussian Environment

In a non-Gaussian environment, the accuracy of a Kalman filter can be significantly reduced. In this paper, a two-dimensional Monte Carlo filter is proposed to overcome the challenge of filtering in a non-Gaussian environment. The two-dimensional Monte Carlo (TMC) method is first proposed to improve the efficacy of the sampling. Then, the TMC filter (TMCF) algorithm is proposed to solve the non-Gaussian filtering problem based on the TMC. In the TMCF, particles are deployed uniformly in the confidence interval according to the sampling interval, and their weights are calculated based on Bayesian inference, so the posterior distribution is described more accurately with fewer particles. Different from the particle filter (PF), the TMCF completes the transfer of the distribution through a series of weight calculations and uses particles only to occupy the state space in the confidence interval. Numerical simulations demonstrated that the accuracy of the TMCF approximates that of the Kalman filter (KF) (the error is about 10⁻⁶) in a two-dimensional linear/Gaussian environment. In a two-dimensional linear/non-Gaussian system, the accuracy of the TMCF is improved by 0.01, and the computation time is reduced from 0.20 s to 0.067 s, compared with the particle filter.


Introduction
Bayesian inference is one of the most popular theories in data fusion [1][2][3][4][5]. For a linear Gaussian dynamic system, the Bayesian filter can be realized exactly via the well-known update equations of the Kalman filter (KF) [6]. However, an analytical solution of the Bayesian filter cannot be obtained in a non-Gaussian scenario [7]. This problem has attracted considerable attention for several decades because of its wide application in signal processing [8,9], automatic control systems [10,11], biological information engineering [12], economic data analysis [13], and other subjects [14]. Approximation is one of the most effective approaches for solving the nonlinear/non-Gaussian filtering problem.
The linearization of the state model is an important strategy for solving the nonlinear/non-Gaussian filtering problem. The extended Kalman filter (EKF) was introduced to approximate the nonlinear model using the first-order term of the Taylor expansion of the state and observation equations in [15]. In [16], the unscented Kalman filter (UKF) was proposed to reduce the truncation error by introducing the unscented transformation (UT) [17]. The cubature Kalman filter, based on the third-degree spherical-radial cubature rule, was proposed in [18]; the third-degree cubature rule is a special form of the UT and has better numerical stability in filtering applications [19]. The Gauss-Hermite filter and the central difference filter (CDF) were proposed by Kazufumi Ito and Kaiqi Xiong in [20] and make a Gaussian assumption for the noise model. Sequential Monte Carlo (SMC) provides another important strategy for the nonlinear/non-Gaussian filtering problem and can approximate any probability density function (PDF) conveniently using weighted particles. The particle filter (PF) [21,22] is an algorithm derived from the recursive Bayesian filter based on the SMC approach and is used to solve the nonlinear/non-Gaussian filtering problem.

To overcome the aforementioned problem, the two-dimensional Monte Carlo (TMC) method is proposed to improve the efficiency of the sampled particles. Then, the TMC filter (TMCF) algorithm is proposed to solve the non-Gaussian filtering problem based on the principle of the TMC method. The main contributions arising from this study are as follows:

(1) The TMC method, a deterministic sampling method, is proposed to improve the efficacy of particles. Particles are sampled uniformly in the confidence interval according to the sampling interval, and the posterior weight of each particle is then calculated based on Bayesian inference. Subsequently, any probability distribution can be described by a small number of weighted particles.
(2) A discrete solution to the problem of describing how a known probability distribution is transmitted through a linear or nonlinear state model is proposed. First, a small number of original weighted particles are obtained using the TMC method. Then, the confidence interval of the next time step for a fixed confidence is calculated according to the state model, and new particles are set uniformly in this confidence interval according to the sampling interval. After that, the weights of these new particles are obtained through a series of calculations based on Bayesian inference, and the transferred probability distribution is described by these new weighted particles.

(3) The TMCF algorithm is proposed based on the above two points. The proposed algorithm can be divided into four parts: initialization, particle deployment, weight mixing, and state estimation. The TMC method is used in the initialization step to generate efficacious weighted particles. Particle deployment solves the problem of state-space transfer for a given confidence and deploys particles in the confidence interval. The weight-mixing step achieves the fusion of several arbitrary continuous probability densities in a discrete domain. Invalid weighted particles are omitted in the particle-choice step, and the state is estimated using the remaining weighted particles.

(4) The performance of the TMCF was verified using numerical simulations. The results demonstrated that, with fewer particles and less computation, the proposed algorithm estimated the state more accurately than the PF in a linear Gaussian system, and performed better than the KF and PF in a linear system with a Gaussian-mixture noise model.
The outline of this paper is as follows. In Section 2, the problem statement and Bayesian filter are presented. The TMC method is introduced in Section 3. In Section 4, the TMCF algorithm is introduced in detail. The numerical simulation is described in Section 5, and the validity of the proposed framework is demonstrated. In Section 6, the conclusion of this study is presented.

Problem Statement
For the filtering algorithms introduced in this paper, the state space model is defined as [37]:

x_t = f(x_{t−1}, u_t)
z_t = h(x_t, v_t)

where x_t ∈ ℝ^{n_x} and z_t ∈ ℝ^{n_z} denote the state variable and the observation at time step t, respectively; n_x and n_z denote the dimensions of the state vector and the observation, respectively; u_t ∈ ℝ^{n_u} and v_t ∈ ℝ^{n_v} denote the system noise and the observation noise, respectively; and the mappings f : ℝ^{n_x} × ℝ^{n_u} → ℝ^{n_x} and h : ℝ^{n_x} × ℝ^{n_v} → ℝ^{n_z} describe the state transition equation and the observation equation, respectively. In this paper, u_t and v_t are independent of each other, and the probability distributions of u_t and v_t are p_u(x) and p_v(x), respectively. Meanwhile, the probability distribution of the initial state is known. The goal is to obtain an approximate Bayesian estimation in the filtering process in a nonlinear and non-Gaussian environment.

Bayesian Estimation
Recursive Bayesian filtering provides an effective guide for the real-time fusion of the state equation and the observation. The Bayesian filter framework can be divided into prediction and update steps as follows:

p(x_t | z_{1:t−1}) = ∫ p(x_t | x_{t−1}) p(x_{t−1} | z_{1:t−1}) dx_{t−1}    (1)

p(x_t | z_{1:t}) = p(z_t | x_t) p(x_t | z_{1:t−1}) / p(z_t | z_{1:t−1})    (2)

where p(x_t | x_{t−1}) denotes the state transition PDF, p(x_{t−1} | z_{1:t−1}) denotes the posterior PDF at time step t − 1, p(x_t | z_{1:t−1}) denotes the prior PDF at time step t, p(z_t | x_t) denotes the likelihood PDF, and p(x_t | z_{1:t}) denotes the posterior PDF at time step t. For a linear and Gaussian environment, this procedure can be carried out exactly by the celebrated KF, as the integral in Equation (1) and the likelihood probability p(z_t | x_t) can be evaluated conveniently. For a nonlinear and non-Gaussian environment, it is impossible to solve Equation (1) directly.
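As a concrete illustration, the two steps above can be sketched on a discretized one-dimensional state space. The Gaussian noise models, the identity transition, and all constants below are assumptions chosen for illustration; this is a generic grid-based Bayes recursion, not the TMCF itself.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    # Density of N(mean, var) evaluated at x (vectorized).
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

grid = np.linspace(-5.0, 5.0, 401)     # discretized state space
dx = grid[1] - grid[0]

# Posterior at t-1: p(x_{t-1} | z_{1:t-1}), assumed N(0, 1) here.
posterior_prev = gaussian_pdf(grid, 0.0, 1.0)

# Prediction: p(x_t | z_{1:t-1}) = integral of p(x_t | x_{t-1}) p(x_{t-1} | z_{1:t-1})
transition = gaussian_pdf(grid[:, None], grid[None, :], 0.5)
prior = transition @ posterior_prev * dx

# Update: p(x_t | z_{1:t}) is proportional to p(z_t | x_t) p(x_t | z_{1:t-1})
z_t = 0.8
likelihood = gaussian_pdf(z_t, grid, 0.2)      # p(z_t | x_t) as a function of x_t
posterior = likelihood * prior
posterior /= posterior.sum() * dx              # normalize to a density

estimate = np.sum(grid * posterior) * dx       # posterior mean
```

With these values the posterior mean lands between the prior mean (0) and the observation (0.8), weighted by the respective variances, as the update step dictates.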

Two-Dimensional Monte Carlo Method
The Monte Carlo approach provides a convenient way to infer the posterior PDF in a non-Gaussian environment. The PF is a member of the family of filtering algorithms based on the Monte Carlo approach and is used to process the nonlinear and non-Gaussian filtering problem; several improved particle filter algorithms exist. The core of the PF approach is to sample particles according to a chosen proposal distribution. Particles are used to describe the transition of the PDF through the system model, and the integration of the observation depends on the likelihood weight. The concept of weight, which plays an important role, makes the application of Monte Carlo methods to the filtering problem possible. In the following, the TMC method is introduced to make full use of the weights and the noise model to enhance particle efficiency.
Suppose p(x, y) is the PDF of a two-dimensional noise model. Its marginal PDFs can be expressed as:

p(x) = ∫ p(x, y) dy,   p(y) = ∫ p(x, y) dx

where p(x) and p(y) denote the marginal PDFs of x and y, respectively. The confidence interval c for confidence 1 − α can be defined as:

c = [x_l, x_u] × [y_l, y_u]    (8)

where the bounds x_l, x_u and y_l, y_u satisfy ∫_{x_l}^{x_u} p(x) dx ≥ 1 − α and ∫_{y_l}^{y_u} p(y) dy ≥ 1 − α. Particles X can be set in c uniformly according to the sampling interval dT = [dt_x, dt_y]^T, as shown in Figure 1, where n denotes the particle number. The weight w(i) of each particle X(i) is calculated as:

w(i) = p(X(i)) dt_x dt_y / Σ_{j=1}^{n} p(X(j)) dt_x dt_y

Then, {X, w} describes p(x, y) discretely with the accuracy of dT in the confidence interval c for confidence 1 − α. The sketch map of the particles and their weights is shown in Figure 2.
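The construction above can be sketched numerically. Here a product of two independent standard Gaussians stands in for p(x, y); the interval half-width and the sampling interval dT are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def p_xy(x, y):
    # Assumed 2-D noise PDF: independent standard Gaussians.
    return np.exp(-0.5 * (x ** 2 + y ** 2)) / (2 * np.pi)

dt_x = dt_y = 0.2                  # sampling interval dT = [dt_x, dt_y]^T
half = 2.6                         # per-axis half-width (~99% interval, assumed)

xs = np.arange(-half, half + dt_x / 2, dt_x)
ys = np.arange(-half, half + dt_y / 2, dt_y)
X1, X2 = np.meshgrid(xs, ys, indexing="ij")
particles = np.column_stack([X1.ravel(), X2.ravel()])

# Weight of each particle: its local probability mass, normalized to sum to 1.
w = p_xy(particles[:, 0], particles[:, 1]) * dt_x * dt_y
w /= w.sum()

mean_est = particles.T @ w         # discrete expectation
```

Because the grid is symmetric and the weights are normalized local probability masses, the discrete expectation recovers the true mean (0, 0).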
Proof. Suppose the probability space of p(x, y) is divided into n small squares according to dT, where dT ≡ [dt_x, dt_y]^T. When dT → 0, n → ∞, the weight of each particle tends to the probability mass of p(x, y) over its square. As dt_x and dt_y are independent of i, the normalized weights recover the relative probability mass of each square; hence, {X, w} describes p(x, y). □

A simple example is used to further demonstrate that TMC improves particle efficiency. Consider a one-dimensional gamma distribution. The MC and TMC methods are used to generate particles from the gamma distribution; Figure 3 shows the sampling results of the two methods. For the MC method, the selection of particles is random; increasing the particle number allows a better description of the gamma distribution, and this is the only way to mitigate the indeterminacy. For the TMC method, the positions of the particles are determined once the confidence and the sampling interval are provided, and the task of describing the probability distribution is transferred to the weights corresponding to the particles. The expectation errors of the Monte Carlo method for the gamma distribution are shown in Figure 4. Considering the indeterminacy of MC, it is run 10,000 times and the RMSE is used to reflect the size of the error:

RMSE = sqrt( (1/monte) Σ_{j=1}^{monte} ( (1/m) Σ_{i=1}^{m} x_i^{(j)} − E(x) )² )    (20)

where m denotes the particle number and monte denotes the number of Monte Carlo runs. The RMSE decreases as the particle number increases; Figure 4 shows that the RMSE is about 0.21 when the particle number is 400. For TMC, the relationship between the confidence and the mean error is shown in Figures 5 and 6. The absolute expectation error decreases rapidly as the confidence increases, whereas the particle number increases only slowly with confidence for a fixed sampling interval. The absolute expectation error can be reduced to 0.015 using only 75 particles for a confidence of 0.999 and a sampling interval of 0.4.
Electronics 2021, 10, x FOR PEER REVIEW 8 of 18

The results demonstrate that particles generated by TMC can describe the noise distribution more efficiently than particles generated by MC.
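The comparison can be reproduced in miniature. The gamma shape and scale below, the integration interval, and the particle counts are assumed values for illustration (the paper's exact gamma parameters are not restated here): a few dozen deterministic, weighted particles pin down the expectation, while 400 random MC particles still leave a random error.

```python
import numpy as np
from math import gamma as gamma_fn

rng = np.random.default_rng(0)

# Assumed gamma parameters; true mean = shape * scale = 2.0.
shape, scale = 2.0, 1.0

def gamma_pdf(x, k, theta):
    return x ** (k - 1) * np.exp(-x / theta) / (gamma_fn(k) * theta ** k)

# Plain MC with 400 random particles: the estimate is itself random.
mc_est = rng.gamma(shape, scale, size=400).mean()

# TMC-style: 75 equally spaced particles; the description of the
# distribution is carried entirely by the normalized weights.
xs = np.linspace(0.01, 12.0, 75)
w = gamma_pdf(xs, shape, scale)
w /= w.sum()
tmc_est = np.sum(xs * w)
```

Note that mc_est changes from run to run (here it is fixed only by the seed), whereas tmc_est is fully determined by the grid and the weights.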

Proposed Filter Algorithm
Each of the efficient particles from TMC is a possible state estimation. The weight of each particle is the probability that the particle becomes the state estimation. The continuous probability distribution is discretized in terms of these particles and their weights. The TMCF is further designed as shown in Figure 7. The entire filter system can be divided into four parts: initialization, particle deployment, weight mixing and state estimation. In this section, the four parts are explained in detail and the TMCF algorithm is proposed.


Initialization
The target of initialization is to set several efficient particles that describe the initial probability distribution discretely, to facilitate the subsequent filtering process. After the confidence 1 − α and the sampling interval dT_0 are set, the initial particles X_0 and their weights w_0 can be obtained in the confidence interval c_0 using the TMC method according to the known initial probability p_0(x). Additionally, the confidence interval c_ε of the system noise probability p_u(x) for confidence 1 − α can be obtained according to Equation (8), and the amplification of interval ε is then defined as:

ε = max_i |column(c_ε)_i − E(p_u(x))|    (21)

where column(c_ε)_i denotes the ith column of the matrix/vector c_ε and E(p_u(x)) denotes the expectation of p_u(x). The real state x_real,0 now lies in the confidence interval c_0 with probability 1 − α, and p_0(x) is described by {X_0, w_0} with the accuracy of dT_0. The probability of each particle's existence is described by its weight.

Particle Deployment
The target of this step is to analyze the transition of the confidence interval from time step t − 1 to time step t, and then to deploy particles. At time step t − 1, in terms of the principle of the TMC method, the confidence interval c_{t−1} can be written as:

c_{t−1} = [min(X_{t−1}), max(X_{t−1})]

where min(X_{t−1}) denotes the minimum value of each row of the matrix/vector X_{t−1} and max(X_{t−1}) denotes the maximum value of each row of X_{t−1}. When the particles X_{t−1} are transferred through the system model without system noise, the transferred particles can be expressed as:

X̄_t(i) = f(X_{t−1}(i))

Each particle in X̄_t is then considered a possible state estimation without system noise at time step t, and the probability of each particle X̄_t(i) is the weight of X_{t−1}(i).
Considering system noise, the confidence interval of each particle X̄_t(i) for confidence 1 − α is

c_t(i) = [X̄_t(i) + E(p_u(x)) − ε, X̄_t(i) + E(p_u(x)) + ε]

where c_t(i) denotes the confidence interval for confidence 1 − α corresponding to X̄_t(i). Then, the complete confidence interval for 1 − α can be obtained by calculating the union of all these confidence intervals:

c_t = ∪_{i=1}^{n_{t−1}} c_t(i)    (25)

where n_{t−1} denotes the number of particles in the set X_{t−1}. For simplicity, the complete confidence interval can also be estimated roughly by:

ĉ_t = [min(X̄_t) + E(p_u(x)) − ε, max(X̄_t) + E(p_u(x)) + ε]    (26)
As c_t ⊂ ĉ_t, the confidence corresponding to the interval ĉ_t is greater than or equal to 1 − α. However, this amplification of the confidence interval might increase the number of deployed particles. Sometimes, particularly in high dimensions, this leads to too many particles, which might result in the failure of the filtering.
Particles X_t can then be deployed in the confidence interval c_t or ĉ_t according to the sampling interval dT_t at time step t. Generally, dT_t is set to a constant vector:

dT_t = dT_0

When the confidence interval is unstable (i.e., it increases or decreases over time), a specific strategy corresponding to the specific system needs to be designed to adapt the size of the sampling interval.
In this step, the deployed particles X_t are distributed uniformly in this confidence interval, in preparation for the subsequent step.
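A minimal sketch of this deployment step for an assumed two-dimensional linear model x_t = A x_{t−1} + u_t (the matrix A, the noise half-interval, the particles, and the sampling interval are all illustrative):

```python
import numpy as np

A = np.array([[0.98, -0.17],
              [0.17,  0.98]])                  # assumed rotation-like dynamics
eps = np.array([0.5, 0.5])                     # assumed noise-interval half-width
dT = np.array([0.2, 0.2])                      # sampling interval at time t

X_prev = np.array([[0.0, 0.2],
                   [0.4, 0.1],
                   [0.2, 0.6]])                # particles at t-1

X_bar = X_prev @ A.T                           # noiseless transfer f(X_{t-1})

# Rough complete confidence interval: amplify the [min, max] box of the
# transferred particles by the noise interval.
lo = X_bar.min(axis=0) - eps
hi = X_bar.max(axis=0) + eps

# Deploy a uniform grid over [lo, hi] at spacing dT.
axes = [np.arange(lo[d], hi[d] + dT[d] / 2, dT[d]) for d in range(2)]
G1, G2 = np.meshgrid(*axes, indexing="ij")
X_t = np.column_stack([G1.ravel(), G2.ravel()])
```

The new grid X_t covers the transferred particles plus the amplified noise interval, uniformly at spacing dT, which is what the subsequent weight-mix step expects as input.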

Weight Mix
In the weight-mix step, the relationship between X_{t−1} and X_t is analyzed to obtain the prior weight corresponding to X_t; the likelihood weight is then calculated, and the posterior weight is obtained.
As the distribution of the system noise is continuous, each particle in the set X_{t−1} might, in theory, arrive at any particle in the set X_t. The probability of the ith particle in X_{t−1} being transferred to the jth particle in X_t can be expressed as:

w_t(i, j) = p_u(X_t(j) − X̄_t(i))    (28)

where m_t denotes the number of particles in the set X_t. As shown in Figure 8, the prior weight is calculated as:

w_prior,t(j) = Σ_{i=1}^{n_{t−1}} w_{t−1}(i) w_t(i, j),   j = 1, …, m_t    (29)

The likelihood weight is written as:

w_lik,t(j) = p_v(z_t − h(X_t(j)))    (30)

Additionally, the posterior weight is mixed by:

w_t(j) = w_prior,t(j) w_lik,t(j) / Σ_{k=1}^{m_t} w_prior,t(k) w_lik,t(k)    (31)

Then, the posterior distribution of the state at time step t is described discretely by {X_t, w_t}.
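The three weight calculations can be sketched in one dimension; the identity state and observation maps, the Gaussian noise densities, and all constants are assumptions for illustration:

```python
import numpy as np

def gauss(x, var):
    # Zero-mean Gaussian density, standing in for p_u and p_v (assumed).
    return np.exp(-0.5 * x ** 2 / var) / np.sqrt(2 * np.pi * var)

X_prev = np.array([-0.2, 0.0, 0.2])      # particles at t-1 (f is identity here)
w_prev = np.array([0.25, 0.5, 0.25])     # their posterior weights

X_t = np.arange(-1.0, 1.05, 0.1)         # deployed particles at time t
q_var, r_var = 0.1, 0.1                  # system / observation noise variances

# Probability of particle i of X_{t-1} reaching particle j of X_t.
trans = gauss(X_t[None, :] - X_prev[:, None], q_var)

# Prior weight of each new particle: mixture over the old weights.
w_prior = w_prev @ trans
w_prior /= w_prior.sum()

# Likelihood weight from the observation z_t (h is identity here).
z_t = 0.3
w_lik = gauss(z_t - X_t, r_var)

# Posterior weight: prior and likelihood mixed, then normalized.
w_post = w_prior * w_lik
w_post /= w_post.sum()

x_hat = np.sum(X_t * w_post)             # discrete state estimate
```

Note that all of the fusion happens in the weights; the particle positions X_t never move.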


Particle Choice and State Estimation
{X_t, w_t} describes the posterior distribution after the fusion of the prior distribution and the likelihood distribution. All the distribution information is concentrated in the weights w_t; the role of the particles X_t is only to occupy the distribution space. Generally, many particles with very low weights emerge after the fusion. These particles have very little effect on the accuracy of the description of the distribution, so they can simply be omitted. The n_t particles are chosen in order of weight, from largest to smallest, so that the sum of the weights of the n_t chosen particles reaches 1 − α:

Σ_{i=1}^{n_t} w_t(i) ≥ 1 − α    (32)

Then, the weights are normalized:

w̃_t(i) = w_t(i) / Σ_{j=1}^{n_t} w_t(j)    (33)

The state estimation can be obtained by:

x̂_t = Σ_{i=1}^{n_t} X_t(i) w̃_t(i)    (34)

In conclusion, the TMCF algorithm is summarized in Algorithm 1.
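The particle-choice rule can be sketched with made-up particles and weights (the values below are illustrative only):

```python
import numpy as np

alpha = 0.05
X_t = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])   # hypothetical particles
w_t = np.array([0.40, 0.30, 0.15, 0.10, 0.04, 0.01])

order = np.argsort(w_t)[::-1]                 # largest to smallest weight
cum = np.cumsum(w_t[order])
n_keep = np.searchsorted(cum, 1 - alpha) + 1  # smallest n with sum >= 1 - alpha

keep = order[:n_keep]
w_kept = w_t[keep] / w_t[keep].sum()          # renormalized weights
x_hat = np.sum(X_t[keep] * w_kept)            # weighted-mean state estimate
```

With α = 0.05, the four heaviest particles already carry 0.95 of the mass, so the two lightest are dropped before the weighted-mean estimate.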

Algorithm 1 TMCF
1: Initialization:
2:   Set 1 − α and dT_0
3:   Generate {X_0, w_0} and ε according to the TMC method and Equation (21)
4: // Over all time steps:
5: for t ← 1 to T do
6:   Set dT_t = dT_0, or use another strategy to select dT_t
7:   Choose the confidence interval according to Equation (25) or (26)
8:   Deploy particles according to dT_t
9:   Fuse the weights according to Equations (28)-(31)
10:  Choose the particles and their weights according to Equations (32) and (33)
11:  Estimate the state according to Equation (34)
12: end for
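Algorithm 1 can be sketched end to end in one dimension. The linear model x_t = a x_{t−1} + u_t with z_t = x_t + v_t, the Gaussian noises, and every constant below are assumptions made for a compact illustration of the four steps:

```python
import numpy as np

rng = np.random.default_rng(2)

def gauss(x, var):
    return np.exp(-0.5 * x ** 2 / var) / np.sqrt(2 * np.pi * var)

a, q_var, r_var = 0.95, 0.1, 0.1          # assumed model constants
dT = 0.1                                  # sampling interval (held constant)
half = 3.0 * np.sqrt(q_var)               # noise confidence half-interval (~3 sigma)

# 1. Initialization: uniform grid + normalized weights for an assumed p_0(x).
X = np.arange(-1.0, 1.05, dT)
w = gauss(X, 0.5)
w /= w.sum()

x_true = 0.5
for t in range(50):
    x_true = a * x_true + rng.normal(0.0, np.sqrt(q_var))
    z = x_true + rng.normal(0.0, np.sqrt(r_var))

    # 2. Particle deployment: transfer, amplify the interval, redeploy the grid.
    X_bar = a * X
    X_new = np.arange(X_bar.min() - half, X_bar.max() + half + dT / 2, dT)

    # 3. Weight mix: prior (mixture over old weights), likelihood, posterior.
    w_prior = w @ gauss(X_new[None, :] - X_bar[:, None], q_var)
    w_post = w_prior * gauss(z - X_new, r_var)
    w_post /= w_post.sum()

    # 4. Particle choice: drop negligible weights and renormalize.
    keep = w_post > 1e-6
    X = X_new[keep]
    w = w_post[keep] / w_post[keep].sum()

x_hat = np.sum(X * w)                     # final state estimate
```

Each pass rebuilds the grid around the transferred particles, mixes the weights with the new observation, and prunes negligible weights, mirroring lines 5-12 of Algorithm 1.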

Numerical Simulation
In this section, a two-dimensional linear system [43] is used to assess the performance of the TMCF, where x_i(t) and z(t) denote the state and the observation at time step t, respectively; q_i(t − 1) denotes the system noise sequence at time step t − 1; and r(t) denotes the observation noise sequence at time step t. In the two experiments, θ = π/18 and the initial state is [1, 1]^T. The initial probability satisfies p_0(x) ∼ N(0, 0.1). It is well known that the KF is the optimal filter for a linear Gaussian system based on the Bayesian filter principle. Hence, the performance of the TMCF is first assessed in a linear and Gaussian system; the estimation results of the KF are used as a reference to evaluate how closely the TMCF algorithm approximates Bayesian filtering in the linear and Gaussian system. Because the TMCF algorithm is a filter based on the Monte Carlo principle, the performances of the TMCF algorithm and the PF algorithm are also compared in this experiment. Second, a heavy-tailed distribution (non-Gaussian environment) is considered in this linear system, and the performance of the TMCF is compared with those of the KF and PF in this linear and non-Gaussian system. Four sets of parameters are selected for the TMCF algorithm, as shown in Table 1. Two forms of the mean square error (MSE), Equations (37) and (38), are used to evaluate the performance of the algorithms. MATLAB is used to build the simulation environment, in which the performance (including the filtering precision, the number of samples, and the filter time) of the TMCF, KF, and PF is verified and compared. All the data are generated by the simulation program. The configuration of the simulation computer is shown in Table 2.
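A sketch of a test system of this kind and of a time-averaged MSE score follows. The rotation-matrix dynamics, the scalar observation of x_1, and the noise variances are assumptions about the form of the model in [43], not a restatement of it:

```python
import numpy as np

rng = np.random.default_rng(1)

theta = np.pi / 18
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # assumed rotation dynamics
H = np.array([1.0, 0.0])                          # assumed: observe x_1 only
q_var = r_var = 0.1                               # assumed noise variances

T = 500
x = np.array([1.0, 1.0])                          # initial state [1, 1]^T
states, observations = [], []
for t in range(T):
    x = A @ x + rng.normal(0.0, np.sqrt(q_var), size=2)   # system noise q
    z = H @ x + rng.normal(0.0, np.sqrt(r_var))           # observation noise r
    states.append(x.copy())
    observations.append(z)

states = np.array(states)
naive = np.array(observations)                    # raw observation as estimate

def mse(estimates, truth):
    # Time-averaged mean square error, in the spirit of Equations (37)/(38).
    return np.mean((estimates - truth) ** 2)
```

As a sanity check, scoring the raw observation of x_1 against the truth recovers roughly the observation-noise variance.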
PFs with 3000 and 5000 particles are used as the comparison algorithms for the TMCF. Figures 9 and 10 show the deviation between each filtering result of the different algorithms and the KF results for x_1 and x_2, respectively. The results show that it is difficult for the PF to approximate the performance of the KF in the linear and Gaussian system. Even with 3000 particles, the results of the PF deviate by about 0.15 from those of the KF in each filtering process; moreover, as the number of particles increases dramatically, this deviation declines very slowly, reaching about 0.1 with 5000 particles. This is caused by the indeterminacy of the Monte Carlo method. The indeterminacy is greatly reduced when the TMC method is used to generate particles: the results of the TMCF are very close to those of the KF, and the difference between the TMCF and the KF is less than 0.01 for all four parameter sets. The deviation decreases as the confidence increases and the sampling interval decreases. In particular, the deviation is less than 0.001 when the confidence is 0.9999 and the sampling interval is 0.8. Figure 11 shows that only about 40 particles need to be transferred in each filtering process for parameter 1, and the number of set particles is about 250. The number of particles required increases as the sampling interval decreases and the confidence increases; for parameter 4, the number of transferred particles is about 130 and the number of set particles is only about 800. Figure 12 shows the time consumed in each filtering process by the different algorithms on a computer with the same configuration; the computation time of the TMCF is much less than that of the PF. Table 3 shows the filtering results of 5000 time steps processed by Equations (37) and (38). The MSE_2 is about 0.01 for the PF with 5000 particles, and its computation time is about 0.1 s for each filtering process.

The MSE_2 reaches 10⁻⁶ for the TMCF with parameter 4, and the computation time is only about 0.0035 s. The results demonstrate that, compared with the PF, the TMCF can approximate the KF better with fewer particles and less computation in linear and Gaussian systems. Different from the KF, the TMCF does not use the propagation characteristics of the conditional means and covariances of Gaussian noise in linear systems; therefore, the method is also applicable to non-Gaussian noise.

Figure 9. Difference between the state estimation results of the different algorithms and those of the KF for x_1.

Gaussian Mixture Distribution System
In this experiment, a Gaussian mixture model is selected for both the system noise and the observation noise; the noise model is shown in Figure 13. Similar to Table 3 for the previous experiment, Table 4 shows the filtering results of 5000 time steps processed by Equation (37). The MSE_1 of the PF with 3000 particles is greater than that of the KF. The performance of the TMCF with parameter 1 is better than those of the PF with 5000 particles and the KF, while the number of transferred particles is only 110 and the number of set particles is only 650; the computation time is 0.006 s, much less than that of the PF. As the sampling interval decreases and the confidence increases, the filtering accuracy improves and the computation time increases. For parameter 4 of the TMCF, the accuracy is improved by 0.01, and the computation time is reduced from 0.20 s to 0.067 s, compared with the particle filter (5000 particles).


Conclusions
The TMCF algorithm was proposed in this paper to overcome the challenge of non-Gaussian filtering. First, the TMC method was proposed to sample particles in the confidence interval according to the sampling interval; the performance of the TMC method was simulated and its properties were proved. Second, the TMCF algorithm was proposed by introducing the TMC method into the PF framework. Different from the PF, the TMCF algorithm completes the transfer of the distribution through a series of weight calculations, with particles used to occupy the state space in the confidence interval. Third, numerical simulations demonstrated that the MSE of the TMCF relative to the Kalman filter (KF) was about 10⁻⁶ in a two-dimensional linear/Gaussian system. In a two-dimensional linear/non-Gaussian system, the MSE of the TMCF for parameter 4 was 0.04 and 0.01 less than those of the KF and of the PF with 5000 particles, respectively. The single filter times of the TMCF and the PF with 5000 particles were 0.006 s and 0.2 s, respectively.
In this paper, we designed an improved PF algorithm, called the TMCF algorithm. In a non-Gaussian filtering environment, it can not only improve the filtering accuracy but also reduce the computation time. With the development of new fields such as artificial intelligence, multi-sensor data fusion, and multi-target tracking, the types and sources of data are becoming increasingly varied and complex, and the quality of nonlinear/non-Gaussian filtering methods is becoming increasingly important in data fusion. Our work lays a theoretical foundation for nonlinear/non-Gaussian filtering and can be used to improve filtering precision while reducing computation time in some non-Gaussian environments. In the future, we will try to apply the algorithm to an integrated navigation system to improve the positioning accuracy of satellite navigation.