Closed-Form Distance Estimators under Kalman Filtering Framework with Application to Object Tracking



Introduction
The problem of measuring the distance between real-valued signals or images arises in most areas of scientific research. In particular, the familiar Euclidean distance plays a prominent role in many important application contexts, not only in engineering, economics, statistics, and decision theory, but also in fields such as machine learning, cryptography, image recognition, and others. The statistical methods related to distance estimation can be categorized into the image and signal processing areas.
The concept of a distance metric is widely used in image processing and computer vision [1][2][3][4][5] (also see references therein). The distance provides a quantitative measure of the degree of match between two images or objects. These objects might be two profiles of persons, a person and a target profile, a robot's camera and people, or any two vectors taken across the same features (variables) derived from color, shape, and/or texture information. Image similarity measures play an important role in many image algorithms and applications, including retrieval, classification, change detection, quality evaluation, and registration [6][7][8][9][10][11][12].
The proposed paper deals with distance estimation between random signals. In signal processing, a good distance metric helps to improve the performance of classification, clustering, and localization in wireless sensor networks, radar tracking, and other applications [13][14][15][16][17][18][19][20].
The Bayesian classification approach based on the concepts of the Euclidean and Mahalanobis distances is often used in discriminant analysis. A survey of classification procedures which minimize a distance between raw signals and classes in a multifeature space is given in [21,22]. A distance estimation algorithm based on goodness-of-fit functions, where the best parameters of the fitting functions are calculated from the training data, is considered in [23]. An algorithm for estimating walking distance using a wrist-mounted inertial measurement unit is proposed in [24]. The concept of distance between two samples or between two variables is fundamental in statistics because a sum of squares of independent normal random variables has a chi-square distribution. Knowledge of this distribution, together with the usual approximations, yields confidence intervals for distance metrics [25,26]. The use of Taylor series expansions for aircraft geometric-height estimation using range and bearing measurements is addressed in [27,28]. The minimum mean square error (MMSE) estimation of a state vector in the presence of information about the absolute value of a difference between its subvectors is proposed in [29].
In many applications, it is of interest to estimate not only the position or state of an object but also a nonlinear distance function, which provides information for effective control in target tracking. However, most authors have not focused on the simultaneous estimation of a state and of distance functions in dynamical models such as the Kalman filtering framework.
The problem of estimating the distance function d_k = d(x_k, y_k) between two vector signals x_k and y_k is considered in the paper; its difference from the aforementioned references is that both signals x_k and y_k are unknown, and they should be estimated simultaneously with the function d_k using indirect measurements. For example, we observe the positions of two points A(x_k) and B(y_k) on a line, and the distance between the points is the absolute difference, i.e., d_k(A, B) = |x_k − y_k|. The positions x_k and y_k, and consequently the distance d_k = |x_k − y_k|, are unknown, and our problem is to optimally calculate the three estimates x̂_k, ŷ_k, and d̂_k. Note that the simple distance estimator d̂_k = |x̂_k − ŷ_k| is not an optimal solution. The purpose of the paper is to derive an analytical closed-form MMSE estimator for distance functions between random signals, such as the absolute value, the Euclidean distance, the inner product, and bilinear and quadratic forms. The advantage of the estimator is the quick and accurate calculation of distance metrics compared to approximate or iterative estimators. A further study of the estimators is carried out for the object tracking problem, where we obtain important practical results for the distance estimation of signals in linear Gaussian discrete-time systems. The following list highlights the primary contributions of this paper: (1) Extension of the MMSE approach to the estimation of nonlinear functions of a state vector within the Kalman filtering framework. The obtained MMSE-optimal solution represents a two-stage estimator.
(2) Derivation of analytical expressions for the different metrics between two points on a line, between a point and a line, and between a point and a plane. We establish that the obtained estimators represent compact closed-form formulas depending on the Kalman filter state estimates and error covariance.
(3) The MMSE estimators for quadratic and bilinear forms of a state vector are investigated and applied, including the estimators for the square of a norm ‖x_k‖_2^2, the square of the Euclidean distance ‖x_k − y_k‖_2^2, and the inner product 〈x_k, y_k〉. A novel low-complexity algorithm for suboptimal estimation of a special class of composite functions is proposed. Tracking radar responses such as range, angles, and range rate are described by such functions. This paper is organized as follows. Section 2 presents a statement of the MMSE estimation problem for an arbitrary nonlinear function of a state vector within a Kalman filtering framework. In Section 3, the general MMSE estimator is proposed, and the computational complexity of the estimator is discussed.
The concept of a closed-form estimator is introduced. In Section 4, the closed-form MMSE estimator for the absolute value of a linear form of a state vector is derived (Theorem 1). In particular cases, the estimator calculates distances between two points on a 1-D line, between a point and a line in the 2-D plane, and between a point and a plane in 3-D space. A comparative analysis of the estimator via several practical examples is presented. In Section 5, the MMSE estimators for quadratic and bilinear forms of a state vector are comprehensively studied (Theorems 2 and 3). Effective matrix formulas for the quadratic and bilinear MMSE estimators are derived and applied to the Euclidean distance, a norm, and the inner product of vector signals. In Section 6, a low-complexity suboptimal estimator for composite nonlinear functions is proposed and recommended for the calculation of radar range-angle responses. In Section 7, the efficiency of the suboptimal estimator is demonstrated on a 2-D dynamical model. Finally, we conclude the paper in Section 8. The list of main notations is given in Table 1.

Problem Statement
The basic framework for the Kalman filter involves estimation of the state of a discrete-time linear dynamical system with additive Gaussian white noise:

x_{k+1} = F_k x_k + G_k v_k,  y_k = H_k x_k + w_k,  k = 0, 1, . . . ,  (1)

where x_k ∈ R^n is a state vector, y_k ∈ R^m is a measurement vector, and v_k ∈ R^r and w_k ∈ R^m are zero-mean Gaussian white noises with process (Q_k) and measurement (R_k) noise covariances, respectively, i.e., v_k ∼ N(0, Q_k), w_k ∼ N(0, R_k), and F_k ∈ R^{n×n}, G_k ∈ R^{n×r}, Q_k ∈ R^{r×r}, R_k ∈ R^{m×m}, and H_k ∈ R^{m×n}. The initial state is x_0 ∼ N(m_0, C_0), and the process and measurement noises v_k, w_k are mutually uncorrelated.
In parallel with the state-space model (1), consider a nonlinear function of the state vector:

z_k = f(x_k),  (2)

which in a particular case represents a distance metric in R^n.
Given the overall noisy measurements y^k = {y_1, y_2, . . . , y_k}, k ≥ 1, our goal is to derive optimal estimators x̂_k and ẑ_k for the state vector (1) and the nonlinear function (2), respectively.
There is a multitude of statistics-based methods to estimate the unknown value z_k = f(x_k) from the sensor measurements y^k. We focus on the MMSE approach, which minimizes the mean square error (MSE), min_{ẑ} E(‖z_k − ẑ_k‖_2^2), a common measure of estimator quality. The MMSE estimator is the conditional mean (expectation) of the unknown z_k = f(x_k) given the observed value of the measurements, ẑ_k^opt = E(z_k | y^k) [30,31]. The most challenging problem in the MMSE approach is how to calculate the conditional mean. In this paper, explicit formulas for distance metrics within the Kalman filtering framework are derived.

General Formula for Optimal Two-Stage MMSE Estimator
In this section, the optimal MMSE estimator for the general function f(x k ) of a state vector is proposed. It includes two stages: the optimal Kalman estimate of the state vector x k computed at the first stage is used at the second stage for estimation of f(x k ).
First stage (calculation of the Kalman estimate): the mean square estimate x̂_k = E(x_k | y^k) of the state x_k based on the measurements y^k and the error covariance P_k = Cov(e_k), e_k = x_k − x̂_k, are described by the recursive Kalman filter (KF) equations [30,31]:

x̂⁻_{k+1} = F_k x̂_k,  P⁻_{k+1} = F_k P_k F_k^T + G_k Q_k G_k^T,
K_{k+1} = P⁻_{k+1} H_{k+1}^T (H_{k+1} P⁻_{k+1} H_{k+1}^T + R_{k+1})^{−1},
x̂_{k+1} = x̂⁻_{k+1} + K_{k+1}(y_{k+1} − H_{k+1} x̂⁻_{k+1}),
P_{k+1} = (I_n − K_{k+1} H_{k+1}) P⁻_{k+1},  (3)

where x̂⁻_{k+1} and P⁻_{k+1} are the time-update estimate and error covariance, respectively, and K_k ∈ R^{n×m} is the filter gain matrix.
Second stage (optimal MMSE estimator): next, the optimal MMSE estimate of the nonlinear function z_k = f(x_k) based on the measurements y^k also represents a conditional mean, that is,

ẑ_k^opt = E(f(x_k) | y^k),  (4)

ẑ_k^opt = ∫_{R^n} f(x) p(x | y^k) dx,  (5)

where p(x | y^k) = N(x̂_k, P_k) is a multivariate conditional Gaussian probability density function.
Thus, the best estimate in equation (4) represents the optimal MMSE estimator, ẑ_k^opt = F(x̂_k, P_k), which depends on the Kalman estimate x̂_k and error covariance P_k determined by the KF equations (3).
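The two-stage scheme above can be sketched in code for a scalar model. The filter parameters (F = 1, H = 1, q = 0.01, r = 0.04) and the example second-stage function E(x² | y^k) = x̂² + P are illustrative assumptions, not values from the paper:

```python
def kalman_step(x_prev, P_prev, y, F=1.0, H=1.0, q=0.01, r=0.04):
    # Time update (prediction)
    x_pred = F * x_prev
    M = F * P_prev * F + q
    # Measurement update
    K = M * H / (H * M * H + r)
    x_new = x_pred + K * (y - H * x_pred)
    P_new = (1.0 - K * H) * M
    return x_new, P_new


def two_stage_estimate(x_prev, P_prev, y, second_stage):
    # Stage 1: Kalman estimate (x_hat, P) of the state
    x_hat, P = kalman_step(x_prev, P_prev, y)
    # Stage 2: closed-form MMSE estimate of f(x) computed from (x_hat, P)
    z_hat = second_stage(x_hat, P)
    return x_hat, P, z_hat


# Example second stage: E(x^2 | y^k) = x_hat^2 + P (quadratic-form case)
x_hat, P, z_hat = two_stage_estimate(0.0, 1.0, 0.5, lambda m, p: m * m + p)
```

Note that the second stage consumes only the pair (x̂, P) produced by the filter, which is exactly the structure of the closed-form estimators derived below.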
Remark 1 (closed-form MMSE estimator). In the general case, the calculation of the optimal estimate ẑ_k^opt = E(z_k | y^k) reduces to the calculation of the multivariate Gaussian integral (5). The drawback is that the integral cannot be calculated in explicit form for an arbitrary nonlinear function f(x). Analytical calculation of the integral (a closed-form MMSE estimator) is possible only in the special cases considered in the paper. The closed-form estimators for distance metrics in terms of x̂_k and P_k are proposed in Sections 4 and 5.
The Euclidean distance between two points x_1, x_2 ∈ R^n is defined as

d(x_1, x_2) = ‖x_1 − x_2‖_2.  (6)

In the particular case where x_1 and x_2 represent two points located on the 1-D line, the Euclidean distance reduces to the absolute value (see Figure 1), i.e.,

d(x_1, x_2) = |x_1 − x_2|.  (7)

In Section 4, the MMSE estimator for the absolute value is comprehensively studied.

Closed-Form MMSE Estimator for Absolute Value of a Linear Form
Lemma 1 (mean of the absolute value of a Gaussian variable). Let ℓ be a scalar Gaussian random variable with mean ℓ̂ and variance P^(ℓ). Then

E|ℓ| = ℓ̂ [2Φ(ℓ̂/√P^(ℓ)) − 1] + √(2P^(ℓ)/π) exp(−ℓ̂²/(2P^(ℓ))),  (8)

where Φ(·) is the cumulative distribution function of the standard normal distribution, N(0, 1). The derivation of equation (8) is given in the Appendix. Let ℓ = c^T x + d be a linear form (LF) of the normal random vector x ∈ R^n, and let x̂ ∈ R^n and P ∈ R^{n×n} be the MMSE estimate and error covariance, respectively. Then, the MMSE estimate of the linear form ℓ and its error variance can be calculated as

ℓ̂ = c^T x̂ + d,  P^(ℓ) = c^T P c,  (9)

and we have the following theorem.

Theorem 1 (MMSE estimator for absolute value of LF). Let x ∈ R^n be a normal random vector, and let x̂ ∈ R^n and P ∈ R^{n×n} be the MMSE estimate and error covariance, respectively. Then, the closed-form MMSE estimator for the absolute value z = |c^T x + d| is defined by formula (8):

ẑ^opt = ℓ̂ [2Φ(ℓ̂/√P^(ℓ)) − 1] + √(2P^(ℓ)/π) exp(−ℓ̂²/(2P^(ℓ))),  (10)

where ℓ̂ and P^(ℓ) are determined by equation (9).
The MMSE estimator (10) makes it possible to calculate distances measured in terms of the absolute value in n-dimensional space.
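The estimator of Theorem 1 is the mean of a folded normal distribution and is cheap to evaluate. A minimal pure-Python sketch (the function names and list-based linear algebra are illustrative choices, not from the paper):

```python
import math


def std_normal_pdf(t):
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)


def std_normal_cdf(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))


def abs_lf_mmse(c, d, x_hat, P):
    # MMSE estimate of z = |c^T x + d| for x ~ N(x_hat, P).
    # ell = c^T x_hat + d and s2 = c^T P c are the mean and variance of
    # the linear form; the result is the folded-normal mean.
    ell = sum(ci * xi for ci, xi in zip(c, x_hat)) + d
    s2 = sum(ci * sum(pij * cj for pij, cj in zip(row, c))
             for ci, row in zip(c, P))
    s = math.sqrt(s2)
    return ell * (2.0 * std_normal_cdf(ell / s) - 1.0) + 2.0 * s * std_normal_pdf(ell / s)
```

For ℓ̂ = 0 and P^(ℓ) = 1 this gives √(2/π) ≈ 0.7979, while for an estimate far from zero with a small variance it approaches the simple estimate |ℓ̂|, in line with Remark 2 below.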

Examples of MMSE Estimator for Distance between Points

Let x_k ∈ R^n be a normal state vector, and let x̂_k ∈ R^n and P_k ∈ R^{n×n} be the Kalman estimate and error covariance, respectively, P_k = [P_{ij,k}].
Example 1 (distance on 1-D line). The MMSE estimator for the distance z_k = |x_k − a_k| between the moving point (x_k) and a given sequence (a_k) on the 1-D line takes the form (10) with

ℓ̂_k = x̂_k − a_k,  P_k^(ℓ) = P_k.  (11)

Example 2 (distance between point and line). The shortest distance between the moving point M(x_k) = M(x_{1,k}, x_{2,k}) and the line L: Ax_1 + Bx_2 + C = 0 is shown in Figure 2. The distance is given by

d_k = |Ax_{1,k} + Bx_{2,k} + C| / √(A² + B²).  (12)

Substituting c^T = [A B]/√(A² + B²) and d = C/√(A² + B²) into equations (9) and (10), we get the MMSE estimator for the shortest distance (12):

ẑ_k^opt = ℓ̂_k [2Φ(ℓ̂_k/√P_k^(ℓ)) − 1] + √(2P_k^(ℓ)/π) exp(−ℓ̂_k²/(2P_k^(ℓ))),  (13)

where ℓ̂_k and P_k^(ℓ) are determined by equation (9):

ℓ̂_k = (A x̂_{1,k} + B x̂_{2,k} + C)/√(A² + B²),
P_k^(ℓ) = (A² P_{11,k} + 2AB P_{12,k} + B² P_{22,k})/(A² + B²).  (14)

The MMSE estimator (12)-(14) can be generalized to 3-D space.
Example 3 (distance between point and plane). Similar to equation (12), the shortest distance between the moving point M(x_k) = M(x_{1,k}, x_{2,k}, x_{3,k}) and the plane P: Ax_1 + Bx_2 + Cx_3 + D = 0,

d_k = |Ax_{1,k} + Bx_{2,k} + Cx_{3,k} + D| / √(A² + B² + C²),  (15)

is shown in Figure 3. Substituting c^T = [A B C]/√(A² + B² + C²) and d = D/√(A² + B² + C²) into equations (9) and (10), we get

ẑ_k^opt = ℓ̂_k [2Φ(ℓ̂_k/√P_k^(ℓ)) − 1] + √(2P_k^(ℓ)/π) exp(−ℓ̂_k²/(2P_k^(ℓ))),  (16)

where ℓ̂_k and P_k^(ℓ) are determined by equation (9):

ℓ̂_k = (A x̂_{1,k} + B x̂_{2,k} + C x̂_{3,k} + D)/√(A² + B² + C²),  P_k^(ℓ) = c^T P_k c.  (17)

The MMSE distance estimators in Theorem 1 and Examples 1∼3 are summarized in Table 2.

Numerical Examples.
In this section, numerical examples demonstrate the accuracy of the two closed-form estimators calculated for the absolute value z = |c^T x|. The optimal MMSE estimator ẑ^opt is compared with the simple suboptimal one ẑ^sub = |c^T x̂|.

Estimation of Distance between Random Location and Given Point in 1-D Line.

Let x_k be a scalar random position measured in additive white noise; then, the system model is

x_{k+1} = x_k + v_k,  y_k = x_k + w_k,  x_0 = m_0,  (18)

where m_0 is the known initial condition and v_k ∼ N(0, q) and w_k ∼ N(0, r) are uncorrelated white Gaussian noises. The KF equations (3) give the following:

x̂_{k+1} = x̂_k + K_{k+1}(y_{k+1} − x̂_k),  K_{k+1} = (P_k + q)/(P_k + q + r),  P_{k+1} = (1 − K_{k+1})(P_k + q).  (19)

Consider the distance between x_k and a known point a, i.e., z_k = |x_k − a|. Then, the optimal MMSE estimate of the distance is defined by (10). Further, we are interested in the special case in which a = 0 and z_k = |x_k|. In this case, formula (10) gives the optimal estimate of the distance between the current position x_k and the origin, i.e.,

ẑ_k^opt = x̂_k [2Φ(β_k) − 1] + √(2P_k/π) exp(−α_k),  α_k = x̂_k²/(2P_k),  β_k = x̂_k/√P_k.  (20)

In parallel to the optimal estimate (20), consider the simple suboptimal estimate ẑ_k^sub = |x̂_k|.

Remark 2.
Reviewing formula (20), we find the following. If the values of α_k and |β_k| are large (α_k, |β_k| ≫ 1), then the optimal and suboptimal estimates are close. Assuming the estimate x̂_k is far enough from zero, the largeness of the functions α_k and β_k depends on the error variance P_k. Using (19), the steady-state value of the variance P_∞ satisfies the quadratic equation

P_∞² + q P_∞ − qr = 0.  (21)

Since the variance P_∞ = P_∞(q, r) depends on the noise statistics q and r, this fact can be used in practice to compare the proposed estimators. For example, if the estimate x̂_k is far enough from zero and the product rq is small (rq ≪ 1), then P_∞ ≈ 0 and α_k, |β_k| ≫ 1. In this case, both estimators are close, ẑ_k^opt ≈ ẑ_k^sub. Simulation results confirm this. Next, we test the efficiency of the proposed estimators. The estimators are compared under different values of the noise variances q and r (the scenarios are listed in Table 3). Both estimators were run with the same random noises for a fair comparison. Monte Carlo simulation with 1000 runs was applied in the calculation of the root mean square errors (RMSEs). The simulation results are illustrated in Table 3 and Figures 4∼7.
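The steady-state argument is easy to verify numerically. Assuming the scalar random-walk model of this subsection, the steady-state filtered variance solves P∞² + qP∞ − qr = 0, which can be cross-checked against fixed-point iteration of the variance recursion P_{k+1} = (P_k + q)r/(P_k + q + r); the function name below is an illustrative choice:

```python
import math


def steady_state_variance(q, r):
    # Positive root of P^2 + q*P - q*r = 0 (steady state of the scalar
    # random-walk Kalman filter variance recursion)
    return 0.5 * (-q + math.sqrt(q * q + 4.0 * q * r))


# Cross-check by iterating the filtered-variance recursion
# P_{k+1} = (P_k + q) * r / (P_k + q + r) to convergence.
q, r = 1e-4, 0.01
P = 1.0
for _ in range(20000):
    P = (P + q) * r / (P + q + r)
```

With q = 10⁻⁴ and r = 0.01 (so rq ≪ 1), the steady-state variance is on the order of 10⁻³, which is the regime where the optimal and suboptimal estimates nearly coincide.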
In Case 1, both zero and nonzero initial conditions are of interest. At m_0 = 0 and q = 10^−4, the signal x_k and its estimate x̂_k are close to zero, and P_k ≈ 0.001 at k > 8. In this case, the values of α_k and β_k are not large; therefore, the optimal and suboptimal estimates differ, as shown in Figure 4 and confirmed by the values R(z^opt) = 0.0213 and R(z^sub) = 0.0327 in Table 3. At m_0 = 1 and q = 10^−4, the estimate x̂_k is far enough from zero, and P_k ≈ 0.001. According to Remark 2, the optimal and suboptimal estimates are approximately equal, ẑ_k^opt ≈ ẑ_k^sub, as shown in Figure 5. The equal values R(z^opt) = R(z^sub) = 0.0334 confirm this.
In Cases 2 and 3, the variance P_∞ is not small; therefore, the initial condition m_0 does not play a significant role in comparing the two estimators. In these cases, the optimal estimator ẑ_k^opt performs better than the simple suboptimal one ẑ_k^sub = |x̂_k|. Typical graphs are shown in Figures 6 and 7, and the values R(z^opt) and R(z^sub) in Table 3 confirm that conclusion.
Thus, the simulation results in Section 4.3.1 show that the optimal estimator is suitable for practical applications.
Table 2: MMSE distance estimators.
Absolute value of linear form, z = |c^T x + d| — equations (9) and (10)
Distance between two points on 1-D line, z = |x − a|
Distance between point M and line L in 2-D plane — equation (14)
Distance between point M and plane P in 3-D space

Estimation of Distance between Two Random Points in 1-D Line.
Consider the motion of two random points A_1(x_1) and A_2(x_2) on a 1-D line. Assume that the evolution of the state vector x_k = [x_{1,k} x_{2,k}]^T from time t_k to t_{k+1} is defined by the random walk model:

x_{1,k+1} = x_{1,k} + v_{1,k},  x_{2,k+1} = x_{2,k} + v_{2,k},  x_{1,0} = m_1,  x_{2,0} = m_2,  (22)

where m_1 and m_2 are the known initial conditions and v_{1,k} ∼ N(0, q_1) and v_{2,k} ∼ N(0, q_2) are uncorrelated white Gaussian noises. Assuming we measure the true positions of the points with correlated measurement white noises w_1 and w_2, respectively, the measurement equations are

y_{1,k} = x_{1,k} + w_{1,k},  y_{2,k} = x_{2,k} + w_{2,k},  w_{1,k} ∼ N(0, r_1),  w_{2,k} ∼ N(0, r_2),  (23)

where E(w_{1,k} w_{2,k}) = r_12.
Our goal is to estimate the unknown distance d(A 1 , A 2 ) � |x 1,k − x 2,k | between the current location of the points A 1 (x 1,k ) and A 2 (x 2,k ).
According to the proposed two-stage estimation procedure, the optimal Kalman estimate x̂_k = [x̂_{1,k} x̂_{2,k}]^T and error covariance P_k = [P_{ij,k}], P_0 = I_2, computed at the first stage are used at the second stage for estimation of the distance z_k = |x_{1,k} − x_{2,k}|. Using formulas (9) and (10) for c^T = [1 −1] and d = 0, we obtain the best MMSE estimate for the distance:

ẑ_k^opt = ℓ̂_k [2Φ(ℓ̂_k/√P_k^(ℓ)) − 1] + √(2P_k^(ℓ)/π) exp(−ℓ̂_k²/(2P_k^(ℓ))),
ℓ̂_k = x̂_{1,k} − x̂_{2,k},  P_k^(ℓ) = P_{11,k} − 2P_{12,k} + P_{22,k}.  (24)

In parallel with the optimal distance estimator (24), we consider the simple suboptimal estimator ẑ_k^sub = |x̂_{1,k} − x̂_{2,k}|.

Remark 3.
As we see, the optimal estimate ẑ_k^opt of the distances in (20) and (24) depends on the functions α_k and β_k. The functions in formulas (20) and (24) are calculated at the pairs of points (x̂_k, P_k) and (ℓ̂_k, P_k^(ℓ)), respectively. The second pair depends on the state estimate x̂_k = [x̂_{1,k} x̂_{2,k}]^T and error covariance P_k = [P_{ij,k}]. Therefore, Remark 2 is also valid for models (22) and (23). For example, if the estimate ℓ̂_k = x̂_{1,k} − x̂_{2,k} is far enough from zero and the variance P_k^(ℓ) = E(ℓ_k − ℓ̂_k)² is small, then ẑ_k^opt ≈ ẑ_k^sub. The simulation results in Figure 8 with P_k^(ℓ) = 0.0015, k > 8, and the very close values of the average RMSEs, R(z^opt) = 0.0189 and R(z^sub) = 0.0189, confirm this fact.
In addition, we are interested in the following new scenarios. Case 1: both points A_1(x_1) and A_2(x_2) are fixed, and their positions are measured with small noises. Case 2: the first point A_1(x_1) is fixed, but the movement of the second one, A_2(x_2), is subject to a small noise. Case 3: the movement of both points is subject to a medium noise. The model parameters and simulation results for these scenarios are given in Table 4. From Table 4, we observe a strong difference between the average RMSEs R(z^opt) and R(z^sub), i.e., R(z^opt) < R(z^sub). It is not a surprise that the optimal estimator (24) is better than the suboptimal one, ẑ_k^sub = |x̂_{1,k} − x̂_{2,k}|.

Optimal Closed-Form MMSE Estimator for Quadratic Form

Consider a quadratic form (QF) of the state vector x_k ∈ R^n:

z_k = x_k^T A_k x_k.  (25)

In this case, the optimal MMSE estimator (4) can be explicitly calculated in terms of the Kalman estimate x̂_k and error covariance P_k.

Theorem 2 (MMSE estimator for QF). Let x_k ∈ R^n be a normal random vector, and let x̂_k ∈ R^n and P_k ∈ R^{n×n} be the Kalman estimate and error covariance, respectively. Then, the optimal MMSE estimator for the QF z_k = x_k^T A_k x_k has the following closed-form structure:

ẑ_k^opt = x̂_k^T A_k x̂_k + tr(A_k P_k).  (26)

Proof. Using the formulas x^T A x = tr(A x x^T) and E(x_k x_k^T | y^k) = x̂_k x̂_k^T + P_k, we obtain

ẑ_k^opt = E(z_k | y^k) = tr[A_k E(x_k x_k^T | y^k)] = tr[A_k (x̂_k x̂_k^T + P_k)] = x̂_k^T A_k x̂_k + tr(A_k P_k).  (27)

In parallel to the optimal quadratic estimator (26), we consider the simple suboptimal estimator, denoted ẑ_k^sub, which is obtained by direct calculation of the QF at the point x̂_k:

ẑ_k^sub = x̂_k^T A_k x̂_k.  (28)

The simple estimator (28) depends only on the Kalman estimate x̂_k and does not require the KF error covariance P_k, in contrast to the optimal one (26). The following result compares the estimation accuracy of the optimal and suboptimal quadratic estimators.

Lemma 2 (difference between MSEs for quadratic estimators).
The difference between the true MSEs P_{z,k}^opt = E(z_k − ẑ_k^opt)² and P_{z,k}^sub = E(z_k − ẑ_k^sub)² of the optimal and simple suboptimal quadratic estimators is tr²(A_k P_k).
Proof. Using the fact that the MMSE estimator is unbiased, E(z_k − ẑ_k^opt) = 0, and the equality ẑ_k^sub = ẑ_k^opt − tr(A_k P_k), we obtain

P_{z,k}^sub = E[z_k − ẑ_k^opt + tr(A_k P_k)]² = P_{z,k}^opt + 2 tr(A_k P_k) E(z_k − ẑ_k^opt) + tr²(A_k P_k) = P_{z,k}^opt + tr²(A_k P_k).  (29)

Let us illustrate Theorem 2 and Lemma 2 with the example of the squared norm of a random vector, z_k = ‖x_k‖² = x_k^T x_k. Then A_k = I_n, and the quadratic estimators and the difference between their MSEs take the form

ẑ_k^opt = ‖x̂_k‖² + tr(P_k),  ẑ_k^sub = ‖x̂_k‖²,  δ_k = P_{z,k}^sub − P_{z,k}^opt = tr²(P_k).  (30)

We see that the difference δ_k = tr²(P_k) depends on the quality of the KF data processing (3).
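The quadratic estimator (26) and the plug-in estimator (28) are one-liners once (x̂, P) is available. A pure-Python sketch (helper names are illustrative):

```python
def mat_vec(A, x):
    # Matrix-vector product for list-of-lists matrices
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]


def qf_mmse(A, x_hat, P):
    # Optimal MMSE estimate of z = x^T A x for x ~ N(x_hat, P):
    # x_hat^T A x_hat + tr(A P)
    quad = sum(xi * yi for xi, yi in zip(x_hat, mat_vec(A, x_hat)))
    tr_AP = sum(sum(A[i][k] * P[k][i] for k in range(len(P)))
                for i in range(len(A)))
    return quad + tr_AP


def qf_simple(A, x_hat):
    # Suboptimal plug-in estimate that ignores the error covariance
    return sum(xi * yi for xi, yi in zip(x_hat, mat_vec(A, x_hat)))


# Squared-norm special case (A = I): estimate is ||x_hat||^2 + tr(P)
A = [[1.0, 0.0], [0.0, 1.0]]
P = [[0.1, 0.0], [0.0, 0.2]]
x = [1.0, 2.0]
```

The gap between the two estimators is exactly tr(A P), matching Lemma 2's MSE difference tr²(A_k P_k).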

Optimal Closed-Form MMSE Estimator for Bilinear Form.
Let x_k ∈ R^n and x̄_k ∈ R^n be two state vectors. Then, a bilinear form (BLF) on the state space can be written as follows:

u_k = x_k^T A_k x̄_k.  (31)

Note that a BLF can be written as a QF in the stacked vector X_k = [x_k^T x̄_k^T]^T ∈ R^{2n}. In this case,

u_k = X_k^T B_k X_k,  B_k = (1/2) [0  A_k; A_k^T  0].  (32)

For the QF (32), the optimal bilinear estimator can be explicitly calculated in terms of the Kalman estimate X̂_k ∈ R^{2n} and the block error covariance matrix P_k ∈ R^{2n×2n}:

P_k = [P_xx,k  P_xx̄,k; P_x̄x,k  P_x̄x̄,k],  (33)

where P_xx̄,k = Cov(e_x,k, e_x̄,k) is the cross covariance between the estimation errors e_x,k = x_k − x̂_k and e_x̄,k = x̄_k − x̂̄_k. Applying Theorem 2 to the QF u_k = X_k^T B_k X_k and taking into consideration the block structure of the matrix B_k, we have the following.

Theorem 3 (MMSE estimator for BLF). Let X_k = [x_k^T x̄_k^T]^T ∈ R^{2n} be a joint normal random vector, and let X̂_k ∈ R^{2n} and P_k ∈ R^{2n×2n} be the Kalman estimate and the block error covariance matrix (33). Then, the optimal MMSE estimator for the BLF u_k = x_k^T A_k x̄_k has the following closed-form structure:

û_k^opt = x̂_k^T A_k x̂̄_k + tr(A_k P_x̄x,k).  (34)

Example 4 (estimation of inner product and squared Euclidean distance). Using the bilinear estimator (34) with A_k = I_n, the MMSE estimator for the inner product u_k = x_k^T x̄_k takes the form

û_k^opt = x̂_k^T x̂̄_k + tr(P_xx̄,k).  (35)
Next, we calculate the optimal MMSE estimator for the squared Euclidean distance between two points, z_k = d²(x_k, x̄_k) = ‖x_k − x̄_k‖_2², or z_k = ‖η_k‖_2², where η_k = x_k − x̄_k. The Kalman estimate and error covariance of the difference η_k take the form

η̂_k = x̂_k − x̂̄_k,  P_k^(η) = P_xx,k − P_xx̄,k − P_x̄x,k + P_x̄x̄,k.  (36)

Applying the quadratic estimator (26) with A_k = I_n, we obtain the MMSE estimator for the squared Euclidean distance:

ẑ_k^opt = ‖x̂_k − x̂̄_k‖_2² + tr(P_k^(η)).  (37)

The MMSE estimators for bilinear and quadratic forms are summarized in Table 5.
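The inner-product estimator (34) (with A_k = I_n) and the squared-distance estimator (37) can be sketched in pure Python; the helper names are illustrative choices:

```python
def trace(M):
    return sum(M[i][i] for i in range(len(M)))


def inner_product_mmse(x_hat, xbar_hat, P_cross):
    # E(<x, xbar> | y^k) = <x_hat, xbar_hat> + tr(P_cross),
    # where P_cross is the error cross-covariance between x and xbar
    return sum(a * b for a, b in zip(x_hat, xbar_hat)) + trace(P_cross)


def sq_distance_mmse(x_hat, xbar_hat, Pxx, Pbb, Pxb):
    # E(||x - xbar||^2 | y^k) = ||x_hat - xbar_hat||^2 + tr(P_eta),
    # where P_eta = Pxx - Pxb - Pxb^T + Pbb is the covariance of x - xbar;
    # only its diagonal is needed for the trace
    n = len(x_hat)
    eta = [a - b for a, b in zip(x_hat, xbar_hat)]
    tr_eta = sum(Pxx[i][i] - 2.0 * Pxb[i][i] + Pbb[i][i] for i in range(n))
    return sum(e * e for e in eta) + tr_eta
```

When the two estimation errors are uncorrelated (zero cross covariance), the squared-distance estimate reduces to ‖x̂ − x̂̄‖² plus the sum of both error variances.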

Practical Usefulness of Squared Euclidean Distance.
In many practical problems, for example, finding the shortest distance from a point to a curve, min_{x,x̄∈M} d(x, x̄), or comparing a distance with a threshold value, d(x, x̄) ≷ ε, there is no need to calculate the original Euclidean distance; it suffices to calculate its square, due to the equivalence of the problems:

min d(x, x̄) ⇔ min d²(x, x̄),  d(x, x̄) ≷ ε ⇔ d²(x, x̄) ≷ ε².

In such situations, the optimal quadratic estimator (37) for the squared Euclidean distance can be successfully used.
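As a sketch, the threshold comparison can be performed directly on the optimal squared-distance estimate; the function name and arguments are hypothetical, assuming the estimator of (37) with a known (deterministic) reference point:

```python
def exceeds_threshold(x_hat, x_ref, P_eta, D):
    # Compare the MMSE estimate of the squared distance,
    # z_opt = ||x_hat - x_ref||^2 + tr(P_eta), with the squared threshold D^2.
    # x_ref is a known reference point; P_eta is the covariance of the error
    # in the difference x - x_ref.
    e = [a - b for a, b in zip(x_hat, x_ref)]
    z_opt = sum(v * v for v in e) + sum(P_eta[i][i] for i in range(len(P_eta)))
    return z_opt > D * D
```

Note that the tr(P_eta) correction makes the test more conservative than plugging in the raw estimate: a large filter uncertainty can push the estimated squared distance above the threshold even when ‖x̂ − x_ref‖ itself is below it.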

Example 5 (deviation of normal and nominal trajectories).

Suppose that the piecewise feedback control law U*_k depends on the difference between the normal (x_k) and nominal (x_k^n) trajectories. For example, it is given by a piecewise rule that switches the control action when the Euclidean distance d(x_k, x_k^n) crosses the distance threshold D (see Figure 9).

In view of the above, we rewrite the control law in the equivalent form, in which the squared Euclidean distance d²(x_k, x_k^n) is compared with the new threshold D².

Using the quadratic estimator (37) for the squared distance z_k = d²(x_k, x_k^n), we obtain the MMSE estimator ẑ_k^opt = (x̂_k − x_k^n)^T (x̂_k − x_k^n) + tr(P_k), which can be used in the control law above. In the next section, we discuss the application of the linear, bilinear, and quadratic estimators (Theorems 1-3) to the estimation of composite nonlinear functions.

Definition of Composite Function.
Consider a composite function F depending on LFs, QFs, and BLFs:

z = F(g_1(x), . . . , g_h(x)),  (41)

where the inside functions are linear, quadratic, or bilinear forms of the state vector:

g_i(x) = c_i^T x + d_i,  g_j(x) = x^T A_j x,  g_l(x, x̄) = x^T A_l x̄.  (42)

Example 6 (composite and inside functions in object tracking). Let x ∈ R^6 be an object state vector consisting of the position (p_x, p_y, p_z) and corresponding velocity (v_x, v_y, v_z) components in the Cartesian coordinates (x, y, z), i.e.,

x = [p_x p_y p_z v_x v_y v_z]^T.  (43)

Assume that a Doppler radar is located at the origin of the Cartesian coordinates and measures the following quantities in spherical coordinates, obtained via nonlinear composite functions F_i(g_1(x), . . . , g_h(x)) of the state components depending on LFs, QFs, and BLFs:

d = √(p_x² + p_y² + p_z²),  θ = arctan(p_y/p_x),  φ = arctan(p_z/√(p_x² + p_y²)),  ḋ = (p_x v_x + p_y v_y + p_z v_z)/d,  (44)

where d is the range (distance), θ is the bearing angle, φ is the elevation angle, and ḋ is the range rate.

Suboptimal Estimator for Composite Functions.
Given the Kalman estimate and covariance (x̂, P), we estimate a quantity obtained via the composite function F(x) = F(g_1(x), . . . , g_h(x)). The idea of the algorithm is based on the optimal MMSE estimators for LF, QF, and BLF proposed in equations (10), (26), and (34), respectively. We have

For an LF g_i(x) = c^T x: ĝ_i(x̂, P) = c^T x̂;
For a QF g_i(x) = x^T A x: ĝ_i(x̂, P) = x̂^T A x̂ + tr(AP);
For a BLF g_i(x, x̄) = x^T A x̄: ĝ_i(x̂, P) = x̂^T A x̂̄ + tr(A P_x̄x).  (45)

Replacing the unknown inside functions g_i(x) with the corresponding optimal estimates (45), we obtain the novel suboptimal estimator for the composite function z = F(g_1, . . . , g_h), i.e.,

ẑ^comp = F(ĝ_1(x̂, P), . . . , ĝ_h(x̂, P)).  (46)

Example 7 (estimation of the cosine of an angle). Let X = [x^T x̄^T]^T ∈ R^{2n} be a joint normal state vector, and let

X̂ = [x̂^T x̂̄^T]^T,  P = [P_xx  P_xx̄; P_x̄x  P_x̄x̄]  (47)

be the Kalman estimate and block error covariance. The cosine of the angle between two vectors x, x̄ ∈ R^n is equal to

cos ∠(x, x̄) = 〈x, x̄〉 / (‖x‖ ‖x̄‖).  (48)

We observe that the ratio (48) represents the composite function z = F(g_0, g_1, g_2) depending on the three inside functions g_1 = x^T x, g_2 = x̄^T x̄, and g_0 = 〈x, x̄〉:

z = g_0 / √(g_1 g_2).  (49)

The optimal MMSE estimators for the inside functions g_0, g_1, and g_2 are known. Using equation (45), we have

ĝ_0 = x̂^T x̂̄ + tr(P_xx̄),  ĝ_1 = x̂^T x̂ + tr(P_xx),  ĝ_2 = x̂̄^T x̂̄ + tr(P_x̄x̄).  (50)

Replacing the inside functions g_i with their estimates ĝ_i, we get the suboptimal estimator for the cosine of the angle:

ẑ^comp = ĝ_0 / √(ĝ_1 ĝ_2).  (51)

A numerical example illustrates the applicability of all the estimators proposed in the paper.
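Example 7's plug-in scheme can be sketched as follows (pure Python; the function name is illustrative): each inside function, i.e., the two squared norms and the inner product, is replaced by its closed-form MMSE estimate before forming the ratio.

```python
import math


def cos_angle_plugin(x_hat, xbar_hat, Pxx, Pbb, Pxb):
    # Suboptimal composite estimate of cos(angle(x, xbar)):
    # g0 = <x, xbar>, g1 = ||x||^2, g2 = ||xbar||^2 are replaced by their
    # MMSE estimates (mean plus trace of the relevant covariance block),
    # then the composite z = g0 / sqrt(g1 * g2) is formed.
    n = len(x_hat)
    g0 = sum(a * b for a, b in zip(x_hat, xbar_hat)) + sum(Pxb[i][i] for i in range(n))
    g1 = sum(a * a for a in x_hat) + sum(Pxx[i][i] for i in range(n))
    g2 = sum(b * b for b in xbar_hat) + sum(Pbb[i][i] for i in range(n))
    return g0 / math.sqrt(g1 * g2)
```

Because tr(P_xx) and tr(P_x̄x̄) inflate the denominator, the plug-in estimate is pulled toward zero when the filter is uncertain, and it approaches the naive cosine of the estimated vectors as the error variances vanish.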

Numerical Example: Motion in a Plane
In this section, we estimate the range and bearing angle for the 2-D motion of an object. Because of the difficulty of obtaining analytical closed-form expressions for the optimal range and bearing estimators, we apply the simple estimator (ẑ^sim) and the estimator based on composite functions (ẑ^com). In addition, we are interested in the angle between the two state vectors x_{k−1} and x_k at time instants t_{k−1} and t_k, respectively, φ_k ≜ ∠(x_{k−1}, x_k).

Suboptimal Estimators for Range-Angle Response.

The example of Section 4.3.2 is considered again. Consider the 2-D models (22) and (23) describing the motion of the two random points A_1(x_{1,k}) and A_2(x_{2,k}). To calculate the range (d_k), the tangent of the bearing angle (θ_k), and the cosine of the angle (φ_k), we use the following formulas:

d_k = f(x_k) = √(x²_{1,k} + x²_{2,k}),  tan θ_k = h(x_k) = x_{2,k}/x_{1,k},  cos φ_k = g(x_{k−1}, x_k) = 〈x_{k−1}, x_k〉/(‖x_{k−1}‖ ‖x_k‖).  (52)

The following estimators for the range-angle responses (52) are illustrated and compared: (1) the simple estimator (53), in which the state x_k in (52) is replaced by its Kalman estimate x̂_k, giving f̂_k^sim, ĥ_k^sim, and ĝ_k^sim; (2) the estimator for composite functions (54), in which the squared norms and inner products inside (52) are replaced by their MMSE estimates (45), giving f̂_k^com, ĥ_k^com, and ĝ_k^com; the simple and composite estimators for the bearing angle coincide. In equation (54), P^(1)_{k−1,k} = E(e_{1,k−1} e_{1,k}) and P^(2)_{k−1,k} = E(e_{2,k−1} e_{2,k}) are the error cross-covariances satisfying recursions driven by the KF gains (55). In equations (53)-(55), the values x̂_{i,k}, K_k^(i), and P_{ii,k} = E(e²_{i,k}) represent the Kalman estimate, KF gain, and error variance of e_{i,k} = x_{i,k} − x̂_{i,k}, respectively.

Simulation Results.
The simple and composite estimators were run with the same random noises for a fair comparison. Monte Carlo simulation with 1000 runs was applied in the calculation of the RMSEs for the range (d_k), the bearing angle (θ_k), and the angle between state vectors (φ_k). Figures 10-12 show the range and angle estimates for the model parameters in equations (22) and (23), with m_1 = 0.1, m_2 = −0.1, P_0 = I_2, q_1 = 0.2, q_2 = 0.3, r_1 = 0.05, r_2 = 0.1, and r_12 = 0. The following observations about the relative performance of the above estimators can be made. (1) The composite range estimator f̂_k^com is superior to the simple one f̂_k^sim. This is due to the fact that the MMSE estimate ẑ_k^opt = (x̂²_{1,k} + P_{11,k}) + (x̂²_{2,k} + P_{22,k}) of the squared norm z_k = ‖x_k‖² = x²_{1,k} + x²_{2,k} contains the error variances P_{11,k} and P_{22,k} as additional terms. If the variances tend to zero, P_{ii,k} → 0, then the range estimators converge, i.e., f̂_k^com ≈ f̂_k^sim. (2) Figure 11 shows the true value of the tangent of the bearing angle h(x_k) = x_{2,k}/x_{1,k} and its estimate. (3) The models (22) and (23) were used to check the performance of the estimators ĝ_k^sim and ĝ_k^com. The true cosine value g_k is shown in Figure 12 for comparison with the estimated values.
For a detailed consideration of the proposed estimators, we divide the whole time interval into two subintervals, I_1 = [1; 6] and I_2 = [7; 20]. From Figure 12, we observe that on the first subinterval the estimate ĝ_k^com is better than ĝ_k^sim, while on the second one the difference between them is negligible. This is also confirmed by the values of R(g_k) presented in Table 6. Note that both estimators ĝ_k^sim and ĝ_k^com are based on the MMSE estimators for a squared norm and inner product. Therefore, the difference between them becomes small if the KF error variances P_{ii,k} are small (see (c) in equations (53) and (54)). In our case, the steady-state values of the variances are P_{11,k} = 0.0214 and P_{22,k} = 0.0101, k > 8.

Conclusion
In this paper, we propose a novel MMSE approach for the estimation of distance metrics within the Kalman filtering framework. The main contributions of the paper are listed in the following.
Firstly, an optimal two-stage MMSE estimator for an arbitrary nonlinear function of a state vector is proposed. The distance metric is an important practical case of such nonlinearities, and a detailed study of it is given in the paper. Implementation of the MMSE estimator reduces to the calculation of a multivariate Gaussian integral. To avoid the difficulties associated with its calculation, the concept of a closed-form estimator depending on the Kalman filter statistics is introduced. We establish relations between the Euclidean metrics and the closed-form estimator, which lead to the simple compact formulas for real-life distances between points presented in Table 2.
Secondly, an important class of bilinear and quadratic estimators is comprehensively studied. These estimators are applied to the square of a norm, the Euclidean distance, and the inner product; Table 5 summarizes the results. Moreover, an effective low-complexity suboptimal estimator for nonlinear composite functions is developed using the MMSE bilinear and quadratic estimators. As shown in Section 6.1, radar tracking range-angle responses are described by such composite functions.
Simulation and experimental results show that the proposed estimators perform significantly better than the existing suboptimal distance or angle estimators, such as the simple estimator defined in the paper. The low-complexity estimator developed in Section 6.1 is quite promising for radar data processing. The numerical results also confirm that the more accurate the Kalman estimate of the state vector, the more accurate the resulting range and angle estimates.

Appendix
Proof of Lemma 1. The derivation of formula (8): direct calculation of the Gaussian integral gives

E|ℓ| = ∫_{−∞}^{∞} |t| N(t; ℓ̂, P^(ℓ)) dt = ∫_0^{∞} t N(t; ℓ̂, P^(ℓ)) dt − ∫_{−∞}^0 t N(t; ℓ̂, P^(ℓ)) dt = I_1 + I_2.  (A.1)

To calculate (A.1), we start with the first integral I_1. Setting σ = √P^(ℓ), we obtain

I_1 = ∫_0^{∞} t N(t; ℓ̂, σ²) dt = ℓ̂ Φ(ℓ̂/σ) + σ φ(ℓ̂/σ),  (A.2)

where φ(·) is the standard normal density. A similar technique can be used to find the second integral I_2:

I_2 = −∫_{−∞}^0 t N(t; ℓ̂, σ²) dt = −ℓ̂ Φ(−ℓ̂/σ) + σ φ(ℓ̂/σ),  (A.3)

and finally,

E|ℓ| = I_1 + I_2 = ℓ̂ [2Φ(ℓ̂/σ) − 1] + 2σ φ(ℓ̂/σ) = ℓ̂ [2Φ(ℓ̂/√P^(ℓ)) − 1] + √(2P^(ℓ)/π) exp(−ℓ̂²/(2P^(ℓ))).  (A.4)

This completes the derivation of equation (8).

Data Availability
The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.