Position and Velocity Estimation Using TOA and FOA Based on Lagrange Programming Neural Network

This paper addresses the problem of estimating the position and velocity of a moving source from time-of-arrival (TOA) and frequency-of-arrival (FOA) measurements. Since the underlying estimation problem is highly non-linear and non-convex, we propose to solve it with a neural circuit, namely the Lagrange programming neural network (LPNN) framework. LPNN offers fast convergence and robustness against high noise levels, two advantages that have recently attracted considerable attention. Because LPNN solves constrained optimization problems, we first reformulate the original non-linear and non-convex maximum likelihood (ML) problem by introducing additional variables and constraints, and then build a neural network based on the LPNN framework. The convergence and stability of the proposed neural network are proved mathematically and then verified by numerical experiments. Unlike conventional numerical algorithms, the analog neural network can perform real-time computation, which is especially valuable in applications with limited computational resources. The simulation results confirm the convergence and stability of the proposed LPNN model and show that it achieves higher localization accuracy than competing numerical algorithms.


1. Introduction

In recent years, passive source localization has attracted much attention owing to its wide applications in communications, radar and other fields [1][2][3]. Among the positioning parameters in common use, time-of-arrival (TOA) measurements can locate a stationary radiation source with high precision [1,2]. When there is relative motion between the receiving sensors and the radiation source, frequency-of-arrival (FOA) measurements can be combined with the TOAs to estimate the source position and velocity. Theoretically, these two types of parameters can be obtained simultaneously when the carrier frequency of the signal is known and clock synchronization between the receivers and the radiation source is achieved [2]. Under these two assumptions, the hybrid of TOA and FOA measurements improves the localization accuracy [2]. Owing to these advantages, joint TOA and FOA positioning has been investigated intensively for geolocation with communication satellites, direct position determination and sensor-network positioning [1,2].
It is not easy to solve the hybrid TOA and FOA localization problem due to its high nonlinearity and non-convexity. Therefore, several types of methods have been developed to tackle such nonlinear and non-convex problems [2][3][4][5][6][7]. The author of [3] proposes an iterative Taylor-series method to solve this problem. In general, iterative methods are computationally expensive. To alleviate this shortcoming, algorithms with closed-form solutions have been developed based on the least squares criterion [4,5]. However, when the measurement noise in the received signal is large, the second-order noise terms ignored by these two methods significantly degrade their positioning performance. To obtain the optimal solution at high noise levels, convex optimization techniques have been introduced [6,7]. But as shown in [7], the estimated result is sub-optimal if the relaxation applied to the original ML problem is not appropriate. More recently, a Monte Carlo importance sampling algorithm based on the Pincus theorem was designed in [2] to generate an approximate global solution. However, this algorithm only yields an optimal solution at moderate noise levels.
Unlike the numerical algorithms listed above, neural circuits have attracted much attention since they can achieve real-time computation through hardware implementation [8]. Among the series of neural networks proposed successively [8][9][10], the Lagrange programming neural network (LPNN) can solve a variety of constrained optimization problems, and thus it has been applied in many fields [11][12][13][14][15][16][17], including source localization [13][14][15][16][17]. The authors of [13] and [17] first propose using TOA (or TDOA) measurements within the LPNN framework for localization when only measurement noise is considered. This method was then extended to handle outlier data [14] and more sophisticated TOA positioning scenarios [15,16]. These previous studies show the excellent performance of LPNN, but they do not consider the case where the radiation source and receivers are moving. Besides, the hybrid of TOA and FOA can be used to improve the positioning accuracy, as mentioned before.
Therefore, based on the LPNN framework, we advocate using TOA and FOA measurements to estimate the source position and velocity simultaneously. This is of great value for locating a moving source with high calculation speed, especially in applications with limited computing resources, e.g., satellite-based geolocation. Note that the authors of [13,14] simply ignore the inequality constraints when reformulating the original ML problem. Instead, we introduce additional variables to convert the inequality constraints into equalities, as done in [8,16,17], which approximates the original maximum likelihood (ML) cost function more closely. Moreover, it is essential for a practical network to converge quickly and to maintain a stable output once it has converged. Therefore, the convergence and stability of our model are analytically proved based on the analysis in [8]. These two properties of the proposed neural network, as well as its superior estimation performance, are verified by the simulation results.
The rest of the paper is organized as follows. First, the localization scenario and a TOA- and FOA-based model for moving source positioning are given in Section II. Section III constructs the proposed neural network and algorithm according to the LPNN framework, and the stability and computational complexity of the network are also analyzed. Then several simulation tests are conducted in Section IV to examine the two basic properties and the estimation performance of the proposed algorithm. Finally, conclusions are drawn in Section V.

2. Problem formulation

We consider the general problem of locating a moving source using TOA and FOA measurements in a sensor network. There are $M$ sensors deployed in the $D$-dimensional space to determine the unknown position $\mathbf{u}$ and velocity $\dot{\mathbf{u}}$ of a mobile source. The position and velocity of the $i$-th sensor, $i = 1, 2, \ldots, M$, are accurately known and denoted by $\mathbf{s}_i$ and $\dot{\mathbf{s}}_i$. For simplicity, we assume line-of-sight (LOS) propagation. Thus the LOS TOA measurements with additive measurement noise are given by

$$ t_i = \frac{1}{c}\|\mathbf{u} - \mathbf{s}_i\| + e_i, \quad i = 1, 2, \ldots, M, \tag{1} $$

where $\|\cdot\|$ is the Euclidean norm and $c$ represents the signal propagation speed. Multiplying both sides of (1) by the constant $c$, the range-of-arrival (ROA) measurements can be obtained as

$$ r_i = \|\mathbf{u} - \mathbf{s}_i\| + n_i, \tag{2} $$

where $n_i = c\,e_i$. Taking the time derivative of (2) yields the FOA (range-rate) measurements:

$$ \dot{r}_i = \frac{(\mathbf{u} - \mathbf{s}_i)^{\mathrm{T}}(\dot{\mathbf{u}} - \dot{\mathbf{s}}_i)}{\|\mathbf{u} - \mathbf{s}_i\|} + \dot{n}_i. \tag{3} $$

Without loss of generality, the measurement noise vectors $\mathbf{n} = [n_1, \ldots, n_M]^{\mathrm{T}}$ and $\dot{\mathbf{n}} = [\dot{n}_1, \ldots, \dot{n}_M]^{\mathrm{T}}$ are assumed to be uncorrelated. Moreover, both follow zero-mean Gaussian distributions, with covariances $\mathrm{E}(\mathbf{n}\mathbf{n}^{\mathrm{T}}) = \mathbf{Q}$ and $\mathrm{E}(\dot{\mathbf{n}}\dot{\mathbf{n}}^{\mathrm{T}}) = \dot{\mathbf{Q}}$, respectively. Based on the above assumptions, the ML estimate of the source position and velocity is obtained by minimizing

$$ J(\mathbf{u}, \dot{\mathbf{u}}) = (\mathbf{r} - \mathbf{f})^{\mathrm{T}}\mathbf{Q}^{-1}(\mathbf{r} - \mathbf{f}) + (\dot{\mathbf{r}} - \dot{\mathbf{f}})^{\mathrm{T}}\dot{\mathbf{Q}}^{-1}(\dot{\mathbf{r}} - \dot{\mathbf{f}}), \tag{4} $$

where $\mathbf{f}$ and $\dot{\mathbf{f}}$ collect the noise-free ranges and range rates in (2) and (3). The problem of interest is to find the optimal estimates of $\mathbf{u}$ and $\dot{\mathbf{u}}$ that minimize the ML cost function in (4) using the available noisy TOA and FOA measurements.
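As a concrete illustration, the measurement model (2)-(3) and the ML cost (4) can be sketched in Python with NumPy. The function names, the sensor geometry and the diagonal (i.i.d.) noise covariances are our own illustrative choices, not the paper's:

```python
import numpy as np

def simulate_measurements(u, u_dot, S, S_dot, sigma=0.0, sigma_dot=0.0, rng=None):
    """Generate ROA and FOA (range-rate) measurements, cf. Eqs. (2)-(3).
    u, u_dot : source position/velocity, shape (D,)
    S, S_dot : sensor positions/velocities, shape (M, D)
    """
    rng = np.random.default_rng(0) if rng is None else rng
    diff = u - S                                        # (M, D)
    g = np.linalg.norm(diff, axis=1)                    # true ranges ||u - s_i||
    g_dot = np.sum(diff * (u_dot - S_dot), axis=1) / g  # true range rates
    M = S.shape[0]
    r = g + sigma * rng.standard_normal(M)
    r_dot = g_dot + sigma_dot * rng.standard_normal(M)
    return r, r_dot

def ml_cost(u, u_dot, r, r_dot, S, S_dot, sigma=1.0, sigma_dot=1.0):
    """ML cost of Eq. (4) for i.i.d. Gaussian noise (diagonal covariances)."""
    diff = u - S
    g = np.linalg.norm(diff, axis=1)
    g_dot = np.sum(diff * (u_dot - S_dot), axis=1) / g
    return np.sum((r - g) ** 2) / sigma ** 2 + np.sum((r_dot - g_dot) ** 2) / sigma_dot ** 2
```

With noise-free measurements the cost vanishes exactly at the true state and is strictly positive elsewhere, which is the property the estimator exploits.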

3. The proposed method

According to [8], the LPNN is designed to solve constrained optimization problems. The method defines two kinds of neurons, namely the variable neurons and the Lagrange neurons, which represent the optimization variables and the Lagrangian multipliers, respectively. Based on the original constrained optimization problem and the defined neurons, a Lagrangian function is first established. Then the differential dynamic equations of the neurons are derived from the Lagrangian function, and a self-feedback neural network model is constructed from these dynamic equations. During the working process of the network, these equations govern the network state until it reaches an equilibrium point. A more detailed introduction to LPNN can be found in [8]; here we illustrate the method through the construction of the proposed neural network.
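As a minimal illustration of the two neuron types and their dynamics, consider the toy problem $\min_x (x-2)^2$ subject to $x = 1$; the problem, step size and iteration count are our own illustrative choices. The variable neuron descends the Lagrangian while the Lagrange neuron ascends it:

```python
import numpy as np

# Toy LPNN: minimize f(x) = (x - 2)^2 subject to h(x) = x - 1 = 0.
# Lagrangian: L(x, lam) = (x - 2)^2 + lam * (x - 1).
# Variable neuron:  dx/dt   = -dL/dx   = -(2*(x - 2) + lam)
# Lagrange neuron:  dlam/dt = +dL/dlam = x - 1
def lpnn_toy(x0=0.0, lam0=0.0, dt=0.01, steps=5000):
    x, lam = x0, lam0
    for _ in range(steps):          # forward-Euler integration of the dynamics
        dx = -(2.0 * (x - 2.0) + lam)
        dlam = x - 1.0
        x, lam = x + dt * dx, lam + dt * dlam
    return x, lam

x, lam = lpnn_toy()
print(x, lam)   # settles at the constrained optimum x = 1 (with lam = 2)
```

At the equilibrium both time derivatives vanish, which is exactly the first-order optimality condition of the constrained problem.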

LPNN Model for TOA and FOA Based Localization
When the measurement noise at different sensors is independent, the covariance matrices become $\mathbf{Q} = \mathrm{diag}\{\sigma_1^2, \ldots, \sigma_M^2\}$ and $\dot{\mathbf{Q}} = \mathrm{diag}\{\dot{\sigma}_1^2, \ldots, \dot{\sigma}_M^2\}$, where $\mathrm{diag}\{\cdot\}$ is the diagonal operator. Therefore, the cost function in (4) can be equivalently written as

$$ \min_{\mathbf{u}, \dot{\mathbf{u}}} \; \sum_{i=1}^{M} \frac{\bigl(r_i - \|\mathbf{u} - \mathbf{s}_i\|\bigr)^2}{\sigma_i^2} + \frac{\bigl(\dot{r}_i - (\mathbf{u} - \mathbf{s}_i)^{\mathrm{T}}(\dot{\mathbf{u}} - \dot{\mathbf{s}}_i)/\|\mathbf{u} - \mathbf{s}_i\|\bigr)^2}{\dot{\sigma}_i^2}. \tag{5} $$

In order to apply the LPNN framework, the above unconstrained problem needs to be reformulated as a constrained one, and thus we introduce the dummy variables $g_i = \|\mathbf{u} - \mathbf{s}_i\|$ into (5). It is then straightforward to obtain the following constrained optimization problem:

$$ \min_{\mathbf{u}, \dot{\mathbf{u}}, \mathbf{g}} \; \sum_{i=1}^{M} \frac{(r_i - g_i)^2}{\sigma_i^2} + \frac{\bigl(\dot{r}_i - (\mathbf{u} - \mathbf{s}_i)^{\mathrm{T}}(\dot{\mathbf{u}} - \dot{\mathbf{s}}_i)/g_i\bigr)^2}{\dot{\sigma}_i^2} \quad \text{s.t.} \quad g_i^2 = \|\mathbf{u} - \mathbf{s}_i\|^2, \; g_i > 0. \tag{6} $$

Remark 1: the condition $g_i > 0$ means that the location of the source should not coincide with any of the sensor positions. In practice, this condition is met in many localization scenarios.
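The equivalence of (5) and (6) can be checked numerically: solving the constrained reformulation should recover the same minimizer as the original ML problem. The sketch below uses SciPy's SLSQP solver on a small noise-free scenario; the geometry, unit noise variances and the initialization near the true state are our own illustrative assumptions (the $g_i > 0$ constraint is kept implicitly through the positive initialization):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative 2-D scenario (not the paper's configuration).
S = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
S_dot = np.zeros_like(S)
u_true, udot_true = np.array([6.0, 5.0]), np.array([0.3, -0.2])
diff = u_true - S
r = np.linalg.norm(diff, axis=1)                        # noise-free ROA, Eq. (2)
r_dot = np.sum(diff * (udot_true - S_dot), axis=1) / r  # noise-free FOA, Eq. (3)

def obj(z):
    """Objective of Eq. (6): z = [u (2), u_dot (2), g (M)], unit variances."""
    u, ud, g = z[:2], z[2:4], z[4:]
    dot = np.sum((u - S) * (ud - S_dot), axis=1) / g
    return np.sum((r - g) ** 2) + np.sum((r_dot - dot) ** 2)

# Equality constraints g_i^2 = ||u - s_i||^2 of Eq. (6).
cons = {"type": "eq",
        "fun": lambda z: z[4:] ** 2 - np.sum((z[:2] - S) ** 2, axis=1)}

z0 = np.concatenate([u_true + 1.0, udot_true + 0.1, r + 1.0])
sol = minimize(obj, z0, constraints=[cons], method="SLSQP")
u_hat, udot_hat = sol.x[:2], sol.x[2:4]
print(u_hat, udot_hat)
```

Because the reformulation is exact (not a relaxation), the constrained minimizer coincides with the ML solution; a local solver started close enough recovers the true state in the noise-free case.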
As described in [8], the inequality constraints in (6) can be converted into equality constraints by introducing additional variables $y_i$ such that $g_i = y_i^2$. Hence (6) can be further recast as

$$ \min_{\mathbf{u}, \dot{\mathbf{u}}, \mathbf{g}, \mathbf{y}} \; \sum_{i=1}^{M} \frac{(r_i - g_i)^2}{\sigma_i^2} + \frac{\bigl(\dot{r}_i - (\mathbf{u} - \mathbf{s}_i)^{\mathrm{T}}(\dot{\mathbf{u}} - \dot{\mathbf{s}}_i)/g_i\bigr)^2}{\dot{\sigma}_i^2} \quad \text{s.t.} \quad g_i^2 = \|\mathbf{u} - \mathbf{s}_i\|^2, \; g_i = y_i^2. \tag{7} $$

From (7), a Lagrangian function can be directly written as

$$ L(\mathbf{u}, \dot{\mathbf{u}}, \mathbf{g}, \mathbf{y}, \boldsymbol{\lambda}, \boldsymbol{\beta}) = \sum_{i=1}^{M} \frac{(r_i - g_i)^2}{\sigma_i^2} + \frac{\bigl(\dot{r}_i - (\mathbf{u} - \mathbf{s}_i)^{\mathrm{T}}(\dot{\mathbf{u}} - \dot{\mathbf{s}}_i)/g_i\bigr)^2}{\dot{\sigma}_i^2} + \sum_{i=1}^{M} \lambda_i \bigl(g_i^2 - \|\mathbf{u} - \mathbf{s}_i\|^2\bigr) + \sum_{i=1}^{M} \beta_i \bigl(g_i - y_i^2\bigr). \tag{8} $$

However, our previous simulation experiments show that the LPNN model built directly from (8) is not satisfactory in terms of convergence speed, stability and the final equilibrium output. The same issue appears in [12,14]. Therefore, we add a penalty parameter $C > 0$ and the corresponding augmented terms to the objective function in (7) to improve its convexity and further enhance the stability of the LPNN model. Details can be found in [8].
Based on (7) and the augmented terms, the LPNN framework can be applied since both the objective function and the constraints are twice differentiable, and the augmented Lagrangian function is

$$ L_a = L + \frac{C}{2} \sum_{i=1}^{M} \bigl(g_i^2 - \|\mathbf{u} - \mathbf{s}_i\|^2\bigr)^2 + \frac{C}{2} \sum_{i=1}^{M} \bigl(g_i - y_i^2\bigr)^2. \tag{9} $$

According to the Lagrangian function established in (9) and the LPNN framework described in [8], the dynamics of the variable neurons are

$$ \frac{d\mathbf{u}}{dt} = -\frac{\partial L_a}{\partial \mathbf{u}}, \quad \frac{d\dot{\mathbf{u}}}{dt} = -\frac{\partial L_a}{\partial \dot{\mathbf{u}}}, \quad \frac{d\mathbf{g}}{dt} = -\frac{\partial L_a}{\partial \mathbf{g}}, \quad \frac{d\mathbf{y}}{dt} = -\frac{\partial L_a}{\partial \mathbf{y}}, \tag{10-13} $$

and the dynamics of the Lagrangian neurons are obtained in the same way:

$$ \frac{d\boldsymbol{\lambda}}{dt} = \frac{\partial L_a}{\partial \boldsymbol{\lambda}}, \quad \frac{d\boldsymbol{\beta}}{dt} = \frac{\partial L_a}{\partial \boldsymbol{\beta}}. \tag{14-15} $$

By utilizing the LPNN framework, we can build up a neural network based on the differential equations defined in (10)-(15). Following [10,13,14], the block diagram in Figure 1 shows the main structure of the established network. According to the analysis and circuit implementation approach given in [10], the working process of the network can be summarized as follows:

Step 1: Initialize the neurons $\mathbf{u}, \dot{\mathbf{u}}, \mathbf{g}, \mathbf{y}, \boldsymbol{\lambda}, \boldsymbol{\beta}$. Input the TOA and FOA measurements and set the integration time.

Step 2: The differential result of each neuron is calculated and then transmitted to the integrator. The integrators output the updated network states.
Step 3: Feedback the updated network states as inputs.
Step 4: Repeat steps 2 and 3 until the network converges to an equilibrium point.
Step 5: Take the steady output states of the network as the final estimation results. The hardware realization of the LPNN model is illustrated in [10,18]. For simplicity, as in [13][14][15][16][17], we simulate the searching process of the designed neural network by employing the MATLAB ode solver to solve the differential equations (10)-(15). This is mainly because the purpose of our paper is to establish a TOA- and FOA-based neural network model for source localization and to verify its stability and estimation performance. The convergence and stability required for an asymptotically stable network are analyzed in the next subsection.
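To make Steps 1-5 concrete, the sketch below simulates the network dynamics (10)-(15) with SciPy's ODE solver as a stand-in for the MATLAB ode solver used in the paper. The small-scale geometry, unit noise variances, the value of $C$ and the initialization near the true state (inside the attraction domain) are all our own illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative 2-D scenario (not the paper's Table 2).
S = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
S_dot = np.zeros_like(S)
u_true, udot_true = np.array([6.0, 5.0]), np.array([0.3, -0.2])
M, C = S.shape[0], 5.0

# Noise-free ROA / FOA measurements (unit noise variances assumed).
diff0 = u_true - S
g_true = np.linalg.norm(diff0, axis=1)
r = g_true.copy()
r_dot = np.sum(diff0 * (udot_true - S_dot), axis=1) / g_true

def dynamics(t, z):
    """LPNN dynamics (10)-(15) for the augmented Lagrangian (9)."""
    u, ud = z[0:2], z[2:4]
    g, y = z[4:4 + M], z[4 + M:4 + 2 * M]
    lam, beta = z[4 + 2 * M:4 + 3 * M], z[4 + 3 * M:4 + 4 * M]
    d, v = u - S, ud - S_dot
    dot = np.sum(d * v, axis=1) / g              # modelled range rates
    res_r, res_rd = r - g, r_dot - dot           # measurement residuals
    h = g ** 2 - np.sum(d * d, axis=1)           # constraint g_i^2 = ||u - s_i||^2
    k = g - y ** 2                               # constraint g_i = y_i^2
    lam_a, beta_a = lam + C * h, beta + C * k    # augmented multipliers
    # Gradients of the augmented Lagrangian.
    dL_u = (-2 * res_rd / g) @ v - 2 * lam_a @ d
    dL_ud = (-2 * res_rd / g) @ d
    dL_g = -2 * res_r + 2 * res_rd * dot / g + 2 * lam_a * g + beta_a
    dL_y = -2 * beta_a * y
    # Variable neurons descend, Lagrange neurons ascend.
    return np.concatenate([-dL_u, -dL_ud, -dL_g, -dL_y, h, k])

# Step 1: initialize near the true state (within the attraction domain).
z0 = np.concatenate([u_true + 0.5, udot_true + 0.1,
                     g_true + 0.2, np.sqrt(g_true) - 0.1,
                     np.zeros(M), np.zeros(M)])
# Steps 2-4: integrate the dynamics until the network settles.
sol = solve_ivp(dynamics, (0.0, 50.0), z0, method="BDF", rtol=1e-8, atol=1e-8)
# Step 5: read off the steady-state estimates.
u_hat, udot_hat = sol.y[0:2, -1], sol.y[2:4, -1]
print(u_hat, udot_hat)
```

With noise-free measurements the true state is an equilibrium of the dynamics, so the perturbed initialization settles back onto the true position and velocity, mirroring the transient behavior reported in the paper's Figure 2.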

Network convergence and stability analysis
To explain how the network converges to the optimal solution, the saddle point property is employed in [8]. It is necessary to point out that our proposed LPNN model also satisfies the general analysis made in [8], since the problem considered meets the fundamental requirements for applying the LPNN framework; hence the convergence of the network is guaranteed. We then prove that the designed neural network is asymptotically stable when it converges to a minimum point (or equilibrium point), meaning that the network maintains a stable output once it settles at an equilibrium point from an arbitrary initial state within the attraction domain [8].
It is proved in [8] that the network is asymptotically stable at an equilibrium point if the gradient vectors of the constraints in (7) at that point are linearly independent and the Hessian of the augmented Lagrangian there is positive definite. As stated in Remark 1, the sufficient condition that the source position differs from the sensor locations is met in many localization scenarios. Thus, the linear independence property is established.
Then we show that the Hessian matrix of the proposed Lagrangian function is positive definite when the penalty parameter $C > 0$ is taken sufficiently large. According to (7)-(9), the augmented Lagrangian function can be re-expressed as the sum of the original Lagrangian (8) and the quadratic penalty terms in (9); for sufficiently large $C$, these penalty terms contribute enough positive curvature to make the Hessian positive definite at the equilibrium. Combining the above two results, we reach the asymptotic stability of the designed neural network.
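The role of the penalty parameter can be seen on a toy problem of our own choosing: an indefinite objective Hessian becomes positive definite once $C$ exceeds a threshold. Here $f(x) = x_1^2 - x_2^2$ with constraint $h(x) = x_2 = 0$, so the augmented Hessian is $\mathrm{diag}(2, -2 + C)$:

```python
import numpy as np

# Toy: f(x) = x1^2 - x2^2 (indefinite Hessian), constraint h(x) = x2 = 0.
# Augmented Lagrangian: L = f + lam*h + (C/2)*h^2  =>  Hess(L) = diag(2, -2 + C).
def hessian_aug_lagrangian(C):
    return np.diag([2.0, -2.0 + C])

for C in (1.0, 5.0):
    eigs = np.linalg.eigvalsh(hessian_aug_lagrangian(C))
    print(C, eigs.min())
# C = 1: smallest eigenvalue -1 -> not positive definite
# C = 5: smallest eigenvalue  2 -> positive definite
```

The penalty only adds curvature along the constraint normal, which is exactly where the indefiniteness sits in this example; tangential curvature must come from the objective itself.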

Complexity analysis
In this subsection, the computational complexity of the designed algorithm is analyzed. For comparison, we also include the analysis of three other classic numerical algorithms: the Taylor-series (TS) method [3], the weighted least squares (WLS) algorithm extended from the algorithm compared in [5], and Monte Carlo importance sampling (MCIS) [2].
As noted in [11,12,17], the main computational cost of using the neural network lies in evaluating the time derivatives defined by the dynamic equations in (10)-(15). For $M$ sensors in a $K$-dimensional scenario, the per-iteration cost of this evaluation scales linearly in $M$ and $K$, i.e., it is of order $O(MK)$, as given in Table 1.
The main computational complexities of the three other classical numerical algorithms in a single iteration are also provided in Table 1. We can clearly see from Table 1 that the computational complexities of the TS method and the WLS algorithm are comparable, while the MCIS algorithm is computationally expensive due to the large number of sampling points it employs. In contrast, the algorithm designed in this paper has a relatively low computational complexity compared with the other three algorithms, which reflects the computational advantage of the neural network approach.

4. Simulation results
In this section, we first use noise-free and noisy TOA and FOA measurements to examine the convergence and stability of the designed network, and then perform several simulations to test its localization performance. In the simulations, the MATLAB ode solver is used to simulate the network dynamic process, which is governed by the group of differential equations defined in (10)-(15). The initial states of the variables are randomly generated between 0 and 10. Besides, the penalty parameter $C$ is set to 5, which is sufficient to guarantee local convexity, and the positive constants for each sensor, $i = 1, 2, \ldots, M$, are set to 100 to accelerate the convergence speed [8].

Verification of convergence and stability
In this subsection, four sensors deployed in the 2-dimensional plane are used to locate a moving source; the sensor positions and velocities are listed in Table 2. It is worth pointing out that our method can be directly generalized to a 3-dimensional scenario, though a 2-dimensional scenario is considered here for simplicity. When using the noise-free TOA and FOA measurements, the transient behavior of each neuron representing the coordinates of the source position and velocity over 50 independent runs is shown in Figure 2. It can be observed that, after a short transient period from a random initial state, the neuron output states eventually settle down at the true coordinates of the source position and velocity, and the outputs remain stable as time elapses. These noise-free results show that the proposed LPNN model intrinsically converges and remains stable. We then examine the proposed network with noisy TOA and FOA measurements, since only contaminated positioning parameters are available in non-ideal scenarios. For simplicity, the measurement noise power $\sigma^2$ is assumed to be the same at all sensors; the results for small and large noise levels are shown in Figure 3 and Figure 4, respectively. From the figures, we find that the network converges and settles down after a short period of oscillation under both conditions. Moreover, it takes less time to converge when the measurement noise is small, and the output estimates of the source position and velocity are close to the true values even when the measurement noise is large. In conclusion, the numerical experiments confirm that our model possesses the required convergence and stability.

Estimation performance
In this subsection, several simulations are conducted to evaluate the position and velocity estimation performance of the designed neural model. The proposed method is compared with three classic numerical algorithms, namely the Taylor-series (TS) method, the weighted least squares (WLS) algorithm and the Monte Carlo importance sampling (MCIS) algorithm developed in [2]. Besides, the Cramér-Rao lower bound (CRLB) is included as a benchmark. We consider a 2-dimensional scenario with the four sensors listed in Table 2. The noise standard deviation $\sigma$ is varied from 1 m to 17 m to obtain different noise levels, and the localization performance is compared in terms of the root mean square error (RMSE) over 1000 independent runs.
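The RMSE metric used throughout this comparison can be sketched as follows; the function name and the sample estimates are our own illustrative choices:

```python
import numpy as np

def rmse(estimates, truth):
    """Root mean square error over N independent runs.
    estimates : shape (N, D) array of per-run estimates
    truth     : shape (D,) true parameter vector
    """
    err2 = np.sum((estimates - truth) ** 2, axis=1)  # squared error per run
    return np.sqrt(err2.mean())

# Four toy estimates, each 0.5 off in both coordinates from the truth (0.5, 0.5).
est = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
print(rmse(est, np.array([0.5, 0.5])))  # sqrt(0.5) ≈ 0.7071
```

Position and velocity RMSEs are computed separately, each against its own true vector, which is how the curves in Figures 5-7 are obtained.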
In the first test, the position of the moving source is $\mathbf{u} = [600, 50]^{\mathrm{T}}$ m and its velocity is $\dot{\mathbf{u}} = [30, 15]^{\mathrm{T}}$ m/s. Figure 5 shows the RMSEs versus the noise standard deviation $\sigma$. It can be clearly seen that the method designed in this paper attains the CRLB over a large range of noise levels, whereas the other three algorithms perform much worse. As the measurement noise grows, the WLS algorithm suffers from the threshold effect, and its RMSE deviates from the CRLB immediately. The iterative Taylor-series algorithm has a higher threshold than WLS, but the second-order noise terms ignored in this algorithm cause significant errors when the measurement noise is large. Besides, the performance of the MCIS algorithm deteriorates severely once the importance weights fail to correct the estimates of the importance sampling (IS) method at high noise levels.
In the second test, the true position and velocity of the moving source are randomly chosen from a 600 m × 600 m square region and a 60 m/s × 60 m/s square region, respectively. Figure 6 shows the RMSEs versus the noise standard deviation $\sigma$. In the third test, since a non-ideal sensor geometry greatly affects the positioning performance, we place the four sensors in a line to test the robustness of the proposed neural network. Here the moving source position is again $[600, 50]^{\mathrm{T}}$ m. The results are shown in Figure 7, except for the WLS algorithm, which fails to estimate the source location and velocity in this case, so its result is not given. Figure 7 exhibits the excellent performance of the proposed LPNN model even as the other two algorithms deviate from the CRLB consecutively under the non-ideal sensor geometry.

5. Conclusions
In this paper, we build up a neural network for moving source localization based on the LPNN framework, using TOA and FOA measurements. The proposed analog neural model is proved to be asymptotically stable, and its convergence and stability are verified through numerical experiments. The simulation results also demonstrate that our LPNN model achieves excellent estimation performance compared with conventional numerical algorithms.