Article

A Novel Cooperative Localization Method Based on IMU and UWB

Yongqiang Han, Chenchen Wei, Rong Li, Jingzhe Wang and Huan Yu
1 School of Automation, Beijing Institute of Technology, Beijing 100081, China
2 The 95894 Unit, PLA, Beijing 102211, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(2), 467; https://doi.org/10.3390/s20020467
Submission received: 22 October 2019 / Revised: 31 December 2019 / Accepted: 10 January 2020 / Published: 14 January 2020
(This article belongs to the Section Sensor Networks)

Abstract

In this paper, a range-based cooperative localization method is proposed for multiple platforms of various structures. The localization system of an independent platform might degrade or fail for various reasons, such as GPS signal loss, inertial measurement unit (IMU) accumulative errors, or an emergency reboot. A promising approach to this problem is to use information from neighboring platforms, forming a cooperative localization network that improves the navigational robustness of each platform. Typical ranging-based ultra-wideband (UWB) cooperative localization systems require at least three auxiliary nodes to estimate the pose of the target node, a requirement that is often hard to meet, especially in outdoor environments. In this work, we propose a novel IMU/UWB-based cooperative localization solution that requires as few as one auxiliary node. An Adaptive Ant Colony Optimization Particle Filter (AACOPF) algorithm is customized to integrate the dead reckoning (DR) system and the auxiliary-node information with no prior information required, resulting in accurate pose estimation; to our knowledge, the azimuth has not previously been estimated in cooperative localization because of the insufficient observability of the system. We establish, by analysis and by experiment, the conditions under which the azimuth and position are solvable. The feasibility of the proposed approach is evaluated through two field experiments: car-to-trolley and car-to-pedestrian cooperative localization. The comparison results also demonstrate that AACOPF-based integration outperforms other filter-based methods such as the Extended Kalman Filter (EKF) and the traditional Particle Filter (PF).

1. Introduction

The cooperative operation among manned and unmanned platforms is becoming increasingly demanding with the development of navigation, communication, and intelligent control technologies. Most robotic platforms, and even humans, are equipped with navigation devices such as the Global Navigation Satellite System (GNSS) and the inertial measurement unit (IMU) [1]. However, positioning becomes difficult in the face of IMU error accumulation, signal occlusion, and environmental interference (smoke, buildings, forests, canyons, etc.). The concept of joint localization has therefore begun to gain popularity [2,3]. Unlike positioning a platform with a single node, cooperative localization fully exploits the positioning information exchanged in a multi-node ad hoc network. Not only can it lead to more accurate estimates of the position of the target node, but it can also expand the coverage and enhance the overall stability of the network [4]. Figure 1 illustrates an application scenario in which soldiers and unmanned vehicles are accurately localized within a collaborative network.
Current mainstream schemes of collaborative localization generally fall into two categories. The first employs Bluetooth [5], Wi-Fi [6], ZigBee [7], ultrasonic [8], infrared [9], or UWB [10] signals with angle of arrival (AOA), time of arrival (TOA), and time difference of arrival (TDOA) methods under geometric models such as trilateration, triangulation, and hyperbolic positioning, which requires at least three auxiliary nodes together with range information. Unfortunately, this requirement cannot be met in many cases, and one shortcoming of these schemes is that the localization of the target node is completed only from the information of the auxiliary nodes, while the information about the target node itself is ignored. Reference [11] attains centimeter-level positioning accuracy using multiple UWB transceivers for co-location, but a considerable number of nodes is involved, incurring high costs. Reference [12] uses geometric features of translation and rotation to estimate the positioning error in aircraft inertial navigation. These commonly employed node localization methods usually entail additional auxiliary nodes, wasting resources and introducing extra costs, computational load, and communication load across the entire network.
Another type of formulation, rather than relying merely on auxiliary nodes and geometry, is based on the fusion of inertial readings from an IMU deployed on the target node and ranging information from auxiliary nodes. The IMU is a self-contained system that measures linear and angular motion, usually with gyroscopes and accelerometers, independent of external assistance, and it is immune to deception and signal jamming [13]. Therefore, the IMU is capable of autonomous localization, but the integration of inertial measurements is bound to drift. Typical combinations include Wi-Fi/IMU, camera/IMU, UWB/IMU, etc. In this paper, we focus on IMU/UWB collaborative localization in view of the properties of the UWB signal, including low power consumption, immunity to channel fading (e.g., multipath and non-line-of-sight (NLOS) propagation), strong signal penetration, and high precision.
Numerous studies have been conducted on UWB/IMU collaborative localization. Most UWB cooperative localization systems assume prior knowledge of the positions of at least two auxiliary nodes. Reference [14] proposed a Simultaneous Localization and Mapping (SLAM) solution that localizes pedestrians through three UWB auxiliary nodes without prior information, reaching a precision of 0.1 m, but it requires the three nodes to be arranged in advance. Reference [15] proposed a framework for indoor autonomous robot localization using a Sage-Husa fuzzy self-adaptive filter to fuse Inertial Navigation System (INS) and UWB signals; with two UWB nodes deployed, it performs with high accuracy and robustness. Reference [16] combines UWB ranging measurements with inertial observations and localizes with an Extended Kalman Filter (EKF) using only one auxiliary node, but under the assumption of prior knowledge of the position and azimuth of the target node. Fallon et al. proposed a cooperative Autonomous Underwater Vehicle (AUV) navigation method using a single maneuvering surface craft, but it has direct access to azimuth estimation [17].
Gaussian white noise is often presupposed when approaching the nonlinearity of the IMU/UWB fusion problem. It is well known that the most common solutions to nonlinear filtering problems are based on the EKF and the Unscented Kalman Filter (UKF). However, the EKF often suffers large errors and deviations in its estimates of the states and variances of nonlinear stochastic systems [18], while the UKF may miscalculate if the posterior probability density of the system states is non-Gaussian [19]. Compared with the EKF and UKF, the main advantage of the PF is that it places no restrictions on the form of the propagation and measurement models. Owing to the arbitrariness of the particle distribution, the PF is more suitable for arbitrary nonlinear and non-Gaussian stochastic systems, although its computational complexity is relatively high. With the enhancement of computer performance and the development of parallel computing technology, this limitation of the PF can be overcome [20].
The standard particle filter uses resampling to prevent particle starvation: particles with smaller weights are eliminated and particles with larger weights are copied. The advantage of this method is its simplicity, but after many iterations it causes a loss of particle diversity. If ant colony optimization is used instead to move particles with smaller weights toward the optimal position while keeping the positions of particles with larger weights unchanged, the particles achieve a better distribution and their diversity is preserved [21].
In this paper, a cooperative localization method with only one auxiliary node is proposed based on the Adaptive Ant Colony Optimization Particle Filter (AACOPF) [22] and dead reckoning (DR) [23]. Even if the INS suffers a power-down and hot reboot, no initial positioning information is needed for realignment; a single auxiliary node is enough to estimate the real-time position and azimuth of the target node. In ranging-based cooperative localization, communication between target nodes and auxiliary nodes, as well as among the target nodes themselves, allows the distances between target nodes and auxiliary nodes to be combined with the positions of the auxiliary nodes for more effective localization. The cooperative localization method can establish single-to-multi, multi-to-single, and multi-to-multi network topologies, making full use of the nodes in the network: one auxiliary node can locate multiple target nodes, and one or multiple target nodes may also be located by multiple auxiliary nodes.
Compared with existing approaches, this work has the following innovative aspects:
  • This paper proposes a formulation in which collaborative localization is realized with recourse to only one auxiliary node in motion, along with a feasibility proof. Allowing for the multipath and NLOS effects of UWB, coupled with occlusion by buildings and pedestrians, an AACOPF algorithm is designed that recognizes and eliminates auxiliary nodes with larger ranging errors. The algorithm can be applied to Gaussian or non-Gaussian nonlinear models, yielding higher positioning accuracy and robustness.
  • When the network topology contains more than one auxiliary node, the particle filter can perform collaborative localization by adaptively adjusting the particle weights; no extra treatment is needed when other sensors are added, permitting a plug-and-play mechanism, as opposed to traditional schemes based on the EKF and UKF, which in this situation require re-linearization or even remodelling.
  • The system has been verified not only by simulation but also with real data gathered in real situations, taking into account effects such as errors in the monitoring of human gait and measurement errors, all of which demonstrates the feasibility and generality of the system.
  • To qualitatively and quantitatively analyse the observability of our cooperative localization system, and differently from the method in Fallon's paper, we apply a Piece-Wise Constant System (PWCS) method to the cooperative navigation system, which not only qualitatively shows observability based on the rank of the Observability Matrix (OM), but also quantitatively presents the degree of observability based on its singular values.

2. Problem Statement

2.1. Adaptive Cooperative Localization Problem

For the problem studied in this paper, we define a node that does not have accurate coordinates or azimuth, due to various reasons such as GPS failure, system malfunction, or hot restart, and that needs to be relocated as a target node, and nodes that have accurate localization information as auxiliary nodes.
To ensure the observability and convergence of the system, a quantitative and qualitative analysis of the observability of our cooperative localization system is necessary. For a dynamic topology network with multiple auxiliary nodes, we need to consider the measurement disruption caused by multipath or NLOS effects, particle depletion, and the fusion of data from multiple dynamic auxiliary nodes. In the complex collaborative network shown in Figure 1, if a node cannot be self-positioned due to low GPS accuracy or an IMU power-down restart, the procedure of Figure 2 is triggered: it detects surrounding auxiliary nodes with UWB information, combines the IMU incremental information from the target node to decide on a co-location scheme, and then achieves coordinated positioning by the AACOPF.
There are four presumptions under which the proposed cooperative navigation algorithm works properly: (1) the auxiliary node has higher localization accuracy; (2) the target node does not need the initial coordinates or the azimuth, but it is required to have dead reckoning capability; (3) each node in the cooperative localization network is capable of ranging with the auxiliary node; (4) wireless communication is available between nodes to ensure the transmission of ranging and localization information.
The kinematic model of the cooperative localization system is
$$X(t) = \Phi(t-1)\,X(t-1) + B(t-1)\,U(t-1) + \nu(t-1) \qquad (1)$$
Let $X(t) = [P_x(t), P_y(t), \varphi(t)]^T$ denote the navigation state, comprising the position $[P_x(t), P_y(t)]^T$ and the azimuth $\varphi(t)$ of the robot; $X(t)$ can be estimated by the DR model.
The observation equation of the cooperative localization system is
$$Z(t) = H(t)\,X(t) + \omega(t) \qquad (2)$$
where $Z(t) = [d_i(t)]$ represents all the measurements from the IMU and UWB up to the current time t, and i denotes the i-th sensor. In our localization model, the distance $d(t)$ is the distance between the target node and the auxiliary node.

2.2. Kinematic Model

In this paper, we assume that the target node has a basic relative dead reckoning capability. The navigation device needs to provide the real-time azimuth increment $\Delta\varphi(t-1)$ and displacement increment $\Delta L(t-1)$. As shown in Figure 3, this information is used to estimate the current coordinates of the carrier with the DR model [24]. At time (t − 1), the current position is $(x(t-1), y(t-1))$. If the next step is to move a distance of $\Delta L(t-1)$ with an azimuth angle of $\varphi(t-1) + \Delta\varphi(t-1)$ to the position at time (t), the coordinates at time (t) can be calculated by Equation (3) [25].
$$\begin{cases} x(t) = x(t-1) + \Delta L(t-1)\cos[\varphi(t-1)] \\ y(t) = y(t-1) + \Delta L(t-1)\sin[\varphi(t-1)] \\ \varphi(t) = \varphi(t-1) + \Delta\varphi(t-1) \end{cases} \qquad (3)$$
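As a minimal illustration of Equation (3), the following Python sketch propagates one DR step; the function name, units, and the example values are our own choices, not part of the paper.

```python
import math

def dead_reckon(x, y, phi, delta_L, delta_phi):
    """Propagate the carrier pose one step with the DR model of Equation (3).

    (x, y)    -- position at time t-1
    phi       -- azimuth at time t-1 (radians)
    delta_L   -- displacement increment measured between t-1 and t
    delta_phi -- azimuth increment measured between t-1 and t
    """
    x_new = x + delta_L * math.cos(phi)
    y_new = y + delta_L * math.sin(phi)
    phi_new = phi + delta_phi
    return x_new, y_new, phi_new

# Example: start at the origin heading along +x, move 1 m while turning 5 degrees.
print(dead_reckon(0.0, 0.0, 0.0, 1.0, math.radians(5.0)))
```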
The proposed cooperative localization algorithm is insensitive to the platform type, whether manned or unmanned. We consider two major types of DR models in this paper: one is based on a land vehicle's odometer and IMU, and the other is based on a shoe-mounted IMU for pedestrian localization.
(1) DR model based on vehicle’s odometer and the IMU
The azimuth angle and displacement increments required for co-location are obtained by the Strapdown Inertial Navigation Solution (SINS). Because SINS based on an odometer and IMU is a mature technique, this article does not focus on it; we refer readers to [26].
(2) The DR model for pedestrian localization
Following our previous work [27], pedestrian localization is based on the kinematics of the pedestrian: the gait information during the pedestrian's movement is extracted, the zero-velocity interval is detected, and an EKF-based error correction algorithm is triggered to restrain the navigation positioning error. In this paper, velocity information is selected as the observation to establish the current state of the pedestrian, and the EKF is used to suppress the velocity error in the detected zero-velocity interval. Since the observation can only be obtained during the standing period, the EKF performs both the time and measurement updates during the standing (zero-velocity) intervals, and only the time update otherwise. The state variables of the EKF are as follows:
$$X = [\,\delta\gamma \ \ \delta\theta \ \ \delta\varphi \ \ \delta p_x \ \ \delta p_y \ \ \delta p_z \ \ \delta\nu_x \ \ \delta\nu_y \ \ \delta\nu_z\,]^{T} \qquad (4)$$
where $[\delta\gamma \ \delta\theta \ \delta\varphi]^T$ are the attitude angle errors, $[\delta p_x \ \delta p_y \ \delta p_z]^T$ are the position errors, and $[\delta\nu_x \ \delta\nu_y \ \delta\nu_z]^T$ are the velocity errors. The linearized system state transition matrix is
$$\Phi_{k,k-1} = \begin{bmatrix} I_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & I_{3\times3} & \Delta t \cdot I_{3\times3} \\ S_k & 0_{3\times3} & I_{3\times3} \end{bmatrix}, \quad S_k = \begin{bmatrix} 0 & -\Delta t\,\alpha_{z_k}^n & \Delta t\,\alpha_{y_k}^n \\ \Delta t\,\alpha_{z_k}^n & 0 & -\Delta t\,\alpha_{x_k}^n \\ -\Delta t\,\alpha_{y_k}^n & \Delta t\,\alpha_{x_k}^n & 0 \end{bmatrix} \qquad (5)$$
Taking the actual velocity during the standing periods of a pedestrian gait cycle as the observation, $Z_k = \nu_k^n - 0 = \delta\nu_k^n$, the corresponding observation matrix is $H = [\,0_{3\times3} \ \ 0_{3\times3} \ \ I_{3\times3}\,]$, and the covariance matrices of the system process noise and the observation noise are defined as $Q = \mathrm{diag}[(0.01)^2 \ (0.01)^2 \ (0.01)^2 \ 0 \ 0 \ 0 \ (0.01)^2 \ (0.01)^2 \ (0.01)^2]$ and $R = \mathrm{diag}[(0.01)^2 \ (0.01)^2 \ (0.01)^2]$, respectively.
When the state error estimate output by the filter is fed back into the strapdown inertial navigation system for error compensation, the error-corrected position increment at time k can be expressed as
$$\Delta L_k = p_{k|k-1} - \delta p_k - p_{k-1} \qquad (6)$$
Attitude error compensation is accomplished by updating the attitude transformation matrix as follows:
$$C_k^n = \frac{(2 I_{3\times3} + \delta\Theta_k)\,C_{k-1}^n}{2 I_{3\times3} - \delta\Theta_k}, \quad \delta\Theta_k = \begin{bmatrix} 0 & -\delta\varphi_k & \delta\theta_k \\ \delta\varphi_k & 0 & -\delta\gamma_k \\ -\delta\theta_k & \delta\gamma_k & 0 \end{bmatrix} \qquad (7)$$
The azimuth angle increment after error correction is expressed as
$$\Delta\varphi_k = \varphi_{k|k-1} - \delta\varphi_k - \varphi_{k-1} \qquad (8)$$
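As a concrete illustration, the sketch below implements one predict/correct cycle of the nine-state error EKF described by Equations (4)-(8) in NumPy. The 100 Hz sample interval, the function names, and the sign convention inside S_k are assumptions made for this sketch; it is not the authors' exact implementation.

```python
import numpy as np

dt = 1.0 / 100.0                                  # assumed IMU sample interval (100 Hz)
H = np.hstack([np.zeros((3, 6)), np.eye(3)])      # observe only the velocity errors
Q = np.diag([0.01**2] * 3 + [0.0] * 3 + [0.01**2] * 3)   # process noise, as defined above
R = np.diag([0.01**2] * 3)                        # observation noise, as defined above

def transition_matrix(acc_n):
    """Build Phi_{k,k-1} of Equation (5) from the navigation-frame acceleration."""
    ax, ay, az = acc_n
    S = dt * np.array([[0.0, -az,  ay],
                       [ az, 0.0, -ax],
                       [-ay,  ax, 0.0]])
    Phi = np.eye(9)
    Phi[3:6, 6:9] = dt * np.eye(3)                # position error driven by velocity error
    Phi[6:9, 0:3] = S                             # velocity error driven by attitude error
    return Phi

def zupt_step(x, P, acc_n, v_n, zero_velocity):
    """One EKF cycle: always predict; correct with the measured velocity v_n
    as the pseudo-observation when a zero-velocity interval is detected."""
    Phi = transition_matrix(acc_n)
    x = Phi @ x
    P = Phi @ P @ Phi.T + Q
    if zero_velocity:                             # standing phase: Z_k = v_n - 0
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (v_n - H @ x)
        P = (np.eye(9) - K @ H) @ P
    return x, P
```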

2.3. Measurement Model Based on UWB

Since the observation model is based on UWB ranging measurements, we analyze the sources of UWB observation error in order to improve the positioning precision. In general, the sources of ranging error are multipath fading $\varepsilon_m$ and NLOS propagation $\varepsilon_{nlos}$ [28]. References [29,30] claim that the propagation error $\varepsilon_{nlos}$ is not related to d, but rather to the penetration coefficient describing how the LOS path is impeded. The ranging error $\varepsilon_d$ can be expressed explicitly as a function of the transmit-receive (TX-RX) separation distance and the system bandwidth as follows:
$$\varepsilon_d = \varepsilon_m + \varepsilon_{nlos} \qquad (9)$$
At time (t), assume that the position of an auxiliary node in the navigation frame is $p_j(t) = [x_j(t), y_j(t)]^T$, where $j = 1, 2, \ldots, n$ denotes the auxiliary node index and n is the total number of auxiliary nodes. The measured distance from the j-th auxiliary node carrying a UWB tag to the target node with UWB is then written as
$$d_{1j}(t) = \sqrt{(x_1(t) - x_j(t))^2 + (y_1(t) - y_j(t))^2} + \varepsilon_d \qquad (10)$$
$$\hat d_{1,k}^i(t) = \sqrt{(\hat x_{1,k}^i(t) - x_j(t))^2 + (\hat y_{1,k}^i(t) - y_j(t))^2} \qquad (11)$$
where the distance $d_{1j}(t)$ between the true target node and the j-th auxiliary node is measured by the ranging sensor, $\hat p_{1,k}^i(t) = [\hat x_{1,k}^i(t), \hat y_{1,k}^i(t)]^T$ is the position of the i-th particle, and the distance $\hat d_{1,k}^i(t)$ between the particle $\hat p_{1,k}^i(t)$ and the auxiliary node $p_j(t) = [x_j(t), y_j(t)]^T$ can also be obtained. The particle weight-updating equation is established from the deviation between $\hat d_{1,k}^i(t)$ and $d_{1j}(t)$.
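A small sketch of Equation (11) and of the ranging deviation that drives the weight update is given below; the numeric values are hypothetical and only illustrate the computation.

```python
import numpy as np

def predicted_distance(particle_xy, anchor_xy):
    """Equation (11): range from a hypothesised target position to auxiliary node j."""
    dx = particle_xy[0] - anchor_xy[0]
    dy = particle_xy[1] - anchor_xy[1]
    return float(np.hypot(dx, dy))

# The deviation between the predicted and measured range drives the weight update.
d_meas = 12.4                                  # hypothetical UWB measurement d_1j(t), metres
residual = predicted_distance((3.0, 4.0), (10.0, 9.0)) - d_meas
print(residual)                                # about -3.8 m for this made-up example
```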

2.4. Algorithm Overview

The proposed cooperative localization method, based on the AACOPF and dead reckoning, performs without prior knowledge of the initial location and azimuth of the target node, requiring only one auxiliary node capable of localization and one ranging sensor measuring the Euclidean distance between the target node and the auxiliary node. The AACOPF algorithm is designed to mitigate the effects of UWB peak-offset errors and of the non-Gaussian errors caused by UWB multipath and NLOS propagation on the cooperative localization error in a multiple-auxiliary-node topology. It can realize multi-to-single collaborative localization by adaptively adjusting the weight update ω(k). It supports plug and play, because a new auxiliary node is only an additional source added to the weight update, as illustrated in the sketch below. Thus, if an auxiliary node cannot be used due to signal loss or sensor failure, the system only needs to avoid adding the relevant factors, without special procedures to re-model or re-linearize the system model.
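To make the plug-and-play behaviour concrete, the sketch below folds one Gaussian range-likelihood factor per currently available auxiliary node into the particle weights; the Gaussian form and the value of sigma are illustrative assumptions rather than the paper's exact weighting, and dropping a failed node simply means leaving its factor out.

```python
import numpy as np

def update_weights(weights, particles, anchors, ranges, sigma=0.3):
    """Multiply one range-likelihood factor per available auxiliary node.

    weights   -- (P,) current particle weights
    particles -- (P, 2) particle positions
    anchors   -- list of (2,) auxiliary-node positions currently available
    ranges    -- list of measured UWB distances, one per anchor
    A node that drops out (signal loss, failure) is simply omitted from the
    lists; no re-modelling or re-linearization is needed.
    """
    w = weights.copy()
    for p_anchor, d_meas in zip(anchors, ranges):
        predicted = np.linalg.norm(particles - np.asarray(p_anchor), axis=1)
        w *= np.exp(-0.5 * ((predicted - d_meas) / sigma) ** 2)
    return w / w.sum()
```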

3. Cooperative Localization Algorithm with an Adaptive Ant Colony Optimization Particle Filter

The PF is a non-parametric implementation of Bayesian filtering whose idea is to represent the posterior by a set of random state samples drawn from it. In the PF, let $\chi_t := \{x_t^{[1]}, x_t^{[2]}, \ldots, x_t^{[M]}\}$ be the set of particles, where each particle $x_t^{[m]}$ ($1 \le m \le M$) is a possible hypothesis of the state at time t. The intuition of the PF is to approximate the belief $bel(x_t)$ with the particle set $\chi_t$. Ideally, the probability that a state $x_t$ is included in the particle set $\chi_t$ is proportional to its posterior under the Bayesian filter: $x_t^{[m]} \sim p(x_t \mid z_{1:t}, u_{1:t})$, where $u_t$ is the control and $z_t$ is the observation [31].
As shown in Figure 4, at time (1), $p_2(1)$ represents the position of auxiliary node 2, which has a normal localization function, and $p_1^*(1)$ represents the true position of target node 1. $d_{12}^*(1)$ denotes the measured distance between the true target node 1 and auxiliary node 2 obtained by the ranging sensor. More precisely, with $p_2(1)$ as the center of the circle and $d_{12}^*(1) \pm \Delta d$ as the radius, the $N \times M$ particles $\hat p_{1,k}^i(1)$, $k = 1, 2, \ldots, N$, $i = 1, 2, \ldots, M$, of target node 1 are generated at time (1), and each particle represents a possible location and initial azimuth of target node 1. After time (1), the $N \times M$ particles are propagated by the DR model, as sketched below.
Let $d_{12}^*(t)$ be the measured distance between the true target node 1 and auxiliary node 2 at time (t). The predicted distance between $\hat p_{1,k}^i(t)$ and auxiliary node 2 is denoted as $\hat d_{1,k}^i(t)$. $|\hat d_{1,k}^i(t) - d_{12}^*(t)|$ denotes the difference between the predicted distance and the measured distance of each particle, which is used in the calculation of the particle weight. The particle whose predicted distance best matches the measured distance gets the highest weight. Through the PF, the particle swarm gradually converges to the true state. The position of target node 1 is replaced by the cooperatively localized position to realize trajectory tracking of target node 1, and the initial azimuth of target node 1 is derived from the optimal trajectory.
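A sketch of this initialization is shown below: N candidate positions are drawn on the annulus of radius d_12*(1) ± Δd around auxiliary node 2 and each is paired with M candidate initial azimuths. The uniform sampling and the parameter names are our own assumptions.

```python
import numpy as np

def init_particles(anchor_xy, d_meas, N=400, M=400, delta_d=0.3):
    """Generate the N x M initial particles of target node 1.

    Each particle is a hypothesis (x, y, phi): a position on the annulus of
    radius d_meas +/- delta_d centred on the auxiliary node, paired with one
    of M candidate initial azimuths.
    """
    theta = np.random.uniform(0.0, 2.0 * np.pi, N)
    radius = d_meas + np.random.uniform(-delta_d, delta_d, N)
    xy = np.asarray(anchor_xy) + np.column_stack([radius * np.cos(theta),
                                                  radius * np.sin(theta)])
    azimuths = np.random.uniform(0.0, 2.0 * np.pi, M)
    states = np.column_stack([np.repeat(xy, M, axis=0),     # (N*M, 2) positions
                              np.tile(azimuths, N)])         # (N*M,) azimuths
    weights = np.full(N * M, 1.0 / (N * M))                  # uniform initial weights
    return states, weights
```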
In this paper, ant colony optimization is used to improve the traditional particle filter algorithm. The steps are as follows.
Step 1: Particles initialization
From the known prior probability density $P(x_0)$ of the random dynamic system, the initial particles $\{x_0^{(i)}\}$, $i = 1, 2, \ldots, N \times M$, at (t = 0) are obtained by drawing $N \times M$ samples, and the corresponding initial weight of each particle is $\omega_0^{(i)} = 1/(N \times M)$.
Step 2: Importance sampling
According to $x_t^i \sim q(x_t^i \mid x_{t-1}^i, h(t))$, $i = 1, 2, \ldots, N \times M$, the particle set $\{x_t^i\}$ at time (t) is obtained through Equation (3).
Step 3: Adaptive cooperative localization
The error in the ranging observations is modelled as a Gaussian distribution. However, in view of the multipath and NLOS effects of UWB and the occlusion caused by pedestrians and buildings, together with their impact on the UWB peak values, an adaptive ranging recognition step is introduced to detect and eliminate auxiliary nodes with unacceptable ranging errors. Updating the weights in Step 4 then leads to higher accuracy and robustness.
Let $\rho_j$ denote the residual between the measured distance $d_{1j}$ and the distance calculated from the estimated position. If the mean of the ranging residual is greater than the threshold $\Omega_1$ and its variance is greater than the threshold $\Omega_2$, that is, $E[\rho_j] > \Omega_1$ and $D[\rho_j] > \Omega_2$, the node is eliminated before the weight update.
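A minimal sketch of this rejection test is given below; the threshold values for Ω1 and Ω2 are placeholders, not values given in the paper.

```python
import numpy as np

def node_is_valid(residuals, omega1=0.5, omega2=0.25):
    """Accept auxiliary node j only if the mean and variance of its ranging
    residuals rho_j do not both exceed the thresholds Omega_1 and Omega_2.
    The threshold values here are illustrative placeholders."""
    rho = np.asarray(residuals, dtype=float)
    return not (rho.mean() > omega1 and rho.var() > omega2)

# Nodes failing the test are simply excluded from the weight update of Step 4.
```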
Step 4: Update weights
The predicted distance of each particle is obtained by Equation (11), and the weight of each particle $x_t^i$ is calculated from the current observation $z_t$ as
$$\omega_t^i = \omega_{t-1}^i \, \frac{p(z_t \mid x_t^i)\, p(x_t^i \mid x_{t-1}^i)}{q(x_t^i \mid x_{t-1}^i, z_t)} \qquad (12)$$
In order to obtain a better tracking effect with the ant colony algorithm, it is necessary to eliminate the particles whose estimated values contradict the true value: when $x_t^i \cdot x < 0$, the weight is set to $\omega_t^i = 0$; otherwise, the weight remains unchanged.
The particle weights are then normalized:
$$\omega_t^i = \frac{\omega_t^i}{\sum_{j=1}^{N \times M} \omega_t^j} \qquad (13)$$
Step 5: Ant colony optimization resampling [32]
The set of transition probabilities $\{P_{ij}(t)\}_{j=1}^{N \times M}$ from the i-th particle to the j-th particle at time (t) is obtained from Equation (14). In particle selection, a threshold $\lambda$ is set: when the maximum transition probability $P_{ij}^{best}(t)$ is less than $\lambda$, the particle does not change its position; when it is greater than $\lambda$, let $\bar x_t^i = x_t^j$. The particle set $\{\bar x_t^i\}_{i=1}^{N \times M}$ is updated in this way to maintain particle diversity, and the weight of every particle is reset to $1/(N \times M)$. The value of $\lambda$ can be obtained by experiment or experience.
$$P_{ij}(t) = \frac{\hat\omega_{ij}^{\alpha}(t)\, \eta_{ij}^{\beta}(t)}{\sum_{j} \hat\omega_{ij}^{\alpha}(t)\, \eta_{ij}^{\beta}(t)} \qquad (14)$$
Here the particle transition probability $P_{ij}(t)$ represents the probability that particle i moves to particle j at time (t), and $\hat\omega_{ij}(t)$ denotes the difference between the weights of particles i and j at time (t): the greater the weight difference, the greater the probability that particle i moves to particle j. $\eta_{ij}(t)$ reflects the moving distance of the particles: the shorter the distance a particle has to move, the greater the probability. $\alpha$ and $\beta$ weight the contributions of the weight information and the position information, respectively, during the transfer.
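The sketch below illustrates Step 5 under the stated rule: each particle moves to the particle with the highest transition probability only if that probability exceeds λ, and all weights are then reset uniformly. The pheromone and heuristic terms (weight difference and inverse distance) follow the description above, while the values of alpha, beta, and lam are assumed tuning parameters.

```python
import numpy as np

def aco_resample(particles, weights, alpha=1.0, beta=1.0, lam=0.6):
    """Ant colony resampling (Step 5): low-weight particles move to better
    particles instead of being deleted, which preserves diversity."""
    n = len(particles)
    new_particles = particles.copy()
    for i in range(n):
        w_diff = np.maximum(weights - weights[i], 0.0)       # move only toward heavier particles
        dist = np.linalg.norm(particles - particles[i], axis=1)
        eta = 1.0 / (dist + 1e-9)                            # shorter moves are preferred
        score = (w_diff ** alpha) * (eta ** beta)
        score[i] = 0.0
        total = score.sum()
        if total == 0.0:                                     # already the heaviest particle
            continue
        p = score / total                                    # Equation (14)
        j_best = int(np.argmax(p))
        if p[j_best] > lam:                                  # move only above the threshold
            new_particles[i] = particles[j_best]
    new_weights = np.full(n, 1.0 / n)                        # weights reset after resampling
    return new_particles, new_weights
```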
Step 6: Output state estimation
According to the steps above, the current state of target node 1 can be estimated by Equation (15); then let t = t + 1 and return to Step 2.
$$\hat x_t = \sum_{i=1}^{N \times M} \omega_t^i \, \bar x_t^i \qquad (15)$$
At time (t), the particle with the largest weight among all particles is selected, and its corresponding initial-azimuth index L is calculated by
$$L = r \ (\mathrm{mod}\ m) \qquad (16)$$
where, in Equation (16), m is the number of candidate directions assigned to each initial particle, and r is the index of the particle with the largest weight.
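As a small check of Equation (16), assuming the particles are stored position-major (all M azimuth hypotheses of position 1, then of position 2, and so on, as in the initialization sketch above), the initial-azimuth index of the best particle is recovered as follows; the variable names are ours.

```python
def best_azimuth_index(r, M):
    """Equation (16): recover the initial-azimuth index L from the flat index r
    of the highest-weight particle, where each position has M candidate directions."""
    return r % M

# e.g. with M = 400 directions, particle r = 80213 corresponds to azimuth index 213
assert best_azimuth_index(80213, 400) == 213
```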

4. Observability Analysis of Cooperative Localization System with One Auxiliary Node

For discrete time-varying linear systems, Goshen-Meskin et al. proposed a Piece-Wise Constant System (PWCS) observability analysis theory [33]. According to the motion characteristics of the system, the system can be divided into time segments and the observability matrix of each time segment can be calculated, so that the total observability matrix of the system can be obtained. The following introduces an observability analysis method based on singular value decomposition of the observability matrix of the system.
Without considering the process noise and the observation noise, the linearized system model is:
$$\Delta X(t) = \Phi\, \Delta X(t-1) + B\, \Delta u(t-1), \qquad \Delta Z(t) = H\, \Delta X(t) \qquad (17)$$
where
$$\Phi = \begin{bmatrix} 1 & 0 & -\Delta L(t-1)\sin\varphi(t-1) \\ 0 & 1 & \Delta L(t-1)\cos\varphi(t-1) \\ 0 & 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} \cos\varphi(t-1) & 0 \\ \sin\varphi(t-1) & 0 \\ 0 & 1 \end{bmatrix}, \quad H = \begin{bmatrix} \dfrac{x(t)-x_2(t)}{d(t)} & \dfrac{y(t)-y_2(t)}{d(t)} & 0 \end{bmatrix}$$
For a set of observations $H(0), H(1), H(2), \ldots, H(k)$, based on the PWCS observability analysis theory, the observability matrix of the dynamic system is
$$R_k = \begin{bmatrix} H(0) \\ H(1)\Phi(0) \\ H(2)\Phi(1)\Phi(0) \\ \vdots \\ H(k)\prod_{i=0}^{k-1}\Phi(i) \end{bmatrix} \qquad (18)$$
where $\Phi$ is the state transition matrix of the system and H is the observation matrix.
The singular value decomposition of $R_k$ is expressed as
$$R_k = U \Sigma V^T \qquad (19)$$
where $U = [u_1, u_2, \ldots, u_k]$ is a $k \times k$ orthogonal matrix; $V = [v_1, v_2, v_3]$ is a $3 \times 3$ orthogonal matrix; $\Sigma = \begin{bmatrix} S_{3\times3} \\ 0_{(k-3)\times3} \end{bmatrix}$ is a $k \times 3$ matrix; and $S = \mathrm{diag}(\sigma_1, \sigma_2, \sigma_3)$ is a diagonal matrix composed of the singular values of $R_k$, with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > \sigma_{r+1} = \cdots = \sigma_n = 0$, $n = 3$. The magnitude of the singular values effectively reflects the observability of the system states: the larger the singular value, the better the observability of the corresponding state.
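For readers who want to reproduce this analysis numerically, the sketch below stacks the observability matrix of Equation (18) from recorded target and auxiliary-node trajectories and returns its singular values with NumPy; the helper names and the trajectory inputs are our own.

```python
import numpy as np

def phi_matrix(dL, phi):
    """State transition Jacobian of Equation (17) for one DR step."""
    return np.array([[1.0, 0.0, -dL * np.sin(phi)],
                     [0.0, 1.0,  dL * np.cos(phi)],
                     [0.0, 0.0,  1.0]])

def h_matrix(p_target, p_anchor):
    """Range observation Jacobian of Equation (17)."""
    p_target, p_anchor = np.asarray(p_target), np.asarray(p_anchor)
    d = np.linalg.norm(p_target - p_anchor)
    return np.array([[(p_target[0] - p_anchor[0]) / d,
                      (p_target[1] - p_anchor[1]) / d,
                      0.0]])

def observability_singular_values(targets, anchors, dLs, phis):
    """Stack H(k) Phi(k-1) ... Phi(0) as in Equation (18) and return the
    singular values of the total observability matrix R_k (cf. Table 1)."""
    rows, prod = [], np.eye(3)
    for k in range(len(targets)):
        if k > 0:
            prod = phi_matrix(dLs[k - 1], phis[k - 1]) @ prod
        rows.append(h_matrix(targets[k], anchors[k]) @ prod)
    return np.linalg.svd(np.vstack(rows), compute_uv=False)
```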
Gadre and Stilwell [34] showed that the system is locally unobservable when the auxiliary node is fixed or when the range vector from the target node to the auxiliary node keeps the same relative direction. According to the PWCS observability analysis theory, the observability of the system is determined by the singular values σ calculated from $R_k$. Suppose the target node (blue line) moves along a straight line with a slope of 0.5; the estimated trajectory of the target node is shown in Figure 5.
As shown in Table 1, when the auxiliary node is fixed or the range vector from the target node to the auxiliary node keeps the same relative direction, there is a singular value of 0, which means the system is not observable. When the auxiliary node does not keep the same relative direction with respect to the target node, the observability matrix $R_k$ has full rank and the system is observable. Therefore, to make the system observable during cooperative localization, the PWCS observability analysis method can be used to select an appropriate auxiliary node trajectory.
According to the PF algorithm and the DR model, if auxiliary node 2 is moving, the particle propagation of the AACOPF algorithm is as shown in Figure 6. $p_{1,k}^i(t) = [x_{1,k}^i(t), y_{1,k}^i(t)]$ ($k = 1, 2, \ldots, N$, $i = 1, 2, \ldots, M$) are the possible positions of target node 1, $p^*(t) = [x^*(t), y^*(t)]$ is the true position of target node 1, and $p_2(t) = [x_2(t), y_2(t)]$ denotes the position of the moving auxiliary node 2. At time (0), N particles are generated and each particle is given M directions; after the particles are propagated by the DR model, we obtain $N \times M$ particles at time (1). The red line denotes the true trajectory of target node 1, the blue line is the estimated trajectory, and the yellow line denotes the true trajectory of auxiliary node 2. According to the analysis of the cooperative positioning algorithm, only when auxiliary node 2 is in motion does the error converge, and thus the coordinates of target node 1 can be estimated.
The cooperative localization algorithm is shown in Algorithm 1. By making use of the known information, such as the distance $d_{12}(t)$ between the target node and the auxiliary node, the weight information α, the position information β, the threshold λ, and the displacement increment $\Delta L(t)$ and azimuth increment $\Delta\varphi(t)$ of the target node, we obtain the localization information $\tilde p_1(t)$ and the initial azimuth $\varphi_{1,k}^i(0)$ of the target node. In Algorithm 1, $\hat p_{1,k}^i(t) = [x_{1,k}^i(t), y_{1,k}^i(t)]$ denotes the predicted state of particle k with initial azimuth i at time (t).
Algorithm 1 Cooperative Localization Algorithm with a Single Auxiliary Node
1: Input: number of initial particles N, number of initial azimuths M, Euclidean distance d_12(t), displacement increment ΔL(t), azimuth increment Δφ(t)
2: Output: estimated trajectory p̃_1(t), azimuth φ_k^i(t)
3: Generate N × M initial particles p_N^m(0)
4: while localization do
5:   for 0 < k < N do
6:     for 0 < i < M do
7:       x_k^i(t) ← x_k^i(t−1) + ΔL(t) cos[φ_k^i(t−1) + Δφ(t)]
8:       y_k^i(t) ← y_k^i(t−1) + ΔL(t) sin[φ_k^i(t−1) + Δφ(t)]
9:     end for
10:   end for
11:   d̂_k^i(t) ← Σ_{j=1}^{n} norm(p̂_k^i(t) − p_j(t))
12:   Particle weights: w_k^i ← sort( exp(−(d̂_k^i(t) − d_1j(t))²) / Σ exp(−(d̂_k^i(t) − d_1j(t))²) )
13:   Moving distance: η_k^{ij}(t) ← 1 / norm(p_k^i − p_k^j)
14:   Transition probability: p_ij ← w_ij^α(t) η_ij^β(t) / Σ_j w_ij^α(t) η_ij^β(t)
15:   if p_ij^max > λ then
        p̄_k^i ← p_k^j
16:   end if
17:   Estimated trajectory: p̃_1(t) ← mean( Σ_i w_k^i p̂_1^i(t) )
18:   Localization error: e_1(t) ← p̃_1(t) − p_1^*(t)
19:   Azimuth: φ_k^i(t) ← φ_k^i(t−1) + Δφ(t)
20: end while
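For reference, a compact Python sketch of one iteration of Algorithm 1 is given below. It reuses the helpers sketched earlier (init_particles, update_weights, aco_resample); the adaptive node-rejection test of Step 3 would simply filter the anchors and ranges lists before the weight update. The parameter values are illustrative, and the linear averaging of the azimuth is a simplification.

```python
import numpy as np

def aacopf_step(states, weights, anchors, ranges, dL, dphi, sigma=0.3):
    """One iteration of Algorithm 1, reusing update_weights (Section 2.4 sketch)
    and aco_resample (Step 5 sketch).

    states  -- (N*M, 3) particle array [x, y, phi] from init_particles
    weights -- (N*M,) particle weights
    anchors, ranges -- auxiliary-node positions and UWB distances at time t
    dL, dphi -- displacement and azimuth increments from the DR system
    """
    # Lines 7-8 and 19: propagate every particle with the DR model
    states[:, 0] += dL * np.cos(states[:, 2] + dphi)
    states[:, 1] += dL * np.sin(states[:, 2] + dphi)
    states[:, 2] += dphi

    # Lines 11-12: weight particles by how well predicted ranges match the UWB ranges
    weights = update_weights(weights, states[:, :2], anchors, ranges, sigma)

    # Line 17: weighted estimate of position and azimuth of target node 1
    # (the azimuth is averaged naively here for brevity)
    estimate = weights @ states

    # Lines 13-16: ant colony resampling maintains particle diversity
    states[:, :2], weights = aco_resample(states[:, :2], weights)

    return states, weights, estimate
```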

5. Experiment and Results

5.1. Car-to-Trolley Cooperative Localization

5.1.1. Experimental Sets

In this section, we first introduce the experimental platform and then analyze the feasibility of the algorithm. The platform consists of a car equipped with an odometer, IMU, GPS, and UWB transmitter, and a trolley with a UWB receiver and GPS. The objective of the experiment is to verify the dual-platform localization algorithm based on DR and AACOPF and then to analyze the tracking error of the car. The experimental setup and a picture of it are shown in Figure 7.
The experimental settings are as follows:
  • The distance between the car and the trolley is measured by a UWB ranging module (DW1000FOLLOWER). The measurement accuracy of the DW1000 chip is about 10 cm, and the bidirectional ranging accuracy is 20–30 cm.
  • The car, equipped with an IMU, odometer, GPS, and UWB ranging module, functions as the target node (node 1). The trolley functions as the auxiliary node (node 2) and is equipped with a GPS that provides accurate position information as well as a UWB ranging module. The positioning accuracy of the GPS is 5 cm, and the gyro bias is less than 0.05°/h.
  • The displacement increment ΔL(t) and azimuth increment Δφ(t) of the car are obtained from the odometer and INS. On the car, the dual-antenna GPS is not used in the localization process but is regarded as the reference to verify the accuracy of the cooperative localization algorithm.
  • The position of the trolley is measured by GPS.
  • The GPS localization information of the car is taken as the benchmark, and the localization information calculated by cooperative localization is the estimate.
  • The experimental site covers an area of approximately 100 m × 100 m. The car moves in a closed loop around the site.

5.1.2. Results

(1) Tracking Verification with the Trolley Static and with the Trolley Moving
The estimated trajectory and the actual trajectory of the car are drawn in Figure 8 and Figure 9. The trolley remains stationary in Figure 8, while in Figure 9 the motion of the trolley satisfies the observability condition of the system. The feasibility of the algorithm is verified by comparing the estimated trajectory with the actual trajectory.
In the experiment, the true initial azimuth of the car is φ = 71.0548°. Figure 8 shows that when the trolley is stationary, the estimated trajectory of the car cannot track the true trajectory, and the estimated initial azimuth φ_1^i = 11.9977° differs greatly from the true initial azimuth φ = 71.0548°; the cooperative localization algorithm can neither track the trajectory of the car nor estimate the azimuth when the trolley is stationary. However, in Figure 9, when the trolley is moving, the estimated trajectory and azimuth of the car converge, and the convergence process of the particles at different times is represented by different colors. Figure 10 shows that the cooperative localization algorithm tracks the trajectory of the car with an accuracy of up to 0.3 m; the initial azimuth calculated by the algorithm is φ_1^i = 71.0009°, the estimated azimuth error converges, and the accuracy of the azimuth estimation reaches 0.0539°.
(2) Select the Appropriate Localization Parameters on the Estimation of States
It can be seen from Table 2 that the positioning error and azimuth error remain stable after N = 400 and M = 400. In general, the larger the values of M and N are, the smaller the positioning error and azimuth error are. However, there are some special cases, such as N = 100 and M = 100, where the system has high accuracy; these are caused by the random generation of the N × M particles and by the UWB ranging error. Ultimately, considering both the positioning accuracy and the azimuth estimation accuracy, N = 400 and M = 400 can be selected as appropriate parameters for the three-dimensional state.
(3) Comparison of Positioning Accuracy of EKF, PF, and AACOPF with Gaussian Ranging Noise
The localization errors and azimuth estimation errors of the car in Figure 11 show that the convergence and localization accuracy of cooperative positioning rank as EKF < PF < AACOPF. The reason is that our co-localization model is a nonlinear system. The EKF linearizes the nonlinear system and therefore often suffers large errors and deviations in its estimates of states and variances. Owing to the arbitrariness of the particle distribution, the PF is more suitable for arbitrary nonlinear systems. The standard PF uses resampling to prevent particle starvation; this is simple to implement, but after many iterations it suffers from a lack of particle diversity. As can be seen from the azimuth estimation error in Figure 11, although the PF has a fast convergence rate and high positioning accuracy, its curve exhibits some protruding outliers. Therefore, we use the AACOPF to give the particles a better distribution and maintain particle diversity.

5.2. Car-to-Pedestrian Cooperative Localization

5.2.1. Experimental Sets

As shown in Figure 12, the MTi-G710 inertial measurement unit from Xsens (the Netherlands) is selected as the experimental equipment. The IMU is 57 mm × 42 mm × 24 mm in size and only 21 g in weight; being small and light, it is well suited to foot mounting and does not affect the normal motion of the pedestrian. The MTi-G710 integrates a MEMS gyroscope, a MEMS accelerometer, and a three-axis magnetometer. The gyroscope bias is less than 5°/h and the accelerometer bias is 12 μg. In a field of 30 m × 15 m, the pedestrian wears shoes with the IMU fixed to them and carries a laptop connected to the IMU and UWB. The inertial data measured by the IMU are processed by the strapdown inertial navigation solution based on zero-velocity updating. The GPS on the backpack is regarded as the reference to verify the accuracy of the cooperative localization algorithm. The car and trolley function as auxiliary nodes (node 2 and node 3), which are equipped with GPS providing accurate position information as well as UWB ranging modules. In the course of the experiment, the peak-shift error is added to auxiliary node 2, and non-Gaussian noise, or noise that exceeds the measurement error threshold, is added to auxiliary node 3.

5.2.2. Results

We compare the positioning accuracy of the PF and AACOPF with non-Gaussian ranging noise.
When the peak-shift error is added to auxiliary node 2 and non-Gaussian noise, or noise exceeding the measurement error threshold, is added to auxiliary node 3, the localization errors and azimuth estimation errors of the pedestrian in Figure 13 show that the AACOPF converges faster than the PF and its localization accuracy is higher. Due to the non-Gaussian ranging noise and the particle starvation of the PF, the curves of azimuth estimation error and trajectory estimation error in Figure 13 show that the PF exhibits some protruding outliers, whereas the AACOPF does not. Therefore, we use the AACOPF to give the particles a better distribution and maintain particle diversity; with the AACOPF, the localization and azimuth accuracy of this system are 0.66372 m and 0.22658°, respectively.

6. Conclusions

In this paper, a multi-platform cooperative localization method based on IMU and UWB is proposed. First, the particle propagation model is constructed based on a DR model. Then the weight-updating algorithm is designed using the ranging information between the target node and the auxiliary node, measured by UWB, as the observation. The position and azimuth of the target node are estimated by the AACOPF. The convergence of the algorithm is proved and analyzed to verify the feasibility and effectiveness of the single-to-single cooperative localization algorithm. This method contributes to solving the localization problem in harsh and complex environments. Only one moving auxiliary node is needed to estimate the position and initial azimuth of the target node, with no prior information about the position and azimuth of the target node. When there are multiple auxiliary nodes (new sensors are added), the particle filter algorithm can perform rapid cooperative positioning without remodelling or re-linearizing the system, only by self-adaptive adjustment of the particle weights. It has the capacity for plug-and-play operation, since a new sensor is only added as an additional source for weight updating. Similarly, if a sensor cannot be used due to signal loss or failure, the system only needs to avoid adding the relevant factors, without special procedures.

Author Contributions

Y.H. proposed the conceptualization of this paper; C.W. did the investigation and formal analysis; Y.H. and C.W. conceived the AACOPF algorithm; Y.H., C.W., J.W. and H.Y. designed and performed the experiments; C.W. analyzed the data; C.W. wrote this paper; Y.H., R.L., J.W. and H.Y. reviewed the paper and provided important professional feedback. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Equipment Development Department of the Central Military Commission of the People's Republic of China, grant numbers 41417030103 and GFZX0403260302.

Acknowledgments

Yashan Zheng, from the school of automation, Beijing Institute of Technology, provided some programming support for the second author of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, X.; Zhang, X.; Ren, X.; Fritsche, M.; Wickert, J.; Schuh, H. Precise positioning with current multi-constellation global navigation satellite systems: GPS, GLONASS, Galileo and BeiDou. Sci. Rep. 2015, 5, 8328. [Google Scholar] [CrossRef] [PubMed]
  2. Xing, B.; Zhu, Q.; Pan, F.; Feng, X. Marker-Based Multi-Sensor Fusion Indoor Localization System for Micro Air Vehicles. Sensors 2018, 18, 1706. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Salari, S.; Kim, I.M.; Chan, F. Distributed cooperative localization for mobile wireless sensor networks. IEEE Wirel. Commun. Lett. 2018, 7, 18–21. [Google Scholar] [CrossRef]
  4. Wymeersch, H.; Lien, J.; Win, M.Z. Cooperative localization in wireless networks. Proc. IEEE 2009, 97, 427–450. [Google Scholar] [CrossRef]
  5. Wang, Y.; Ye, Q.; Cheng, J.; Wang, L. RSSI-based bluetooth indoor localization. In Proceedings of the 2015 11th International Conference on Mobile Ad-hoc and Sensor Networks (MSN), Shenzhen, China, 16–18 December 2015; pp. 165–171. [Google Scholar]
  6. Golden, S.A.; Bateman, S.S. Sensor measurements for Wi-Fi location with emphasis on time-of-arrival ranging. IEEE Trans. Mob. Comput. 2007, 6, 1185–1198. [Google Scholar] [CrossRef]
  7. Sugano, M.; Kawazoe, T.; Ohta, Y.; Murata, M. Indoor localization system using RSSI measurement of wireless sensor network based on zigbee standard. In Proceedings of the 6th IASTED International Conference on Wireless and Optical Communications, Banff, AB, Canada, 3–5 July 2006; pp. 503–508. [Google Scholar]
  8. Kubota, N.; Nishida, K. Cooperative perceptual systems for partner robots based on sensor network. Int. J. Comput. Sci. Netw. Secur. 2006, 6, 19–28. [Google Scholar]
  9. Lee, S.; Song, J.B. Mobile robot localization using infrared light reflecting landmarks. In Proceedings of the 2007 International Conference on Control, Automation and Systems (ICCAS), Seoul, Korea, 17–20 October 2007; pp. 674–677. [Google Scholar]
  10. Liu, R.; Yuen, C.; Do, T.N.; Jiao, D.; Liu, X.; Tan, U.X. Cooperative relative positioning of mobile users by fusing IMU inertial and UWB ranging information. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 5623–5629. [Google Scholar]
  11. Liu, W.; Zheng, J.; Wang, M.; Wang, M.; Lin, J. A Multipoint Collaborative UWB Location System Based on TDOA. J. Guilin Univ. Electron. Technol. 2004, 24, 1–4. [Google Scholar]
  12. Li, S.; Hedley, M.; Collings, I.B. New Efficient Indoor Cooperative Localization Algorithm with Empirical Ranging Error Model. IEEE J. Sel. Areas Commun. 2015, 33, 1407–1417. [Google Scholar] [CrossRef]
  13. Flenniken, W.; Wall, J.; Bevly, D. Characterization of Various IMU Error Sources and the Effect on Navigation Performance. Available online: https://pdfs.semanticscholar.org/1f07/13f093a208f11752c669e53ef06e2654803c.pdf (accessed on 14 January 2020).
  14. Gentner, C.; Ulmschneider, M.; Jost, T. Cooperative simultaneous localization and mapping for pedestrians using low-cost ultra-wideband system and gyroscope. In Proceedings of the 2018 IEEE/ION Position, Location and Navigation Symposium (PLANS), Monterey, CA, USA, 23–26 April 2018; pp. 1197–1205. [Google Scholar]
  15. Liu, J.; Pu, J.; Sun, L.; He, Z. An Approach to Robust INS/UWB Integrated Positioning for Autonomous Indoor Mobile Robots. Sensors 2019, 19, 950. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Trujillo, J.C.; Munguia, R.; Guerra, E.; Grau, A. Cooperative Monocular-Based SLAM for Multi-UAV Systems in GPS-Denied Environments. Sensors 2018, 18, 1351. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Fallon, M.F.; Papadopoulos, G.; Leonard, J.J.; Patrikalakis, N.M. Cooperative AUV navigation using a single maneuvering surface craft. Int. J. Robot. Res. 2010, 29, 1461–1474. [Google Scholar] [CrossRef] [Green Version]
  18. Sabatini, A.M. Quaternion-based extended Kalman filter for determining orientation by inertial and magnetic sensing. IEEE Trans. Biomed. Eng. 2006, 53, 1346–1356. [Google Scholar] [CrossRef] [PubMed]
  19. Ren, X.; Ni, S.; Ma, J. Analysis of Ground Moving Target Tracking Algorithms for Airborne Radar Based on UKF Algorithms. Inf. Commun. 2018, 3, 21–22. [Google Scholar]
  20. Tong, K.; Tian, S.; Li, G. Research on Performance of Dynamic Co-location System Based on Particle Filter Algorithm. Commun. Technol. 2016, 49, 163–167. [Google Scholar]
  21. Shemi, P.M.; Jibukumar, M.G.; Sabu, M.K. A novel relay selection algorithm using ant colony optimization with artificial noise for secrecy enhancement in cooperative networks. Int. J. Commun. Syst. 2018, 31, e3739. [Google Scholar] [CrossRef]
  22. Hao, Z.; Zhang, X.; Yu, P.; Li, H. Video object tracing based on particle filter with ant colony optimization. In Proceedings of the 2010 2nd International Conference on Advanced Computer Control, Shenyang, China, 27–29 March 2010; Volume 3, pp. 232–236. [Google Scholar]
  23. Li, N.; Chen, J.; Yuan, Y.; Tian, X.; Han, Y.; Xia, M. A Wi-Fi indoor localization strategy using particle swarm optimization based artificial neural networks. Int. J. Distrib. Sens. Netw. 2016, 12, 4583147. [Google Scholar] [CrossRef] [Green Version]
  24. Fischer, C.; Muthukrishnan, K.; Hazas, M.; Gellersen, H. Ultrasound-Aided Pedestrian Dead Reckoning for Indoor Navigation. Available online: https://dl.acm.org/doi/10.1145/1410012.1410020 (accessed on 14 January 2020).
  25. Deng, L.; Guo, H.; Luo, X.; Tian, B.; He, H. Research on Pedestrian Dead Reckoning Method Based on Multi-sensor Combination. Mod. Electron. Technol. 2017, 40, 11–14. [Google Scholar]
  26. Yan, G. Research on Strapdown Inertial Navigation Algorithm and Vehicle Integrated Navigation System. Master’s Thesis, Northwest University of Technology, Xi’an, China, 2004. [Google Scholar]
  27. Tian, X.; Chen, J.; Han, Y.; Shang, J.; Li, N. Pedestrian navigation system using MEMS sensors for heading drift and altitude error correction. Sens. Rev. 2017, 37, 270–281. [Google Scholar] [CrossRef]
  28. Jourdan, D.B.; Dardari, D.; Win, M.Z. Position error bound for UWB localization in dense cluttered environments. In Proceedings of the 2006 IEEE International Conference on Communications, Istanbul, Turkey, 11–15 June 2006; Volume 8, pp. 3705–3710. [Google Scholar]
  29. Chong, C.C.; Watanabe, F.; Win, M.Z. Effect of bandwidth on UWB ranging error. In Proceedings of the 2007 IEEE Wireless Communications and Networking Conference, Kowloon, China, 11–15 March 2007; pp. 1559–1564. [Google Scholar]
  30. Zhao, D. Localization and Navigation in Complicated Environments Using UWB and IMU. Master’s Thesis, Harbin University of Technology, Harbin, China, 2016. [Google Scholar]
  31. Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics; Machinery Industry Press: Beijing, China, 2006. [Google Scholar]
  32. Nobahari, H.; Sharifi, A. A novel heuristic filter based on ant colony optimization for non-linear systems state estimation. In International Symposium on Intelligence Computation and Applications; Springer: Berlin/Heidelberg, Germany, 2012; pp. 20–29. [Google Scholar]
  33. Goshen-Meskin, D.; Bar-Itzhack, I.Y. Observability analysis of piece-wise constant systems. I. Theory. IEEE Trans. Aerosp. Electron. Syst. 1992, 28, 1056–1067. [Google Scholar] [CrossRef]
  34. Gadre, A.S.; Stilwell, D.J. A complete solution to underwater navigation in the presence of unknown currents based on range measurements from a single location. In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005; pp. 1420–1425. [Google Scholar]
Figure 1. The scenario application model of collaborative localization.
Figure 2. The proposed inertial measurement unit (IMU) and ultra-wideband (UWB) collaborative localization system diagram.
Figure 3. The principle of the dead reckoning (DR) model (the position of the carrier at the next time is estimated from the velocity, heading, and position of the carrier at the previous time).
Figure 4. The set of possible locations and initial azimuth of target node 1, and the particles propagation model of target node 1 and auxiliary node 2.
Figure 5. System observability analysis. (a) The auxiliary node fixed (red dot), (b) the range of target node to the auxiliary node (red line) located at the same relative direction, (c) the auxiliary node (red line) does not move with respect to the target node.
Figure 6. Particle’s propagation model of target node 1 with a moving auxiliary node.
Figure 7. Car-to-trolley cooperative localization system.
Figure 8. The trajectory tracking of car (target node 1) with trolley stationary (auxiliary node 1).
Figure 9. The trajectory tracking (a) and azimuth (b) estimation of car (target node 1) with trolley moving (auxiliary node 1).
Figure 10. The positioning (a) and azimuth (b) estimation error of the car (target node 1) for one of the running programs, when N = 400, M = 400.
Figure 11. The positioning (a) and azimuth (b) estimation error of the car (target node 1) for one of the running programs, when N = 400, M = 400.
Figure 12. Experimental equipment of MTi-G710.
Figure 13. The trajectory estimation errors (a) and azimuth estimation errors (b) of the pedestrian (target node 1) for one of the running programs, when N = 400, M = 400.
Table 1. The singular values of the system state under three kinds of motion states.

       Auxiliary Node Fixed    Relative Movement    Non-Relative Movement
σ1     1.2884                  103.0097             124.9266
σ2     0.0016                                       14.0294
σ3     0                       0                    0.8866
Table 2. Effects of the number of initial positions and azimuths on the positioning error and azimuth error.

M       N       Initial Azimuth Error/°   Localization Error/m
50      50      6.9477                    4.5267
100     100     0.1134                    0.2583
200     200     1.3426                    1.2635
400     400     0.0539                    0.0687
600     600     0.0684                    0.0681
800     800     0.0539                    0.0187
1000    1000    0.0539                    0.0245
1500    1500    0.0539                    0.0197
