Article

Precise Target Geo-Location of Long-Range Oblique Reconnaissance System for UAVs

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 Key Laboratory of Airborne Optical Imaging and Measurement, Chinese Academy of Sciences, Changchun 130033, China
3 Beijing Institute of Control Engineering, Beijing 100190, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(5), 1903; https://doi.org/10.3390/s22051903
Submission received: 22 January 2022 / Revised: 13 February 2022 / Accepted: 23 February 2022 / Published: 28 February 2022
(This article belongs to the Section Vehicular Sensing)

Abstract

High-precision, real-time, long-range target geo-location is crucial to UAV reconnaissance and target strikes. Traditional geo-location methods depend heavily on the accuracies of GPS/INS and of the target elevation, which restricts the target geo-location accuracy of a long-range oblique reconnaissance system (LRORS). Moreover, the common real-time methods of improving accuracy, such as laser range finders, DEM, and geographic reference data, are inappropriate for long-range UAVs, owing to the limited laser range and the need for pre-obtained data. To address these problems, a set of work patterns and a novel geo-location method are proposed in this paper. The proposed method is not restricted by the accuracy of GPS/INS, the target elevation, or range-finding instrumentation. Specifically, three steps are performed. First, calculate a rough geo-location of the target using the traditional method. Second, reimage the target according to the rough geo-location; because of errors in GPS/INS and the target elevation, there will be a reprojection error between the actual target points and the calculated projections. Third, apply a weighted filtering algorithm to the reprojection error to obtain an optimized target geo-location. Repeat this process until the target geo-location estimate converges on the true value. The geo-location accuracy is improved jointly by the work pattern and the optimization algorithm. The proposed method was verified by simulation and a flight experiment. The results showed that the proposed method improves the geo-location accuracy by 38.8 times and 22.5 times compared with the traditional and DEM methods, respectively, indicating that our method is efficient, robust, easy to implement, and achieves high-precision target geo-location.

1. Introduction

Unmanned aerial vehicles (UAVs) have attracted widespread attention owing to their timeliness and flexibility. In the military, UAVs have been demonstrated to be effective mobile platforms that can satisfy the spatial and temporal resolution requirements of onboard sensors [1,2,3,4]. In civil applications, numerous UAV platforms have been applied to disaster monitoring, rescue, mapping, surveying, and environmental scouting [5,6,7,8]. To enhance reconnaissance security during high-risk missions and achieve wide-area search, long-range oblique reconnaissance systems (LRORS) and high-precision, real-time target geo-location have become prevalent [9,10,11,12,13,14].
The geo-location method is essential to achieving high-precision geo-location, and target geo-location algorithms have been widely studied. D. B. Barber [15] proposed a localization method based on the flat-earth model for a ground target imaged from a fixed-wing miniature air vehicle (MAV). This method localizes low-altitude, short-range targets well; experimental results showed that it could localize the target to within 3 m when the MAV flew at 100–200 m. However, this method is not applicable to LRORS, since the influence of the earth's curvature is not considered.
In view of the above problem, J. Stich [16] proposed a target geo-location method based on the World Geodetic System 1984 (WGS84) ellipsoidal earth model. This method is also the traditional method used by LRORS at present. This algorithm enables autonomous imaging of defined ground areas at arbitrary standoff distances, without range finding instrumentation, and effectively reduces the influence of earth curvature on the target geo-location accuracy. The method calculates the geo-location based on the external orientation elements measured by GPS/INS and the elevation of the target. On the one hand, the geo-location accuracy depends on similar ground elevations under the platform and at the target area. For mountainous regions, the allowable stand-off distance will be constrained by the variation in terrain elevation. The impact of the terrain elevation on geo-location accuracy was analyzed in [17]. The results demonstrated that with the increase of off-nadir looking angle and imaging distance, the influence of target elevation on the geo-location accuracy increases. On the other hand, the measurement accuracy of the GPS/INS equipped by most UAVs is not sufficient to achieve high precision geo-location. Since the target elevation is usually unknown in an actual project, and usually the GPS/INS is not sufficient, application of the traditional method is limited.
In order to improve the geo-location precision when the target elevation is unknown, the laser range finder (LRF) method [18,19] is proposed. By measuring the distance between the target and the UAVs, the LRF method resolves the influence of target elevation on geo-location accuracy. Flight-test results show that the target geo-location accuracy is less than 8 m when the distance between the target and the UAVs is 10 km [18]. However, the LRF method is not suitable for target geo-location of LRORS, due to the limitation of laser range [19].
The digital elevation model (DEM) method [20,21] can overcome the distance limitation of the LRF method. A method based on DEM was described in [20]; simulation and flight-test results demonstrated that when the off-nadir looking angle is 80 degrees and the target ground elevation is 100 m, the geo-location accuracy improves from 600 m with the traditional method to 180 m with the DEM method. However, the DEM method has several problems in practical applications for LRORS. First, it is restricted by the accuracy and continuity of the DEM data. Second, it cannot be used for artificial buildings, because DEM data usually do not contain building height information. Third, it needs pre-obtained data and has a high computational cost. The DEM method is therefore inappropriate for real-time target geo-location.
To address the problem that DEM data do not include building heights, a geo-location algorithm for building targets based on an image method was proposed in [22]. A convolutional neural network automatically detects building locations, and the imaging angle is used to estimate building height. The results demonstrated that the image method can improve the positioning accuracy of building targets by approximately 20–50% compared with the traditional geo-location method. Unfortunately, this method is also inappropriate for real-time target geo-location because of the computational complexity of the required image processing.
Methods based on geographic reference data [23,24,25,26] and cooperative localization between UAVs [27,28,29,30] have been proposed to improve the accuracy of the target geo-location. These methods are also inappropriate for real-time target geo-location because of the need for pre-obtained geo-referenced images and high flight cost.
Focusing on these above mentioned problems, we propose a novel geo-location method for long-range UAVs, which uses multiple observations on the same target by a single UAV to improve the geo-location accuracy. The main contributions of this paper are given as follows:
  • Based on a comparative analysis of the factors affecting the accuracy of present geo-location methods, a set of work patterns and a novel geo-location method are proposed to address these problems. The proposed method is iterative, and the geo-location accuracy is greatly improved by repeatedly imaging the same stationary target point. In brief, the procedure is as follows: Step 1, calculate the rough geo-location of the target using the traditional method. Step 2, based on the rough geo-location of the target, adjust the position of the gimbal and reimage the target. Step 3, process the reprojection errors and obtain an optimized target geo-location. Repeat the above process; after several iterations, the estimated geo-location converges on the true value.
  • Compared with the traditional method, the proposed method does not rely on the accuracy of GPS/INS and target elevation, which are regarded as the key error sources in the traditional method. Compared with the laser range finder method, the proposed method is not limited by laser ranging distance. Compared with the DEM method and image method, high-precision real-time geo-location can be realized without DEM or geographic reference data. Compared with cooperative localization between UAVs, the proposed method can achieve high precision without multiple UAVs.
  • The proposed method can achieve high precision, without high-precision GPS/INS, multiple UAVs, and geographic reference data, such as a standard map, DEM, and so on. The proposed method has strong timeliness and a more extensive application value in practical engineering.
The remainder of the paper is structured as follows: Section 2 introduces the traditional geo-location model, based on the WGS-84 ellipsoidal earth model. Section 3 elucidates the detailed implementation of the proposed work pattern and algorithm. Finally, experimental validation is presented in Section 4, and the discussions associated with the experimental results and analysis are organized in Section 5, while conclusions are drawn in Section 6.

2. Geo-Location Method Based on WGS-84 Ellipsoidal Earth Model

This section first introduces the traditional geo-location method based on the WGS-84 ellipsoidal earth model. Then, the sources of influence on the geo-location accuracy of the traditional geo-location method are analyzed.

2.1. Geo-Location Model using the Traditional Method

Four basic coordinate systems are used in the traditional geo-location method: the earth-centered earth-fixed (ECEF) coordinate system, the north-east-down (NED) coordinate system, the UAV platform (P) coordinate system, and the sensor (S) coordinate system.
In the following discussion, the coordinates of a point D in coordinate system A are denoted $D^A = [x_D^A\ y_D^A\ z_D^A]^T$. Coordinate transformations are denoted $C_A^B$, the transformation matrix from coordinate system A to coordinate system B, and $C_B^A = (C_A^B)^{-1}$ is the transformation matrix from B back to A. The transformation matrices for each of the coordinate systems are introduced in Appendix A.
The target point G is projected onto a frame CCD, as shown in Figure 1. A frame CCD is a 2D array of detectors, and one exposure captures the entire scene. The deviation of the projection point from the CCD center is m pixels in the $X_C$ direction and n pixels in the $Y_C$ direction.
The projection point G′ in the S coordinate system can be expressed as
$$G'^S = \begin{bmatrix} x_{G'}^S & y_{G'}^S & z_{G'}^S \end{bmatrix}^T = \begin{bmatrix} m a & n a & f \end{bmatrix}^T, \tag{1}$$
where a is the pixel size of the CCD and f is the focal length of the LRORS. The target projection point G′ and the origin $O_1$ of the S coordinate system can be expressed in the ECEF coordinate system as $G'^E$ and $O_1^E$.
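As a concrete illustration, the mapping of Equation (1) from a pixel offset to a sensor-frame vector can be sketched as follows (the pixel pitch and focal length below are illustrative values, not parameters from the paper):

```python
import numpy as np

def sensor_frame_point(m, n, a, f):
    """Project a CCD pixel offset (m, n) from the image center into the
    sensor (S) coordinate frame, per Equation (1): G'^S = [m*a, n*a, f]^T.

    m, n : pixel offsets along X_C and Y_C
    a    : pixel pitch (same length unit as f)
    f    : focal length
    """
    return np.array([m * a, n * a, f])

# Example: a 5 um pixel pitch, 1.5 m focal length, and a target imaged
# 120 px right and 40 px below the image center (illustrative numbers).
g_s = sensor_frame_point(120, -40, 5e-6, 1.5)
```

The resulting vector is then rotated into the ECEF frame by the transformation chain of Equation (2).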
$$\begin{bmatrix} G'^E \\ 1 \end{bmatrix} = \begin{bmatrix} x_{G'}^E \\ y_{G'}^E \\ z_{G'}^E \\ 1 \end{bmatrix} = C_{NED}^{ECEF}\,C_P^{NED}\,C_S^P \begin{bmatrix} G'^S \\ 1 \end{bmatrix}, \qquad \begin{bmatrix} O_1^E \\ 1 \end{bmatrix} = \begin{bmatrix} x_{O_1}^E \\ y_{O_1}^E \\ z_{O_1}^E \\ 1 \end{bmatrix} = C_{NED}^{ECEF} \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}, \tag{2}$$
The target point G in the ECEF coordinate system is denoted $G^E = [x_G^E\ y_G^E\ z_G^E]^T$, the target-to-sensor vector is $G^E - O_1^E$, and the geodetic height of the target point G is defined as $h_G$. For an ideal LRORS, the target point G, the origin $O_1$ of the S coordinate system, and the projection point G′ are collinear, so $G^E$ must satisfy the condition in Equation (3) [11].
$$\begin{cases} \dfrac{x_G^E - x_{O_1}^E}{x_{G'}^E - x_{O_1}^E} = \dfrac{y_G^E - y_{O_1}^E}{y_{G'}^E - y_{O_1}^E} = \dfrac{z_G^E - z_{O_1}^E}{z_{G'}^E - z_{O_1}^E} \\[2ex] \dfrac{(x_G^E)^2}{(R_E + h_G)^2} + \dfrac{(y_G^E)^2}{(R_E + h_G)^2} + \dfrac{(z_G^E)^2}{(R_P + h_G)^2} = 1 \end{cases} \tag{3}$$
Solving Equation (3) yields the target position $G^E = [x_G^E\ y_G^E\ z_G^E]^T$ in the ECEF coordinate system. Then, the geo-location $[\lambda_G\ \varphi_G\ h_G]^T$ of target G can be solved according to Equations (A2) and (A3) in Appendix A.
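Geometrically, the second line of Equation (3) constrains the target to an ellipsoid inflated by the assumed target height, so the traditional solution amounts to intersecting the line-of-sight ray with that surface. A minimal sketch of the intersection, with our own variable names:

```python
import numpy as np

def ray_ellipsoid_intersect(o1, d, Re, Rp, h):
    """Intersect the line-of-sight ray o1 + t*d with the ellipsoid
    inflated by the target geodetic height h (second line of Equation (3)).
    Returns the nearer intersection point in ECEF coordinates, or None if
    the ray misses the surface. Re, Rp: equatorial and polar radii.
    """
    a2 = (Re + h) ** 2
    b2 = (Rp + h) ** 2
    d = np.asarray(d, dtype=float)
    # Substituting the ray into x^2/a2 + y^2/a2 + z^2/b2 = 1 gives a
    # quadratic A t^2 + B t + C = 0 in the ray parameter t.
    A = (d[0]**2 + d[1]**2) / a2 + d[2]**2 / b2
    B = 2 * ((o1[0]*d[0] + o1[1]*d[1]) / a2 + o1[2]*d[2] / b2)
    C = (o1[0]**2 + o1[1]**2) / a2 + o1[2]**2 / b2 - 1
    disc = B*B - 4*A*C
    if disc < 0:
        return None
    t = (-B - np.sqrt(disc)) / (2*A)   # nearer root: first surface hit
    return np.asarray(o1) + t * d
```

For example, a ray fired straight down the x-axis from outside a spherical earth (Re = Rp) hits the surface at x = Re, as expected.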

2.2. The Sources of Influence in the Traditional Method on Geo-Location Accuracy

The target geo-location accuracy is affected by the target elevation error and the measurement variances. The target elevation is usually unknown in an actual project. The measurement variances are largely dependent on the measurement precision of the LOS vector direction and the UAV position. LOS vector direction consists of the UAV attitude and the gimbal angles, which are measured by INS and encoders, respectively. The position information of the UAV is measured by GPS. The error sources affecting the geo-location accuracy are shown in Figure 2.
In order to intuitively illustrate the influence of the measurement variances and target elevation error on the geo-location accuracy of LRORS, we conducted comparative analysis on the geo-location accuracy when the off-nadir looking angle was 0°, 35°, and 75°, respectively. The measurement variances in the geo-location are shown in Table 1.
Assuming the target G is located at the position (43.300000° N, 84.200000° E, 1551.00 m), the target elevation error is 50 m, and the UAV flies at a geodetic height of 10,000 m, the Monte-Carlo method is used to analyze the target geo-location error. The results are shown in Figure 3 and Figure 4. It can be seen that the geo-location accuracy decreases significantly as the off-nadir looking angle increases under the same measurement variances. The maximum geo-location error reaches 800 m when the off-nadir looking angle is 75°, while it is only 300 m and 200 m at 35° and 0°, respectively. Among the various measurement variances, the target elevation error has the greatest impact on the geo-location accuracy: when the target elevation error is 50 m, the maximum geo-location errors it causes are 130 m, 150 m, and 500 m at off-nadir looking angles of 0°, 35°, and 75°, respectively.
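The qualitative trend can be reproduced with a deliberately simplified Monte-Carlo sketch. This flat-ground ray model is ours, for intuition only (the paper's analysis uses the full WGS-84 chain), and all noise magnitudes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_ground_error(h_uav, off_nadir_deg, sigma_angle_deg,
                             sigma_pos, sigma_elev, n_trials=1000):
    """Rough Monte-Carlo sketch of how attitude, position, and target-
    elevation noise map into ground error at a given off-nadir angle.
    Simplified flat-ground geometry, not the paper's WGS-84 model.
    """
    theta = np.radians(off_nadir_deg)
    slant = h_uav / np.cos(theta)              # slant range to the target
    errs = np.empty(n_trials)
    for k in range(n_trials):
        d_theta = np.radians(rng.normal(0.0, sigma_angle_deg))
        d_pos = rng.normal(0.0, sigma_pos)
        d_h = rng.normal(0.0, sigma_elev)
        # angular error scales with slant range; elevation error is
        # amplified by tan(theta) for oblique rays
        errs[k] = (abs(slant / np.cos(theta) * d_theta)
                   + abs(d_pos) + abs(np.tan(theta) * d_h))
    return errs.mean()

e0 = monte_carlo_ground_error(10000, 0, 0.01, 10, 50)
e75 = monte_carlo_ground_error(10000, 75, 0.01, 10, 50)
```

With identical noise levels, the mean error at a 75° off-nadir angle is far larger than at nadir, matching the trend reported in the text.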

3. The Proposed Geo-Location Method

In this section, we first introduce the work pattern of the proposed method, then we use the proposed algorithm to estimate the target geo-location.

3.1. The Work Pattern of LRORS

The work pattern proposed in this paper is shown in Figure 5. First, the initial geo-location of the target is calculated using the traditional method. Affected by the accuracy of GPS/INS and the target elevation, the initial geo-location is approximate. Then, based on the initial geo-location, it is possible to calculate the LOS adjustment and re-image the target. Due to errors in GPS/INS and elevation of the target, there will be a re-projection error between the actual points of the target and the calculated projection ones during the reimaging. Third, an algorithm is proposed to deal with the reprojection error and obtain a new optimized target geo-location. Repeat the above process, and the target geo-location estimation will converge on the true value.
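The loop of Figure 5 can be sketched as a generic skeleton. Here `observe` and `update` are placeholder callables we introduce for the reimaging step and the filter correction (the CKF of Section 3.2), not an API from the paper:

```python
import numpy as np

def iterative_geolocation(initial_est, observe, update, n_iters=50):
    """Skeleton of the proposed work pattern: start from the rough
    (traditional) estimate, reimage the target, and feed each
    reprojection error back through a filter update until the estimate
    converges. `observe` returns the reprojection error of the current
    estimate; `update` applies the filter correction.
    """
    est = np.asarray(initial_est, dtype=float)
    for _ in range(n_iters):
        reproj_err = observe(est)    # actual vs. predicted image point
        est = update(est, reproj_err)
    return est

# Toy demo: a 1-D "target" at 100.0 observed by a linear camera; a
# damped correction stands in for the filter gain.
truth = np.array([100.0])
est = iterative_geolocation(
    np.array([60.0]),
    observe=lambda x: truth - x,        # reprojection error
    update=lambda x, e: x + 0.3 * e,    # filter-like damped update
)
```

The toy estimate converges geometrically toward the true value, mirroring how the reprojection-error feedback drives the full method toward the true geo-location.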

3.2. The Proposed Algorithm

In the proposed work pattern, the LOS must always be maintained pointing at the target. Since the target is fixed to the earth, its geo-location is a constant state variable in the state equation. According to the position of the target in the sensor coordinate system, the target geo-location is computed by a nonlinear measurement equation from an initial state, with the model variances included. The equations can be expressed as
$$\begin{cases} x_k = \Phi_{k-1}\,x_{k-1} + W_{k-1} \\ z_k = h(x_k) + V_k \end{cases} \tag{4}$$
where $x_k = [\varphi_k, \lambda_k, h_k]^T$ is the geographical position of the target G, with $\varphi_k$, $\lambda_k$, and $h_k$ the latitude, longitude, and geodetic height of G at time k; $z_k = [m_k\ n_k]^T$ is the measurement obtained from each remote sensing image; $W_{k-1}$ is the process noise and $V_k$ the observation noise, with covariance matrices $Q_{k-1}$ and $R_k$, respectively; and $\Phi_{k-1}$ is the state transition matrix. For a stationary target, $\Phi_{k-1}$ and $Q_{k-1}$ are
$$\Phi_{k-1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad Q_{k-1} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \tag{5}$$
The position of the target in an image is obtained by image registration: SIFT features are extracted from the current and previous images, matched, and the matches are refined with the RANSAC algorithm. The offset of the target between two consecutive images can then be calculated from the match information. By controlling the LOS with these offsets, the target is kept at the image center, deviating from it by less than 2 pixels. The observation noise covariance matrix $R_k$ can therefore be assumed to be
$$R_k = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}, \tag{6}$$
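The registration step can be illustrated with the RANSAC part alone. In the real pipeline the matched points would come from SIFT; here we generate synthetic matches and fit a pure-translation model, a simplification of the paper's pipeline with our own names and numbers:

```python
import numpy as np

def ransac_translation(pts_prev, pts_curr, n_iters=200, tol=2.0, seed=0):
    """Estimate the inter-frame offset of the target from matched
    feature points by RANSAC consensus with a one-point translation
    model: sample a match, hypothesize the shift, count inliers.
    """
    rng = np.random.default_rng(seed)
    best_shift, best_inliers = np.zeros(2), 0
    for _ in range(n_iters):
        i = rng.integers(len(pts_prev))           # sample one match
        shift = pts_curr[i] - pts_prev[i]
        resid = np.linalg.norm(pts_curr - (pts_prev + shift), axis=1)
        inliers = int((resid < tol).sum())
        if inliers > best_inliers:
            best_inliers, best_shift = inliers, shift
    return best_shift

# Synthetic matches: true shift (12, -7) px with 20% gross outliers.
rng = np.random.default_rng(1)
prev = rng.uniform(0, 500, (50, 2))
curr = prev + np.array([12.0, -7.0]) + rng.normal(0, 0.3, (50, 2))
curr[:10] += rng.uniform(50, 100, (10, 2))        # corrupted matches
shift = ransac_translation(prev, curr)
```

Despite the outliers, the consensus shift lands close to the true (12, −7) px offset, which is what allows the LOS controller to hold the target within 2 pixels of the image center.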
In Equation (4), $\Phi_{k-1}$ is the state transition matrix and $h(x_k)$ is the measurement function, which computes the predicted measurement from the predicted state. It can be expressed as
$$h(x_k) = \frac{1}{a}\begin{bmatrix} x_{G'|k}^S & y_{G'|k}^S \end{bmatrix}^T, \tag{7}$$
where $G'^{S}_{|k} = [x_{G'|k}^S\ y_{G'|k}^S\ z_{G'|k}^S]^T = (f / z_{G|k}^S)\,G^{S}_{|k}$ represents the estimated position of the target in the image. According to Equation (A1) in Appendix A, the coordinates of the target in the S coordinate system can be expressed as
$$\begin{bmatrix} G^{S}_{|k} \\ 1 \end{bmatrix} = \begin{bmatrix} x_{G|k}^S \\ y_{G|k}^S \\ z_{G|k}^S \\ 1 \end{bmatrix} = C_P^S\,C_{NED}^P\,C_{ECEF}^{NED} \begin{bmatrix} (R_N + h_k)\cos\varphi_k\cos\lambda_k \\ (R_N + h_k)\cos\varphi_k\sin\lambda_k \\ \left(R_N(1 - e^2) + h_k\right)\sin\varphi_k \\ 1 \end{bmatrix} \tag{8}$$
The solution process of the function $h(x_k)$ is shown in Figure 6.
Since $h(x_k)$ is nonlinear, the cubature Kalman filter (CKF) is adopted in our method [31]. The CKF does not require the calculation of Jacobian or Hessian matrices; the filter is easy to construct, computes estimates quickly with low complexity, and, in particular, satisfies the system requirement for fast state estimation in real time. The steps of the weighted filtering algorithm are as follows:
Step 1: Determine the cubature sample point.
The cubature points and the corresponding weights are
$$\xi_i = \begin{cases} \sqrt{n}\,[I]_{(i)}, & i = 1, \dots, n \\ -\sqrt{n}\,[I]_{(i-n)}, & i = n+1, \dots, 2n \end{cases} \qquad \omega_i = \frac{1}{2n}, \quad i = 1, 2, \dots, 2n, \tag{9}$$
where i is the index of the cubature point and $[I]_{(i)}$ is the ith column of the $n \times n$ identity matrix.
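Equation (9) can be written directly in NumPy (a minimal sketch; the function name is ours):

```python
import numpy as np

def cubature_points(n):
    """Generate the 2n cubature points xi_i and equal weights
    w_i = 1/(2n) of the third-degree spherical-radial rule
    (Equation (9)): +sqrt(n)*e_i for i = 1..n, -sqrt(n)*e_{i-n} after.
    """
    eye = np.sqrt(n) * np.eye(n)
    xi = np.vstack([eye, -eye])          # shape (2n, n)
    w = np.full(2 * n, 1.0 / (2 * n))
    return xi, w

xi, w = cubature_points(3)               # n = 3 states: lat, lon, height
```

For the three-dimensional geo-location state this yields six points at ±√3 along each axis, with weights summing to one.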
Step 2: Information prediction.
We define $x_0$ as the initial state of the target geo-location and $P_0$ as the initial error variance matrix; $P_{k-1}$ is the error variance matrix at time $k-1$. Using the Cholesky decomposition [32], we obtain
$$P_{k-1} = \sqrt{P_{k-1}}\,\sqrt{P_{k-1}}^T, \tag{10}$$
where $S_{k-1} = \sqrt{P_{k-1}}$ is the square-root matrix of $P_{k-1}$.
Then, 2n cubature points can be obtained according to the following equation
$$x_{i,k-1} = \sqrt{P_{k-1}}\,\xi_i + x_{k-1}, \tag{11}$$
We use $S_{k-1}$ instead of $\sqrt{P_{k-1}}$ in the measurement-updating step; therefore, Equation (11) can be expressed as
$$x_{i,k-1} = S_{k-1}\,\xi_i + x_{k-1} \tag{12}$$
The propagated cubature points can be expressed as
$$x^*_{i,k|k-1} = \Phi_{k-1}\,x_{i,k-1} \tag{13}$$
Next, the one-step prediction can be completed according to Equations (14) and (15):
$$\hat{x}_{k|k-1} = \omega_i \sum_{i=1}^{2n} x^*_{i,k|k-1} \tag{14}$$
$$P_{k|k-1} = \omega_i \sum_{i=1}^{2n} x^*_{i,k|k-1}\,(x^*_{i,k|k-1})^T - \hat{x}_{k|k-1}\,\hat{x}_{k|k-1}^T + Q_{k-1} \tag{15}$$
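The prediction step above can be sketched in NumPy (a minimal illustration; function and variable names are ours):

```python
import numpy as np

def ckf_predict(x_prev, P_prev, xi, w, Phi, Q):
    """CKF time update of Equations (10)-(15): factor the previous
    covariance, draw cubature points, propagate them through the
    transition matrix Phi, and recombine into the predicted mean and
    covariance.
    """
    S = np.linalg.cholesky(P_prev)               # Eq. (10): P = S S^T
    pts = (S @ xi.T).T + x_prev                  # Eq. (12): S xi_i + x
    prop = (Phi @ pts.T).T                       # Eq. (13): propagate
    x_pred = w @ prop                            # Eq. (14): weighted mean
    P_pred = (w[:, None] * prop).T @ prop \
             - np.outer(x_pred, x_pred) + Q      # Eq. (15)
    return x_pred, P_pred

n = 3
xi = np.vstack([np.sqrt(n) * np.eye(n), -np.sqrt(n) * np.eye(n)])
w = np.full(2 * n, 1 / (2 * n))
# Stationary target: Phi = I and Q = 0, so prediction keeps the state.
x_pred, P_pred = ckf_predict(np.array([43.3, 84.2, 1000.0]),
                             np.diag([1e-4, 1e-4, 2.25e6]),
                             xi, w, np.eye(n), np.zeros((n, n)))
```

For the stationary-target model the prediction leaves both the state and the covariance unchanged, as expected from Equation (5).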
Step 3: QR decomposition.
In Equation (15), replacing $P_{k|k-1}$ with $S_{k|k-1}$ gives a new error variance matrix:
$$S_{k|k-1}\,S_{k|k-1}^T = \begin{bmatrix} \chi^*_{k|k-1} & S_{Q,k-1} \end{bmatrix} \begin{bmatrix} (\chi^*_{k|k-1})^T \\ S_{Q,k-1}^T \end{bmatrix} \tag{16}$$
$$Q_{k-1} = S_{Q,k-1}\,S_{Q,k-1}^T \tag{17}$$
where $\chi^*_{k|k-1} = \frac{1}{\sqrt{2n}}\left[x^*_{1,k|k-1} - \hat{x}_{k|k-1},\ \dots,\ x^*_{2n,k|k-1} - \hat{x}_{k|k-1}\right]$ [33] and $S_{Q,k-1}$ is the square-root matrix of $Q_{k-1}$. Defining $A_{2n \times n} = \begin{bmatrix} (\chi^*_{k|k-1})^T \\ S_{Q,k-1}^T \end{bmatrix}$, after QR decomposition, $A_{2n \times n}$ can be written as
$$\begin{cases} A_{2n \times n} = \hat{Q}_{2n \times n}\,\hat{R}_{n \times n} \\ I_{n \times n} = \hat{Q}_{2n \times n}^T\,\hat{Q}_{2n \times n} \end{cases} \tag{18}$$
where $\hat{R}_{n \times n}$ is a nonsingular upper triangular matrix; thus, Equation (16) can be rewritten as
$$S_{k|k-1}\,S_{k|k-1}^T = A_{2n \times n}^T A_{2n \times n} = (\hat{Q}_{2n \times n}\hat{R}_{n \times n})^T\,\hat{Q}_{2n \times n}\hat{R}_{n \times n} = \hat{R}_{n \times n}^T\,\hat{R}_{n \times n}, \tag{19}$$
so the one-step prediction of the square root can be expressed as $S_{k|k-1} = \hat{R}_{n \times n}^T$.
Since the QR decomposition can be derived from Gram–Schmidt orthogonalization, $S_{k|k-1}$ can be written compactly as Equation (20) [34]:
$$S_{k|k-1} = \mathrm{Tria}\left(\left[\chi^*_{k|k-1},\ S_{Q,k-1}\right]\right), \tag{20}$$
In Equation (20), T r i a ( · ) is the triangulation operation.
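The Tria(·) operation is a one-liner on top of a QR routine. A minimal sketch (the function name follows the paper's notation; the example matrix is ours):

```python
import numpy as np

def tria(M):
    """The Tria(.) operation of Equation (20): a lower-triangular square
    root S with S @ S.T == M @ M.T, obtained from the R factor of the QR
    decomposition of M.T (Equations (18)-(19)). M is n x m with m >= n.
    """
    _, R = np.linalg.qr(M.T, mode='reduced')
    return R.T

# S S^T reproduces M M^T while S itself is lower triangular.
M = np.array([[3.0, 0.0, 4.0],
              [1.0, 2.0, 2.0]])
S = tria(M)
```

Working with these triangular square-root factors instead of full covariances is what keeps the filter numerically stable when the covariance becomes nearly singular.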
Step 4: Measurement updating.
Using the QR decomposition to transmit the square-root factor of the covariance matrix, the 2n cubature points can be updated as
$$x_{i,k|k-1} = \hat{x}_{k|k-1} + S_{k|k-1}\,\xi_i, \tag{21}$$
The updated cubature points are transformed through the measurement function:
$$z_{i,k|k-1} = h(x_{i,k|k-1}), \tag{22}$$
$$\hat{z}_{k|k-1} = \omega_i \sum_{i=1}^{2n} z_{i,k|k-1}, \tag{23}$$
where $i = 1, 2, \dots, 2n$ and $z_{i,k|k-1}$ are the transformed points.
Then the square root of the innovation covariance can be obtained by QR decomposition:
$$S_{zz,k|k-1} = \mathrm{Tria}\left(\left[Z_{k|k-1},\ S_{R,k}\right]\right), \tag{24}$$
where $Z_{k|k-1} = \frac{1}{\sqrt{2n}}\left[z_{1,k|k-1} - \hat{z}_{k|k-1},\ z_{2,k|k-1} - \hat{z}_{k|k-1},\ \dots,\ z_{2n,k|k-1} - \hat{z}_{k|k-1}\right]$ and the measurement noise covariance matrix is factored as $R_k = S_{R,k}\,S_{R,k}^T$.
The cross-covariance matrix and the filter gain can be obtained according to Equations (25) and (26):
$$P_{xz,k|k-1} = \chi_{k|k-1}\,Z_{k|k-1}^T, \tag{25}$$
$$K_k = \left(P_{xz,k|k-1} / S_{zz,k|k-1}^T\right) / S_{zz,k|k-1}, \tag{26}$$
where $\chi_{k|k-1} = \frac{1}{\sqrt{2n}}\left[x_{1,k|k-1} - \hat{x}_{k|k-1},\ x_{2,k|k-1} - \hat{x}_{k|k-1},\ \dots,\ x_{2n,k|k-1} - \hat{x}_{k|k-1}\right]$.
In the final step, the state estimate and the square root of the error variance at time k are updated as
$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\left(z_k - \hat{z}_{k|k-1}\right), \tag{27}$$
$$S_{k|k} = \mathrm{Tria}\left(\left[\chi_{k|k-1} - K_k Z_{k|k-1},\ K_k S_{R,k}\right]\right), \tag{28}$$
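The measurement-update steps can be sketched together in square-root form (a minimal NumPy illustration with our own names; the toy linear "camera" at the end stands in for the nonlinear h of Equation (7)):

```python
import numpy as np

def ckf_update(x_pred, S_pred, xi, w, h, z, SR):
    """Square-root CKF measurement update of Equations (21)-(28): push
    the predicted cubature points through the measurement function h,
    form the innovation and cross covariances, and correct the state.
    S_pred and SR are square roots of the predicted and measurement
    noise covariances.
    """
    n = len(x_pred)
    pts = x_pred + (S_pred @ xi.T).T                 # Eq. (21)
    Z = np.array([h(p) for p in pts])                # Eq. (22)
    z_pred = w @ Z                                   # Eq. (23)
    Zc = (Z - z_pred) / np.sqrt(2 * n)               # centered meas. dev.
    Xc = (pts - x_pred) / np.sqrt(2 * n)             # centered state dev.
    _, R_ = np.linalg.qr(np.vstack([Zc, SR.T]), mode='reduced')
    Szz = R_.T                                       # Eq. (24): Tria
    Pxz = Xc.T @ Zc                                  # Eq. (25)
    # Eq. (26): K = (Pxz / Szz^T) / Szz via two triangular solves
    K = np.linalg.solve(Szz.T, np.linalg.solve(Szz, Pxz.T)).T
    x_new = x_pred + K @ (z - z_pred)                # Eq. (27)
    return x_new, K

# Toy check: a linear "camera" observing the first two state components
# with tiny noise should pull those components to the measurement.
n = 3
xi = np.vstack([np.sqrt(n) * np.eye(n), -np.sqrt(n) * np.eye(n)])
w = np.full(2 * n, 1 / (2 * n))
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
x_new, K = ckf_update(np.zeros(3), np.eye(3), xi, w,
                      lambda x: H @ x, np.array([1.0, 2.0]),
                      1e-3 * np.eye(2))
```

Because the measurement only constrains two of the three state components per frame, the height component is recovered gradually, across many observations from different viewpoints.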
A flowchart of the proposed method is shown in Figure 7. First, according to the information of GPS/INS and the LOS vector direction, the initial target geo-location x 0 can be calculated by the traditional method. Second, according to x 0 , compute the projection point h ( x k ) and reimage the target. The actual point z k can be obtained using image recognition. Third, by updating the state estimation, x k + 1 is obtained. Through the proposed iterative method, the geo-location of the target can be accurately estimated.

4. Experiments

The accuracy and robustness of the proposed method were verified by both simulation and flight experiment. First, the influencing factors on the proposed method were analyzed using the Monte-Carlo method. Moreover, the proposed method, the traditional method, DEM method, and the building target method were compared. Finally, the proposed method was applied in a real flight.

4.1. Simulation

The geo-location accuracy of the proposed method is affected by the flight height, the off-nadir looking angle, the target elevation, the UAV position, and the LOS vector direction. The Monte-Carlo method was used to analyze the target geo-location error of the proposed method, with the simulation parameters summarized in Table 1. The geo-location error is defined as $\frac{1}{N}\sum_{k=1}^{N}\varepsilon_k$ over N = 1000 simulation runs, where $\varepsilon_k$ is the geo-location error of each run. Five simulated experiments were performed.

4.1.1. Effect of Flight Heights and Off-Nadir Looking Angle on Geo-Location Accuracy

The geo-location accuracy for different flight heights and off-nadir looking angles are shown in Figure 8.
Judging from the simulation results, three conclusions about the influence of the flight height and the off-nadir looking angle on the geo-location accuracy are as follows:
  • The geo-location accuracy decreases with increasing flight height.
  • The influence of the off-nadir looking angle on the geo-location accuracy increases with flight height.
  • Even at a flight height of 14,500 m and an off-nadir looking angle of 75°, the target geo-location error is less than 20 m.

4.1.2. Effect of Target Elevation on Geo-Location Accuracy

When the initial value of the target elevation changes from 750 m to 2250 m, the target elevation error correspondingly changes from −801 m to 699 m; the simulation results are shown in Figure 9. It can be seen that the target elevation error has little influence on the proposed method.

4.1.3. Effect of UAV Position and LOS Vector Direction on Geo-Location Accuracy

When the UAV flies at a geodetic height of 10,000 m and the off-nadir looking angle is 75°, the influence of the UAV position and LOS vector direction are shown in Figure 10. It can be seen that the geo-location accuracy is mainly affected by the UAV position error and the LOS vector direction error: the convergence speed and the geo-location accuracy decrease as these errors increase. After 180 observations, the target geo-location error was less than 20 m.

4.1.4. Comprehensive Simulation

In the simulation, it was assumed that the target G is located at the position (43.300000° N, 84.200000° E, 1551.00 m). The UAV flew around the target at a geodetic height of 10,000 m and the off-nadir looking angle was 75°.
The elevation error was less than 1500 m in rough terrain area where the target elevation was unknown. Therefore, P 0 can be assumed as
$$P_0 = \left[\mathrm{diag}(0.015,\ 0.015,\ 1500)\right]^2, \tag{29}$$
The initial geo-location of the target was (43.303653° N, 84.195190° E, 1000.00 m), which was obtained by Equation (A2) and Equation (A3) in Appendix A. According to the ellipsoidal earth model, the geo-location error can be expressed as
$$\varepsilon_t = \sqrt{\left[\varepsilon_\lambda (R_N + h_T)\cos\varphi_T\right]^2 + \left[\varepsilon_\varphi (R_M + h_T)\right]^2 + \varepsilon_h^2}, \tag{30}$$
where $R_M = R_E (1 - e^2)/(1 - e^2 \sin^2\varphi_T)^{3/2}$ is the radius of curvature in the meridian, and $\varepsilon_\lambda$, $\varepsilon_\varphi$, and $\varepsilon_h$ are the errors of the longitude, latitude, and geodetic height, respectively.
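Equation (30) and the radii expressions can be sketched as follows (WGS-84 constants assumed for $R_E$ and $e^2$; the error values in the example are illustrative, not from the paper):

```python
import numpy as np

def geolocation_error(eps_lam, eps_phi, eps_h, phi_t, h_t,
                      Re=6378137.0, e2=0.00669437999014):
    """Total geo-location error of Equation (30): convert longitude and
    latitude errors (radians) to metres via the prime-vertical radius
    R_N and meridian radius R_M, then combine with the height error.
    """
    s2 = np.sin(phi_t) ** 2
    Rn = Re / np.sqrt(1 - e2 * s2)                 # prime vertical
    Rm = Re * (1 - e2) / (1 - e2 * s2) ** 1.5      # meridian
    east = eps_lam * (Rn + h_t) * np.cos(phi_t)
    north = eps_phi * (Rm + h_t)
    return np.sqrt(east ** 2 + north ** 2 + eps_h ** 2)

# Illustrative errors of 2e-5 degrees in each angle and 5 m in height
# at the simulated target latitude and elevation.
err = geolocation_error(np.radians(2e-5), np.radians(2e-5), 5.0,
                        np.radians(43.3), 1551.0)
```

With zero angular errors the function reduces to the height error alone, which is a quick sanity check on the metric.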
The geo-location error curves are shown in Figure 11, when N is 1000. The geo-location error was less than 100 m after 32 measurements and reached within 50 m after 53 measurements. The final geo-location error was 10.44 m at 180 measurements, and the result of the target geo-location was (43.299979° N, 84.199981° E, 1549.52 m).

4.1.5. Comparison of Simulation Experiment with the Traditional Method

The geo-location error results were compared with the traditional method at a typical flight height of 10,000 m. When the target elevation error was 50 m, the corresponding results with the same measurement variances were as listed in Table 2.
According to Table 2, the method proposed in this paper significantly improved the geo-location accuracy of the target for LRORS.

4.1.6. Comparison of the Simulation Experiment with the DEM Method

The geo-location error results were compared with DEM method [20]. The parameters used in the simulation process are presented in Table 3 (Table 1 in [20]).
According to [20], when the outer gimbal angle was from 10° to 80°, the corresponding results with the same measurement variances were as shown in Figure 12.
It can be seen from Figure 12 that when the outer gimbal angle is 80°, the geo-location accuracy of the target calculated by the DEM method is 180 m, while the accuracy obtained by our method is only 8 m. Compared with the DEM method, the accuracy of our method is thus improved by 22.5 times.

4.1.7. Comparison of the Simulation Experiment with the Building Target Geo-Location Method

The geo-location error results were compared with the building target geo-location methods. The parameters used in the simulation process are presented in Table 4 (Table 1 in [22]).
According to [22], when the building target had a height of 70 m and the outer gimbal angle was 60°, the distribution of the geo-location results in 10,000 simulation experiments were as shown in Figure 13 and Table 5.

4.2. Flight Experiment and Results

As shown in Figure 14, in order to validate the effectiveness of the proposed method, a flight experiment was carried out on the LRORS; shock absorbers were used to connect it to the UAV platform.
In Figure 15, W1 and W2 are the start and end points of the flight path. Targets G1 and G2 were measured by the LRORS while the UAV flew in a straight line; each target was observed 100 times over the entire flight path. Remote sensing images of the targets are shown in Figure 16: Figure 16a shows images of target point G1 obtained by the UAV at positions A1, B1, and C1, and Figure 16b shows images of target point G2 obtained at positions A2, B2, and C2.
In order to verify the effectiveness of the proposed method, targets G1 and G2 were also measured by the global navigation satellite system (GNSS) real-time kinematic (RTK) method, using survey-grade GNSS I70 receivers made by CHCNAV. The positional accuracy of these points is better than 0.1 m, so they can be regarded as reference values. The initial value of the target elevation was assumed to be 800 m. The geo-location accuracy of the proposed method and the traditional method are shown in Figure 17 and Table 6.
Comparing the two methods, it can be seen that the proposed method significantly improved the geo-location accuracy, reducing the geo-location error from 1459.3 m to 37.53 m, an improvement of 38.8 times. Our method can also measure the elevation of the targets: as shown in Table 6, the elevation errors of target points G1 and G2 were 2.65 m and 2.89 m, respectively.

5. Discussions

The traditional geo-location method heavily relies on the measurement precision of GPS/INS and target elevation accuracy, which restricts the target geo-location accuracy for LRORS. In order to improve the accuracy of target geo-location, the laser range finder method, DEM method, and image geo-registration method have been proposed in recent years, but they are inappropriate for long-range real-time target geo-location. The multiple-UAV method is difficult to implement in practical engineering. Focusing on the above mentioned problems, a set of work patterns and a novel geo-location method is proposed in this paper. There is an iterative process in the method, and the geo-location accuracy is improved greatly by repeatedly imaging the same stationary target point. The proposed method does not rely on the accuracy of GPS/INS and target elevation, is not limited by laser ranging distance, and does not need geographic reference data or the cooperative localization of multiple UAVs. The flight experimental data shows that our method can improve geo-location precision by 38.8 times compared with traditional methods.
The proposed method improved the geo-location accuracy through multiple observations on the same target. Theoretically, the method is not only applicable to long-distance targets, but also can improve the geo-location accuracy of targets at short distances. However, when observing the near-distance targets at different positions for multiple observations, the motion range of the LRORS gimbal will be significantly larger than observing a long-distance target, which requires a two-axis gimbal LRORS, to have a larger motion range.
High-precision, real-time, and long-range target geo-location for UAVs cannot be separated from the support of positioning, navigation, and timing (PNT) technology, which has developed rapidly in recent years. It is therefore worth discussing the influence of PNT on the geo-location accuracy of LRORS. On the one hand, advances in PNT will improve the geo-location accuracy of the traditional method; however, because that method remains restricted by target elevation errors, the improvement is limited. On the other hand, the proposed method realizes geo-location without the elevation of the target, and PNT technology will improve it in two ways: first, by reducing the number of observations of the target, and thus the motion range of the two-axis gimbal; second, by achieving faster convergence and better accuracy.

6. Conclusions

A novel work pattern and algorithm were proposed in this paper. The geo-location accuracy is greatly improved by repeatedly imaging the same stationary target point. The proposed method achieves high-precision geo-location without high-precision GPS/INS, multiple UAVs, or geographic reference data such as standard maps and DEM. Verification with a Monte Carlo simulation and flight experimental data shows that the proposed method improves the accuracy of target geo-location more effectively than existing methods: by 38.8 times and 22.5 times compared with the traditional method and the DEM method, respectively. The analysis also shows that the proposed method has strong real-time performance and broad application value in practical engineering.

Author Contributions

X.Z. wrote the main manuscript and conducted the simulations; H.Z. and Y.D. designed and assembled the LRORS; X.Z. and C.Q. conceived the idea and designed the research; X.Z., G.Y., H.Z. and Y.D. conducted the practical experiments; X.Z. and Z.L. analyzed the data of all experiments; C.L. and Z.L. supervised all experiments. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Science and Technology Major Project of China (No. 2016YFC0803000), and the Key Laboratory of Airborne Optical Imaging and Measurement, Chinese Academy of Sciences.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare that they have no competing interests.

Appendix A

The coordinate systems involved in this section are shown in Figure A1. O-XEYEZE is the ECEF coordinate system, which is defined in WGS-84 and has its origin at the earth’s geometric center. O1-NED is the NED coordinate system, whose origin O1 is the center of the UAV platform. O1-XPYPZP and O1-XSYSZS are the P coordinate system and S coordinate system, respectively.
Figure A1. Schematic diagram of the coordinate systems.
The geo-location of point G can be expressed as the longitude, latitude, and geodetic height (denoted λG, φG, and hG, respectively). The point G in the ECEF coordinate system can be expressed as Equation (A1) [16,35,36,37,38].
$$\begin{bmatrix} x_G^E \\ y_G^E \\ z_G^E \end{bmatrix} = \begin{bmatrix} (R_N + h_G)\cos\varphi_G\cos\lambda_G \\ (R_N + h_G)\cos\varphi_G\sin\lambda_G \\ \left(R_N(1-e^2) + h_G\right)\sin\varphi_G \end{bmatrix} \tag{A1}$$
where $e = \sqrt{R_E^2 - R_P^2}/R_E$ is the first eccentricity of the WGS-84 reference ellipsoid, $R_E = 6{,}378{,}137\ \mathrm{m}$ and $R_P = 6{,}356{,}752.3142\ \mathrm{m}$ are the semi-major and semi-minor axes of the ellipsoid, respectively, and $R_N = R_E/\sqrt{1 - e^2\sin^2\varphi}$ is the radius of curvature in the prime vertical.
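As a concrete sketch, Equation (A1) can be implemented directly. The Python below is illustrative (the function and constant names are ours, not from the paper) and assumes angles are given in degrees:

```python
import math

# WGS-84 constants as given in the text.
R_E = 6378137.0                    # semi-major axis [m]
R_P = 6356752.3142                 # semi-minor axis [m]
E2 = 1.0 - (R_P / R_E) ** 2        # first eccentricity squared, e^2 = (R_E^2 - R_P^2) / R_E^2

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert geodetic coordinates (deg, deg, m) to ECEF (m) via Equation (A1)."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # Radius of curvature in the prime vertical.
    r_n = R_E / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    x = (r_n + h) * math.cos(lat) * math.cos(lon)
    y = (r_n + h) * math.cos(lat) * math.sin(lon)
    z = (r_n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z
```

At the equator (lat = lon = h = 0) this returns (R_E, 0, 0), and at the pole the z-coordinate equals the semi-minor axis, which gives a quick sanity check.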
According to the ellipsoidal earth model, the latitude of the northern hemisphere is positive and the latitude of the southern hemisphere is negative. The latitude and geodetic height can be solved by the following iteration:
$$\left\{\begin{aligned} R_{N0} &= R_E \\ h_0 &= \left[(x^E)^2+(y^E)^2+(z^E)^2\right]^{1/2}-\left(R_E R_P\right)^{1/2} \\ \varphi_0 &= \tan^{-1}\!\left[\frac{z^E}{\sqrt{(x^E)^2+(y^E)^2}}\left(1-\frac{e^2 R_{N0}}{R_{N0}+h_0}\right)^{-1}\right] \end{aligned}\right. \qquad \left\{\begin{aligned} R_{Ni} &= R_E\left(1-e^2\sin^2\varphi_{i-1}\right)^{-1/2} \\ h_i &= \frac{\sqrt{(x^E)^2+(y^E)^2}}{\cos\varphi_{i-1}}-R_{Ni} \\ \varphi_i &= \tan^{-1}\!\left[\frac{z^E}{\sqrt{(x^E)^2+(y^E)^2}}\left(1-\frac{e^2 R_{Ni}}{R_{Ni}+h_i}\right)^{-1}\right] \end{aligned}\right. \tag{A2}$$
Generally, when more than four iterations are performed, the computing accuracies of the latitude and geodetic height are better than 0.00001° and 0.001 m, respectively [20].
According to the ellipsoidal earth model, the longitude of the eastern hemisphere is positive and the longitude of the western hemisphere is negative. The longitude can be solved using the following equation [20]:
$$\lambda = \begin{cases} \lambda_0, & x^E > 0 \\ \lambda_0 + \pi, & x^E < 0,\ \lambda_0 < 0 \\ \lambda_0 - \pi, & x^E < 0,\ \lambda_0 > 0 \end{cases} \tag{A3}$$
where $\lambda_0 = \tan^{-1}\left(y^E/x^E\right)$.
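The inverse conversion, i.e., the fixed-point iteration for latitude and height together with the longitude quadrant rule above, can be sketched in Python as follows (the function name and the fixed iteration count are illustrative assumptions, not the authors' implementation):

```python
import math

# WGS-84 constants as given in the text.
R_E = 6378137.0
R_P = 6356752.3142
E2 = 1.0 - (R_P / R_E) ** 2

def ecef_to_geodetic(x, y, z, iterations=10):
    """Recover (lat_deg, lon_deg, h) from ECEF coordinates by fixed-point iteration."""
    p = math.hypot(x, y)
    # Initial values (left brace of the iteration above).
    r_n = R_E
    h = math.sqrt(x * x + y * y + z * z) - math.sqrt(R_E * R_P)
    lat = math.atan((z / p) / (1.0 - E2 * r_n / (r_n + h)))
    # Fixed-point iteration (right brace); converges rapidly since the
    # contraction factor is on the order of e^2 per step.
    for _ in range(iterations):
        r_n = R_E / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
        h = p / math.cos(lat) - r_n
        lat = math.atan((z / p) / (1.0 - E2 * r_n / (r_n + h)))
    # Longitude with the hemisphere (quadrant) correction.
    lam0 = math.atan(y / x)
    if x > 0:
        lam = lam0
    elif lam0 < 0:
        lam = lam0 + math.pi
    else:
        lam = lam0 - math.pi
    return math.degrees(lat), math.degrees(lam), h
```

A round trip through Equation (A1) and back recovers the original geodetic coordinates to sub-millimeter height accuracy, consistent with the convergence claim above.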
The transformation matrix from the ECEF coordinate system to the NED coordinate system can be expressed as [16]
$$C_{ECEF}^{NED} = \begin{bmatrix} -S\varphi_P C\lambda_P & -S\varphi_P S\lambda_P & C\varphi_P & R_{NP}e^2 S\varphi_P C\varphi_P \\ -S\lambda_P & C\lambda_P & 0 & 0 \\ -C\varphi_P C\lambda_P & -C\varphi_P S\lambda_P & -S\varphi_P & R_{NP}+h_P-R_{NP}e^2 S^2\varphi_P \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{A4}$$
where “sin” and “cos” are abbreviated to “S” and “C”, respectively; RNP denotes the radius of curvature in the prime vertical at the UAV platform; and λP, φP, and hP are the longitude, latitude, and geodetic height of the UAV platform, respectively.
The matrix transformation from the NED coordinate system to the P coordinate system can be expressed as [17]
$$C_{NED}^{P} = \begin{bmatrix} C\theta C\psi & C\theta S\psi & -S\theta & 0 \\ S\kappa S\theta C\psi - C\kappa S\psi & S\kappa S\theta S\psi + C\kappa C\psi & S\kappa C\theta & 0 \\ C\kappa S\theta C\psi + S\kappa S\psi & C\kappa S\theta S\psi - S\kappa C\psi & C\kappa C\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{A5}$$
where the attitude angles $\psi$, $\theta$, and $\kappa$ are the yaw, pitch, and roll angles of the UAV platform, respectively.
Generally, an LRORS consists of an imaging system and a two-axis gimbal. The imaging system is mounted in the two-axis gimbal, which is attached to the UAV platform; the basic structure is shown in Figure A2.
Figure A2. The structure of the LRORS.
The S coordinate system has its origin at the principal point of the imaging system, and the ZS axis is the LOS of the imaging system. When the outer and inner gimbal angles are $\alpha$ and $\beta$, respectively, the transformation matrix from the P coordinate system to the S coordinate system can be expressed as [20]
$$C_{P}^{S} = \begin{bmatrix} C\beta & S\beta S\alpha & -S\beta C\alpha & 0 \\ 0 & C\alpha & S\alpha & 0 \\ S\beta & -C\beta S\alpha & C\beta C\alpha & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{A6}$$
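Chaining the three transformations maps a line-of-sight vector from the ECEF frame all the way to the camera (S) frame. The sketch below builds only the 3×3 rotation parts (translation columns omitted); the sign conventions are standard NED/attitude conventions and should be treated as assumptions to check against the paper's definitions:

```python
import math

def mat_mul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def c_ecef_to_ned(lat, lon):
    """Rotation part of the ECEF-to-NED transformation; angles in radians."""
    s, c = math.sin, math.cos
    return [[-s(lat) * c(lon), -s(lat) * s(lon),  c(lat)],
            [-s(lon),           c(lon),           0.0   ],
            [-c(lat) * c(lon), -c(lat) * s(lon), -s(lat)]]

def c_ned_to_p(yaw, pitch, roll):
    """Rotation part of the NED-to-platform transformation (yaw-pitch-roll)."""
    s, c = math.sin, math.cos
    psi, th, ka = yaw, pitch, roll
    return [[c(th) * c(psi),                          c(th) * s(psi),                          -s(th)       ],
            [s(ka) * s(th) * c(psi) - c(ka) * s(psi), s(ka) * s(th) * s(psi) + c(ka) * c(psi),  s(ka) * c(th)],
            [c(ka) * s(th) * c(psi) + s(ka) * s(psi), c(ka) * s(th) * s(psi) - s(ka) * c(psi),  c(ka) * c(th)]]

def c_p_to_s(outer, inner):
    """Rotation part of the platform-to-sensor transformation (gimbal angles)."""
    s, c = math.sin, math.cos
    a, b = outer, inner
    return [[c(b),  s(b) * s(a), -s(b) * c(a)],
            [0.0,   c(a),         s(a)       ],
            [s(b), -c(b) * s(a),  c(b) * c(a)]]

# Full chain: ECEF -> NED -> P -> S, for arbitrary illustrative angles.
chain = mat_mul(c_p_to_s(0.2, 0.1),
                mat_mul(c_ned_to_p(0.5, 0.05, 0.03),
                        c_ecef_to_ned(0.46, 1.85)))
```

Since each factor is a proper rotation, the composed matrix remains orthonormal, which provides a cheap self-check when wiring the chain together.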

References

1. Iyengar, M.; Lange, D. The Goodrich 3rd generation DB-110 system: Operational on tactical and unmanned aircraft. Proc. SPIE 2006, 6209, 620909.
2. Sohn, S.; Lee, B.; Kim, J.; Kee, C. Vision-based real-time target localization for single-antenna GPS-guided UAV. IEEE Trans. Aerosp. Electron. Syst. 2008, 44, 1391–1401.
3. Brownie, R.; Larroque, C. Night reconnaissance for F-16 multi-role reconnaissance pod. Proc. SPIE 2004, 5409, 1–7.
4. Chai, R.; Savvaris, A.; Tsourdos, A.; Chai, S.; Xia, Y. Optimal tracking guidance for aeroassisted spacecraft reconnaissance mission based on receding horizon control. IEEE Trans. Aerosp. Electron. Syst. 2018, 54, 1575–1588.
5. Li, Y.; Zhang, W.; Li, P.; Ning, Y.; Suo, C. A method for autonomous navigation and positioning of UAV based on electric field array detection. Sensors 2021, 21, 1146.
6. Brown, S.; Lambrigtsen, B.; Denning, R.; Gaier, T.; Kangaslahti, P.; Lim, B.; Tanabe, J.; Tanner, A. The high-altitude MMIC sounding radiometer for the Global Hawk unmanned aerial vehicle: Instrument description and performance. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3291–3301.
7. Popescu, D.; Stoican, F.; Stamatescu, G.; Ichim, L.; Dragana, C. Advanced UAV–WSN system for intelligent monitoring in precision agriculture. Sensors 2020, 20, 817.
8. Zheng, Q.; Huang, W.; Ye, H.; Dong, Y.; Shi, Y.; Chen, S. Using continuous wavelet analysis for monitoring wheat yellow rust in different infestation stages based on unmanned aerial vehicle hyperspectral. Appl. Opt. 2020, 59, 8003–8013.
9. Wang, Y.; Yang, Y.; Kuang, H.; Yuan, D.; Yu, C.; Chen, J.; Huan, N.; Hou, H. High performance both in low-speed tracking and large-angle swing scanning based on adaptive nonsingular fast terminal sliding mode control for a three-axis universal inertially stabilized platform. Sensors 2020, 20, 5785.
10. Yuan, G.; Zheng, L.; Ding, Y.; Zhang, H.; Zhang, X.; Liu, X.; Sun, J. A precise calibration method for line scan cameras. IEEE Trans. Instrum. Meas. 2021, 70, 5013709.
11. Held, K.; Robinson, B. TIER II plus airborne EO sensor LOS control and image geolocation. In Proceedings of the IEEE Aerospace Conference, Snowmass, CO, USA, 13 February 1997; pp. 377–405.
12. Sun, J.; Ding, Y.; Zhang, H.; Yuan, G.; Zheng, Y. Conceptual design and image motion compensation rate analysis of two-axis fast steering mirror for dynamic scan and stare imaging system. Sensors 2021, 21, 6441.
13. Yuan, G.; Zheng, L.; Sun, J.; Liu, X.; Wang, X.; Zhang, Z. Practical calibration method for aerial mapping camera based on multiple pinhole collimator. IEEE Access 2019, 8, 39725–39733.
14. Downs, J.; Prentice, R.; Dalzell, S.; Besachio, A.; Mansur, M. Control system development and flight test experience with the MQ-8B Fire Scout vertical take-off unmanned aerial vehicle (VTUAV). In Annual Forum Proceedings; American Helicopter Society, Inc.: Fairfax, VA, USA, 2007; pp. 566–592.
15. Barber, D.; Redding, J.; McLain, T.; Beard, R.; Taylor, C. Vision-based target geolocation using a fixed-wing miniature air vehicle. J. Intell. Robot. Syst. 2006, 47, 361–382.
16. Stich, E. Geo-pointing and threat location techniques for airborne border surveillance. In Proceedings of the IEEE International Conference on Technologies for Homeland Security, Waltham, MA, USA, 12–14 November 2013; pp. 136–140.
17. Du, Y.; Ding, Y.; Xu, Y.; Liu, Z.; Xiu, J. Geolocation algorithm for TDI-CCD aerial panoramic camera. Acta Opt. Sin. 2017, 37, 0328003.
18. Zhang, H.; Qiao, C.; Kuang, H. Target geo-location based on laser range finder for airborne electro-optical imaging systems. Opt. Precis. Eng. 2019, 27, 8–16.
19. Amann, M.; Bosch, T.; Lescure, M.; Myllyla, R.; Rioux, M. Laser ranging: A critical review of usual techniques for distance measurement. Opt. Eng. 2001, 40, 10–19.
20. Qiao, C.; Ding, Y.; Xu, Y.; Ding, Y.; Xiu, J.; Du, Y. Ground target geo-location using imaging aerial camera with large inclined angles. Opt. Precis. Eng. 2017, 25, 1714–1726.
21. Qiao, C.; Ding, Y.; Xu, Y.; Xiu, J. Ground target geolocation based on digital elevation model for airborne wide-area reconnaissance system. J. Appl. Remote Sens. 2018, 12, 016004.
22. Cai, Y.; Ding, Y.; Zhang, H.; Xiu, J.; Liu, Z. Geo-location algorithm for building target in oblique remote sensing images based on deep learning and height estimation. Remote Sens. 2020, 12, 2427.
23. Shiguemori, E.; Martins, M.; Monteiro, M. Landmarks recognition for autonomous aerial navigation by neural networks and Gabor transform. Proc. SPIE 2007, 6497, 64970R.
24. Kumar, R.; Samarasekera, S.; Hsu, S.; Hanna, K. Registration of highly-oblique and zoomed in aerial video to reference imagery. In Proceedings of the 15th International Conference on Pattern Recognition (ICPR), Barcelona, Spain, 3–7 September 2000; pp. 303–307.
25. Kumar, R.; Sawhney, H.; Asmuth, J.; Pope, A.; Hsu, S. Registration of video to geo-referenced imagery. In Proceedings of the 14th International Conference on Pattern Recognition (ICPR), Brisbane, QLD, Australia, 20 August 1998; pp. 1393–1400.
26. Wang, Z.; Wang, H. Target location of loitering munitions based on image matching. In Proceedings of the IEEE Conference on Industrial Electronics and Applications, Beijing, China, 21–23 June 2011; pp. 606–609.
27. Pack, D.; Toussaint, G. Cooperative control of UAVs for localization of intermittently emitting mobile targets. IEEE Trans. Syst. Man Cybern. 2009, 39, 959–970.
28. Lee, W.; Bang, H.; Leeghim, H. Cooperative localization between small UAVs using a combination of heterogeneous sensors. Aerosp. Sci. Technol. 2013, 27, 105–111.
29. Bai, G.; Liu, J.; Song, Y.; Zuo, Y. Two-UAV intersection localization system based on the airborne optoelectronic platform. Sensors 2017, 17, 98.
30. Frew, E. Sensitivity of cooperative target geolocalization to orbit coordination. J. Guid. Control Dyn. 2008, 31, 1028–1040.
31. Ienkaran, A.; Simon, H. Cubature Kalman filters. IEEE Trans. Automat. Contr. 2009, 54, 1254–1269.
32. Zhang, Z.; Li, Q.; Han, L.; Dong, X.; Liang, Y.; Ren, Z. Consensus based strong tracking adaptive cubature Kalman filtering for nonlinear system distributed estimation. IEEE Access 2019, 7, 98820–98831.
33. Kumar, P.; Da-Wei, G.; Ian, P. Square root cubature information filter. IEEE Sens. J. 2013, 13, 750–758.
34. Mu, J.; Wang, C. Levenberg-Marquardt method based iteration square root cubature Kalman filter and its applications. In Proceedings of the International Conference on Computer Network, Electronic and Automation, Xi’an, China, 23–25 September 2017; pp. 33–36.
35. Helgesen, H.H.; Leira, F.S.; Johansen, T.A.; Fossen, T.I. Detection and tracking of floating objects using a UAV with thermal camera. In Sensing and Control for Autonomous Vehicles: Applications to Land, Water and Air Vehicles; Springer: Cham, Switzerland, 2017; pp. 289–316.
36. Pan, H.; Tao, C.; Zou, Z. Precise georeferencing using the rigorous sensor model and rational function model for ZiYuan-3 strip scenes with minimum control. ISPRS J. Photogramm. Remote Sens. 2016, 119, 259–266.
37. Doneus, M.; Wieser, M.; Verhoeven, G.; Karel, W.; Fera, M.; Pfeifer, N. Automated archiving of archaeological aerial images. Remote Sens. 2016, 8, 209.
38. Jia, G.; Zhao, H.; Shang, H.; Lou, C.; Jiang, C. Pixel-size-varying method for simulation of remote sensing images. J. Appl. Remote Sens. 2014, 8, 083551.
Figure 1. The projection of the target point G on a frame CCD.
Figure 2. The error sources affecting the geo-location accuracy.
Figure 3. Geo-location error of the target by the traditional method: (a) Geo-location error of the target; (b) UAV position error in the geo-location; (c) LOS vector direction error in the geo-location; (d) Target elevation error in the geo-location.
Figure 4. Probability of geo-location error by the traditional method: (a) The off-nadir looking angle is 75°; (b) The off-nadir looking angle is 35°; (c) The off-nadir looking angle is 0°.
Figure 5. The proposed work pattern.
Figure 6. The solution process of the function $h(x_k)$.
Figure 7. Flowchart of the proposed method.
Figure 8. Influence of flight heights and off-nadir looking angle on geo-location: (a) Geo-location error curves with different flight heights when the off-nadir looking angle was changed from 15° to 75°; (b) Geo-location error curves with different off-nadir looking angles when the flight height was changed from 5500 m to 14,500 m.
Figure 9. Influence of the target elevation error on the geo-location accuracy.
Figure 10. Influence of measurement variances on geo-location: (a) Geo-location error curves with different UAV position errors; (b) Geo-location error curves with different LOS vector direction errors.
Figure 11. Geo-location error curves of the simulation.
Figure 12. Circular error probability (CEP) of geo-location with different outer gimbal angles: (a) the DEM method in [20], where $\theta_{roll}$ is the outer gimbal angle and $\sigma_1$ is the circular error probability; (b) the proposed method.
Figure 13. Schematic diagram of the distribution of geo-location results: (a) the building target geo-location method in [22]; (b) the proposed method.
Figure 14. Photograph of the LRORS.
Figure 15. The flight path in Google Earth.
Figure 16. The remote sensing images obtained by the LRORS: (a) the remote sensing images of the target point G1; (b) the remote sensing images of the target point G2.
Figure 17. Results of geo-location in the flight test: (a) Target point G1; (b) Target point G2.
Table 1. Measurement variances in geo-location.

Error Type | Error Value
UAV position: latitude (north) | 0.00018°
UAV position: longitude (east) | 0.00024°
UAV position: geodetic height (down) | 40 m
UAV attitude: yaw | 0.3°
UAV attitude: pitch | 0.1°
UAV attitude: roll | 0.1°
Gimbal angle: outer | 0.01°
Gimbal angle: inner | 0.01°
Table 2. Geo-location error results of the proposed method and the traditional method.

Method | Off-Nadir Looking Angle | UAV Position Error | LOS Vector Direction Error | Target Elevation Error | Total Error
The traditional method | 60° | 91 m | 24 m | 99 m | 151 m
The traditional method | 65° | 111 m | 30 m | 117 m | 182 m
The traditional method | 70° | 141 m | 46 m | 147 m | 237 m
The traditional method | 75° | 191 m | 69 m | 195 m | 316 m
The proposed method | 60° | 2.36 m | 7.35 m | 0.73 m | 8.83 m
The proposed method | 65° | 2.45 m | 7.93 m | 0.76 m | 9.27 m
The proposed method | 70° | 3.25 m | 8.26 m | 0.81 m | 9.98 m
The proposed method | 75° | 4.3554 m | 9.52 m | 0.87 m | 10.44 m
Table 3. Simulation experiment parameters in [20].

Error Type | Error Value
UAV position: latitude (north) | 0.0001°
UAV position: longitude (east) | 0.0001°
UAV position: geodetic height (down) | 5 m
UAV attitude: yaw | 0.02°
UAV attitude: pitch | 0.01°
UAV attitude: roll | 0.01°
Gimbal angle: outer | 0.006°
Gimbal angle: inner | 0.006°
UAV flight height | 18,000 m
Table 4. Simulation experiment parameters in [22].

Error Type | Error Value
UAV position: latitude (north) | 0.0001°
UAV position: longitude (east) | 0.0001°
UAV position: geodetic height (down) | 10 m
UAV attitude: yaw | 0.06°
UAV attitude: pitch | 0.02°
UAV attitude: roll | 0.02°
Gimbal angle: outer | 0.006°
Gimbal angle: inner | 0.006°
UAV flight height | 10,000 m
Table 5. Geo-location error results.

Method | Average Position Error of the Latitude | Average Position Error of the Longitude
The building target geo-location method [22] | 2.8738 × 10⁻⁶° | 2.3203 × 10⁻⁶°
The proposed method | 3.6398 × 10⁻⁹° | 4.3882 × 10⁻⁹°
Table 6. Geo-location error results in the flight test.

Method | Error Type | Target Point G1 | Target Point G2
Geographical position standard value by GNSS | latitude (north) | 26.221386° | 26.184767°
Geographical position standard value by GNSS | longitude (east) | 105.894206° | 105.857411°
Geographical position standard value by GNSS | geodetic height (down) | 1367.89 m | 1324.67 m
The proposed method | latitude (north) | 26.221087° | 26.184489°
The proposed method | longitude (east) | 105.894025° | 105.857283°
The proposed method | geodetic height (down) | 1365.24 m | 1321.78 m
The proposed method | total error | 37.53 m | 34.08 m
The traditional method | latitude (north) | 26.2135937° | 26.1955568°
The traditional method | longitude (east) | 105.8838978° | 105.8639344°
The traditional method | geodetic height (down) | 800 m | 800 m
The traditional method | total error | 1459.3 m | 1459.5 m

Share and Cite

Zhang, X.; Yuan, G.; Zhang, H.; Qiao, C.; Liu, Z.; Ding, Y.; Liu, C. Precise Target Geo-Location of Long-Range Oblique Reconnaissance System for UAVs. Sensors 2022, 22, 1903. https://doi.org/10.3390/s22051903
