Article

Methodology for the Correction of the Spatial Orientation Angles of the Unmanned Aerial Vehicle Using Real Time GNSS, a Shoreline Image and an Electronic Navigational Chart

1 Institute of Electrical Engineering and Automatics, Polish Naval Academy, 81-000 Gdynia, Poland
2 Institute of Navigation and Hydrography, Polish Naval Academy, 81-000 Gdynia, Poland
3 Institute of Mathematics, University of Zielona Góra, 65-417 Zielona Góra, Poland
4 Department of Geodesy, Faculty of Civil and Environmental Engineering, Gdansk University of Technology, 80-233 Gdańsk, Poland
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Energies 2021, 14(10), 2810; https://doi.org/10.3390/en14102810
Submission received: 14 April 2021 / Revised: 6 May 2021 / Accepted: 10 May 2021 / Published: 13 May 2021

Abstract

Undoubtedly, Low-Altitude Unmanned Aerial Vehicles (UAVs) are becoming more common in marine applications. Equipped with a Global Navigation Satellite System (GNSS) Real-Time Kinematic (RTK) receiver for highly accurate positioning, they perform camera and Light Detection and Ranging (LiDAR) measurements. Unfortunately, these measurements may still be subject to large errors, mainly due to the inaccuracy of measurement of the direction of the optical axis of the camera or LiDAR sensor. Usually, UAVs use a small and light Inertial Navigation System (INS) with an angle measurement error of up to 0.5° (RMSE). The methodology for spatial orientation angle correction presented in this article allows this error to be reduced even to the level of 0.01° (RMSE). It can be successfully used in coastal and port waters. To determine the corrections, only the Electronic Navigational Chart (ENC) and an image of the coastline are needed.

1. Introduction

Unmanned aerial vehicles (UAVs) are increasingly used in numerous marine applications, including intelligence, surveillance, and reconnaissance (conducted by the navy or coast guard); observation of the maritime environment (in particular the coastal zone, for example, in the field of environmental protection); supervision and support of maritime traffic (e.g., as part of the Vessel Traffic Service); search and rescue (conducted by SAR services); bathymetric and geodetic measurements (in order to obtain information on the shape of the seabed and for the preparation of cartographic studies, including sea port navigation charts); and inspection of port infrastructure (piers, breakwaters, etc.) and navigational infrastructure (navigation markings, antenna masts, etc.).
Micro-mini UAVs [1,2], equipped with visible, near-infrared, and multispectral cameras, hyperspectral cameras, thermal cameras and laser scanners, are usually used to perform the indicated tasks [2].
As they rise to a certain height (and most often remain hovering), they enable real-time camera monitoring of large areas of coastal sea waters, including roadsteads, anchorages, or traffic regulation systems. The navy, coast guard, SAR, and VTS can obtain online information about the positions of tracked vessels (also in motion), drifting objects and survivors, supervised navigational markings, or areas of contamination moving on the water surface. It is worth emphasizing that these positions are determined by a camera from considerable distances, reaching even several dozen kilometers. It is also impossible to overestimate the possibility of using a UAV for bathymetric measurements performed with Light Detection and Ranging (LiDAR) instead of an echosounder or sonar. A measuring system operating in this way allows the workload to be reduced. Moreover, it is much faster, more effective, and more practical to use in areas of increased vessel traffic, for example, in ports (where periodic checking of the depth is obligatory). However, it should also be emphasized that the positions of the bottom points are determined using LiDAR from a distance of up to several dozen meters.
An important factor affecting the accuracy of positioning using both the camera and LiDAR is the lack of a direct geometric relationship between the position of the antenna of the positioning system mounted on the UAV and the reflection point of the light wave. Under ideal propagation conditions (ignoring the influence of atmospheric refraction), the light ray can be treated as a straight line with a constant direction relative to the plumb line and a constant azimuth. Such an assumption makes it possible to link the terrain point (seabed) with the position of the positioning system antenna using constant values of the rotation parameters (most often these are the spatial orientation angles: yaw, pitch, roll), which are used to transform the coordinates between the coordinate system of the measuring sensor and the coordinate system connected with the Earth.
However, in real conditions, due to wind, changes in atmospheric pressure and temperature, and structural imperfections, the values of the UAV's spatial orientation angles change during flight. Therefore, it becomes necessary to update them regularly with each new measurement. Only on the basis of their current values can the changed direction of the light beam relative to the plumb line and the azimuth be determined, and then the location of the terrain point (seabed) at which the light wave is reflected. Currently, most micro-mini UAVs use information about spatial orientation angles obtained from an inertial navigation system (INS) based on data from an inertial measurement unit (IMU) of the microelectromechanical systems (MEMS) type, supported by a global navigation satellite system (GNSS) receiver, which allows a flexible approach to seabed surveys using autonomous mobile vehicles [3].
An IMU is typically composed of a 3-axis accelerometer and a 3-axis gyroscope. Usually, they are supplemented by information from a 3-axis magnetic sensor and a thermal sensor as well. An IMU combines linear accelerations from the accelerometer and rotations from the gyroscope to deliver navigation parameters and position update information. The accuracy of these parameters is influenced by several errors, which are a function of time. Gyroscope errors include bias stability, angle random walk, and calibration errors. Accelerometer errors include constant bias, velocity random walk, scale factor, and vibration rectification error (VRE). Depending on orientation performance, IMUs belong to different classes; beginning with the most accurate, these are the following grades: military, marine, navigation, tactical, industrial, automotive, and consumer. The first three are precise, very expensive (USD 100k–1M), and not available on the open market. In UAVs, tactical or industrial grade IMUs are used as a compromise between price and performance. Typical spatial orientation errors for popular IMUs used in drones are presented in Table 1.
However, the errors presented in Table 1 are only theoretical. In reality, they are higher due to additional errors connected with assembly, such as misalignment of the IMU coordinate system with the axes of the drone; misalignment between the IMU and camera coordinate systems is also not unknown.
The solution to this problem may be a methodology for correcting the spatial orientation angles determined by the INS, based on a shoreline image generated from the electronic navigational chart (ENC) as an equivalent of the shoreline image (photograph) recorded with the micro-mini UAV camera. ENCs are widely available through the Worldwide ENC Database [4] and cover most of the world's coastal and port waters [5].
Nevertheless, considering the assumed purpose of applying the methodology, it should be assumed that the calculations are to be performed with the smallest possible delay (and preferably in real time) directly on the onboard UAV computer, based on a single photo. Consequently, this excludes the possibility of using the Structure from Motion (SfM) method commonly used in photogrammetry. On the one hand, it is based on a sequence of overlapping photos taken from different positions; on the other hand, it is characterized by data processing in post-processing due to its very high demand for computing power.
In mathematical terms, the proposed methodology uses the collinearity of the projection center (whose position is determined by the GNSS RTK receiver), the terrain point (taken from the ENC), and its projection in the photo, as described by the collinearity equation. By expanding this equation into a Taylor series limited to the linear part, correction equations are obtained in which the vector of unknowns contains the angles yaw, pitch, and roll. Their values are determined by the least squares method in the iterative process of matching the coastline from the ENC to the shoreline in the photograph.
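For reference, the classical photogrammetric collinearity condition that underlies this reasoning can be written as follows; the notation (r_ij for the elements of the rotation matrix built from yaw, pitch, and roll, (X_0, Y_0, Z_0) for the projection centre, and (x′, y′) for the image coordinates) follows the standard photogrammetric convention rather than the article's own symbols, so the block should be read as an illustrative sketch:

x′ = −f · (r_11(X − X_0) + r_12(Y − Y_0) + r_13(Z − Z_0)) / (r_31(X − X_0) + r_32(Y − Y_0) + r_33(Z − Z_0)),
y′ = −f · (r_21(X − X_0) + r_22(Y − Y_0) + r_23(Z − Z_0)) / (r_31(X − X_0) + r_32(Y − Y_0) + r_33(Z − Z_0)).

Linearizing these equations around the approximate angles and keeping only the linear terms gives, for a set of matched points, correction equations of the form v = A · [Δψ, Δθ, Δϕ]^T − l, where A is the Jacobian of the collinearity equations with respect to the angles and l is the vector of observed-minus-computed image coordinates; the least squares estimate is [Δψ, Δθ, Δϕ]^T = (A^T A)^(−1) A^T l.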
Considering the proposed concepts of the matching algorithm (based only on changes in orientation angles, taking into account the mean error of the INS angle measurement and the angular resolution of the camera measurement) and the methods of generating the equivalent shoreline image based on ENC (in accordance with the optical parameters of the UAV camera, taking into account the geoid and the added edge points), the methodology can be considered to be unique. The few studies related to the use of ENCs have focused solely on the development of positioning techniques with methods of comparison with respect to the coastline [6,7,8] or methods of geodetic adjustment with respect to navigation marks [9].
On the other hand, studies concerning optical measurements performed in sea areas have focused more on methods of determining spatial orientation angles with respect to the horizon line [10,11,12,13,14]. In the case of land-based optical angular measurements based on a single image, most studies focus on the use of specific landmarks that are encoded [15], in the shape of concentric circles [16], luminous [17], or on distinctive terrain points [18].
Considering the proposed methodology from the point of view of its conceptual assumptions, the most important of these was formulated as the thesis that an equivalent image of the shoreline can be generated only on the basis of exact values of the spatial orientation angles corresponding to the photograph taken. The approximate values of the spatial orientation angles, the exact coordinates of the position of the photograph, the internal parameters of the camera, and the exact terrain coordinates of the points forming the coastline must be known. Due to the fact that the ENC-encoded coordinates of the terrain points forming the coastline are determined by geodetic methods (with millimeter accuracy), the use of the methodology depends primarily on the positioning accuracy of the micro-mini UAV. Until recently, precisely for this reason, the proposed methodology would have been useless; however, this is now different, as an increasing number of micro-mini UAVs use GNSS RTK for positioning (with at least centimeter accuracy) [19,20,21,22,23,24,25]. This fact played a crucial role in the decision to conduct research on this methodology and, consequently, to present the results in this article.
The remaining part of this article is divided into three sections: The first section presents the methodology of correcting the spatial orientation angles, initially describing the process of preparing a suitable photograph by removing distortions and detecting its edges. The main part of this section is the iterative process of correcting the spatial orientation angles, which consists of generating the shoreline’s image with different spatial orientations and determining the degree of matching of this image to the detected edges in the photograph. Finally, the reader’s attention is focused on the methods of transforming the terrain coordinates to the camera reference system, perspective projection of the edges forming the coastline, the method of fitting the image of the coastline generated based on the ENC to the edges on the photo and the algorithm that automates the correction process.
The second section describes the research on the developed methodology, and is divided into four consecutive stages of research, that is, taking aerial measurements, determining reference angles, determining corrections of the angles, and conducting a precision analysis of the corrected angles in relation to the reference angles in conjunction with the evaluation of the iterative process of determining the corrections of those angles. The third and final section presents generalized conclusions regarding the specifics of the methodology’s operation and application, derived in the light of the results of the conducted research.

2. Method

The proposed methodology for the correction of spatial orientation angles is based on an iterative process of matching an equivalent, computer-generated image of the coastline (based on the ENC) to its real image recorded in the form of a photograph taken with a UAV camera. At the outset, to carry out this process, it is necessary to pre-process the photograph. Geometric distortions, caused by the imperfection of the camera's optical system, have to be removed, and the edges, among which there will be fragments of the coastline, must be detected.
The primary process of correcting the angles of spatial orientation consists of generating (based on the ENC and the known position in which the photograph was taken) an image of the shoreline with different spatial orientations (with added corrections) and then determining the degree of matching of this image (according to a given criterion) to the detected edges in the appropriately processed photograph. A single increment or decrement of the correction value for each spatial orientation angle corresponds to the angular value of the current measurement resolution, which is refined step by step until the nominal resolution of the optical system with which the photograph was taken is reached (this depends primarily on the parameters of the complementary metal-oxide semiconductor (CMOS) matrix and the focal length). At the same time, the maximum value of the correction, treated as the absolute value of the sum of all increments, cannot exceed the value of the mean error of the actual measurement of each angle made with the on-board device on the UAV (most often, this is a small INS or IMU).
As a result of the process carried out in this way, the corrected values of spatial orientation angles are finally obtained, corresponding to the best-match coastline image generated based on the ENC for the nominal angular measurement resolution of the optical system, which was used to take a photograph from the UAV.

2.1. Geometric Distortion Removal and Edge Detection

Currently, a number of methods are known for obtaining the calibration parameters of the optical system intended to improve the reproduced image. Most often, these parameters characterize three main distortions of the optical system: radial, decentering, and affine. In situations where radial distortion dominates, the methods described in [26,27,28,29,30,31,32] are proposed. However, in the case of most cameras mounted on UAVs (cheap, with small dimensions, exposed to continuous shocks and vibrations), each of the indicated distortions may occur to a similar degree; therefore, the best solution is to use the methods described in publications [33,34,35].
On the other hand, when it comes to detecting the edges in the photograph, the most common methods for this purpose can be divided into two groups, based either on the first-order derivative (gradient) [36,37,38,39] or on the second-order derivative (using the Laplacian of the Gaussian function) [40,41]. Among them, the method proposed by John Canny [41] was found to be the most suitable for detecting the edges that are subsequently to be fitted to the coastline generated based on the ENC. This method was chosen mainly due to its three main performance criteria, that is:
  • minimizing the number of erroneous detections, where the detection error is both the detection of false edges (false-positive detection) and the omission of the actual edges in the image (false-negative detection),
  • ensuring the exact location of the edge—a point classified as an edge point should be as close as possible to the middle point of the real edge,
  • generating a single response for each real edge in the image—this is equivalent to one pixel edge detection.
Thus, it is based on the simple assumption that this method should guarantee good detection of edges with a thickness of one pixel, while maintaining their original position in the picture (however, it should be emphasized here that the choice of the edge detection method was not the main subject of research). Figure 1 shows the pier in the original photo:
(a) taken with a DJI FC6310R camera on the UAV (size: 5472 × 3648 pixels, colorspace: RGB) [42],
(b) after removing geometric distortions (camera parameters: f_x = 3670.0 pixels, f_y = 3663.45 pixels, c_x = 2.89 pixels, c_y = 0.88 pixels; distortion coefficients: k_1 = 0.262391, k_2 = 0.111511, k_3 = 0.0396721, p_1 = 0.000859802, p_2 = 0.000259255) [33],
(c) after edge detection (changing colorspace: RGB to grayscale; Gaussian blur operation: σ_GBO = 1; Canny algorithm: hysteresis thresholds (150, 225), Sobel operator with two 3 × 3 kernels) [36,41].
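As an illustration of this pre-processing step, a minimal OpenCV sketch in C++ is given below. The camera matrix and distortion coefficients correspond to the values listed above for the DJI FC6310R; the file names, the interpretation of c_x and c_y as offsets from the image centre, and the choice of cv::undistort are assumptions made only for the example.

#include <opencv2/opencv.hpp>

int main() {
    // Load the source photograph (file name assumed for the example).
    cv::Mat photo = cv::imread("DJI_photo.jpg");

    // Camera matrix built from the parameters quoted above; c_x and c_y are
    // treated here as offsets from the image centre (an assumption).
    cv::Mat K = (cv::Mat_<double>(3, 3) <<
                 3670.0,     0.0, 5472.0 / 2.0 + 2.89,
                    0.0, 3663.45, 3648.0 / 2.0 + 0.88,
                    0.0,     0.0, 1.0);
    // OpenCV coefficient order: k1, k2, p1, p2, k3.
    cv::Mat dist = (cv::Mat_<double>(1, 5) <<
                    0.262391, 0.111511, 0.000859802, 0.000259255, 0.0396721);

    // 1. Removal of geometric distortions.
    cv::Mat undistorted;
    cv::undistort(photo, undistorted, K, dist);

    // 2. Edge detection: grayscale -> Gaussian blur (sigma = 1) -> Canny (150, 225).
    cv::Mat gray, blurred, edges;
    cv::cvtColor(undistorted, gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, blurred, cv::Size(0, 0), 1.0);
    cv::Canny(blurred, edges, 150, 225, 3);   // 3 x 3 Sobel aperture

    cv::imwrite("edges.png", edges);
    return 0;
}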

2.2. Generating the Coastline Image Based on ENC

Each line or area object is encoded in the ENC as a set of nodes using the "chain-node" topology [43]. The location of a node is most often described by ellipsoidal coordinates in the World Geodetic System 1984 (WGS 84) [44], although it should be borne in mind that the "S-57" standard defines 228 such reference systems, each of which can be used [43]. Node pairs form edges that can be utilized, among other things, for the geometric description of the coastline. The "S-57" standard defines, in the "Object Catalog", several hundred classes of objects, attributes, and attribute domains [45]. The object classes and their attributes provided with the standard allow the description of most of the real-world objects that need to be placed on marine navigation charts. For the coding of the land area, the spatial object of the GEO type with the acronym LNDARE (code 71) is used, defined as the solid portion of the Earth's surface, as opposed to sea or water (IHO Dictionary, S-32, 5th Edition, 2635) [45,46]. Figure 2 shows an image of an electronic navigational chart (display "all other information" [47,48]) with the border of the land area marked.
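For readers who wish to extract these nodes themselves, the sketch below shows one possible way of reading the LNDARE geometries from an S-57 cell using the GDAL/OGR library in C++. The cell name, the restriction to polygon geometries, and the variable names are illustrative assumptions; the article itself does not prescribe any particular ENC-reading library.

#include <gdal_priv.h>
#include <ogrsf_frmts.h>
#include <cstdio>

int main() {
    GDALAllRegister();

    // Open an ENC cell (file name assumed for the example).
    GDALDataset* ds = static_cast<GDALDataset*>(
        GDALOpenEx("PL5GDYNA.000", GDAL_OF_VECTOR, nullptr, nullptr, nullptr));
    if (ds == nullptr) return 1;

    // The S-57 driver exposes one layer per object class acronym.
    OGRLayer* landArea = ds->GetLayerByName("LNDARE");
    if (landArea == nullptr) { GDALClose(ds); return 1; }

    landArea->ResetReading();
    for (OGRFeature* f = landArea->GetNextFeature(); f != nullptr;
         f = landArea->GetNextFeature()) {
        OGRGeometry* geom = f->GetGeometryRef();
        if (geom != nullptr && wkbFlatten(geom->getGeometryType()) == wkbPolygon) {
            // Boundary nodes of the land area (WGS 84 longitude/latitude).
            OGRLinearRing* ring = geom->toPolygon()->getExteriorRing();
            for (int i = 0; i < ring->getNumPoints(); ++i)
                std::printf("node %d: lat=%.8f lon=%.8f\n", i, ring->getY(i), ring->getX(i));
        }
        OGRFeature::DestroyFeature(f);
    }
    GDALClose(ds);
    return 0;
}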
Based on this, an equivalent image of the coastline can be generated. For this purpose, the coordinates of the border nodes of the land area should first be transformed to the camera reference system, and secondly, a perspective projection onto the image plane of the edges created between the transformed nodes should be performed.

2.2.1. Transforming the Coordinates of Nodes to the Camera Reference System

The transformation of the coordinates of each node from WGS 84 to the camera reference system can be described mathematically based on the three-dimensional Euclidean point space E^3 over the field of real numbers R, the associated Euclidean vector space E_3, a point O ∈ E^3, and an orthonormal basis (e_1, e_2, e_3) of E_3. The Ortho-Cartesian frame of reference (global benchmark) of the space E^3 can then be defined as follows:

T = (O, e_1, e_2, e_3), (1)

where the point O determines its origin (or base point), and the versors e_1, e_2, e_3 its base. The location of each point (node) P relative to the frame of reference T can be described as a vector OP:

P = OP = x·e_1 + y·e_2 + z·e_3, (2)

where the set of numbers (x, y, z) specifies the Ortho-Cartesian coordinates of the point P relative to the frame of reference T.
Using the above definitions, three Ortho-Cartesian reference frames of the space E^3 can be established: T = (O, e_1, e_2, e_3), T_H = (O_H, e_1^H, e_2^H, e_3^H), and T_C = (O_C, e_1^C, e_2^C, e_3^C).
It is assumed that T corresponds to the Earth-Centered Earth-Fixed (ECEF) coordinate system [49,50,51], while T_H corresponds to the East-North-Up (ENU) coordinate system (Figure 3).
The frame of reference T_H will be obtained as a result of a transformation of T. It consists of a translation of the point O to the point O_H (which will also be the focal point O_C of the camera located on the UAV) and a rotation C_ENU^ECEF of the base (e_1^H, e_2^H, e_3^H) relative to the base (e_1, e_2, e_3), performed according to the dependencies:
e_1^H = [x_1^H, y_1^H, z_1^H] = −sin λ · e_1 + cos λ · e_2, (3)
e_2^H = [x_2^H, y_2^H, z_2^H] = −sin φ · cos λ · e_1 − sin φ · sin λ · e_2 + cos φ · e_3, (4)
e_3^H = [x_3^H, y_3^H, z_3^H] = cos φ · cos λ · e_1 + cos φ · sin λ · e_2 + sin φ · e_3, (5)
where:
φ — the geocentric latitude of the point O_H,
λ — the longitude of the point O_H.
The frame of reference T_C will be rigidly related to the projection system (i.e., the projection center and the CMOS matrix) of the camera on the UAV in such a way that the direction of the versor e_3^C is consistent with the direction of the camera's optical axis. The other two versors, e_1^C and e_2^C, will lie in a plane parallel to the plane of the camera's CMOS matrix passing through the focal point (projection centre) O_H = O_C. The versor e_2^C, after mapping onto the CMOS matrix, will lie in the middle of its height, and the versor e_1^C halfway across its width (Figure 4). The rotation C_CAM^ENU of the base (e_1^C, e_2^C, e_3^C) of the reference system T_C relative to the base (e_1^H, e_2^H, e_3^H) of the reference system T_H is performed according to the dependencies (Figure 4):
e_1^C = [x_1^C, y_1^C, z_1^C] = (cos ψ · cos ϕ + sin ψ · sin θ · sin ϕ) · e_1^H + (−sin ψ · cos ϕ + cos ψ · sin θ · sin ϕ) · e_2^H − cos θ · sin ϕ · e_3^H, (6)
e_2^C = [x_2^C, y_2^C, z_2^C] = sin ψ · cos θ · e_1^H + cos ψ · cos θ · e_2^H + sin θ · e_3^H, (7)
e_3^C = [x_3^C, y_3^C, z_3^C] = (−cos ψ · sin ϕ + sin ψ · sin θ · cos ϕ) · e_1^H + (sin ψ · sin ϕ + cos ψ · sin θ · cos ϕ) · e_2^H − cos θ · cos ϕ · e_3^H, (8)
where ψ (yaw), θ (pitch), and ϕ (roll) are represented by the so-called Tait–Bryan angles [52,53].
Assuming that the ellipsoidal coordinates (φ_c, λ_c, h_c) of the point O_C defined in relation to T (determined by the GNSS RTK receiver on the UAV) are known, together with the angles (ψ, θ, ϕ) (corresponding to the yaw, pitch, and roll angles obtained from the IMU on the UAV), the bases (e_1^C, e_2^C, e_3^C) and (e_1^H, e_2^H, e_3^H) can be determined.
To do this, the ellipsoidal coordinates (φ_c, λ_c, h_c) of the point O_C should first be converted to the Ortho-Cartesian coordinates (x_c, y_c, z_c) according to the dependencies:

x_c = (a / sqrt(1 − e² · sin²φ_c) + h_c) · cos φ_c · cos λ_c, (9)
y_c = (a / sqrt(1 − e² · sin²φ_c) + h_c) · cos φ_c · sin λ_c, (10)
z_c = (a · (1 − e²) / sqrt(1 − e² · sin²φ_c) + h_c) · sin φ_c, (11)

where a and b denote the lengths of the major and minor semi-axes of the reference ellipsoid, and e² = (a² − b²)/a² is the square of the first eccentricity.
Based on them, the geocentric latitude is determined according to the relationship:

φ = atan( z_c / sqrt(x_c² + y_c²) ). (12)
Knowing the spherical coordinates (φ, λ) describing the position of the point O_C = O_H, the base of T_H can be determined from the base of T in accordance with (3)–(5); after rotation by the angles (ψ, θ, ϕ) in accordance with (6)–(8), it forms the base of T_C.
Taking the bases so established, (e_1^C, e_2^C, e_3^C) and (e_1^H, e_2^H, e_3^H), it is possible to easily transform the Ortho-Cartesian coordinates of each border node N_ENC = (x_ENC, y_ENC, z_ENC) of the land area (obtained according to (9)–(11) from the source 2D ellipsoidal coordinates (φ_ENC, λ_ENC) encoded in the ENC and the ellipsoidal height h_ENC = geoid height) from the frame of reference T to the frame of reference T_C.
This transformation is a combination of the translation of the node coordinates N_ENC from T to T_C and the successive coordinate transformations C_ENU^ECEF from the base (e_1, e_2, e_3) to the base (e_1^H, e_2^H, e_3^H), and then C_CAM^ENU from the base (e_1^H, e_2^H, e_3^H) to the base (e_1^C, e_2^C, e_3^C). The relationships for the coordinate transformation carried out in this way, from the Ortho-Cartesian coordinates of the node N_ENC = (x_ENC, y_ENC, z_ENC) in T to the target Ortho-Cartesian coordinates of the node N_ENC^C = (x_ENC^C, y_ENC^C, z_ENC^C) in T_C, can be represented in matrix notation as follows:

N_ENC^C = (N_ENC − O_C) · C_ENU^ECEF · C_CAM^ENU, (13)

[x_ENC^C, y_ENC^C, z_ENC^C] = [x_ENC − x_c, y_ENC − y_c, z_ENC − z_c] · [x_1^H, y_1^H, z_1^H; x_2^H, y_2^H, z_2^H; x_3^H, y_3^H, z_3^H] · [x_1^C, y_1^C, z_1^C; x_2^C, y_2^C, z_2^C; x_3^C, y_3^C, z_3^C]. (14)
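To make the chain of operations concrete, a short C++ sketch is given below. It implements Equations (9)–(11), (3)–(5), and the transformation (14) for a single node; the camera basis (e_1^C, e_2^C, e_3^C) is assumed to be supplied already computed from (6)–(8), and the function names, the use of std::array, and the WGS 84 constants are illustrative choices of the example rather than part of the article's method.

#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<Vec3, 3>;   // rows hold basis vectors

// WGS 84 ellipsoid parameters.
constexpr double A  = 6378137.0;          // semi-major axis [m]
constexpr double E2 = 6.69437999014e-3;   // square of the first eccentricity

// Ellipsoidal (geodetic) coordinates -> Ortho-Cartesian ECEF, Equations (9)-(11).
Vec3 geodeticToEcef(double lat, double lon, double h) {
    const double n = A / std::sqrt(1.0 - E2 * std::sin(lat) * std::sin(lat));
    return { (n + h) * std::cos(lat) * std::cos(lon),
             (n + h) * std::cos(lat) * std::sin(lon),
             (n * (1.0 - E2) + h) * std::sin(lat) };
}

// ENU basis at the camera position expressed in ECEF, Equations (3)-(5);
// phi is the geocentric latitude of O_C (Equation (12)), lambda its longitude.
Mat3 enuBasis(double phi, double lambda) {
    Mat3 m;
    m[0] = { -std::sin(lambda),                      std::cos(lambda),                     0.0 };
    m[1] = { -std::sin(phi) * std::cos(lambda), -std::sin(phi) * std::sin(lambda), std::cos(phi) };
    m[2] = {  std::cos(phi) * std::cos(lambda),  std::cos(phi) * std::sin(lambda), std::sin(phi) };
    return m;
}

// Coordinates of v in the orthonormal basis whose vectors are the rows of m.
Vec3 toBasis(const Mat3& m, const Vec3& v) {
    Vec3 r{};
    for (int i = 0; i < 3; ++i)
        r[i] = m[i][0] * v[0] + m[i][1] * v[1] + m[i][2] * v[2];
    return r;
}

// Transform an ENC land-area node (geodetic WGS 84 coordinates) into the camera
// frame T_C following Equation (14): translate by O_C, rotate ECEF->ENU, ENU->camera.
// camBasis holds (e_1^C, e_2^C, e_3^C) expressed in the ENU frame, Equations (6)-(8).
Vec3 nodeToCameraFrame(double latNode, double lonNode, double hNode,
                       const Vec3& ocEcef, const Mat3& enu, const Mat3& camBasis) {
    const Vec3 n = geodeticToEcef(latNode, lonNode, hNode);
    const Vec3 d{ n[0] - ocEcef[0], n[1] - ocEcef[1], n[2] - ocEcef[2] };
    return toBasis(camBasis, toBasis(enu, d));
}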

2.2.2. Edge Perspective Projection

Due to the fact that the coastline image should be based on the edges between the boundary nodes of the land area, it becomes necessary to designate the points representing these edges. For this purpose, Bresenham's algorithm can be successfully applied, assuming a specific distance between consecutive points, for example, equal to the ground sample distance (GSD) in the photograph [54]. Each edge point N_EDG^C = (x_EDG^C, y_EDG^C, z_EDG^C) designated in this way can then be subjected to the projection process, assuming that the distance of the focal point from the projection plane and the projection parameters (i.e., its size and resolution) correspond to the focal length and the CMOS matrix parameters of the UAV camera. The coordinates N_EDG^P = (x_P, y_P) obtained as a result of the projection of each such point (forming a single point of the coastline on the equivalent image) can be calculated using the following relationships:
x_P = (w_p · f · x_EDG^C) / (w_m · z_EDG^C), (15)
y_P = (h_p · f · y_EDG^C) / (h_m · z_EDG^C), (16)
where:
w_p — the number of pixels along the horizontal line of the matrix (along the axis of versor e_1^C),
h_p — the number of pixels along the vertical line of the matrix (along the axis of versor e_2^C),
w_m — the matrix width,
h_m — the matrix height,
f — the focal length.
Figure 5 shows the coastline's image obtained as a result of the perspective projection of the edge of the land area from the ENC No. PL5GDYNA.000 corresponding to the DJI FC6310R camera, wherein it was assumed that O_C = (54°32′1.116″ N, 18°32′48.528″ E, 131.44 m), yaw (ψ) = 128.0°, pitch (θ) = 0.1°, roll (ϕ) = 0°, w_p = 5472 pixels, h_p = 3648 pixels, w_m = 12.8333 mm, h_m = 8.55554 mm, and f = 8.5797 mm; these values were read out from the eXtensible Metadata Platform (XMP) tags [55] of the .jpg file of the photograph presented in Figure 1 and from the database of camera model parameters in Mapper Pix4D [56].
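A direct transcription of Equations (15) and (16) into C++ might look as follows; the structure names and the sign convention used to decide which points lie in front of the projection centre are assumptions of the example (the article does not state them explicitly).

#include <cmath>

// CMOS and lens parameters of the projection (values as used for Figure 5).
struct Camera {
    double wp = 5472.0;    // matrix width  [pixels]
    double hp = 3648.0;    // matrix height [pixels]
    double wm = 12.8333;   // matrix width  [mm]
    double hm = 8.55554;   // matrix height [mm]
    double f  = 8.5797;    // focal length  [mm]
};

struct PixelPoint { double x, y; bool valid; };

// Perspective projection of an edge point given in the camera frame T_C,
// Equations (15) and (16); points not in front of the projection centre are flagged.
PixelPoint projectEdgePoint(const Camera& c, double xC, double yC, double zC) {
    if (zC <= 0.0) return { 0.0, 0.0, false };   // assumed convention: z^C > 0 in front of the camera
    return { c.wp * c.f * xC / (c.wm * zC),
             c.hp * c.f * yC / (c.hm * zC),
             true };
}

Consecutive projected points along each ENC edge can then be densified (e.g., with Bresenham's algorithm, as described above) before matching.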

2.3. Match the Coastline Image to the Edge in the Photo

For determining the spatial orientation angle corrections (ψ, θ, ϕ), an iterative method of matching the boundary of the land area obtained on the basis of the ENC (e.g., Figure 5) to the edges detected in the photo (e.g., Figure 1c) was used, with the evaluation of the sum of squares (20) of the distance "deviations" between identical points or, where no identical point exists, the radius of the close-neighborhood area. At each iteration step i, each point of the set N = {N_EDG^P(1), N_EDG^P(2), ..., N_EDG^P(n_N)}, representing the boundary of the land area from the ENC, is assigned an "identical" point from the set I = {I_1, I_2, ..., I_(n_I)}, representing the edge points detected in the photograph. The association of points from set I with points from set N is based on the nearest neighbor (NN) algorithm [57]. The close neighborhood of a point from set N is specified by a circle with radius l_i (changed in each step of the main iteration i), determined according to the dependencies:
l_i = 0.75 · σ_i / r_nom   for i = 0, 1, ..., k and σ_i ≥ r_nom, (17)
σ_(i+1) = σ_i / 2, (18)
r_nom = atan( sqrt(w_m² + h_m²) / (f · sqrt(w_p² + h_p²)) ), (19)
where:
σ_0 = σ_max — the highest mean error value of the measurement of the yaw, pitch, and roll angles with the navigation devices (e.g., INS) on the UAV,
r_nom — the nominal angular measurement resolution of the optical system (of the UAV camera),
k — the number of iterations, dependent on the values of σ_i and r_nom.
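As a worked example (using the camera parameters quoted in Section 2.2.2; the figures below follow the reconstructed form of Equation (19) given above and should be treated as indicative):

r_nom = atan( sqrt(12.8333² + 8.55554²) / (8.5797 · sqrt(5472² + 3648²)) ) ≈ 0.00027 rad ≈ 0.016°.

With σ_0 = σ_max = 3° and halving at every step, σ_i = 3°/2^i remains greater than or equal to r_nom for i = 0, ..., 7, so the matching runs for k = 8 iterations, which is consistent with the eight iterative steps shown for the test photographs in Section 3.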
A point from set N has an identical point if there is a point from set I in its close neighborhood. When the neighborhood contains more than one point from set I, the closest of them becomes the counterpart (identical point). If, however, candidate identical points are at the same distance, one of them is selected randomly (Figure 6).
Then, all possible combinations of corrections added to the spatial orientation angles (ψ, θ, ϕ) are checked using the brute force method [58]; their values are changed in steps of σ_i/2 within the range from −σ_i to σ_i. In order to find the best three corrections among them, the combination is selected for which the sum S of the squared distances ("deviations") d_1, d_2, ..., d_(n_M), calculated according to the formula:

S = Σ_(j=1)^(n_M) d_j², (20)

where:

d_j = l_i, if d_min ≥ l_i,
d_j = d_min, if d_min < l_i (within the close-neighborhood area), (21)

d_min = min_(m=1,2,...,n_I) ‖N_EDG^P(j) − I_m‖, (22)

is the smallest.
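A compact C++ sketch of this matching criterion is given below. The exhaustive nearest-neighbour search (rather than, for example, a k-d tree), the container types, and the function name are assumptions of the example; only the deviation rule of Equations (20)–(22) is taken from the text above.

#include <cmath>
#include <limits>
#include <vector>

struct Pt { double x, y; };   // image coordinates [pixels]

// Sum of squared "deviations" between the ENC-generated coastline points (set N)
// and the detected edge points (set I), Equations (20)-(22).  A coastline point with
// no edge point closer than the neighbourhood radius l_i contributes l_i squared.
double matchScore(const std::vector<Pt>& encPoints,    // set N
                  const std::vector<Pt>& edgePoints,   // set I
                  double li) {                         // neighbourhood radius l_i
    double s = 0.0;
    for (const Pt& n : encPoints) {
        double dMin = std::numeric_limits<double>::infinity();
        for (const Pt& e : edgePoints) {               // nearest-neighbour search
            const double d = std::hypot(n.x - e.x, n.y - e.y);
            if (d < dMin) dMin = d;
        }
        const double dj = (dMin < li) ? dMin : li;     // Equation (21)
        s += dj * dj;                                  // Equation (20)
    }
    return s;
}

The combination of corrections (Δψ, Δθ, Δϕ) that yields the smallest value of this sum is then retained as the best one for the current iteration.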
The best corrections are then added to the angles (ψ, θ, ϕ) which, after their values have been changed, are corrected again with increasing precision in subsequent iterations. It should be noted that in each subsequent iterative step, corrections are checked whose increments, and the interval over which they are searched, are both reduced by half.
The process of correcting the spatial orientation angles (iteration) is considered complete when the iteratively reduced measurement error σ_i becomes smaller than the nominal measurement resolution of the optical system r_nom. The algorithm (in the form of pseudocode) of the entire process of correction of the spatial orientation angles, implemented in accordance with the proposed methodology, is presented below (see Algorithm 1).
Algorithm 1 Spatial orientation angles correction
Input: ENC; .jpg photograph; σ_max; spatial orientation parameters: φ_c, λ_c, h_c, ψ, θ, ϕ; camera parameters: f_x, f_y, c_x, c_y, w_m, h_m, w_p, h_p, f; distortion coefficients: k_1, k_2, k_3, p_1, p_2
Output: Spatial orientation angle corrections: Δψ_cor, Δθ_cor, Δϕ_cor
Initialization: Removal of geometric distortions in the photograph as per [33]. Edge detection: changing the color space from RGB to grayscale, Gaussian blur operation, Canny algorithm.
Result: a black-and-white (one-bit) image with the resolution of the source image containing the detected edges and a set of points I = {I_1, I_2, ..., I_(n_I)}
Set C_ENU^ECEF for (φ_c, λ_c, h_c); r_nom = atan( sqrt(w_m² + h_m²) / (f · sqrt(w_p² + h_p²)) ); ψ_Old = ψ; θ_Old = θ; ϕ_Old = ϕ
1: while σ_max ≥ r_nom do
2:   l = 0.75 · σ_max / r_nom
3:   for Δψ = −σ_max to σ_max
4:     for Δθ = −σ_max to σ_max
5:       for Δϕ = −σ_max to σ_max
6:         Set C_CAM^ENU for (ψ + Δψ, θ + Δθ, ϕ + Δϕ)
7:         Determination of the ellipsoidal height h_ENC = geoid height of each land node from the ENC with 2D ellipsoidal coordinates (φ_ENC, λ_ENC)
8:         Conversion of the 3D ellipsoidal coordinates (φ_ENC, λ_ENC, h_ENC) of each node to Ortho-Cartesian N_ENC = (x_ENC, y_ENC, z_ENC)
9:         Transformation of the coordinates of each node: N_ENC^C = (N_ENC − O_C) · C_ENU^ECEF · C_CAM^ENU
10:        Determination of the coordinates of each edge point N_EDG^C = (x_EDG^C, y_EDG^C, z_EDG^C), using Bresenham's algorithm, between successive nodes N_ENC^C
11:        Projection of each point N_EDG^C and creation of the set of points N = {N_EDG^P(1), N_EDG^P(2), ..., N_EDG^P(n_N)}
12:        Finding the sum of squared distances S = Σ_(j=1)^(n_M) d_j² and, when it is the smallest, saving the correction values: Δψ_Best = Δψ, Δθ_Best = Δθ, Δϕ_Best = Δϕ
13:        Δϕ += σ_max / 2
14:      end for
15:      Δθ += σ_max / 2
16:    end for
17:    Δψ += σ_max / 2
18:  end for
19:  σ_max /= 2
20:  Spatial orientation angles correction:
     ψ += Δψ_Best; Δψ_cor = ψ_Old − ψ
     θ += Δθ_Best; Δθ_cor = θ_Old − θ
     ϕ += Δϕ_Best; Δϕ_cor = ϕ_Old − ϕ
21: end while

3. Research

In the actual research phase, the developed methodology for the correction of spatial orientation angles was to be accurately assessed under real conditions. It was decided that the assessment would be based on a comparative analysis carried out against very accurate reference measurements made with the GNSS RTK method and a total station (tacheometer), recognizing that this type of comparison would fully justify the purposefulness of the methodology and indicate the right directions for further research phases.
The research was divided into four successive stages:
  • Performing aerial measurements, consisting of taking a series of photographs of the coastline during a UAV flight.
  • Determining the reference angles of spatial orientation, based on the known coordinates of the image position and two characteristic photopoints.
  • Determining the corrections, in accordance with the methodology presented in Section 2, based on the photographs of the coastline and the ENC.
  • Conducting an accuracy analysis, based on a comparison of the source and corrected values of the spatial orientation angles in relation to their reference values determined by the GNSS RTK method and the total station.

3.1. Taking Measurements

The DJI Phantom 4 RTK UAV was used to carry out the measurements [59]. Before proceeding, the calibration parameters of the optical system of the FC6310R camera mounted on the UAV were established [59]. It was decided to use the DJI Assistant 2 For Phantom software application for this purpose [60,61]. The calibration parameters established in this way could later be encoded as metadata in each photograph using the standard on-board UAV software, enabling the determination of spatial orientation angle corrections according to Algorithm 1 only based on the photo and the ENC.
The port of Gdynia was selected as the test area (i.e., a selected fragment of the coastal zone) for the measurements (Figure 7).
In this area, using the DJI GS RTK App [60], a photogrammetric flight with the DJI Phantom 4 RTK UAV was planned and carried out, taking a series of photographs from a height of 100 m (above mean sea level (MSL)) with fixed values of the gimbal rotation angles of the camera: θ = 0° (in the case of the DJI Phantom 4 RTK UAV, this angle corresponds to a gimbal pitch value of −90°) and ϕ = 0°, so that the optical axis of the camera points vertically downwards. The position coordinates of the DJI Phantom 4 RTK UAV (including those at which the photos were taken) were determined with the on-board GNSS receiver using the RTK method, with corrections sent from the server of the HxGN SmartNet reference station network (the mean 3D error of their measurement did not exceed 3.7 cm) [61].
Among the photographs taken showing the coastline, the most useful (representative) ones for the accuracy assessment were selected. When choosing them, the differentiation of the irregular shape of the shoreline and the quality of its detection were considered, depending on the "disturbances" caused by, among others, moored ships or strong reflection of light from the water surface. However, the final decision on their selection was made only after confirming that two field points (photopoints, i.e., ground control points whose positions were determined by the GNSS receiver) could be identified in each selected photo: Z_OP, corresponding to the projection centre on the image (the projected point O_C), and Z_e2P, corresponding to the projection of the unit vector lying on the axis e_2^C.

3.2. Determination of Reference Angles of Spatial Orientation

For each of the selected photos, the 3D ellipsoidal coordinates (φ_OP, λ_OP, h_OP) of the photopoint position Z_OP were determined with the Leica GS18 T RTK receiver with a CS20 controller, using (the same as the UAV) corrections sent from the HxGN SmartNet reference station network server (the mean 3D error of their measurement did not exceed 1.7 cm), and the reference angle value ψ_r was measured with the Leica Nova TS60 total station with a GRS101 mini prism [62] from the photopoint Z_OP to the photopoint Z_e2P (the mean error of the angle measurement did not exceed 0.5″).
Based on the known coordinates (φ_OP, λ_OP, h_OP) of the photopoint position Z_OP and the known coordinates (φ_c, λ_c, h_c) of the position O_C (saved as XMP metadata in the .jpg file, tag names: XMP-drone-dji:GPSLatitude, XMP-drone-dji:GPSLongtitude, XMP-drone-dji:AbsoluteAltitude [55]), the reference values of the spatial orientation angles θ_r and ϕ_r were calculated (Table 2). For this purpose, the known WGS 84 ellipsoidal coordinates were first converted into Ortho-Cartesian coordinates according to (9)–(11), describing the positions of the points with respect to T, and then transformed to the frame of reference T_C according to (14), assuming that the optical axis of the camera is directed vertically downwards (ψ = ψ_r, θ = 0°, ϕ = 0°), ultimately obtaining the coordinates (x_OP^C, y_OP^C, z_OP^C). Secondly, using simple trigonometric relationships, the reference values of the spatial orientation angles were calculated on their basis, according to the dependencies:
θ_r = atan( y_OP^C / z_OP^C ), (23)
ϕ_r = atan( x_OP^C / z_OP^C ). (24)
The adopted idea of determining the spatial orientation angles θ_r and ϕ_r based on the coordinates of the photograph's position O_C and the photopoint Z_OP, together with the angle ψ_r, is shown in Figure 8.
The values of the spatial orientation angles determined in this way, (ψ_r, θ_r, ϕ_r), were used as a reference for the comparative accuracy analysis presented later in the article.
The main reasons why the Leica GS18 T RTK position coordinates were determined with a different accuracy than those of the DJI Phantom 4 RTK UAV are that the receivers come from different manufacturers, that the measurements were carried out in motion in one case and statically in the other, and that the corrections from the HxGN SmartNet reference station network server were sent via the T-Mobile cellular network with priority for data transmission.

3.3. Determination of Corrections According to Algorithm 1

Determination of the corrections was carried out in accordance with the authors' Algorithm 1, implemented in RAD Studio C++ Builder 10.2 [63]. First, this program, using the Open Source Computer Vision Library (OpenCV 3.4.7) [64], performed the image processing task with two functions: geometric distortion removal and edge detection.
The following input (initialization) arguments and data processing options were taken:
  • for the geometric distortion removal function:
    • camera parameters: f_x = 3670.0 pixels, f_y = 3663.45 pixels, c_x = 2.89 pixels, c_y = 0.88 pixels, w_p = 5472 pixels, h_p = 3648 pixels (these values were read out from the XMP metadata of the photos, tag name: Dewarp Data, and from the Exif metadata, tag names: ExifImageWidth, ExifImageHeight);
    • distortion coefficients: k_1 = 0.262391, k_2 = 0.111511, k_3 = 0.0396721, p_1 = 0.000859802, p_2 = 0.000259255 (these values were read out from the XMP metadata of the photographs, tag name: Dewarp Data).
  • for the edge detection function:
    • changing colorspaces: RGB to grayscale (set arbitrarily);
    • Gaussian blur operation: σ_GBO = 1 (set arbitrarily);
    • Canny algorithm: hysteresis thresholds (150, 225), Sobel operator with two 3 × 3 kernels (set arbitrarily).
Secondly, the program performed the correction process iteratively, calling a function that fits the coastline image generated from the ENC to the edges detected in the image. The following input (initialization) arguments and data processing options were taken:
  • to find the rotation matrix C_ENU^ECEF: the WGS 84 ellipsoidal coordinates of the photo positions (φ_c, λ_c, h_c) (these values were read out from the XMP metadata of the photos, tag names: XMP-drone-dji:GPSLatitude, XMP-drone-dji:GPSLongtitude, XMP-drone-dji:AbsoluteAltitude);
  • to determine r_nom: camera parameters w_p = 5472 pixels, h_p = 3648 pixels, w_m = w_p · p_s = 12.8333 mm, h_m = h_p · p_s = 8.55554 mm, f = p_s · (f_x + f_y)/2 = 8.5797 mm (pixel size p_s = 2.34527 µm read out from the database of camera model parameters in Mapper Pix4D);
  • to find the rotation matrix C_CAM^ENU: ψ_Old = ψ; θ_Old = 90° + θ; ϕ_Old = ϕ (these values were read out from the XMP metadata of the photographs, tag names: XMP-drone-dji:GimbalYawDegree, XMP-drone-dji:GimbalPitchDegree, XMP-drone-dji:GimbalRollDegree);
  • to determine the ellipsoidal height h_ENC = geoid height = 29.27 m of the land nodes with WGS 84 2D ellipsoidal coordinates (φ_ENC, λ_ENC) read out from the ENC: the EGM2008 geoid model [65,66,67];
  • for determining the radius length l_i of the close-neighborhood circles: σ_max = 3° (set arbitrarily).
Figure 9 shows a photo after removing geometric distortions and after edge detection with the boundary of the land area plotted based on the ENC in the 1st, 2nd, and 8th iteration.

3.4. Accuracy Analysis

The matching of the land borderline generated based on the ENC was correct in 97% of the photographs taken (321 in total) showing the edges of the pier. The analysis was carried out based on the results of measurements and calculations made for the five selected images (the most representative ones, in which it was possible to identify the photopoints). The following were used: the source spatial orientation angles (ψ, θ, ϕ), the reference spatial orientation angles (ψ_r, θ_r, ϕ_r), and the corrected spatial orientation angles (ψ + Δψ_cor, θ + Δθ_cor, ϕ + Δϕ_cor). The obtained values of the above-mentioned types of angles, compiled separately for each photograph in order to facilitate their comparison, are presented in Table 2.
The obtained results allow us to state that the proposed methodology for determining the corrections of the spatial orientation angles proved to be very reliable in terms of accuracy in the case of photographs 1–4 (80%). This is demonstrated by the values of the statistical parameters characterizing the accuracy of determining the angles (ψ + Δψ_cor, θ + Δθ_cor, ϕ + Δϕ_cor) with respect to the corresponding reference angles (ψ_r, θ_r, ϕ_r) presented in Table 3.
The results from Table 3 show that the minimum and maximum values of the absolute differences in the measurement results oscillated at the level of hundredths of a degree, which can be considered an excellent result. On the other hand, the mean value of the differences in the measurement results was concentrated around zero, only three hundredths of a degree away from it. Nevertheless, the standard deviation of the differences indicates a relatively large scatter of the measurement results around the mean value. The cause of this phenomenon may be that the accuracy of determining the corrections of the spatial orientation angles approached (or even exceeded) the accuracy of the reference measurements (the mean error of the ψ_r measurements did not exceed 0.08°, while that of θ_r and ϕ_r did not exceed 0.03°).
Unfortunately, the values of the corrected angles of spatial orientation for photograph 5 were affected by gross errors (1.57° for ψ + Δψ_cor, −1.62° for θ + Δθ_cor, and −0.68° for ϕ + Δϕ_cor). This was mainly due to the fact that the border of the land area was in the shape of a straight line (it had no distinctive bends). In this case, it is possible to "slide" the straight line of the land area generated from the ENC along the straight edge of the coastline in the image and obtain the optimal level of alignment for many combinations of spatial orientation angles (there is simply more than one solution to this problem). In order to carry out a deeper analysis of the extreme cases (for which the best and worst results were obtained), it was decided to follow the course of their iterative process of determining the corrected angles of spatial orientation. Figure 10 and Figure 11 show processed photographs No. 1 and No. 5 with the land area boundary generated based on the ENC in the 1st, 2nd, and 8th iterative step.
On the other hand, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16 and Figure 17 present the values of d_min, the number of analyzed points (from sets N and I), and the values of the determined corrections of the spatial orientation angles (Δψ_cor, Δθ_cor, Δϕ_cor) as a function of successive iterations.
The analysis of the photographs presented in Figure 10 and Figure 11 shows that the course of the process of adjusting the boundary of the land area generated based on the ENC in subsequent iterations is similar. A significant match can be seen as early as the third iteration. The disturbances visible in the photographs, caused by the reflection of sunlight from the water surface (particularly visible in photograph No. 1) and by the hulls and mooring lines of the ships standing at the pier, do not affect the accuracy of selecting the appropriate edges.
In turn, the graphs presented in Figure 12 and Figure 13 show that the mean value as well as the standard deviation of the distance d_min between identical points from sets N and I decrease in successive iterations, oscillating abruptly within the circle of radius l_i (the close-neighborhood area). Their stabilization (reduction of the amplitude of fluctuations) is much faster in the case of photograph No. 1 than of photograph No. 5. This allows us to hypothesize that the achievement of stabilization only in later iterations may indicate that the determined corrections of the spatial orientation angles (Δψ_cor, Δθ_cor, Δϕ_cor) will be burdened with a gross error.
The graphs presented in Figure 14 and Figure 15 show that, after the seventh iteration, the number of identical points (found) from set I clearly decreased. In the case of photograph No. 1, identical points were found for about 20% of all searched points from set N. After the seventh iteration, the spatial orientation angle corrections (Δψ_cor, Δθ_cor, Δϕ_cor) also stabilize. This can be clearly seen in the graphs presented in both Figure 16 and Figure 17. At that point, the iteratively reduced value of the measurement error σ_i is very close to the nominal measurement resolution value of the optical system r_nom.
The presented analysis of the iterative process of determining the corrections of the spatial orientation angles has confirmed the correctness of the methodology proposed in the article. The only main drawback identified was related to the situation where the border of the land area has a shape similar to a straight line; the use of the methodology should then be avoided. Therefore, the decision to use the methodology should be preceded by an analysis of the shape of the coastline generated based on the ENC in terms of the presence of bends. The decision-making process could then take place, for example, based on the internal energy of the active contour ("snake") model [68,69] or the sum of the distances from the line section determined following the Ramer-Douglas-Peucker algorithm [70,71]. Additionally, a requirement for a minimum number of identical points selected from the set may also be considered (e.g., calculated by the method of interval estimation for the assumed confidence level), allowing the determination of the corrections with an error within specified limits.

4. Conclusions

Taking into account the results of the analysis carried out in the stages of preparation, development, and testing of the methodology for the correction of spatial orientation angles presented in the article, the following generalized conclusions can be drawn:
  • The conducted research proves that the presented methodology can guarantee a very high accuracy of measuring the spatial orientation angles of UAVs operating in coastal and port waters. The size of the angle measurement error depends primarily on the measuring accuracy of the camera and on the UAV flight altitude. It was confirmed that, for a camera with a CMOS matrix resolution of 5472 × 3648 pixels and a UAV flight altitude of about 100 m, the accuracy of the spatial orientation angles after correction can reach the level of hundredths of a degree.
  • The conducted research also confirms the correctness of the methodology. On the one hand, in terms of matching the land boundary line generated based on the ENC, it was 97% accurate (concerning a statistical sample composed of all tested photographs) and on the other hand, in terms of the possibility of determining corrections it was at the level of 80% (concerning a statistical sample composed of the most representative photographs).
  • The conducted research has additionally shown that the strategy of searching for the best corrections (exploring the solution space) used within the methodology can be adapted (tuned) to the computing power of the computer or the measuring accuracy of the camera in order to increase or decrease the calculation time as well as the accuracy of determining the corrections; this only requires the determination of appropriate values of two parameters: the maximum mean error of the yaw, pitch, and roll angle measurements and the distance between the edge points generated based on the ENC.
  • The equivalent image of the land area line generated from the spatial object with the acronym LNDARE encoded in the ENC can be successfully used for matching with the image of the shoreline edge from the photo. However, it should be remembered that the use of the methodology should result from an analysis of the shape of the shoreline in terms of the presence of "kinks". The decision-making process could then take place, for example, based on the internal energy of the active contour ("snake") model or the sum of deviations from the line segment determined according to the Ramer-Douglas-Peucker algorithm.
  • The proposed methods for both removing geometric distortions and detecting edges at the water-land interface proved successful in all photos, thus allowing for the efficient matching of the points of the land area line generated based on the ENC. However, it should be borne in mind that it is also possible to distort the shape of the land area line generated from the ENC in accordance with the calibration parameters of the camera, and thus to resign from the initial image processing step; this, however, would certainly require much greater computing power. This problem will be one of the threads of future research carried out by the authors in order to optimize the methodology in terms of minimizing the calculation time.
  • It should also be emphasized that the purpose of future research related to the implementation of the methodology for use (increasing its level of technological readiness) will also be to adapt the algorithm for the correction of spatial orientation angles to perform parallel calculations with it, for example, using hybrid multi-core systems with Graphics Processing Unit (GPU).
These generalized conclusions give rise to the hypothesis that the proposed methodology has great potential for supporting angular measurements made with UAVs in coastal and port waters. With the ongoing development of vision systems and computing power, it will undoubtedly be possible to use it more widely in the future, perhaps eventually using only a single photo and the ENC.

Author Contributions

Conceptualization, K.N., P.S.; Data curation, K.N., P.P., M.N. and A.N.; Investigation, K.N.; Methodology, K.N., A.N.; Software, K.N.; Supervision, K.N.; Validation, P.S., P.P. and M.N.; Formal analysis, K.N. and P.S.; Visualization, K.N., P.P. and P.S.; Writing—original draft, K.N., P.S.; Writing—review and editing, P.S. and K.N. All authors have read and agreed to the published version of the manuscript.

Funding

This article has been funded by statutory research on marine optical navigation systems conducted by the Polish Naval Academy and by the research and development project "Universal Bulk Measurement System (UBMS)", co-financed from European Union funds in 2020 within the framework of the "Smart Growth" program.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Van Blyenburgh, P. RPAS Remotely Piloted Aircraft Systems: The Global Perspective 2014/2015; Blyenburgh & Co.: Paris, France, 2014. [Google Scholar]
  2. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote. Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef] [Green Version]
  3. Marini, S.; Gjeci, N.; Govindaraj, S.; But, A.; Sportich, B.; Ottaviani, E.; Márquez, F.P.G.; Bernalte Sanchez, P.J.; Pedersen, J.; Clausen, C.V.; et al. ENDURUNS: An Integrated and Flexible Approach for Seabed Survey Through Autonomous Mobile Vehicles. J. Mar. Sci. Eng. 2020, 8, 633. [Google Scholar] [CrossRef]
  4. Wainwright, P.; McPherson, D. The Implementation of the WEND Concept: The PRIMAR Experience. Available online: https://journals.lib.unb.ca/index.php/ihr/article/view/20497 (accessed on 12 May 2021).
  5. Maling, D. Surveying and map projections. In Coordinate Systems and Map Projections; Elsevier: Amsterdam, The Netherlands, 1992; pp. 310–335. [Google Scholar] [CrossRef]
  6. Veth, M.J. Navigation Using Images, A Survey of Techniques. Navigation 2011, 58, 127–139. [Google Scholar] [CrossRef]
  7. Naus, K.; Waz, M. Precision in Determining Ship Position using the Method of Comparing an Omnidirectional Map to a Visual Shoreline Image. J. Navig. 2015, 69, 391–413. [Google Scholar] [CrossRef] [Green Version]
  8. Naus, K. Electronic Navigational Chart as an Equivalent to Image Produced by Hypercatadioptric Camera System. Pol. Marit. Res. 2015, 22, 3–9. [Google Scholar] [CrossRef] [Green Version]
  9. Naus, K.; Wąż, M.; Szymak, P.; Gucma, L.; Gucma, M. Assessment of ship position estimation accuracy based on radar navigation mark echoes identified in an Electronic Navigational Chart. Measurement 2021, 169, 108630. [Google Scholar] [CrossRef]
  10. Naus, K.; Marchel, Ł.; Szymak, P.; Nowak, A. Assessment of the Accuracy of Determining the Angular Position of the Unmanned Bathymetric Surveying Vehicle Based on the Sea Horizon Image. Sensors 2019, 19, 4644. [Google Scholar] [CrossRef] [Green Version]
  11. Naus, K.; Wąż, M. Accuracy of measuring small heeling angles of a ship using an inclinometer. Sci. J. Marit. Univ. Szczec. 2015. [Google Scholar] [CrossRef]
  12. Praczyk, T.; Bodnar, T. Conception of using vision system and methods of image data processing for determination of ship orientation angles. Logistyka 2014, 1, 5278. [Google Scholar]
  13. Gershikov, E.; Libe, T.; Kosolapov, S. Horizon line detection in marine images: Which method to choose? Int. J. Adv. Intell. Syst. 2013, 6, 79–88. [Google Scholar]
  14. Praczyk, T. A quick algorithm for horizon line detection in marine images. J. Mar. Sci. Technol. 2017, 23, 164–177. [Google Scholar] [CrossRef] [Green Version]
  15. Zhang, J.; Ren, L.; Deng, H.; Ma, M.; Zhong, X.; Wen, P. Measurement of Unmanned Aerial Vehicle Attitude Angles Based on a Single Captured Image. Sensors 2018, 18, 2655. [Google Scholar] [CrossRef] [Green Version]
  16. Eberli, D.; Scaramuzza, D.; Weiss, S.; Siegwart, R. Vision Based Position Control for MAVs Using One Single Circular Landmark. In Unmanned Aerial Vehicles; Springer: Dordrecht, The Netherlands, 2010; pp. 495–512. [Google Scholar] [CrossRef]
  17. Gui, Y.; Guo, P.; Zhang, H.; Lei, Z.; Zhou, X.; Du, J.; Yu, Q. Airborne Vision-Based Navigation Method for UAV Accuracy Landing Using Infrared Lamps. J. Intell. Robot. Syst. 2013, 72, 197–218. [Google Scholar] [CrossRef]
  18. Cesetti, A.; Frontoni, E.; Mancini, A.; Zingaretti, P.; Longhi, S. A Vision-Based Guidance System for UAV Navigation and Safe Landing using Natural Landmarks. J. Intell. Robot. Syst. 2009, 57, 233–257. [Google Scholar] [CrossRef]
  19. DJI. GNSS RTK. Available online: https://www.dji.com/pl/d-rtk-2 (accessed on 28 February 2021).
  20. YUNEEC. Electronic Aviation. Available online: https://yuneec.pl/product-eng-12350-Yuneec-H520E-RTK-drone.html (accessed on 28 February 2021).
  21. AltiGator. Unmanned Solution. Available online: https://altigator.com/en/septentrio-rtk-technology-for-uavs-with-mikrokopter-flight-electronics (accessed on 28 February 2021).
  22. Drones, B.G. RTK, GLONASS, Galileo, BeiDou and More. Available online: https://altigator.com/en/septentrio-rtk-technology-for-uavs-with-mikrokopter-flight-electronics/ (accessed on 28 February 2021).
  23. Freeflysystems. GNSS RTK. Available online: https://freeflysystems.com/ (accessed on 28 February 2021).
  24. Leica. GS18 T GNSS RTK Rover. Available online: https://leica-geosystems.com/products/gnss-systems/smart-antennas/leica-gs18-t (accessed on 28 February 2021).
  25. GNSS. RTK. Available online: https://www.indiamart.com/elitesystems/carbon-fiber\-drone.html (accessed on 28 February 2021).
  26. Bräuer-Burchardt, C.; Voss, K. Automatic Lens Distortion Calibration Using Single Views. In Informatik Aktuell; Springer: Berlin/Heidelberg, Germany, 2000; pp. 187–194. [Google Scholar] [CrossRef]
  27. Devernay, F.; Faugeras, O. Straight lines have to be straight. Mach. Vis. Appl. 2001, 13, 14–24. [Google Scholar] [CrossRef]
  28. Nomura, Y.; Sagara, M.; Naruse, H.; Ide, A. Simple calibration algorithm for high-distortion lens camera. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 1095–1099. [Google Scholar] [CrossRef]
  29. Prescott, B.; McLean, G. Line-Based Correction of Radial Lens Distortion. Graph. Model. Image Process. 1997, 59, 39–47. [Google Scholar] [CrossRef]
  30. Shih, S.W.; Hung, Y.P.; Lin, W.S. When should we consider lens distortion in camera calibration. Pattern Recognit. 1995, 28, 447–461. [Google Scholar] [CrossRef]
  31. Stein, G. Lens distortion calibration using point correspondences. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, PR, USA, 17–19 June 1997. [Google Scholar] [CrossRef] [Green Version]
  32. Tsai, R.Y. An Efficient and Accurate Camera Calibration Technique for 3D machine vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami Beach, FL, USA, 22–26 June 1986; pp. 364–374. [Google Scholar]
  33. Weng, J.; Cohen, P.; Herniou, M. Camera calibration with distortion models and accuracy evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 965–980. [Google Scholar] [CrossRef] [Green Version]
  34. Louhichi, H.; Fournel, T.; Lavest, J.M.; Aissia, H.B. Self-calibration of Scheimpflug cameras: An easy protocol. Meas. Sci. Technol. 2007, 18, 2616–2622. [Google Scholar] [CrossRef]
  35. Sun, C.; Liu, H.; Jia, M.; Chen, S. Review of calibration methods for Scheimpflug camera. J. Sens. 2018, 2018. [Google Scholar] [CrossRef] [Green Version]
  36. Sobel, I. On calibrating computer controlled cameras for perceiving 3-D scenes. Artif. Intell. 1974, 5, 185–198. [Google Scholar] [CrossRef]
  37. Shen, J.; Castan, S. An optimal linear operator for step edge detection. CVGIP Graph. Model. Image Process. 1992, 54, 112–133. [Google Scholar] [CrossRef]
  38. Deriche, R. Using Canny’s criteria to derive a recursively implemented optimal edge detector. Int. J. Comput. Vis. 1987, 1, 167–187. [Google Scholar] [CrossRef]
  39. Bourennane, E.; Gouton, P.; Paindavoine, M.; Truchetet, F. Generalization of Canny–Deriche filter for detection of noisy exponential edge. Signal Process. 2002, 82, 1317–1328. [Google Scholar] [CrossRef]
  40. Marr, D.; Hildreth, E. Theory of edge detection. Proc. R. Soc. Lond. Ser. B. Biol. Sci. 1980, 207, 187–217. [Google Scholar] [CrossRef]
41. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 8, 679–698.
42. DJI. Phantom 4 Pro. Available online: https://www.dji.com/pl/phantom-4-pro/info (accessed on 12 May 2021).
43. S-57 Appendix B.1. IHO Dictionary. Available online: https://iho.int/uploads/user/pubs/standards/s-57/20ApB1.pdf (accessed on 28 February 2021).
44. Department of Defense. World Geodetic System 1984. Available online: https://earth-info.nga.mil/GandG/publications/tr8350.2/wgs84fin.pdf (accessed on 28 February 2021).
45. S-57 Appendix A. IHO Object Catalogue. Available online: https://iho.int/uploads/user/pubs/standards/s-57/31ApAch1.pdf (accessed on 28 February 2021).
46. S-32. IHO Dictionary. Available online: http://hd.iho.int/en/index.php/Main_Page (accessed on 28 February 2021).
47. International Maritime Organization, London. Performance Standards for Electronic Chart Display and Information Systems. Available online: https://www.register-iri.com/wp-content/uploads/A_19_Resolution_817.pdf (accessed on 28 February 2021).
48. S-52. IHO Dictionary. Available online: https://iho.int/uploads/user/pubs/standards/s-52/S-52%20Edition%206.1.1%20-%20June%202015.pdf (accessed on 28 February 2021).
49. Bomford, A.G. The Geodetic Adjustment of Australia. Surv. Rev. 1967, 19, 52–71.
50. Burkholder, E.F. The 3-D Global Spatial Data Model: Principles and Applications; CRC Press: Boca Raton, FL, USA, 2017.
51. Szabova, M.; Duchon, F. Survey Of GNSS Coordinates Systems. Eur. Sci. J. ESJ 2016, 12, 33.
52. Gray, A. A Treatise on Gyrostatics and Rotational Motion; Macmillan: New York, NY, USA, 1918.
53. Goldstein, H.; Poole, C.; Safko, J. Classical Mechanics; American Association of Physics Teachers: College Park, MD, USA, 2002.
54. Bresenham, J.E. Algorithm for computer control of a digital plotter. IBM Syst. J. 1965, 4, 25–30.
55. International Organization for Standardization. Graphic Technology—Extensible Metadata Platform (XMP)—Part 1: Data Model, Serialization and Core Properties; ISO 16684-1; International Organization for Standardization: Geneva, Switzerland, 2019.
56. Pix4D. The Leading Photogrammetry Software for Professional Drone Mapping. Available online: https://www.pix4d.com/product/pix4dmapper-photogrammetry-software (accessed on 28 February 2021).
57. Andrews, L. A template for the nearest neighbor problem. C/C++ Users J. 2001, 19, 40–49.
58. Burnett, M. Blocking Brute-Force Attacks. IIS Resour. Secur. Artic. 2004, 21. Available online: http://www.iis-resources.com/modules/news/article.php?storyid=275 (accessed on 28 February 2021).
59. DJI. Phantom 4 RTK. Available online: https://www.dji.com/pl/phantom-4-rtk (accessed on 28 February 2021).
60. DJI. Phantom 4 RTK: Download Center. Available online: https://www.dji.com/pl/downloads/products/phantom-4-rtk (accessed on 28 February 2021).
61. HxGN. SmartNet. Available online: https://hxgnsmartnet.com/ (accessed on 28 February 2021).
62. Leica. Leica Nova TS60. Available online: https://leica-geosystems.com/products/total-stations/robotic-total-stations/leica-nova-ts60 (accessed on 28 February 2021).
63. Embarcadero. C++Builder. Available online: www.embarcadero.com/products/cbuilder (accessed on 28 February 2021).
64. OpenCV Team. OpenCV. Available online: https://opencv.org/ (accessed on 28 February 2021).
65. NGA. Earth Gravitational Model 2008. Available online: https://en.wikipedia.org/wiki/Earth_Gravitational_Model (accessed on 28 February 2021).
66. GEOTRANS. Geoid Library. Available online: https://earth-info.nga.mil/GandG//geotrans/geotrans3.7/docs/html/_geoid_library_8cpp_source.html (accessed on 28 February 2021).
67. Pavlis, N.K.; Holmes, S.A.; Kenyon, S.C.; Factor, J.K. The development and evaluation of the Earth Gravitational Model 2008 (EGM2008). J. Geophys. Res. Solid Earth 2012, 117, 1–13.
68. Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active contour models. Int. J. Comput. Vis. 1988, 1, 321–331.
69. Xu, C.; Prince, J. Gradient vector flow: A new external force for snakes. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, PR, USA, 17–19 June 1997.
70. Douglas, D.H.; Peucker, T.K. Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Cartogr. Int. J. Geogr. Inf. Geovis. 1973, 10, 112–122.
71. Naus, K.; Wąż, M. The idea of using the A* algorithm for route planning an unmanned vehicle “Edredon”. Sci. J. Marit. Univ. Szczec. 2013, 108, 143–147.
Figure 1. The view of the pier: (a) original, (b) after removing geometric distortions, (c) after edge detection.
Figure 2. Image of the electronic navigational chart No. PL5GDYNA.000 in the Mercator cartographic projection with the border of the land area marked.
Figure 3. Arrangement of reference systems in relation to each other, T_H and T, in space E^3.
Figure 4. Graphical interpretation of the rotation of base e_1^C, e_2^C, e_3^C relative to the base e_1^H, e_2^H, e_3^H and the method of binding T_C with the UAV camera projection system.
Figure 5. Coastline image: (a) on the photo with geometric distortions removed, (b) obtained as a result of perspective projection of the edge of the land area from the ENC (equivalent).
Figure 6. Graphical interpretation of the association of a point from set I found in the close vicinity of the point N_EDG^P.
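To make the association illustrated in Figure 6 concrete, the listing below gives a minimal C++ sketch of the pairing step: for every point of set N, the nearest point of set I is sought and accepted only if it lies within an assumed search radius. The type and function names (Point2D, Pair, associatePoints, searchRadius) are illustrative assumptions and do not come from the authors' implementation.

// Minimal sketch (not the authors' code): for every point of set N, find the
// nearest point of set I and keep the pair only if it lies within the assumed
// search radius ("close vicinity"). All names here are illustrative.
#include <cmath>
#include <limits>
#include <vector>

struct Point2D { double x, y; };
struct Pair    { Point2D nPoint, iPoint; double distance; };

std::vector<Pair> associatePoints(const std::vector<Point2D>& setN,   // points seeking a match
                                  const std::vector<Point2D>& setI,   // candidate points
                                  double searchRadius)                // vicinity limit (pixels)
{
    std::vector<Pair> pairs;
    for (const Point2D& n : setN) {
        double best = std::numeric_limits<double>::max();
        Point2D bestPt{};
        for (const Point2D& i : setI) {                      // brute-force nearest neighbour
            const double d = std::hypot(n.x - i.x, n.y - i.y);
            if (d < best) { best = d; bestPt = i; }
        }
        if (best <= searchRadius)                            // point "found" in close proximity
            pairs.push_back({n, bestPt, best});
    }
    return pairs;
}

Only the pairs whose distance does not exceed the search radius are treated as "found", which is consistent with the counts of seeking and found points reported in Figures 14 and 15.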
Figure 7. A fragment of an orthophotomap of the port of Gdynia with the marked test area (source: geoportal.gov.pl (accessed on 9 August 2019)).
Figure 8. The idea of determining the angles of spatial orientation θ_r, ϕ_r.
Figure 9. The process of matching the land area lines generated from the ENC to the edges detected in the image: (a) Source image; (b) Step 1 of iteration, l_1 = 115 pixels, σ_1 = 3; (c) Step 2 of iteration, l_2 = 58 pixels, σ_2 = 1.5; (d) Step 8 of iteration, l_8 = 1 pixel, σ_8 = 0.023.
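Figure 9 suggests a coarse-to-fine matching loop in which the pixel search length l_k and the angular step σ_k are roughly halved at every iteration (from l_1 = 115 pixels and σ_1 = 3 down to l_8 = 1 pixel and σ_8 = 0.023). The C++ sketch below illustrates such a loop under the assumption of a simple 3x3x3 grid search around the current best corrections; refineCorrections and meanEdgeDistance are hypothetical names, and the actual cost function and search pattern used by the authors may differ.

// Minimal sketch of a coarse-to-fine search over the correction angles
// (assumed structure, not the authors' implementation). The angular step
// sigma is halved every iteration; in each iteration the correction triple
// that minimises the mean distance between the ENC shoreline projection and
// the detected edges is kept. meanEdgeDistance() is a hypothetical callback.
#include <functional>

struct Angles { double psi, theta, phi; };   // spatial orientation corrections (deg)

Angles refineCorrections(std::function<double(const Angles&)> meanEdgeDistance,
                         double sigma0 = 3.0, int iterations = 8)
{
    Angles best{0.0, 0.0, 0.0};
    double sigma = sigma0;                                   // sigma_1 = 3 deg
    for (int k = 0; k < iterations; ++k) {
        const Angles center = best;
        double bestCost = meanEdgeDistance(best);
        for (int dp = -1; dp <= 1; ++dp)                     // 3x3x3 grid around the centre
            for (int dt = -1; dt <= 1; ++dt)
                for (int df = -1; df <= 1; ++df) {
                    const Angles c{center.psi   + dp * sigma,
                                   center.theta + dt * sigma,
                                   center.phi   + df * sigma};
                    const double cost = meanEdgeDistance(c);
                    if (cost < bestCost) { bestCost = cost; best = c; }
                }
        sigma *= 0.5;                                        // sigma_{k+1} = sigma_k / 2
    }
    return best;
}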
Figure 10. The processed image No. 1 with the boundary of the land area marked based on the ENC: (a) Iteration No. 1, (b) Iteration No. 2, (c) Iteration No. 3, (d) Iteration No. 8.
Figure 11. The processed image No. 5 with the boundary of the land area marked based on the ENC: (a) Iteration No. 1, (b) Iteration No. 2, (c) Iteration No. 3, (d) Iteration No. 8.
Figure 12. Graphs of the arithmetic mean and standard deviation of d_min during the iteration process for photograph No. 1.
Figure 13. Graphs of the arithmetic mean and standard deviation of d_min during the iteration process for photograph No. 5.
Figure 14. Graphs of the number of points in set N ("seeking" identical points) and the number of points from set I considered identical ("found" in close proximity) during the iteration process for photograph No. 1.
Figure 15. Graphs of the number of points in set N ("seeking" identical points) and the number of points from set I considered identical ("found" in close proximity) during the iteration process for photograph No. 5.
Figure 16. Graphs of the values of corrections of spatial orientation angles (Δψ_cor, Δθ_cor, Δϕ_cor) during the iteration process for photograph No. 1.
Figure 17. Graphs of the values of corrections of spatial orientation angles (Δψ_cor, Δθ_cor, Δϕ_cor) during the iteration process for photograph No. 5.
Table 1. Orientation performance specification of popular IMUs used in drones.
Type of IMU * | Price | Roll/Pitch Error RMS (deg) | Yaw Error RMS (deg)
Yost Labs 3-Space Sensor | $250 | 1.0 | 2.0
Vectornav VN-100 | $500 | 0.5 | 2.0
XSens MTi-600 series | $520 | 0.5 | 1.0
Analog Devices Inc. ADIS16362BMLZ | $550 | 0.5 | 1.0
Honeywell Aerospace HG1120CA50 | $1500 | 0.1 | 1.0
SGB Eclipse 2 | $4000 | 0.1 | 0.8
* All types use: 3-axis accelerometer, 3-axis gyroscope, 3-axis magnetic sensor.
Table 2. A comparison of five photos with the designated values of three types of spatial orientation angles.
Photo Number | Source Angles of Spatial Orientation (deg): ψ / θ / ϕ | Reference Angles of Spatial Orientation (deg): ψ_r / θ_r / ϕ_r | Correct Angles of Spatial Orientation (deg): ψ+Δψ_cor / θ+Δθ_cor / ϕ+Δϕ_cor
1 | 126.3 / 0.1 / 0 | 123.72 / –0.61 / –0.31 | 123.79 / –0.6 / –0.3
2 | –53.2 / 0.1 / 0 | –55.76 / –0.91 / 0.92 | –55.8 / –0.9 / 0.97
3 | 85 / 0.1 / 0 | 79.32 / –0.31 / –0.32 | 79.35 / –0.3 / –0.3
4 | –52.7 / 0.1 / 0 | –56.21 / –0.74 / 0.35 | –56.2 / –0.8 / 0.34
5 | 125.9 / 0 / 0 | 122.63 / 0.53 / 0.28 | 121.06 / 2.15 / 1.14
Table 3. Values of statistical parameters characterizing the accuracy of determining the angles (ψ+Δψ_cor, θ+Δθ_cor, ϕ+Δϕ_cor) with respect to the angles (ψ_r, θ_r, ϕ_r) for photographs 1–4.
Angle | Minimal Absolute Value of the Difference (deg) | Maximal Absolute Value of the Difference (deg) | Average Value of the Difference (deg) | Standard Deviation of the Difference (deg)
ψ+Δψ_cor | 0.03 | 0.08 | 0.01 | 0.08
θ+Δθ_cor | 0.02 | 0.05 | 0.01 | 0.05
ϕ+Δϕ_cor | 0.01 | 0.05 | −0.03 | 0.001
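The statistics reported in Table 3 (minimal and maximal absolute value, mean and standard deviation of the differences between the corrected and the reference angles) can be computed in a few lines; the sketch below is illustrative only, the describe function is a hypothetical helper, and the sample input is rounded example data rather than the full-precision angle series used by the authors, so it will not reproduce the table exactly.

// Minimal sketch of the accuracy statistics of Table 3 (illustrative only;
// names and sample data are not taken from the paper). Assumes non-empty input.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Stats { double minAbs, maxAbs, mean, stdDev; };

Stats describe(const std::vector<double>& diffs)   // diffs = corrected minus reference (deg)
{
    Stats s{1e9, 0.0, 0.0, 0.0};
    for (double d : diffs) {
        s.minAbs = std::min(s.minAbs, std::fabs(d));
        s.maxAbs = std::max(s.maxAbs, std::fabs(d));
        s.mean  += d;
    }
    s.mean /= diffs.size();
    for (double d : diffs) s.stdDev += (d - s.mean) * (d - s.mean);
    s.stdDev = std::sqrt(s.stdDev / diffs.size());
    return s;
}

int main() {
    // example input only; the authors computed Table 3 from the full-precision series
    const std::vector<double> yawDiffs{0.07, -0.04, 0.03, 0.01};
    const Stats s = describe(yawDiffs);
    std::printf("min %.3f  max %.3f  mean %.3f  std %.3f\n", s.minAbs, s.maxAbs, s.mean, s.stdDev);
    return 0;
}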
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
