Article

The Quality Assessment of Different Geolocalisation Methods for a Sensor System to Monitor Structural Health of Monumental Objects

1 Faculty of Geodesy and Cartography, Warsaw University of Technology, Pl. Politechniki 1, 00-661 Warsaw, Poland
2 Built Environment and Sustainable Technologies (BEST) Research Institute, Faculty of Engineering and Technology, Liverpool John Moores University, Liverpool L3 3AF, UK
3 Documentation and Digitalization Department, Museum of King Jan III’s Palace at Wilanów, ul. Stanisława Kostki Potockiego 10/16, 02-958 Warsaw, Poland
4 Department of Material Culture History, University of Wrocław, Szewska 49, 50-137 Wroclaw, Poland
5 College of Engineering, University of Sharjah, Sharjah P.O. Box 27272, UAE
* Author to whom correspondence should be addressed.
Sensors 2020, 20(10), 2915; https://doi.org/10.3390/s20102915
Submission received: 2 April 2020 / Revised: 11 May 2020 / Accepted: 18 May 2020 / Published: 21 May 2020
(This article belongs to the Special Issue Sensors for Cultural Heritage Monitoring)

Abstract:
Cultural heritage objects are affected by a wide range of factors causing their deterioration and decay over time, such as ground deformations, changes in hydrographic conditions, vibrations or excess moisture, which in historic buildings can lead to the formation of scratches and cracks. Electromagnetic spectroscopy has been widely used for non-destructive structural health monitoring of concrete structures. However, the limitation of this technology is the lack of geolocalisation in space, which prevents its use for multispectral architectural documentation. The aim of this study is to examine different geolocalisation methods for determining the position of the sensor system, which will then allow the results of measurements performed by this device to be georeferenced and corrections to be applied to the sensor response, a crucial element required for further data processing related to the object structure and its features. Classical surveying, terrestrial laser scanning (TLS), and Structure-from-Motion (SfM) photogrammetry methods were reviewed and investigated at three test sites. The results indicated that the TLS technique should be applied for simple structures and plain textures, while the SfM technique should be used for marble-based and other translucent or semi-translucent structures, in order to achieve the highest accuracy of geolocalisation of the proposed sensor system.

1. Introduction

Cultural heritage objects are still endangered by a wide range of factors causing their deterioration and decay over time. In general, degradation processes are caused by both human activities (e.g., pollution, wrong conservation methods, vandalism) and environmental factors [1,2], including the influence of atmospheric conditions, climate change, and biological invasion. For this reason, conservation works are necessary to keep cultural heritage objects and sites in good condition in order to preserve them for future generations.
Architectural heritage, i.e., antique buildings and small architecture, is affected by many degradation processes of varied course, which depend on the characteristics of the materials used as well as the object’s age, working conditions, and random events [3]. The properties of the object’s material may change over time due to natural processes, which can be influenced and accelerated by the presence of water, temperature variations, and micro-climatic conditions, e.g., pollution [4].
In order to find proper methods of conservation and maintenance of an antique building, as well as to establish a work schedule, it is necessary to identify the causes of the object’s degradation [5,6]. Tests performed in situ involve diagnosis based on a qualitative approach (direct observation of structural damage, historical and archaeological research) and a quantitative approach (monitoring and structural analysis) [6]. Diagnostic methods include damage inspection, material testing and analyses, as well as the detection of hidden spaces and cavities [7]. For the identification of wall composition, georadar and endoscopic tests can be applied, especially for the detection of voids, cavities, and consolidation interventions [4,7,8]. The presence of moisture in porous material, especially in water-damaged and humid buildings, is one of the factors causing biological invasion: mildew, moulds, and efflorescence. The examination of such forms of decay can be combined with the monitoring of environmental parameters in order to retrace deterioration patterns.
Structural health monitoring (SHM) for historical and architectural heritage should concern such objectives as [9]:
  • measuring ambient vibrations induced by traffic, wind etc.;
  • measuring the evolution of existing cracks;
  • observing the damage coming from chemical, physical and biological degradation of the object’s material.
Electromagnetic spectroscopy offers non-destructive [10] or embedded solutions [11,12] for structural health monitoring. The electromagnetic sensor, which is the focus of this work, has been used for monitoring excess moisture content and waterproof membrane defects [13,14] of concrete structures. The sensor demonstrated good potential as a method for SHM. However, the limitation of this technology is the lack of geolocalisation in space, which prevents the resulting data from being used as multispectral architectural documentation [15]. The accuracy of measurements performed by the proposed electromagnetic (EM) sensor relies on accurate positioning of the system owing to the air gap between the sensor and the scanned object as well as the angle of the sensor in relation to the object. These parameters are essential in the analysis of obtained data as they alter the reflected signal, i.e., the sensor response. Therefore, the aim of this article is to investigate the most suitable geolocalisation method for the proposed sensor system as a part of the multisensory platform for SHM of cultural heritage objects, which will then allow the results of measurements performed by this device to be georeferenced and corrections to be applied to the observations acquired by the sensor. This is a crucial element required for further data processing, essential for the analyses related to the object structure and its features and allowing spatial and multitemporal analyses.
This article is composed of several parts, including a short review of available solutions for indoor positioning (Section 2) together with a description of the applied orientation model and the characteristics of the measuring and geolocalisation techniques used in the performed experiments: classical surveying, terrestrial laser scanning, and close-range photogrammetry. In Section 3, a full and detailed description of the proposed methodology is presented. The results of the performed experiments, together with their brief analysis and interpretation, are the subject of Section 4. Section 5 contains a comprehensive analysis and discussion of the obtained results and provides the conclusions of the work performed.

2. Review of Indoor Geolocalisation Methods

2.1. Indoor Positioning Systems

Indoor positioning provides accurate information about the location of objects and people inside the measured structure [16]. In recent years, interest in indoor localisation has increased, resulting in a rapid development of Indoor Positioning Systems (IPS). As opposed to the outdoor environment, where Global Navigation Satellite Systems (GNSS) are applied for localisation and navigation, in the case of indoor localisation there is no standard system that could be used on a mass scale.
Various technologies have been applied in the field of indoor positioning and navigation, which include techniques based on Radio Frequency (RF), ultrasound, audible sound, magnetic sensors, infrared (IR), and visual analysis-based systems. Mautz [17] presented an overview of indoor technologies along with their accuracy and coverage (Figure 1).
RF systems are further categorised into Bluetooth Low Energy (BLE), Ultra-wideband (UWB), ZigBee, Wireless Sensor Network (WSN), Wireless Local Area Network (WLAN), Radio-Frequency Identification (RFID), and Near Field Communication (NFC) [18]. Among these methods, WLAN, BLE and ZigBee are popular due to high availability of hardware. The accuracy of RF methods ranges from several meters (WLAN, BLE, ZigBee) [19,20], through several decimetres (UWB) [21,22,23,24,25] up to several centimetres (RFID) [23,26,27,28]. Numerous researchers investigated the usability of RF methods [24,29,30,31] in indoor positioning systems.
Ultrasound systems like the Cricket, Active Bat, and the Dolphin system [18,32] are also successfully applied in the field of IPS for being scalable, accurate (up to several centimetres), and low-cost [23,33]. However, these systems are sensitive to changes of environmental parameters like humidity and temperature [34].
Infrared positioning systems are generally exploited in three forms: active beacons, infrared imaging using natural radiation, and artificial light sources [17]. This solution is low cost, but most IR systems require a clear line-of-sight, which limits their usability in indoor positioning. Depending on the measuring principle, accuracy ranges from several metres (Active Badges) to several centimetres, while systems based on high-resolution infrared sensors are able to detect artificial IR light sources with submillimetre accuracy [17].
Optical/Visual positioning systems involve determining the target position by identification of a marker or image, which is within view with the aid of a mobile sensor or a camera carried by the user [18]. Mautz [35] investigated different possibilities of reference data acquisition including reference acquired from 3D building models, images, deployed coded targets, projected targets, and observations performed by other sensors. For this task, simultaneous localization and mapping (SLAM) also can be applied [35], but it should be noticed that the use of these algorithms is not limited to vision-based methods only [36,37].
Techniques such as terrestrial laser scanning, photogrammetry, or classical surveying may also be considered as tools for optical/visual positioning since they exploit visual and optical observations in order to determine the position of a measured point. The main advantages of these methods are their low cost (especially in the case of close-range photogrammetry), high accuracy (several millimetres or better), widely available software, and a well-known data processing workflow. For these reasons, their applicability as a tool for positioning of the proposed electromagnetic sensor was investigated in this work.

2.2. The Classical Surveying

In order to determine the position of a measuring device, it is necessary to perform the exterior orientation process resulting in the acquisition of so-called exterior orientation parameters, i.e., the location of the sensor body in the assumed reference system together with its rotation angles.
The basic geodetic method for calculating the position of a point in 3D space is tacheometry, which is based on the measurements of distance and angles [38]. Through a single measurement to a point, its $X_p, Y_p, Z_p$ coordinates can be directly determined (Equation (1)):
$$X_p = X_{st} + d\cos(A_z), \quad Y_p = Y_{st} + d\sin(A_z), \quad Z_p = Z_{st} + d\tan(\alpha_v),$$
where $X_{st}, Y_{st}, Z_{st}$ are the known coordinates of the total station position, $d$ is the horizontal distance, $A_z$ is the azimuth to the measured point, and $\alpha_v$ is the vertical angle.
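As a minimal numerical sketch of Equation (1), with illustrative variable names, angles in radians, and instrument/target heights neglected:

```python
import math

def polar_to_xyz(x_st, y_st, z_st, d_h, azimuth, v_angle):
    """Compute point coordinates from a single total-station observation (Eq. (1)).

    x_st, y_st, z_st -- known coordinates of the instrument station [m]
    d_h              -- horizontal distance to the target [m]
    azimuth          -- azimuth to the target [rad]
    v_angle          -- vertical angle to the target [rad]
    """
    x_p = x_st + d_h * math.cos(azimuth)
    y_p = y_st + d_h * math.sin(azimuth)
    z_p = z_st + d_h * math.tan(v_angle)
    return x_p, y_p, z_p

# Example: station at the origin, 10 m horizontal distance, 45 deg azimuth, 5 deg elevation
print(polar_to_xyz(0.0, 0.0, 0.0, 10.0, math.radians(45), math.radians(5)))
```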
The described method is burdened with some disadvantages, such as relatively low accuracy (depending on the accuracy of the measuring equipment) and low reliability, understood as the possibility to check the correctness of observations relative to each other and to detect errors in measurements. In order to increase the accuracy of the point position and its reliability, multiple angular and linear observations from several total station positions must be conducted. The establishment of a geodetic network of points with interrelated geometric relations is also required. In this case, the number of observations is greater than the number of unknowns (the number of points multiplied by the number of coordinate components). The coordinates of points in the network are calculated by least squares adjustment. Teunissen [39] applied the well-known Gauss-Markov (Equation (2)) linear model (a linearised form of the input nonlinear relationships). This linear model is also used in the photogrammetric bundle adjustment process [40].
$$Ax + e = y; \quad e \sim (0, C_e),$$
where:
  • $A$ is the design matrix (n × u) (n is the number of observational equations, u is the number of unknowns), rank($A$) = u (full rank);
  • $x$ is the parameter vector (u × 1);
  • $e$ is the true error vector (n × 1);
  • $y$ is the observation vector (n × 1) (uncorrelated observations);
  • $C_e$ is the observation error covariance matrix (n × n) (positive definite), which is also the observation result covariance matrix, i.e., $C_e \equiv C_y$.
As a result of the above equation, the final coordinates of points are calculated together with their accuracy characteristics. It is worth noting that this particular method provides full knowledge about the accuracy of the coordinate calculation, which can be derived from the covariance matrix. No other method ensures such complete characteristics of measurement accuracy.
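For illustration, a weighted least-squares solution of the Gauss-Markov model in Equation (2) can be sketched as follows; the design matrix A, the observation vector y, and the covariance matrix C_e are assumed to come from the linearised network equations (a sketch, not the adjustment software used by the authors):

```python
import numpy as np

def gauss_markov(A, y, C_e):
    """Least-squares estimate of the Gauss-Markov model y = A x + e, e ~ (0, C_e).

    Returns the estimated parameters x_hat, their covariance matrix C_x,
    and the residual vector e_hat.
    """
    W = np.linalg.inv(C_e)          # weight matrix
    N = A.T @ W @ A                 # normal-equation matrix
    C_x = np.linalg.inv(N)          # covariance matrix of the estimated parameters
    x_hat = C_x @ (A.T @ W @ y)     # estimated parameters
    e_hat = y - A @ x_hat           # residuals
    return x_hat, C_x, e_hat
```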

2.3. Terrestrial Laser Scanning

Terrestrial Laser Scanning (TLS) is a range-based measuring technique [41], which has gained much popularity in a variety of fields including industry, civil engineering, damage assessment and deformation monitoring, forestry inventory or cultural heritage documentation [42,43,44,45]. As TLS is based on a laser beam deflection, it allows for the acquisition of large sets of accurate 3D metric data automatically and in a short time.
The result of such measurement is a 3D point cloud representing the shape of measured objects together with the intensity of the returning signal which may be successfully applied in examination of properties of investigated objects [46] such as their reflectance, roughness, moisture, brightness, and grain size [47,48]. However, in the case of geolocalisation purposes, the significant factors are measurement accuracy and resolution.
The accuracy of TLS measurements ranges from several centimetres up to the submillimetre level [49,50]; however, it depends on many factors including environmental conditions, the properties of the tested object, instrumental errors [51,52,53], as well as the scanning geometry [54]. Some of them can be eliminated or minimised either by applying corrections to measured angles and distances or by using the system’s calibration [55,56]. However, environmental conditions such as adverse temperature, rain, or dust are difficult to predict and model. Furthermore, the reflection intensity has an effect on the accuracy of range measurements. Voegtle [47] investigated the influence of different materials on TLS measurement accuracy and proved that for surfaces of brighter colour (and consequently higher intensity) the geometric accuracy was about 3 times higher compared to black surfaces (of low intensity). Moreover, in the case of partially transparent materials and specular surfaces, the geometric accuracy decreases rapidly [47,57,58,59,60].
The measurement resolution is determined by the angle increment, the incidence angle, the scanning distance, and the spot size resulting from beam divergence [61,62]. The size of the laser spot increases with increasing scanning distance, affecting the accuracy of the point position [63] and decreasing the scanning resolution. Moreover, beam divergence is associated with the “mixed edge” problem: a part of the beam is reflected by the object’s edge, while the other part travels further and is reflected later by another object. In consequence, more than one distance is recorded for a single signal, and the final distance value, which is determined as a mean distance, can be inaccurate [61].

2.4. The Structure-from-Motion/Close-Range Photogrammetry

Close-range photogrammetry is another measuring technique for the digital 3D reconstruction of object shapes. Unlike terrestrial laser scanning, which is an active technique, photogrammetry is an image-based passive method, which provides high-resolution and accurate datasets while being a low-cost solution characterised by a high level of automation. These features make close-range photogrammetry a very flexible and thus widely used measuring method, successfully applied in a variety of fields including civil engineering [64,65], forestry [66,67], geology [68,69], archaeology [70,71], and cultural heritage documentation [72,73].
Modern close-range photogrammetry applies methods and algorithms derived from Computer Vision (CV), such as Structure-from-Motion (SfM) and Multi-View Stereo (MVS). The main function of SfM is to calculate the image orientation based on a series of overlapping, offset images [74], while MVS is used for dense point cloud generation. SfM is based on two principles: binocular vision and the changing view of an object that is moving or being observed from different points of view [66].
As the parameters of the camera’s interior orientation are calculated via self-calibration, the use of consumer-grade non-metric cameras is possible. Camera positions and scene geometry are reconstructed simultaneously during bundle adjustment, which involves image matching (the process of detecting corresponding features on multiple images). Image matching algorithms usually perform poorly for objects and scenes where corresponding (homologous) features cannot be clearly identified, i.e., for areas characterised by poor or no texture as well as repetitive patterns. Apart from texture quality, several other factors influence the accuracy of image orientation: the base-to-depth (B/D) ratio, the number of measured tie points per image during image matching, the number of images on which a certain tie point is visible, and the accuracy of the camera self-calibration [75].
Camera positions and the 3D scene derived from SfM lack scale and orientation in a specific coordinate system. To georeference them in a particular coordinate system (e.g., a geodetic coordinate system), a 7-parameter transformation based on ground-control points with known coordinates can be applied [76]. For objects whose shape must be accurately reconstructed but whose position in space is irrelevant, scale bars of known length may be applied instead. The known image orientation and intrinsic camera parameters calculated by SfM allow the generation of a dense point cloud, for which MVS algorithms are usually used [77]. The resulting point cloud can be further processed for 3D modelling and orthophoto generation.
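As an illustration of such georeferencing, the sketch below estimates a 7-parameter (Helmert) similarity transform from corresponding ground-control points using an SVD-based Umeyama/Kabsch approach; this is one possible estimator, not necessarily the one used in the referenced software:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate a 7-parameter (Helmert) transform dst ~ s * R @ src + t
    from corresponding 3D points (n x 3 arrays, n >= 3)."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    H = src_c.T @ dst_c                       # cross-covariance (up to a factor n)
    U, S, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid a reflection
    R = Vt.T @ D @ U.T                        # rotation matrix
    s = np.trace(np.diag(S) @ D) / np.sum(src_c ** 2)            # uniform scale
    t = mu_dst - s * R @ mu_src               # translation vector
    return s, R, t
```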
Similar to other techniques, close-range photogrammetry has some limitations resulting from applied algorithms and methods. However, today the integration of several complementary techniques is becoming more and more popular as it allows for the enhancement of the final results.

2.5. The Rigid-Body Orientation

The main subject of this work was to examine different approaches aiming at the determination of exterior orientation parameters of the electromagnetic sensor at each sensor station. For this purpose, the sensor was treated as a rigid body, for which position and rotation angles had to be found in a defined reference system (Figure 2). In the performed experiments, the reference system was associated with the tested plane formed by walls to be examined by the electromagnetic sensor at each test site.
In order to determine 3 linear and 3 angular exterior orientation parameters, the 3D affine transformation is commonly applied. The transformation is performed using a minimum of three tie points, but if the number of points increases, the redundancy increases too, resulting in higher accuracy of data registration. This also allows the elimination of outliers and therefore provides better fitting of the 3D transformation model into performed observations.
The relation between the sensor system and the global reference system is expressed in Equations (3) and (4) [78,79,80].
$$M_{ext} = R_{\omega\varphi\kappa} M_{int} + T,$$
$$R_{\omega\varphi\kappa} = R_1(\omega)\, R_2(\varphi)\, R_3(\kappa),$$
where $M_{ext}$ is the vector of point coordinates in the global system, $R_{\omega\varphi\kappa}$ is the rotation matrix, $M_{int}$ is the vector of point coordinates in the local (sensor) system, $T$ is the translation vector, $R_1(\omega)$ is the rotation matrix about the x-axis by the angle $\omega$, $R_2(\varphi)$ is the rotation matrix about the y-axis by the angle $\varphi$, and $R_3(\kappa)$ is the rotation matrix about the z-axis by the angle $\kappa$. The final representation of the rotation matrix [81] $R_{\omega\varphi\kappa}$ composed of these angles can be expressed as Equation (5):
$$R_{\omega\varphi\kappa} = \begin{bmatrix} \cos\varphi\cos\kappa & -\cos\varphi\sin\kappa & \sin\varphi \\ \cos\omega\sin\kappa + \sin\omega\sin\varphi\cos\kappa & \cos\omega\cos\kappa - \sin\omega\sin\varphi\sin\kappa & -\sin\omega\cos\varphi \\ \sin\omega\sin\kappa - \cos\omega\sin\varphi\cos\kappa & \sin\omega\cos\kappa + \cos\omega\sin\varphi\sin\kappa & \cos\omega\cos\varphi \end{bmatrix},$$
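A minimal sketch of Equations (3)–(5), assuming the angles are given in radians and that the rotation is composed as $R_1(\omega) R_2(\varphi) R_3(\kappa)$ as stated in Equation (4):

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix R = R1(omega) @ R2(phi) @ R3(kappa) (Equations (4)-(5))."""
    R1 = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    R2 = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    R3 = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return R1 @ R2 @ R3

def sensor_to_global(points_int, omega, phi, kappa, T):
    """Transform sensor-system coordinates into the global system (Equation (3))."""
    R = rotation_matrix(omega, phi, kappa)
    return (R @ np.asarray(points_int).T).T + np.asarray(T)
```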
This representation of rotation and translation allows the independent orientation of the sensor system in a global, external reference system. However, in order to perform structural health monitoring, it is necessary to obtain the relative orientation, i.e., the relation between the examined wall and the face of the sensor system. To compute this relation, it is necessary to first define a new reference system related to the wall, then transform the coordinates of all points into the new coordinate system, and finally calculate the sensor’s translation and rotation parameters in the new coordinate system. In this case, it is not possible to directly implement the rotation matrix presented in Equation (5) because the values of the rotation angles $\omega, \varphi, \kappa$ need to be found first. To do this, the relations between the normal vectors of the reference plane and the sensor plane were computed.
To describe the rotations [82], the $O\,x_0, y_0, z_0$ coordinate system was used, followed by the vector $r$ (Figure 3), with which the $O\,x_1, y_1, z_1$ coordinate system was associated. It is necessary to consider the relations between the coordinates of the vector $r$ in the coordinate system $O\,x_1, y_1, z_1$ and the coordinates of this vector in the fixed (non-rotated) reference system $O\,x_0, y_0, z_0$. The position of the vector $r$ before rotation is denoted by $r_0$, whereas $\{i_0, j_0, k_0\}$ denotes the standard orthogonal base in the system $O\,x_0, y_0, z_0$; $i_0, j_0, k_0$ are unit vectors along the $x_0, y_0, z_0$ axes, respectively. Similarly, $\{i_1, j_1, k_1\}$ is the standard orthogonal base in the $O\,x_1, y_1, z_1$ system.
The vector $r$ is presented in the system $O\,x_0, y_0, z_0$ as follows:
$$r_0 = x_0 i_0 + y_0 j_0 + z_0 k_0,$$
whereas the same vector $r$ is presented in the system $O\,x_1, y_1, z_1$ as follows:
$$r_1 = x_1 i_1 + y_1 j_1 + z_1 k_1.$$
The vector $r$ is represented by the vectors $r_0$ and $r_1$, so the relationships between the components of $r$ in both coordinate systems are:
$$r_{0,x} = r_0 \cdot i_0 = r_1 \cdot i_0 = x_1\, i_1 \cdot i_0 + y_1\, j_1 \cdot i_0 + z_1\, k_1 \cdot i_0.$$
Analogous formulas are obtained for the components $r_{0,y}$ and $r_{0,z}$:
$$r_{0,y} = x_1\, i_1 \cdot j_0 + y_1\, j_1 \cdot j_0 + z_1\, k_1 \cdot j_0,$$
$$r_{0,z} = x_1\, i_1 \cdot k_0 + y_1\, j_1 \cdot k_0 + z_1\, k_1 \cdot k_0.$$
Equations (8)–(10) can be presented in the form of a vector equation:
$$r_0 = R_{1,0}\, r_1.$$
The matrix $R_{1,0}$ represents the transformation of the coordinates of the point P from the system $O\,x_1, y_1, z_1$ to the system $O\,x_0, y_0, z_0$ and is written in the form:
$$R_{1,0} = \begin{bmatrix} i_1 \cdot i_0 & j_1 \cdot i_0 & k_1 \cdot i_0 \\ i_1 \cdot j_0 & j_1 \cdot j_0 & k_1 \cdot j_0 \\ i_1 \cdot k_0 & j_1 \cdot k_0 & k_1 \cdot k_0 \end{bmatrix}.$$
The coefficient values in Equation (12) make it possible to calculate the rotation angles $\omega, \varphi, \kappa$ of the sensor plane relative to the wall plane, described in Equation (5). In this particular case, the unit vectors $i_0, j_0, k_0$ and $i_1, j_1, k_1$ are the parameters of the normal vectors of the wall plane and the sensor plane, respectively.
In order to determine the plane equation parameters, a minimum of three points is needed. The vector product $r$ of two vectors $r_1$ and $r_2$, formed according to Equation (13), yields the plane’s normal vector:
$$r = r_1 \times r_2.$$
As the number of control points is four or more, the least squares adjustment [83] method is used to compute the parameters $[a, b, c]$ from Equation (14):
$$a(x - x_0) + b(y - y_0) + c(z - z_0) = 0.$$
Since the examined object is represented by a very large number of points (e.g., a point cloud from laser scanning or the SfM method), the pcfitplane [84] function in the Matlab software was used for the calculation. The plane is described by a planeModel object, while maxDistance is the maximum allowed distance from an inlier point to the plane. The plane parameters are specified as a 1-by-4 vector. The four parameters $[a, b, c, d]$ describe the equation of a plane: $ax + by + cz + d = 0$. This function uses the M-estimator SAmple Consensus (MSAC) algorithm to find the plane [85].
The last element is the calculation of the distance $d$ between the point $(x_0, y_0, z_0)$, which is the centre of the sensor, and the wall plane $[a, b, c]$, using Formula (15):
$$d = \frac{|a x_0 + b y_0 + c z_0|}{\sqrt{a^2 + b^2 + c^2}}.$$
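The sketch below illustrates the same chain of operations in Python: a plane fit, the tilt between the wall and sensor plane normals, and a point-to-plane distance in the spirit of Equation (15). It uses an outlier-free total least-squares fit as a simple stand-in for the MSAC-based pcfitplane function and works with the $ax + by + cz + d = 0$ form of the plane:

```python
import numpy as np

def fit_plane(points):
    """Fit a plane a*x + b*y + c*z + d = 0 to an n x 3 point cloud
    (total least squares via SVD; a simple stand-in for MATLAB's pcfitplane)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                    # direction of smallest variance = plane normal
    d = -normal @ centroid
    return np.append(normal, d)        # [a, b, c, d]

def point_plane_distance(point, plane):
    """Perpendicular distance from a point to the plane [a, b, c, d]."""
    a, b, c, d = plane
    return abs(a * point[0] + b * point[1] + c * point[2] + d) / np.sqrt(a**2 + b**2 + c**2)

def tilt_between_planes(plane_wall, plane_sensor):
    """Angle between the wall-plane and sensor-plane normal vectors [rad]."""
    n1, n2 = plane_wall[:3], plane_sensor[:3]
    cosang = abs(n1 @ n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.arccos(np.clip(cosang, -1.0, 1.0))
```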

3. Materials and Methods

Structural health monitoring (SHM) involves the use of numerous instruments, i.e., sensors and other measuring devices that provide the measurement and assessment of features and parameters characterising the investigated objects. There are many ready-to-use solutions available on the market for SHM; however, in the case of monumental objects, which require non-invasive, contactless SHM methods, there is a need for the development of novel measuring techniques capable of detecting previously unknown features or conditions of the objects. This article concerns the preparatory works required prior to the use of data acquired by an electromagnetic sensor applied for SHM purposes.
The multisensor platform (Figure 4) is a prototype instrument consisting of an electromagnetic sensor and four digital cameras installed on a metal frame, a full-frame Canon EOS 5D Mark II camera, and a Z + F 5006 h terrestrial laser scanner. The proposed electromagnetic (EM) sensor system consists of two wideband horn antennas that operate in the frequency range of 2–18 GHz. The antennas are attached to the designed aluminium frame at a certain angle to each other in order to control the penetration depth of the sensor signal. The antennas play different roles in the setup: one acts as a transmitter and the other as a receiver of the reflected signal. The measurements are provided using S-parameters, namely S21 (transmitting the signal from port 1 and receiving it at port 2), via a Rohde & Schwarz ZVL13 Vector Network Analyzer (VNA). The captured data are in a complex format (real and imaginary parts) and are recorded using a bespoke LabVIEW program on a laptop connected to the VNA via a TCP/IP connection. The sensor system is positioned 2–6 cm from the measured object to avoid contact, i.e., the measurements are contactless. This is a primary requirement when it comes to measurements of cultural heritage objects and structures, to avoid any potential damage to their surfaces.
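A hypothetical sketch of the kind of record that couples one complex S21 sweep with the geolocalisation result, so that geometric corrections can be applied later; all field names are illustrative and do not come from the authors’ LabVIEW software:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class EMSweep:
    """One contactless S21 measurement coupled with the sensor pose (illustrative only)."""
    frequencies_hz: np.ndarray   # sweep frequencies, e.g. 2-18 GHz
    s21: np.ndarray              # complex S21 samples (real + imaginary parts)
    position_xyz: np.ndarray     # antenna-centre coordinates in the reference system [m]
    omega_phi_kappa: np.ndarray  # rotation angles of the sensor plane [rad]
    standoff_m: float            # air gap between the sensor and the wall (2-6 cm)

sweep = EMSweep(
    frequencies_hz=np.linspace(2e9, 18e9, 801),
    s21=np.zeros(801, dtype=complex),            # placeholder for VNA data
    position_xyz=np.array([1.250, 0.480, 1.100]),
    omega_phi_kappa=np.zeros(3),
    standoff_m=0.04,
)
```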
It must be stressed that the results of measurements performed by the electromagnetic sensor are not the subject of this study. The aim of this research is to propose and examine different methods of georeferencing of the sensor system to apply corrections to the sensor readings at further stages of data processing. The results of performed experiments will establish the data processing workflow, which will be a basis for future work.

3.1. Overview of the Approach

The proposed method of geolocalisation of the sensor system is a multi-stage process based on the combination of commonly used algorithms implemented in Agisoft Metashape and original software. The proposed method is a novel approach allowing both the orientation of the electromagnetic device and its integration with other measuring devices. The whole experiment involved the acquisition of data at three test sites differing in features and containing materials and surface types commonly present in cultural heritage objects. Detailed characteristics of the test sites are presented in the next section. Figure 5 illustrates the scheme of the research work and experiments performed.
The process of sensor registration is an original approach resulting in the determination of the exterior orientation parameters (the XYZ coordinates of the antenna centre and three angular parameters, $\omega, \varphi, \kappa$) of the EM sensor as well as its relative orientation parameters with respect to the reference wall, which is the subject of measurement performed by the electromagnetic sensor at each test site. In this work, the plane corresponding to the examined wall is called “the base plane”. Calculation of these parameters (performed in two different ways) makes it possible to determine the sensor position; on the other hand, they can also be used in the EM data processing as a basis for the calculation of corrections applied to the raw data acquired by the sensor.
The whole experimentation was divided into three main parts: (1) data acquisition and pre-processing, (2) a set of tests concerning sensor orientation methods, and (3) the analysis of obtained results.
The process of data registration presented in this paper consists of the following stages:
1. Determination of the reference system and base planes (data pre-processing).
At this stage the reference system was established and the data serving as reference for further analyses were determined.
1.1.
Three different surveying methods were selected for this purpose: classical surveying with a Leica TCRP 1202 total station (TS), terrestrial laser scanning (TLS) with a Z + F 5006 h terrestrial laser scanner, and close-range photogrammetry (SfM/MVS method) with a Canon EOS 5D Mark II SLR camera.
1.2.
Data processing at this stage involved the computation of coordinates of control points and check points as well as the determination of parameters of base planes at each test site. For these two operations, two sources of data were used: total station observations and TLS measurements. Both geodetic measurements and TLS data were processed separately. As a result, two sets of point coordinates and two base planes (TS base plane and TLS base plane) were calculated for each test site.
1.3.
The coordinates of control points and check points were calculated separately in two ways: based on TS observations through geodetic adjustment, and based on TLS measurements, from which the XYZ values for each point were derived.
1.4.
The parameters of base planes were computed using the coordinates of control points.
1.5.
For the photogrammetric elaboration of close-range images, the SfM approach was used. The proposed methodology involved a pre-processing step verifying whether the image sets had been acquired correctly. For this purpose, relative orientation of images was performed separately for each dataset containing the images covering the wall at each test site (denoted as SfM—the wall on the diagram) as well as for the sets of images taken at each sensor position (denoted as SfM—EM positions).
2.
Sensor orientation involving the bundle adjustment.
For this purpose, four different scenarios for calculation of exterior orientation parameters of the sensor were proposed:
2.1.
Scenario I: this scenario involved the use of marked control and check points during the bundle adjustment. Point coordinates were derived from classical surveying measurements. These points were measured both on the images covering only the wall and on the images containing both the sensor and the wall at each station.
2.2.
Scenario II: similar to the first scenario, the second method involved the use of signalised control and check points in the bundle adjustment. However, in this case the coordinates of these points were derived from TLS measurements.
2.3.
Scenario III: since the measurement of reference points is time-consuming, the possibility to reduce the time of field measurement was tested in Scenario III. This scenario involved the use of a feature-based orientation method implemented in the Agisoft Metashape software. Initially, the images covering only the wall were aligned with the use of control and check points measured with a total station. This set of images served as a reference in the next step of data processing. The next step involved the relative orientation of images taken at each sensor station. Sets of images covering single stations were aligned so that their relative orientation was computed. Next, tie points were automatically detected both on the images of each wall and on the images of the sensor at each position. Based on these tie points, the exterior orientation of the image sets of all sensor stations was calculated.
2.4.
Scenario IV: the fourth scenario consisted of the same steps as Scenario III with the difference that in this case the coordinates of signalised points were derived from TLS data.
3.
The analysis of results obtained with each scenario.
Each of the above-mentioned geolocalisation scenarios was examined with regard to its accuracy and efficiency. Efficiency in this case shall be understood as the time necessary to measure reference points and then calculate their coordinates applying different scenarios. Complete analysis required the verification of the parameters listed below:
3.1.
The verification of base plane approximation using different source data: as the orientation of the EM sensor is computed in relation to the base plane, it is first required to verify the accuracy of the approximation of the base plane. Two datasets were tested as source data to approximate the base plane: TLS data and classical surveying data. The quality assessment included: (1) the analysis of RMSE values obtained on marked control and check points measured with a total station, (2) the analysis of RMSE values obtained on marked control and check points measured by TLS, and (3) the analysis of deviations between the TLS base plane and the measured TLS points.
3.2.
The analysis of relative image orientation accuracy performed both for “the wall” and “the wall + the sensor” datasets on each test site: in order to check whether images were properly acquired and also to verify the correctness of their relative orientation, the inspection of RMSE reprojection errors appearing on natural and signalised control/check points was performed.
3.3.
The analysis of exterior image orientation using RMSE values obtained on control and check points performed both for “the wall” and “the wall + the sensor” datasets on each test site: in order to verify the quality of exterior orientation of the images, the quality assessment and accuracy analysis concerning marked control and check points were performed for each test site.
3.4.
The analysis of exterior image orientation using scalebars: additional verification involved the accuracy assessment on scalebars located both on the wall and on the sensor frame.
3.5.
Dense point cloud analysis involving base planes: it consisted of the calculation of deviations of the dense point clouds generated for all sensor stations in the various scenarios from the base planes determined at earlier stages of data processing. The purpose of this analysis was to check the achievable quality of the approximation of a reference plane based on the MVS point cloud (rather than the control points). Tests included the analysis of deviations from the TS base plane calculated for the dense point clouds generated for Scenarios I and III, as well as the deviations from the TLS base plane observed for the dense point clouds generated for Scenarios II and IV.
3.6.
Dense point cloud analysis involving point cloud sampling: the aim of the analysis was to find the optimal subsampling resolution for plane detection; for this purpose, dense point clouds generated for all sensor stations were subsampled at four resolutions (5 mm, 10 mm, 30 mm, and 50 mm). Then, for each of them, the deviations from the MVS reference plane were examined (a minimal sketch of this check is given at the end of this subsection).
3.7.
The analysis of skews between base planes and reference planes fitted into dense point clouds generated for each sensor station and the assessment of the $\omega, \varphi, \kappa$ angles of the sensor plane.
The results of the analyses described above made it possible to assess which methods of data registration allowed the highest accuracy and efficiency to be obtained.
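A minimal sketch of the subsampling-and-deviation check described in item 3.6, using a simple voxel-grid average as a stand-in for whichever subsampling tool was actually used; cloud and base_plane are assumed inputs:

```python
import numpy as np

def voxel_subsample(points, voxel_size):
    """Subsample an n x 3 point cloud by keeping one (mean) point per voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

def plane_deviations(points, plane):
    """Signed distances of points from the plane [a, b, c, d]."""
    a, b, c, d = plane
    return (points @ np.array([a, b, c]) + d) / np.sqrt(a**2 + b**2 + c**2)

# e.g. compare deviation statistics for 5, 10, 30 and 50 mm subsampling resolutions
# for res in (0.005, 0.010, 0.030, 0.050):
#     dev = plane_deviations(voxel_subsample(cloud, res), base_plane)
#     print(res, dev.mean(), dev.std())
```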

3.2. Overview of Source Data and Selected Test Sites

In order to examine each of the proposed methods for EM sensor registration, three test sites were established: a fragment of a plastered brick wall and two memorial tables: one made of marble and one made of sandstone. All test sites were located in the Main Hall at Warsaw University of Technology, Warsaw, Poland. Figure 6 presents these test sites along with the sensor positions and locations of control/check points and scalebars.
Test site I covered a fragment of a plastered brick wall, where 12 EM sensor positions (blue dots) were established (Figure 6a). For each position, approximately 25 images were taken with an SLR Canon camera. From the set of reference points marked on this site, 5 control points and 3 check points were defined. Test site II (Figure 6b) covered a marble memorial table, while test site III was a memorial table made of sandstone. In both cases 3 control points, 2 check points, and 3 sensor positions were established. For each of these two test fields approximately 20 images were acquired. As the sites both covered a very small area, it was impossible to establish more than 3 control points and 2 check points.
The coordinates of control and check points were obtained with the use of two measurement techniques: a total station survey (geodetic surveying) and terrestrial laser scanning. The geodetic measurements were performed using Leica TCRP 1202 total station, which provides the angular accuracy of 2”, the accuracy of reflector distance measurement of 2 mm + 2 ppm and the accuracy of reflectorless distance measurement of 3 mm + 2 ppm. TLS measurements were done with a Z + F Imager 5006 h terrestrial laser scanner. In the case of total station measurements, the point coordinates were calculated by means of geodetic adjustment while in the case of TLS data a semi-automatic calculation method implemented in Z + F LaserControl software was applied. Table 1 presents the accuracy of coordinates of the reference points expressed through RMSE values.
The accuracy of point coordinates is presented in a different way depending on the applied measurement technique because of different methods used to calculate them.
Surveying involves a direct measurement of the distance from the instrument to the desired point as well as two angles: horizontal and vertical. The final XYZ coordinates are computed in the process of geodetic adjustment. In contrast, the TLS technique does not allow direct measurements to selected points; instead, the surrounding area is scanned. For this reason, it was necessary to use the algorithm implemented in the Z + F LaserControl software, which identifies the centre of the target automatically based on the part of the point cloud covering the target area. Therefore, the accuracy of coordinates is represented by a linear RMSE without splitting it into XYZ components.
The RMSE values presented in Table 1 indicate a similar accuracy of results obtained by TLS measurements at all test sites. However, this trend cannot be observed for the total station measurements: it is clearly seen that the RMSE values observed for Test Site I differ significantly from the others. This is a consequence of the applied measurement methodology and the resulting accuracy of the adjustment of observations. Unlike Test Sites II and III, where only one measurement station was established, at Test Site I there were three instrument stations. The increased number of stations resulted in a higher accuracy of the adjustment process, as it was possible to include angular measurements, which are characterised by a higher accuracy than distance measurements. In the case of Test Sites II and III, only one instrument position was used, so it was necessary to include the distance measurement in the adjustment process, which affected the final accuracy of the reference points. These values were used as weights in the final SfM bundle adjustment process.

4. Results and Discussions

4.1. Selection of Data for Base Plane Computation

In order to calculate the exterior orientation (EO) parameters of the EM sensor, a set of reference points was used. For this reason, the accuracy of their coordinates was one of the key factors affecting the accuracy of the EO parameters. Moreover, further stages of data processing required the determination of the sensor orientation relative to the base plane, which was determined based on the reference points. For these reasons, it was necessary to consider which measurement method should be applied and which configuration of points should be used in the process of determination of the base plane (Table 2). For the accuracy analysis, two types of points forming a common plane were used: control points, used to calculate the plane parameters, and check points, used for an independent accuracy analysis. It should be stressed that the control and check points for Test Site II only are different from those used in the image bundle adjustment process, due to the possibility of marking points on the historical marble. Additionally, in the case of the TLS data, the distance (deviation) between each point and the base plane was calculated. Table 2 presents the number of used control and check points as well as the results of the quality assessment of the base plane determination.
Based on the residuals presented in Table 2, it may be stated that for test sites where the measured objects were made of different materials, only the use of marked points provides high orientation accuracy. The crucial aspect to consider is whether a particular measuring technique is appropriate for measuring objects made of a certain material. TLS measurements are known to achieve low accuracy when used to scan transparent surfaces. For this reason, it is not recommended to use the TLS point cloud for plane fitting in the case of Test Site II, where semi-translucent marble was the dominant material.

4.2. Relative Orientation of the Images Covering the Walls

For the calculation of relative camera orientation, three approaches were applied. (1) The “SfM” method involved the use of signalised points as manually measured tie points during the bundle adjustment, but no coordinates in any external coordinate system were assigned to them, so the resulting model had no scale. (2) In the “TS” method, point coordinates were determined by geodetic measurement with a total station. (3) In the “TLS” method, point coordinates were derived from TLS measurements (Figure 7). The accuracy of relative orientation was examined through the analysis of reprojection error values, which is the basic indicator allowing the correctness of a photogrammetric model to be verified.
Eight signalised points were used at Test Site I and five signalised points at each of Test Sites II and III. For each method, the coordinate system was defined in a different way, so Figure 7 presents the linear values of the reprojection error, as they are independent of the coordinate system definition, making the acquired results comparable.
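The reported figures come from the photogrammetric software, but the underlying quantity can be reproduced as in the sketch below, given the observed and back-projected image coordinates of the same points (an illustration, not the software’s internal routine):

```python
import numpy as np

def reprojection_rmse(observed_px, reprojected_px):
    """RMSE of the linear (2D) reprojection error in pixels.

    observed_px, reprojected_px -- n x 2 arrays of measured and back-projected
    image coordinates of the same tie/control points.
    """
    residuals = np.linalg.norm(observed_px - reprojected_px, axis=1)
    return float(np.sqrt(np.mean(residuals ** 2)))
```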
Analysing the results presented in Figure 7, it can be concluded that in all cases the relative orientation was calculated correctly, as the mean RMSE of the reprojection error exceeds 1 pixel in only one case. However, it can easily be seen that the RMSE values of the reprojection error obtained for Test Site I differ significantly depending on the applied method. In this case, the crucial factor responsible for this effect was the material and texture of the tested object. Test Site I consisted of a plain wall with a solid, uniform texture, which decreased the number and quality of automatically detected key points used as tie points in the process of image matching. In the case of the first test field, the highest relative orientation accuracy was obtained by the TLS method, while the remaining methods provided the same, slightly worse accuracy. It can be observed that including TLS control points during photo alignment improved the relative orientation quality and decreased the reprojection error.
The results obtained for the second test site indicate similar accuracy of the SfM and TLS methods and moreover, in this case they provided higher accuracy than the TS method. As far as the mean value of the RMSE of the reprojection error is concerned, it can be seen that its value is approximately 0.3 pixels while the maximum is about 1.2 pixels both for the SfM and TLS methods. These results suggest that unlike in the Test Site I, TLS reference points did not influence the quality of relative orientation.
It is noteworthy that the values of the minimum, maximum, and mean reprojection error are all the same for all methods in the case of the sandstone block. Summarising the results obtained for all datasets, it can be observed that among all methods the highest accuracy was achieved by the TLS method, while geodetic measurements proved to slightly decrease the accuracy of relative orientation. When additional control and check points are used, it is necessary to consider whether the quality of these points (meaning the location accuracy and the accuracy of the measurement performed independently with another measuring technique) has a negative impact on the final accuracy of the bundle adjustment. Accordingly, the results acquired in the relative orientation process should be analysed along with the results of exterior orientation. The errors obtained on control points should be taken into consideration in order to verify the correctness of the assumed mathematical model. Last but not least, the analysis of errors calculated for check points serves as an independent verification of the obtained results.

4.3. Relative Orientation of the Images Covering Fragments of Walls and the Sensor

The second dataset consisted of images covering both the reference walls and the sensor at each station. Table 3, Table 4 and Table 5 contain the main characteristics of the relative orientation accuracy obtained at the three test sites. In this case, five different orientation methods were applied. Being aware of the principles of SfM algorithms, their method of operation, and the way the object’s features influence the final result, it was expected that the RMSE values obtained for this dataset would be slightly lower than in the case of the reference walls. This assumption was supported by the fact that the texture of the pictures covering both the walls and the sensor is less uniform and therefore allows better results to be achieved at the stage of image matching. This assumption proved to be true at Test Site I and in most cases at Test Site II.
In order to verify if the images are correctly oriented, the errors were calculated separately for points located on walls (“Wall”) and points on the sensor frame (“Sensor”). Detailed description of applied orientation methods was presented in Section 3.1.
In Table 3, due to the 12 EM positions, the authors decided to show only 3 positions, with the minimum, maximum, and mean linear values of the reprojection error. The values acquired for the remaining stations are similar. The reprojection error values (Table 3) obtained for all methods and all sensor positions at Test Site I indicate that in all cases the relative orientation was calculated correctly, since the mean RMSE of the reprojection error does not exceed 1 pixel. It is readily seen that the results acquired by the different orientation methods are very similar. It should also be noticed that in all cases the errors observed on points located on the wall are higher than those obtained for the points on the sensor frame; however, these differences are small and may be neglected.
The reprojection error values obtained for Test Site II (Table 4) imply that a high accuracy of relative orientation was achieved. It can be noticed that, similar to the results obtained at the previous test site, the reprojection error takes similar values for all orientation methods. However, the results obtained for Scenario I clearly stand out from the rest: in this case, the reprojection error values are higher for all sensor positions. The mean reprojection errors observed on points located on the wall and on points located on the sensor frame are either equal or very similar.
Once again, the acquired results, presented in Table 5, proved the correctness of the relative orientation determination, as the mean reprojection error varies between 0.4 and 1.3 pixels, which are acceptable values indicating high relative orientation accuracy. It can be observed that at this test site the highest orientation accuracy was obtained by applying Scenario I. Moreover, this method provided the smallest differences between points located on the wall and points on the sensor frame. It is also worth noting that for Scenario II the difference between the reprojection errors calculated for points located on the wall and points located on the sensor frame is the highest.
On the grounds of the above-described tests performed on the three test fields, it has been observed that at the stage of calculating the relative orientation of images the choice of orientation method is not crucial for obtaining the highest accuracy, since the differences observed for different methods are rather subtle. Furthermore, at this stage no impact of the material type on the orientation accuracy was observed.

4.4. Exterior Orientation of the Images Covering the Walls

In the next step of data processing, the exterior orientation parameters for all cameras were calculated. All images were divided into the same datasets as at the previous stage of relative orientation. To obtain the exterior orientation parameters, two methods were applied: the TS method (Scenario I) and the TLS method (Scenario II). These two approaches were described in detail in Section 3.1.
Again, the first group of images to be processed was the dataset containing images of the walls (the sensor was not included). The exterior orientation of the images was determined with the accuracy characteristics presented in Figure 8.
Figure 8 contains the RMSE values calculated using the coordinates of signalised reference points, which were determined with the use of geodetic measurements and TLS data, as explained in Section 3.1. For additional control, a scale bar examination was introduced. For this purpose, three sets of scale bars were established on the walls in each test field. The lengths of the scale bars were calculated based on coordinates determined on the models and compared with reference values derived from point coordinates measured on site with a total station. The scale bar parameter is computed as the mean of the absolute values of the differences between the distances computed in the way described above.
In the case of Test Site I, both orientation methods provided submillimetre accuracy of the resulting model. However, analysing the RMSE values on reference points, it can be concluded that Scenario I provided a higher accuracy of exterior orientation determination. Nevertheless, the mentioned difference in accuracy is rather subtle. Similarly to the results obtained at Test Site I, at Test Site II Scenario I also allowed a higher accuracy of exterior orientation parameter determination. In contrast, in this case the difference in favour of Scenario I is much more noticeable, as both the RMSE of the check points and the scale bar error are about 3 times lower than in the TLS scenario. The accuracy of the reference points was lower because terrestrial laser scanning has been proven to achieve low accuracy on semi-translucent materials (e.g., marble); this resulted in a lower accuracy of the exterior orientation of the images. On the other hand, the tests performed at Test Site III yielded different results than the previous two. In this case, the RMSE values obtained on check points indicate a higher accuracy of the TLS scenario. However, the difference on the scale bars is relatively small, suggesting a similar internal consistency of both models.

4.5. Exterior Orientation of the Images Covering Fragments of Walls

The next stage of data processing involved the determination of the exterior orientation parameters of the images covering the sensor together with fragments of the scanned walls. According to Figure 5, which illustrates this phase, four scenarios were applied for this purpose. Figure 9, Figure 10 and Figure 11 contain the accuracy characteristics of the exterior orientation of the images.
It can be noticed that the RMSE values obtained on check points at all stations vary between 1 mm and 3.9 mm, and the RMSE on scale bars ranges between 0.3 mm and 4 mm. It should be noted that in all cases the accuracy achieved in Scenario I is higher than in Scenario III, and the same holds for Scenario II and Scenario IV. Therefore, it can be concluded that at this test site the use of direct georeferencing with geodetic measurements and TLS reference data provides better results than the two-stage combined referencing involving the SfM point-based approach. Moreover, in this case it cannot be stated clearly which of these two combined scenarios offers better accuracy, since for each of the three sensor stations the verdict would be different. In addition, Scenario I demonstrated a slightly higher accuracy of exterior orientation. However, it should be noted that the differences between these methods are relatively small, mainly in the difference of RMSE observed on scale bars.
The results obtained at Test Site II lead to different conclusions than the observations at Test Site I. First, it is clearly visible that both approaches involving the use of TLS (Scenarios II and IV) provided lower accuracy than Scenarios I and III. The probable cause of this effect is the decreased accuracy of TLS measurements on semi-translucent materials, e.g., marble. Unlike the results obtained at Test Site I, here the combination of TLS and SfM (Scenario IV) provided either higher (Station I) or similar (Stations II and III) accuracy compared to the “direct” approach (Scenario II). This means that the use of this two-stage scenario may improve the accuracy of orientation, which is otherwise decreased by other errors. Additionally, the use of the combined approach “TS + SfM” (Scenario III) resulted in lower RMSE values on check points, suggesting a higher positioning accuracy at Stations II and III, while providing a similar internal consistency of the models, reflected by the scale bar value.
Compared to the results obtained at the previous two test sites, the highest accuracy of exterior orientation was achieved at Test Site III, as reflected by the RMSE and scale bar values (mostly below 1 mm). Unlike the results obtained at the previous test fields, in this case it is difficult to identify patterns that would allow the best approach to be selected.

4.6. The Verification of Exterior Orientation Accuracy Performed on Scalebars

The verification of the accuracy of exterior orientation of image sets involved additional control of deviations observed on scale bars located on each wall at each test site as well as on the sensor frame. Reference lengths of the scalebars were measured with a metal ruler with the accuracy of 0.5 mm. In this section, the results obtained for each scenario are presented separately for these two sets of scale bars along with their interpretation.
Since the number of EM sensor stations at Test Site I was significantly higher than in the case of Test Sites II and III, a different way of presenting the results was chosen. The number of observations was sufficient for histogram generation, so in the case of Test Site I four histograms illustrating the deviations obtained for each scenario were prepared (Figure 12).
While analysing the histograms of deviations calculated on scalebars with regard to the applied orientation method, it can be readily seen that the results acquired in Scenarios I and II as well as in Scenarios III and IV are pair-wise similar, being characterized by a similar range of values and deviation distribution. Starting with Scenarios I and II, it can be observed that the deviation values range from about −1 mm up to 2 mm. Although the range of values is almost the same in both cases, Scenario I (the “TS” method) should be considered more accurate, since most observations fall within the 0–1 mm range, while in the case of Scenario II (the “TLS” method) the results are more dispersed. Moving on to Scenarios III and IV, the range of deviation values acquired by these two methods is also similar (−1.5 mm up to 10 mm), and the accuracy of data registration in these two scenarios is approximately equal. It can be concluded that the first two scenarios provided better results, as the range of deviation values was much narrower, indicating higher accuracy and repeatability of results. This means that in the case of Test Site I the use of SfM (Scenarios III and IV) during data registration achieved worse results. Test Site I was a plain white wall lacking any characteristic features, which affected the quality of image matching and decreased the final accuracy of exterior orientation. Consequently, it can be inferred that for images containing mostly poorly textured areas, data registration methods involving the use of SfM for exterior orientation (like Scenarios III and IV) are not recommended.
Due to the insufficient amount of data, the deviations occurring on Test Sites II and III were not presented in the form of histograms. Instead, mean deviation values were analysed. Figure 13 and Figure 14 contain bar charts presenting the results.
On Test Site II mean deviation values range from approximately 0.5 mm up to approximately 2.5 mm. It can be seen that the “TS” method (Scenario I) provided the most consistent results on all three stations. Moreover, on two stations this method provided the smallest mean deviations, which do not exceed 1 mm. Only in the case of Station 3 was a slightly better result achieved by the “TS + SfM” method (Scenario III). It should also be noticed that the scenarios involving the use of TLS (Scenarios II and IV) performed mostly worse than the methods relying on geodetic measurements (Scenarios I and III). This should be associated with the characteristics of marble, which was the dominant material on the site, and its impact on TLS measurements. Since the TLS technique achieves lower measurement accuracy on translucent materials, in this case the decreased measurement accuracy caused lower accuracy of the computation of exterior orientation and thus higher mean deviations for Scenarios II and IV.
On the other hand, analysing the results obtained on Test Site III, it is clearly seen that the observed range of deviations is much narrower than in the case of Test Site II: mean deviations vary approximately between 0.06 mm and 0.4 mm. On all sensor stations the best results were acquired by the TS method; this time, mean deviations did not exceed 0.15 mm. What is more, comparing the results obtained with and without the use of SfM (i.e., the pairs Scenario I/III and Scenario II/IV), it can be stated that the use of SfM does not improve the accuracy of exterior orientation. Additionally, in 2 of 3 cases the use of geodetic measurements instead of TLS provided smaller mean deviation values.
To summarise, the worst results were obtained for Scenarios III and IV. Nevertheless, the RMSE values are acceptable and are within the measurement accuracy.
Additional tests involved the calculation of deviations occurring on scalebars determined on the metal frame of the EM sensor. There were 6 signalised reference points installed on the sensor frame and between them three scale bars were defined (Figure 6d).
Similar to the previous analyses involving scalebars, in this case the deviations were calculated as differences between the reference values and the distances computed based on coordinates derived from the model, according to the applied scenario (I–IV). Again, the results obtained on Test Site I are presented in the form of histograms (Figure 15).
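To make the computation explicit, the sketch below shows how a single scale-bar deviation can be obtained as the difference between the endpoint distance derived from the oriented model and the ruler-measured reference length. The endpoint coordinates and the reference length in the example are hypothetical and serve only to illustrate the arithmetic.

```python
import numpy as np

def scalebar_deviation(p1, p2, reference_length):
    """Deviation of a scale bar: distance between its endpoints derived from
    the oriented model minus the ruler-measured reference length (metres)."""
    model_length = np.linalg.norm(np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float))
    return model_length - reference_length

# Hypothetical endpoint coordinates and a 0.500 m reference length
d = scalebar_deviation([0.012, 0.431, 1.207], [0.015, 0.931, 1.209], 0.500)
print(f"deviation: {d * 1000:.2f} mm")
```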
The results obtained on the scalebars located on the sensor’s frame again indicate that the accuracy achievable in Scenarios I/II and Scenarios III/IV is pairwise similar (Figure 15). In the case of the TS and TLS methods, the deviation values range from approximately −0.5 mm up to approximately 1.5 mm, and the distribution of deviations in both cases is similar. Furthermore, in the case of Scenarios III and IV the range of deviation values is between approximately −0.5 mm and approximately 4 mm, being slightly narrower for Scenario III. This leads to the conclusion that the orientation methods which do not involve the use of SfM performed better. It is also worth noting that, compared to the results obtained for the scalebars located on the wall, the accuracy achieved by the Scenario I and II methods is a little higher, but in the case of the Scenario III and Scenario IV methods the difference is significant (maximum deviation on the wall: approximately 10 mm vs. maximum deviation on the sensor frame: approximately 4 mm). Therefore, it can be concluded that SfM-based orientation methods provide better results in the case of objects of more complex structure (3D objects), which is consistent with the SfM principles mentioned in previous sections.
It is readily seen that the results obtained on Test Site II vary significantly between stations, especially in the case of Scenarios III and IV, where the mean deviations observed on Station I are several times higher than on the remaining stations (Figure 16). Moreover, on this test site the best results were acquired by the TLS method, which provided both the highest accuracy (mean deviation below 0.5 mm) and repeatability between stations. It should be noted that the low accuracy achieved on the scale bars located on the wall did not entail a decrease of accuracy on the scalebars located on the sensor. However, the differences between scenarios are noticeable only on the first two stations; on the last one the results are comparable and do not exceed 0.5 mm.
Analysing the results obtained on Test Site III, it can be seen that the mean deviations vary in a narrow range between 0.5 mm and 0.9 mm. The differences between scenarios are rather subtle in this case; however, on all stations Scenario I performed the worst. Nevertheless, based on Figure 17 it is not possible to determine unequivocally which method provided the highest accuracy.
Considering the deviations calculated on Test Sites II (Figure 16) and III (Figure 17) for scalebars located both on the walls and on the sensor frame, it can be seen that their distributions are contradictory: the methods which provided the highest accuracy in the first case gave the worst results in the other.

4.7. The Quality Assessment of the Reference Plane Fitting into the Wall + Deformation Analysis

According to the diagram (Figure 5), the next stage of data processing was dense point cloud generation based on terrestrial images. The purpose of this step was to verify the quality of each dense point cloud with regard to the corresponding base plane. Dense clouds were generated for the image sets covering whole reference walls (as can be seen in Figure 6) as well as for the sets of images covering both the sensor and fragments of reference walls, taken at each EM sensor station. Again, the tests considered the results obtained on the three test sites, on which four scenarios of data registration were applied. Figure 18, Figure 19 and Figure 20 illustrate statistics of the deviations of cloud points from each base plane (the process of base plane definition for each scenario was described in Section 4.1). Each graph presents the results obtained for one scenario.
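The deviation of a cloud point from a base plane is its signed point-to-plane distance. A minimal sketch of this computation is given below, assuming the base plane is expressed as a·x + b·y + c·z + d = 0; the synthetic cloud is used only to illustrate how the plotted statistics can be derived.

```python
import numpy as np

def point_to_plane_deviations(points, plane):
    """Signed distances of N x 3 points from a plane given as (a, b, c, d),
    i.e., the set of points satisfying a*x + b*y + c*z + d = 0."""
    a, b, c, d = plane
    normal = np.array([a, b, c], dtype=float)
    return (points @ normal + d) / np.linalg.norm(normal)

# Hypothetical dense cloud scattered 2 mm (1 sigma) around the plane z = 0
rng = np.random.default_rng(0)
cloud = rng.normal(size=(10000, 3)) * np.array([1.0, 1.0, 0.002])
dev = point_to_plane_deviations(cloud, (0.0, 0.0, 1.0, 0.0))
print(f"mean {dev.mean()*1000:.2f} mm, std {dev.std()*1000:.2f} mm, "
      f"min {dev.min()*1000:.2f} mm, max {dev.max()*1000:.2f} mm")
```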
Test Site I was a plain wall lacking any decorations and thus characterised by uniform, poor texture. This poor texture visibly affected the accuracy of data registration and the quality of the final dense point cloud. Starting with the analysis of the results obtained for Scenarios I (Figure 18I) and II (Figure 18II), it is readily seen that the distribution of deviations in both cases is very similar. Nevertheless, in the case of Scenario I all values are slightly shifted down, indicating the presence of a systematic error, which is probably caused by the difficulty in defining proper weights of the observations during the bundle adjustment. The systematic error is approximately 7–8 mm; such values significantly exceed the accuracy of the base plane approximation (see Table 2). In the case of Scenario II (Figure 18II) a systematic error can be observed too, but its value, which ranges between 2 and 3 mm, remains within the measurement accuracy and the accuracy of the base plane approximation. Moving on to Scenarios III (Figure 18III) and IV (Figure 18IV), it can be noticed that the proper orientation of data with the SfM method was not possible on all EM sensor stations. The range of calculated deviations is fairly wide, starting from 3 mm and ending with 17 mm. The observed differences are the consequence of several problems occurring while using SfM and MVS algorithms, like errors during image matching and depth reconstruction, which are particularly frequent in the case of poorly textured areas. In addition, similar to Scenario I (Figure 18I), in the case of Scenario III (Figure 18III) the presence of a systematic error is visible too; again, its probable cause is the decreased quality of the bundle adjustment.
Figure 19 and Figure 20 present the results obtained for test sites II and III.
Test Sites II and III were memorial tablets made from marble and sandstone, respectively. Consequently, compared to Test Site I, both of them were characterized by a more complex texture. Analysing the results obtained for Test Sites II (Figure 19) and III (Figure 20), it can be noticed that the deviations of points from the base planes are fairly small and do not exceed 5 mm. This means that, compared to the results obtained on Test Site I, both the values of the deviations and their dispersion are smaller, indicating higher accuracy of data registration and higher repeatability of the acquired results. Moreover, in the case of both test sites the differences occurring between Scenarios I and II as well as III and IV are small and may be considered negligible. Such an improvement compared to the results obtained on Test Site I is a consequence of the principles of the applied SfM and MVS algorithms. Smaller deviation values are the result of the fairly complex, diverse texture, making it possible to detect more key points of high quality, which resulted in higher accuracy of image matching and higher quality of the final dense point cloud.
In conclusion, when choosing a scenario to be applied for data registration, the characteristics of the tested object must be taken into consideration. In the case of large areas of plain, uniform texture lacking characteristic features it is recommended to apply methods such as Scenario I and Scenario II, while methods involving SfM are not suitable. On the other hand, all four scenarios proved to provide similar results in the case of objects and areas characterised by diverse texture.

4.8. The Quality Assessment of the Planes Fitting into the Multi-View Stereo (MVS) Point Clouds/Surveying

In the previous Section 4.7, the reference planes (base planes) were determined by fitting them into two sets of source data: control points measured with a total station and control points measured with a TLS. However, the authors decided to check whether it is possible to avoid the use of additional measuring devices (like a total station or TLS) at this stage. In order to verify this hypothesis, a set of reference planes was fitted into MVS point clouds which were subsampled with different resolutions. The authors decided to use the subsampling method implemented in the Open3D library [86]. For this purpose, four subsampling radius values were chosen: 5 mm, 1 cm, 3 cm, and 5 cm. The point clouds subsampled with different resolutions were compared to four reference planes, each one fitted into an MVS point cloud generated for each scenario. Figure 21, Figure 22 and Figure 23 show the deviations calculated for the point clouds acquired in all four scenarios. Again, the deviations are presented in the form of box plots showing basic statistics.
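A minimal sketch of this processing step is shown below. It assumes a recent Open3D version in which voxel_down_sample and segment_plane are available; the file name, the mapping of the subsampling radius to a voxel size, and the choice of a RANSAC plane fit are illustrative assumptions rather than the exact calls used in the study.

```python
import numpy as np
import open3d as o3d

# Load an MVS dense point cloud (file name is illustrative only)
pcd = o3d.io.read_point_cloud("mvs_dense_cloud.ply")
pts = np.asarray(pcd.points)

# Subsample at the four resolutions used in the study and fit a reference plane
for radius in (0.005, 0.01, 0.03, 0.05):          # 5 mm, 1 cm, 3 cm, 5 cm
    sub = pcd.voxel_down_sample(voxel_size=radius)

    # RANSAC plane fit to the subsampled cloud (the estimator used in the
    # paper is not specified; segment_plane is one readily available option)
    (a, b, c, d), inliers = sub.segment_plane(distance_threshold=0.002,
                                              ransac_n=3,
                                              num_iterations=1000)

    # Deviations of the full-resolution cloud from that reference plane
    dev = (pts @ np.array([a, b, c]) + d) / np.linalg.norm([a, b, c])
    print(f"radius {radius*100:4.1f} cm: mean {dev.mean()*1000:6.2f} mm, "
          f"max |dev| {np.abs(dev).max()*1000:6.2f} mm")
```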
Analysing the sets of box plots (Figure 21) prepared for each scenario (I–IV), some trends can be observed at first glance. Similarities can be seen for the pair Scenario I and Scenario III as well as for the pair Scenario II and Scenario IV. Consequently, the results obtained for these pairs of methods will be analysed separately. Starting with Scenarios I and III, it can be observed that the mean deviation values do oscillate around zero, but the range of obtained values is quite wide in both cases, visibly wider than in the case of Scenarios II and IV. Moreover, especially in the case of Scenario III, it can be seen that the accuracy improves with decreasing resolution of the MVS reference point cloud, and it can be concluded that a lower resolution allows the elimination of some errors present in the original data. The mean deviation values calculated for Scenarios II and IV are close to zero. Additionally, the box plot sizes and both the minimum and maximum values are similar. As in Scenarios I and III, in Scenario II a slight improvement of accuracy was observed for the point cloud with 5 cm resolution, whereas such a phenomenon did not occur in Scenario IV. Furthermore, it should be stressed that in all cases the maximum deviations do not exceed +/−1.5 cm.
For Test Site II (Figure 22), the results show that the deviations between the dense point cloud and the plane fitted into the downsampled point cloud do not exceed +/−2 mm (yellow box plot). The best results were obtained for Scenario I. Similar to the previous results, it can be seen that the SfM-based methods (Scenarios III and IV) yield worse results and larger box plot sizes. In the case of Test Site III (Figure 23), similar to Test Site II, better results were obtained for Scenario I and also for Scenario II. The worst results were obtained for Scenarios III and IV, but they are still acceptable.

4.9. The Analysis of Skews and the Orientation Angles of the Sensor Frame

The final stage of the evaluation of methods for exterior orientation determination involved the examination of two additional factors—the influence of the reference plane determination with regard to the base plane and the quality of assessment of the sensor orientation in relation to the base plane. The process of base plane definition corresponding to walls and the calculation of reference planes for each sensor station were described in detail in Section 4.1 and Section 4.7. The aim of the analyses presented below is to examine the spatial relations between the base planes determined for each test site and scenario and the reference planes calculated using dense point clouds generated for each sensor station. For each test site and orientation scenario (I–IV) the skews between mentioned pairs of planes were computed. Table 6 contains mean linear values of skews (Equation (16)) divided into X, Y, and Z components for reference planes fitted into marked points in relation to the base planes.
In order to analyse the linear deviation, the skew values were changed into the linear values in the following form:
X = 16 [cm] · sin(ω),  Y = 16 [cm] · sin(φ),  Z = 11 [cm] · sin(κ),   (16)
where 16 cm is half of the height and depth of the frame, 11 cm is half of the width of the frame, and ω, φ, κ are the skew angles of the plane.
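Expressed in code, the conversion of Equation (16) is a direct application of the sine terms above; the skew angles used in the example are hypothetical.

```python
import numpy as np

def skew_to_linear(omega_deg, phi_deg, kappa_deg):
    """Equation (16): convert skew angles of the plane into linear values,
    using 16 cm (half of the frame height/depth) for X and Y and
    11 cm (half of the frame width) for Z. Returns values in metres."""
    omega, phi, kappa = np.radians([omega_deg, phi_deg, kappa_deg])
    x = 0.16 * np.sin(omega)
    y = 0.16 * np.sin(phi)
    z = 0.11 * np.sin(kappa)
    return x, y, z

# Example with hypothetical skew angles of 0.5, 0.3 and 1.0 degrees
x, y, z = skew_to_linear(0.5, 0.3, 1.0)
print(f"X = {x*1000:.2f} mm, Y = {y*1000:.2f} mm, Z = {z*1000:.2f} mm")
```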
Table 7 presents the mean values of skews calculated for reference planes fitted into dense point clouds resampled with 3 cm resolution in relation to the base planes.
Analysing the results obtained for Test Site I, no significant differences were observed either with regard to the way the reference plane was defined or to the applied orientation scenario (Table 6 and Table 7). All skew values are similar and do not exceed the measurement accuracy. On the contrary, the results acquired on Test Sites II and III are much more varied depending on the scenario. This is a consequence of the higher measurement errors that appeared both in the geodetic surveys and in the TLS measurements. Nevertheless, the skew values are within the measurement accuracy.
The last stage of data processing is the computation of the orientation angles of the sensor: ω, φ, κ. To calculate these angles, the rigid-body orientation approach described in Section 2.5 was applied. Next, the distances between the vertexes of the sensor frame and the wall were measured with a metal ruler and served as a reference. Based on these distances, the angles ω, φ, κ were calculated once more, providing the second set of data for comparison. The analysis of the differences occurring between these two sets of angles was performed for all three test sites and all four orientation scenarios. The results are presented in Table 8 and Table 9. The angular values were then converted into linear values (see Equation (16)). In the first case (Table 8) the base plane determined using marked control points served as a reference plane. In the second case (Table 9), the reference plane fitted into a reference dense point cloud resampled with 3 cm resolution, generated for each sensor station, was used as a reference.
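As an illustration of the reference computation, a tilt angle of the frame with respect to the wall can be recovered from two ruler distances measured at frame vertexes separated by a known frame dimension. The sketch below assumes this simple two-distance geometry, which is not spelled out in the text, and uses hypothetical readings.

```python
import numpy as np

def tilt_from_ruler_distances(d_upper, d_lower, vertex_separation):
    """Tilt angle (degrees) of the sensor frame with respect to the wall,
    recovered from two perpendicular frame-to-wall distances measured with
    a ruler at vertexes separated by a known frame dimension (all in metres).
    Assumed geometry: tilt = arcsin of the distance difference over the
    separation of the two vertexes."""
    return np.degrees(np.arcsin((d_upper - d_lower) / vertex_separation))

# Hypothetical ruler readings of 15.3 cm and 14.8 cm over the 32 cm frame height
omega_ref = tilt_from_ruler_distances(0.153, 0.148, 0.32)
print(f"reference omega = {omega_ref:.2f} deg")
```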
The results presented in Table 8 and Table 9 contain the final accuracy of the sensor orientation angles achieved with the four orientation methods. First, it can be seen that there are no significant differences in the acquired results depending on the choice of the reference plane, as their values range between 0 and 1.2 mm. The accuracy achieved for Test Sites II and III is correct and, considering the absolute deviation values ranging from 0 mm up to 3.9 mm, within the measurement accuracy of classical measurements. However, the deviations observed on Test Site I are slightly higher than the others (above 4 mm) despite the use of a more accurate measurement methodology. The probable cause of this phenomenon is a small systematic error.

5. Conclusions

The preparation of comprehensive architectural documentation of a monumental object is one of the key factors in the field of preservation of cultural heritage. The value of such documentation increases with the number of data sources and measuring techniques used for its creation. The electromagnetic sensor, which is a component of the proposed multisensor platform for structural health monitoring, makes it possible to examine the state of conservation of a cultural heritage object as well as to investigate its inner structure and the destructive processes (moisture, cracks) causing its progressing degradation. The integration of measurements performed with this sensor with data acquired by other measurement techniques (e.g., terrestrial laser scanning, close-range photogrammetry) would extend the scope of possible structural analyses with regard to the “spatial element”, i.e., the spatial distribution of the occurring phenomena and the object features. However, for this purpose, the multi-source data needs to be georeferenced in the same reference system. Until now, no methodology had been proposed for the determination of the EM sensor system’s geolocalisation. Furthermore, the orientation of the sensor in relation to the investigated object directly affects the sensor response. Consequently, known orientation parameters are necessary for the calculation of corrections which need to be applied to the observations in order to make them correct and comparable in the case of differential multitemporal analyses. In this article the Authors proposed a methodology for geolocalising the EM sensor in an assumed external reference system and for determining the orientation parameters with an accuracy characterized by RMSE not exceeding 5 mm. In order to find the parameters of relative and exterior orientation, three measuring techniques were applied: classical surveying, terrestrial laser scanning, and close-range photogrammetry (Structure-from-Motion and Multi-View Stereo).
The whole experimentation consisted of three main steps: (1) data acquisition and pre-processing (including the extraction of the reference plane and the determination of coordinates of reference points); (2) sensor orientation and bundle adjustment; and (3) the estimation of exterior and relative orientation parameters together with accuracy analyses and the quality assessment.
The study of orientation methods applicable to the sensor involved four different approaches (scenarios). In order to check which of the proposed scenarios would provide the highest accuracy of the resulting relative and exterior orientation parameters, a series of analyses was performed. These analyses covered the following issues: (1) the accuracy analysis concerning the determination of the base plane (which was used later during relative orientation computation and at further stages of data processing) based on two sources of data: total station measurements and TLS data; (2) the accuracy analysis of the relative orientation of the images covering the walls and the sensor at each station, based on the analysis of reprojection error values; (3) the accuracy analysis of exterior orientation parameters using RMSE values obtained on control and check points, with additional control using scale bars located on the walls and on the sensor’s frame; (4) the analyses concerning the accuracy of reference plane approximation based on MVS dense point clouds and the accuracy of the orientation of the sensor in relation to the mentioned reference plane. Consequently, the accuracy of relative and exterior orientation of the EM sensor should be considered separately.
The reprojection error is an indicator of the quality of relative orientation of images and therefore allows the correctness of a photogrammetric model to be verified. A set of properly aligned photos should be characterized by a reprojection error that does not considerably exceed 1 pixel, regardless of the quality of the object texture. However, significant differences are to be observed in the maximum reprojection error values.
The inclusion of control points during the bundle adjustment should not negatively affect the final accuracy of the process. In fact, using them as manually identified tie points should rather increase the accuracy of relative orientation. Differences observed in the minimum, maximum, and mean reprojection error values depend on the effectiveness of feature detection algorithms and are particularly visible in the case of poorly textured areas containing a small number of features. Additional, manually determined tie points should considerably improve the accuracy of relative orientation of images characterized by uniform texture and minimise the reprojection error. Nevertheless, in the case of photographs of complex, diverse texture, the addition of extra tie points would not result in a significant improvement in bundle adjustment accuracy. However, it is noteworthy that the specification of proper observation weights is crucial, as wrong weights may increase the reprojection error value or even make it impossible to calculate the relative orientation parameters.
Based on the quality assessment of the relative orientation performed on the image sets covering only the reference walls, it can be concluded that on all three test sites proper weights were assigned to the observations and the relative orientation was calculated properly (Figure 7). In the case of Test Site I, which was a plain wall of poor, uniform texture, the addition of control points measured by TLS (Scenario II) as manual tie points improved the accuracy of the bundle adjustment, halving the mean reprojection error. However, in the case of Test Sites II and III, which were characterized by more complex texture rich in features, the addition of TLS control points did not influence the final relative orientation accuracy. Moreover, the use of TS control points (Scenario I) did not influence the accuracy of the bundle adjustment in the case of Test Sites I and III and, what is more, in the case of Test Site II the mean reprojection error increased.
Moving on to the relative orientation of the images covering both the EM sensor and the reference walls, it is clear that the relative orientation of the image sets obtained on all test sites was calculated properly in all scenarios, as the mean reprojection error only in a few cases slightly exceeded 1 pixel, mostly remaining below 1 pixel. On the grounds of the performed analyses, it can be concluded that at the stage of relative orientation computation the choice of orientation scenario is not a factor that significantly affects the accuracy of the bundle adjustment.
In order to calculate the exterior orientation of the images as well as to assess the accuracy of the performed computation, two types of reference points of known coordinates were used. Control points were included in the adjustment process, while check points were used to check the accuracy of the resulting photogrammetric model. RMSE values obtained on check points are an independent verification of the model accuracy, since their coordinates in an external coordinate system are computed based on the exterior orientation parameters and their location on the images expressed in pixels. Then, their coordinates derived from the photogrammetric model are compared with the reference coordinates measured with another technique (in this case: classical surveying and TLS). To avoid the influence of errors coming from the transformation between the local coordinate system of the scanner and the external coordinate system of each test site, the Authors decided to examine the linear values of RMSE obtained on control and check points, since they are independent of the applied reference system. Additionally, the accuracy of exterior orientation was verified using scalebars established both on the reference walls and on the sensor’s frame. Their lengths were measured with the accuracy of 0.5 mm with a metal ruler.
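For clarity, the sketch below shows how the per-axis and 3D RMSE of check points can be computed from model-derived and reference coordinates; the coordinate values are hypothetical.

```python
import numpy as np

def check_point_rmse(model_xyz, reference_xyz):
    """Per-axis and 3D RMSE between check-point coordinates derived from the
    photogrammetric model and reference coordinates (TS or TLS), in metres."""
    diff = np.asarray(model_xyz, dtype=float) - np.asarray(reference_xyz, dtype=float)
    rmse_xyz = np.sqrt(np.mean(diff ** 2, axis=0))
    rmse_3d = np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))
    return rmse_xyz, rmse_3d

# Hypothetical check-point coordinates (not the experimental values)
model = [[1.000, 2.001, 0.999], [3.002, 1.999, 1.501], [0.499, 0.998, 2.001]]
reference = [[1.001, 2.000, 1.000], [3.000, 2.000, 1.500], [0.500, 1.000, 2.000]]
rmse_xyz, rmse_3d = check_point_rmse(model, reference)
print("RMSE X/Y/Z [mm]:", np.round(rmse_xyz * 1000, 2),
      "| RMSE 3D [mm]:", round(rmse_3d * 1000, 2))
```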
The analysis performed on the reference points suggests that, compared to TLS, classical surveying allowed higher accuracy of exterior orientation to be achieved on all three test sites, especially in the case of Test Site II, where the difference between the results acquired with these two methods is particularly visible (Figure 8). However, it must be stressed that the RMSE values obtained for TS do not exceed 0.5 mm and in the case of TLS measurements the maximum RMSE is 1.2 mm; both results indicate high accuracy of exterior orientation of the images and are acceptable. Moving on to the analysis concerning the accuracy of exterior orientation of the images covering both the EM sensor and the reference walls, it can be concluded that high orientation accuracy was achieved on all test sites, as the RMSE values are mostly below 3 mm on each sensor station. Furthermore, the differences between the RMSE values obtained for Scenarios I and II are less than 2 mm. Analysing the deviations observed on the scalebars on Test Site I (Figure 12 and Figure 15), it was noticed that the results obtained for Scenarios I/II and III/IV are pairwise similar, indicating that Scenarios I and II provide higher accuracy than the methods involving the use of SfM. The probable reason is the plain texture of this test site. The differences between all four scenarios are very subtle on the remaining test sites (Figure 13, Figure 14, Figure 16 and Figure 17) and can be considered negligible. Moreover, comparing the deviations obtained on the scalebars located on the wall and those calculated for the sensor’s frame, it can be concluded that SfM-based orientation methods provide better results in the case of objects characterized by more complex structure (3D objects).
The positions of the sensor and the examined monumental object relative to each other need to be known before the sensor data are processed. To determine the relative position of the sensor, the reference plane needs to be established. For this purpose, two different approaches were applied in order to determine the parameters of the abovementioned reference plane (the “base plane”). The first approach relied on the determination of the base plane parameters using the control points, whose coordinates were measured by two techniques: classical surveying and TLS (Table 1). The second approach used dense point clouds resulting from further processing of the images with the use of SfM/MVS algorithms (see Section 4.8 and Section 4.9). The Authors decided to analyse the results obtained by these two approaches separately, because the first one relies on point coordinates measured directly, while in the second case the plane was fitted into a dense cloud, which is the result of several subsequent stages of processing of an image set. The analyses performed lead to the conclusion that the use of signalised control points provided higher accuracy of determination of the base plane parameters, regardless of the used scenario (I–IV). The measurement accuracy is also influenced by the way the centre of the marked point is determined. On Test Site I the points were measured from three stations and their coordinates were calculated in the process of geodetic adjustment. On the contrary, on Test Sites II and III only one instrument station was established, and the points were measured with the polar method. In the case of the TLS measurements, the centre of the marker was determined by means of statistical template matching. The use of an automatic detection algorithm based on statistical analysis made it possible to eliminate observer errors, which are a common cause of blunders in classical surveying. In addition, it is worth noting that the amount of time needed to determine the point coordinates with the use of automatic marker detection performed on TLS data is approximately three times shorter than measurements performed with a total station. This is a very important advantage where cultural heritage objects are concerned, as the possible time of measurement is limited due to tourist traffic and conservation works.
The next series of analyses concerned the quality assessment of the reference plane determination based on different data sources, with regard to the base planes. The first type of reference planes was determined using the marked points (the same point sets as those used in the base plane determination, but with coordinates calculated in the bundle adjustment). The second type of reference planes was created for each scenario by fitting planes into dense point clouds resampled with the resolution of 3 cm. For each type, the linear deviations from the base planes were calculated (this step was described in detail in Section 4.9). Comparing the results obtained by TLS and classical surveying, it can be observed that higher accuracy was achieved in the case of TLS measurements; only on Test Site I were the results obtained with these two methods comparable (Table 6). However, it should be noted that on this test site the accuracy was improved thanks to the increased number of measurement stations.
Before using TLS point clouds as source data for reference plane determination, the material from which the examined object is made must be taken into consideration. It has been proved that the accuracy of TLS measurements is low in the case of objects made from highly reflective and translucent materials, such as marble, which is particularly common in historical objects. For this reason, TLS is not recommended as a source of data for such objects.
In the case of Test Site II, where marble was the dominant material, the shape deformations (the deviations from the base planes) reached 1.7 cm. Comparing the results obtained for MVS dense point clouds and for signalised points, it can be seen that on Test Sites II and III the approach using signalised points provided higher accuracy for Scenarios II and IV, while in the case of Test Site I the results were similar for all scenarios (Table 7). Despite the observed differences, the deviations obtained are within the measurement accuracy. The analysis of skews (Table 8) requires the results obtained on Test Site I to be discussed separately from Test Sites II and III, since the structure of the object on this site as well as its texture varies significantly from the others. As mentioned before, unlike Test Sites II and III, which are characterised by complex and diverse texture, Test Site I is a plain white wall lacking any decorations or other characteristic features. Analysing the deviation values, it can be observed that the deviations obtained on Test Site I are higher than those calculated on Test Sites II and III. However, the maximum difference between the results achieved on these test sites hardly exceeds 3 mm, remaining below 2 mm in most cases. This means that these differences are within the measurement accuracy and can be considered negligible. The analysis of the results obtained for each sensor station (Table 9) leads to similar conclusions.
On the grounds of the research, it can be concluded that it is impossible to propose one versatile workflow for the determination of the exterior orientation parameters of the electromagnetic sensor and its orientation in relation to an examined object. All proposed scenarios for the relative orientation of the sensor allow the calculation of the necessary corrections, which need to be applied to the signal sent and received by the sensor due to the distance and tilt between the sensor and the object. The investigation of exterior orientation methods revealed several aspects which need to be taken into account. Firstly, in cases when the possible time of measurement is strictly limited, terrestrial laser scanning is a fast technique providing sufficient accuracy of the coordinates of the points forming the geodetic network. However, it is impossible to establish the measurement station in the same position again in the next measurement session in order to perform multitemporal analysis. For this purpose, the use of a total station is required.
The choice of the method which should be applied in order to determine the reference plane (the “base plane”) strictly depends on the characteristics of the examined object. In the case of objects of simple structure and plain texture it is recommended to either use the TLS technique or, if possible, marked control points. For such objects it is not recommended to apply methods involving the SfM algorithms, since a small number of features results in decreased accuracy of the final adjustment. For this reason, SfM-based approaches like Scenarios III and IV are not suitable for the determination of the sensor’s position in the case of such objects. Furthermore, objects with a non-highly reflective and non-transparent surface characterised by diverse texture (like Test Site III) allow the application of TLS measurements, marked control points, and MVS dense point clouds to determine the parameters of a base plane. However, TLS-based methods are not recommended for marble surfaces, which are very common in cultural heritage objects. In the case of marble and other translucent or semi-translucent structures, it is possible to determine the sensor positions by applying methods based on signalised control points (Scenarios I and II) and, if the marked points cannot be placed, the SfM-based methods (Scenarios III and IV) can be used.

Author Contributions

A.T., J.M., S.Ł., and P.K. organised the conceptualisation of the idea and the methodology employed in this paper. Following that, J.M., A.T., S.Ł., M.M. and J.N. worked on the critical evaluation of the existing techniques. P.K., J.M., and S.Ł. worked on the data acquisition at Warsaw University of Technology. The original writing and draft preparation were carried out by A.T., J.N., J.M., S.Ł., P.K. and M.M. The review and the editing were carried out by A.S. and A.A.-S. All authors have read and agreed to the published version of the manuscript.

Funding

The program is co-financed from the European Social Fund under the Operational Program Knowledge Education Development, a non-competition project entitled "International scholarship exchange of doctoral students and academic staff" implemented as part of the Activity specified in the application for co-financing of the project no. POWR.03.03.00-00-PN13/18. The authors would like to gratefully thank the European Horizon 2020 Research programs for their kind funding of this project.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. ECHOES Heritage Conservation & Regeneration. Available online: https://www.echc.eu/wp-content/uploads/2019/04/Position-Paper-FP9_RM_revised_low.pdf (accessed on 31 January 2020).
  2. Stylianidis, E. CIPA-Heritage Documentation: 50 Years: Looking Backwards. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W14, 1–130. [Google Scholar] [CrossRef] [Green Version]
  3. Gil, Z. Obiekty Zabytkowe Bez Rys. Available online: https://www.renowacjeizabytki.pl/artykuly-techniczne/Obiekty-zabytkowe-bez-rys,1408 (accessed on 24 November 2019).
  4. ICOMOS/Iscarsah Committee, Recommendations for the Analysis, Conservation and Structural Restoration of Architectural Heritage. Available online: https://www.icomos.org/en2003 (accessed on 24 November 2019).
  5. ICOMOS CHARTER, Principles for the Analysis, Conservation and Structural Restoration of Architectural Heritage. Available online: http://www.icomos.org/charters/structures_e.pdf (accessed on 24 November 2019).
  6. Lourenço, P.B. Conservation of cultural heritage buildings: Methodology and application to case studies. Rev. Alconpat 2013, 3, 98–110. [Google Scholar] [CrossRef] [Green Version]
  7. Bláha, J.; Novotný, J. Report Assessing Innovative Restoration Techniques, Technologies and Materials Used in Conservation; Institute of Theoretical and Applied Mechanics of the Czech Academy of Sciences: Prague, Czech Republic, 2018; ISBN 8086234010. [Google Scholar]
  8. Bruno, S.; Fatiguso, F. Building conditions assessment of built heritage in historic building information modeling. Int. J. Sustain. Dev. Plan. 2018, 13, 36–48. [Google Scholar] [CrossRef] [Green Version]
  9. Gattulli, V.; Lepidi, M.; Potenza, F. Dynamic testing and health monitoring of historic and modern civil structures in Italy. Struct. Monit. Maint. 2016, 3, 71–90. [Google Scholar] [CrossRef]
  10. Kot, P.; Shaw, A.; Jones, K.O.; Cullen, J.D.; Mason, A.; Al-Shamma’a, A.I. The feasibility of using electromagnetic waves in determining the moisture content of building fabrics and the cause of the water ingress. Proc. Int. Conf. Sens. Technol. ICST 2014, 2014, 604–608. [Google Scholar] [CrossRef] [Green Version]
  11. Gkantou, M.; Muradov, M.; Kamaris, G.S.; Hashim, K.; Atherton, W.; Kot, P. Novel Electromagnetic Sensors Embedded in Reinforced Concrete Beams for Crack Detection. Sensors 2019, 19, 5175. [Google Scholar] [CrossRef] [Green Version]
  12. Teng, K.H.; Kot, P.; Muradov, M.; Shaw, A.; Hashim, K.; Gkantou, M.; Al-Shamma’a, A. Embedded smart antenna for non-destructive testing and evaluation (NDT&E) of moisture content and deterioration in concrete. Sensors 2019, 19, 547. [Google Scholar] [CrossRef] [Green Version]
  13. Kot, P.; Shaw, A.; Riley, M.; Ali, A.S.; Cotgrave, A. The Feasibility of Using Electromagnetic Waves in Determining Membrane Failure Through Concrete. Int. J. Civ. Eng. 2017, 15, 355–362. [Google Scholar] [CrossRef]
  14. Kot, P.; Ali, A.S.; Shaw, A.; Riley, M.; Alias, A. The application of electromagnetic waves in monitoring water infiltration on concrete flat roof: The case of Malaysia. Constr. Build. Mater. 2016, 122, 435–445. [Google Scholar] [CrossRef]
  15. Tobiasz, A.; Markiewicz, J.S.; Lapinski, S.; Nikel, J.; Kot, P.; Muradov, M. Review of Methods for Documentation, Management and Sustainability of Cultural Heritage. Case Study: Museum of King Jan III's Palace at Wilanów. Sustainability 2019, 11, 7046. [Google Scholar] [CrossRef] [Green Version]
  16. Puertolas Montañez, J.A.; Mendoza Rodriguez, A.; Sanz-Prieto, I. Smart Indoor Positioning/Location and Navigation: A Lightweight Approach. Int. J. Artif. Intell. Interact. Multimed. 2013, 2, 43–50. [Google Scholar] [CrossRef] [Green Version]
  17. Mautz, R. Indoor Positioning Technologies; ETH: Zurich, Switzerland, 2012. [Google Scholar]
  18. Sakpere, W.; Adeyeye Oshin, M.; Mlitwa, N.B.W. A state-of-the-art survey of indoor positioning and navigation systems and technologies. S. Afr. Comput. J. 2017, 29, 145–197. [Google Scholar] [CrossRef] [Green Version]
  19. Amsters, R.; Stevens, N.; Lauwers, Q. Evaluation of Low-Cost / High-Accuracy Indoor Positioning Systems. In Proceedings of the Fourth International Conference on Advances in Sensors, Actuators, Metering and Sensing-ALLSENSORS 2019, Athens, Greece, 24–28 February 2019; pp. 15–20. [Google Scholar]
  20. Jekabsons, G.; Kairish, V.; Zuravlyov, V. An Analysis of Wi-Fi Based Indoor Positioning Accuracy. Sci. J. Riga Tech. Univ. Comput. Sci. Appl. Comput. Syst. 2011, 44, 131–137. [Google Scholar] [CrossRef]
  21. Mahfouz, M.R.; Zhang, C.; Merkl, B.C.; Kuhn, M.J.; Fathy, A.E. Investigation of High-Accuracy Indoor 3-D Positioning Using UWB Technology. IEEE Trans. Microw. Theory Tech. 2008, 56, 1316–1330. [Google Scholar] [CrossRef]
  22. Mraz, L. Accuracy Considerations for UWB Indoor Tracking in an Industrial Environment. Available online: https://www.sewio.net/accuracy-considerations-for-uwb-indoor-tracking-in-an-industrial-environment/ (accessed on 31 January 2020).
  23. Zafari, F.; Gkelias, A.; Leung, K.K. A Survey of Indoor Localization Systems and Technologies. IEEE Commun. Surv. Tutor. 2019, 21, 2568–2599. [Google Scholar] [CrossRef] [Green Version]
  24. Woźniak, M.; Odziemczyk, W.; Nagórski, K. Investigation of practical and theoretical accuracy of wireless indoor positioning system Ubisense. Rep. Geod. Goeinform. 2013, 95, 36–48. [Google Scholar] [CrossRef] [Green Version]
  25. Marcellino, D. An Examination of Ultra-Wideband (UWB) for Positioning & Location Tracking. Available online: https://www.airfinder.com/blog/ultra-wideband-positioning-location-tracking (accessed on 31 January 2020).
  26. Xu, H.; Ding, Y.; Li, P.; Wang, R.; Li, Y. An RFID Indoor Positioning Algorithm Based on Bayesian Probability and K-Nearest Neighbor. Sensors 2017, 17, 1806. [Google Scholar] [CrossRef] [Green Version]
  27. Wang, J.; Adib, F.; Knepper, R.; Katabi, D.; Rus, D. RF-Compass: Robot Object Manipulation Using RFIDs. In Proceedings of the 19th Annual International Conference on Mobile Computing & Networking-MobiCom’13, Miami, FL, USA, 30 September–4 October 2013; ACM Press: Miami, FL, USA, 2013; p. 3. [Google Scholar]
  28. Shen, L.; Zhang, Q.; Pang, J.; Xu, H.; Li, P.; Xue, D. ANTspin: Efficient Absolute Localization Method of RFID Tags via Spinning Antenna. Sensors 2019, 19, 2194. [Google Scholar] [CrossRef] [Green Version]
  29. Davidson, P.; Piché, R. A Survey of Selected Indoor Positioning Methods for Smartphones. IEEE Commun. Surv. Tutor. 2017, 19, 1347–1370. [Google Scholar] [CrossRef]
  30. Bonde, G.D.; Pal, S.R.; Barwal, P.U.; Khan, S.I.; Ablankar, K. Finding Indoor Position of Person Using Wi-Fi & Smartphone: A Survey. Int. J. Innov. Res. Sci. Technol. 2015, 1, 202–207. [Google Scholar]
  31. Banerjee, S.; Suski, W.; Hoover, A. Sensor Set Switching Noise in UWB Indoor Position Tracking. In Proceedings of the IEEE International Conference on Ultra-Wideband, Syracuse, NY, USA, 17–20 September 2012; pp. 297–301. [Google Scholar]
  32. Hazas, M.; Hopper, A. Broadband Ultrasonic Location Systems for Improved Indoor Positioning. IEEE Trans. Mob. Comput. 2006, 5, 536–547. [Google Scholar] [CrossRef] [Green Version]
  33. Li, J.; Han, G.; Zhu, C.; Sun, G. An Indoor Ultrasonic Positioning System Based on TOA for Internet of Things. Mob. Inf. Syst. 2016, 2016. [Google Scholar] [CrossRef]
  34. Ijaz, F.; Yang, H.K.; Ahmad, A.W.; Lee, C. Indoor Positioning: A Review of Indoor Ultrasonic Positioning systems. In Proceedings of the 15th International Conference on Advanced Communications Technology—ICACT2013, PyeongChang, Korea, 27–30 January 2013; pp. 1146–1150. [Google Scholar]
  35. Mautz, R.; Tilch, S. Survey of optical indoor positioning systems. In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation, Guimarães, Portugal, 21–23 September 2011. [Google Scholar]
  36. Mendoza-Silva, G.M.; Torres-Sospedra, J.; Huerta, J. A Meta-Review of Indoor Positioning Systems. Sensors 2019, 19, 4507. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. Faragher, R.M.; Sarno, C.; Newman, M. Opportunistic radio SLAM for indoor navigation using smartphone sensors. In Proceedings of the 2012 IEEE/ION Position, Location and Navigation Symposium, Myrtle Beach, SC, USA, 23–26 April 2012; pp. 120–128. [Google Scholar]
  38. Kahmen, H.; Faig, W. Surveying; De Gruyter: Boston, MA, USA; Berlin, Germany, 2012; ISBN 9783110845716. [Google Scholar]
  39. Teunissen, P.J.G. Adjustment Theory; Delft Academic Press/VSSD: Delft, The Netherlands, 2003; ISBN 90-407-1974-8. [Google Scholar]
  40. Börlin, N.; Murtiyoso, A.; Grussenmeyer, P.; Menna, F.; Nocerino, E. Modular bundle adjustment for photogrammetric computations. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, XLII–2, 133–140. [Google Scholar]
  41. Kraus, K. Photogrammetry: Geometry from Images and Laser Scans, 2nd ed.; Walter de Gruyter: Berlin, Germany; New York, NY, USA, 2007; ISBN 978-3-11-019007-6. [Google Scholar]
  42. Berenyi, A.; Lovas, T.; Barsi, A. Terrestrial Laser Scanning–Civil Engineering Applications. In Proceedings of the International Archives of Photogrammetry Remote Sensing and Spatial Information Sciences, Newcastle upon Tyne, UK, 21–24 June 2010; pp. 80–85. [Google Scholar]
  43. Liang, X.; Kankare, V.; Hyyppä, J.; Wang, Y.; Kukko, A.; Haggrén, H.; Yu, X.; Kaartinen, H.; Jaakkola, A.; Guan, F.; et al. Terrestrial laser scanning in forest inventories. ISPRS J. Photogramm. Remote Sens. 2016, 115, 63–77. [Google Scholar] [CrossRef]
  44. Olsen, M.J.; Kuester, F.; Chang, B.J.; Hutchinson, T.C. Terrestrial laser scanning-based structural damage assessment. J. Comput. Civ. Eng. 2010, 24, 264–272. [Google Scholar] [CrossRef]
  45. Pritchard, D.; Sperner, J.; Hoepner, S.; Tenschert, R. Terrestrial laser scanning for heritage conservation: The Cologne Cathedral documentation project. In Proceedings of the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Ottawa, ON, Canada, 28 August–1 September 2017; pp. 213–220. [Google Scholar]
  46. Suchocki, C. Comparison of Time-of-Flight and Phase-Shift TLS Intensity Data for the Diagnostics Measurements of Buildings. Materials 2020, 13, 353. [Google Scholar] [CrossRef] [Green Version]
  47. Voegtle, T.; Schwab, I.; Landes, T. Influences of different materials on the measurements of a terrestrial laser scanner (TLS). In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 3–11 July 2008; pp. 1061–1066. [Google Scholar]
  48. Tan, K.; Cheng, X. Correction of incidence angle and distance effects on TLS intensity data based on reference targets. Remote Sens. 2016, 8, 251. [Google Scholar] [CrossRef] [Green Version]
  49. Alkan, R.M.; Karsidag, G. Analysis of The Accuracy of Terrestrial Laser Scanning Measurements. In Proceedings of the FIG Working Week 2012-Knowing to Manage the Territory, Protect the Environment, Evaluate the Cultural Heritage, Rome, Italy, 6–10 May 2012; p. 16. [Google Scholar]
  50. Lichti, D.; Stewart, M.; Tsakiri, M.; Snow, A.J. Benchmark Tests on a Three-Dimensional Laser Scanning System. Geomat. Res. Australas. 2000, 72, 1–24. [Google Scholar]
  51. Lichti, D.D. A review of geometric models and self-calibration methods for terrestrial laser scanners. Bol. Ciencias Geod. 2010, 16, 3–19. [Google Scholar]
  52. Li, X.; Li, Y.; Xie, X.; Xu, L. Lab-built terrestrial laser scanner self-calibration using mounting angle error correction. Opt. Express 2018, 26, 14444. [Google Scholar] [CrossRef] [PubMed]
  53. Medić, T.; Kuhlmann, H.; Holst, C. Automatic In-Situ self-calibration of a panoramic TLS from a single station using 2d keypoints. In Proceedings of the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Enschede, The Netherlands, 10–14 June 2019; pp. 413–420. [Google Scholar]
  54. Soudarissanane, S. The Geometry of Terrestrial Laser Scanning: Identification of Errors, Modeling and Mitigation of Scanning Geometry; GILDEPRINT: Orléans, France, 2016; ISBN 9789462332034. [Google Scholar]
  55. Lerma, J.L.; Garcí, D. Self-calibration of terrestrial laser scanners: Selection of the best geometric additional parameters. In Proceedings of the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Riva del Garda, Italy, 23–25 June 2014; pp. 219–226. [Google Scholar]
  56. Soudarissanane, S.; Lindenbergh, R.; Gorte, B. Reducing the error in terrestrial laser scanning by optimizing the measurement set-up. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 3–11 July 2008; pp. 615–620. [Google Scholar]
  57. Lenda, G.; Marmol, U.; Mirek, G. Accuracy of laser scanners for measuring surfaces made of synthetic materials. Photogramm. Fernerkund. Geoinf. 2015, 5, 357–372. [Google Scholar] [CrossRef]
  58. San José Alonso, J.I.; Martínez Rubio, J.; Fernández Martín, J.J.; García Fernández, J. Comparing Time-of-Flight and Phase-Shift: The Survey of the Royal Pantheon in the Basilica of San Isidoro (León). In Proceedings of the ISPRS-International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Trento, Italy, 2–4 March 2011; pp. 377–385. [Google Scholar]
  59. Tan, K.; Zhang, W.; Shen, F.; Cheng, X. Investigation of TLS intensity data and distance measurement errors from target specular reflections. Remote Sens. 2018, 10, 1077. [Google Scholar] [CrossRef] [Green Version]
  60. Markiewicz, J.; Zawieska, D. Quality assessment of the TLS data in conservation of monuments. Proc. SPIE 9527 Opt. Arts Archit. Archaeol. 2015, 9527, 95270V. [Google Scholar]
  61. Boehler, W.; Vicent, M.; Marbs, A. Investigating Laser Scanner Accuracy. In Proceedings of the XIXth CIPA Symposium, Antalya, Turkey, 30 September–4 October 2003; pp. 696–701. [Google Scholar]
  62. Scharf, A. Terrestrial Laser Scanning for Wooden Façade-system Inspection; Luleå University of Technology: Lulea, Sweden, 2019. [Google Scholar]
  63. Kersten, T.; Mechelke, K.; Lindstaedt, M.; Sternberg, H. Geometric accuracy investigations of the latest terrestrial laser scanning systems. In Proceedings of the FIG Working Week, Stockholm, Sweden, 14–19 June; pp. 1–16.
  64. Valença, J.; Júlio, E.; Helder, A. Application of Photogrammetry to Bridge Monitoring. In Proceedings of the 12th International Conference on Structural Faults & Repair, Edinburg, UK, 10–12 June 2008. [Google Scholar]
  65. Han, J.; Hong, K.; Kim, S. Application of a Photogrammetric System for Monitoring Civil Engineering Structures. In Special Applications of Photogrammetry; Carneiro Da Silva, D.D., Ed.; InTech: Rijeka, Croatia, 2012; p. 136. ISBN 978-953-51-0548-0. [Google Scholar]
  66. Iglhaut, J.; Cabo, C.; Puliti, S.; Piermattei, L.; O’Connor, J.; Rosette, J. Structure from Motion Photogrammetry in Forestry: A Review. Curr. For. Rep. 2019, 5, 155–168. [Google Scholar] [CrossRef] [Green Version]
  67. Piermattei, L.; Karel, W.; Wang, D.; Wieser, M.; Mokroš, M.; Surový, P.; Koreň, M.; Tomaštík, J.; Pfeifer, N.; Hollaus, M. Terrestrial Structure from Motion Photogrammetry for Deriving Forest Inventory Data. Remote Sens. 2019, 11, 950. [Google Scholar] [CrossRef] [Green Version]
  68. Roncella, R.; Forlani, G.; Remondino, F. Photogrammetry for geological applications: Automatic retrieval of discontinuity orientation in rock slopes. In Proceedings of the SPIE-The International Society for Optical Engineering, San Jose, CA, USA, 17 January 2005; p. 17. [Google Scholar]
  69. Klawitter, M.; Pistellato, D.; Webster, A.; Esterle, J. Application of photogrammetry for mapping of solution collapse breccia pipes on the Colorado Plateau, USA. Photogramm. Rec. 2017, 32, 443–458. [Google Scholar] [CrossRef] [Green Version]
  70. Gianolio, S.; Mermati, F.; Genovese, G. Image-based 3D modeling for the knowledge and the representation of archaeological dig and pottery: Sant’omobono and sarno project’s strategies. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences-ISPRS Archives, Riva del Garda, Italy, 23–25 June 2014; pp. 243–250. [Google Scholar]
  71. Gajski, D.; Solter, A.; Gašparovic, M. Applications of macro photogrammetry in archaeology. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences-ISPRS Archives, Prague, Czech Republic, 12–19 July 2016; pp. 263–266. [Google Scholar]
  72. Waas, M.; Zell, D. Practical 3D photogrammetry for the conservation and documentation of Cultural Heritage Introduction. In Proceedings of the International Conference on Cultural Heritage and New Technologies, Vienna, Austria, 11–13 November 2013; pp. 1–13. [Google Scholar]
  73. Remondino, F.; Rizzi, A.; Barazzetti, L.; Scaioni, M.; Fassi, F.; Brumana, R.; Pelagotti, A. Review of geometric and radiometric analyses of paintings. Photogramm. Rec. 2011, 26, 439–461. [Google Scholar] [CrossRef]
  74. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. “Structure-from-Motion” photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef] [Green Version]
  75. El-Hakim, S.; Beraldin, J.-A.; Blais, F. Critical Factors and Configurations for Practical 3D Image-Based Modeling. In Proceedings of the 6th Conference on Optical 3D Measurements Techniques, Zurich, Switzerland, 22–25 September 2003; pp. 159–167. [Google Scholar]
  76. Nyimbili, P.H.; Demirel, H.; Şeker, D.Z.; Erden, T. Structure from Motion (SfM)-Approaches and Applications. In Proceedings of the International Scientific Conference on Applied Sciences, Antalya, Turkey, 27–30 September 2016. [Google Scholar]
  77. Bianco, S.; Ciocca, G.; Marelli, D. Evaluating the Performance of Structure from Motion Pipelines. J. Imaging 2018, 4, 98. [Google Scholar] [CrossRef] [Green Version]
  78. Lichti, D.; Pfeifer, N. Introduction to Terrestrial Laser Scanning. Available online: ftp://ftp.ecn.purdue.edu/bethel/tls.pdf (accessed on 16 March 2020).
  79. Vosselman, G.; Maas, H.-G. Airborne and Terrestrial Laser Scanning. Current 2010, XXXVI, 318. [Google Scholar]
  80. Luhmann, T.; Robson, S.; Kyle, S.; Boehm, J. Close-Range Photogrammetry and 3D Imaging, 2nd ed.; De Gruyter: Berlin, Germany, 2015; ISBN 9783110302691. [Google Scholar]
  81. Habib, A.F. Photogrammetric Bundle Adjustment. Available online: https://dprg.geomatics.ucalgary.ca/system/files/akam_531_ppt_ch1_relase-compatibility-mode.pdf (accessed on 16 March 2020).
  82. Buratowski, T. Teoria Robotyki [Robotics Theory]. Available online: https://www.robotyka.com/teoria.php/teoria.71?fbclid=IwAR3wprGRF6JNb3LYXD7UsDWpQQzrcYteIIvWXEG-WVTcUjb2O8lTG2tj-To (accessed on 31 January 2020).
  83. Malissiovas, G.; Neitzel, F.; Petrovic, S. Götterdämmerung over total least squares. J. Geod. Sci. 2016, 6. [Google Scholar] [CrossRef] [Green Version]
  84. Mathworks.com. Available online: http://uk.mathworks.com/help/matlab/ref/eig.html?searchHighlight=eigenvalue&s_tid=doc_srchtitle (accessed on 13 March 2020).
  85. Torr, P.H.S.; Zisserman, A. MLESAC: A New Robust Estimator with Application to Estimating Image Geometry. Comput. Vis. Image Underst. 2000, 78, 138–156. [Google Scholar] [CrossRef] [Green Version]
  86. Zhou, Q.-Y.; Park, J.; Koltun, V. Open3D: A Modern Library for 3D Data Processing. Available online: http://www.open3d.org/wordpress/wp-content/paper.pdf (accessed on 13 March 2020).
Figure 1. Overview of indoor technologies as a function of accuracy and coverage [17].
Figure 2. Three rotations about the coordinate axes: (a) primary rotation ω about the X0-axis, (b) secondary rotation ϕ about the Y′-axis, and (c) tertiary rotation κ about the Z1-axis [41].
Figure 3. The coordinate system related to the vector r.
Figure 4. Setup of the proposed sensor system: (a) back view: the electromagnetic (EM) sensor mounted on a special tripod with a Rohde & Schwarz ZVL13 Vector Network Analyzer (VNA); (b) side view: the EM sensor mounted in a wireframe box with the VNA and four low-cost cameras (not used in this investigation); (c) an example total station (TS) position used for control/check point determination; and (d) visualisation of the positions of the full-frame Canon EOS 5D Mark II camera.
Figure 5. Diagram of the performed experiments: data acquisition, processing, and analysis.
Figure 6. Dense point clouds generated for each test site: (a) Test site I—the plastered brick wall, (b) Test site II—marble plaque, (c) Test site III—sandstone plaque, (d) sensor with marked scalebars and check points, (e) the dimensions of the horn antenna; marked reference points are visible (black and white crosses); control points (green dots), check points (red dots), EM sensor positions (blue dots), and scale bars (yellow lines).
Figure 7. Reprojection errors on the marked points (mean values marked as circles, with maximum and minimum values) for the three test sites and the Structure-from-Motion processing variants: SfM—without reference point coordinates, TS—total station method, and TLS—terrestrial laser scanning method.
Figure 8. RMSE values calculated for the marked control/check points at the three test sites.
Figure 9. Test Site I: Histograms of deviations observed on control and check points (expressed in mm): (a) control points, classical surveying; (b) control points, TLS; (c) check points, classical surveying; (d) check points, TLS; (e) check points, SfM with classical surveying; (f) control points, SfM with TLS.
Figure 10. RMSE values on the marked control/check points for Test Site II (Marble).
Figure 11. RMSE values on the marked control/check points for Test Site III (Sandstone).
Figure 12. Test Site I: Histograms of deviations observed on scalebars located on the wall (expressed in mm): (a) Classical surveying, (b) TLS, (c) SfM with classical surveying, (d) SfM with TLS.
Figure 13. Bar charts illustrating mean deviations observed on scalebars located on the wall (expressed in mm)—Test Site II.
Figure 14. Bar charts illustrating mean deviations observed on scalebars located on the wall (expressed in mm)—Test Site III.
Figure 15. Test Site I: Histograms of deviations observed on scalebars located on the sensor frame (expressed in mm): (a) Classical surveying, (b) TLS, (c) SfM with classical surveying, (d) SfM with TLS.
Figure 16. Bar chart illustrating mean deviations observed on scalebars located on the sensor frame (expressed in mm)—Test Site II.
Figure 17. Bar chart illustrating mean deviations observed on scalebars located on the sensor frame (expressed in mm)—Test Site III.
Figure 18. Test Site I: Box plots illustrating the deviations (in meters) of point clouds from the base planes acquired for four scenarios: (I)—Classical surveying, (II)—TLS, (III)—SfM with classical surveying, (IV)—SfM with TLS.
Figure 19. Test Site II: Box plots illustrating the deviations (in meters) of point clouds from the reference planes acquired for four scenarios: (I)—Classical surveying, (II)—TLS, (III)—SfM with classical surveying, (IV)—SfM with TLS.
Figure 20. Test Site III: Box plots illustrating the deviations (in meters) of point clouds from the reference planes acquired for four scenarios: (I)—Classical surveying, (II)—TLS, (III)—SfM with classical surveying, (IV)—SfM with TLS.
Figure 21. Test Site I: Box plots illustrating the deviations (in meters) of point clouds subsampled with different sampling radii from the reference planes fitted into Multi-View Stereo (MVS) point clouds: (a) 5 mm, (b) 1 cm, (c) 3 cm, (d) 5 cm, for four scenarios (I—Classical surveying, II—TLS, III—SfM with classical surveying, IV—SfM with TLS).
Figure 22. Test Site II: Box plots illustrating the deviations (in meters) of point clouds subsampled with different sampling radii from the reference planes fitted into MVS point clouds: (a) 5 mm, (b) 1 cm, (c) 3 cm, (d) 5 cm, for four scenarios (I—Classical surveying, II—TLS, III—SfM with classical surveying, IV—SfM with TLS).
Figure 23. Test Site III: Box plots illustrating the deviations (in meters) of point clouds subsampled with different sampling radii from the reference planes fitted into MVS point clouds: (a) 5 mm, (b) 1 cm, (c) 3 cm, (d) 5 cm, for four scenarios (I—Classical surveying, II—TLS, III—SfM with classical surveying, IV—SfM with TLS).
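Figures 21–23 summarise point-to-plane deviations after the dense clouds were spatially subsampled at 5 mm, 1 cm, 3 cm, and 5 cm and compared against reference planes fitted to the MVS point clouds. The short Python sketch below is offered only as an illustration of that kind of check, not as the authors' processing chain: it uses Open3D [86] voxel downsampling as a stand-in for radius-based subsampling, a plain SVD plane fit, and a placeholder file name (mvs_wall.ply).

```python
import numpy as np
import open3d as o3d  # point-cloud I/O and downsampling, cited as [86]


def fit_plane(points: np.ndarray):
    """Least-squares plane through a point set: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The right-singular vector belonging to the smallest singular value is the
    # direction of least variance, i.e., the plane normal.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[-1]


# Hypothetical input: a dense MVS point cloud of the test wall exported to PLY.
mvs = o3d.io.read_point_cloud("mvs_wall.ply")          # placeholder file name
plane_c, plane_n = fit_plane(np.asarray(mvs.points))   # reference plane fitted to the MVS cloud

# Compare clouds subsampled at 5 mm, 1 cm, 3 cm and 5 cm against that plane.
for size in (0.005, 0.01, 0.03, 0.05):
    sub = mvs.voxel_down_sample(voxel_size=size)        # stand-in for radius subsampling
    dev = (np.asarray(sub.points) - plane_c) @ plane_n  # signed point-to-plane distances [m]
    print(f"{size * 1000:.0f} mm: n={len(dev)}, mean={dev.mean():.5f} m, std={dev.std():.5f} m")
```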
Table 1. RMSE values of the reference point coordinates obtained with both measuring methods, considering all instrument stations.

Test Site | Surveying X [mm] | Surveying Y [mm] | Surveying Z [mm] | Terrestrial Laser Scanning, Linear [mm]
I | 0.2 | 0.3 | 0.1 | 2.3
II | 2.0 | 2.0 | 2.0 | 2.2
III | 2.0 | 2.0 | 2.0 | 1.8
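As context for Table 1, the per-axis RMSE values are root-mean-square values of the coordinate residuals on the reference points. The minimal NumPy sketch below, with purely hypothetical residuals, shows the computation; it is not the authors' code, and the interpretation of the "Linear" column as a combined 3D measure is an assumption, since the table does not define the term.

```python
import numpy as np

# Hypothetical coordinate residuals on the reference points (observed - adjusted), in mm.
residuals = {
    "X": np.array([0.2, -0.1, 0.3, -0.2]),
    "Y": np.array([0.3, -0.3, 0.2, 0.4]),
    "Z": np.array([0.1, -0.1, 0.0, 0.1]),
}

# Per-axis RMSE, as reported for the surveying columns of Table 1.
rmse = {axis: float(np.sqrt(np.mean(v ** 2))) for axis, v in residuals.items()}

# One possible combined ("linear") measure: the 3D RMSE over all three axes (an assumption).
rmse_3d = float(np.sqrt(sum(np.mean(v ** 2) for v in residuals.values())))

print(rmse, rmse_3d)
```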
Table 2. Quality assessment of the reference plane determination (linear deviations from the plane, in mm; X = no check points available).

Test Site | No. of Control Points | No. of Check Points | Surveying: Control | Surveying: Check | TLS: Marked Control | TLS: Marked Check | TLS: Natural Check
I | 3 | 4 | 0.6 | 4.1 | 0.4 | 1.6 | 3.8
II | 3 | 0 | 0.1 | X | 0.2 | X | 17.5
III | 3 | 2 | 0.1 | 1.3 | 0.9 | 0.9 | 3.6
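The linear deviations in Table 2 are distances of the marked and natural points from a best-fit reference plane. The reference list points to total least squares [83] and eigenvalue decomposition [84] for this kind of fit; the sketch below is only an illustration under those assumptions, with hypothetical coordinates, and is not the authors' implementation.

```python
import numpy as np


def fit_plane_tls(points: np.ndarray):
    """Total-least-squares plane fit: returns (centroid, unit normal).

    The normal is the eigenvector of the covariance matrix that belongs to the
    smallest eigenvalue, i.e., the direction of least variance (cf. [83,84]).
    """
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    normal = eigvecs[:, 0]                  # eigenvector of the smallest eigenvalue
    return centroid, normal / np.linalg.norm(normal)


def plane_deviations(points: np.ndarray, centroid: np.ndarray, normal: np.ndarray):
    """Signed point-to-plane distances, in the units of the input coordinates."""
    return (points - centroid) @ normal


# Hypothetical control/check point coordinates on a roughly planar wall (in metres).
pts = np.array([
    [0.00, 0.00, 0.000],
    [1.00, 0.05, 0.001],
    [0.98, 1.02, -0.002],
    [0.02, 1.00, 0.001],
])
c, n = fit_plane_tls(pts)
print(plane_deviations(pts, c, n) * 1000.0)  # deviations in mm, as in Table 2
```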
Table 3. Relative orientation of the chosen EM sensor stations [in pixels]—Test site I. Scenarios: SfM—without reference point coordinates; I—classical surveying; II—TLS; III—SfM with classical surveying; IV—SfM with TLS.

Scenario | Points | Station 1 (Min/Max/Mean) | Station 2 (Min/Max/Mean) | Station 3 (Min/Max/Mean)
SfM | Wall | 0.1/2.7/0.6 | 0.0/2.7/0.8 | 0.1/1.5/0.8
SfM | Sensor | 0.0/1.9/0.5 | 0.0/1.6/0.5 | 0.1/1.2/0.5
I | Wall | 0.1/2.7/0.6 | 0.0/2.7/0.8 | 0.1/1.5/0.8
I | Sensor | 0.0/1.9/0.5 | 0.0/1.6/0.5 | 0.1/1.2/0.6
II | Wall | 0.1/2.7/0.6 | 0.0/2.7/0.8 | 0.1/1.5/0.8
II | Sensor | 0.0/1.9/0.5 | 0.0/1.6/0.5 | 0.1/1.2/0.5
III | Wall | 0.1/2.7/0.6 | 0.0/2.7/0.8 | 0.1/1.5/0.8
III | Sensor | 0.0/1.9/0.5 | 0.0/1.6/0.5 | 0.1/1.2/0.5
IV | Wall | 0.1/2.7/0.6 | 0.0/2.7/0.8 | 0.1/1.5/0.8
IV | Sensor | 0.0/1.9/0.5 | 0.0/1.6/0.5 | 0.1/1.2/0.5
Table 4. Relative orientation of the EM sensor [in pixels]—Test site II.

Scenario | Points | Station 1 (Min/Max/Mean) | Station 2 (Min/Max/Mean) | Station 3 (Min/Max/Mean)
SfM | Wall | 0.1/0.6/0.4 | 0.0/1.2/0.5 | 0.0/1.5/0.6
SfM | Sensor | 0.0/0.8/0.4 | 0.0/0.8/0.5 | 0.0/1.4/0.5
I | Wall | 0.1/1.3/0.9 | 0.1/1.9/0.7 | 0.0/2.0/0.7
I | Sensor | 0.0/1.2/0.6 | 0.0/1.1/0.7 | 0.1/1.8/0.8
II | Wall | 0.1/0.6/0.4 | 0.0/1.2/0.5 | 0.0/1.5/0.6
II | Sensor | 0.0/0.8/0.4 | 0.0/0.8/0.5 | 0.0/1.4/0.5
III | Wall | 0.1/0.6/0.4 | 0.0/1.2/0.5 | 0.0/1.5/0.6
III | Sensor | 0.0/0.8/0.4 | 0.0/0.8/0.5 | 0.0/1.4/0.5
IV | Wall | 0.1/0.6/0.4 | 0.0/1.2/0.5 | 0.0/1.5/0.6
IV | Sensor | 0.0/0.8/0.4 | 0.0/0.8/0.5 | 0.0/1.4/0.5
Table 5. Relative orientation of the EM sensor [in pixels]—Test site III.

Scenario | Points | Station 1 (Min/Max/Mean) | Station 2 (Min/Max/Mean) | Station 3 (Min/Max/Mean)
SfM | Wall | 0.1/2.1/0.8 | 0.0/2.1/0.9 | 0.0/1.2/0.5
SfM | Sensor | 0.1/1.7/0.9 | 0.2/2.3/1.3 | 0.1/3.3/1.2
I | Wall | 0.1/1.2/0.6 | 0.1/1.1/0.5 | 0.0/0.8/0.4
I | Sensor | 0.1/1.3/0.7 | 0.1/1.6/1.0 | 0.1/1.1/0.7
II | Wall | 0.1/1.1/0.6 | 0.0/1.2/0.5 | 0.0/0.8/0.4
II | Sensor | 0.1/2.3/1.0 | 0.1/2.5/1.3 | 0.1/3.4/1.3
III | Wall | 0.1/2.1/0.8 | 0.0/2.1/0.9 | 0.0/1.2/0.5
III | Sensor | 0.1/1.7/0.9 | 0.2/2.3/1.3 | 0.1/3.3/1.2
IV | Wall | 0.1/2.1/0.8 | 0.0/2.1/0.9 | 0.0/1.2/0.5
IV | Sensor | 0.1/1.7/0.9 | 0.2/2.3/1.3 | 0.1/3.3/1.2
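Tables 3–5 report, per camera station, the minimum, maximum, and mean reprojection errors (in pixels) on the marked points of the wall and of the sensor frame. Purely as an illustration of how such statistics are formed, a minimal NumPy sketch with hypothetical image measurements follows; the reprojected coordinates themselves are assumed to come from the bundle adjustment of the photogrammetric software.

```python
import numpy as np


def reprojection_stats(measured_px: np.ndarray, reprojected_px: np.ndarray):
    """Min/max/mean of reprojection error norms, in pixels.

    measured_px, reprojected_px: (N, 2) image coordinates of the marked points,
    as observed in the images and as reprojected after bundle adjustment.
    """
    err = np.linalg.norm(measured_px - reprojected_px, axis=1)
    return float(err.min()), float(err.max()), float(err.mean())


# Hypothetical values for a single station.
measured = np.array([[512.3, 410.8], [1020.1, 415.2], [515.0, 902.7]])
reprojected = np.array([[512.5, 410.6], [1019.8, 415.9], [515.4, 902.1]])
print(reprojection_stats(measured, reprojected))
```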
Table 6. Mean values of linear skews between the planes created using signalized points and base planes.

Scenario | Test Site 1: dX/dY/dZ [mm] | Test Site 2: dX/dY/dZ [mm] | Test Site 3: dX/dY/dZ [mm]
I | 0.28/0.54/0.26 | 3.46/1.77/0.22 | 3.50/1.78/0.03
II | 0.15/0.93/0.18 | 0.19/0.08/0.18 | 0.70/2.68/0.10
III | 0.53/1.04/0.27 | 3.63/1.86/0.21 | 3.00/1.53/0.10
IV | 0.22/1.36/0.40 | 0.68/0.28/0.54 | 0.63/2.40/0.09
Table 7. Mean values of linear skews between the planes formed from point clouds (resampling 3 cm) and base planes.

Scenario | Test Site 1: dX/dY/dZ [mm] | Test Site 2: dX/dY/dZ [mm] | Test Site 3: dX/dY/dZ [mm]
I | 0.59/1.16/0.72 | 3.56/1.82/0.05 | 3.52/1.80/1.47
II | 0.20/1.25/0.59 | 1.35/0.53/1.02 | 1.35/5.11/1.02
III | 0.46/0.91/0.42 | 0.56/0.28/0.43 | 1.84/0.93/1.01
IV | 0.17/1.10/0.87 | 0.62/0.24/0.56 | 1.72/6.55/0.71
Table 8. Mean skew deviation of the EM sensor (relative to the base plane).

Scenario | Test Site 1: dX/dY/dZ [mm] | Test Site 2: dX/dY/dZ [mm] | Test Site 3: dX/dY/dZ [mm]
I | 4.4/0.3/0.7 | 2.4/1.3/0.0 | 2.6/1.2/−2.0
II | 4.3/1.5/0.7 | 1.1/1.8/−0.2 | 3.7/3.4/−2.1
III | 4.4/0.7/0.9 | 1.1/1.8/0.3 | 2.6/1.2/−2.1
IV | 4.3/1.4/0.9 | 1.1/1.8/−0.1 | 3.7/3.4/−2.2
Table 9. Mean skew deviations of the EM sensor (relative to the reference plane fitted into resampled dense point cloud).

Scenario | Test Site 1: dX/dY/dZ [mm] | Test Site 2: dX/dY/dZ [mm] | Test Site 3: dX/dY/dZ [mm]
I | 4.1/1.1/1.8 | 3.2/0.8/0.0 | 3.5/0.6/−1.3
II | 4.2/2.2/1.9 | 0.9/1.8/0.5 | 3.9/2.5/−1.8
III | 4.2/1.0/1.8 | 2.6/1.2/0.2 | 3.5/0.6/−1.6
IV | 4.3/2.0/1.8 | 1.3/1.6/0.2 | 3.5/4.2/−1.9
