Review

A Critical Review of Remote Sensing Approaches and Deep Learning Techniques in Archaeology

1 Environment and Sustainability Institute, University of Exeter, Penryn Campus, Penryn, Cornwall TR10 9FE, UK
2 College of Engineering, University of Baghdad, Baghdad 10001, Iraq
* Author to whom correspondence should be addressed.
Sensors 2023, 23(6), 2918; https://doi.org/10.3390/s23062918
Submission received: 18 January 2023 / Revised: 2 March 2023 / Accepted: 6 March 2023 / Published: 8 March 2023

Abstract: To date, comprehensive reviews and discussions of the strengths and limitations of Remote Sensing (RS) standalone and combination approaches, and of Deep Learning (DL)-based RS datasets, in archaeology have been limited. The objective of this paper is, therefore, to review and critically discuss existing studies that have applied these advanced approaches in archaeology, with a specific focus on digital preservation and object detection. RS standalone approaches, including range-based and image-based modelling (e.g., laser scanning and SfM photogrammetry), have several disadvantages in terms of spatial resolution, penetration, texture, colour, and accuracy. These limitations have led some archaeological studies to fuse/integrate multiple RS datasets to overcome them and to produce comparatively detailed outcomes. However, there are still knowledge gaps in examining the effectiveness of these RS approaches in enhancing the detection of archaeological remains/areas. This review is therefore intended to provide valuable insights that help archaeological studies fill these knowledge gaps and further advance the exploration of archaeological areas/features using RS together with DL approaches.

1. Introduction

Geospatial data derived from Remote Sensing (RS) standalone, e.g., Laser Scanning (LS) and photogrammetry, and combination approaches are being increasingly used for the digital preservation of archaeological information [1,2,3,4,5,6,7]. The effective application of these approaches is important, since there are hundreds of archaeological sites worldwide that have already been destroyed [8], and once an archaeological site is changed/distorted it is difficult to reconstruct it digitally without using appropriate and robust geospatial datasets. Some conservation studies, e.g., Moussa [1], Liang et al. [2], and Jaber and Abed [9], have used multi-RS data fusion to minimise the limitations (e.g., occlusion, Level Of Detail (LOD), precision) associated with standalone approaches and to generate relatively more detailed three-dimensional (3D) products.
Geospatial data have remarkable value in archaeological practice, as they are likely to contribute to the preservation of endangered sites for future generations [10,11]. These digital data can also be applied to discover new archaeological areas and detect previously unknown archaeological remains [12]. RS standalone and combination approaches in archaeology have a variety of applications, ranging from virtual reality methods and 3D models to data storage, interpretation, and visualization, all without exposing sites to the prospect of demolition/excavation [13]. In parallel with the use of geospatial data, in recent years several archaeological studies have started applying Artificial Intelligence (AI) approaches, such as Machine Learning (ML) and Deep Learning (DL), to analyse RS datasets. These AI approaches are being used for the classification, identification, and segmentation of archaeological features.
This literature review critically discusses the findings of previous archaeological studies that have adopted RS (standalone and fusion/integration) approaches and DL techniques. It highlights the importance of these approaches in detecting and preserving archaeological sites, and identifies a number of critical research gaps.

1.1. Advanced Archaeological Techniques

Non-invasive techniques are widely applied in archaeology [14]. These techniques include RS standalone and combination approaches, as well as AI techniques. RS approaches sit alongside geophysical techniques: they are neither destructive nor invasive, and they can accurately measure spatial data and attribute information (shape, size, area) about Areas Of Interest (AOIs).
The previous review by Adamopoulos and Rinaudo [15] discussed the applications of UAV-based remote sensing approaches reported in archaeology. In this paper, we aim to discuss the applications of RS approaches based particularly on laser scanning and photogrammetry in detecting and preserving archaeological remains. Many archaeological studies have applied RS standalone approaches (e.g., photogrammetry, Terrestrial Laser Scanning (TLS), and Light Detection and Ranging (LiDAR)) and combination approaches to reveal archaeological features and digitally preserve them. Object detection with DL is another advanced technique that has been adopted in archaeology in recent years (since 2018) [16]. It semi-automatically detects objects in raster images derived from RS approaches and also estimates the likelihood of predicted features. These merits can be achieved through RS approaches without causing any damage/change to a site with respect to the original remains [17,18]. Previous studies are reviewed and critically discussed in the following sections in order to identify research gaps. This review is divided into three sections: RS standalone approaches, including image-based modelling (photogrammetry) and range-based modelling, e.g., TLS and LiDAR; RS combination (integration and fusion) approaches; and object detection with DL (Figure 1). Previous studies have used the terms combination, integration, fusion, and blending/merging interchangeably. The research led by Kadhim et al. [19] referred to the fusion approach as a combination of 2D images with digital models (DSM/DTM) to create a fused model. In contrast, the term integration denotes a combination of raster images (2.5D images) derived from two different sources (LiDAR and photogrammetry).
Studies that adopted these approaches were collated from the Scopus database (http://www.scopus.com/) (accessed on 15 January 2023). Three filters were used to identify previous studies. For the standalone approaches, the key terms “archaeology”, “hidden features”, “buried features”, “digital preservation”, “documentation”, “cultural heritage”, “3D modelling”, “3D reconstruction”, “GIS”, “prospection”, “remote sensing”, “photogrammetry”, “aerial images”, “UAV”, “Laser scanning”, “LiDAR”, and “Terrestrial Laser Scanning” were used. The second filter related to the RS combination approaches, and the key terms used were “combination approaches”, “fusion”, “integration”, “merging”, “remote sensing”, “prospection”, “archaeology”, “cultural heritage”, “documentation”, “digital preservation”, and “3D models”. Lastly, the third filter was for artificial intelligence, and the key terms used were “artificial intelligence”, “CNN”, “ANN”, “object detection”, “deep learning”, “training data”, “prospection”, “remote sensing”, and “archaeology”. A number of scientific publications were obtained from the Scopus database (Figure 2). Results were categorized into three groups: (1) standalone, (2) combination, and (3) AI techniques in digital archaeology. Further relevant publications were identified from the citations/references of the studies identified from Scopus.

2. RS Standalone Approaches

As presented in Figure 2, many archaeological studies have applied RS standalone approaches, i.e., TLS, LiDAR, and photogrammetry used individually [7,13,18,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36]. The main reasons for applying such approaches in archaeology are to assess the quality (accuracy and precision) of the delivered data, create 3D models, and discover hidden archaeological areas/features.
Photogrammetric models were rarely used in archaeology prior to 2003, but since the emergence of the Structure from Motion-Multiview Stereo (SfM-MVS) method, photogrammetry has been widely used for modelling historical areas [13,35,37]. LS has also become a popular technique for observing archaeological areas/constructions and creating 3D models [38]. Early work by Hodge et al. [39] found that the high level of automation in photogrammetric processing provides an opportunity to efficiently create visual data of an AOI. However, the final products might not be of adequate quality for applications that require centimetre precision (such as deformation monitoring), which can be obtained from LS data. This finding accords with the observations of Nuttens et al. [21], who carried out an accuracy assessment of TLS and photogrammetry by establishing Ground Control Points (GCPs) at a cultural site (Sint-Baafs Abbey in Belgium). They argued that these techniques should be applied not only to create 3D models but also to determine the accuracy of models of archaeological areas. Nuttens et al. [21] found that TLS errors were two times smaller than those obtained from photogrammetry. However, aerial photogrammetry is the most effective method for modelling the roofs of archaeological constructions compared with other non-invasive methods, e.g., TLS and terrestrial photogrammetry, owing to the latter’s inability to capture top perspectives (rooftops) [1,9,26,40]. Accordingly, detailed digital 3D models could arguably be created by aerial photogrammetry.
TLS observations in archaeology were reported by Nuttens et al. [21] and Hodge et al. [39], studies that established LS as an appropriate approach for modelling archaeological sites. The LS system can capture dense point clouds that can be processed to produce relatively highly accurate 3D models. Fassi et al. [22], however, questioned this, noting that the accuracy and resolution of LS can be adversely affected by environmental conditions. For instance, wind, temperature, dust, raindrops, and fog are not ideal conditions for conducting fieldwork using LS [41,42]. A similar conclusion was reached by Grenzdörffer et al. [26], who evaluated the limitations of TLS and stated that measurement quality depends on the incidence angle, the range (the distance between the laser scanner and an object), and the reflection properties of the surface. The range measurement was also examined by Shanoer and Abed [29], who assessed the Root Mean Square Errors (RMSE) of TLS data for cultural heritage preservation. They found that the RMSE at the minimum measurement range (3.5 m) of a Stonex X300 TLS device (www.stonex.it) (accessed on 27 November 2022) was 0.006 m, while the RMSE at a 7-m measurement range was 0.012 m. Therefore, the accuracy of TLS modelling tends to be higher than that of photogrammetry, yet other factors (measurement range, incidence angle, and surface properties) can adversely affect the quality of the modelling.
With regard to the data quality comparison between photogrammetry and LS, Grenzdörffer et al. [26] determined the deviations between LS and photogrammetry in modelling an ancient building (the Cathedral of St. Nikolai in Germany) and found that the average deviations between the two observations ranged from 0.02 m to 0.03 m. The outcome of this assessment suggests that the differences between the two techniques are not significant. The accuracy of 3D models derived from photogrammetry was also examined by Hatzopoulos et al. [13], who employed the SfM photogrammetric method to reconstruct archaeological monuments/features. They achieved centimetre precision, relying on GCPs as well as camera exposure positions. Marín-Buzón et al. [36] also found that SfM photogrammetry provided the most accurate data, compared with TLS data, for archaeological excavations. The main differences between LS and photogrammetry are summarised in Table 1. Shanoer and Abed [29] argued that TLS data must be interpreted and assessed based on the registration method. They applied two fine registration algorithms, Levenberg-Marquardt Iterative Closest Point (LM-ICP) and Nearest Neighbour Iterative Closest Point (NN-ICP). The average registration errors were reported as 0.0026 m and 0.0039 m for LM-ICP and NN-ICP, respectively.
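To make the registration step concrete, the following is a minimal point-to-point ICP sketch in Python (NumPy/SciPy); it is not the LM-ICP or NN-ICP implementation used by Shanoer and Abed [29], and the iteration limit and convergence tolerance are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(source, target, max_iter=50, tol=1e-6):
    """Align 'source' (N x 3) to 'target' (M x 3); returns aligned cloud and RMSE."""
    tree = cKDTree(target)
    src = source.copy()
    prev_rmse = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)             # nearest-neighbour correspondences
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t                     # apply the incremental transform
        rmse = np.sqrt(np.mean(dist ** 2))      # registration error, as reported in [29]
        if abs(prev_rmse - rmse) < tol:
            break
        prev_rmse = rmse
    return src, rmse
```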
Standalone RS approaches are not only applied to create detailed 3D models, but also to detect new archaeological areas/features. The use of these approaches for the detection of archaeological areas has been critically demonstrated in several archaeological studies [23,27,43,44,45,46,47]. More specifically, RS techniques including LiDAR and aerial photography can be adopted to identify archaeological topographies both automatically and manually [20,48,49]. LiDAR- and photogrammetry-derived digital models have been adopted in several archaeological projects to demonstrate how RS approaches can be used to identify, interpret, and assess the characteristics of archaeological sites [49,50,51,52]. For example, DTMs generated from LiDAR data of Devil’s Furrow in the Czech Republic were used to identify terrain discontinuities, such as tracks and erosion furrows [23]. Bachagha et al. [43] illustrated the capability of a DSM derived from LiDAR data, along with 1-m resolution satellite imagery, in identifying possible hidden ancient areas in the Wadi El-Melah Valley in Gafsa, Tunisia. Based on spatial and pixel-based analysis methods, Bachagha et al. [43] discovered two possible Roman forts. These findings suggest that the combined application of RS datasets is a robust approach in archaeological prospection, owing to the detailed information obtained in terms of the detection, localization, classification, and mapping of ancient features.
The raster images derived from DSMs/DTMs are being used for the further detection of archaeological features [44,53]. Specifically, several Visual Analysis Techniques (VATs), e.g., hillshade, gradient, aspect, and Sky View Factor (SVF), derived from digital models are applied to highlight topographic features of AOIs. VATs are adopted to identify topographic features and improve the understanding of archaeological areas. The gradient raster emphasises altitude variations in AOIs, while aspect images show the directions of altitude variations [53]. In Bennett’s study [53], mounds and a potential new shell ring were detected through gradient and aspect images, as well as other VATs (e.g., hillshade). Cowley et al. [44] generated a hillshade raster from LiDAR data (1 point/m2 point density). This raster successfully revealed several archaeological features of the AOI (Barwhill, north of Gatehouse of Fleet in Scotland). Examples of these detected remains are linear features that represent old water drainage and a Roman road. However, features cannot always be extracted from hillshade images: the influence of illumination in hillshade rasters generates distortion and, in some cases, obscures topographic features and hides archaeological remains [45]. Thompson and Prufer [27] support the claims of previous observations, e.g., Bennett [53], as they observed that LiDAR-derived hillshade, unlike the gradient raster, is not effective in recognising small structures. For this reason, they consider gradient raster images a robust alternative technique for detecting archaeological remains. Both laser and photogrammetric data are being used to uncover archaeological information that might be unobtainable through destructive excavation methods. The uses of standalone approaches (LS and photogrammetry) in archaeology, the experimental and analysis setups, their merits, and key findings from previous studies are summarised in Table 2.
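As an illustration of how such VATs are derived from a digital model, the following NumPy sketch computes slope (gradient), aspect, and a simple hillshade raster; the sun azimuth/altitude, cell size, and synthetic mound are illustrative assumptions rather than values from the cited studies.

```python
import numpy as np

def slope_aspect(dtm, cell_size=1.0):
    """Slope (radians) and aspect (radians) from a DTM array."""
    dz_dy, dz_dx = np.gradient(dtm, cell_size)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(-dz_dx, dz_dy)           # one common aspect convention
    return slope, aspect

def hillshade(dtm, cell_size=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Illumination raster in [0, 1] for an assumed sun position."""
    slope, aspect = slope_aspect(dtm, cell_size)
    az = np.radians(360.0 - azimuth_deg + 90.0)  # convert azimuth to mathematical angle
    alt = np.radians(altitude_deg)
    shaded = (np.sin(alt) * np.cos(slope) +
              np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

# Example: a synthetic 1 m DTM with a low mound, as a stand-in for LiDAR-derived data.
y, x = np.mgrid[0:200, 0:200]
dtm = 0.5 * np.exp(-((x - 100) ** 2 + (y - 100) ** 2) / (2 * 15 ** 2))
hs = hillshade(dtm)                              # hillshade VAT
grad = np.degrees(slope_aspect(dtm)[0])          # gradient (slope) VAT in degrees
```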
Therefore, RS data have been independently applied in many archaeological studies to evaluate how such approaches can be adopted to discover, interpret, and examine the physical characteristics of archaeological areas/objects [55,56,57,58,59]. That said, there are strengths and weaknesses in both LS and photogrammetry. Range sensors are capable of creating 3D point clouds that are later used to produce fine geometric models [60]. In contrast, image sensors are more capable of, and appropriate for, building 3D textured models of an object’s structure [60,61]. As such, combining the datasets is likely to create, to some degree, more complete 2.5D/3D models. Previous studies by Hatzopoulos et al. [13] and Dostal and Yamafune [60] argued that these methods could complement each other in generating relatively highly precise digital models of archaeological areas. More details about the combination approaches are discussed in Section 3.

3. RS Combination Approaches

The main purpose of combining multiple datasets is to address the limitations of the standalone approaches. This is accomplished by combining datasets derived from the same sensor or datasets from multiple sensors. The development of both photogrammetry and LiDAR data in terms of quality and efficiency has led some studies to recommend applying one technique over another to enhance the construction of digital models, as well as to improve the detection of archaeology [21,62]. An example of data integration from the same sensor is image-based modelling or range-based modelling [58,63], while an example of fusing/integrating different sensors is image-based modelling combined with range-based modelling [3,64].
Data integration from the same sensor has been applied in several archaeological studies, such as [57,58,63]. The concept of this approach is to combine two or more different raster layers derived from the same source (e.g., photogrammetry or LiDAR); this integration is based on the VATs. The intention of integrating multiple VATs is to address the limitations of single-raster images, generate newly enhanced datasets, and acquire clear topographical features of the AOI. The limitations of the standalone approaches are mainly associated with illumination, raster distortions, and filtering [53,65]. Inomata et al. [57] suggested applying Red Relief Image Maps (RRIM) for object detection; the RRIM is a VAT based on multi-layered topographic data, i.e., gradient and differential topographic data derived from the same sensor. It is a shade-free raster that reveals fine features of the topography. This raster is commonly used in archaeological studies, since it provides a clearer and less distorted view of topographic changes than the standalone VATs.
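The following is a simplified, hedged sketch of an RRIM-style composite: it blends a slope raster (driving redness) with a local-relief layer (driving brightness) that stands in for the openness-difference term of the published RRIM method (see Chiba et al. [56]); the smoothing window and colour mapping are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def rrim_like(dtm, cell_size=1.0, relief_window=25):
    """RGB composite: slope drives red saturation, local relief drives brightness."""
    dz_dy, dz_dx = np.gradient(dtm, cell_size)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

    # Stand-in for the openness difference: DTM minus its local mean (positive on
    # ridges, negative in hollows), rescaled to [0, 1] for use as brightness.
    relief = dtm - uniform_filter(dtm, size=relief_window)
    value = (relief - relief.min()) / (np.ptp(relief) + 1e-9)

    # Map slope to red saturation (steeper = redder), relief to brightness.
    sat = np.clip(slope / max(slope.max(), 1e-9), 0.0, 1.0)
    rgb = np.empty(dtm.shape + (3,))
    rgb[..., 0] = value                       # red channel keeps full brightness
    rgb[..., 1] = value * (1.0 - sat)         # green/blue attenuated on steep slopes
    rgb[..., 2] = value * (1.0 - sat)
    return rgb
```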
With regard to the RRIM, relatively finer distinctions of archaeological features (e.g., structures smaller than 50 cm) can be observed in the RRIM [57]. These results reflect those of Davis et al. [45], who found that the edges of archaeological features in Beaufort County, South Carolina, were emphasised in RRIMs. Kokalj and Somrak [58] and Kokalj et al. [66] corroborated the conclusions of these studies, as they suggested enhancing the existing individual VATs to improve archaeological prospection and avoid missing possible remains. Kokalj et al. [66] created the open-access Relief Visualisation Toolbox (RVT) to integrate various raster images derived from fine LiDAR data (e.g., 50 cm/pix and 25 cm/pix) in different archaeological areas to improve the visibility of detected remains. Comparisons of various LiDAR-derived VATs (hillshade, gradient, and RRIM) and raster images from the RVT with 1-m/pix spatial resolution were implemented by Inomata et al. [57]. The results of the latter study showed that the edges of traces were relatively more emphasised in the RRIM than in the other applied VATs.
In addition to detecting archaeological remains by integrating VATs obtained from the same data source, combining multiple datasets from different sensors is being applied to create relatively detailed 3D models for digital preservation. For instance, Papasaika et al. [67] applied the integration method to enhance accuracy and density and to reduce data voids. This enhancement was achieved by combining two different DSMs derived from different sources (LiDAR and IKONOS satellite imagery). The LiDAR data and IKONOS satellite imagery differ in terms of Ground Sampling Distance (GSD) and acquisition time. The weaknesses (e.g., voids, discontinuities) in the standalone data (e.g., LiDAR and satellite data) are often addressed in the fused/integrated data [67]. In line with these studies, other datasets were applied by Tapete et al. [68] to improve the final outcomes. They combined synthetic aperture radar with TLS data to monitor the deformation of archaeological artefacts. Integrating these two techniques generates new datasets and, consequently, enhances the interpretation of how structural deformation affects ancient monuments [68].
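As a minimal illustration of DSM-level fusion, the sketch below fills voids in a primary DSM with values from a secondary, co-registered DSM after removing a constant vertical bias; the assumptions that both surfaces share the same grid and that a single median offset is sufficient are simplifications of the weighted fusion framework in [67].

```python
import numpy as np

def fuse_dsms(primary, secondary):
    """Fill NaN voids in 'primary' using a co-registered 'secondary' DSM.

    Both arrays are assumed to share the same grid, extent, and cell size.
    """
    overlap = ~np.isnan(primary) & ~np.isnan(secondary)
    # Estimate and remove a constant vertical bias between the two surfaces.
    bias = np.median(primary[overlap] - secondary[overlap]) if overlap.any() else 0.0
    fused = primary.copy()
    voids = np.isnan(primary) & ~np.isnan(secondary)
    fused[voids] = secondary[voids] + bias
    return fused, int(voids.sum())

# Usage (illustrative): a LiDAR DSM with gaps is completed by a coarser but
# gap-free satellite DSM, keeping the LiDAR heights wherever they exist.
# fused, n_filled = fuse_dsms(lidar_dsm, satellite_dsm)
```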
Recent archaeological studies, e.g., Liang et al. [6], Filzwieser et al. [69], and Luhmann et al. [40], found that combining multiple techniques is likely to boost the benefits of the acquired data and to generate consistent and, to some extent, complete results. These approaches can complement each other to enhance 2.5D and 3D models of archaeological areas [13,60]. Therefore, the second approach discussed in this review is data combination from different sensors. This type of data combination is progressively becoming a vital factor in several RS applications, including archaeology [1,6,70]. Prior archaeological studies have noted the importance of combining multiple datasets derived from different sources (after individual processing) [1,3,5,40,64,70,71]. This approach includes the integration of 3D-to-3D dense point clouds [1,40,61,72], as well as of 2D-to-2D raster images generated from photogrammetric and laser data [3,9,40], in order to produce integrated 3D models and digitally preserve ancient buildings and archaeological sites.
The purposes for combining multiple datasets derived from different sensors have varied among previous studies. For instance, Forkuo and King [73] fused terrestrial photogrammetry with TLS data to generate realistic 3D models. The aim of their study was to develop a geometric relationship between the 2D digital photogrammetric images and the LS 3D point clouds (the collinearity equations are often used for image-to-image registration to successfully merge multiple datasets that have different projections) [73]. To further support the idea of generating realistic models, an in-depth investigation of integrating photogrammetric and TLS data to create 3D models of a historical building (Villa Giovanelli) was carried out by Guarnieri et al. [61]. This study was based on highlighting the limitations of the individual standalone approaches: terrestrial photogrammetry was used to capture basic detail, such as walls and facades, while more complex structures, e.g., turrets, statues, and the staircase, were captured by employing range-based modelling with a Time-Of-Flight (TOF) TLS. The combination was executed by Guarnieri et al. [61] after establishing ten GCPs, measured by total stations, to geo-reference and merge both datasets and achieve a realistic integrated 3D model. Thus, improving the visual quality and geometry of 2.5D/3D models is required to reduce or even eliminate occlusions in standalone data, generate relatively more detailed information, and digitally preserve archaeological sites.
The previous study led by Jaber and Abed [9] evaluated the effectiveness of the fusion approach in producing 3D models in both indoor and outdoor case studies: the statue of the Lady of Hatra (indoor) and the Abbasid Mustansiriya School (outdoor) in Baghdad. The fusion was implemented by combining synthetic images derived from TLS point clouds with aerial images captured by a digital camera, after identifying the limitations of the individual sensors in archaeological preservation. An automatic co-registration scheme was executed, which involves creating synthetic images from the TLS data, combining them with the 2D images through the SfM method, and then applying a simultaneous bundle block adjustment and a Helmert transformation to eliminate discrepancies [64]. They found that fusing LS data with digital images had significant benefits for digital preservation, as it provided more detailed models by filling data occlusions and increasing the overall data density. Different RS datasets (e.g., digital images and LiDAR), in some cases, have different data formats and projections. As a result, it is challenging to implement direct registration and identify common points to match both datasets [1,74]. The automatic registration of LS data and camera images was developed by Yang et al. [74]; registration usually refers to LS aided by photogrammetry [5]. Moussa [1] stated that registration methods can be categorised into manual and automatic registration; the latter is highly preferable to minimise cost and time. Registration and alignment with the Iterative Closest Point (ICP) algorithm can be applied together with georeferencing. The registration errors between LiDAR and photogrammetric data (DSMs of Mount Cornello in Italy) did not exceed two metres, and the resulting 3D model allowed for the identification and digitization of geologic features [75]. Thus, successful registration between two different datasets of the same AOI tends to yield photorealistic 3D models.
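To illustrate the Helmert (seven-parameter similarity) transformation step mentioned above, the following sketch estimates scale, rotation, and translation between matched 3D points using the SVD-based Umeyama solution; the assumption that point correspondences (e.g., GCPs or tie points) are already available is a simplification of the automatic co-registration scheme in [9,64].

```python
import numpy as np

def helmert_3d(src, dst):
    """Estimate s, R, t such that dst ≈ s * R @ src + t (least squares, SVD-based)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - c_src, dst - c_dst
    H = A.T @ B / len(src)                      # cross-covariance of the two point sets
    U, S, Vt = np.linalg.svd(H)
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:  # guard against reflection
        D[2, 2] = -1
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / A.var(axis=0).sum()
    t = c_dst - s * R @ c_src
    return s, R, t

# Usage (illustrative): transform TLS-derived points into the image-block datum.
# s, R, t = helmert_3d(tls_points, photo_points)
# aligned = (s * (R @ tls_points.T)).T + t
```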
As noted above, combination approaches are mostly applied in archaeology to produce relatively more detailed 3D models for digital preservation than those obtained from standalone approaches. Nonetheless, combination approaches, specifically LiDAR with photogrammetric data, are not commonly applied to improve the detection of archaeological remains. Only a limited number of archaeological studies have integrated/fused different datasets for object detection, such as [3,76,77,78,79]. From the archaeological perspective, the use of geophysical methods, e.g., Ground-Penetrating Radar (GPR), has facilitated the detection of unknown buried features [76]. Deiana et al. [76] found that the integration of GPR with Electrical Resistivity Tomography (ERT) data into a single map provides an adequate method for interpreting the geophysical anomalies and buried remains of Nora, in southern Sardinia. The detection of archaeological features using non-invasive techniques (geophysics and RS) should be implemented prior to excavation [76]. The purpose of applying such approaches is to examine the effectiveness of integrating/fusing multiple datasets obtained from different sources, based on excavation evidence, to accurately detect the presence of hidden tombs. In this respect, Elfadaly et al. [77] agree with Deiana et al. [76], as they also integrated the analysis of excavation and GPR data with magnetic and aerial images of the Northern Nile Delta, Egypt, to create an archaeological map that includes all the recorded and detected features/artifacts. Additionally, radar data, topographic maps, and optical satellite imagery were integrated to discover possible ancient settlement areas.
In terms of integrating LiDAR and photogrammetry, Holata et al. [3] created an integrated 2.5D model of an archaeological site (the deserted medieval settlement of Hound Tor, in south-west England) to digitally preserve the structure of the site, e.g., stone walls and field remains (field enclosures, ridge and furrow, and construction debris). GCPs were established to georeference both datasets. Holata et al. [3] processed LiDAR point clouds to generate DSMs; unwanted points (such as those captured by more than one flight line) were removed in LAStools. They also generated a photogrammetric model of the AOI through the SfM-MVS method. The photogrammetric models can be converted to point clouds by applying the ‘Raster to Point’ function in ArcMap [3]. The integrated DSM was achieved after georeferencing the point clouds obtained from LiDAR and photogrammetry. The key findings from previous studies are summarised in Table 3.
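A minimal sketch of the gridding step that turns a georeferenced point cloud into a raster surface is given below; it keeps the highest return per cell, which is a simplification of the DSM generation and filtering workflow described in [3], and the 1 m cell size is an illustrative assumption.

```python
import numpy as np

def points_to_dsm(points, cell_size=1.0):
    """Grid an (N x 3) point cloud [x, y, z] into a DSM raster (max height per cell).

    Cells containing no points are left as NaN; a DTM would additionally require
    ground filtering (done in LAStools in [3]) before gridding.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    cols = ((x - x.min()) / cell_size).astype(int)
    rows = ((y.max() - y) / cell_size).astype(int)   # row 0 along the northern edge
    dsm = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, h in zip(rows, cols, z):
        if np.isnan(dsm[r, c]) or h > dsm[r, c]:     # keep the highest return per cell
            dsm[r, c] = h
    return dsm

# Usage (illustrative): dsm = points_to_dsm(lidar_xyz, cell_size=1.0)
```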
Therefore, the combination of various RS data generated from multiple sensors plays an important role in interpreting and revealing archaeological information. These approaches (e.g., fusing TLS with photogrammetric data) are mostly applied to improve the quality of 3D models by filling data gaps and increasing data density. Choosing an appropriate combination approach for a given application depends on several factors, such as the complexity of the AOI, data availability, and the aim of the study. In spite of the merits of RS data in digital preservation, archaeological prospection, and detection, Guyot et al. [80] claimed that detection based mainly on VATs is time-consuming. Hence, DL algorithms could be used for automatic detection.

4. Object Detection with Deep Learning

With the considerable development of digital archaeology, several studies have focused on using DL Neural Networks (NNs) to accelerate the object detection process and improve output quality [30,81,82,83,84,85]. DL-NNs, in particular Artificial Neural Networks (ANNs) and Convolutional Neural Networks (CNNs), are being used as pre-trained models and adopted in the automated mapping of archaeological areas. It has been found that applying DL algorithms in archaeology is likely to efficiently classify and identify ancient objects/features, saving time and cost [86].
Several archaeological studies have found that DL applied to LiDAR data has made remarkable contributions to digital archaeology. For instance, Somrak et al. [84] applied a CNN to six different VATs (e.g., SVF, slope, hillshade, and positive openness) derived from LiDAR data to determine whether they can effectively classify archaeological structures (ancient Maya structures of the Chactún area in Mexico). The classification was based on the Visual Geometry Group-19 (VGG-19) CNN with additional augmentation. VGG-19 is an advanced CNN architecture with pre-trained layers, which is often applied to interpret the characteristics of input data in terms of colour, form, and shape. Furthermore, Somrak et al. [84] found that DL models using LiDAR-derived VATs without hillshade performed comparatively better than the models with hillshade. The overall classification (e.g., platforms, surrounding terrain, and constructions) of the LiDAR-derived VATs achieved 95% accuracy. Consistent with that study, Trier et al. [30] found that DL has the potential to automatically map archaeological areas; they used a network pre-trained on 1.2 million images and applied it to LiDAR data of an archaeological area on Arran, Scotland. The VAT used in the DL pipeline of the Trier et al. [30] study was the Simple Local Relief Model (SLRM) derived from LiDAR. The DL-NN was executed on the SLRM visualisations to classify three archaeological monument types (cairns, shieling huts, and roundhouses).
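As a hedged sketch of this kind of transfer-learning setup (not the exact configuration of [84] or [30]), the following PyTorch/torchvision snippet adapts a pretrained VGG-19 to a small number of archaeological classes; the class names, input size, frozen layers, and learning rate are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # e.g., platform / construction / surrounding terrain (illustrative)

# VGG-19 with ImageNet weights (torchvision >= 0.13 API assumed).
model = models.vgg19(weights=models.VGG19_Weights.DEFAULT)

# Freeze the convolutional feature extractor; train only the classifier head.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final fully connected layer (originally 1000 ImageNet classes).
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a batch of 224 x 224 VAT patches
# (3 channels, e.g., three stacked visualisations such as SVF/slope/openness).
patches = torch.randn(8, 3, 224, 224)            # placeholder batch
labels = torch.randint(0, NUM_CLASSES, (8,))     # placeholder labels
optimizer.zero_grad()
loss = criterion(model(patches), labels)
loss.backward()
optimizer.step()
```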
The study led by Guyot et al. [80] also demonstrated that DL algorithms can make significant contributions to archaeology, using them to reveal ancient structures of the Tumulus du Moustoir in France. They also noted, however, that a large amount of input data is required to accurately train CNN models. The more training data, the more accurate the predictions that can be achieved [30]; this might clarify why most archaeological studies, as mentioned earlier, have applied DL to LiDAR data rather than to aerial imagery [81]. LiDAR can capture raw data from up to 300 m above AOIs, which means that more coverage of an area can be obtained by airborne LiDAR than by UAV photogrammetry; consequently, more data can be used to train the DL pipeline. LiDAR-derived VATs are widely used for classification and segmentation due to their capability to generate a wealth of topographical information [87], specifically in digital archaeology for large-scale mapping (e.g., 1:2500) [84].
Researchers have demonstrated several factors that can contribute to improving the performance of CNN models, such as the amount of training data, data augmentation, normalisation, and the number of epochs [80]. The amount of training data directly affects the quality of DL functions; more training data are more likely to limit overfitting of the CNN models. Trier et al. [30] argued that applying a large set of labelled data to pre-train the CNN models could potentially improve the identification of archaeological features. This argument was also supported by archaeological studies such as Guyot et al. [80], Küçükdemirci and Sarris [83], Somrak et al. [84], and Davis et al. [85], which recommended applying data augmentation in the DL pipeline to avoid the adverse consequences (imprecise results) of small datasets (e.g., fewer than 500 samples). Guyot et al. [80] and Somrak et al. [84] corroborate the ideas of Trier et al. [30] and Maxwell et al. [87], who suggested applying data augmentation to enhance the performance of the CNN models and the final results, even when a large number of samples is used. Data augmentation can artificially expand and transform (rotations/zooming/flips/scaling) the existing training data [87], but it does not generate genuinely new data. In other words, the purpose of augmentation is to provide additional perspectives on individual samples and, consequently, limit overfitting and yield relatively better predictions [87]. Additionally, increasing the number of epochs, in some cases, improves the validation accuracy and thus helps to generate well-fitted models [80,87].
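A minimal sketch of such an augmentation pipeline, assuming torchvision transforms, is shown below; the specific rotations, flips, and scaling ranges are illustrative rather than those used in the cited studies.

```python
from torchvision import transforms

# Illustrative augmentation for VAT raster patches; each epoch sees randomly
# rotated/flipped/rescaled views of the same samples, which limits overfitting
# without creating genuinely new data.
train_transforms = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomRotation(degrees=90),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),  # mild zoom/scaling
    transforms.ToTensor(),   # also rescales 8-bit pixel values to [0, 1]
])

# Validation data are only resized and rescaled, never augmented.
val_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
```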
Enhancing the validation accuracy and predictions of DL CNNs cannot be achieved through augmentation and the number of epochs alone; it also depends on the normalisation function [30]. This function is normally used in CNN models to scale inputs and outputs between 0 and 1, rescaling datasets without distorting the variation within a range of values [88]. Hence, it contributes to making the CNN training process relatively more effective and stable. Furthermore, Ioffe and Szegedy [89] found that Batch Normalisation (BN) is another powerful procedure for normalising the activations in the layers of a DL CNN. Bjorck et al. [90] agreed and stated that BN tends to speed up CNN training and enhance training accuracy, since this technique can be applied to the input and intermediate layers [90,91]. BN has been adopted in DL studies due to its capability to stabilise CNNs and simplify the optimisation process. Based on the arguments mentioned above, several aspects (training data, augmentation, normalisation, and epochs) should be considered to enhance the validation accuracy and predictions, as well as to speed up DL-CNN training. Previous archaeological studies using DL-CNN models are summarised in Table 4. The review of the literature indicates that the application of DL CNNs based on RS data, particularly photogrammetric data, in archaeology is still limited.
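To make the two normalisation steps concrete, the sketch below shows min-max scaling of input rasters to [0, 1] followed by a small convolutional block with batch normalisation layers; the layer sizes and class count are illustrative assumptions rather than an architecture from the cited studies.

```python
import torch
import torch.nn as nn

def minmax_scale(batch: torch.Tensor) -> torch.Tensor:
    """Rescale each sample to [0, 1] without distorting relative variation."""
    flat = batch.flatten(start_dim=1)
    lo = flat.min(dim=1).values.view(-1, 1, 1, 1)
    hi = flat.max(dim=1).values.view(-1, 1, 1, 1)
    return (batch - lo) / (hi - lo + 1e-9)

# A small conv block: BatchNorm normalises intermediate activations per batch,
# which tends to stabilise and speed up training.
block = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.BatchNorm2d(16),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.BatchNorm2d(32),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 3),        # e.g., three monument classes (illustrative)
)

x = minmax_scale(torch.rand(4, 3, 64, 64) * 255.0)   # e.g., 8-bit VAT patches
logits = block(x)
```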

5. Conclusions and Future Work

The objective of this paper was to review and discuss the existing literature on the adoption of advanced techniques in digital archaeology, including RS standalone, combination approaches, and DL. This review provides an overview of how these approaches have been applied, with a specific focus on digital preservation and archaeological detection.
Despite the significant number of archaeological studies that have applied digital approaches, there are still knowledge gaps in investigating the application and accuracy of approaches such as RS standalone, combination, and DL. Specifically, the integration of airborne LiDAR data with photogrammetric data is still not a commonly utilized method in archaeology, and there is also limited evidence of the use of combined approaches in detecting hidden remains. Furthermore, there has been a scarcity of research examining the limitations of standalone, integration, and fusion approaches when applied in combination to detect archaeological remains. This means that methods for applying standalone and combination approaches together in the same archaeological study have not yet been refined in a systematic way. In addition, DL CNN models are still not commonly used in detecting archaeological remains. Thus, further assessment and articulation of various advanced approaches for the detection of archaeological features and digital preservation are critically needed.
To fill the knowledge gaps, our recent study (2023) led by Kadhim et al. [19] demonstrated a detailed workflow to investigate the potential of applying standalone, integration, and fusion approaches in detecting and recording archaeological remains of Cahokia’s Grand Plaza, Southwestern Illinois, based on aerial photogrammetry and LiDAR data. We argue that there is a high possibility that this investigation could make considerable further contributions to archaeological practice. In addition, various DL CNN models based on the RS datasets generated from both standalone and combination approaches should also be adopted in future archaeological studies. Improving the discovery of archaeological areas/remains using DL algorithms based on RS data is the most sophisticated and efficient way to identify possible new archaeological areas that have not been recorded in archaeological and cultural documents/archives.

Author Contributions

Conceptualization, I.K. and F.M.A.; Writing—original draft preparation, I.K.; Writing—review and editing, I.K. and F.M.A. All authors have read and agreed to the published version of the manuscript.

Funding

The publication of this article was supported by the University of Exeter.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank Caitlin DeSilvey for her contributions to conceptualisation and review, and for her valuable comments which helped to considerably improve the quality of this paper. We would also like to thank Karen Anderson and Andrew Cunliffe for their guidance in the early stages of this research. Big thanks to the English for Academic Purposes (EAP) tutors, Isabel Noon and Richard Little, for providing feedback on the organization and flow of ideas of this manuscript. Special thanks to the University of Exeter and CARA for the studentship stipend. Finally, we would like to thank the editors and the three anonymous reviewers for the valuable comments that have improved this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Moussa, W. Integration of Digital Photogrammetry and Terrestrial Laser Scanning for Cultural Heritage Data Recording. Ph.D. Thesis, University of Stuttgart, Stuttgart, Germany, 2014. [Google Scholar]
  2. Lerma, J.L.; Navarro, S.; Cabrelles, M.; Seguí, A.E.; Haddad, N.; Akasheh, T. Integration of Laser Scanning and Imagery for Photorealistic 3D Architectural Documentation. In Laser Scanning, Theory and Applications; IntechOpen: London, UK, 2011; Volume 26, pp. 414–430. [Google Scholar] [CrossRef] [Green Version]
  3. Holata, L.; Plzák, J.; Světlík, R.; Fonte, J. Integration of Low-Resolution ALS and Ground-Based SfM Photogrammetry Data. A Cost-Effective Approach Providing an ‘Enhanced 3D Model’ of the Hound Tor Archaeological Landscapes (Dartmoor, South-West England). Remote Sens. 2018, 10, 1357. [Google Scholar] [CrossRef] [Green Version]
  4. Klapa, P.; Mitka, B.; Zygmunt, M. Application of Integrated Photogrammetric and Terrestrial Laser Scanning Data to Cultural Heritage Surveying. IOP Conf. Ser. Earth Environ. Sci. 2017, 95, 032007. [Google Scholar] [CrossRef] [Green Version]
  5. Rönnholm, P.; Honkavaara, E.; Litkey, P.; Hyyppä, H.; Hyyppä, J. Integration of Laser Scanning and Photogrammetry. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2007, 36, 355–362. [Google Scholar]
  6. Liang, H.; Li, W.; Lai, S.; Zhu, L.; Jiang, W.; Zhang, Q. The integration of terrestrial laser scanning and terrestrial and unmanned aerial vehicle digital photogrammetry for the documentation of Chinese classical gardens—A case study of Huanxiu Shanzhuang, Suzhou, China. J. Cult. Herit. 2018, 33, 222–230. [Google Scholar] [CrossRef]
  7. Barreau, J.-B.; Bernard, Y.; Gaugne, R.; Le Cloirec, G.; Gouranton, V. The West Digital Conservatory of Archaeological Heritage project. In Proceedings of the 2013 Digital Heritage International Congress (DigitalHeritage), Marseille, France, 28 October–1 November 2013; Volume 1, pp. 547–554. [Google Scholar] [CrossRef] [Green Version]
  8. Al-Houdalieh, S.H.A.; Tawafsha, S.A. The Destruction of Archaeological Resources in the Palestinian Territories, Area C: Kafr Shiyān as a Case Study. Near East. Archaeol. 2017, 80, 40–49. [Google Scholar] [CrossRef]
  9. Jaber, A.S.; Abed, F.M. Revealing the potentials of 3D modelling techniques; a comparison study towards data fusion from hybrid sensors. IOP Conf. Ser. Mater. Sci. Eng. 2020, 737, 012230. [Google Scholar] [CrossRef]
  10. Toz, G.; Duran, Z. Documentation and analysis of cultural heritage by photogrametric methods and GIS: A case study. In Proceedings of the XXth ISPRS Congress, Istanbul, Turkey, 12–23 July 2004; pp. 438–441. [Google Scholar]
  11. Abed, F.M.; Ibrahim, O.A.; Jasim, L.K. Terrestrial Laser Scanning to Preserve Cultural Heritage in Iraq Using Monitoring Techniques. In Proceedings of the 2nd International Conference of Buildings, Construction and Environmental Engineering (BCEE2-2015), Beirut, Lebanon, 17–18 October 2015. [Google Scholar] [CrossRef]
  12. Orengo, H.; Krahtopoulou, A.; Garcia-Molsosa, A.; Palaiochoritis, K.; Stamati, A. Photogrammetric re-discovery of the hidden long-term landscapes of western Thessaly, central Greece. J. Archaeol. Sci. 2015, 64, 100–109. [Google Scholar] [CrossRef]
  13. Hatzopoulos, J.N.; Stefanakis, D.; Georgopoulos, A.; Tapinaki, S.; Volonakis, S.; Volonakis, P.; Liritzis, I. Use of various surveying technologies to 3D digital mapping and modelling of cultural heritage structures for maintenance and restoration purposes: The Tholos in Delphi, Greece. Mediterr. Archaeol. Archaeom. 2017, 17, 311–336. [Google Scholar] [CrossRef]
  14. Crutchley, S. Ancient and modern: Combining different remote sensing techniques to interpret historic landscapes. J. Cult. Herit. 2009, 10, e65–e71. [Google Scholar] [CrossRef]
  15. Adamopoulos, E.; Rinaudo, F. UAS-Based Archaeological Remote Sensing: Review, Meta-Analysis and State-of-the-Art. Drones 2020, 4, 46. [Google Scholar] [CrossRef]
  16. Bewes, J.; Low, A.; Morphett, A.; Pate, F.D.; Henneberg, M. Artificial intelligence for sex determination of skeletal remains: Application of a deep learning artificial neural network to human skulls. J. Forensic Leg. Med. 2019, 62, 40–43. [Google Scholar] [CrossRef] [PubMed]
  17. Boardman, C.; Bryan, P. 3D Laser Scanning for Heritage: Advice and Guidance on the Use of Laser Scanning in Archaeology and Architecture; Historic England: London, UK, 2018. [Google Scholar]
  18. Rüther, H.; Smit, J.; Kamamba, D. A comparison of close-range photogrammetry to terrestrial laser scanning for heritage documentation. S. Afr. J. Geomat. 2012, 1, 149–162. [Google Scholar]
  19. Kadhim, I.; Abed, F.M.; Vilbig, J.M.; Sagan, V.; DeSilvey, C. Combining Remote Sensing Approaches for Detecting Marks of Archaeological and Demolished Constructions in Cahokia’s Grand Plaza, Southwestern Illinois. Remote Sens. 2023, 15, 1057. [Google Scholar] [CrossRef]
  20. Thompson, A.E.; Prufer, K.M. Airborne LiDAR for detecting ancient settlements, and landscape modifications at Uxbenká, Belize. Res. Rep. Belizean Archaeol. 2015, 12, 251–259. [Google Scholar]
  21. Kanashin, N.; Nikitchin, A.; Svintsov, E. Application of Laser Scanning Technology in Geotechnical Works on Reconstruction of Draw Spans of the Palace Bridge in Saint Petersburg. Procedia Eng. 2017, 189, 393–397. [Google Scholar] [CrossRef]
  22. Shanoer, M.M.; Abed, F.M. Evaluate 3D laser point clouds registration for cultural heritage documentation. Egypt. J. Remote Sens. Space Sci. 2018, 21, 295–304. [Google Scholar] [CrossRef]
  23. Trier, D.; Cowley, D.; Waldeland, A.U. Using deep neural networks on airborne laser scanning data: Results from a case study of semi-automatic mapping of archaeological topography on Arran, Scotland. Archaeol. Prospect. 2018, 26, 165–175. [Google Scholar] [CrossRef]
  24. Vilbig, J.M.; Sagan, V.; Bodine, C. Archaeological surveying with airborne LiDAR and UAV photogrammetry: A comparative analysis at Cahokia Mounds. J. Archaeol. Sci. Rep. 2020, 33, 102509. [Google Scholar] [CrossRef]
  25. Kadhim, I.; Abed, F.M. The Potential of LiDAR and UAV-Photogrammetric Data Analysis to Interpret Archaeological Sites: A Case Study of Chun Castle in South-West England. ISPRS Int. J. Geo-Inf. 2021, 10, 41. [Google Scholar] [CrossRef]
  26. Fiorillo, F.; Fernández-Palacios, B.J.; Remondino, F.; Barba, S. 3d Surveying and modelling of the Archaeological Area of Paestum, Italy. Virtual Archaeol. Rev. 2015, 4, 55–60. [Google Scholar] [CrossRef]
  27. López, J.B.; Jiménez, G.A.; Romero, M.S.; García, E.A.; Martín, S.F.; Medina, A.L.; Guerrero, J.E. 3D modelling in archaeology: The application of Structure from Motion methods to the study of the megalithic necropolis of Panoria (Granada, Spain). J. Archaeol. Sci. Rep. 2016, 10, 495–506. [Google Scholar] [CrossRef]
  28. Manajitprasert, S.; Tripathi, N.K.; Arunplod, S. Three-Dimensional (3D) Modeling of Cultural Heritage Site Using UAV Imagery: A Case Study of the Pagodas in Wat Maha That, Thailand. Appl. Sci. 2019, 9, 3640. [Google Scholar] [CrossRef] [Green Version]
  29. Eisenbeiss, H.; Zhang, L. Comparison of DSMs generated from mini UAV imagery and terrestrial laser scanner in a cultural heritage application. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 90–96. [Google Scholar]
  30. Nuttens, T.; De Maeyer, P.; De Wulf, A.; Goossens, R.; Stal, C. Comparison of 3D Accuracy of Terrestrial Laser Scanning and Digital Photogrammetry: An Archaeological Case Study. In Proceedings of the 31st EARSeL Symposium: Remote Sensing and Geoinformation Not Only for Scientific Cooperation, Prague, Czech Republic, 30 May–2 June 2011; pp. 66–74. [Google Scholar]
  31. Fassi, F.; Fregonese, L.; Ackermann, S.; De Troia, V. Comparison between Laser Scanning and Automated 3D Modelling Techniques to Reconstruct complex and extensive Cultural Heritage Areas. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, XL-5/W1, 73–80. [Google Scholar] [CrossRef] [Green Version]
  32. Faltýnová, M.; Nový, P. Airborne Laser Scanning and Image Processing Techniques for Archaeological Prospection. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 40, 231–235. [Google Scholar] [CrossRef] [Green Version]
  33. Korumaz, A.G.; Korumaz, M.; Tucci, G.; Bonora, V.; Niemeier, W.; Riedel, B. UAV Systems for documentation of cultural heritage. In ICONARCH International Congress of Architecture and Planning; Selçuk University: Konya, Turkey, 2014; pp. 448–459. [Google Scholar]
  34. Nettley, A.; DeSilvey, C.; Anderson, K.; Wetherelt, A.; Caseldine, C. Visualising Sea-Level Rise at a Coastal Heritage Site: Participatory Process and Creative Communication. Landsc. Res. 2013, 39, 647–667. [Google Scholar] [CrossRef] [Green Version]
  35. Grenzdörffer, G.J.; Naumann, M.; Niemeyer, F.; Frank, A. Symbiosis of UAS Photogrammetry and TLS for Surveying and 3D Modeling of Cultural Heritage Monuments—A Case Study About the Cathedral of St. Nicholas in the City of Greifswald. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 91–96. [Google Scholar] [CrossRef] [Green Version]
  36. Marín-Buzón, C.; Pérez-Romero, A.M.; León-Bonillo, M.J.; Martínez-Álvarez, R.; Mejías-García, J.C.; Manzano-Agugliaro, F. Photogrammetry (SfM) vs. Terrestrial Laser Scanning (TLS) for Archaeological Excavations: Mosaic of Cantillana (Spain) as a Case Study. Appl. Sci. 2021, 11, 11994. [Google Scholar] [CrossRef]
  37. Westoby, M.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. ‘Structure-from-Motion’ photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef] [Green Version]
  38. Barber, D.; Mills, J.; Bryan, P. Towards a standard specification for terrestrial laser scanning of cultural heritage. CIPA Int. Arch. Doc. Cult. Herit. 2003, 19, 619–624. [Google Scholar]
  39. Hodge, R.A.; Brasington, J.; Richards, K.S. Analysing laser-scanned digital terrain models of gravel bed surfaces: Linking morphology to sediment transport processes and hydraulics. Sedimentology 2009, 56, 2024–2043. [Google Scholar] [CrossRef]
  40. Luhmann, T.; Chizhova, M.; Gorkovchuk, D.; Hastedt, H.; Chachava, N.; Lekveishvili, N. Fusion of UAV and terrestrial photogrammetry with laser scanning for 3D reconstruction of historic churches in Georgia. Drones 2020, 7, 753–761. [Google Scholar] [CrossRef]
  41. Wilkes, P.; Lau, A.; Disney, M.; Calders, K.; Burt, A.; de Tanago, J.G.; Bartholomeus, H.; Brede, B.; Herold, M. Data acquisition considerations for Terrestrial Laser Scanning of forest plots. Remote Sens. Environ. 2017, 196, 140–153. [Google Scholar] [CrossRef]
  42. Achille, C.; Adami, A.; Chiarini, S.; Cremonesi, S.; Fassi, F.; Fregonese, L.; Taffurelli, L. UAV-Based Photogrammetry and Integrated Technologies for Architectural Applications—Methodological Strategies for the After-Quake Survey of Vertical Structures in Mantua (Italy). Sensors 2015, 15, 15520–15539. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Bachagha, N.; Wang, X.; Luo, L.; Li, L.; Khatteli, H.; Lasaponara, R. Remote sensing and GIS techniques for reconstructing the military fort system on the Roman boundary (Tunisian section) and identifying archaeological sites. Remote Sens. Environ. 2020, 236, 111418. [Google Scholar] [CrossRef]
  44. Cowley, D.; Jones, R.; Carey, G.; Mitchell, J. Barwhill Revisited: Rethinking Old Interpretations Through Integrated Survey Datasets. Trans. Dumfries. Galloway Nat. Hist. Antiqu. Soc. 2019, 93, 9–26. [Google Scholar]
  45. Davis, D.S.; Sanger, M.C.; Lipo, C.P. Automated mound detection using lidar and object-based image analysis in Beaufort County, South Carolina. Southeast. Archaeol. 2018, 38, 23–37. [Google Scholar] [CrossRef]
  46. Lozić, E.; Štular, B. Documentation of Archaeology-Specific Workflow for Airborne LiDAR Data Processing. Geosciences 2021, 11, 26. [Google Scholar] [CrossRef]
  47. Patruno, J.; Fitrzyk, M.; Blasco, J.M.D. Monitoring and Detecting Archaeological Features with Multi-Frequency Polarimetric Analysis. Remote Sens. 2019, 12, 1. [Google Scholar] [CrossRef] [Green Version]
  48. Verhoeven, G.; Nowak, M.; Nowak, R. Pixel-level image fusion for archaeological interpretative mapping. In 8th International Congress on Archaeology, Computer Graphics, Cultural Heritage and Innovation (ARQUEOLÓGICA 2.0); Editorial Universitat Politècnica de València: València, Spain, 2016; pp. 404–407. [Google Scholar]
  49. De Reu, J.; Plets, G.; Verhoeven, G.; De Smedt, P.; Bats, M.; Cherretté, B.; De Maeyer, W.; Deconynck, J.; Herremans, D.; Laloo, P.; et al. Towards a three-dimensional cost-effective registration of the archaeological heritage. J. Archaeol. Sci. 2013, 40, 1108–1121. [Google Scholar] [CrossRef]
  50. Daponte, P.; De Vito, L.; Mazzilli, G.; Picariello, F.; Rapuano, S. A height measurement uncertainty model for archaeological surveys by aerial photogrammetry. Measurement 2017, 98, 192–198. [Google Scholar] [CrossRef]
  51. Remondino, F.; Barazzetti, L.; Nex, F.; Scaioni, M.; Sarazzi, D. UAV photogrammetry for mapping and 3d modelling–current status and future perspectives. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2011, 38, 25–31. [Google Scholar] [CrossRef] [Green Version]
  52. Verhoeven, G.; Vermeulen, F. Engaging with the Canopy—Multi-Dimensional Vegetation Mark Visualisation Using Archived Aerial Images. Remote Sens. 2016, 8, 752. [Google Scholar] [CrossRef] [Green Version]
  53. Bennett, R.; Welham, K.; Hill, R.A.; Ford, A. A Comparison of Visualization Techniques for Models Created from Airborne Laser Scanned Data. Archaeol. Prospect. 2012, 19, 41–48. [Google Scholar] [CrossRef]
  54. Fregonese, L.; Barbieri, G.; Biolzi, L.; Bocciarelli, M.; Frigeri, A.; Taffurelli, L. Surveying and Monitoring for Vulnerability Assessment of an Ancient Building. Sensors 2013, 13, 9747–9773. [Google Scholar] [CrossRef] [Green Version]
  55. Sevara, C.; Salisbury, R.B.; Totschnig, R.; Doneus, M.; Löcker, K.; Tusa, S. New discoveries at Mokarta, a Bronze Age hilltop settlement in western Sicily. Antiquity 2020, 94, 686–704. [Google Scholar] [CrossRef]
  56. Chiba, T.; Kaneta, S.I.; Suzuki, Y. Red relief image map: New visualization method for three dimensional data. The international archives of the photogrammetry, remote sensing and spatial information sciences. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 1071–1076. [Google Scholar]
  57. Inomata, T.; Pinzón, F.; Ranchos, J.L.; Haraguchi, T.; Nasu, H.; Fernandez-Diaz, J.C.; Aoyama, K.; Yonenobu, H. Archaeological Application of Airborne LiDAR with Object-Based Vegetation Classification and Visualization Techniques at the Lowland Maya Site of Ceibal, Guatemala. Remote Sens. 2017, 9, 563. [Google Scholar] [CrossRef] [Green Version]
  58. Kokalj, Ž.; Somrak, M. Why Not a Single Image? Combining Visualizations to Facilitate Fieldwork and On-Screen Mapping. Remote Sens. 2019, 11, 747. [Google Scholar] [CrossRef] [Green Version]
  59. Orengo, H.; Garcia-Molsosa, A. A brave new world for archaeological survey: Automated machine learning-based potsherd detection using high-resolution drone imagery. J. Archaeol. Sci. 2019, 112, 105013. [Google Scholar] [CrossRef]
  60. Dostal, C.; Yamafune, K. Photogrammetric texture mapping: A method for increasing the Fidelity of 3D models of cultural heritage materials. J. Archaeol. Sci. Rep. 2018, 18, 430–436. [Google Scholar] [CrossRef]
  61. Guarnieri, A.; Remondino, F.; Vettore, A. Digital Photogrammetry and TLS Data Fusion Applied to Cultural Heritage 3D Modeling. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 1–6. [Google Scholar]
  62. Yurtseven, H. Comparison of GNSS-, TLS- and Different Altitude UAV-Generated Datasets on the Basis of Spatial Differences. ISPRS Int. J. Geo-Inf. 2019, 8, 175. [Google Scholar] [CrossRef] [Green Version]
  63. Kadhim, I.; Abed, F. Investigating the old city of Babylon: Tracing buried structural history based on photogrammetry and integrated approaches. In Earth Resources and Environmental Remote Sensing/GIS Applications XII; SPIE: Bellingham, WA, USA, 2021. [Google Scholar] [CrossRef]
  64. Jaber, A.; Abed, F. The Fusion of Laser Scans and Digital Images for Effective Cultural Heritage Conservation. MSc Thesis, University of Baghdad, Baghdad, Iraq, 2020. [Google Scholar]
  65. Tzvetkov, J. Relief visualization techniques using free and open source GIS tools. Pol. Cartogr. Rev. 2018, 50, 61–71. [Google Scholar] [CrossRef] [Green Version]
  66. Kokalj, Ž.; Zakšek, K.; Pehani, P.; Čotar, K.; Oštir, K. Visualization of small scale structures on high resolution DEMs. In Proceedings of the EGU General Assembly Conference Abstracts, Vienna, Austria, 12–17 April 2015. [Google Scholar]
  67. Papasaika, H.; Poli, D.; Baltsavias, E. A framework for the fusion of digital elevation models. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 811–818. [Google Scholar]
  68. Tapete, D.; Casagli, N.; Luzi, G.; Fanti, R.; Gigli, G.; Leva, D. Integrating radar and laser-based remote sensing techniques for monitoring structural deformation of archaeological monuments. J. Archaeol. Sci. 2013, 40, 176–189. [Google Scholar] [CrossRef] [Green Version]
  69. Filzwieser, R.; Olesen, L.H.; Verhoeven, G.; Mauritsen, E.S.; Neubauer, W.; Trinks, I.; Nowak, M.; Nowak, R.; Schneidhofer, P.; Nau, E.; et al. Integration of Complementary Archaeological Prospection Data from a Late Iron Age Settlement at Vesterager—Denmark. J. Archaeol. Method Theory 2017, 25, 313–333. [Google Scholar] [CrossRef]
  70. Megahed, Y.; Shaker, A.; Yan, W.Y. Fusion of Airborne LiDAR Point Clouds and Aerial Images for Heterogeneous Land-Use Urban Mapping. Remote Sens. 2021, 13, 814. [Google Scholar] [CrossRef]
  71. Papasaika, H.; Baltsavias, E. Fusion of LIDAR and photogrammetric generated Digital Elevation Models. In Proceedings of the ISPRS Hannover Workshop on High-Resolution Earth Imaging for Geospatial Information, Hannover, Germany, 2–5 June 2009. [Google Scholar]
  72. Voltolini, F.; Rizzi, A.; Remondino, F.; Girardi, S.; Gonzo, L. Integration of non-invasive techniques for documentation and preservation of complex architectures and artworks. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2007, 36, 1–7. [Google Scholar]
  73. Forkuo, E.K. Automatic Fusion of Photogrammetric Imagery and Laser Scanner Point Clouds. Doctoral Dissertation, Hong Kong Polytechnic University, Hong Kong, China, 2005. [Google Scholar]
  74. Yang, M.Y.; Cao, Y.; McDonald, J. Fusion of camera images and laser scans for wide baseline 3D scene alignment in urban environments. ISPRS J. Photogramm. Remote Sens. 2011, 66, S52–S61. [Google Scholar] [CrossRef] [Green Version]
  75. Franceschi, M.; Martinelli, M.; Gislimberti, L.; Rizzi, A.; Massironi, M. Integration of 3D modeling, aerial LiDAR and photogrammetry to study a synsedimentary structure in the Early Jurassic Calcari Grigi (Southern Alps, Italy). Eur. J. Remote Sens. 2015, 48, 527–539. [Google Scholar] [CrossRef]
  76. Deiana, R.; Bonetto, J.; Mazzariol, A. Integrated Electrical Resistivity Tomography and Ground Penetrating Radar Measurements Applied to Tomb Detection. Surv. Geophys. 2018, 39, 1081–1105. [Google Scholar] [CrossRef]
  77. Elfadaly, A.; Abouarab, M.A.R.; El Shabrawy, R.R.M.; Mostafa, W.; Wilson, P.; Morhange, C.; Silverstein, J.; Lasaponara, R. Discovering Potential Settlement Areas around Archaeological Tells Using the Integration between Historic Topographic Maps, Optical, and Radar Data in the Northern Nile Delta, Egypt. Remote Sens. 2019, 11, 3039. [Google Scholar] [CrossRef] [Green Version]
  78. Lasaponara, R.; Masini, N.; Holmgren, R.; Forsberg, Y.B. Integration of aerial and satellite remote sensing for archaeological investigations: A case study of the Etruscan site of San Giovenale. J. Geophys. Eng. 2012, 9, S26–S39. [Google Scholar] [CrossRef]
  79. Sarris, A.; Papadopoulos, N.; Agapiou, A.; Cristina, M.; Hadjimitsis, D.G.; Parkinson, W.A.; Yerkes, R.W.; Gyucha, A.; Duffy, P.R. Integration of Geophysical Surveys, Ground Hyperspectral Measurements, Aerial and Satellite Imagery for Archaeological Prospection of Prehistoric Sites: The Case Study of Vésztő-Mágor Tell, Hungary. J. Archaeol. Sci. 2013, 40, 1454–1470. [Google Scholar] [CrossRef]
  80. Guyot, A.; Lennon, M.; Lorho, T.; Hubert-Moy, L. Combined Detection and Segmentation of Archeological Structures from LiDAR Data Using a Deep Learning Approach. J. Comput. Appl. Archaeol. 2021, 4, 1–19. [Google Scholar] [CrossRef]
  81. Altaweel, M.; Khelifi, A.; Li, Z.; Squitieri, A.; Basmaji, T.; Ghazal, M. Automated Archaeological Feature Detection Using Deep Learning on Optical UAV Imagery: Preliminary Results. Remote Sens. 2022, 14, 553. [Google Scholar] [CrossRef]
  82. Garcia-Molsosa, A.; Orengo, H.A.; Lawrence, D.; Philip, G.; Hopper, K.; Petrie, C.A. Potential of deep learning segmentation for the extraction of archaeological features from historical map series. Archaeol. Prospect. 2021, 28, 187–199. [Google Scholar] [CrossRef]
  83. Küçükdemirci, M.; Sarris, A. Deep learning based automated analysis of archaeo-geophysical images. Archaeol. Prospect. 2020, 27, 107–118. [Google Scholar] [CrossRef]
  84. Somrak, M.; Džeroski, S.; Kokalj, Ž. Learning to Classify Structures in ALS-derived Visualizations of Ancient Maya Settlements with CNN. Remote Sens. 2020, 12, 2215. [Google Scholar] [CrossRef]
  85. Davis, D.S.; Caspari, G.; Lipo, C.P.; Sanger, M.C. Deep learning reveals extent of Archaic Native American shell-ring building practices. J. Archaeol. Sci. 2021, 132, 105433. [Google Scholar] [CrossRef]
  86. Jamil, A.H.; Yakub, F.; Azizan, A.; Roslan, S.A.; Zaki, S.A.; Ahmad, S.A. A Review on Deep Learning Application for Detection of Archaeological Structures. J. Adv. Res. Appl. Sci. Eng. Technol. 2022, 26, 7–14. [Google Scholar] [CrossRef]
  87. Maxwell, A.E.; Pourmohammadi, P.; Poyner, J.D. Mapping the Topographic Features of Mining-Related Valley Fills Using Mask R-CNN Deep Learning and Digital Elevation Data. Remote Sens. 2020, 12, 547. [Google Scholar] [CrossRef] [Green Version]
  88. Al-Najjar, H.A.H.; Kalantar, B.; Pradhan, B.; Saeidi, V.; Halin, A.A.; Ueda, N.; Mansor, S. Land Cover Classification from fused DSM and UAV Images Using Convolutional Neural Networks. Remote Sens. 2019, 11, 1461. [Google Scholar] [CrossRef] [Green Version]
  89. Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015; Volume 1, pp. 448–456. [Google Scholar]
  90. Bjorck, J.; Gomes, C.; Selman, B.; Weinberger, K.Q. Understanding Batch Normalization. Adv. Neural Inf. Process. Syst. 2018, 31, 7694–7705. [Google Scholar]
  91. Lemley, J.; Bazrafkan, S.; Corcoran, P. Smart Augmentation Learning an Optimal Data Augmentation Strategy. IEEE Access 2017, 5, 5858–5869. [Google Scholar] [CrossRef]
  92. Soroush, M.; Mehrtash, A.; Khazraee, E.; Ur, J.A. Deep Learning in Archaeological Remote Sensing: Automated Qanat Detection in the Kurdistan Region of Iraq. Remote Sens. 2020, 12, 500. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Advanced archaeological techniques. There is a research gap in assessing these approaches together to detect archaeological remains. These approaches are discussed in this review based on previous studies.
Figure 2. Literature count (2010–2022) from the database Scopus (http://www.scopus.com/) (accessed on 15 January 2023) for archaeological studies that applied Remote Sensing (RS) standalone and combination approaches (fusion/integration) and Artificial Intelligence (AI) in preserving and identifying archaeological features. The figure illustrates a continuous, increasing trend in the application of LiDAR, photogrammetry, and AI techniques in archaeology from 2010 to 2022.
Table 1. The limitations of LS and photogrammetry.
Limitations | Photogrammetry | Laser Scanning
Accuracy | Centimetre accuracy (geotagged images). | Millimetre accuracy.
Modelling | Texturing and colouring are relatively better than LS. | Relatively better in penetrating and detecting features covered by dense vegetation.
Time | Data collection: depends on area coverage, number of exposures, overlap, and speed. Processing: with advanced methods (e.g., SfM), it might take less time than LS processing. | Data collection: scans thousands of points per second, but the time relies on area coverage and number of stations. Processing: it might take longer.
Cost (£) | >500, depending on drone and camera types. | Around 100,000.
Weight (g) | ~2000 | Around 14,000
Table 2. A meta-analysis of some previous archaeological studies for Remote Sensing (RS) standalone approaches—Laser Scanning (LS) and image-based photogrammetry. A minimal code sketch of how the visualization techniques (VATs) mentioned in the table can be derived from a DTM is provided immediately after the table.
Study | Archaeological Site | RS Data | Findings/Conclusions
[32] | Chun Castle, UK | LiDAR and aerial photogrammetry | (I) VATs derived from both LiDAR and photogrammetric data revealed archaeological features, such as huts/houses, linear features (possible paths), circular structures, and the castle well.
(II) In general, fewer archaeological remains were detected from the LiDAR data than from photogrammetry.
(III) The Red Relief Image Map (RRIM) of both data sources provided a higher level of detail than hillshade, aspect, and gradient raster images.
[31] | Cahokia Mounds, USA | LiDAR and aerial photogrammetry | (I) In some cases, photogrammetric data are an appropriate alternative to LiDAR data, specifically in areas with low vegetation coverage.
(II) Aerial photogrammetry is faster and costs less than a LiDAR survey in observing archaeological areas.
(III) Photogrammetry is relatively better for interpreting archaeological data due to its capability of generating true-colour mosaics.
[45] | Beaufort County, South Carolina | LiDAR data | (I) Revealed 160 previously undetected mounds.
[44] | Barwhill in Scotland | LiDAR data | (I) Some archaeological remains (e.g., Roman roads and water drainage) were identified.
(II) Illumination effects in the LiDAR-derived hillshade (1 m spatial resolution) produced a distorted raster, which obscured some archaeological remains.
[29] | Lamassu and Sargon II, the king of Assyria, Iraq | TLS and terrestrial photogrammetry | (I) Two TLS registration methods (LM-ICP and NN-ICP) were examined; the average errors were 0.004 m and 0.003 m for NN-ICP and LM-ICP, respectively.
[13] | The Tholos of Delphi, Greece | Close-range photogrammetry, TLS, GNSS | (I) A 3D map of the ancient structures was created.
[28] | Palace Bridge, Russia | TLS data | (I) The draw spans of the bridge structure were reconstructed by creating 3D models.
(II) TLS point clouds provided complete detail for modelling the bridge structure.
[27] | Uxbenká site core architecture, Toledo District, Belize | LiDAR data | (I) Hillshade derived from LiDAR data (1 m spatial resolution) is, in some cases, a less robust method for revealing small structures when only LiDAR data are applied, whereas a gradient raster is relatively more effective in such cases.
[26] | An ancient building (Cathedral St. Nikolai), Germany | TLS, aerial photogrammetry, and total stations | (I) The standard deviations between the models generated from TLS and photogrammetry are not significant (between 0.03 m and 0.09 m), and the differences in the overlap of the two models, determined with CloudCompare algorithms, range between 0.02 m and 0.03 m.
[25] | Cotehele Quay, Cornwall, UK | LiDAR, TLS, and aerial images | (I) A realistic 3D model of Cotehele Quay was created.
(II) Spatial data on coastal change were translated into digital formats accessible to general audiences.
(III) Mixed-media films were designed to be used for climate and coastal change communications.
[24] | An old building constructed in 1874, Germany | Photogrammetry, TLS, total stations | (I) Photorealistic models were generated for digital visualization and reconstruction.
[23] | The southern part of Devil’s Furrow, Czech Republic | LiDAR data | (I) Some archaeological features, such as tracks, pathways, and erosion furrows, were detected and digitally preserved through various VATs, e.g., hillshade, gradient, and aspect images derived from the LiDAR DTM.
[54] | An ancient building (Palazzo del Capitano), Italy | TLS data | (I) The ancient building was observed and monitored.
[21] | A cultural site (Sint-Baafs Abbey), Belgium | TLS, photogrammetry, and total stations | (I) 3D models of the AOI were created.
(II) The horizontal and vertical accuracy of the TLS is two times higher than that of terrestrial photogrammetry.
[21] | Pinchango Alto, south of Lima, Peru | TLS and UAV imagery | (I) The standard deviation between the models generated from TLS and photogrammetric data was 6 cm, and the mean difference was less than 1 cm; these differences result from occlusions in both datasets.
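For illustration, the following is a minimal sketch of how the hillshade, gradient (slope), and aspect VATs referenced in Table 2 can be derived from a LiDAR DTM using NumPy. The function, the illumination defaults, and the synthetic DTM are assumptions for demonstration only and do not reproduce the processing chains of the reviewed studies.

```python
# Minimal sketch of three common Visualization Techniques (VATs) -- hillshade,
# gradient (slope), and aspect -- computed from a DTM held in a NumPy array.
import numpy as np

def vat_rasters(dtm, cell_size=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Return (hillshade, slope_deg, aspect_deg) rasters for a 2D DTM array."""
    # First-order elevation gradients along the row (y) and column (x) axes.
    dz_dy, dz_dx = np.gradient(dtm, cell_size)

    slope = np.arctan(np.hypot(dz_dx, dz_dy))   # slope angle, radians
    aspect = np.arctan2(dz_dy, -dz_dx)          # aspect, trigonometric convention

    # Illumination geometry (default: light from the north-west, 45 deg above horizon).
    zenith = np.deg2rad(90.0 - altitude_deg)
    azimuth = np.deg2rad((360.0 - azimuth_deg + 90.0) % 360.0)

    shading = (np.cos(zenith) * np.cos(slope)
               + np.sin(zenith) * np.sin(slope) * np.cos(azimuth - aspect))
    hillshade = np.clip(255.0 * shading, 0, 255).astype(np.uint8)

    return hillshade, np.degrees(slope), np.degrees(aspect)

# A synthetic 1 m resolution DTM stands in for a real LiDAR-derived terrain model,
# which would normally be read from a GeoTIFF (e.g., with rasterio).
dtm = np.random.default_rng(42).random((256, 256)) * 3.0
hillshade, slope_deg, aspect_deg = vat_rasters(dtm, cell_size=1.0)
```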
Table 3. A meta-analysis of some studies that applied Remote Sensing (RS) combination approaches—Laser Scanning (LS) and image-based photogrammetry. A minimal code sketch of a typical point cloud registration and fusion workflow is provided immediately after the table.
Study | Archaeological Site | Combination Approach | Findings/Conclusions
[64] | The Lady of Hatra (indoor statue), Al-Mustansiriya School, and Baghdad Qushla Tower (outdoor statues), Iraq | Fusing TLS and digital aerial images | (I) 3D models of the indoor and outdoor statues were created from TLS and photogrammetry.
(II) Photogrammetry provides a comparatively denser, smoother, and more detailed model of the indoor statue than TLS.
(III) The TLS model of the outdoor statues has a higher spatial resolution than the photogrammetric model.
(IV) The fusion of the two datasets filled occlusions, produced more detail by improving data density, and reduced the roughness of the TLS data.
[40] | Historic churches in Georgia | Fusing TLS with terrestrial and aerial photogrammetry | (I) Both TLS and photogrammetry supply similar outcomes, but when the two datasets are fused, a more complete 3D model is generated.
(II) Aerial photogrammetry recorded the tower and roof of the building, which were not covered by the terrestrial photographs or the TLS.
(III) Applying the fusion approach through advanced software (e.g., RealityCapture) may save processing time and result in high-quality models.
[58] | Chactún area in Mexico, Celtic fields in the Netherlands, and Julian Alps in Slovenia | Integrating VATs derived from LiDAR (same sensor) | (I) Combining visualization images can enhance visibility and preserve the physical characteristics of the individual images.
(II) The integrated outcome does not create artificial artifacts.
(III) Applying a single visualization image is likely to miss valuable traces in the archaeological areas.
[3] | Hound Tor Deserted Medieval Village, south-west England | Integrating LiDAR data with photogrammetry | (I) An enhanced, detailed, and precise 3D model was produced from the integration approach.
(II) Integration enhances the quality of the DSM/DTM created from low-resolution (1 pixel/m2) LiDAR data.
(III) Various types of remains are digitised, such as farm fences, debris of buildings, ridges, and furrows in the study area.
(IV) The main limitation is that some parts of the study area were not recorded by the SfM method due to dense vegetation.
[75] | Mount Cornello, Southern Alps, Italy | Integrating aerial LiDAR and photogrammetric models | (I) Image point clouds produced a relatively better 3D textured model than LiDAR point clouds.
(II) Aerial LiDAR provides data on fault traces/geologic boundaries in areas covered with vegetation.
(III) The integration of two models derived from airborne LiDAR and photogrammetric data results in a complete 3D model.
[1] | The temple of Heliopolis, Egypt, and Hirsau Abbey, Germany | TLS and terrestrial photogrammetry | (I) Combining synthetic images derived from TLS with digital images is an effective solution to overcome the limitations of the standalone data.
(II) The combination approach resolved several issues, including occlusions in the TLS point clouds, and provided 3D models with a higher level of detail.
[61] | Villa Giovanelli Colonna, a historical palace in Italy | Integrating TLS and terrestrial photogrammetric models | (I) The main structures of the palace (porch and façades) were modelled by image-based photogrammetry, while fine details (staircase, turrets, and statues) were modelled by TLS.
(II) TLS point clouds need to be optimised to create adequately dense datasets.
(III) Vegetation and shadows degraded some outcomes of the image-based modelling.
(IV) A complete and detailed 3D model was generated from the combination of the two RS datasets.
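As a hedged illustration of the fusion workflows summarised in Table 3, the sketch below registers an SfM photogrammetric point cloud to a TLS point cloud with ICP and merges the two clouds using the open-source Open3D library. The file names, voxel size, and distance threshold are placeholder assumptions rather than values taken from any of the reviewed studies.

```python
# Illustrative only: registering an SfM photogrammetric point cloud to a TLS
# point cloud with ICP and merging the two, using the open-source Open3D library.
import numpy as np
import open3d as o3d

VOXEL = 0.05  # metres; assumed working resolution for alignment

tls = o3d.io.read_point_cloud("tls_scan.ply")          # reference (target) cloud
sfm = o3d.io.read_point_cloud("sfm_dense_cloud.ply")   # cloud to be aligned (source)

# Downsample and estimate normals so that point-to-plane ICP has surface information.
tls_ds = tls.voxel_down_sample(VOXEL)
sfm_ds = sfm.voxel_down_sample(VOXEL)
for cloud in (tls_ds, sfm_ds):
    cloud.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=5 * VOXEL, max_nn=30))

# Refine an initial guess (identity here; in practice a coarse alignment from
# surveyed targets, GNSS, or feature matching is normally obtained first).
icp = o3d.pipelines.registration.registration_icp(
    sfm_ds, tls_ds, 2 * VOXEL, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPlane())
print("ICP fitness:", icp.fitness, "inlier RMSE:", icp.inlier_rmse)

# Apply the refined rigid-body transform to the full-resolution SfM cloud and
# merge, so that occlusions in one dataset can be filled by the other.
sfm.transform(icp.transformation)
fused = tls + sfm
o3d.io.write_point_cloud("fused_cloud.ply", fused)
```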
Table 4. Summarising some of the archaeological studies that applied Deep Learning—Convolutional Neural Networks (DL-CNN) models. A minimal code sketch of a CNN classifier for VAT raster patches is provided immediately after the table.
Study | Archaeological Site | Data Source | Findings/Conclusions
[30] | Arran in Scotland, UK | LiDAR | (I) The archaeological area was mapped automatically.
(II) Three types of archaeological monuments (roundhouses, cairns, and shieling huts) were classified.
[83] | Demetrias site, Greece | GPR | (I) Anomalies were identified.
[92] | Qanat systems of Erbil, Kurdistan Region of Iraq | CORONA satellite imagery | (I) Qanat shafts were detected.
[84] | Ancient Maya settlements, Mexico | LiDAR | (I) Various types of ancient structures (buildings, terrain, aguadas, and platforms) were classified and distinguished; the overall accuracy exceeded 95%.
(II) DL-CNN models using VATs without the hillshade raster performed relatively better than models that included the hillshade raster.
(III) VATs derived from LiDAR are effective datasets for DL-based classification.
[80] | Tumulus du Moustoir site, France | LiDAR | (I) The DL-CNN accurately and semi-automatically identified and characterized archaeological anomalies.
[85] | Beaufort, Charleston, and Georgetown Counties, South Carolina, USA | LiDAR, SAR, multispectral | (I) The detection accuracy did not exceed 77%.
(II) Over 100 shell rings were detected.
(III) The results support preserving cultural deposits and clarifying archaeological records.
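To make the DL-CNN workflows in Table 4 more concrete, the following minimal PyTorch sketch defines a small CNN that classifies LiDAR-VAT raster patches into archaeological structure classes. The architecture, class labels, patch size, and random training batch are illustrative assumptions; the reviewed studies typically use deeper, purpose-built networks (e.g., Mask R-CNN or segmentation models) trained on labelled site data.

```python
# Minimal, illustrative CNN for classifying LiDAR-VAT raster patches into
# archaeological structure classes (assumed labels for demonstration only).
import torch
import torch.nn as nn

NUM_CLASSES = 3   # e.g., building, platform, background terrain (assumed)
IN_CHANNELS = 3   # e.g., stacked hillshade, slope, and aspect VAT bands (assumed)

class VATPatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(IN_CHANNELS, 16, kernel_size=3, padding=1),
            nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, NUM_CLASSES))

    def forward(self, x):
        return self.head(self.features(x))

model = VATPatchClassifier()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: 8 random 64x64 patches with random labels; real studies would
# use labelled patches extracted from LiDAR-derived VAT rasters.
patches = torch.rand(8, IN_CHANNELS, 64, 64)
labels = torch.randint(0, NUM_CLASSES, (8,))

for _ in range(3):                  # a few illustrative training steps
    optimiser.zero_grad()
    loss = loss_fn(model(patches), labels)
    loss.backward()
    optimiser.step()
```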
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
