Are measured InSAR displacements a function of the chosen processing method?

The benefits of InSAR to the civil engineering industry have been demonstrated on many occasions; however, there is still limited uptake by end-users, due to perceived differences between data providers and uncertainty around how to interpret results. This paper critically compares three datasets for London: Radarsat-2 (RS2) from 2011 to 2015, TerraSAR-X (TSX) from 2011 to 2017, and Sentinel-1 (STL1) from 2015 to 2017. Two of the datasets (TSX and RS2) were processed by commercial data providers, while the STL1 data were processed by the authors using ENVI® SARscape®. The results show an inverse relationship between the Pearson Correlation Coefficient and the absolute total displacement of Persistent Scatterers (PS). There is a strong correlation between datasets for total displacements greater than 5 mm, but a weak or no correlation in the 0–3 mm range. Consequently, standard commercial InSAR datasets, processed with no a priori knowledge of the area of interest, have error margins of 3–5 mm but correctly detect all deformation phenomena exceeding this threshold. RS2 and TSX both capture the spatial extent of the investigated area of dewatering-induced subsidence; however, STL1 measures a much broader, less pronounced zone of heave than TSX.


PSI algorithms and previous comparison studies
Over the last 30 years, many algorithms have been developed for PSI time series analysis: Permanent Scatterer InSAR (PSInSAR®) (Ferretti et al. 2000), Small BAseline Subset (SBAS) (Berardino et al. 2002; Lanari et al. 2004), Interferometric Point Target Analysis (IPTA) (Werner et al. 2003), Stanford Method for Persistent Scatterers (StaMPS) (Hooper et al. 2004), Delft Persistent Scatterer Interferometry (DePSI) (Kampes 2005), Coherent Pixels Technique (CPT) (Blanco-Sanchez et al. 2008), Stable Points Network (SPN) (Duro et al. 2004), SqueeSAR® (Ferretti et al. 2011) and Quasi Persistent Scatterers (Perissin and Wang 2012). These algorithms have both theoretical and practical differences, such as whether they rely on Persistent Scatterers (PS) or Distributed Scatterers (DS). PS are coherent targets exhibiting high phase stability over the entire observation period, such as buildings, rocky outcrops and train tracks. DS are contiguous point clusters with a lower individual amplitude and coherence than PS, but collectively have an adequate amplitude and coherence for reliable measurement, such as areas of short vegetation and deserts (Ferretti et al. 2011). The algorithms are outlined in Osmanoglu et al. (2016).
In order to assess the precision (process validation) and accuracy (product validation) of PSI techniques, a number of PSI validation activities were carried out during the early 2000s through the Terrafirma project, part of the GMES (Global Monitoring for Environment and Security) programme run by the European Space Agency (ESA). The Terrafirma project ran from 2003 to 2012 (GMES Terrafirma 2010), and GMES was renamed Copernicus in 2012. Two major validation projects took place: PSIC4 (Persistent Scatterer Interferometry Codes Cross Comparison and Certification for long term differential interferometry) (Raucoules et al. 2009) and the Terrafirma Validation project (Adam et al. 2009; Crosetto et al. 2009a).
During the PSIC4 project, eight teams, both commercial and academic, processed ERS and ENVISAT images to produce their own PSI deformation products (Raucoules et al. 2009). The teams processed the data anonymously and without any prior knowledge of the type and location of deformation in the test area. The detected movements were then compared with external validation measurements such as levelling points. The results showed that the intercomparison of subsidence velocities between teams had a standard deviation of 0.6 to 1.9 mm a⁻¹, while validation of PS velocity against levelling showed a standard deviation of between 5 and 7 mm a⁻¹ (Raucoules et al. 2009). During the Terrafirma validation project, both the product and the process were validated. The product validation utilized ground truth information and assessed the final geocoded displacement data, whereas the process validation compared the intermediate data in slant range geometry. Crosetto et al. (2009a) describe the main outputs of the Terrafirma validation project at two sites in the Netherlands: Alkmaar and Amsterdam. They found that the standard deviation of velocity between teams was 0.4–0.5 mm a⁻¹, and validation with levelling found RMS error ranges of 1.0–1.5 mm a⁻¹ for ERS and 1.2–1.3 mm a⁻¹ for ENVISAT. Adam et al. (2009) determined a lower bound for the deviation of PSI deformation estimates from ERS or ENVISAT data, for an ideal scatterer (coherence of one) undergoing linear deformation, expressed as a function of the number of interferograms. There have been several other examples of comparisons between InSAR and ground-based measurements: Casu et al. (2006) compared SBAS results from ERS data with GPS and levelling measurements and found a standard deviation of 5 mm on the line-of-sight deformation time series in both SAR/levelling and SAR/GPS comparisons. They also observed an increase in the standard deviation with distance from the reference point, with an estimated variation of 0.05 mm km⁻¹. Heleno et al. (2011) describe the results from two independent PSI processing chains and validation with levelling and GPS data, as part of the Terrafirma project. In London, UK, there are several studies comparing PSI results with ground-monitoring data from the large-scale tunnelling project Crossrail, demonstrating how PSI can accurately capture the settlement trough associated with tunnelling in an urban area (Robles et al. 2016; Giardina et al. 2018).
In addition to comparisons of PSI results with ground-based measurements, there have been several studies that compare PSI processing chains. Crosetto et al. (2016) provides a review of the main PSI algorithms and Osmanoglu et al. (2016) describes the different algorithms for time series analysis of InSAR data. Sousa et al. (2011) compared two PSI approaches, DePSI (Delft PS-InSAR processing package) and StaMPS (Stanford Method for Persistent Scatterers) and Yan et al. (2012) compared PS and SBAS processing results for Mexico City. These comparisons focus on the differences between algorithms that use phase variation in time compared to variation in space and the influence on point density and distribution. Herrera et al. (2009) compared the Stable Point Network (SPN) and Coherent Pixel Technique (CPT) in the city of Murcia, Spain and observed an absolute difference of the deformation time series of 6 mm.
In order to understand why there are so many varying algorithms, some InSAR theory will be briefly discussed.

InSAR theory
InSAR allows measurement of ground deformation by detecting a change in phase between two or more SAR images. The interferometric phase is the sum of several contributions (Osmanoglu et al. 2016):

Δφ = φ_flat + φ_topo + φ_orbit + φ_defo + φ_tropo + φ_iono + φ_scat + φ_noise

where Δφ is the phase change between SAR acquisitions, φ_flat the flat-earth phase, φ_topo the topographic phase contribution, φ_orbit the phase error induced by errors in orbit information, φ_defo the phase contribution due to ground deformation, φ_tropo the tropospheric phase contribution, φ_iono the ionospheric phase contribution, φ_scat the phase contribution related to the scatterer's electrical properties, and φ_noise the combined noise phase.
The deformation phase (φ_defo) is the contribution an end-user is interested in; therefore, the other components must be isolated, modelled and removed.
One component that is likely to cause differences between data providers and software is the method of removing the atmospheric contributions, φ_iono and φ_tropo. Differences in atmospheric conditions between the master and slave images contribute to the interferometric phase. Variations in electron density in the ionosphere (upper atmosphere, c. 85 to 600 km altitude) cause slowly varying, large-scale distortions but have only a minor effect on C- and X-band at mid-latitudes (such as the UK). Ionospheric effects, including line-of-sight (LOS) displacement error and azimuth pixel shift, are about one-sixteenth as strong on C- and X-band as on L-band (Eineder and Bamler 2014; Liang et al. 2019). The troposphere (lower atmosphere, sea level to c. 10 km altitude) has a more variable effect due to its high spatiotemporal variability. The tropospheric phase component is commonly referred to as the atmospheric phase screen (APS). Factors affecting the APS include: temperature, partial pressure of water vapour, pressure of dry air, water droplets, electron density and radar wavelength (Eineder and Bamler 2014). These factors affect the signal propagation velocity and can cause phase artefacts.
To remove the APS, two basic assumptions are made: signals highly correlated in space but uncorrelated in time are atmospheric, whereas signals correlated in time and to a varying degree in space are deformation. The APS is modelled and then removed to leave the deformation signal. The parameters, filtering mechanisms and models used to remove the APS will differ between each data provider and software. APS removal is a growing research field, with some recent papers including Biondi et al. (2019) and Manzoni et al. (2020).
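The APS separation logic described above can be sketched in a few lines. This is a deliberately crude illustration of the two assumptions (atmosphere is uncorrelated in time but correlated in space), not any provider's actual filtering chain; the boxcar kernel size and the use of a simple temporal mean are arbitrary choices for the sketch.

```python
import numpy as np

def estimate_aps(residual_phase, spatial_kernel=5):
    """Crude APS estimate from a stack of residual phase images (t, y, x).

    Step 1: remove the temporal mean at each pixel (signals correlated in
    time are treated as deformation and left out of the APS estimate).
    Step 2: smooth each date spatially with a boxcar, because atmosphere
    is assumed spatially correlated. A toy sketch of the filtering logic
    only, not any commercial algorithm.
    """
    temporal_hp = residual_phase - residual_phase.mean(axis=0)
    k = spatial_kernel
    pad = k // 2
    t, ny, nx = temporal_hp.shape
    aps = np.empty_like(temporal_hp)
    padded = np.pad(temporal_hp, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    for i in range(ny):
        for j in range(nx):
            # boxcar average over the k-by-k spatial neighbourhood, per date
            aps[:, i, j] = padded[:, i:i + k, j:j + k].mean(axis=(1, 2))
    return aps
```

Real processing chains use carefully tuned temporal high-pass and spatial low-pass filters (and, increasingly, weather-model corrections), which is one reason results differ between providers.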
All multi-temporal PSI measurements are made relative to a reference point or network of points; hence the reliability and stability of the reference points are important. The location and number of reference points will be unique to each data provider and software. When there is some prior knowledge of the area to be studied, the reference point is ideally placed at a location considered stable in terms of deformation. Depending on the size of the area, more than one reference point may be used. For example, the ENVI® SARscape® PSI processing algorithm follows the approach that having a reference point every 5 km² is best for removing atmospheric effects (Sarmap 2014).
PSI measurements are also relative in time with respect to the first observation. The temporal frequency of measurements affects how well deformation phenomena can be modelled. If there is a long time period between image acquisitions, deformation phenomena may be missed, and the likelihood of decorrelation increases. A shorter wavelength, e.g. X-band, and a shorter repeat cycle will result in a greater number of PS. Coherence also affects the PS point density, since a lower coherence threshold will yield more PS points but at a higher time series noise level. Shorter wavelengths are more sensitive to displacement magnitude, because a given displacement corresponds to a larger fraction of the X-band wavelength, but they are also more sensitive to atmospheric effects.
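The wavelength sensitivity mentioned above follows from the standard two-way InSAR phase relation, Δφ = 4πd/λ. The short sketch below (the function name is our own) shows why the same 5 mm of LOS motion produces a larger phase signal, and sits closer to the λ/4 ambiguity limit, at X-band than at C-band.

```python
import numpy as np

# Standard two-way InSAR relation: phase change = 4*pi*d / wavelength.
def phase_for_displacement(d_m, wavelength_m):
    """Interferometric phase (radians) produced by a LOS displacement d_m."""
    return 4 * np.pi * d_m / wavelength_m

# 5 mm LOS displacement seen by C-band (5.6 cm) and X-band (3.1 cm)
d = 5e-3
phi_c = phase_for_displacement(d, 0.056)
phi_x = phase_for_displacement(d, 0.031)
# The X-band phase is larger for the same motion: more sensitive, but also
# closer to the ambiguity limit of lambda/4 (7.75 mm for X-band), where the
# phase reaches pi between two acquisitions.
```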
The 3D geolocation accuracy of PS is significantly lower than the displacement precision, so it is often impossible to link PS to specific objects, causing some ambiguity in any potential interpretations. Precise location of PS can be established (Ferretti et al. 2000), but it is rarely achievable in large area processing. The location and stability of the reference point can also affect the elevation accuracy of all the PS. The elevation of PS is estimated from the phase; therefore the processing algorithm has an influence on geolocation precision. Elevation precision is also dependent on baseline: the shorter the baseline, the lower the elevation precision. If the elevation of the reference point is unknown, a value is extracted from an external Digital Elevation Model (DEM), which also has an error associated with value and location accuracy. An error in the nominal elevation of the reference point will produce a corresponding bias in the measured absolute elevations of all other PS, which will further lead to a miscalculation of the range of each PS and the associated calculation of their ground range geolocation (east-west coordinate) (Montazeri et al. 2018). Furthermore, errors can be introduced if the reference point is not stable in time (Montazeri et al. 2018). Geocoding accuracy can be improved by using a 'bare-Earth' Digital Surface Model (DSM) in combination with ground control points (GCPs) (Yang et al. 2016; Montazeri et al. 2018) and/or an ortho-rectified aerial image (Raucoules et al. 2009). The resolution cell of a SAR sensor is dependent on the satellite and sensor characteristics. The return signal recorded in a SAR image is the vector sum of all the scatterers within a resolution cell, which is why it is difficult to attribute the location of a PS within a resolution cell.
The spatial resolution of the sensor also affects the geolocation accuracy; usually, the shorter the wavelength, the smaller the ground resolution cell, providing greater accuracy in the PS location.
In summary, there are many variables and complexities in an InSAR processing chain that could result in differences in measured InSAR displacements. Here we present results from a rare opportunity to compare results from two InSAR data service providers and one dataset processed in-house using commercial software, from three different satellite sensors and two different wavelengths.

Datasets
London is the chosen study area because of the availability of SAR data acquired over a substantial period of time. Its urbanized nature provides an abundance of PS points, with continual construction projects plus frequent natural and anthropogenic changes in groundwater levels providing plenty of displacements at the millimetric scale to be observed.
Three datasets have been used for this study: Radarsat-2 (RS2), TerraSAR-X (TSX) and Sentinel-1 (STL1). All three datasets were processed independently and not for the purpose of this study. Each dataset covers a different time period (Fig. 1). The TSX data partially overlap both the RS2 and STL1 datasets and hence are used as the baseline for comparison with both. The TSX data have been validated with levelling data from the Crossrail Elizabeth Line monitoring at Bond Street (Bischoff 2019).
The TSX and RS2 datasets are from opposing orbital passes, descending and ascending respectively, and the TSX dataset has more than double the number of processed images over the time period studied: 107 v. 41 (Fig. 1). The lower number of included images and less frequent temporal sampling for RS2 would be expected to result in a higher uncertainty level for the RS2 time series data. The STL1 and TSX comparison involves data from descending passes, where there is a 0.2° difference in incidence angle, and they provide a total of 45 and 69 images respectively. Both the RS2 and STL1 data are from C-band satellites, with a wavelength of 5.6 cm, whereas TSX is a higher resolution X-band satellite with a wavelength of 3.1 cm. Higher resolution satellites have an increased PS density at the same coherence, because at a higher resolution and shorter wavelength, smaller scatterers show a higher signal-to-clutter ratio. Further details of the datasets are listed in Table 1. All the displacement data used are measured along the LOS.
Each of the three datasets has a slightly different geographic extent, and the area of study is in the region of overlap between them (Fig. 2). Between 2011 and 2015, a large area of subsidence is observed around area (1) in Figure 2, which is caused mainly by dewatering at the Limmo Shaft as part of the Crossrail project (Semertzidou 2016;Bischoff 2019). Post-dewatering recharge heave is clearly observable between 2015 and 2018. Irregularities in the surface displacements at (2) are described in Scoular et al. (2020) and Newman et al. (2017). The Crossrail tunnel settlement trough can also be seen at (3) in Figure 2a and b.

Methods
To prepare the data for comparison, the datasets had to be rebased to the same start and end dates, because displacements are relative to the first measurement. For the RS2-TSX comparison, the first acquisitions are within one week of each other (25/06/2011 and 21/06/2011 respectively), and the last date with acquisitions from both is 11/08/2015. For the TSX and STL1 data, the start and end dates with corresponding acquisitions from both satellites are 03/05/2015 and 28/04/2017. The first common date is set as zero (21/06/2011 for RS2-TSX and 03/05/2015 for STL1-TSX), the original value of displacement on that date is subtracted from every other date, and the average velocity over the new time period is calculated. Because of the differing temporal frequency of acquisition and the irregular spatial distribution and locations of PS points in each dataset, the datasets were linearly interpolated, since linear models are often used in PS processing, although this is not always the case. No adjustments have been made to account for the differing incidence angles and viewing geometry of each satellite, because only the magnitude of LOS displacements is compared. The lateral component of motion in London has been assessed in Mason et al. (2015) and in East London is c. 1 ± 1.6 mm a⁻¹ eastward averaged over c. 25 km², between 2002 and 2010. For STL1 and TSX, which both have a descending viewing geometry and a difference in incidence angle of 0.2°, a 1 mm a⁻¹ eastward movement (with no vertical movement) would lead to a negligible 0.007 mm a⁻¹ difference in the magnitude of LOS displacement. For RS2 and TSX, the difference in magnitude of LOS displacement for 1 mm a⁻¹ of eastward motion is 0.18 mm a⁻¹.
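The rebasing and interpolation steps described above can be expressed compactly with pandas. This is an illustrative sketch, assuming each PS time series is a displacement Series indexed by acquisition date; the dates, resampling interval and function name are our own, not the processing actually used.

```python
import pandas as pd

def rebase_and_interpolate(series, start, end, freq="6D"):
    """Rebase a displacement time series to zero at `start` and resample it
    onto a regular date grid by linear interpolation in time.

    `series` is a pandas Series of LOS displacement (mm) indexed by
    acquisition date.
    """
    s = series.loc[start:end].copy()
    s = s - s.iloc[0]                      # first common date set to zero
    grid = pd.date_range(start, end, freq=freq)
    # union of original and grid dates, then interpolate linearly in time
    s = s.reindex(s.index.union(grid)).interpolate(method="time")
    return s.reindex(grid)

# toy example with hypothetical acquisition dates
dates = pd.to_datetime(["2015-05-03", "2015-05-15", "2015-06-08"])
ts = pd.Series([2.0, 3.0, 1.0], index=dates)
rb = rebase_and_interpolate(ts, "2015-05-03", "2015-06-08", freq="12D")
```

The same grid can then be used for every dataset, so time series from different satellites become directly comparable date by date.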
The relationship between the correlation coefficient and the absolute total displacement (magnitude) is tested to assess and quantify the value of displacement below which the PSI results deviate between datasets. This analysis was performed using randomly selected points within the geographic bounds of the area shown in Figure 3 (66 km²) and averaging all the PS points within a set radius around each of the random points. Initially 300 random points were selected and a 0.5 km radius applied around each one (sensitivity tests were conducted with 1.0 and 1.5 km radii). Although only 84 points are needed to meaningfully sample the entire area, 300 points were selected at random to better capture the entire area. The XYZ geolocation accuracy is of the order of several metres for all three datasets; hence it is difficult to compare datasets on a PS point by PS point basis (Dheenathayalan et al. 2013, 2014). PS on the ground will behave differently to those, for example, on a tall building, which may experience the effects of thermal dilation. By choosing a large radius and averaging >1000 points within it, variations in displacement caused by slight differences in PS location are averaged out. The interpolated time series of the two datasets (RS2-TSX and STL1-TSX) can then be compared for each of the 300 selected random points to find the Pearson Correlation Coefficient (PCC). PCC is a measure of the strength of a linear correlation between two variables and can take values from +1 to −1, where a PCC greater than 0.7 is considered a strong, 0.3–0.7 a moderate, and less than 0.3 a low correlation. The average total displacement is also calculated for each point in the dataset pairs to determine the relationship between PCC and absolute total displacement. Absolute displacement is used to account for both subsidence and heave, because the significance is in the overall magnitude of displacement, not the direction.
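A minimal sketch of the radius-averaging and correlation-classification steps, assuming PS coordinates in metres and rebased displacement arrays; the names and array layouts are illustrative.

```python
import numpy as np

def radius_averaged_series(coords, series, centre, radius):
    """Average the time series of all PS points within `radius` of `centre`.

    coords: (n, 2) array of PS easting/northing in metres;
    series: (n, t) array of rebased LOS displacements (mm), one row per PS.
    """
    d = np.linalg.norm(coords - centre, axis=1)
    return series[d <= radius].mean(axis=0)

def pearson(a, b):
    """Pearson Correlation Coefficient between two time series."""
    return np.corrcoef(a, b)[0, 1]

def strength(pcc):
    """Classify a PCC using the thresholds quoted in the text."""
    if pcc > 0.7:
        return "strong"
    if pcc >= 0.3:
        return "moderate"
    return "low"
```

For each of the 300 random centres, the two averaged series (one per dataset) are passed to `pearson`, and the result is classified with `strength`.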
For the displacement data, a positive PCC indicates that both PSI datasets register subsidence or heave together, whereas a negative PCC indicates that one dataset is measuring heave while the other is measuring subsidence, or vice versa. The location and time series of the random points with the highest, median and lowest PCC have been analysed in further detail. Time-series cross-sections were constructed to assess the spatiotemporal correlation, using the average velocity along a west to east cross-section line (A to A', Fig. 3). Cross-sections were constructed for each of the four datasets at every acquisition date (each line is a new date), and additionally cross-sections of mean displacement for each dataset. The PCC was calculated between the mean-displacement cross-section lines for both RS2-TSX and STL1-TSX.

Relationship between absolute total displacement and PCC
This relationship between correlation coefficient and absolute total displacement was first tested using average values from 0.5 km radius circles around the 300 randomly selected points within the 66 km² test area (Fig. 3). An inverse relationship of the form y = (a/x) + b, with a horizontal asymptote of 1, can be observed between PCC and absolute total displacement (Fig. 4a).
This inverse relationship is particularly clear for the RS2-TSX comparison (blue dots, Fig. 4) with the PCC increasing with absolute displacement between 0 and 5 mm and then stabilizing at a strongly correlated PCC >0.8 for displacements greater than 5 mm. This relationship is less clear for STL1-TSX (orange x symbols, Fig. 4). The maximum total displacement for the STL1-TSX (2015 to 2017) dataset is 4 mm and the strongest PCC observed is 0.66. The STL1-TSX point values follow the same trajectory as the RS2-TSX dataset so higher correlations would likely be observed if there had been a higher total displacement over that time period.
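Because y = (a/x) + b is linear in its parameters a and b, it can be fitted by ordinary least squares on 1/x. A sketch with synthetic data (the values of a and b here are hypothetical, chosen only so the asymptote is 1):

```python
import numpy as np

def inverse_model(x, a, b):
    """y = a/x + b, the functional form reported for PCC v. displacement."""
    return a / x + b

# synthetic points following the reported shape (asymptote b of 1)
disp = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0, 15.0, 20.0])
pcc = inverse_model(disp, -1.0, 1.0)           # hypothetical a, b

# least-squares fit: regress pcc on 1/disp, slope = a, intercept = b
a_fit, b_fit = np.polyfit(1.0 / disp, pcc, 1)
```

With noisy real PCC values the same regression gives the fitted asymptote b, i.e. the correlation level the datasets converge to at large displacements.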
The effect of varying radius size on the correlation is examined in Figure 4 and the same inverse relationship is observed with radii of 1.0 km and of 1.5 km. The number of points averaged in each radius size is shown in Table 2; the average number of points for a 0.5, 1.0 and 1.5 km radius are 1200, 4600 and 9800.
Using the 0.5 km radius data (Fig. 4a), it is evident that the areas with an absolute total displacement greater than 5 mm show a consistently strong correlation between datasets (>0.7). For the larger radii (1.0 km and 1.5 km, Fig. 4b, c), points with displacement greater than 4 mm have a consistently strong PCC. A moderate correlation (0.3 to 0.7) can be observed between 3 and 5 mm total displacement at all radii. Some areas with a total displacement <3 mm also have a strong or moderate correlation, but the coefficients are much more variable. For example, at a total displacement of 2.5 mm, some points for STL1-TSX have a negative PCC of −0.35, whereas others have a PCC of 0.65 for the same total displacement (orange x symbols, Fig. 4). For the RS2-TSX data, 39% of points have a strong correlation and 30% a moderate one; for STL1-TSX, there are no strong correlations and 23% are moderate (Table 3). The most strongly correlated areas in each dataset pair coincide at (1), the median correlated areas (2) appear to be just east of that zone (1.0 km apart), and the least correlated areas (3) are 3.5 km apart in each dataset: for RS2-TSX to the north of the area and for STL1-TSX to the NE.

Points with the highest, median and lowest PCC
The RS2 data have a gap in acquisitions between November 2012 and May 2013, and there are also missing TSX data between February 2013 and May 2013; it is during this period that the RS2 data show the biggest deviation from the TSX data at each of the three points. The RS2 data also deviate from the TSX data by more than 2 mm in August 2011, May 2014 and February 2015. Locations (1) and (2) (Fig. 5b, c) both show negative total displacements: c. −25 mm from both RS2 and TSX at (1), and −1 mm for TSX v. −4 mm for RS2 at (2). At location (3) (Fig. 5c) TSX records a positive total displacement of 1 mm whereas RS2 records a negative displacement of −0.5 mm; hence the correlation coefficient is negative.
The STL1 data are notably noisier than the TSX data, with many 1–2 mm amplitude shifts between positive and negative displacement. Many of these smaller peaks and troughs align with those in the TSX data but appear exaggerated. At location (1) both datasets have a positive displacement trend (heave) and follow a similar upward trend until October 2015, when TSX records an acceleration in heave; from September 2016 the trends are similar again. Total displacement is 6 mm for TSX and 3 mm for STL1. Location (2) is stable, with 1 mm cumulative displacement measured by TSX and 0.3 mm by STL1. Location (3) again has no substantial total displacement; however, TSX measures 1.5 mm of heave and STL1 −1.2 mm of subsidence, and therefore the correlation coefficient is negative.

Spatiotemporal variations in correlation
The spatiotemporal relationships of each of the four datasets are shown in Figure 6. PS velocity values are averaged within a 50 m buffer, at 50 m intervals along the west to east cross-section line (A to A', Fig. 3). The four cross-sections demonstrate how consistently the spatial extent of displacement is captured between the datasets. As well as capturing the wide-scale settlement between c. 2000 and 7000 m along the cross-section, both 6a and 6b additionally show sharp troughs of settlement at c. 450, 5200, 5400, 6330–6700 and 8300 m. The TSX 2015 to 2017 and STL1 data, 6c and 6d respectively, appear less similar. Figure 6c has an area of heave with a geographic extent similar to that of the settlement bowl in 6a and 6b, around 2000 to 7000 m. However, the STL1 data (6d) appear to have a broader, less pronounced area of heave, although many of the narrower peaks and troughs are still aligned, such as the peak at c. 2000 m and the trough at 6500 m, which is observed in all four datasets.
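The 50 m along-section averaging can be sketched as a simple binning operation, assuming PS points within the 50 m buffer have already been selected and their chainage (distance along A to A') computed; the function name and toy values are our own.

```python
import numpy as np

def profile_along_section(chainage, velocity, interval=50.0):
    """Average PS velocities in `interval`-metre bins along a cross-section.

    chainage: distance (m) of each PS along the section line (points beyond
    the buffer are assumed to have been filtered out already);
    velocity: LOS velocity of each PS (mm/a).
    Returns bin-centre distances and the mean velocity per bin (NaN where
    a bin contains no PS).
    """
    edges = np.arange(0.0, chainage.max() + interval, interval)
    idx = np.digitize(chainage, edges) - 1
    prof = np.full(len(edges) - 1, np.nan)
    for i in range(len(edges) - 1):
        sel = idx == i
        if sel.any():
            prof[i] = velocity[sel].mean()
    return edges[:-1] + interval / 2, prof
```

Running this per acquisition date gives the stacked cross-section lines; running it on the mean displacement gives the single profile per dataset compared in Figure 7.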
The average displacement (2011 to 2015 and 2015 to 2017) of the four datasets along the cross-section line is shown in Figure 7. The RS2 and TSX data closely align until about 8000 m and overall have a moderate PCC of 0.64. The STL1 and TSX data align well between 0 and 2000 m and 6000–9000 m but deviate in between, with TSX measuring a more pronounced heave than STL1.

Table 3. Percentage of points in Figure 4 with a strong, moderate, low or negative correlation, using a 1.0 km radius

Discussion
An inverse relationship of the form y = (a/x) + b, with a horizontal asymptote of 1, has been found between absolute total displacement and PCC, with a threshold of c. 5 mm: correlations are strong (>0.7) for total displacements greater than this value and much more variable below it.
Reliable statistical correlations are difficult to perform on such variable data. To enable comparison, the datasets are rebased and interpolated, but the accuracy of the interpolation depends on the temporal frequency of acquisitions, with the real timing of ground movements appearing at different times in the data. For example, if 5 mm of subsidence occurred on 12 January, with the next TSX acquisition on 15 January and the next RS2 acquisition on 28 January, the troughs would appear offset in the graphs of displacement against time and hence appear poorly correlated. Gaps in acquisition times affect the correlations markedly. Non-linear deformation would also be poorly represented in the interpolated data.
Low correlations for the 0–3 mm range of ground movements are not surprising given the number of variables between datasets, such as reference points, incidence angles, temporal frequency of images and different processing algorithms. Furthermore, movements in this range are within the typical error margins for time-series deformation (Colesanti et al. 2001; Raucoules et al. 2009; Crosetto et al. 2009b). For such strong correlations to occur for ground movements >5 mm in the RS2 and TSX datasets, which are from different satellites, wavelengths and geometries (ascending and descending), the horizontal component of motion must be minimal. However, the imaging frequency of TSX, more than three times that of RS2 in these datasets, means that smaller, higher-frequency deformations are absent from the RS2 dataset.
Differences in the mean displacement for the STL1-TSX cross-sections A to A' are apparent between 3000 and 6000 m (Fig. 6). A potential reason for this is that the reference point in the SARscape® processing, to which all displacement is relative, is in the area of heave; the reference point will therefore itself be displacing, leading to a measured heave lower than the true value.
A comparison of individual PS points at locations common to all datasets would be beneficial but is not possible because of the high density of reflectors resulting from London's urban nature. The many potential PS in a dense urban area, coupled with differing satellite geometries and slight differences in incidence angles, can affect which part of a building, road, railway track, etc., becomes a PS. Furthermore, the geolocation precision of the datasets, combined with the high density of PS, makes it impractical to attribute PS to particular objects, and they therefore cannot be compared on a point-by-point basis.
Buildings are moving constantly as a result of numerous factors, including ground movement (clay shrink-swell, subsidence, vibration, etc.), foundation failure, decay of the building fabric, moisture changes (causing materials to expand and contract), thermal movement, deformation under load, tree root growth and the absence of foundations in older buildings (Kelsey 1987). In London, there is often a background shrink-swell cycle produced in the London Clay, but isolating the amplitude of these signals within PS time series is challenging (Scoular et al. 2019). Because each satellite observes different PS points within the same area, systematic ground movements such as shrink-swell must be larger than the threshold of motion of an individual building to be detected. TSX will be more sensitive to these movements than RS2 or STL1 because of its shorter wavelength.
The datasets were not processed specifically to monitor the deformation phenomena in this area of East London. The data shown here are only a subset of much larger datasets. This is important to acknowledge because parameters can be adjusted during data processing, depending on the deformation of interest and size of the area being processed.

Conclusions
An inverse relationship between PCC and absolute total displacement is observed in PS points in London, UK. For displacements greater than ±5 mm, there is a strong correlation between datasets, but below 3 mm the different datasets are inconsistently correlated. This reflects the high number of variables involved in producing an end-user product, in terms of both the raw data (look direction, temporal and spatial resolution) and the processing methodology. Another factor is that all the data are relative, but relative to different points that may themselves be deforming, which leads to systematic differences between datasets. Furthermore, individual points, particularly those on infrastructure, may appear to move at random by a few millimetres due to thermal and moisture changes in the building fabric and the ground beneath.
The implications of these results are that standard commercial InSAR datasets have measurement errors within 5 mm, but deformation phenomena above this threshold will be correctly detected by all commercial software. The comparisons presented here are based on single datasets for one area and these may therefore be unrepresentative of the true precision of the algorithms. Variability in InSAR time-series accuracy may represent a barrier to user uptake and is a frequently cited criticism of the technique but ground-based conventional surveys made on multiple, separate occasions by different providers can also produce variable results.
Ground truth data were not available for this project, but other work (Luo et al. 2015; Robles et al. 2016; Bischoff et al. 2020) shows that reliable correlations with levelling and other data can be made. Such comparisons are challenging because in-situ measurements are also relative, to both a start date and a reference point or network, and so have systematic offsets. Furthermore, in-situ measurements (e.g. levelling) typically show vertical deformation, but PSI data measure along the line of sight (LOS), which is usually 20° to 40° from vertical. PSI displacements can be projected to vertical, either by assuming negligible horizontal motion, or by combining displacements from ascending and descending passes.
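Both projection options can be sketched as follows. The decomposition of ascending and descending LOS displacements into vertical and east-west components neglects the weak north-south sensitivity of near-polar orbits, and the sign convention in the comments is an assumption that should be checked against the data provider's convention.

```python
import numpy as np

def los_to_vertical(d_los, incidence_deg):
    """Project LOS displacement to vertical, assuming negligible
    horizontal motion: d_vert = d_los / cos(incidence)."""
    return d_los / np.cos(np.radians(incidence_deg))

def decompose_asc_desc(d_asc, d_desc, inc_asc_deg, inc_desc_deg):
    """Resolve vertical and east-west motion from ascending and descending
    LOS displacements, ignoring the small north-south sensitivity.

    Assumed sign convention: LOS positive towards the satellite,
    east component positive towards the east, both right-looking.
    """
    ta, td = np.radians(inc_asc_deg), np.radians(inc_desc_deg)
    # d_asc  = cos(ta)*d_up - sin(ta)*d_east   (ascending, looking east)
    # d_desc = cos(td)*d_up + sin(td)*d_east   (descending, looking west)
    A = np.array([[np.cos(ta), -np.sin(ta)],
                  [np.cos(td),  np.sin(td)]])
    d_up, d_east = np.linalg.solve(A, np.array([d_asc, d_desc]))
    return d_up, d_east
```

The decomposition requires both passes over the same period, which is one reason the TSX-RS2 pair (ascending plus descending) is informative despite its sparser C-band sampling.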
Many factors must be considered when deciding if and what type of PSI dataset is right for an engineering project: (a) Should the PSI dataset be processed specifically for the area of interest and deformation phenomena to be monitored, or for a wider area? This is particularly important with regards to the reference point chosen. If the aim is to directly compare PSI and ground-based measurement a common reference point or network should be used for both.
(b) The spatial and temporal characteristics of the expected deformation, relative to the acquisition interval and spatial resolution of the available datasets; the density of PS points required (per km²) and the precision of their geolocation.
(c) The temporal frequency of acquisition, which affects both PS density and processing time and costs.
(d) Whether LOS motion is suitable, or if ascending and descending passes are needed to resolve vertical and east-west movements.
(e) The budget available to spend on the dataset and if appropriate SAR data is available.