A Rapid Method of the Rock Mass Surface Reconstruction for Surface Deformation Detection at Close Range

Characterizing the surface deformation during the inter-survey period can assist in understanding progressive failure processes of rock masses, and 3D reconstruction of the rock mass surface is a crucial step in surface deformation detection. This study presents a method to rapidly reconstruct the rock mass surface at close range for surface deformation detection, using an improved structure-from-motion and multi-view stereo (SfM-MVS) algorithm. To adapt to the unique features of rock mass surfaces, the AKAZE algorithm, which showed the best performance in rock mass feature detection, is introduced to improve SfM. The surface reconstruction procedure mainly consists of image acquisition, feature point detection, sparse reconstruction, and dense reconstruction. The proposed method was then verified by three experiments. Experiment 1 showed that the method effectively reconstructs the rock mass model. Experiment 2 demonstrated the improved accuracy of the improved SfM compared with the traditional one in reconstructing the rock mass surface. Finally, in Experiment 3, the surface deformation of the rock mass was quantified by reconstructing images taken before and after a disturbance. All results show that the proposed method provides reliable information for rock mass surface reconstruction and deformation detection.


Introduction
Constrained by topographic and environmental conditions, projects in Southwest China that have been or are being constructed are closely related to rock masses, for example, tunnel engineering, slope engineering, and foundation engineering. Much work has been done to evaluate the properties of rock engineering by analyzing the mechanics and failure characteristics of rock masses [1,2]. Surface deformation analysis of rock masses can also provide useful information for understanding their failure mechanism and stability [3,4]. Rock mass surface deformation detection is of great significance in the safety management of a construction project, and can even give early warning of danger in rock engineering to some extent. Surface reconstruction is the basis for quantifying the surface deformation process of rock engineering. This study explores a rapid method of three-dimensional (3D) rock mass surface reconstruction for surface deformation detection at close range.
Geodetic monitoring is an essential means of surface deformation detection in rock engineering. In the past decade, remote surveying technology has made significant progress in rapidly acquiring

Proposed Method
SfM, an algorithm for 3D reconstruction from multiple unordered images, is introduced here to reconstruct the surface of a rock mass from images acquired at close range. The accuracy and cost of 3D reconstruction depend on the number of feature points extracted and matched, the calculation time, the central processing unit (CPU) utilization, and the mismatch rate of feature points. For the feature extraction part of SfM, the SIFT algorithm, with its scale and rotation invariance, is the mainstream choice. However, edge information may still be lost because Gaussian blur smooths all scales of the target image to the same degree, blurring detail and noise alike. To build a 3D reconstruction model better suited to rock mass image characteristics, this study focuses on improving the feature extraction step of the SfM algorithm.
The image data sets of rock mass were taken in Aba, Sichuan, China. A total of 41 images were classified into four groups according to transformations of intensity, rotation, scale, and blur, respectively. Tables 1 and 2 list the parameters of five feature extraction algorithms (AKAZE, ORB, SURF, SIFT, and BRISK), including the number of feature points, number of interior points, number of matching points, execution time, CPU utilization, efficiency of feature points, and matching accuracy. Test results were analyzed comprehensively with the Entropy Weight-TOPSIS method. The comprehensive evaluation index C*_i was calculated for each algorithm: SIFT, 0.1234; SURF, 0.1325; ORB, 0.3304; AKAZE, 0.3656; and BRISK, 0.0481. According to this index, the five algorithms are ranked as AKAZE, ORB, SURF, SIFT, and BRISK. Therefore, AKAZE was used to improve the traditional SfM-MVS.
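The closeness index C*_i above can be illustrated with a minimal TOPSIS computation. This is a sketch, not the paper's implementation: the decision matrix, the weights, and the benefit/cost labels below are hypothetical stand-ins for the measured indicators of Tables 1 and 2 (the entropy weighting step is also omitted).

```python
import numpy as np

def topsis_closeness(matrix, weights, benefit):
    """Closeness coefficient C*_i for each row (alternative) of the decision matrix."""
    norm = matrix / np.linalg.norm(matrix, axis=0)   # vector normalization per criterion
    v = norm * weights
    best = np.where(benefit, v.max(axis=0), v.min(axis=0))    # ideal solution
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))   # negative-ideal solution
    d_best = np.linalg.norm(v - best, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)

# Hypothetical demo: 3 algorithms x 2 criteria (matched points: benefit, run time: cost).
scores = topsis_closeness(np.array([[0.9, 0.1],
                                    [0.5, 0.5],
                                    [0.1, 0.9]]),
                          np.array([0.5, 0.5]),
                          np.array([True, False]))
```

An alternative that dominates on every criterion receives the largest C*_i, which is how the ranking of the five detectors was produced.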
The framework for 3D reconstruction of rock mass surface mainly includes image acquisition, feature point detection, sparse reconstruction, and dense reconstruction, as shown in Figure 1.


Image Acquisition
Reconstruction using SfM places few requirements on image acquisition sensors. Table 3 shows the related technical specifications of sensors for image acquisition. The primary purpose of image acquisition sensor selection is to choose the most appropriate sensor to acquire images with sufficient resolution. A high-performance single-lens reflex camera is used for image acquisition, in order to best adapt to the actual conditions of the rock engineering site and reduce time and cost. Most such cameras can achieve centimeter-level ground resolution and are highly robust.

Feature Point Detection
Feature extraction. The AKAZE algorithm is used to improve SfM for surface reconstruction owing to its suitability for rock mass image characteristics. AKAZE is a feature extraction algorithm with good robustness due to its modified-local difference binary (M-LDB) descriptor. Its main principle is as follows.
A nonlinear diffusion filter describes the variation of image brightness L at different scales using the divergence of a flow function, Formula (1):

∂L/∂t = div(c(x, y, t) · ∇L)    (1)

where L is the brightness matrix of the image; div and ∇ represent the divergence and gradient operators, respectively; c(x, y, t) is the conduction function; and t is the evolution time.
The conduction function allows the diffusion equation to adapt to the local structure of the image. It is defined as Formula (2):

c(x, y, t) = g(|∇L_σ(x, y, t)|)    (2)

where ∇L_σ is the gradient of the Gaussian-smoothed image. The conduction kernel g is selected as Formula (3) for optimal diffusion smoothing:

g = 1 / (1 + |∇L_σ|² / λ²)    (3)

where λ is the contrast factor, which controls the degree of diffusion and determines which edge regions should be enhanced and which flat regions filtered.
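The behavior of the kernel in Formula (3) can be sketched in a few lines of NumPy. This is only an illustration of the edge-stopping property; the gradient magnitudes and λ below are arbitrary demo values:

```python
import numpy as np

def conduction_g2(grad_mag, lam):
    """Formula (3): close to 1 in flat regions (diffuse freely),
    close to 0 across strong edges (preserve them)."""
    return 1.0 / (1.0 + (np.asarray(grad_mag, dtype=float) / lam) ** 2)

flat_response = conduction_g2(0.0, lam=1.0)    # no gradient -> full diffusion
edge_response = conduction_g2(10.0, lam=1.0)   # strong gradient -> almost no diffusion
```

Raising λ widens the range of gradients treated as "flat", so λ effectively sets the contrast threshold separating edges to keep from texture to smooth away.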

The evolution time t_i is obtained by converting the scale parameter σ_i (in pixels): t_i = σ_i² / 2.
A nonlinear scale space can be built with the fast explicit diffusion (FED) algorithm [31], which solves the partial differential equation of Formula (1) with the explicit step of Formula (4):

L^(i+1) = (I + τ · A(L^i)) · L^i    (4)

where A(L^i) is the conductance matrix encoding the image, constructed from the gradient of the Gaussian-filtered scale image; I is the identity matrix; and τ is the step size, which comes from the factorization of the filter [32]. The matrix A(L^i) remains unchanged throughout one FED cycle.
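One explicit diffusion step of Formula (4) can be sketched in 1-D with NumPy. This is an illustrative scalar version, where the pointwise conductivity plays the role of A(L^i); the full FED cycle with its varying step sizes τ_j is omitted, and the signal and parameters are invented for the demonstration:

```python
import numpy as np

def diffusion_step(L, lam, tau):
    """One explicit step of dL/dt = div(c * grad L) on a 1-D signal."""
    grad = np.gradient(L)
    c = 1.0 / (1.0 + (grad / lam) ** 2)      # g2 conductivity, Formula (3)
    return L + tau * np.gradient(c * grad)   # add tau times the divergence of the flux

edge = np.concatenate([np.zeros(5), 10.0 * np.ones(5)])
preserved = diffusion_step(edge, lam=0.5, tau=0.2)    # small lambda: edge kept
smoothed = diffusion_step(edge, lam=100.0, tau=0.2)   # large lambda: near-linear blur
```

With small λ the conductivity collapses at the edge and the step barely changes the signal, whereas with large λ the filter behaves like linear (Gaussian-style) diffusion and blurs the edge; this is exactly the property that motivates replacing Gaussian scale space for rock mass images.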
When the FED loop ends, the algorithm recalculates the matrix A(L^i). After the nonlinear scale space is constructed, the Hessian matrix is used to extract feature points: as in SIFT, each candidate is compared with the 26 points at the same position in the current layer and the layers above and below to check whether it is an extreme point. With this method, local feature points are extracted. To verify the feature extraction effect of the AKAZE algorithm, a slope image taken in Aba, Sichuan, China was used to display the extracted feature points, as shown in Figure 2.
Feature matching. After feature extraction, neighborhood matching is established to find all matching points. The Euclidean distance is adopted to screen the feature point pairs; pairs that do not meet the threshold are removed.
In this step, the FLANN feature point matching algorithm is adopted: a K-Dimensional Tree (KD-Tree) is used to search the feature points first, and the matching degree is then determined according to the Euclidean distance. This method segments the feature points of different spaces and obtains matching point pairs in different spatial domains effectively.
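The Euclidean-distance screening can be illustrated with a brute-force nearest-neighbor matcher and a distance-ratio threshold; a FLANN/KD-Tree index only accelerates the same test. The toy descriptors and the 0.75 ratio below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.75):
    """Nearest-neighbor matching with a distance-ratio test.
    Returns (i, j) index pairs of accepted matches."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)  # Euclidean distances to all candidates
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only if clearly better than the runner-up candidate.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

d1 = np.array([[0.0, 0.0], [5.0, 5.0]])
d2 = np.array([[0.1, 0.0], [5.0, 5.1], [10.0, 10.0]])
pairs = match_descriptors(d1, d2)
```

The ratio test is what "does not meet the threshold, it would be removed" amounts to: an ambiguous match, whose best and second-best distances are similar, is discarded.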
Mismatched elimination. Exact matching is performed with the Random Sample Consensus (RANSAC) algorithm. This step yields the transformation relationship between images [33].
The idea of the RANSAC algorithm is as follows: (1) the data consist of inliers that can be explained by the model; (2) outliers cannot be fitted by the model; (3) the remaining data are noise. RANSAC can estimate high-precision parameters from a data set containing a large number of outliers, which makes it an excellent mismatch elimination method.
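The sample-score-keep loop of RANSAC can be demonstrated with a toy 2-D line fit; the paper estimates inter-image transformations instead, but the logic is identical. The iteration count, tolerance, and data are illustrative:

```python
import numpy as np

def ransac_line(pts, n_iter=200, tol=0.1, seed=0):
    """Fit y = a*x + b robustly: sample 2 points, count inliers, keep the best."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, 0
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue                      # degenerate sample, resample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # Inliers: points whose residual to the candidate line is below tol.
        resid = np.abs(pts[:, 1] - (a * pts[:, 0] + b))
        inliers = int((resid < tol).sum())
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

pts = np.array([[x, 2.0 * x + 1.0] for x in range(10)] + [[0.0, 50.0], [5.0, -40.0]])
model, n_in = ransac_line(pts)
```

The two gross outliers never attract enough support, so the consensus model is the line through the ten inliers, mirroring how mismatched feature pairs are rejected.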

Sparse Reconstruction
Figure 3 shows the schematic of sparse reconstruction for two images. After the interior orientation parameters are obtained by camera self-calibration, the exterior parameters of the structure need to be solved. The world coordinate system is set to coincide with the camera coordinate system of the first image, so that the first image can be expressed with R = I, where R is the rotation matrix, and T = (0, 0, 0)^T, where T is the translation vector. The projection matrix P_1 of the first image is shown in Formula (6):

P_1 = K[I | 0]    (6)

where I is the unit matrix and K is the intrinsic parameter matrix. Similarly, the projection matrix P_2 of the second image can be represented as Formula (7):

P_2 = K[R | T]    (7)

As the essential matrix E contains the rotation and translation information, it can be obtained according to Formula (8):

E = K^T F K    (8)

where F is the fundamental matrix, which can be computed from the matching points in the initial image pair. The relative pose between the two cameras can be obtained by the singular value decomposition (SVD) of the essential matrix, Formula (9):

E = U D V^T    (9)

where U and V are orthogonal matrices of order 3, and D is a diagonal matrix.
There are four possible solutions to the projection matrix recovered from the essential matrix (see Figure 4 and Formula (10)):

P_2 = K[UWV^T | u_3], K[UWV^T | −u_3], K[UW^T V^T | u_3], or K[UW^T V^T | −u_3]    (10)

where u_3 is the last column of U and

W = [0 −1 0; 1 0 0; 0 0 1]

Three-dimensional points can then be calculated from the projection matrices P_1 and P_2 through the position information of the matched points. This process is called triangulation.
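Formulas (9) and (10) can be sketched with NumPy: decompose E by SVD and enumerate the four candidate poses. The motion used to build the test matrix is invented for illustration, and a real pipeline would then keep the single candidate that places the triangulated points in front of both cameras (the cheirality check):

```python
import numpy as np

W = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])

def decompose_essential(E):
    """Four candidate (R, t) pairs from the essential matrix, Formula (10)."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U @ W @ Vt) < 0:   # force proper rotations, det = +1
        Vt = -Vt
    u3 = U[:, 2]                        # last column of U
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    return [(R1, u3), (R1, -u3), (R2, u3), (R2, -u3)]

# Illustrative E built from a known motion: rotation about z plus translation t = (0, 0, 1).
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_skew = np.array([[0.0, -1.0, 0.0],   # [t]_x, the skew matrix of t
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 0.0]])
candidates = decompose_essential(t_skew @ R_true)
```

Every candidate rotation is orthonormal with determinant +1, and the recovered translation direction is the null direction of Eᵀ, i.e. ±t up to scale, which is why only relative (not metric) geometry comes out of this step.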
Suppose P_1k and P_2k are the kth row vectors of P_1 and P_2, respectively. M_w = (X_w, Y_w, Z_w, 1)^T denotes the space coordinates of point M, and (u_1, v_1, 1)^T and (u_2, v_2, 1)^T are its image coordinates in image 1 and image 2, respectively. The linear equations of Formula (11) are obtained using the coordinate transformation relations.
As the number of equations in Formula (11) exceeds the number of unknowns, the least-squares method is introduced to solve the space coordinates of point M. Errors may exist in the obtained coordinates as a result of errors in feature matching. Therefore, bundle adjustment (BA) is introduced to further improve the precision of the coordinates, as it optimizes the camera parameters and 3D coordinates by minimizing the reprojection error.
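The overdetermined system of Formula (11) and its least-squares solution can be sketched as standard linear (DLT) triangulation; the camera matrices and the point below are invented for the demonstration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    x1, x2: pixel coordinates (u, v) in image 1 and image 2."""
    u1, v1 = x1
    u2, v2 = x2
    # Each view contributes two rows of the homogeneous system A @ M = 0.
    A = np.vstack([u1 * P1[2] - P1[0],
                   v1 * P1[2] - P1[1],
                   u2 * P2[2] - P2[0],
                   v2 * P2[2] - P2[1]])
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    M = Vt[-1]
    return M[:3] / M[3]

K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
M_true = np.array([0.0, 0.0, 5.0])

def project(P, M):
    m = P @ np.append(M, 1.0)
    return m[:2] / m[2]

M_rec = triangulate(P1, P2, project(P1, M_true), project(P2, M_true))
```

With noise-free correspondences the null vector of A is exact; with real matches the SVD gives the least-squares point, whose residual BA then reduces further.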
The BA needs to be initialized with a good image pair. First, BA is performed for the two initialization images. Then, new images are added cyclically for a new BA. The BA is an iterative process in which all valid images are computed continuously until the iteration ends. Finally, the camera parameters and scene geometry are obtained. Reprojection errors are the distances between projection points and real points in the images. For m images and n trajectory points, the reprojection error is shown in Formula (12):

min Σ_{k=1}^{m} Σ_{i=1}^{n} d(P_k M_i, x_ki)²    (12)

where P_k is the projection matrix of image k, M_i is the ith 3D point, and x_ki is the image position of point i in image k. The purpose of BA is to minimize this function. Reconstruction from multiple images is consistent with the reconstruction from two images: after the initial projection matrices are solved, Formula (11) is used to recover the 3D coordinates of the matching points in the remaining images. However, as the number of images increases, the difference between newly added images and previous images becomes larger; the fewer the image matching pairs, the more difficult it is to calculate the fundamental matrix.
Therefore, the projection matrix P of a newly added image is calculated from the reconstructed 3D point coordinates. Suppose (u_i, v_i, 1)^T are the image coordinates of space point M_i = (X_i, Y_i, Z_i, 1)^T in the newly added image; then the equations of Formula (13) can be derived. The projection matrix has 11 degrees of freedom (DOF), so the projection matrix of the nth image can be obtained from the projections of six reconstructed 3D points on the new image. When more than six points satisfy this requirement, RANSAC can be used to obtain a more accurate projection matrix.
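Recovering the 11-DOF projection matrix from six or more 3D-2D correspondences is a direct linear transform; the following sketch uses synthetic, made-up data and omits the RANSAC wrapper mentioned above:

```python
import numpy as np

def estimate_projection(points3d, points2d):
    """DLT: recover the 3x4 projection matrix from >= 6 3D-2D correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points3d, points2d):
        p = np.array([X, Y, Z, 1.0])
        rows.append([*p, 0.0, 0.0, 0.0, 0.0, *(-u * p)])
        rows.append([0.0, 0.0, 0.0, 0.0, *p, *(-v * p)])
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)         # null vector = stacked entries of P
    return Vt[-1].reshape(3, 4)

P_true = np.array([[100.0, 0.0, 50.0, 10.0],
                   [0.0, 100.0, 50.0, -20.0],
                   [0.0, 0.0, 1.0, 0.0]])
pts3d = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 6.0], [0.0, 1.0, 7.0],
                  [1.0, 1.0, 8.0], [2.0, 1.0, 5.0], [1.0, 2.0, 9.0]])
proj = (P_true @ np.hstack([pts3d, np.ones((6, 1))]).T).T
pts2d = proj[:, :2] / proj[:, 2:3]      # homogeneous divide to pixel coordinates
P_est = estimate_projection(pts3d, pts2d)
# P is defined only up to scale: normalize both before comparing.
P_est = P_est / np.linalg.norm(P_est)
P_ref = P_true / np.linalg.norm(P_true)
if P_est[2, 2] * P_ref[2, 2] < 0:
    P_est = -P_est
```

Six non-coplanar points give twelve equations for the eleven unknowns, so the smallest singular vector of A recovers P up to scale, which is all a projection matrix is defined to.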

Dense Reconstruction
A sparse point cloud is only useful for regular objects with obvious features and fails to present the surface information of an object well. Complex scenes such as rock engineering therefore need a denser point cloud. The patch-based multi-view stereo (PMVS) algorithm [34] can reconstruct high-precision models with rich surface details for scenes with unclear texture, limited viewpoints, large curvature, and so on.
A patch p is a rectangle in the local tangent plane that approximates the object's surface. V(p) is defined as the set of images in which patch p is visible, and R(p) is the reference image of p, R(p) ∈ V(p). The discrepancy function can be defined as Formula (14):

g(p) = 1 / |V(p) \ R(p)| · Σ_{I ∈ V(p) \ R(p)} h(p, I, R(p))    (14)

where V(p) \ R(p) denotes the images of V(p) excluding R(p), and h(p, I_1, I_2) is the grayscale consistency function between images I_1 and I_2. The steps of the solution are as follows [35,36]: (1) Divide patch p into smaller squares of u × u.
(2) Calculate the pixel gray values q(p, I_i) of patch p on image I_i through bilinear interpolation. (3) Subtract the normalized cross-correlation (NCC) value of q(p, I_1) and q(p, I_2) from 1. (4) Initialize and optimize the relevant parameters.
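Step (3) relies on NCC between the sampled gray patches; a minimal version, together with the 1 − NCC discrepancy, looks like the following (the patches are toy data):

```python
import numpy as np

def ncc(q1, q2):
    """Normalized cross-correlation of two equal-size gray patches, in [-1, 1]."""
    a = q1 - q1.mean()
    b = q2 - q2.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

patch = np.array([[1.0, 2.0], [3.0, 4.0]])
same = 1.0 - ncc(patch, patch)              # discrepancy of identical patches: 0
gain = 1.0 - ncc(patch, 2.0 * patch + 5.0)  # NCC ignores gain/offset changes
```

Because the mean is subtracted and the result is normalized, NCC is invariant to brightness and contrast changes between views, which is what makes the consistency function h usable across images taken under different exposure.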
The continuity of the patches is a major difficulty. To solve this problem, the image I_i is divided into many β_1 × β_1 pixel cells C_i(x, y), where i is the ith image and (x, y) is the index of a cell. For a patch p and the corresponding set V(p), p is projected onto the images of V(p) to obtain the cells corresponding to patch p. The set Q_i(x, y) records all the patches projected onto cell C_i(x, y).


Experiment Preparation
A series of 3D reconstruction experiments based on images from different angles and directions was performed using the improved, AKAZE-based SfM algorithm (A-SfM).
Three experiments were designed. Experiment 1 was used to evaluate the results of 3D reconstruction with the proposed A-SfM. Experiment 2 was employed to test the accuracy of A-SfM. Experiment 3 was conducted to detect the deformation of the rock mass surface.

Experiment 1.

Two groups of images were acquired: (1) a rock in an indoor environment without interference; (2) a soil surface outdoors. The numbers of images in the two groups were 32 and 16, respectively. An iPhone XR camera was used to acquire images counterclockwise around the objects, at a distance of 2 m between the camera and the object. Examples of the image samples are shown in Figure 5.

Experiment 2.
A slope model was built in a laboratory environment to explore the accuracy of the A-SfM algorithm, as shown in Figure 6. The model is 35 cm long, 35.5 cm wide, and 12 cm high, with a gradient of 50°. It mainly consists of sand, low-grade gravel, and a small amount of mudstone. Mark points used for binocular vision monitoring served as reference data for the A-SfM analysis. Three groups of tests were designed with different mark points and photograph distances:
Group 1: The photograph distance was 2 m, and the mark points were common.
Group 2: The photograph distance was 1 m, and the mark points were common.
Group 3: The photograph distance was 1 m, and the mark points were concentric circles.

Experiment 3.
The surface deformation of the rock mass was quantified based on the 3D reconstruction results of the surface before and after the disturbance. Geodetic control points were applied to compare the two results in the same coordinate system (Figure 7). The procedure was as follows:
(1) The geodetic control points were measured and recorded with a total station electronic tachometer.
(2) The distance between the wall and each image capture station was measured using a laser range finder, and the locations were marked.
(3) Eight images before the disturbance were captured sequentially.
(4) Four sandstone samples of 50 mm in diameter and 50 mm in height were used to simulate the uplift and deformation. The samples were placed lightly on the model to avoid disturbing the rock mass at other locations.
(5) Eight images after the disturbance were captured sequentially.


Results for Experiment 2
The results of the binocular vision measurement served as reference data for the accuracy analysis of the reconstructions. Studies related to binocular vision measurement have been completed and published [5,37]. The measurement results, shown in Tables 4-6, were obtained through camera calibration, pixel coordinate positioning, and space coordinate calculation. Two methods, SfM and A-SfM, were used to establish the 3D reconstructions, and the results of A-SfM are shown in Figure 9.
Figure 9. Results of A-SfM reconstruction in Experiment 2: (a) the photograph distance was 2 m, and the mark points were common; (b) the photograph distance was 1 m, and the mark points were common; (c) the photograph distance was 1 m, and the mark points were concentric.

Results for Experiment 3
The aforementioned images captured before and after the disturbance were then processed, and reconstructions using A-SfM before and after the disturbance were established, as shown in Figure 10.

Figure 10. Reconstruction (a) before and (b) after the disturbance.


Reconstruction Results Analysis
The results of Experiment 1 showed that the two groups of images were reconstructed well, and the reconstructed models were almost identical to the real objects. However, some disturbance points in the clouds still affected the modeling. These disturbance points were dominated by environmental features such as shadows and interfering objects, which were apparent in multiple images and affected the estimation of the essential matrix and the reconstruction of the object contour. Besides, the first image took about 80 times longer than the others in feature extraction because more time was needed to select the initial relative features. It can be inferred that reconstruction using A-SfM performs well in environments with prominent characteristics and strong contrast, such as the rock engineering environment studied in this paper, which supports its application value in rock mass surface detection. Even so, the model accuracy needed further examination, which is why Experiment 2 was designed.
In Experiment 2, to evaluate the accuracy of the A-SfM algorithm, the reconstruction results were compared with those of the binocular vision measurements. However, the distance calculated from the reconstruction results is a relative distance rather than the actual measured distance in the space coordinate system. A scaling factor is therefore introduced to map the relative distances to physical distances with Formula (15) [38]:

S = d_known / I_known    (15)

where S is the scaling factor, d_known is the physical length of an object, and I_known is the pixel length of the object on the imaging plane. Table 7 lists the calculated scaling factors. According to the scaling factor and the pixel length, the physical lengths between the mark points in the two reconstructions were calculated, one for SfM (Table 8) and the other for A-SfM (Table 9). Table 10 shows the results of comparing the two reconstructions with the binocular vision measurements. By comparing the results of the reconstructions (both SfM and A-SfM) with those of the binocular vision measurement, the 3D reconstruction performance of SfM before and after the improvement was verified. The improved SfM algorithm significantly promoted the measurement accuracy and effectively reflects the real situation (Figure 11): the measurement error was reduced from 4.58 mm to 2.7 mm in Group 1, from 3.51 mm to 0.53 mm in Group 2, and from 2.01 mm to 0.25 mm in Group 3.
Figure 11. The reconstruction errors of the three groups of tests.


Rock Mass Surface Deformation Analysis
Different from most traditional measuring methods, which detect deformation at single points, the 3D point cloud can quantify the variation of the whole monitored region. The surface deformation of the rock mass can be quantified by deformation detection on the 3D reconstruction results of the surface before and after the disturbance. The proposed procedure includes point cloud data cleaning, geodetic control point registration, iterative closest point (ICP) registration, and calculation of the Euclidean distance between the registered and reference point clouds.
Point cloud data cleaning. As discussed for Experiment 1, numerous invalid points and outliers appear when the 3D point cloud data reconstructed by A-SfM are imported into CloudCompare. Because only the monitored deformation area should be preserved, the point cloud data were first cropped with the Bounding Box tool. Then, outliers were eliminated using the statistical analysis filter. Finally, the remaining invalid points were removed manually. Figure 12 shows the results before and after point cloud data cleaning, for the reconstruction before the disturbance (data 1) and after the disturbance (data 2).
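The statistical analysis filter can be sketched as CloudCompare-style statistical outlier removal: compute each point's mean distance to its k nearest neighbors and drop points far above the global mean. The k value and standard-deviation ratio below are illustrative, and this brute-force distance matrix is only suitable for small demo clouds (real clouds need a KD-tree):

```python
import numpy as np

def statistical_outlier_filter(points, k=8, std_ratio=1.0):
    """Drop points whose mean k-nearest-neighbor distance exceeds
    the global mean by more than std_ratio standard deviations."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=2)       # full pairwise distance matrix
    dist.sort(axis=1)
    mean_knn = dist[:, 1:k + 1].mean(axis=1)  # skip the zero self-distance
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]

grid = np.array([[i, j, 0.0] for i in range(4) for j in range(4)])
cloud = np.vstack([grid, [[100.0, 100.0, 100.0]]])   # one gross outlier
cleaned = statistical_outlier_filter(cloud)
```

Isolated reconstruction artifacts sit far from any neighbor, so their mean neighbor distance is a clear statistical outlier, while points on the dense rock surface pass unchanged.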
Geodetic control point registration. To bring the 3D point clouds before and after the disturbance into the same world coordinate system, the geodetic control points in the reconstructions were registered after the point cloud data cleaning. This procedure took the cloud data before the disturbance as the reference point cloud and the cloud data after the disturbance as the matching point cloud. Figure 13 shows the results of geodetic control point registration, where (a) shows the position and distribution of the data before the registration, and (b) shows the data after the registration.
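Registering the clouds through a few manually picked control-point pairs amounts to estimating one rigid transform from matched points. A standard closed-form solution is the SVD-based (Kabsch) method, sketched below; this is an illustrative reimplementation under our own naming, not the CloudCompare routine the study actually used:

```python
import numpy as np

def rigid_transform_from_control_points(src, dst):
    """Estimate rotation R and translation t such that R @ src_i + t = dst_i
    for matched control points (N x 3 each), via the SVD (Kabsch) method."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Applying the returned R and t to the matching cloud places it in the coordinate frame of the reference cloud; with noiseless correspondences the transform is recovered exactly, while manual picking errors (as noted below) leave a residual that ICP then reduces.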
Iterative closest point (ICP) registration. Because the control points were selected manually during the geodetic control point registration, some registration errors remained. Therefore, it was necessary to use ICP for precise registration. The cloud data before the disturbance were defined as the reference point cloud, and the cloud data after the disturbance were defined as the matching point cloud. Figure 14 presents the results after ICP registration.
Rock mass surface deformation detection. The Euclidean distance between each point and its neighboring points was calculated using the precisely registered data. The detected deformation ranged from 8.97 × 10⁻⁵ m to 0.61 × 10⁻¹ m, and the color scale is shown in Figure 15.
Results analysis. The uplift used to simulate the deformation was a rock sample with a diameter of 50 mm. The lower part of the sample was slightly inserted into the rock mass model, and the upper part was placed on the surface of the model. As can be seen in Figure 15, the smallest surface deformation in the undisturbed zones is 0.094 mm, whereas the maximum deformation in the disturbed zones is 48.43 mm. The detected results were generally consistent with the actual situation.
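The ICP refinement and the subsequent Euclidean-distance calculation can be sketched together in a few lines of numpy. This is a simplified point-to-point ICP with brute-force nearest-neighbor search, written only to make the two steps concrete; the study performed both in CloudCompare, so all names and defaults here are our assumptions:

```python
import numpy as np

def icp(matching, reference, iterations=20):
    """Refine the alignment of the matching cloud onto the reference cloud
    (both N x 3): repeatedly pair each matching point with its nearest
    reference point, then apply the best rigid transform (Kabsch) for
    those correspondences."""
    pts = matching.copy()
    for _ in range(iterations):
        # Nearest reference neighbor of every matching point (brute force).
        d = np.linalg.norm(pts[:, None, :] - reference[None, :, :], axis=2)
        pairs = reference[d.argmin(axis=1)]
        # Closed-form rigid transform for the current correspondences.
        pc, qc = pts.mean(axis=0), pairs.mean(axis=0)
        U, _, Vt = np.linalg.svd((pts - pc).T @ (pairs - qc))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        pts = (pts - pc) @ R.T + qc
    return pts

def deformation_distances(registered, reference):
    """Euclidean distance from each registered point to its nearest
    reference point: the per-point deformation magnitude."""
    d = np.linalg.norm(registered[:, None, :] - reference[None, :, :], axis=2)
    return d.min(axis=1)
```

For two clouds that differ only by a small rigid offset, the residual distances shrink toward zero after ICP; any distances that remain large after precise registration correspond to genuine surface deformation, which is what the color scale in Figure 15 visualizes.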

Conclusions
Three-dimensional reconstruction of the rock mass surface is a crucial step in surface deformation detection, which could assist in understanding rock mass progressive failure processes. On the basis of the SfM method, an A-SfM method was proposed for rock engineering applications to acquire 3D reconstructions suited to the characteristics of the rock mass surface. The AKAZE algorithm is used to improve the structure flow of SfM so that the features of the rock mass can be extracted more easily at close range. Three experiments verified the ability of the proposed A-SfM method. The specific conclusions can be drawn as follows: (1) The results of 3D reconstruction in Experiment 1 using the proposed A-SfM showed that the reconstructed models were almost identical to the real objects. (2) In Experiment 2, the measurement accuracy of the A-SfM improved compared with that of the traditional SfM.
(3) Experiment 3 showed that the detected results were generally consistent with the actual situation.
Deformation detection based on the 3D reconstruction results of the surface before and after the disturbance confirmed that the proposed method could effectively quantify the surface deformation of the rock mass.