A Cross-Source Point Cloud Registration Algorithm Based on Trigonometric Mutation Chaotic Harris Hawk Optimisation for Rockfill Dam Construction

A high-precision three-dimensional (3D) model is the premise and vehicle of digitalising hydraulic engineering. Unmanned aerial vehicle (UAV) tilt photography and 3D laser scanning are widely used for 3D model reconstruction. Affected by the complex production environment, traditional 3D reconstruction based on a single surveying and mapping technology struggles to balance the rapid acquisition of high-precision 3D information with the accurate acquisition of multi-angle texture features. To ensure the comprehensive utilisation of multi-source data, a cross-source point cloud registration method integrating the trigonometric mutation chaotic Harris hawk optimisation (TMCHHO) coarse registration algorithm and the iterative closest point (ICP) fine registration algorithm is proposed. The TMCHHO algorithm generates a piecewise linear chaotic map sequence in the population initialisation stage to improve population diversity. Furthermore, it employs trigonometric mutation to perturb the population in the development stage and thus avoid the problem of falling into local optima. Finally, the proposed method was applied to the Lianghekou project. The accuracy and integrity of the resulting fusion model improved compared with those of realistic modelling solutions based on a single mapping system.


Introduction
Rockfill dams are among the types of dams mainly used in plateau areas because the materials for their construction are easy to access. Moreover, they can adapt to the environment and have satisfactory seismic resistance [1]. As sensors continue to advance and technologies (such as artificial intelligence, the Internet of Things, and cloud computing) progressively mature [2], rockfill dam construction has also accelerated in terms of digital and intelligent development. High-precision three-dimensional (3D) models are required for visualising construction simulations and engineering topology carriers. These models are the basis and prerequisite of digitalising water conservancy projects and have been applied to activities, such as dam safety inspection [3], project progress display [4], and augmented reality (AR) model construction [5]. The application of 3D models to water conservancy projects aims to establish the mapping relationship between the natural space entity of a rockfill dam and the virtual space of a computer 3D model. Furthermore, the natural engineering entity is simulated according to the physical equation of change trend over time. Finally, the 3D model is dynamically updated according to the actual construction state to provide dynamic decision making and scheduling control instructions for the reasonable and efficient construction of engineering projects. Presently, various sensors have been applied to rockfill dam mapping during construction. They generate numerous point clouds and images containing spatial and texture information, thus providing a database for the fusion modelling of 3D point clouds from multiple sources of rockfill dam data. Therefore, the rapid acquisition, analysis, and processing of the massive cross-source data provided by sensors during dam construction are necessary. 
Then, establishing a high-precision, high-integrity, and high-fidelity 3D model of the dam is key to enhancing rationality and efficiency in the construction management and control of rockfill dams.
Unmanned aerial vehicle (UAV) tilt photography and 3D laser scanning are new geodetic technologies developed in recent years, which have overturned traditional manual modelling methods based on computer graphics and empirical knowledge [6]. Their modelling efficiency and fidelity have considerably improved. Using multiple sensors, UAV tilt photography captures images from one vertical and four tilt angles [7,8]. Textures are automatically added using office operations to build a 3D realistic model matching human eye vision [9]. In comparison, 3D laser scanning is a noncontact measurement technology. It rapidly acquires massive point clouds with irregular spatial distribution on the surface of the measurement target by emitting laser pulses and resolving the reflected signals [10]. Furthermore, with its high accuracy and no need to touch the measurement target, 3D laser scanning has been widely used in the field of planimetric mass assessment [11], cadastral mapping [12], volume calculation [13], landslide monitoring [14], and 3D model reconstruction [15,16]. Previous research has relied on single surveying and mapping technology to reconstruct a 3D real-world model. This has resulted in 3D dam models that are inaccurate and incomplete. Consider the following examples: (1) Light and shadow effects lead to local model voids and texture dislocations; (2) data collection is difficult to implement in areas where UAV flying is prohibited and satellite positioning cannot be performed; and (3) blind spots for data collection are present because the dam surface environment is complex, roads are messy, and vehicle movement is frequent. In contrast, integrating the environmental information provided using 3D laser scanning and UAV tilt photography for 3D reconstruction can considerably improve the integrity, accuracy, and fidelity of the model.
Because UAV tilt photography and 3D laser scanning are two independent data acquisition systems, realising the deep integration of point clouds and images, integrating their advantages, and improving the consistency between 3D models and actual scenes are the research focus of multi-source 3D reconstruction. The traditional method is to export the 3D model reconstructed using UAV tilt photography to a point cloud format. Then, the same-name marker points or marker surface cusps are arranged as control points under different data acquisition systems to complete the comprehensive sensing and deep integration of environmental information supported by point cloud compatibility. This marker-based approach has several limitations given the complex construction environment of the dam face. Firstly, numerous mechanical vehicles, such as sprinklers, roller compactors, and bulldozers, are present, and the stability of the marker points cannot be ensured. Secondly, manual participation is required owing to variations in dimension, scale, accuracy, and viewing angle among different data acquisition systems. This is time-consuming and labour-intensive and may introduce certain deviations [17]. Therefore, this paper aims to unify the spatial relative positions of multimodal point cloud coordinate systems under different viewpoints and acquisition systems quickly and accurately from an algorithmic perspective. The most classic registration method is the iterative closest point (ICP) algorithm [18], which is simple in principle and easy to operate but requires a strict initial pose of the original point cloud and is prone to local optima.
To resolve the abovementioned limitations, this paper proposes the use of a cross-source point cloud-trigonometric mutation chaotic Harris hawk optimisation (TMCHHO) fusion model for the 3D reconstruction of rockfill dams during construction. The key to this lies in (i) devising a means for collecting environmental information on the dam surface in a complex production environment as well as overcoming the phenomenon of data blindness and (ii) unifying the spatial coordinate system of multiple data sources in a fast and robust manner.
To achieve the first goal, an air-ground integrated environmental information sensing strategy based on multiple mapping systems is proposed. Furthermore, algorithms such as voxel filters and internal shape descriptors are used to realise the real-time acquisition and preprocessing of multi-source data. To achieve the second goal, an accurate fusion strategy based on multimodal data for rockfill dams during the construction period is proposed. In this method, the improved Harris hawk optimisation (TMCHHO) is integrated with the preliminary determination of the relative position of the point cloud in 3D space, and the ICP algorithm is incorporated to align the spatial positions accurately. Among them, the TMCHHO algorithm uses a piecewise linear chaotic map (PWLCM) sequence to generate a more randomly distributed initial population, and trigonometric mutation is applied to enhance the local chemotaxis ability [19,20]. Finally, considering the Lianghekou rockfill dam as an example, the reconstruction scheme of the 3D model of the dam surface is optimised. The distribution of Euclidean distance between aligned point clouds is used as an evaluation index to verify the fusion effect. Compared with the single surveying and mapping method, the effectiveness of the fusion model is verified.

Related Research
In the early 21st century, with the successful application of high-precision sensors, such as tilt photography cameras and LiDAR, to engineering construction, some scholars have conducted research on the fusion of cross-source data to achieve more realistic 3D model reconstruction. Escarcena et al. [21] devised a high-precision method for modelling ancient sites based on both photogrammetry and terrestrial laser scanning techniques. Kedzierski et al. [22] used different laser scanning systems to provide a database for high-precision urban model building. Nahon et al. [23] designed a collaborative measurement model of 3D laser scanning and UAV aerial surveys for the terrain mapping of dune corridors. Some studies proposed cross-source point cloud registration algorithms that provide new ideas for the comprehensive sensing and accurate analysis of a construction site environment. According to the basic principle, cross-source point cloud registration can be broadly classified into two types: learning-based and optimisation-based methods. The learning-based registration method uses a neural network framework composed of multilayer interconnected nodes to solve the point cloud spatial transformation matrix [24]. Aoki et al. [25] used PointNet as an imaging function to provide a new idea for point cloud registration. Lu et al. [26] generated key points based on learned matching probabilities of candidate sets. Yew et al. [27] replaced the space-based distance metric in the feature extraction stage of neural networks with hybrid feature-based distances to solve the problem of insufficient stability in the learning-based method. However, in learning-based registration algorithms, the design of transformation parameters is difficult to interpret, and often extensive experimentation is required to find the right architecture and hyperparameters. Moreover, the distribution of real-scene data cannot be too different from the training data [28,29]. 
An optimisation-based registration algorithm describes the process of point cloud registration as an optimisation problem. It mainly includes the variant algorithm of ICP, graph-based optimisation, and other methods. Yao et al. [30] queried the closest point based on the similarity matching of point cloud curve features. Serafin et al. [31] used normal vector and curvature to remove the points corresponding to errors after feature point matching and the angular difference in the normal vector of the corresponding points was also used as an additional error control term to enable fast computation.
However, previous research has not resolved the inadequacies of the traditional ICP algorithm, which has strict requirements on the initial position. Some scholars divide point cloud registration into two stages. The coarse registration stage roughly matches the spatial position of the point cloud, and the fine registration stage completes the precise alignment. Rusu et al. [32] proposed fast point feature histograms based on geometric relations to be used as descriptors of local features. Aiger et al. [33] proposed four-point congruent sets to solve low-overlap or noisy point cloud registration. Chua et al. [34] used point signatures to describe the local features of point clouds and applied them to automatic face recognition. In recent years, swarm intelligence optimisation algorithms simulating the laws of nature have been widely used in engineering. Some scholars have also attempted to achieve faster and more accurate point cloud registration using genetic [35] and particle swarm [36] algorithms. Experiments demonstrate that the HHO algorithm proposed by Heidari [37] performs better than other algorithms of the same classification. However, because it generates an initial population based on the random numbers of normal distribution, and the correlation among individuals in the population is ignored in a single iteration, the algorithm has a slow solution speed. Accordingly, this paper proposes a multi-strategy method for improved Harris hawk point cloud registration with strong robustness and high optimisation accuracy.

Research Framework
The research framework of the cross-source point cloud-TMCHHO fusion model for the high-precision, high-fidelity, and high-integrity 3D reconstruction of rockfill dams during construction is shown in Figure 1. It includes three components: data, method, and application layers.

Data layer: This part is based on multiple mapping systems to sense the environmental information of the dam during the construction period and to process the multi-view tilt images from UAV tilt photography into a dense point cloud model. Other processes, such as noise filtering, point cloud simplification, and feature point extraction, are performed on the dense and disordered point clouds to construct the data layer as input.
Method layer: To achieve high-precision, high-realism, and high-integrity 3D reconstruction of rockfill dams, the cross-source point cloud-TMCHHO fusion model mainly consists of two parts. The first part determines the objective function and optimisation parameters according to the principle of point cloud space transformation in different coordinate systems. The second part combines the improved HHO algorithm with the PWLCM system and trigonometric mutation perturbation strategy to optimise algorithm performance.

Application layer: In this part, the strategy proposed in this paper is applied to a core-wall rockfill dam project, and the fusion accuracy is evaluated using the multimodal point cloud Euclidean distance distribution in the measurement area.

Cross-Source Point Cloud Registration Model
The cross-source point cloud registration model is described as a mathematical optimisation problem. Given two 3D point clouds with different coordinate systems, the optimal rigid transformation matrix is solved such that the relative spatial positions of the two clouds are aligned after the transformation. Let the mathematical expressions of the reference and target point clouds be P_s = {p_is ∈ R³, i_s = 1, 2, ..., m_s} and Q_t = {q_jt ∈ R³, j_t = 1, 2, ..., n_t}, where m_s and n_t are the numbers of data points that make up the two point clouds, respectively. The conversion process can be described by the 3 × 3 rotation matrix (R) and 3D translation vector (T). They can be expressed by Equation (1), where V_x, V_y, and V_z are the translation lengths along the directions of the spatial coordinate axes, and θ_x, θ_y, and θ_z are the rotation angles around the spatial coordinate axes in the clockwise direction. The problem of multimodal point cloud spatial coordinate matching is abstracted as minimising the value of the F(R, T) function in Equation (1). The root mean square (RMS) value of corresponding points is used to characterise the accuracy of multimodal point cloud spatial matching.
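As a minimal sketch of this objective, the rigid transformation and the RMS metric can be written as below. The Z·Y·X composition order and the counterclockwise sign convention are assumptions of this sketch (Equation (1), which uses clockwise angles, may differ in both respects):

```python
import numpy as np

def rotation_matrix(theta_x, theta_y, theta_z):
    """Compose a 3x3 rotation matrix from rotations about the x, y, z axes."""
    cx, sx = np.cos(theta_x), np.sin(theta_x)
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    cz, sz = np.cos(theta_z), np.sin(theta_z)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx            # assumed composition order

def rms_error(P, Q, R, T):
    """RMS distance between the transformed cloud R*P + T and its correspondences Q."""
    P_transformed = P @ R.T + T    # T = (V_x, V_y, V_z)
    return np.sqrt(np.mean(np.sum((P_transformed - Q) ** 2, axis=1)))
```

Minimising this RMS value over the six parameters (θ_x, θ_y, θ_z, V_x, V_y, V_z) is the F(R, T) optimisation problem solved by the registration model.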
where N_S is the number of data points contained in the intersection region of the multimodal point clouds.

Point Cloud Data Processing
Both 3D laser scanning and UAV tilt photography mapping generate massive quantities of data, which must be downsampled, and feature points should be extracted to eliminate the impact of overly numerous points on subsequent storage, transfer, and computation. The collected cross-source data may contain outliers affecting the subsequent feature point extraction and registration stages, and noise points must be filtered out. Hence, cross-source point cloud data preprocessing involves noise filtering, downsampling, and feature point extraction. For noise filtering, the typical radius filtering method is employed. In what follows, downsampling and feature point extraction are discussed, which are special processes for deriving cross-source point cloud characteristics.
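The radius filtering step can be sketched as follows; this is a brute-force illustration, and the `radius` and `min_neighbors` values are hypothetical tuning parameters not specified in the text:

```python
import numpy as np

def radius_filter(points, radius, min_neighbors):
    """Keep only points that have at least `min_neighbors` other points
    within `radius` -- a simple radius outlier-removal filter."""
    diffs = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diffs, axis=2)          # pairwise distance matrix
    counts = (dist <= radius).sum(axis=1) - 1     # exclude the point itself
    return points[counts >= min_neighbors]
```

In practice a k-d tree neighbourhood query would replace the O(n²) distance matrix for the massive clouds produced on a dam surface.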
Point cloud downsampling aims to retain as few points as possible in areas of small curvature and as many points as possible in areas of large curvature [38]. To simplify a point cloud swiftly without destroying its internal geometric features, a voxel filter is used. First, a 3D voxel grid coordinate system is created. Then, all the points inside each grid cell are replaced by their centroid.
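The two-step voxel filter described above can be sketched as:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Replace all points falling in the same voxel by their centroid."""
    ijk = np.floor(points / voxel_size).astype(np.int64)   # voxel grid indices
    _, inverse = np.unique(ijk, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    np.add.at(sums, inverse, points)                       # sum points per voxel
    counts = np.bincount(inverse, minlength=n_voxels).astype(float)
    return sums / counts[:, None]                          # centroid per voxel
```

The voxel size trades off simplification rate against geometric fidelity: larger voxels merge more points into a single centroid.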
Feature points contain rich geometric feature information and can be used as an abstract representation of the original data shape. The point clouds of overlapping areas from different mapping systems and perspectives are relatively consistent. In this study, the intrinsic shape signature (ISS) [39] detection method is utilised to find feature points. The algorithm is easy to implement and yields stable results. Assume there exists a point cloud P containing n_t data points, with point coordinates denoted by p_m. The specific implementation steps are as follows: (1) Define a local coordinate system for each point p_m in the point cloud P, and set the search radius d_iss for the neighbourhood query. (2) Query all points in the region of P centred on p_m with radius d_iss, and calculate their weight values v_mn. (3) Calculate the covariance matrix corresponding to each point p_m. (4) Calculate the eigenvalues λ_1m, λ_2m, and λ_3m of each covariance matrix, and rank them from largest to smallest. (5) Set the threshold values ε_1 and ε_2. The points satisfying the two corresponding eigenvalue-ratio inequalities are considered feature points and kept.
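Steps (1)-(5) can be sketched compactly as below. Since the paper's weight and threshold equations are not reproduced here, the inverse-neighbour-count weighting and the ratio tests λ_2m/λ_1m < ε_1 and λ_3m/λ_2m < ε_2 are assumed from the standard ISS formulation:

```python
import numpy as np

def iss_keypoints(points, d_iss, eps1=0.75, eps2=0.75):
    """Sketch of ISS keypoint detection; thresholds eps1/eps2 are illustrative."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / (dist < d_iss).sum(axis=1)          # weight v_mn: inverse neighbour count
    keypoints = []
    for m in range(n):
        mask = (dist[m] < d_iss) & (np.arange(n) != m)
        if mask.sum() < 3:                        # need a stable neighbourhood
            continue
        diffs = points[mask] - points[m]
        wm = w[mask]
        # weighted covariance matrix of the neighbourhood of p_m
        cov = (wm[:, None, None] * diffs[:, :, None] * diffs[:, None, :]).sum(0) / wm.sum()
        lam = np.sort(np.linalg.eigvalsh(cov))[::-1]   # lambda_1m >= lambda_2m >= lambda_3m
        if lam[0] > 0 and lam[1] > 0 and lam[1] / lam[0] < eps1 and lam[2] / lam[1] < eps2:
            keypoints.append(m)
    return np.array(keypoints, dtype=int)
```

Points whose neighbourhood covariance has three well-separated eigenvalues (i.e., pronounced 3D structure) survive the two ratio tests and are kept as feature points.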

Traditional HHO Algorithm
The HHO algorithm is a gradient-free algorithm proposed in 2019 [37]. It solves the optimisation problem by simulating biological behaviours such as tracking, encircling, repelling, and attacking in the process of collaborative prey hunting by Harris hawk groups in nature, where each member of the Harris hawk population represents a candidate solution. The HHO algorithm consists of three phases: exploration, transition, and exploitation. When |E_cv| ≥ 1, the Harris hawk algorithm performs a global exploration. In this stage, Harris hawks are randomly perched in a location, searching and tracking their prey through keen vision. The hawks select one of two strategies with equal probability to determine the perching locations. When |E_cv| < 1, the algorithm enters the exploitation phase: several Harris hawks sweep up at the same time to form an encirclement and wait for the moment of the surprise attack. The HHO algorithm selects a suitable location update strategy based on two parameters: the prey escape energy, E_cv, and the successful escape probability, c. Figure 2 is a simple example of the process through which a Harris hawk chases its prey. The whole search process of the algorithm is as follows:

In the above equations, T_max represents the maximum number of times that the population can be updated; E_iv denotes the initial escape energy when the prey perceives the crisis and starts to escape, which is a randomly distributed value in the range [−1, 1]; E_cv is the escape energy value after the t_s-th iteration; O_avg(t_s) is the average of the positions of all population members; O(t_s) is the position vector of the Harris hawk; O_prey(t_s) denotes the position of the prey at the t_s-th iteration; L_min and L_max are the lower and upper boundaries that constrain the space of values taken by the candidate solutions; O_r(t_s) represents the current location of a random Harris hawk individual; q is the control parameter of the global search policy; q, e_1, e_2, e_3, and e_4 are all random values distributed in the range [0, 1]; ΔO(t_s) represents the positional distance between the Harris hawk and its prey in the t_s-th iteration; S_r is a two-dimensional vector composed of random numbers distributed on [0, 1]; f(·) denotes the fitness value calculation for the current decision variable; J_r is a random value distributed in the range [0, 2]; and LF(·) indicates that the parameters are updated using Lévy flight for long- and short-distance searches.
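Using the symbols above, the escape-energy schedule and the two equal-probability exploration strategies of standard HHO [37] can be sketched as follows. The linear decay E_cv = 2·E_iv·(1 − t_s/T_max) is the form used in the original HHO paper; the exploitation branches are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(42)

def escape_energy(t_s, T_max):
    """Prey escape energy E_cv, decaying linearly towards 0 as t_s -> T_max."""
    E_iv = rng.uniform(-1.0, 1.0)           # initial escape energy on [-1, 1]
    return 2.0 * E_iv * (1.0 - t_s / T_max)

def exploration_step(O, O_prey, L_min, L_max):
    """One global-exploration position update (the |E_cv| >= 1 branch)."""
    n, dim = O.shape
    O_avg = O.mean(axis=0)                  # average position of the population
    O_new = np.empty_like(O)
    for i in range(n):
        q, e1, e2, e3, e4 = rng.random(5)
        O_r = O[rng.integers(n)]            # a random hawk O_r(t_s)
        if q >= 0.5:                        # perch relative to a random member
            O_new[i] = O_r - e1 * np.abs(O_r - 2.0 * e2 * O[i])
        else:                               # perch relative to prey and mean position
            O_new[i] = (O_prey - O_avg) - e3 * (L_min + e4 * (L_max - L_min))
    return np.clip(O_new, L_min, L_max)     # respect the candidate-solution bounds
```

When |E_cv| drops below 1, the algorithm would switch to the encirclement (exploitation) strategies selected by E_cv and the escape probability c.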

TMCHHO
An excellent swarm intelligence optimisation algorithm must maintain diversity when the population is initialised [40]. Thus, a global search is used to rapidly converge near the optimal solution after the optimisation task starts to execute, and the strong local convergence ability is maintained to fully explore the candidate solution space in its latter stage. The position vector at initialisation of the Harris hawk population is composed of normally distributed random numbers, thus increasing the time and computational costs of the global search. Furthermore, when the algorithm enters the development stage, the position update is prone to stagnation owing to the lack of learning among individuals of the population in a single iteration. This affects the accuracy of the algorithm for finding the best solution. To resolve these problems, the two following improvements are introduced: (1) Mapping the PWLCM sequence to optimise the distribution of the initial population: A reasonable initial solution for the population can increase the search space and accelerate the convergence of the population. Chaos [41], as a nonlinear phenomenon in nature, has been widely used in swarm intelligence optimisation algorithms because of its ergodicity, randomness, and sensitivity to initial conditions. Presently, logistic chaotic maps are widely used in metaheuristic algorithms. Previous studies emphasise that PWLCM systems have unique advantages compared with logistic maps [42,43]. Figure 3 indicates that the probability of the logistic sequence falling in [0, 0.1] and [0.9, 1] is higher than that in other intervals. The PWLCM sequence exhibits better randomness and ergodicity. In this study, the PWLCM system is mapped into the value space of the optimisation variables, thus replacing the pseudo-random numbers in initialising the population of Harris hawks and reducing the influence of insufficient population diversity on the global exploration capacity. The expression for the PWLCM system mentioned above is as follows:

r_{k+1} = r_k / t_cr,                        0 ≤ r_k < t_cr
r_{k+1} = (r_k − t_cr) / (0.5 − t_cr),       t_cr ≤ r_k < 0.5
r_{k+1} = (1 − t_cr − r_k) / (0.5 − t_cr),   0.5 ≤ r_k < 1 − t_cr
r_{k+1} = (1 − r_k) / t_cr,                  1 − t_cr ≤ r_k < 1

In the above formula, the value range of the control parameter t_cr is (0, 0.5), which is set to 0.4 in this paper; k = 1, 2, ..., n; and r_k and r_{k+1} are the chaotic numbers in the chaotic sequence.
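A sketch of the PWLCM-based initialisation, assuming the standard symmetric piecewise linear map with t_cr = 0.4 (the seed r0 is an illustrative value):

```python
import numpy as np

def pwlcm_step(r, t_cr=0.4):
    """One iteration of the piecewise linear chaotic map."""
    if r >= 0.5:                 # the map is symmetric: F(r) = F(1 - r)
        r = 1.0 - r
    if r < t_cr:
        return r / t_cr
    return (r - t_cr) / (0.5 - t_cr)

def pwlcm_sequence(r0, n, t_cr=0.4):
    """Generate n chaotic numbers r_1, ..., r_n from the seed r_0."""
    seq = np.empty(n)
    r = r0
    for k in range(n):
        r = pwlcm_step(r, t_cr)
        seq[k] = r
    return seq

def init_population(pop_size, dim, L_min, L_max, r0=0.37):
    """Map a PWLCM sequence into [L_min, L_max] to initialise the hawks."""
    chaos = pwlcm_sequence(r0, pop_size * dim).reshape(pop_size, dim)
    return L_min + chaos * (L_max - L_min)
```

The chaotic numbers spread more uniformly over [0, 1] than a logistic map sequence, so the mapped initial positions cover the candidate-solution space more evenly.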
(2) Introduction of trigonometric mutation to improve the location update strategy: Trigonometric mutation [44] refers to randomly extracting three mutually different solutions, Z_t1,n, Z_t2,n, and Z_t3,n, as vertices in the population. The vector differences (Z_t1,n − Z_t2,n), (Z_t2,n − Z_t3,n), and (Z_t3,n − Z_t1,n) are used as the side lengths of the hypergeometric triangular search space, and the three fitness values f(Z_t1,n), f(Z_t2,n), and f(Z_t3,n) are used to perturb the stagnant population. In the development stage of the Harris hawk algorithm, trigonometric mutation is introduced to control the population to always mutate toward the individual with the optimal objective function value and to adaptively adjust the forward progress of each Harris hawk, improving the learning ability among the individuals of the population in each iteration. The equation for trigonometric mutation is derived as follows: In the above, f(Z_tm,n) represents the objective function value of the current individual; K is the sum of the objective function values of the three extracted solutions; k_1, k_2, and k_3 are the ratios of the corresponding function values; td_1, td_2, and td_3 represent the forward progress along the directions of the three individuals, respectively; and or_m,n is the Gaussian distribution scaling factor, that is, or_m,n ~ N(0, β). In this paper, β is set to 0.1.
The centre point of the hypergeometric triangle search space is used as the base vector position, and the individual mutation direction and step size are controlled by the weighted vector. When k_m − k_n < 0, the base vector moves towards individual Z_m; when k_m − k_n > 0, the base vector moves towards individual Z_n, ensuring that the candidate solution updates its position in the direction of the better individual at each iteration. As shown in Figure 4, when k_3 < k_2 < k_1, the individual moves in direction Z_t3,n, which has the smallest objective function value. Furthermore, the Gaussian distribution factor, or_m,n, regulates the degree of trigonometric mutation perturbation. Figure 5 shows the flowchart of the TMCHHO algorithm.
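The classic trigonometric mutation of Fan and Lampinen [44] can be sketched as below. How the Gaussian factor or_m,n enters the paper's update equation is not fully shown above, so applying it as a multiplicative perturbation of the mutant is an assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(7)

def trigonometric_mutation(pop, fitness, beta=0.1):
    """Trigonometric mutation over three random distinct solutions, with an
    assumed Gaussian perturbation factor or_{m,n} ~ N(0, beta)."""
    n = len(pop)
    t1, t2, t3 = rng.choice(n, size=3, replace=False)   # three distinct vertices
    f1, f2, f3 = abs(fitness[t1]), abs(fitness[t2]), abs(fitness[t3])
    K = f1 + f2 + f3                                    # sum of objective values
    k1, k2, k3 = f1 / K, f2 / K, f3 / K                 # per-vertex weight ratios
    centre = (pop[t1] + pop[t2] + pop[t3]) / 3.0        # base vector: triangle centre
    mutant = (centre
              + (k2 - k1) * (pop[t1] - pop[t2])
              + (k3 - k2) * (pop[t2] - pop[t3])
              + (k1 - k3) * (pop[t3] - pop[t1]))
    # assumed use of the Gaussian scaling factor to regulate the perturbation
    return mutant * (1.0 + rng.normal(0.0, beta, size=mutant.shape))
```

The signs of the weight differences (k_m − k_n) pull the centre towards the vertex with the better objective value, which is the directed-mutation behaviour described above.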

ICP Algorithm
After rough registration, the point cloud coordinate system is roughly aligned, but it still cannot meet the needs of engineering. It is necessary to use the ICP algorithm to further improve the registration accuracy. The specific implementation steps are as follows: (1) Find the corresponding point Q_i of each point P_i in the source point cloud P_s in the target point cloud Q_t. (2) Calculate D_dis and solve the spatial transformation parameters using the singular value decomposition method. (3) Apply the spatial transformation parameters obtained in the previous step to P_s; the new point cloud obtained is named P_n. (4) Use P_n and Q_t to calculate the target value D_dis in Equation (8). If D_dis is less than a certain threshold, stop the iteration; otherwise, proceed to the next iteration and repeat steps (1)-(3).
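Steps (1)-(4) can be sketched as a minimal point-to-point ICP; a brute-force nearest-neighbour search stands in for the k-d tree that would be used on real dam-scale clouds, and the convergence threshold is an illustrative value:

```python
import numpy as np

def icp(P_s, Q_t, max_iters=50, tol=1e-6):
    """Minimal point-to-point ICP; returns the accumulated R, T."""
    P_n = P_s.copy()
    R_total, T_total = np.eye(3), np.zeros(3)
    prev = np.inf
    for _ in range(max_iters):
        # (1) nearest neighbour in Q_t for every point of the source cloud
        d = np.linalg.norm(P_n[:, None, :] - Q_t[None, :, :], axis=2)
        Q_i = Q_t[d.argmin(axis=1)]
        # (2) closed-form R, T via singular value decomposition
        mu_p, mu_q = P_n.mean(0), Q_i.mean(0)
        H = (P_n - mu_p).T @ (Q_i - mu_q)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
        R = Vt.T @ D @ U.T
        T = mu_q - R @ mu_p
        # (3) apply the transform to obtain the new cloud P_n
        P_n = P_n @ R.T + T
        R_total, T_total = R @ R_total, R @ T_total + T
        # (4) stop when the RMS objective D_dis changes less than the threshold
        D_dis = np.sqrt(np.mean(np.sum((P_n - Q_i) ** 2, axis=1)))
        if abs(prev - D_dis) < tol:
            break
        prev = D_dis
    return R_total, T_total
```

Because the nearest-neighbour correspondences are only reliable when the clouds are already roughly aligned, this refinement step depends on the TMCHHO coarse registration that precedes it.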

Simulation Experiments and Engineering Application
This section contains three parts. First, the robustness and accuracy of the TMCHHO algorithm are verified based on optimisation experiments using benchmark functions. Second, the feasibility of the TMCHHO algorithm for solving the point cloud coordinate system unification problem is verified based on registration experiments using a standard point cloud dataset. Finally, the TMCHHO point cloud alignment algorithm is applied to the massive multimodal point clouds generated during the construction of the Lianghekou project, demonstrating the improved accuracy and completeness of the fusion model compared with the existing single reverse modelling approach.

Benchmark Function Optimisation Experiments
To verify the superiority of the TMCHHO algorithm, seven representative benchmark functions were selected for the optimisation experiments, as shown in Table 1, where fmin denotes the optimal value to which the benchmark function can theoretically converge, and the range is the constraint on the candidate solutions. The functions f1(x)-f4(x) are single-peaked benchmark functions with only one extreme value, which are suitable for testing global exploration ability. The functions f5(x)-f7(x) are complex multi-peaked benchmark functions with multiple local extreme values, which are suitable for testing the ability to escape premature convergence.
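For illustration of the two categories (the paper's exact f1(x)-f7(x) are defined in Table 1 and may differ), a typical single-peaked benchmark and a typical multi-peaked benchmark can be written as:

```python
import numpy as np

def sphere(x):
    """Single-peaked: one global minimum f(0) = 0 and no local traps."""
    return np.sum(x ** 2)

def rastrigin(x):
    """Multi-peaked: global minimum f(0) = 0 surrounded by many local minima."""
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))
```

An optimiser that reaches fmin on functions like `sphere` demonstrates exploitation accuracy, while reaching fmin on functions like `rastrigin` demonstrates the ability to escape local optima.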

(Table 1 lists, for each benchmark function, its expression, dimensions, fmin, and range.)
To ensure the rigour of the experiments, 10 control groups were set up. Accordingly, we chose the widely used whale optimisation algorithm (WOA), particle swarm optimisation (PSO), the butterfly optimisation algorithm (BOA), the traditional HHO algorithm, and the dingo optimisation algorithm (DOA) proposed in 2021 [45] to conduct experiments with decision variable dimensions of 10 and 50, respectively. Additionally, the performance of the algorithms was analysed based on the experimental results of each test group. The control parameters of each algorithm are shown in Table 2, and the statistical results of the optimisation experiments of each group are shown in Table 3. Regardless of whether the dimension was 10 or 50, the TMCHHO algorithm could approach the global solution on the seven benchmark test functions. The convergence results on the f1(x), f2(x), f3(x), f4(x), and f6(x) functions were close to the theoretical optimal value, and on the complex multi-peaked functions f5(x) and f7(x), the experimental results of the TMCHHO algorithm reached the theoretical optimal value. Although the traditional HHO algorithm performed comparably to the TMCHHO algorithm on f3(x), f5(x), and f7(x), the performance of the TMCHHO algorithm on the other four functions was the best among the six algorithms, indicating that the use of trigonometric mutation and the PWLCM system significantly improved the optimisation ability and robustness of the HHO algorithm. Figure 6 illustrates the variation curves of the fitness values of the six algorithms on the single-peaked test functions f1(x) and f2(x) and the multi-peaked test functions f5(x) and f6(x) when the dimensionality of the decision variables was 10.
As can be seen from the graphs, the improved Harris hawk algorithm converged faster and had a higher optimisation-seeking accuracy than other population intelligence optimisation algorithms.
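The two modifications named above can be sketched as follows. The PWLCM initialisation and the trigonometric mutation operator shown here follow their commonly published forms (the mutation is the operator introduced for differential evolution); the seed x0 and control parameter p are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def pwlcm_step(x, p):
    """One step of the piecewise linear chaotic map on (0, 1); p in (0, 0.5)."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm_step(1.0 - x, p)      # the map is symmetric about 0.5

def pwlcm_init(pop_size, dim, lo, hi, x0=0.7, p=0.4):
    """Initialise a population by scaling a PWLCM sequence into [lo, hi]."""
    vals = np.empty(pop_size * dim)
    x = x0
    for i in range(vals.size):
        x = pwlcm_step(x, p)
        vals[i] = x
    return lo + (hi - lo) * vals.reshape(pop_size, dim)

def trig_mutation(x1, x2, x3, f1, f2, f3):
    """Trigonometric mutation of three candidates with (non-all-zero) fitness f1-f3."""
    s = abs(f1) + abs(f2) + abs(f3)
    p1, p2, p3 = abs(f1) / s, abs(f2) / s, abs(f3) / s
    return ((x1 + x2 + x3) / 3.0          # pull towards the candidates' centroid
            + (p2 - p1) * (x1 - x2)       # fitness-weighted perturbations
            + (p3 - p2) * (x2 - x3)
            + (p1 - p3) * (x3 - x1))
```

The chaotic sequence spreads the initial hawks more evenly over the search space than uniform random draws, and the mutation perturbs individuals towards better-valued regions, which is how the development-stage stagnation is avoided.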

Standard Point Cloud Dataset Registration Experiments
To verify the feasibility of the TMCHHO algorithm in the field of point cloud registration, experiments were conducted using the data obtained from the Armadillo, Bunny, Dragon, and Happy datasets, and they were compared with the data from other algorithms. Figure 7 shows the standard point cloud set used in this paper. Following the single-variable principle of comparative experiments, the same population size, iteration times, and initial value range were set for each swarm intelligence optimisation algorithm.
In the experiments presented in this section, the number of iterations, population size, and initial value range were set as 500, 30, and [−50, 50], respectively. All algorithms were programmed in MATLAB R2016b, and an Intel Core i7-9750H 32 G computer was used. Point cloud data P and Q from different perspectives were set as the operation objects, and the spatial position alignment of the two point clouds was set as the optimisation goal. Accordingly, various swarm intelligence optimisation algorithms were employed to conduct 20 experiments on multi-point cloud sets.
The point clouds were first processed using different methods, such as voxel filtering and ISS feature point extraction. Reasonable values of r, ε1, and ε2 (the voxel filtering and feature point extraction parameters) reduce the computational effort while preserving the inherent geometric feature information of the point cloud. Based on experience, and after several experiments, the values of r, ε1, and ε2 were set to 0.001, 0.65, and 0.6, respectively.
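The voxel filtering step with edge length r can be sketched as follows; this is a generic centroid-per-voxel downsampler written for illustration, not the authors' exact implementation:

```python
import numpy as np

def voxel_downsample(points, r):
    """Replace all points inside each voxel of edge length r by their centroid."""
    keys = np.floor(points / r).astype(np.int64)      # integer voxel indices
    _, inv, counts = np.unique(keys, axis=0,
                               return_inverse=True, return_counts=True)
    inv = inv.ravel()
    sums = np.zeros((counts.size, points.shape[1]))
    np.add.at(sums, inv, points)                      # accumulate per-voxel sums
    return sums / counts[:, None]                     # per-voxel centroids
```

The smaller r is, the more points survive; the value r = 0.001 quoted above keeps the clouds dense enough to preserve geometric features while trimming redundant points.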
In this section, we use the RMS values mentioned in Section 4.1 to characterise the alignment accuracy of the two point clouds. In order to observe the results of the point cloud registration experiments more intuitively, Figure 8 plots the variation curves of RMS values during the iterations of the six algorithms.
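As a sketch of how the swarm optimisers evaluate a candidate solution, the RMS alignment error can be computed over nearest-neighbour correspondences; the 6-DOF pose encoding here (three Euler angles in degrees plus a translation vector) is an illustrative assumption, not necessarily the encoding used in the paper:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def rms_fitness(pose, source, target_tree):
    """RMS registration error of a candidate pose.

    `pose` is assumed to hold three Euler angles (degrees) followed by a
    translation vector -- an illustrative 6-DOF encoding.
    """
    R = Rotation.from_euler("xyz", pose[:3], degrees=True).as_matrix()
    moved = source @ R.T + pose[3:]      # apply the candidate rigid transform
    d, _ = target_tree.query(moved)      # nearest-neighbour correspondences
    return np.sqrt(np.mean(d ** 2))      # RMS over all matched pairs
```

A swarm intelligence optimiser such as TMCHHO then searches the pose space for the minimum of `rms_fitness`, which is the quantity plotted over the iterations in Figure 8.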
As shown in Figure 8, the iterative curve of each optimisation algorithm exhibited a downward trend as the number of population updates increased. The TMCHHO algorithm had a relatively large RMS value in the first iteration because of its more dispersed initial population; however, its reduction rate was rapid in the early stage. After more than 100 iterations, all the algorithms exhibited a stable trend; however, the convergence value of the TMCHHO algorithm was smaller after 500 iterations, proving the superiority of the algorithm.
To analyse the performance of the TMCHHO algorithm further, Table 4 outlines the experimental results of each algorithm on different datasets. It can be seen from the table that the performance of the TMCHHO algorithm on the two data indicators of the average and worst values was the best among the six algorithms, and its stability performance was second only to the traditional HHO algorithm and better than the WOA, BOA, DOA, and PSO algorithms. Compared with the traditional HHO algorithm, the registration accuracy on the Armadillo, Bunny, Dragon, and Happy datasets improved by 52.11%, 72.91%, 48.51%, and 77.69%, respectively. Figure 9 shows a boxplot that intuitively reflects the distribution and average level of each algorithm. The TMCHHO algorithm achieved satisfactory results for the coordinate matching problem on the above four standard point cloud datasets. Other comparative analysis algorithms were prone to falling into local loops or having inadequate stability, resulting in unsatisfactory final convergence results.
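Assuming the reported improvements are computed as the relative reduction in RMS with respect to the traditional HHO baseline (an assumption; the paper's exact formula is not restated here), the percentage figures correspond to:

```python
def rms_improvement(rms_baseline, rms_new):
    """Relative accuracy improvement (%) of rms_new over rms_baseline."""
    return 100.0 * (rms_baseline - rms_new) / rms_baseline
```

For example, halving the baseline RMS corresponds to a 50% improvement under this definition.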
The above statistical data demonstrate the feasibility and superiority of the TMCHHO algorithm in the field of point cloud registration. As shown in Figure 10, when only the ICP algorithm was used for registration, it was greatly affected by the initial pose of the point cloud, and the registration effect was not ideal. The use of the TMCHHO coarse registration algorithm overcame this defect. Table 5 also shows that the combination of the TMCHHO and ICP algorithms can significantly improve the registration accuracy.

Engineering Application
The Lianghekou rockfill dam project was used as the research object. The crest of the dam is 668.7 m long and 16 m wide. Firstly, information on the complex environment of the dam surface during the construction period was collected using multiple mapping systems. Secondly, the TMCHHO algorithm was used to complete the unification of multimodal point cloud spatial locations. Finally, the Euclidean distance distribution of the measurement area was used to evaluate the point cloud fusion accuracy.
The construction environment of the dam surface is complex, and vehicles such as trucks, vibratory rollers, and sprinklers operate continuously, rendering continuous measurement difficult to implement. Therefore, 3D laser scanning alone cannot collect the complete environmental information of the dam surface and surrounding terrain. By contrast, UAV tilt photography is less affected by the ground construction environment and has flexible air-to-ground observation capability, but some spatial information at the bottom and sides of a feature may be missing or deformed. Figure 11 shows a schematic of the data acquisition strategy, which adopts an integrated ground-to-air strategy to collect point clouds and images to overcome the blind-area phenomenon and reduce labour costs.
The equipment used for point cloud data acquisition was a Faro Focus S 350 terrestrial 3D laser scanner. During the field acquisition, we surveyed the measurement area in advance, adjusted the station locations according to the actual scanning situation, ensured an overlap rate of at least 30% among adjacent stations, and set the scanning range to 360° × 300°. A total of 22 stations were deployed for this scan. Figure 12a shows the acquisition device, and Figure 12b shows the acquired point cloud data.
For image collection, a DJI Genie UAV was employed. To ensure the safe flight and high-quality data acquisition of the UAV, the route was planned in advance, the heading and side overlap rates were set to 80%, and the UAV was manually controlled to take supplementary shots in local areas. In the postprocessing stage, the supporting software Context Capture was used to produce a 3D dense point cloud model from the collected multi-view oblique images and output it in .las format, as shown in Figure 13b. The slope part was cropped and used as the data source for solving the transformation matrix.
After entering the data processing stage, downsampling and feature point extraction were performed on the massive multimodal point clouds with disorder characteristics, where the edge length of the voxel grid was set to 0.1 and the spherical search radius in the ISS descriptor algorithm was set to 0.3. Table 6 lists the number of data points in the point cloud before and after downsampling.
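The ISS feature point extraction step mentioned above can be sketched as an eigenvalue-ratio test over spherical neighbourhoods; this is a simplified illustration of the intrinsic shape signature criterion (the thresholds play the role of ε1 and ε2), not a full ISS implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def iss_keypoints(points, radius, eps1=0.65, eps2=0.6):
    """Simplified ISS keypoint test: keep points whose neighbourhood scatter
    matrix has eigenvalues l1 >= l2 >= l3 with l2/l1 < eps1 and l3/l2 < eps2."""
    tree = cKDTree(points)
    keep = []
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 5:                      # too few neighbours to be stable
            continue
        nbrs = points[idx] - points[idx].mean(axis=0)
        cov = nbrs.T @ nbrs / len(idx)
        l = np.sort(np.linalg.eigvalsh(cov))[::-1]
        if l[0] > 0 and l[1] > 0 and l[1] / l[0] < eps1 and l[2] / l[1] < eps2:
            keep.append(i)
    return np.array(keep, dtype=int)
```

Points on flat, locally symmetric regions are rejected because their first two eigenvalues are nearly equal; only geometrically distinctive points survive, which is what makes the subsequent registration both faster and more stable.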
In this study, the fusion effect of the TMCHHO algorithm was verified based on the massive geometric spatial information provided by multiple mapping systems during the construction of the rockfill dam. Using the six algorithms to register the multimodal point clouds in practical engineering, the obtained registration accuracy histogram is shown in Figure 14a. It can be seen from the figure that the TMCHHO algorithm performed best, with an RMS value of 1.01. Figure 14b shows the RMS value variation curves when applying the six algorithms to the multimodal point clouds. The WOA, PSO, BOA, and DOA algorithms converged at 200 iterations, while the HHO and TMCHHO algorithms were superior in the second half of the iterative process. The fitness value obtained using the TMCHHO algorithm was the smallest, and the rigid transformation matrix obtained with this algorithm was used as the final application scheme. Figure 15 shows the process of spatial coordinate system matching for a multimodal point cloud with an unknown orientation pose. It can be seen from the figure that the use of the TMCHHO registration algorithm enabled the source point cloud to obtain a good initial pose. The two point clouds were completely aligned after applying the ICP fine registration algorithm.
The conventional evaluation method is to lay control points of the same name under multiple measurement techniques, use a total station to collect their geodetic coordinates, and compare the absolute accuracy of the 3D model reconstruction [23]. However, the coordinate acquisition of control points could not be guaranteed because of the blind-area phenomenon caused by the complex construction environment of the dam face and the special topographic conditions of the rockfill dam, so we performed a relative comparison of the dense point cloud models produced using the two measurement techniques. As shown in Figure 16, the dense point clouds produced using UAV tilt photography and 3D laser scanning were defined as the reference model and the target model, respectively, and a colour map of the Euclidean distances over the measured area was considered the index.
The closer the colour was to red, the greater the deviation of the corresponding point cloud. To further evaluate the multimodal point cloud fusion, Table 7 lists the various indicators of the Euclidean distances of the multimodal point cloud in the five measurement areas. Figure 17 shows the histogram of the Euclidean distances of the point cloud within the five measurement areas. Among them, the point cloud deviations were mostly distributed in the range of [0, 0.06], the average deviation ranged from 1.8 to 3.1 cm, and the degree of agreement between the two models was high.
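The cloud-to-cloud comparison underlying the colour map and histogram can be sketched as follows: each target point is scored by its Euclidean distance to the nearest reference point (a common cloud-to-cloud metric; the binning below is illustrative, and the paper's exact computation may differ):

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_deviation(reference, target):
    """Euclidean distance from each target point to its nearest reference point."""
    d, _ = cKDTree(reference).query(target)
    return d                           # colour-map each target point by d

def deviation_summary(d, bin_width=0.02, max_dev=0.06):
    """Mean deviation and a histogram of deviations (illustrative binning)."""
    bins = np.arange(0.0, max_dev + bin_width, bin_width)
    hist, edges = np.histogram(d, bins=bins)
    return d.mean(), hist, edges
```

Applying `cloud_deviation` with the UAV-derived cloud as `reference` and the laser-scanned cloud as `target` yields the per-point deviations that are colour-mapped in Figure 16 and summarised per measurement area.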
The intercepted point cloud data can be used to complement the missing parts. For example, the missing point cloud data of UAV tilt photography in the box shown in Figure 18a,c can be filled by a point cloud derived from 3D laser scanning. Figure 18b

Conclusions
The accuracy, completeness, and authenticity of the 3D model reconstruction of the dam during the construction period determine its consistency with the actual project. In the traditional technology involving the 3D reconstruction of dams, the simultaneous acquisition of high-precision 3D information and texture features of real ground objects from multiple angles is difficult to achieve. Accordingly, a TMCHHO fusion model for the 3D reality modelling of rockfill dams during construction based on multi-source point clouds was developed and validated based on actual projects. The following findings were observed: (1) An air-ground comprehensive perception system based on multiple surveying and mapping systems was proposed, which realises the real-time acquisition of point clouds and images containing considerable amounts of spatial and texture information of rockfill dams during construction. The collected multi-source data were preprocessed using a voxel filter and ISS descriptors. (2) A high-precision, high-integrity, and high-fidelity TMCHHO point cloud fusion model was developed for the 3D reconstruction of rockfill dams during dam construction. The model integrates the improved Harris hawk coarse registration and ICP fine registration algorithms. The former combines the PWLCM system in the initialisation stage to reduce the impact of initial values on global exploration. In the local development stage, a trigonometric mutation was introduced to perturb the population individuals such that falling into local optima can be avoided. (3) The TMCHHO algorithm was verified to achieve better accuracy than the BOA, PSO, WOA, and HHO using standard point cloud datasets (i.e., Armadillo, Bunny, Dragon, and Happy) in point cloud registration experiments. (4) The method proposed in this paper was applied to the Lianghekou rockfill dam, and the improvement in model integrity compared with a single surveying and mapping system was verified.
The results of this study enable the automatic fusion of multimodal point clouds, saving labour and time costs while improving the integrity of the model. Accordingly, the study results contribute to advancing the intelligent development of rockfill dam construction management and decision making. This study can be extended to the realistic 3D modelling of cross-source data in other hydropower construction projects during their construction periods, and thus has a certain universality. In addition, owing to the multiple data format conversions involved in the fusion modelling process, the authors intend to conduct future research on cross-source point-cloud-integrated data acquisition and processing, as well as fused big data processing platforms, to achieve highly accurate and stable engineering application targets.