Multiview intelligent networking based on the genetic evolution algorithm for precise 3D measurements

The use of multiview network 3D measurement is increasing; however, applying low-cost industrial cameras to achieve intelligent networking and efficient measurement remains a key problem that has not been fully solved. In this paper, the multivision stereo vision 3D measurement principle and the constraints of the multivision networking process are analyzed in depth, and an intelligent networking method based on the genetic evolution algorithm (GEA) is proposed. The genetic operation is improved, and the fitness function is dynamically calibrated. Based on the visual sphere model, the best observation distance is assigned as the radius of the visual sphere, and the required constraints are fused to establish an intelligent networking design for centering multivision. Simulation and experiment show that the proposed algorithm is broadly feasible and that its measurement accuracy meets the requirements of the industrial field. Our multiview intelligent networking algorithms and methods provide solid theoretical and technical support for low-cost and efficient on-site 3D measurements of industrial structures.


Introduction
3D vision measurement technology uses the image information obtained by a visual sensor as the carrier. The geometric information of the object to be measured in space is calculated, the target object is then reconstructed, and its 3D morphology information is restored [1][2][3][4]. The various visual measurement methods in the literature attempt to predict the visibility of the target points and the feasibility of the camera station to optimize the initial imaging network without using CAD models [17]. There are known disadvantages: all of these technologies need a good initial network provided by an expert (i.e., prior knowledge) [18]. Some network design strategies are suitable for coordination and are used for specific object features or texture changes of 3D reconstructions from multiple views. Such a method considers the difference in image texture in the overlapping area; by adjusting the feature vectors of the 3D points perpendicular to the camera view direction, the feature values are minimized to ensure the accuracy of the 3D reconstructions. However, the constraints related to the measurement range are not considered, and the method is not applicable to self-occluded areas.
There are many constraints within the measurement range, such as the image size and resolution constraints, the number and distribution of feature points, the camera field of view, the depth of field, and the size of the overlapping imaging area of the target working area [19][20][21][22][23][24]. These range constraints are related to another set of visibility constraints, including the incidence angle, occlusion and self-occlusion areas. The design of a multiview visual network is characterized by multiple parameters, multiple constraints and a large amount of computation. These characteristics mean that the search for an absolutely exact optimal solution requires a very large amount of computation. Theoretically, this is a complex optimization strategy problem, which is difficult to describe and requires a highly technical implementation, so the efficiency of the algorithm must be considered. If the above factors are considered in the planning of a visual measurement network and intelligent algorithms extend stereo vision measurement technology to engineering practice in topography measurement, then the accuracy and reliability of measurements in the industrial vision field will take a qualitative leap.
A GA searches the solution space broadly, and its population-based search process has significant advantages. Moreover, the algorithm can search for the global optimal solution in parallel, and it can be combined with other intelligent algorithms for different engineering problems. In this paper, the multiview stereovision 3D measurement principle and the multivision networking constraints are analyzed, and the parameters to be optimized are obtained. To seek an accurate position and pose of each camera in the vision measurement network, the genetic evolutionary algorithm (GEA) is integrated into the network design. In this algorithm, the genetic operation step is improved, and the fitness function is dynamically calibrated. Then, the centering networking method is designed for workpiece structures that are easy to center. The spherical model is adopted, the optimal observation distance is taken as the spherical radius, the required constraint conditions are considered, and the optimal centering camera network is iteratively selected by the GEA to implement precise 3D measurements.

The 3D measurement principle based on multiview stereo vision
To achieve full measurement coverage, numerous measurement modules must be arranged in space to form a multiview stereo vision measurement network. Based on the binocular vision 3D measurement principle, the multivision 3D measurement model is further derived. The 3D measurement principle is shown in Figure 1. The cameras of the measuring viewpoints are denoted Camera 1, Camera 2, ..., Camera n. For convenience of calculation, the camera coordinate system of Camera 1 coincides with the world coordinate system, analogously to the 3D measurement principle of binocular stereovision. The characteristic points on the object to be measured are P1, P2, P3, .... The coordinates of a point P in the coordinate system of Camera 1 are (x1, y1, z1), and its coordinates in the coordinate system of Camera 2 are (x2, y2, z2). The rotation matrix between the coordinate systems of Cameras 1 and 2 is R12, and the translation vector is T12. Therefore, the transformation formula between the coordinate systems of Cameras 1 and 2 is shown in Eq (1).
(x2, y2, z2)ᵀ = R12 (x1, y1, z1)ᵀ + T12. (1)
By analogy, the coordinates of P in the coordinate system of Camera n−1 are (xn−1, yn−1, zn−1), and the coordinates of P in the coordinate system of Camera n are (xn, yn, zn). The transformation relation between the coordinate systems of Cameras n−1 and n is shown in Eq (2):
(xn, yn, zn)ᵀ = R(n−1)n (xn−1, yn−1, zn−1)ᵀ + T(n−1)n. (2)
The coordinate transformation matrix from Camera 1 to Camera n is obtained by composing Eqs (1) and (2), as shown in Eq (3).
According to the above derivation, any camera coordinate system in the multiview stereo vision measurement network can be expressed relative to the coordinate system of Camera 1, and the spatial 3D coordinates of a characteristic point are shown in Eq (4).
The 3D information of any object to be measured in space can be obtained by integrating the binocular and multiview stereo vision 3D measurement principles.
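The chained coordinate transformation in Eqs (1)-(3) can be sketched in code. The following is a minimal illustration (function and variable names are ours, not from the paper), assuming the convention that each pairwise transform maps a point from one camera frame into the next:

```python
import numpy as np

def chain_extrinsics(R_list, T_list):
    """Compose pairwise camera transforms (Camera 1 -> Camera n).

    R_list[k], T_list[k] map coordinates in camera frame k+1 to frame k+2,
    i.e. p_next = R @ p + T.  Returns the cumulative (R_1n, T_1n) such that
    p_n = R_1n @ p_1 + T_1n, mirroring the composition in Eq (3).
    """
    R_total = np.eye(3)
    T_total = np.zeros(3)
    for R, T in zip(R_list, T_list):
        # compose: p -> R @ (R_total @ p + T_total) + T
        R_total = R @ R_total
        T_total = R @ T_total + T
    return R_total, T_total
```

Composing the cumulative transform gives the same point coordinates as applying the pairwise transforms one by one, which is exactly the property Eq (3) relies on.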

Multiview intelligent networking method based on GEA
Visual networking has many constraints, a large amount of computation, complex parameters, and requires interdisciplinary knowledge. Given these characteristics, the stability and operational efficiency of the algorithm must be considered in the networking design implementation; however, finding the global optimal solution is difficult with traditional mathematical modeling. This paper improves on the GA to solve the visual networking design problem.

Implementation process of GEA
According to Darwinian evolution theory, a genetic algorithm (GA) can automatically find a solution model for the theoretical global optimal solution [25]. Genetic operations include three methods: selection, crossover and mutation. The GEA is a new calculation model that evolved from the GA mechanism. In addition to having all the characteristics of the GA, the GEA applies different improvement methods to different genetic operation problems and can also conduct individual optimizations for multiple objectives [26]. The specific implementation process is as follows: Step 1. Input the total number of individuals N, the maximum number of iterations G, the crossover probability pc ∈ [0.4, 0.99], the mutation probability pm ∈ [0, 0.05] and the intensity factor α. The fitness function threshold Fit and the discretization precision epsDiscrete are set according to the measurement accuracy requirements. The N algorithm individuals xi (i = 1, 2, …, N) are randomly initialized.
Step 2. Since each function to be solved has an interval constraint, the operation process uses binary coding according to the iterative precision. N coded individuals are uniformly and randomly generated, and the fitness function is calculated for each individual.
Step 4. Record the initially generated optimal individual and its fitness function value.
Step 5. When the number of iterations k < G, consider whether a large mutation is needed. The mutation operation in the genetic operation is used to avoid premature convergence of the algorithm. However, since the mutation probability is usually small, a large progeny group must be iterated before mutated new individuals can occur, which greatly increases the amount of computation. Define Eq (5), where favg is the average fitness value, fmin is the minimum fitness value and fmax is the maximum fitness value. If Eq (5) is satisfied, there is no need for large mutations. If Eq (5) is not satisfied, all the individuals, except for the best ones, undergo large genetic mutation operations.
Step 6. The fitness function after large mutations is dynamically calibrated. In the algorithm operation process, the fitness function of each individual may be slightly different, resulting in the selection operation weakening or even disappearing, which is not conducive to the stability of the algorithm. In this paper, the fitness function is calibrated dynamically to ensure the relative difference of the individual fitness function.
A calibration formula is used to boost the individuals with the best fitness and replace the worst individual, where Q is the initial adjustment value and the convergence coefficient lies in [0.9, 0.999]; favg, fmin and fmax are then recalculated.
Step 7. Equation (6) is used to calculate the selection probability and cumulative probability: Step 8. N offspring are produced. When , for each offspring i; based on the idea of "survival of the fittest" in GEA, the roulette algorithm is used to select the paternal line in the GA, and then the maternal line is randomly selected.
The adaptive genetic crossover probability is calculated as shown in Eq (7); if a random number falls below this probability, a crossover operation is performed. The adaptive genetic mutation probability is calculated as shown in Eq (8); likewise, if the condition is met, the mutation operation is performed. All the individuals, except the optimal one, are replaced by the next generation.
Step 9. Update the iterative optimal individual and record the iteration number k of the offspring. When the fitness function exceeds the threshold Fit, the algorithm converges and the global optimal solution is obtained. At the end of the iteration, the optimal visual networking scheme is obtained by decoding.
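Steps 1-9 can be condensed into a compact sketch. The following is an illustrative, simplified implementation (binary coding, roulette-wheel parent selection, adaptive crossover/mutation probabilities, elitism). The exact adaptive formulas of Eqs (7)-(8) and the large-mutation criterion of Eq (5) are paraphrased as a fitness-spread factor, and a small mutation floor is added for robustness; all names are ours:

```python
import random

def gea_optimize(fitness, n_bits, pop_size=50, generations=100,
                 pc=(0.4, 0.99), pm=(0.0, 0.05), seed=0):
    """Simplified GEA loop: maximize `fitness` over binary strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        fits = [fitness(ind) for ind in pop]
        f_max, f_min = max(fits), min(fits)
        f_avg = sum(fits) / len(fits)
        # elitism: remember the best individual seen so far (Steps 4 and 9)
        for ind, f in zip(pop, fits):
            if f > best_fit:
                best, best_fit = ind[:], f
        # roulette-wheel selection (Step 8); shift so weights stay positive
        weights = [f - f_min + 1e-9 for f in fits]
        def roulette():
            return pop[rng.choices(range(pop_size), weights=weights)[0]]
        children = [best[:]]          # keep the elite unchanged
        while len(children) < pop_size:
            father, mother = roulette(), roulette()
            child = father[:]
            # adaptive crossover probability: larger while fitness is spread
            spread = (f_max - f_avg) / (f_max - f_min + 1e-9)
            if rng.random() < pc[0] + (pc[1] - pc[0]) * spread:
                cut = rng.randrange(1, n_bits)
                child = father[:cut] + mother[cut:]
            # adaptive mutation probability with a small floor
            p_mut = max(pm[0] + (pm[1] - pm[0]) * spread, 0.01)
            for i in range(n_bits):
                if rng.random() < p_mut:
                    child[i] ^= 1
            children.append(child)
        pop = children
    return best, best_fit
```

On a toy problem such as maximizing the number of 1-bits, this loop converges quickly, which is the behavior the elitist selection and adaptive operators are designed to produce.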

Multivision measurement networking method
For structural parts that are easy to center, the center of the target to be measured is taken as the center of the sphere to build the visual sphere model, and the GEA is combined with the visual constraints. The theoretical global optimal camera networking method is obtained, which in this paper is called centering networking.
3.2.1. Visual sphere model and parameters to be optimized
First, the target structure to be measured is centered (refer to the literature [27]), and the visual sphere model is established on this basis. In centering networking, the distance between the camera and the object to be measured is fixed during the measurement process, and the optical axis of the camera points toward the center of the sphere, so the cameras are networked on the surface of a spherical model with a fixed radius, as shown in Figure 2. In the visual sphere model, each viewpoint represents a camera station point. The angle on the camera projection plane is called the azimuth angle α, and the included angle on the vertical plane is called the elevation angle β, where α ∈ [0, 2π] and β ∈ [−π/2, π/2]. D is the radius of the visual sphere, i.e., the optimal observation distance. The camera is fixed on a spherical surface with radius D, so the spatial position and orientation coordinates of camera i can be expressed as (αi, βi, D). The external parameters of the camera (its rotation and translation vectors) can be further derived from this information. Assuming m cameras are given, the parameters to be optimized for the multiview measurement network are shown in Eq (9).
A reasonable camera spatial position is determined by six pose parameters (three position and three orientation components) to be optimized. In addition, the constant parameters that affect the camera's position mainly include the focal length f and the baseline B. Theoretically, with the spatial position coordinates (x, y) of the camera unchanged, increasing z can increase the number of measurable points of the camera and improve the networking coverage. However, continuing to increase z will reduce the spatial resolution of the camera, so that surface details of the target to be measured are lost and the networking accuracy decreases. Therefore, it is crucial to select the optimal camera distance reasonably.
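The mapping from the visual sphere parameters (α, β, D) to a camera position and viewing direction can be sketched as follows. This is an illustrative implementation under the stated spherical convention; the axis orientation (azimuth in the XY plane, elevation toward +Z) is our assumption, and the function name is ours:

```python
import numpy as np

def camera_pose_on_sphere(alpha, beta, D):
    """Camera station on the visual sphere.

    alpha: azimuth angle in [0, 2*pi], measured in the XY plane.
    beta:  elevation angle in [-pi/2, pi/2].
    D:     sphere radius (the optimal observation distance).
    Returns (position, unit optical axis); the optical axis points at the
    sphere centre, i.e. at the target to be measured.
    """
    pos = D * np.array([np.cos(beta) * np.cos(alpha),
                        np.cos(beta) * np.sin(alpha),
                        np.sin(beta)])
    axis = -pos / np.linalg.norm(pos)   # toward the sphere centre
    return pos, axis
```

Every station produced this way lies at distance D from the target centre, which is the defining property of the centering network.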
According to the conclusions in Section 3 of [28] (which reports our team's research work), a change in the baseline distance between the two cameras directly changes the cameras' spatial pose coordinates, thus affecting the measurement accuracy of the image depth. The curve of depth measurement error versus baseline variation can be obtained from Eq (10) [28], as shown in Figure 3.
where f is the focal length and B is the baseline. According to the above analysis, in terms of system accuracy, any two vision sensors in the measurement network can be combined into a binocular stereo vision system, and the baseline distance B can be calculated from the Cartesian coordinates of the viewpoints, so increasing the baseline distance B of the binocular vision within a certain range can effectively reduce the measurement error of the system. As shown in Figure 2, the azimuth angle and elevation angle are the key parameters controlling the direction of the camera's optical axis, which directly determines the camera's spatial position and orientation coordinates; therefore, the relationship between these two factors and the accuracy of the measurement network must be analyzed. In the actual binocular measurement process, to meet the requirements of the measurement task, the two azimuth angles α1 and α2 may differ, so the influence of both azimuth angle changes on the measurement error must be analyzed. Based on the mechanism analysis of the impact of the projection angle on measurement accuracy in [28], taking α ∈ [0, π/2] as an example, the change curve is shown in Figure 4, and the other three quadrants can be obtained by complementarity. Δ represents the measurement error (unit: mm). Figure 4 shows that the measurement error Δ increases with increasing α1 and α2. In the same way, the curve relating the elevation angles β1 and β2 to the measurement error can be deduced, as shown in Figure 5: with increasing β1 and β2, the measurement error first decreases, then stabilizes, and then increases. Figure 5 only shows the change curve for β ∈ [0, π/2]; the curve trend for β ∈ [−π/2, 0] is the same, so the selected interval is [10°, 70°].
In binocular stereo vision, the azimuth angle and elevation angle simultaneously restrict the accuracy of the measurement network. To select a reasonable range of azimuth angles, two complementary azimuth angles α1 and α2 are arranged based on the interval range, and a two-dimensional change curve is drawn, as shown in Figure 6, from which the change in measurement error over the azimuth range is easy to observe. As shown in Figure 6, the measurement error Δ is evenly distributed, with small values, within the interval [25°, 45°] of α. Therefore, the optimal intervals of the measurement error Δ within α ∈ [0, 2π] are [25°, 45°], [115°, 135°], [205°, 225°] and [295°, 315°].

Target model to be measured and constraint conditions
The visual measurement process is affected by various kinds of constraints. The major influencing factors include the multiview measurement principle, the structural parameters of the camera itself, the measurement environment, etc. To establish a multiview measurement network, the constraints with significant influence weights must be analyzed and screened out from the complex set of constraints and formulated as a mathematical model. This process directly affects the efficiency of the algorithm during network design. The main constraints are the following: the field angle constraint, visibility constraint, incidence angle constraint, depth of field constraint, common view constraint, curvature constraint and null set constraint.
1) Field angle constraint. The field angle is defined as a measure of the imaging range in a camera system. It refers to the angle between the two rays from the camera lens along the boundary of the largest object that the lens can contain. The larger the field angle, the larger the camera's field of view. If the angle between the boundary of the target to be measured and the lens exceeds the field angle, the part beyond it will not be imaged by the camera, as shown in Figure 7(a). The region within the viewing pyramid is the visible area, and the mathematical formula constraining this pyramid region is the field angle constraint, as shown in Eq (11).
where the unit direction vector of the camera's optical axis and the field angle of the camera appear as the constraint parameters. 2) Visibility constraint. As shown in Figure 7(b), consider the normal vector at measurement target point P. If the angle between this normal vector and the direction vector toward the sensor viewpoint reaches 90°, the corresponding part cannot be collected. The mathematical model of the visibility constraint is shown in Eq (12).
3) Incidence angle constraint. As shown in Figure 7(c), in the actual visual measurement process, to reduce the pixel error of the image points after two-dimensional imaging, viewpoint positions that are nearly coplanar with the surface of the measured object are undesirable. The maximum acceptable angle between the camera viewpoint direction vector and the target point normal vector is defined as the incident angle. The constraint conditions of the incident angle are shown in Eq (13).

4) Depth of field constraint. A schematic diagram of the depth of field is shown in Figure 7(d), which marks the foreground depth and the back depth of field. Imaging is clear when the object to be measured lies between the front and back depths of field. Therefore, camera imaging should be constrained within this area, which is called the depth of field constraint; the constraint formula is shown in Eq (14).
where F is the camera aperture, δ is the diameter of the allowable circle of confusion, L is the shooting distance and f is the focal length of the camera. 5) Coview constraint. In the process of multiview visual measurement, any point P in space must be observed by at least two cameras simultaneously to attain high observation accuracy, as shown in Figure 7(e). If, limited by the structure of the target under test, there is self-occlusion or mutual occlusion and point P can be observed by only one camera, the spatial 3D coordinates of point P cannot be obtained. Therefore, each area must be constrained to meet the multicamera coview condition, which is called the coview constraint, as shown in Eq (15). 6) Curvature constraint. According to the analysis in Section 3.2.1, the measurement accuracy of the visual measurement network is closely related to the camera spatial pose (α, β, D). The higher the surface complexity of the object structure to be measured, the greater the curvature and the more information it contains, which places higher requirements on the camera pose. Therefore, constraints based on the actual surface curvature are needed, called curvature constraints. The mapping relationship is shown in Eq (16).
where the curvature is that of the object structure region viewed by the camera at the i-th station point. 7) Null set constraint. To ensure the independence of each view in the multiview measurement network and to prevent view overlap from increasing the computation, the overlapping part of each view must also be limited, which is called the null set constraint. Equation (17) is as follows:
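The angular constraints above reduce to simple geometric predicates per camera-point pair. The following sketch paraphrases Eqs (11)-(13) as angle tests (vector conventions and function names are ours, not the paper's exact formulations):

```python
import numpy as np

def within_field_angle(view_axis, cam_pos, point, fov):
    """Eq (11) sketch: the point must lie inside the viewing cone, i.e.
    the angle between the optical axis and the camera->point ray must not
    exceed half the field angle `fov` (radians)."""
    ray = point - cam_pos
    ray = ray / np.linalg.norm(ray)
    return np.arccos(np.clip(view_axis @ ray, -1.0, 1.0)) <= fov / 2

def visible(normal, cam_pos, point):
    """Eq (12) sketch: the surface normal must face the camera (angle
    between the normal and the point->camera direction below 90 deg)."""
    return normal @ (cam_pos - point) > 0

def incidence_ok(normal, cam_pos, point, theta_max):
    """Eq (13) sketch: the incidence angle between the viewing ray and
    the surface normal must stay below the acceptable maximum."""
    to_cam = cam_pos - point
    to_cam = to_cam / np.linalg.norm(to_cam)
    n = normal / np.linalg.norm(normal)
    return np.arccos(np.clip(n @ to_cam, -1.0, 1.0)) <= theta_max
```

In a networking fitness evaluation, a point would count as reasonably measurable only if all such predicates hold for at least two cameras, which feeds directly into the coview constraint of Eq (15).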

Fitness function
In the GEA, the fitness function represents the superiority or inferiority of each individual and determines its chance of being inherited. In the visual measurement networking design process, it represents the networking measurement accuracy. For a point in the point cloud of the target to be measured, measurement is reasonable under the above constraints when the point is visible to two or more cameras. The parameters involved in visual network design are multidimensional, and the measurement accuracy of the point cloud is effectively estimated using the covariance.
The line-of-sight intersection equations under reasonable measurement can be expressed in the nonhomogeneous form Ax = b, and the object point coordinates obtained from the intersection conditions are expressed through the normal equations AᵀAx = Aᵀb, which can be converted into x = (AᵀA)⁻¹Aᵀb, giving Eq (18): (18) where f represents the mapping relationship and x represents the parameters related to A and b. Then, Eq (19) is obtained as follows.
The covariance matrix of the image points is shown in Eq (20).
According to the Monte Carlo simulation method [26], if the covariance of an image point in the point cloud cannot be obtained, it is set to the unit matrix; that is, the Gaussian noise covariance of the point's coordinates is taken as 1 pixel. The covariance matrix of the object points obtained from the above analysis is shown in Eq (21).
where each block is the 2 × 3 first-derivative (Jacobian) matrix of the image point on the j-th camera with respect to the measurement marker point, and the covariance matrix of the image point is shown in Eq (22).
If a marker point on the object structure to be measured is visible, its covariance matrix can be calculated by Eq (21). Taking the three diagonal elements of the covariance matrix as the variances of the three spatial coordinates, the measurement accuracy of the object point is obtained as the maximum error of the three coordinates in space, as shown in Eq (23).
If the object point is not visible, the measurement accuracy is defined as 10ρ, where ρ is the worst spatial resolution of all the measuring cameras. The measurement marker points on the target structure to be measured are classified into visible and invisible points, numbering n1 and n2, respectively. Then, the improved individual fitness function is shown in Eq (24). According to genetic principles, individuals with poor fitness are gradually eliminated. By analogy, in the measurement networking design, the larger the number n2 of invisible measurement marks, the worse the fitness, and such a networking design is eventually eliminated.
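The covariance-based accuracy measure of Eqs (21)-(23) can be sketched as follows, assuming the identity ("1 pixel") image covariance default noted above and the 2 × 3 per-camera Jacobian blocks described; the function name is ours:

```python
import numpy as np

def point_accuracy(jacobians, img_cov=None):
    """Propagate image-point covariance to an object point (Eqs 21-23).

    jacobians: list of 2x3 first-derivative matrices, one per camera that
    sees the point.  img_cov defaults to the 2x2 identity, i.e. the
    "1 pixel" Gaussian noise assumption.  Returns the accuracy as the
    worst of the three coordinate standard errors (Eq 23).
    """
    info = np.zeros((3, 3))
    for J in jacobians:
        cov = np.eye(2) if img_cov is None else img_cov
        info += J.T @ np.linalg.inv(cov) @ J     # accumulate information
    obj_cov = np.linalg.inv(info)                # object-point covariance
    return float(np.sqrt(np.max(np.diag(obj_cov))))
```

Adding more observing cameras adds information terms to the sum, shrinking the object-point covariance, which is why the fitness improves with better coverage.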

Implementation of GEA in intelligent networking
An input operation is required before the GEA is implemented that includes the structure model of the target to be tested, the selection of constraints, the number and type of cameras, the value range of each parameter in the visual sphere model of the camera, the fitness function threshold and the measurement accuracy threshold.
The target structure to be measured is discretized into a uniform point cloud; the coordinates of the center point and the unit normal vector of each triangular grid cell are calculated, and the radius r of the visual sphere is determined. The parameters to be optimized are coded and decoded, and the fitness function is calculated.
The algorithm's operation flowchart is shown in Figure 8. First, the structure of the target to be measured is input, as well as the number and type of cameras. The camera pose parameters, namely the azimuth angle α and the elevation angle β, together with the rotation and translation vectors of the camera external parameter matrix, are encoded in binary to generate new individuals. To perform subsequent genetic operations, the individuals are divided into subpopulation A and subpopulation B. According to the constraint conditions, the fitness functions corresponding to subpopulations A and B are calculated to obtain the optimal individual of the two initially generated subpopulations. Except for the optimal individual, all the other individuals are replaced by the next generation. As in the fifth step of Section 3.1, whether large mutations, adaptive crossovers and mutations are needed is considered. The above steps are repeated, the iterative optimal individual is updated, and the offspring iteration count k is recorded. When the fitness function exceeds the threshold, the algorithm converges, the global optimal solution is obtained and the iteration ends. The optimal individual is decoded, and the networking scheme of the visual measurement network is obtained.

Simulation experiment analysis
To verify the correctness of the centering networking method, a simulation environment is set up, and the conditions required by the GEA are given. A 298 × 148 mm rectangular plane is set on the XOY plane, so the normal vector of the plane is (0, 0, 1). Five marked circles are placed: one at the center of the plane and one at each of the four corners within the plane; the diameter of the large center circle is 59 mm, and the diameter of the four small circles is 29 mm. For convenience of calculation, the resolution of the camera is defined as 2000 × 2000 pixels, the equivalent focal length is 3000 mm and the radius of the visual sphere is 1500 mm. Then, the spatial resolution of the camera is 0.5 mm/pixel, the range of the azimuth angle is α ∈ [0, 2π] and the range of the elevation angle is β ∈ [−π/2, π/2]. Different numbers of cameras are used for the networking method, and 100 individuals are iterated until the algorithm converges. The pose information of each networking camera is shown in Table 1 and in Figure 9: Figure 9(a) is the result of two-camera networking, Figure 9(b) of three-camera networking, Figure 9(c) of four-camera networking, Figure 9(d) of five-camera networking and Figure 9(e) of six-camera networking.
According to the analysis in Section 3.2.1, the optimal range of the elevation angle of the visual sphere network is [10°, 70°]. In the case of binocular vision, the azimuth angle optimal interval [25°, 45°] should also be considered. All the visual sphere parameters obtained by the above simulation experiments fall into the optimal intervals, and the fitness value is as small as 0.063; that is, the maximum error of the 3D coordinates in space is 0.063 mm, and this value decreases with increasing camera numbers, which proves the rationality of the centering networking method in this paper. Notably, the rate of fitness reduction decreases gradually, indicating that the fitness will stabilize as the number of cameras continues to increase, and the benefit of measurement networking will begin to decrease because of the measurement costs.

3D measurement experiment of multiview networking for a real workpiece
The experimental platform is composed of an upper computer, a mechanical arm system and a camera system. The upper computer runs a 64-bit Windows 10 system with 8.00 GB of memory on an Intel(R) Core(TM) i5-4200H CPU @ 2.80 GHz. The upper computer is used to control the motion of the manipulator, take pictures of the target to be measured and extract its three-dimensional position information. A six-axis manipulator is used to accurately and quickly move the camera system to each viewpoint position of the network. The layout of the experimental platform is shown in Figure 10. The workpiece to be tested is shown in Figure 11(a), and the 3D CAD model structure diagram of the model to be tested is obtained through structure from motion. The structure diagram is then gridded in MATLAB, as shown in Figure 11(b). In the experiment, the measurement distance is set to 300 mm; that is, the radius of the visual sphere is 300 mm. The range of the camera azimuth angle is α ∈ [0, 2π], and the range of the elevation angle is β ∈ [−π/2, π/2]. To ensure the best imaging, based on the optimal analysis of the azimuth angle and elevation angle in Section 3.2.1, the settings are as shown in Eqs (25) and (26). Different numbers of cameras were used for the networking method. After running the GEA many times until convergence, the fitness values of networks with 10, 15, 20, 22, 24, 26, 28 and 30 cameras are shown in Table 2. The centering networking method with 24 cameras, taken as an example, is shown in Figure 12(a), and the dense 3D reconstruction results obtained under this networking method are shown in Figure 12(b).

Precision analysis of measurement
The reconstruction measurement results are analyzed based on photographic geometry, and the feasibility and accuracy of the centering networking method are analyzed quantitatively. To obtain the mapping scale relationship between the actual size and the reconstruction size, the calibration plate is reconstructed and measured. Three squares, A, B and C, with rich reconstruction information are selected, and their three intersection points are recorded, respectively. The results are shown in Figure 13. The mapping scale factor s represents the ratio between the reconstruction size Lreconstruction and the actual size Ltrue, as shown in Eq (27); the scale factor remains unchanged under the same camera. The error between the reconstruction size and the actual size can be compared and analyzed using the scale factor calculated from the calibration plate; the error calculation is shown in Eq (28). Thus, a reconstructed coordinate system in pixel units is constructed. The size information is shown in Table 3.
In the formulas, Lreconstruction is the reconstruction size, Ltrue is the actual size, and e is the size error between the actual size of the target to be measured and its reconstruction size after photographic geometric mapping. The shape of the object measured by the centering network is regular, so its size can be used as the evaluation target. Averaging the mapping ratios in Table 3 gives s = 0.009573. The reconstructed dimensions and errors of the target under the centering networking method are then shown in Table 4. Table 4 shows that the error between the reconstructed size and the actual size under the centering networking method is controlled within half a millimeter. Compared with expensive noncontact measurement systems, the experimental layout is simple, which saves significant cost and meets the needs of industrial topography measurements. The 3D reconstruction image is clear and finely textured, meeting the needs of human-computer interaction in industrial measurements. The correctness and feasibility of the centering networking method designed in this paper are verified.
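The scale-factor mapping of Eqs (27)-(28) amounts to a few lines of arithmetic. The following sketch takes s as the mean true-to-reconstructed ratio over the calibration squares (an assumption on our part, chosen to match the small magnitude of s = 0.009573 reported above, i.e. millimeters per pixel); the function name is ours:

```python
def reconstruction_error(recon_sizes, true_sizes, recon_target, true_target):
    """Estimate the mapping scale factor from calibration measurements,
    then compare the scaled reconstruction of the target with its actual
    size.  recon_sizes are in pixel units, true_sizes in millimeters.
    Returns (mapped size in mm, absolute size error e)."""
    # mean ratio over the calibration squares (Eq 27, our convention)
    s = sum(t / r for r, t in zip(recon_sizes, true_sizes)) / len(recon_sizes)
    mapped = s * recon_target          # pixel size -> metric size
    return mapped, abs(mapped - true_target)   # size error (Eq 28)
```

Because the scale factor is constant for a given camera, one calibration-plate reconstruction suffices to convert every subsequent pixel-space dimension of the workpiece to millimeters.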

Conclusions
The intelligence of multivision networking enables vision-based 3D measurement technology to implement low-cost, real-time measurements. In this paper, a multivision intelligent networking method based on a genetic evolution algorithm is proposed, and centering multivision intelligent networking is established using it. The GEA is a heuristic algorithm, and each run may yield different results, so it must be run multiple times and the best individual selected according to the fitness function. In the centering networking method, the camera position and pose coordinates are designated as the parameters to be optimized in combination with the visual sphere model. Considering the constraints encountered in camera measurements, the fitness function is improved to reduce the amount of computation, and ultimately, an optimal centering networking scheme is obtained.
Follow-up work will further study intelligent visual networking measurement of irregular shapes based on the method proposed in this paper, called scattered networking. It is expected that scattered networking will implement real-time 3D measurements of complex shapes and large structures with flexible multivision networking.

Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.