A clipping algorithm for real-scene 3D models

ABSTRACT
 The development of unmanned aerial vehicle (UAV) oblique photogrammetric technology provides a good foundation for the rapid construction of large-scale and high-definition real-scene 3D models. However, due to the limitations of the modeling process, irrelevant feature data cannot be eliminated in the modeling stage. The built models contain irrelevant features and model distortions caused by errors. At present, most existing clipping algorithms cannot effectively clip real-scene 3D models that are organized as a whole or with levels of detail (LODs). Therefore, this paper proposes a novel algorithm for clipping real-scene 3D models from any perspective based on clipping boundary lines that fit the surfaces of the models. The results of the clipping experiments for 3D models constructed with oblique UAV images show that this algorithm can effectively clip any part of the 3D models, that the clipping results of each level model closely fit the corresponding clipping boundary lines, and that the accuracy of the clipping results is very high. Additionally, the time complexity of the algorithm is O(n²). In conclusion, the algorithm proposed in this paper provides correct and effective clipping results for real-scene 3D models with LODs that are constructed with photogrammetric or 3D laser scanning data.


Introduction
As a very important kind of spatial data, 3D data have been applied well in digital cities (White et al. 2021), digital pipelines (Gao, Song, and Zhao 2021), digital geologies (Arias et al. 2021), etc. In the past few decades, many scholars have proposed a variety of different data structures (Zhang and Zhu 2018a; Yuan et al. 2011), which have not only improved the efficiency of 3D data access and storage but also more truly reflected ground features. Without distinguishing true 3D and quasi-3D, 3D models can be uniformly classified into surface-based models, volume-based models, and mix-based models according to geometric relationships (Chen and Che 2016; de la Losa and Cervelle 1999).
The target area (e.g. outcropping strata) of oblique photogrammetry generally includes a large range of ground objects covered by many images. The number of points in a point cloud generated with these images is very large (Vetrivel et al. 2018). If the TIN model consisting of numerous points is stored as a single unit at one resolution, the rendering efficiency of the model will be significantly reduced. To solve this problem, the concept of levels of detail (LODs) is usually employed for 3D models (Biljecki et al. 2016). LODs have the following characteristic: as the depth below the root node increases, the model representation range decreases, whereas the point cloud density and model resolution increase. However, LOD-based 3D models generated with oblique photogrammetric data have the following problems: (1) Due to sensor errors, model reconstruction errors, and other factors, these models are partially distorted, and the local displays of these models are poor, which affects the overall quality of the models (Wu et al. 2018); (2) There are irrelevant features (e.g. buildings in a 3D model of outcropping strata) in these models, which causes data redundancy and can even affect the analysis of the target features of interest.
Therefore, an algorithm for clipping the irrelevant parts of an oblique photogrammetric 3D model is urgently required to improve the rendering efficiency and usability of the model.
The difficulty of clipping a 3D model depends on the geometric structure of the model. The clipping process of a volume-based model, which is usually used in fields such as 3D printing, is more complicated. This clipping method uses a plane to clip the surface of the model and clip the internal part of the model at the same time. After clipping, it is necessary to ensure a good connection between the internal and external geometries (Yan et al. 2013). A mix-based model is constructed with geometric shapes such as prisms (Wang et al. 2018b), which are often used in 3D geological models containing internal information (Cui et al. 2017). When clipping a mix-based model, it is necessary to clip each intersection surface of the prism (Zhou et al. 2020). When clipping a surface-based model, only the model surface is required to be clipped because this kind of model often has no internal information (Lindenbeck et al. 2002). Taking a triangular mesh model as an example, a plane is often used to clip the model (Minetto et al. 2017). First, the intersections between the plane and the model are calculated, and they are connected in turn as the clipping boundary. Then, the triangles that intersect with the boundary are divided into multiple new triangles. Finally, according to the relationship between the newly generated triangles and the plane, it is determined whether each newly generated triangle is retained. It is easy to obtain the intersections between the triangular mesh model and the plane. Generally, the intersections are on two sides of a triangle or at a vertex of a triangle (Fábián and Gergó 2014). However, clipping complex surface models with planes often requires multiple operations to obtain the expected results. At the same time, the boundary of the clipping results becomes straight and loses the original undulations. Using a curve to select the clipping area can solve this problem, as it can flexibly select the clipping area and ensure that the clipping boundary retains the undulations of the model. However, when a curve is taken as the clipping factor, the intersections of the curve and a triangle become complicated, with a variety of possible results. The existing algorithms inevitably miss some intersections (Ye, Ma, and Shi 2017). Therefore, we need a triangle clipping method that is suitable for all intersecting cases.
Although oblique photogrammetric 3D models have been widely used, it is difficult to find effective algorithms in the literature for clipping these LOD-based 3D models. SuperMap (SuperMap 2021) software can use a plane to clip an oblique photogrammetric 3D model, but it can only clip the model along the direction perpendicular to the ground. This method has good support for extracting buildings and other vertical objects from urban building models, but it is not suitable for clipping ground objects with inclined angles or clipping models with overlapping parts in the vertical direction, such as rock surfaces. In addition, it has the disadvantage that the clipping area cannot be selected well.
Therefore, this paper presents a novel algorithm for clipping real-scene 3D models with LODs based on boundary lines that fit the surfaces of the models. The algorithm can clip a model flexibly from different viewing angles, can use a curve that fits a model surface to finely clip the model, and can clip each level of an LOD-based 3D model. This paper presents experiments and compares the proposed algorithm with existing methods.

Methodology
The basic idea of the proposed clipping algorithm can be summarized as follows: for a TIN structure-based real-scene 3D model, first, we project the three-dimensional space to a two-dimensional space and judge the topological relationships in the two-dimensional space between each triangle of the model and the boundary line that approximately fits the model. According to the topological relationships, the triangles are classified as inside the boundary, intersecting with the boundary, and outside the boundary. Then, triangles inside the boundary are deleted, and triangles outside the boundary are retained. For triangles that intersect with the boundary, we use Delaunay triangulation (Shewchuk 2009) to regenerate the triangulation network and retain the corresponding triangles according to the topological relationships between the newly generated triangles and the boundary line. Then, the texture coordinates of the new triangle vertices are calculated. Finally, the method is applied to each level of the LOD-based 3D model.
A boundary line is composed of multiple points, which are determined one by one by moving the mouse on a 3D model. The method of obtaining these points is described as follows: first, the coordinates of the mouse are sampled every three frames (at 30 frames per second, approximately every 100 ms) and converted from the screen coordinate system to the world coordinate system. Second, we use the converted world coordinates and the current camera coordinates to form a ray. Last, by using collision detection, we quickly calculate (in less than 2 ms) the intersection point of the ray and the model, which becomes a point of the boundary line.

Construction of triangle topological relationships
Before acquiring the topological relationships between the triangles and the clipping boundary line, we must establish the adjacency between the edges and the triangles and the adjacency among the triangles of a 3D model. The necessary and sufficient condition for two triangles to be adjacent is that they have two identical vertices. The conventional algorithm for determining triangle adjacency requires a nested loop on the array of triangles, and the time complexity of the algorithm is O(n²). This paper presents an improved algorithm for determining triangle adjacency, which can effectively reduce the running time. The idea of the algorithm can be summarized as follows: if triangles T1 and T2 are adjacent, they must have two identical vertices, so T2 must be one of the triangles containing the vertices of T1. Then, we just need to identify T2 among the triangles that contain the vertices of T1. Figure 1 shows that the three triangles adjacent to triangle A are B, F, and I and that only 10 triangles contain the three vertices of triangle A: B, C, D, E, F, G, H, I, J, and K. It is easy to obtain the inclusion relationships between vertices and triangles, as the triangle list needs to be looped through only once. Therefore, on the premise that the inclusion relationships between triangles and vertices are obtained, the adjacent triangles of triangle T1 can be found among the triangles containing the vertices of T1. The specific steps are as follows: (1) The triangle list is traversed, and the index of each triangle is stored in the adjacent triangle list of each of its vertices to obtain the topological relationships between the vertices and the triangles. Then, every vertex stores the indexes of all the triangles that contain the vertex; (2) The triangle list is traversed again, and the triangles adjacent to each triangle are identified in the triangle's vertex adjacent triangle lists.
Assuming that there are n triangles in the triangle list, the time complexity of the algorithm is O(n).
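The two-pass construction above can be sketched as follows. This is a minimal illustration, not the paper's actual code; the names Triangle and buildAdjacency and the index-based mesh layout are assumptions. Counting how often a candidate triangle appears across the three vertex lists directly encodes the "two identical vertices" condition.

```cpp
#include <array>
#include <unordered_map>
#include <vector>

// A triangle stores the indices of its three vertices.
struct Triangle { std::array<int, 3> v; };

// Pass 1: record, for every vertex, the triangles containing it.
// Pass 2: a triangle adjacent to T shares exactly two vertices with T,
// so it appears exactly twice across the lists of T's three vertices.
std::vector<std::vector<int>> buildAdjacency(int vertexCount,
                                             const std::vector<Triangle>& tris) {
    std::vector<std::vector<int>> trisOfVertex(vertexCount);
    for (int t = 0; t < static_cast<int>(tris.size()); ++t)
        for (int vertex : tris[t].v)
            trisOfVertex[vertex].push_back(t);

    std::vector<std::vector<int>> adjacent(tris.size());
    for (int t = 0; t < static_cast<int>(tris.size()); ++t) {
        std::unordered_map<int, int> sharedCount;  // candidate -> shared vertex count
        for (int vertex : tris[t].v)
            for (int other : trisOfVertex[vertex])
                if (other != t) ++sharedCount[other];
        for (const auto& [other, count] : sharedCount)
            if (count == 2) adjacent[t].push_back(other);
    }
    return adjacent;
}
```

Because the number of triangles incident to one vertex is bounded in a manifold photogrammetric mesh, both passes run in time linear in the number of triangles, matching the O(n) complexity stated above.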

Triangle classification
The topological relationships between the triangles and the clipping boundary line include the triangles inside, intersecting with, and outside the boundary line. In a three-dimensional space, the topological relationships between the triangles and the boundary line cannot be accurately expressed. Therefore, from the perspective of drawing the boundary line, the three-dimensional space coordinates are projected to the screen coordinates, and the topological relationships are determined in the two-dimensional space.
We first construct the axis-aligned bounding box (AABB) of the boundary line. A 2D AABB is a simple rectangle with each edge parallel to an axis. The AABB can determine the approximate range of the boundary line and can quickly and roughly determine the topological relationships between the triangle vertices and the boundary line. Selecting a triangle as the unit, if a vertex of the triangle is outside the AABB, the vertex is outside the boundary line. Otherwise, we use the ray method (O'Rourke 1998) to determine the topological relationship between the vertex and the boundary line.
The principle of the ray method is shown in Figure 2. Assume that the drawing direction of the boundary line is counterclockwise and that the boundary line is composed of several small line segments. A ray is cast along the horizontal direction from point P0, and a record is made every time this ray crosses the boundary line. If the total number of records is odd, then point P0 is within the boundary line; otherwise, point P0 is outside the boundary line. However, the intersections of the ray from point P1 are quite special. The line segment formed by intersection points 6 and 7 coincides with the ray, and intersection point 8 is the intersection point of two line segments. To address the special case of point P1, the type of intersection between the ray and the segments of the boundary line must be considered. If a line segment and the ray coincide, the intersection is not recorded. If an end point of a line segment intersects the ray, a record is made only when it is the starting point of the line segment (the starting point is the first point of the line segment along the drawing direction). In this way, the ray from point P1 and the boundary line have three intersection points, 7, 8, and 9, showing that P1 is within the boundary line. The time complexity of the algorithm is O(n).
Figure 2 shows that few line segments actually intersect with the ray and that they are located in the same horizontal band. Therefore, to reduce unnecessary intersection determination, the ray method is improved. The main idea of the improved method is summarized in Figure 3. The boundary line is divided into N regions from top to bottom in the vertical direction by dividing lines S0, S1, S2, S3, …, Sn, and the value of N is generally chosen according to the total number of line segments of the boundary line. To ensure that all line segments are correctly divided, the Y value of S0 is slightly larger than the maximum Y value of the boundary, and the Y value of Sn is slightly smaller than the minimum Y value of the boundary. The division step size is the difference between S0 and Sn divided by N. It should be noted that a line segment that crosses a dividing line must either be stored separately in both the areas above and below the dividing line or be split at the dividing line into two line segments, each belonging to one area. After all line segments are divided, the Y value of point P0 in Figure 3 is greater than that of S2 and less than that of S1, so P0 only needs to count the intersections in the region (S2, S1) (Figure 3). In theory, this method can reduce the operation time to 1/N of that of traversing all boundary line segments.
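The special cases of Figure 2 can also be handled implicitly with the standard half-open crossing rule, which counts an edge only when exactly one of its endpoints lies strictly above the ray. The sketch below uses this common formulation as an assumption (the band partition of Figure 3 is omitted for brevity, and Pt and insideBoundary are illustrative names):

```cpp
#include <vector>

struct Pt { double x, y; };

// Even-odd ray test from point p along the +X direction.
// An edge is counted only when exactly one endpoint is strictly above the
// ray, so coincident horizontal segments contribute nothing and a vertex
// shared by two edges is counted exactly once, as required in Figure 2.
bool insideBoundary(const Pt& p, const std::vector<Pt>& boundary) {
    bool inside = false;
    for (std::size_t i = 0, j = boundary.size() - 1; i < boundary.size(); j = i++) {
        const Pt& a = boundary[i];
        const Pt& b = boundary[j];
        if ((a.y > p.y) != (b.y > p.y)) {
            // X coordinate where edge (a, b) crosses the horizontal ray.
            double xCross = a.x + (p.y - a.y) * (b.x - a.x) / (b.y - a.y);
            if (p.x < xCross) inside = !inside;
        }
    }
    return inside;
}
```

The banded improvement of Figure 3 would replace the full loop with a loop over only the segments stored in the horizontal band containing p.y.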
Based on the improved ray method, we need to obtain the topological relationships between the triangles and the boundary line. If all three vertices of a triangle are inside or outside the boundary, the triangle is temporarily determined to be inside or outside the boundary. If the topological relations of the three vertices differ, the triangle and the boundary line intersect.
For a triangle that intersects with the boundary line, it is necessary to determine the intersection state of each side of the triangle and the boundary line and to calculate the intersections. The principle of determining whether two line segments intersect is described as follows: supposing that two straight lines in 2D, P0 + s·d0 and P1 + t·d1, are given, the topological relationship between the two lines can only be intersecting, parallel, or coincident. To facilitate the calculation, we define an operation C(u, v) = ux·vy − vx·uy on two vectors in a two-dimensional space, which is similar to the cross product in a three-dimensional space (Schneider and Eberly 2002). From the definition, we know that C(u, u) = 0. Assuming that the two lines intersect, we obtain Eq. (1):

P0 + s·d0 = P1 + t·d1 (1)

Defining D = P1 − P0 and transforming Eq. (1) yields Eq. (2):

s·d0 − t·d1 = D (2)

Equation (3) is obtained by applying the C(u, v) operation with d1 on both sides of Eq. (2), noting that C(d1, d1) = 0:

s·C(d0, d1) = C(D, d1) (3)

Equation (4) is obtained by applying the C(u, v) operation with d0 on both sides of Eq. (2), noting that C(d0, d0) = 0 and C(d1, d0) = −C(d0, d1):

t·C(d0, d1) = C(D, d0) (4)

We assume that C(d0, d1) is not equal to 0. Thus, the values of s and t are calculated according to Equations (3) and (4):

s = C(D, d1) / C(d0, d1) (5)

t = C(D, d0) / C(d0, d1) (6)

According to Eq. (5) and Eq. (6), if C(d0, d1) ≠ 0, then s and t can be calculated, and the two straight lines intersect at a point; if C(d0, d1) = 0, then the two lines are coincident or parallel. However, distinguishing the coincident and parallel cases has no effect on the results of the algorithm, and we will not go into further details. Therefore, we can obtain the following conclusion: under the premise of C(d0, d1) ≠ 0, the necessary and sufficient conditions for the intersection of the two line segments are 0 ≤ s ≤ 1 and 0 ≤ t ≤ 1.

Figure 4 (a) shows that in the screen coordinates, the edge (Ta-1, Ta-2) of the triangle intersects with the boundary line segment (Pa-1, Pa-2) at point Ia. Figure 4 (b) shows the corresponding topological relationship between the triangle and the boundary line in the spatial coordinates, which is different from the clipping perspective. In Figure 4 (b), Ib is the intersection point between the triangle edge (Tb-1, Tb-2) and the boundary line segment (Pb-1, Pb-2) under the clipping view. The projection of edge (Tb-1, Tb-2) onto the screen coordinates is edge (Ta-1, Ta-2). If the screen coordinates of Ia are directly converted to spatial coordinates through a matrix transformation, the error in the transformation will cause the transformed intersection point to not lie exactly on the edge of the spatial triangle, which will cause the final clipping result to change the orientation of the triangle. Therefore, using the result s of Eq. (5), we calculate the coordinates of Ib by Eq. (7):

Ib = Tb-1 + s·(Tb-2 − Tb-1) (7)
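Equations (5) and (6) translate directly into code. The following sketch returns the parameter s so that the caller can lift the intersection back to 3D with Eq. (7); Vec2 and segmentParamS are assumed names:

```cpp
#include <cmath>
#include <optional>

struct Vec2 { double x, y; };

// C(u, v) = ux*vy - vx*uy, the 2D analogue of the cross product.
double C(const Vec2& u, const Vec2& v) { return u.x * v.y - v.x * u.y; }

// Segments P0 + s*d0 and P1 + t*d1 with s, t restricted to [0, 1].
// Returns s when the segments intersect at a single point (Eqs. (5)-(6)).
std::optional<double> segmentParamS(const Vec2& P0, const Vec2& d0,
                                    const Vec2& P1, const Vec2& d1) {
    double denom = C(d0, d1);
    if (std::fabs(denom) < 1e-12) return std::nullopt;  // parallel or coincident
    Vec2 D{P1.x - P0.x, P1.y - P0.y};
    double s = C(D, d1) / denom;  // Eq. (5)
    double t = C(D, d0) / denom;  // Eq. (6)
    if (s < 0.0 || s > 1.0 || t < 0.0 || t > 1.0) return std::nullopt;
    return s;
}
```

For example, the segment from (0, 0) to (1, 1) and the segment from (0, 1) to (1, 0) cross at their midpoints, giving s = 0.5.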
Now, the intersection states of all triangles intersecting with the clipping boundary line have been determined, but there are still special cases, as shown in Figure 5. The three vertices of the red triangles in Figure 5 are all inside or all outside the boundary line, yet the red triangles still intersect with the boundary line. Therefore, they also need to be assigned as triangles that intersect with the boundary line. We refer to the set of triangles that intersect with the boundary line as the boundary triangulation network. The set of white and red triangles in Figure 5 forms the boundary triangulation network. At present, there are only white triangles in the set, and all red triangles can be found through the topological relationships between triangles. The specific method is as follows: (1) The set of boundary triangles is traversed, and the triangles adjacent to the traversed triangles whose shared side(s) have intersections with the boundary line are obtained; (2) We judge whether these adjacent triangles already belong to the boundary triangles; if they do, they are not dealt with; otherwise, they are defined as new boundary triangles and stored in the boundary triangle set; (3) Then, we update the set of boundary triangles so that steps (1) and (2) can be performed on the newly added triangles to ensure that no triangle that intersects with the boundary line is missed.
The algorithm of triangle classification is described in detail in Algorithm 1.

Boundary segmentation
According to the number of intersection points between a triangle and the boundary line, triangles can be classified into three categories: one intersection, two intersections, and more than two intersections. Some of the intersections between a triangle and a boundary line are shown in Figure 6.
If there is only one intersection point between the triangle and the boundary line, a vertex of the triangle is on the boundary line (e.g. Figure 6 (a) and 6 (b)). At this time, it is necessary to determine whether one of the two other vertices is within the boundary line. If it is, the triangle is within the clipping boundary line and should be deleted; otherwise, the triangle should be kept. When a boundary triangle and the clipping boundary line intersect (e.g. Figure 6 (c) to 6 (f)), the boundary triangle needs to be reconstructed and clipped into smaller triangles. If the small divided triangles are within the boundary line (blue area in Figure 6), they are deleted; otherwise (yellow area in Figure 6), they are added to the main triangle set of the 3D model. We use the Delaunay triangulation method with constraints (Shi et al. 2016) to generate a new set of triangles by taking the vertices of the triangles, the intersections of the triangles and the boundary line, and the start and end points of the boundary line segments between the intersection points as the vertex set.
Figure 6. Six possible intersection cases between a triangle and a boundary line. In the case of one intersection, the triangle is inside (a) or outside (b) the boundary. In the case of two or more intersections, the intersections are on different sides (c or e) or on the same side (d or f). Note that more than these 6 cases may occur.

To address the many different intersection situations, this paper proposes a new triangle reconstruction method. Figure 7 shows a special intersection case in a two-dimensional space. P1, P2, P3, P4, PX-1, and PX are the endpoints of the boundary line, and I1, I2, I3, I4, I5, and I6 are the intersection points of the boundary and the triangle. We need to sort the intersection points according to the order of the line segments in the boundary line; for example, intersection point I3 is sorted according to the index of the line segment (P1, P2) in the boundary line. To ensure the correct ordering of the intersection points, we propose three rules:

(1) The index of the line segment (P1, P2) is 1, the index of the line segment (Pi, Pi+1) is i, and the line segments are sorted in ascending order of index.

(2) The index values of the line segments at the end of the boundary line, such as the line segment (Px-1, Px) and the line segment (Px, P1), are defined as less than 1, so (Px, P1) is ranked before (P1, P2). If the indexes of two line segments are both less than 1, they are sorted according to Rule (1), so (Px-1, Px) is sorted before (Px, P1).

(3) If two points Ix and Ix+1 are located on the same line segment (Px, Px+1), and vector PxPx+1 and vector IxIx+1 are in the same direction, that is, if PxPx+1 · IxIx+1 > 0, then Ix is arranged before Ix+1; otherwise, Ix is arranged after Ix+1.
Now we explain why these three rules are proposed by ordering the intersections in Figure 7. First, we apply Rule (1) alone. The order of the intersection points obtained by Rule (1) is I3, I4, I5, I6, I1, and I2. However, this sorting result is wrong; I6 and I1 should not be adjacent because PX belongs to the end of the boundary and should be adjacent to P1. To solve this problem, we proposed Rule (2). The order of the intersection points obtained by Rules (1) and (2) is I1, I2, I3, I4, I5, and I6. It should be noted that I3 and I4 are on the same line segment, and the order between them cannot be determined by the line segment indexes; it can only be accurately determined by combining the vector relationships. Therefore, we proposed Rule (3). Using these three rules, we obtain the correct intersection sorting results.
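The three rules amount to a lexicographic comparison on (remapped segment index, parameter along the segment). The sketch below assumes that Rule (2) has already remapped trailing segment indexes to values below 1 and that each intersection carries its parameter s along the segment's drawing direction; BoundaryHit and hitBefore are illustrative names:

```cpp
#include <algorithm>
#include <vector>

// One intersection between the boundary line and a triangle side.
struct BoundaryHit {
    int segIndex;  // Rule (1) index; Rule (2): trailing segments get values < 1
    double s;      // parameter along the segment, 0 at its starting point
};

// Rules (1) and (2): ascending (remapped) segment index.
// Rule (3): within one segment, ascending s, i.e. the order in which the
// segment's direction vector meets the two points.
bool hitBefore(const BoundaryHit& a, const BoundaryHit& b) {
    if (a.segIndex != b.segIndex) return a.segIndex < b.segIndex;
    return a.s < b.s;
}
```

Sorting is then a single call, e.g. `std::sort(hits.begin(), hits.end(), hitBefore);`.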
After obtaining the order of the intersection points, we need to connect the intersection points and the line segment points inside the triangle according to the direction of the boundary line. The starting points of the boundary line segments are defined as the order discriminant points, and the end points are not treated. It is worth noting that the line segment points are not guaranteed to lie in the plane of a triangle when they are in a three-dimensional space. To ensure that the normal direction of the triangle is not destroyed after clipping, it is necessary to intersect the ray formed by the viewpoint and the node of a line segment with the triangle's plane, and the intersection point is used as the point for clipping the triangle. A fast method of finding the intersection point between a ray and a triangular plane in space (Möller 1997) is shown in Figure 8, where C is the position of the camera's viewpoint, P is the node of the line segment, and P' is the intersection point of the ray and the triangle plane; point P' is called the projection point of point P.
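The projection point P' of Figure 8 is a ray–plane intersection. A minimal sketch follows, with Vec3 and projectOntoPlane as assumed names; rays parallel to the plane (and degenerate triangles) are reported as failures:

```cpp
#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(double k, Vec3 a) { return {k * a.x, k * a.y, k * a.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Intersect the ray from the camera position Cam through point P with the
// plane of triangle (A, B, T); the result is the projection point P'.
std::optional<Vec3> projectOntoPlane(Vec3 Cam, Vec3 P,
                                     Vec3 A, Vec3 B, Vec3 T) {
    Vec3 n = cross(B - A, T - A);  // plane normal
    Vec3 d = P - Cam;              // ray direction through P
    double denom = dot(n, d);
    if (std::fabs(denom) < 1e-12) return std::nullopt;  // ray parallel to plane
    double t = dot(n, A - Cam) / denom;
    return Cam + t * d;
}
```

For instance, with the camera at (0, 0, 10) and P at (1, 1, 5), projecting onto the plane z = 0 yields P' = (2, 2, 0).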
The triangulation constraint is a set containing multiple points, which are connected in turn to form a polyline. As shown in Figure 7, the discriminant point of the line segment where intersection point I1 is located is Px-1, but the projection point of Px-1 is not in the triangle, so it is not a constraint point; only I1 is added to the constraint point set. The discriminant point of the line segment where intersection point I2 is located is Px, and the projection point of Px is in the triangle, so the projection point of Px and point I2 are sequentially added to the constraint. The discriminant point of the line segment where intersection point I3 is located is P1, but the projection point of P1 is not in the triangle; only I3 is added to the constraint in turn. After judging all the intersection points according to the above method, the final order of the points in the constraint is I1, the projection point of Px, I2, I3, I4, I5, the projection point of P3, and I6 (Figure 9).
The Delaunay triangulation method generates a series of continuous triangular meshes according to given discrete points. With the normal triangulation method, the divided triangles may be crossed by the constraint boundary (Figure 10 (a)), which leaves the division of the triangles incomplete. Therefore, in Delaunay triangulation, a constraint boundary is added to stipulate that the triangles after division cannot be crossed by the constraint boundary. If a triangle is crossed, it needs to be redivided (Figure 10 (b)).
On the basis of the triangulation, we need to filter the triangulated triangles and decide whether to keep them. The conventional filtering algorithm finds a point (usually the barycenter) in each triangle to be judged and determines whether the point is within the clipping boundary line; if it is inside, the triangle is deleted; otherwise, it is kept. However, the calculation time of this algorithm is limited by the number of line segments of the clipping boundary line.
Therefore, this paper proposes a new method that can quickly determine whether a triangle is within the boundary. First, we obtain the barycenter coordinates of each triangulated triangle. Then, we connect each barycenter with any vertex V of the original triangle and calculate the number n of intersections between the connecting line segment and the constraint boundary. Note that if the line segment intersects with the constraint boundary at a vertex, the intersection is not recorded. Assuming that vertex V is inside the clipping boundary, if n is even (including 0), the corresponding triangulated triangle is inside the boundary; otherwise, it is outside. Assuming that vertex V is outside the boundary, if n is even (including 0), the corresponding triangulated triangle is outside the boundary; otherwise, it is inside. If vertex V is on the boundary line, we use another vertex.
The number of cycles of the proposed algorithm is obviously reduced and depends only on the number of line segments of the constraint boundary. As shown in Figure 11, the barycenter of triangle (I1, PX, I2) is point C1, vertex V0 is within the boundary, and the number of intersections between the line segment (C1, V0) and the constraint boundary is 2; therefore, the triangle (I1, PX, I2) is within the clipping boundary and needs to be deleted.
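The parity test above can be sketched as follows. The strict orientation signs in properCross mean that crossings exactly at a vertex of the constraint are not recorded, as required; Pt, properCross, and subTriangleInside are illustrative names:

```cpp
#include <vector>

struct Pt { double x, y; };

static double orient(Pt o, Pt a, Pt b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// Strict segment crossing: true only when (p, q) and (a, b) cross at an
// interior point, so intersections at a vertex are not counted.
static bool properCross(Pt p, Pt q, Pt a, Pt b) {
    double d1 = orient(p, q, a), d2 = orient(p, q, b);
    double d3 = orient(a, b, p), d4 = orient(a, b, q);
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
}

// Classify one triangulated sub-triangle by the parity of crossings between
// the segment (barycenter, V) and the constraint polyline, where V is a
// vertex of the original triangle with known status vInside.
bool subTriangleInside(Pt barycenter, Pt V, bool vInside,
                       const std::vector<Pt>& constraint) {
    int n = 0;
    for (std::size_t i = 0; i + 1 < constraint.size(); ++i)
        if (properCross(barycenter, V, constraint[i], constraint[i + 1])) ++n;
    bool sameSideAsV = (n % 2 == 0);  // even (or zero) crossings
    return sameSideAsV ? vInside : !vInside;
}
```

The loop runs only over the constraint segments inside the original triangle, which is why the cost no longer depends on the full clipping boundary line.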
The algorithm of boundary segmentation is described in detail in Algorithm 2.

Texture coordinate calculation
Texture coordinates specify each unique texture pixel in the texture space. By assigning texture coordinates to vertex coordinates, the 3D renderer knows which part of the texture is mapped into the space, thus attaching the texture to the 3D model. Now, we have the intersections between the clipping boundary line and the triangles and the projection points of the starting points of the boundary line segments within the triangles, but these points do not have texture coordinates. When obtaining these points, we ensured that they lie in the planes of the corresponding triangles. Therefore, the texture coordinates of the three vertices of every triangle can be used to calculate their corresponding texture coordinates without causing texture distortion. The specific method calculates the weights of an unknown point relative to the three vertices of the corresponding triangle, multiplies the weights by the corresponding texture coordinates, and sums them to obtain the texture coordinates of the unknown point. As shown in Figure 12, suppose that the three weighting factors corresponding to point P are a, b, and c and that a + b + c = 1. The texture coordinates of the three vertices are UV0, UV1, and UV2, and the texture coordinates of point P are obtained by Eq. (8):

UVP = a·UV0 + b·UV1 + c·UV2 (8)
By using point P as a fourth vertex, the triangle (V0, V1, V2) can be divided into three smaller triangles whose areas are proportional to the weighting factors. Therefore, a can be derived from A0/A, b from A1/A, and c from A2/A, where A is the area of triangle (V0, V1, V2) and A0, A1, and A2 are the areas of the sub-triangles opposite V0, V1, and V2, respectively. The magnitude of the cross product of any two side vectors of a triangle is equal to twice the area of the triangle. Therefore, under the assumption that V0V1 × V0V2 ≠ 0, the values of a, b, and c are obtained by Eq. (9):

a = |PV1 × PV2| / |V0V1 × V0V2|, b = |PV2 × PV0| / |V0V1 × V0V2|, c = 1 − a − b (9)

Substituting Eq. (9) into Eq. (8) yields the texture coordinates of point P.
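Equations (8) and (9) can be combined into one small routine; the factor 2 in "twice the area" cancels in the ratios, so the cross-product magnitudes can be used directly. Vec3, UV, and interpolateUV are assumed names:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
struct UV { double u, v; };

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
double norm(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

// Eq. (9): weights as ratios of sub-triangle areas (each cross-product
// magnitude is twice an area, so the 2s cancel).
// Eq. (8): texture coordinates as the weighted sum of the vertex UVs.
UV interpolateUV(Vec3 P, Vec3 V0, Vec3 V1, Vec3 V2,
                 UV uv0, UV uv1, UV uv2) {
    double A = norm(cross(V1 - V0, V2 - V0));    // twice the full area
    double a = norm(cross(V1 - P, V2 - P)) / A;  // sub-triangle opposite V0
    double b = norm(cross(V2 - P, V0 - P)) / A;  // sub-triangle opposite V1
    double c = 1.0 - a - b;                      // a + b + c = 1
    return {a * uv0.u + b * uv1.u + c * uv2.u,
            a * uv0.v + b * uv1.v + c * uv2.v};
}
```

As a sanity check, the midpoint of edge (V0, V1) receives the average of UV0 and UV1.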

LOD-based model clip
The LOD method is mainly employed to solve the problem of slow-loading models, which is caused by the models being too complex. This method stores a 3D model as multiple files with different scopes and different resolutions according to certain organizational rules. The software needs to load models with different precisions and resolutions according to the scope of the field of view.
There are many ways to divide a 3D model into an LOD-based model. The results of the division generally have a tree-like data structure, which allows the nodes to be searched according to the model range, and each node is an independent file. Suppose a model is divided according to a quad-tree, as shown in Figure 13. Each node is divided into four identical subnodes and contains the indexes of the four subnodes. It is worth noting that only nonempty nodes continue to be divided downwards. An LOD-based model has the characteristic that the deeper a node is, the smaller its spatial scope, the denser its triangulation network, and the higher its definition.
An LOD-based model takes a file as the smallest organizational unit; a file corresponds to a partial model and contains the file paths of its child nodes. The proposed clipping algorithm is applied starting from the root model. After clipping the root model, all models under the first submodel are clipped in turn by using the method of preorder traversal. Traversal of a branch ends when the model to be clipped does not intersect with the clipping boundary line or no longer has submodels. Using the same traversal method, all other subtrees under the root model are clipped to achieve the purpose of clipping all models.
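The traversal above can be sketched as a short recursion; LODNode and clipLOD are illustrative names, and the intersection test against the boundary is assumed to be precomputed per node:

```cpp
#include <functional>
#include <memory>
#include <vector>

// One file of the LOD tree: a partial model plus links to its child nodes.
struct LODNode {
    bool intersectsBoundary = false;  // node extent vs. clipping boundary
    std::vector<std::unique_ptr<LODNode>> children;  // empty at the deepest level
};

// Pre-order traversal: clip a node, then recurse into its subtrees.
// A branch ends when the node no longer intersects the boundary or has no
// submodels, matching the two end conditions described above.
void clipLOD(LODNode* node, const std::function<void(LODNode*)>& clipOne) {
    if (node == nullptr || !node->intersectsBoundary) return;
    clipOne(node);  // apply the single-model clipping algorithm to this file
    for (const auto& child : node->children)
        clipLOD(child.get(), clipOne);
}
```

Skipping whole subtrees whose extent misses the boundary is what keeps the LOD clipping time proportional to the number of intersecting models rather than to the total number of files.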

Results
Based on the Windows 10 operating system, using C++ and the Open Scene Graph (OSG) 3D engine, the clipping algorithm was implemented and run on a computer with an Intel Core i5-3470 (3.2 GHz) processor and 16 GB of memory. Context Capture software (Bentley 2021) was used to construct a 3D model from oblique photographic images acquired by a UAV, and the 3D model with LODs was exported in the Open Scene Graph Binary (OSGB) format and used as the experimental data.

Clipping results
Figure 14 shows part of the experimental 3D model. Due to the limited shooting angle of the UAV, there is an obvious hole in the middle of the model. Therefore, we clip the hole using the proposed algorithm.
First, we draw a red line that fits the model as the clipping boundary. Figure 14 (a1) shows the model from the perspective of drawing the boundary line. Since our algorithm clips triangles, we also show the triangulation model (Figure 14 (a2), the same below) of Figure 14 (a1). Because the rendering of the 3D model varies with the viewing angle, successful clipping from a single viewing angle does not mean that the model is correctly clipped. Therefore, we also show the model observed from the side (Figure 14 (b1) and 14 (b2)).
Second, we clip the model from the perspective of drawing the clipping boundary. Figure 15 (a1) and 15 (a2) show the clipping results from the perspective of drawing the boundary line, and Figure 15 (b1) and 15 (b2) show the clipping results from the side view.
Third, to check that the algorithm clipped each level of the model along the boundary, we zoomed in on the model to change the displayed level. The results are shown in Figure 16: Figure 16 (a1) and 16 (a2) show the middle-level clipping results, and Figure 16 (b1) and 16 (b2) show the deepest-level clipping results.
Below, we compare our results with those of the SuperMap iDesktop 10i software. First, we draw a clipping boundary similar to that in Figure 14 (a1) in SuperMap iDesktop 10i. Second, we use this boundary to clip the hole in the model; the results are shown in Figure 17 (a). The results are not good: many parts of the clipped model are not within the boundary line.

Running time
In the experiments, the influence of the number of triangles in the triangulation network and the number of nodes of the boundary line segments on the clipping time is analyzed for single models (Table 1, Figure 19). The results show that, for the same model, the clipping time is proportional to both the number of boundary line nodes and the number of model triangles.
The clipping time of LOD-based models is further analyzed (Table 2). The results are as follows: (1) the more models there are, the longer the clipping time; (2) the number of boundary line nodes does not determine the number of intersecting models, which is instead determined by the locations of the models and the boundary line; and (3) the more models intersect the boundary line, the longer the clipping time.

Correctness
It can be concluded from the clipping results (Figures 14-18) that the algorithm proposed in this paper clips LOD-based 3D models effectively and that the model boundaries closely fit the clipping boundary lines after clipping. After scaling up the 3D models, even slight curvature changes in the clipping boundary lines are reflected in the clipping results, which indicates that the clipping accuracy of the algorithm is very high. Moreover, models at different levels are clipped correctly, which shows that the algorithm adapts well to LOD-based 3D models. In the clipping results of the triangulation models (Figures 15, 16 (a2), 16 (b2), 18 (a2), and 18 (b2)), the triangles adjacent to the clipping boundary lines are arranged in sequence along the boundary lines and fit them very well. We can conclude that the proposed triangle reconstruction method ensures that models are clipped strictly along the clipping boundary lines.
In Figure 15 (a1) and 17 (a), clipping regions with the same scope and location are drawn for the same model, but the clipping results differ considerably. The results of the method proposed in this paper are correct, with no erroneous clipping; the results of the SuperMap iDesktop 10i software are wrong in some areas, where parts of the model that are not inside the clipping boundary line are clipped away. This shows that, from this perspective (Figure 15 (a1) and 17 (a)), the method proposed in this paper is much more effective.
Comparing Figure 17 (a) and 17 (b), at a view angle perpendicular to the horizontal plane, the model boundary and the clipping boundary line roughly coincide after clipping, but there are still small areas where they do not overlap. This shows that SuperMap iDesktop 10i clips the model from a fixed view angle perpendicular to the horizontal plane, which produces clipping errors. At the same time, the software's clipping results for small, complex areas are not ideal, and the accuracy is not sufficiently high. Clipping from the vertical perspective directly discards the Z values of the model coordinates; this projection is fast and is suitable for the singulation of urban buildings, but it is not suitable for models dominated by vertical surfaces, such as rock formations. Comparing Figure 18 (a1) and 18 (b1), in which different clipping areas and different perspectives are chosen for the same model, the method proposed in this paper clips the model correctly and produces good results. However, the proposed method must project the three-dimensional coordinates to screen coordinates, which adds projection time compared with the vertical-perspective clipping method.
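The extra projection step can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's code (which works with OSG's matrices): a row-major 4x4 model-view-projection matrix transforms a homogeneous vertex, and the viewport transform maps normalized device coordinates to a screen position with the origin at the bottom-left.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat4 = std::array<double, 16>; // row-major 4x4 matrix
struct Vec3 { double x, y, z; };

// Project a model vertex to screen coordinates: clip = MVP * (x, y, z, 1),
// perspective divide to NDC in [-1, 1], then the viewport transform.
Vec3 projectToScreen(const Mat4& mvp, const Vec3& p,
                     double viewW, double viewH) {
    double cx = mvp[0]*p.x  + mvp[1]*p.y  + mvp[2]*p.z  + mvp[3];
    double cy = mvp[4]*p.x  + mvp[5]*p.y  + mvp[6]*p.z  + mvp[7];
    double cz = mvp[8]*p.x  + mvp[9]*p.y  + mvp[10]*p.z + mvp[11];
    double cw = mvp[12]*p.x + mvp[13]*p.y + mvp[14]*p.z + mvp[15];
    double nx = cx / cw, ny = cy / cw, nz = cz / cw; // perspective divide
    // Viewport transform: NDC [-1, 1] -> pixels [0, viewW] x [0, viewH].
    return { (nx + 1.0) * 0.5 * viewW, (ny + 1.0) * 0.5 * viewH, nz };
}
```

Vertical-perspective clipping skips this transform entirely (it only drops Z), which is why it is faster but fails on vertical surfaces.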

Time complexity
To evaluate the time complexity, a clipping boundary line with m nodes and a 3D model with n triangles are used as inputs to the proposed algorithm.

In the stage of constructing the triangle topological relationships, the computing time depends on the number of model triangles: establishing the topological relationships between the vertices and triangles is O(n). Based on these vertex-triangle relationships, establishing the triangulation network topological relationships requires traversing the triangle list only once, looping over the k triangles adjacent to each vertex. Because k is unrelated to n and significantly smaller than n, this step is kO(n). Therefore, the time complexity of constructing the triangle topological relationships is O(n) + kO(n) = O(n).

In the triangle classification stage, obtaining the bounding box of the clipping boundary line is O(m). When the ray method is applied to determine whether the triangle vertices are within the boundary line, we assume the boundary line is vertically divided into s regions, so judging whether a vertex is within the boundary line is O(m/s), which is approximately O(m). Classifying all triangles requires traversing the entire triangle list, applying the ray method to the three vertices of every triangle, and building the boundary triangle list according to the vertex conditions, which is O(3mn). After constructing the boundary triangle list, filtering out incorrectly judged triangles according to the topological relationships is O(m). Therefore, the final time complexity of triangle classification is O(m) + O(m) + O(3mn) + O(m) = O(mn).

The time complexity of boundary segmentation depends on the number of boundary triangles v and the number of constraint points w in a boundary triangle and is independent of m and n; since v and w are much smaller than m and n, this stage is not counted. The texture coordinates are computed in passing during boundary segmentation, so that cost is also not counted. Therefore, the total time complexity of the algorithm is O(n) + O(mn) = O(mn), which is O(n^2) when m is of the same order as n.

Considering the clipping running times (Tables 1 and 2) and these factors, the number of boundary line nodes is an important factor affecting the running time. At the cost of some clipping precision, the nodes of the clipping boundary line can be thinned appropriately to reduce the running time of the algorithm.
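The O(3mn) classification term comes from running the ray method on every vertex of every triangle. The ray method itself can be sketched as below; this is our illustration of the standard crossing-count test, not the paper's code, and the names are ours.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Ray method: cast a horizontal ray from p and count crossings with the
// edges of the closed boundary polyline (in projected screen coordinates).
// An odd number of crossings means p lies inside the boundary.
bool insideBoundary(const std::vector<Pt>& poly, const Pt& p) {
    bool inside = false;
    for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        bool crosses = (poly[i].y > p.y) != (poly[j].y > p.y);
        if (crosses) {
            // x of the edge at height p.y, by linear interpolation
            double xAt = poly[j].x + (p.y - poly[j].y) *
                         (poly[i].x - poly[j].x) / (poly[i].y - poly[j].y);
            if (p.x < xAt) inside = !inside;
        }
    }
    return inside;
}

// A triangle is then classified by how many of its three vertices fall
// inside the boundary: 0 (discard or keep whole), 3 (fully inside), and
// 1 or 2 (a boundary triangle that must be re-triangulated).
int verticesInside(const std::vector<Pt>& poly, const Pt tri[3]) {
    int n = 0;
    for (int k = 0; k < 3; ++k) n += insideBoundary(poly, tri[k]) ? 1 : 0;
    return n;
}
```

Each `insideBoundary` call is O(m) in the number of boundary nodes, and it runs three times per triangle, giving the O(3mn) term above.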

Limitations and improvement directions
In the experiments, the constrained Delaunay triangulation stage was found to consume considerable computing time. In this paper, we use the conventional constrained Delaunay triangulation method; triangulating a triangle with two intersection points on the clipping boundary line, i.e. five vertices and two constraint points in total, takes approximately 4.5 ms. However, there are often more than 100 boundary triangles during clipping, which leads to considerable time spent on triangulation. Therefore, reducing the running time of the constrained Delaunay triangulation is one direction for improvement.
The model resolution and triangle density differ among the levels of an LOD-based 3D model: from the root model down, the greater the depth, the higher the resolution and the denser the triangles. However, this paper uses the same clipping boundary line for every level, so after the boundary triangulation network is reconstructed, the triangle density in the boundary area is similar at every level. In this way, the clipping boundaries of all levels overlap closely, which avoids local parts of the model suddenly appearing or disappearing during level switching, but it increases the amount of calculation. Therefore, a further improvement is to use thinned boundary lines for shallower-level models and detailed boundary lines for deeper-level models, while ensuring that the boundaries of the clipping results do not differ noticeably between levels.
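One way to realize this per-level thinning is Douglas-Peucker line simplification: nodes closer than a tolerance to the chord of their segment are dropped, with a larger tolerance for shallower levels. The sketch below is our illustration of that idea under these assumptions, not the paper's implementation.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Node { double x, y; };

// Perpendicular distance from p to the line through a and b.
static double perpDist(const Node& p, const Node& a, const Node& b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double len = std::sqrt(dx*dx + dy*dy);
    if (len == 0.0) return std::hypot(p.x - a.x, p.y - a.y);
    return std::fabs(dy*(p.x - a.x) - dx*(p.y - a.y)) / len;
}

// Recursive Douglas-Peucker step over in[lo..hi]: keep the node farthest
// from the chord if it exceeds the tolerance, otherwise drop the interior.
static void dp(const std::vector<Node>& in, std::size_t lo, std::size_t hi,
               double tol, std::vector<Node>& out) {
    double maxD = 0.0; std::size_t idx = lo;
    for (std::size_t i = lo + 1; i < hi; ++i) {
        double d = perpDist(in[i], in[lo], in[hi]);
        if (d > maxD) { maxD = d; idx = i; }
    }
    if (maxD > tol) {
        dp(in, lo, idx, tol, out);
        out.push_back(in[idx]);
        dp(in, idx, hi, tol, out);
    }
}

// Thin a boundary line: a larger tol yields fewer nodes, suitable for
// shallower (coarser) LOD levels; tol = 0 keeps the line unchanged.
std::vector<Node> thinBoundary(const std::vector<Node>& line, double tol) {
    std::vector<Node> out;
    out.push_back(line.front());
    dp(line, 0, line.size() - 1, tol, out);
    out.push_back(line.back());
    return out;
}
```

Because the endpoints are always kept, thinned and detailed versions of the same boundary start and end at identical nodes, which helps keep the clipped boundaries of adjacent levels from diverging visibly.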

Conclusions
The main contribution of this paper is a clipping algorithm for real-scene LOD-based 3D models that works from different perspectives. In the current software and literature, there are no algorithms that can effectively clip real-scene 3D models organized as a whole or with LODs.

Figure 2. The principle of the ray method.

Figure 4. Intersections in different coordinate systems. (a) is the intersection in screen coordinates, and (b) is the intersection in the 3D coordinate system.

Figure 7. A case in which a triangle and the boundary line intersect.

Figure 8. Projection from a point to a triangle.

Figure 9. Constraint line (in red) within a triangle.

Figure 10. Different Delaunay triangulation results, where (a) is the normal triangulation and (b) is the constrained triangulation.

Figure 14. An original model before clipping. (a1) is the model from the perspective of drawing the boundary line; (a2) is the triangulation model of (a1); (b1) is the model observed from the side; (b2) is the triangulation model of (b1).

Figure 15. Clipping results. (a1) shows the clipping results of the model from the perspective of drawing the boundary line; (a2) is the triangulation model of (a1); (b1) shows the clipping results of the model from the side view; (b2) is the triangulation model of (b1).

Figure 16. Different-level clipping results. (a1) shows the middle-level clipping results of the model; (a2) is the triangulation model of (a1); (b1) shows the deepest-level clipping results of the model; (b2) is the triangulation model of (b1).

Figure 17. Clipping results of the SuperMap iDesktop 10i software. (a) shows the clipping results observed from the perspective of drawing the boundary line; (b) shows the clipping results observed from the perspective perpendicular to the horizontal plane.

Figure 18. Clipping results of the 3D model under different clipping perspectives. (a1) shows the results of clipping the lower part of the model, viewed from the clipping perspective; (a2) is the triangulation model of (a1); (b1) shows the results of clipping the upper part of the model, viewed from the clipping perspective; (b2) is the triangulation model of (b1).

Figure 19. Clipping times for different boundary line segments and models.

Table 1. Clipping times of single models.

Table 2. Clipping times of LOD-based models.