An Easy-to-Use Airborne LiDAR Data Filtering Method Based on Cloth Simulation

Separating point clouds into ground and non-ground measurements is an essential step in generating digital terrain models (DTMs) from airborne LiDAR (light detection and ranging) data. However, most filtering algorithms require a number of complicated parameters to be carefully set to achieve high accuracy. In this paper, we present a new filtering method that needs only a few easy-to-set integer and Boolean parameters. In the proposed approach, a LiDAR point cloud is inverted, and a rigid cloth is then used to cover the inverted surface. By analyzing the interactions between the cloth nodes and the corresponding LiDAR points, the locations of the cloth nodes can be determined to generate an approximation of the ground surface. Finally, the ground points can be extracted from the LiDAR point cloud by comparing the original LiDAR points with the generated surface. Benchmark datasets provided by the ISPRS (International Society for Photogrammetry and Remote Sensing) Working Group III/3 are used to validate the proposed filtering method, and the experimental results yield an average total error of 4.58%, which is comparable with most state-of-the-art filtering algorithms. The proposed easy-to-use filtering method may help users without much experience apply LiDAR data and related technology more easily in their own applications.


Introduction
High-resolution digital terrain models (DTMs) are critical for flood simulation, landslide monitoring, road design, land-cover classification, and forest management [1]. Light detection and ranging (LiDAR) technology, which is an efficient way to collect three-dimensional point clouds over a large area, has been widely used to produce DTMs. To generate DTMs, ground and non-ground measurements have to be separated from the LiDAR point clouds, which is a filtering process. Consequently, various types of filtering algorithms have been proposed to automatically extract ground points from LiDAR point clouds. However, developing an automatic and easy-to-use filtering algorithm that is universally applicable for various landscapes is still a challenge.
Many ground filtering algorithms have been proposed during previous decades, and these filtering methods can be mainly categorized as slope-based methods, mathematical morphology-based methods, and surface-based methods. The common assumption of slope-based algorithms is that the change in the slope of the terrain is usually gradual in a neighborhood, while the change in slope between buildings or trees and the ground is very large. Based on this assumption, points are classified by comparing slope or height-difference measures between neighboring points against a threshold. Among surface-based methods, one approach defines an energy function combining internal smoothness constraints with attraction forces from the LiDAR point clouds. Minimizing this energy function determines the ground surface, which behaves like a membrane that sticks to the lowest points. This algorithm is a new way to model the ground surface from LiDAR data, but the optimization only achieves a global optimum, which does not guarantee capturing all local optima; some local details may therefore be ignored. Meanwhile, this algorithm performs relatively poorly in complex areas, as reported in [25]. Surface-based methods may also fail to effectively model terrain with steep slopes and large variability because they assume that the terrain is a smooth surface. Furthermore, another challenge for these methods is how to increase efficiency at a fixed accuracy [18].
The use of the aforementioned filtering algorithms has proven successful, but their performance changes with the topographic features of the area, and the filtering results are often unreliable in complex cityscapes and very steep areas. In addition, these filtering methods require a number of suitable parameters to achieve satisfactory results, which are difficult to determine because the optimal filter parameters vary from landscape to landscape; consequently, these methods are not easy to use for users without much experience. To cope with these problems, this paper proposes a novel filtering algorithm that approximates the ground surface with only a few parameters. Different from other algorithms, the proposed method filters the ground points by simulating a physical process in which a virtual cloth drops onto an inverted (upside-down) point cloud. Compared to existing filtering algorithms, the proposed method has several advantages: (1) few parameters are used, and they are easy to understand and set; (2) it can be applied to various landscapes without determining elaborate filtering parameters; and (3) it works on raw LiDAR data.
The remainder of this paper is organized as follows. A new ground filtering algorithm is proposed in Section 2. Section 3 presents the experimental results, and the proposed algorithm is discussed in Section 4. Finally, Section 5 concludes this paper.

Method
Our method is based on the simulation of a simple physical process. Imagine a piece of cloth is placed above a terrain, and then this cloth drops because of gravity. Assuming that the cloth is soft enough to stick to the surface, the final shape of the cloth is the DSM (digital surface model). However, if the terrain is firstly turned upside down and the cloth is defined with rigidness, then the final shape of the cloth is the DTM. To simulate this physical process, we employ a technique that is called cloth simulation [26]. Based on this technique, we developed our cloth simulation filtering (CSF) algorithm to extract ground points from LiDAR points. The overview of the proposed algorithm is illustrated in Figure 1. First, the original point cloud is turned upside down, and then a cloth drops to the inverted surface from above. By analyzing the interactions between the nodes of the cloth and the corresponding LiDAR points, the final shape of the cloth can be determined and used as a base to classify the original points into ground and non-ground parts.

Fundamentals of Cloth Simulation
Cloth simulation is a term from 3D computer graphics; it is also called cloth modeling and is used to simulate cloth within a computer program. During cloth simulation, the cloth can be modeled as a grid of particles with mass and interconnections, called a Mass-Spring Model [27]. Figure 2 shows the structure of the grid model. A particle at a node of the grid has no size but is assigned a constant mass. The positions of the particles in three-dimensional space determine the position and shape of the cloth. In this model, the interconnection between particles is modeled as a "virtual spring", which connects two particles and obeys Hooke's law. To fully describe the characteristics of the cloth, three types of springs are defined: shear springs, traction springs and flexion springs; a detailed description of their functions can be found in [27]. To simulate the shape of the cloth at a specific time, the positions of all of the particles in 3D space are computed. The position and velocity of a particle are determined by the forces acting upon it. According to Newton's second law, the relationship between position and forces is given by Equation (1):

m · ∂²X(t)/∂t² = F_ext(X, t) + F_int(X, t)    (1)

where X is the position of a particle at time t; F_ext(X, t) stands for the external forces, which consist of gravity and the collision forces produced by obstacles when a particle meets an object in the direction of its movement; and F_int(X, t) stands for the internal forces (produced by the interconnections) on a particle at position X and time t. Because both the internal and external forces vary with time t, Equation (1) is usually solved by numerical integration (e.g., the Euler method) in conventional implementations of cloth simulation.

Modification of the Cloth Simulation
When applying cloth simulation to LiDAR point filtering, a number of modifications were made to adapt the algorithm to point cloud filtering. First, the movement of a particle is constrained to the vertical direction, so collision detection can be implemented by comparing the height values of the particle and the terrain (e.g., when the position of a particle is below or equal to the terrain, the particle intersects the terrain). Second, when a particle reaches the "right position", i.e., the ground, it is set as unmovable. Third, the computation of the forces is divided into two discrete steps to achieve simplicity and relatively high performance. Usually, the position of a particle is determined by the net external and internal forces. In this modified cloth simulation, we first compute the displacement of a particle from gravity (the particle is set as unmovable when it reaches the ground, so the collision force can be omitted) and then adjust the position of this particle according to the internal forces. This process is illustrated in Figure 3.

Implementation of CSF
As described above, the forces acting on a particle are computed in two discrete steps; this modification was inspired by [28]. First, we calculate the displacement of each particle from gravity alone, i.e., we solve Equation (1) with the internal forces set to zero. The explicit integration form of this equation is

X(t + ∆t) = 2X(t) − X(t − ∆t) + (G/m)∆t²    (2)

where m is the mass of the particle (usually, m is set to 1) and ∆t is the time step. This equation is very simple to solve: given the time step and an initial position, the current position can be calculated directly because the gravity G is a constant.
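This gravity-only update can be sketched in a few lines (a minimal illustration, not the released implementation; the values of G and dt are assumptions, the particle mass is taken as 1, and `movable` marks particles that have not yet reached the ground):

```python
import numpy as np

def gravity_step(h_cur, h_prev, movable, g=-0.2, dt=0.65):
    """Verlet-style update of particle heights from gravity alone:
    X(t + dt) = 2*X(t) - X(t - dt) + G*dt^2 (with m = 1).
    Unmovable particles keep their current height."""
    h_next = np.where(movable, 2.0 * h_cur - h_prev + g * dt * dt, h_cur)
    return h_next, h_cur  # the current heights become the previous ones
```

Starting from rest (h_prev equal to h_cur), the first step lowers each movable particle by |G|·∆t², and the step size grows on subsequent iterations, as in standard position Verlet.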
To constrain the displacement of particles over the void areas of the inverted surface, we consider the internal forces in a second step, after the particles have been moved by gravity. Because of the internal forces, particles try to stay in the grid and return to their initial positions. Instead of considering the neighbors of each particle one by one, we simply traverse all the springs. For each spring, we compare the heights of the two particles it connects; thus, the two-dimensional (2-D) problem is reduced to a one-dimensional (1-D) one, as illustrated in Figure 4. Since the movement directions of the particles are restricted, two particles with different height values will try to move to the same horizontal plane (the cloth grid is horizontally placed at the beginning). If both connected particles are movable, we move each by the same amount in opposite directions. If one of them is unmovable, only the other is moved. If the two particles have the same height value, neither is moved. The displacement (vector) of each particle is then calculated by Equation (3):

d = (1/2) · b · ((p_i − p_0) · n) n    (3)

where d is the displacement vector of a particle; b equals 1 when the particle is movable and 0 otherwise; p_0 is the position of the current particle to be moved; p_i is the position of the neighboring particle connected to p_0; and n = (0, 0, 1)^T is the unit vector in the vertical direction. This movement process can be repeated; a parameter rigidness (RI) represents the number of repetitions. The parameterization is shown in Figure 5. If RI is set to 1, the movable particle is moved only once, and the displacement is half of the vertical distance (VD) between the two particles. If RI is set to 2, the movable particle is moved twice, and the total displacement is 3/4 of VD.
Finally, if RI is set to 3, the movable particle is moved three times, and the total displacement is 7/8 of VD. A value of 3 is enough to produce a very rigid cloth; thus, we constrain the rigidness to values of 1, 2 and 3. The larger the rigidness, the more rigidly the cloth behaves. The main implementation procedures of CSF are as follows. First, we project the cloth particles and the LiDAR points onto the same horizontal plane and find the nearest LiDAR point (named the corresponding point, CP) for each cloth particle in this 2D plane. An intersection height value (IHV) is defined to record the height value (before projection) of the CP. This value represents the lowest position that a particle can reach (i.e., once the particle reaches the position defined by this value, it cannot move any further). During each iteration, we compare the current height value (CHV) of a particle with its IHV; if the CHV is equal to or lower than the IHV, we move the particle back to the IHV position and set it as unmovable.
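The spring traversal and the rigidness repetitions described above can be sketched as follows (an illustrative helper on toy inputs, not the released implementation; the half-distance move matches Equation (3) for vertically constrained particles):

```python
def internal_force_step(heights, movable, springs, rigidness):
    """One application of the internal forces, repeated `rigidness` times.

    springs: list of (i, j) index pairs connecting grid particles.
    If both endpoints are movable, each moves half the height difference
    toward the other; if only one is movable, it moves half the
    difference toward the fixed one. For a movable particle next to an
    unmovable neighbour, this yields total displacements of 1/2, 3/4 and
    7/8 of the vertical distance for RI = 1, 2 and 3."""
    h = list(heights)
    for _ in range(rigidness):
        for i, j in springs:
            half = 0.5 * (h[j] - h[i])
            if movable[i] and movable[j]:
                h[i] += half
                h[j] -= half
            elif movable[i]:
                h[i] += half
            elif movable[j]:
                h[j] -= half
    return h
```

With one spring between a movable particle at height 0 and an unmovable one at height 2, the movable particle ends at 1.0, 1.5 and 1.75 for RI = 1, 2 and 3, i.e., 1/2, 3/4 and 7/8 of VD.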
An approximation of the real terrain is obtained after the simulation, and then the distances between the original LiDAR points and simulated particles are calculated by using a cloud-to-cloud distance computation algorithm [29]. LiDAR points with distances that are less than a threshold h cc are classified as BE (bare earth), while the remaining points are OBJ (objects).
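The final labelling step can be illustrated with a brute-force nearest-neighbour search (a sketch only; the paper uses the cloud-to-cloud distance tool of [29], and a KD-tree would be used for real data):

```python
import numpy as np

def classify_points(points, cloth_nodes, h_cc=0.5):
    """Label each LiDAR point BE (bare earth) or OBJ according to its
    distance to the nearest simulated cloth particle.
    points: (N, 3) array; cloth_nodes: (M, 3) array."""
    labels = []
    for p in points:
        d = np.min(np.linalg.norm(cloth_nodes - p, axis=1))
        labels.append("BE" if d < h_cc else "OBJ")
    return labels
```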
The procedure of the proposed filtering algorithm is as follows:
1. Handle outliers automatically or manually using third-party software (such as CloudCompare).
2. Invert the original LiDAR point cloud.
3. Initialize the cloth grid: the number of particles is determined by the user-defined grid resolution (GR), and the initial position of the cloth is usually set above the highest point.
4. Project all LiDAR points and grid particles onto a horizontal plane, find the CP for each grid particle in this plane, and record the IHV.
5. For each movable grid particle, compute its new position under gravity, and compare its height with the IHV. If the height is equal to or less than the IHV, place the particle at the IHV and set it as unmovable.
6. For each grid particle, compute the displacement caused by the internal forces.
7. Repeat steps 5 and 6. The simulation terminates when the maximum height variation (M_HV) of all particles is small enough or when the iteration count exceeds the user-specified maximum.
8. Compute the cloud-to-cloud distance between the grid particles and the LiDAR point cloud.
9. Separate ground from non-ground points: each LiDAR point whose distance to the simulated particles is smaller than h_cc is classified as BE; otherwise, it is classified as OBJ.
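Steps 3-7 can be condensed into a toy one-dimensional sketch (hypothetical parameter values; a real implementation works on a 2-D grid with shear, traction and flexion springs):

```python
import numpy as np

def csf_1d(ihv, rigidness=2, dt=0.65, g=-0.2, max_iter=500, tol=1e-3):
    """Toy 1-D sketch of the CSF main loop.

    ihv: per-particle intersection height values (the lowest height each
    cloth particle can reach, taken from the inverted point cloud).
    Returns the final particle heights."""
    ihv = np.asarray(ihv, dtype=float)
    n = len(ihv)
    h = np.full(n, ihv.max() + 1.0)  # cloth starts above the highest point
    h_prev = h.copy()
    movable = np.ones(n, dtype=bool)
    for _ in range(max_iter):
        h_old = h.copy()
        # step 5: displacement from gravity, then collision check vs. IHV
        h_new = np.where(movable, 2.0 * h - h_prev + g * dt * dt, h)
        h_prev, h = h, h_new
        hit = movable & (h <= ihv)
        h[hit] = ihv[hit]
        movable[hit] = False
        # step 6: internal forces between grid neighbours, RI passes
        for _ in range(rigidness):
            for i in range(n - 1):
                half = 0.5 * (h[i + 1] - h[i])
                if movable[i] and movable[i + 1]:
                    h[i] += half
                    h[i + 1] -= half
                elif movable[i]:
                    h[i] += half
                elif movable[i + 1]:
                    h[i + 1] -= half
        # step 7: terminate when the maximum height variation is small
        if np.max(np.abs(h - h_old)) < tol:
            break
    return h
```

On a flat IHV profile, every particle settles exactly on the ground; over an isolated high IHV value (a "spike" in the inverted cloud), the colliding particle freezes on it while the rest of the cloth drapes around it.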

Post-Processing
For steep slopes, this algorithm may yield relatively large errors because the simulated cloth lies above the steep slopes and does not fit the ground measurements very well due to the internal constraints among particles, as illustrated in Figure 6: some ground measurements around steep slopes are mistakenly classified as OBJ. This problem can be solved by a post-processing step that handles the margins of steep slopes. This post-processing method finds an unmovable particle among the four adjacent neighbors of each movable particle and compares the height values of their CPs. If the height difference is within a threshold (h_cp), the movable particle is moved to the ground and set as unmovable. For example, for point D in Figure 6, we find that point A is the unmovable particle among the four adjacent neighbors of D. Then, we compare the height values of C and B (the CPs of D and A, respectively). If the height difference is less than h_cp, the candidate point D is moved to C and set as unmovable. We repeat this procedure until all movable particles are properly handled (either set as unmovable or kept movable).
To implement the post-processing, all movable particles should be traversed; if we simply scanned the cloth grid row by row, the results could be affected by the scan direction. Thus, we first build sets of strongly connected components (SCCs), each containing a set of connected movable particles. An SCC usually contains two kinds of particles: those with at least one unmovable neighbor are marked as type M1, and the others as type M2 (see Figure 7). Using the M1 particles as initial seeds, we perform a breadth-first traversal of the SCC, handling the movable particles one by one (from 1 to 18 in Figure 7). This process guarantees that the post-processing proceeds from edge to center, regardless of the scan direction.
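The seed-and-traverse scheme can be sketched as follows (a simplified illustration on a toy grid with a hypothetical data layout; particle heights are moved to the CP height when the h_cp test passes):

```python
from collections import deque

NB4 = ((1, 0), (-1, 0), (0, 1), (0, -1))  # 4-adjacent neighbourhood

def postprocess_slopes(movable, heights, cp_heights, h_cp=0.3):
    """Edge-to-center handling of movable particles (post-processing sketch).

    Seeds are movable particles with at least one unmovable 4-neighbour
    (type M1); a breadth-first traversal then decides, via the CP height
    difference, whether each movable particle is moved to the ground."""
    rows, cols = len(movable), len(movable[0])

    def neighbours(r, c):
        for dr, dc in NB4:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                yield nr, nc

    queue = deque(
        (r, c)
        for r in range(rows) for c in range(cols)
        if movable[r][c] and any(not movable[nr][nc] for nr, nc in neighbours(r, c))
    )
    while queue:
        r, c = queue.popleft()
        if not movable[r][c]:
            continue  # already handled earlier in the traversal
        for nr, nc in neighbours(r, c):
            if not movable[nr][nc]:
                # compare CP heights of the particle and its unmovable
                # neighbour (points C and B in Figure 6)
                if abs(cp_heights[r][c] - cp_heights[nr][nc]) <= h_cp:
                    heights[r][c] = cp_heights[r][c]
                    movable[r][c] = False
                    # newly unmovable: its movable neighbours become seeds
                    queue.extend(n for n in neighbours(r, c) if movable[n[0]][n[1]])
                break
    return movable, heights
```

On a 1 × 3 strip whose CP heights rise gently from an unmovable seed, the traversal pins the particles one by one from the edge inward; if the CP height difference exceeds h_cp, the particle is kept movable.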

Parameters
CSF mainly consists of four user-defined parameters: grid resolution (GR), which represents the horizontal distance between two neighboring particles; time step (dT), which controls the displacement of particles from gravity during each iteration; rigidness (RI), which controls the rigidness of the cloth; and an optional parameter steep slope fit factor (ST), which indicates whether the post-processing of handling steep slopes is required or not.
In addition to these user-defined parameters, two threshold parameters have been used in this algorithm to aid the identification of ground points. The first is a distance threshold (h cc ) that governs the final classification of the LiDAR points as BE and OBJ based on the distances to the cloth grid. This parameter is set as a fixed value of 0.5 m. Another threshold parameter is the height difference (h cp ), which is used during post-processing to determine whether a movable particle should be moved to the ground or not. This parameter is set to 0.3 m for all of the datasets.
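For reference, the full parameter set can be collected in one place (names and defaults follow the text above; the container itself is purely illustrative, and the RI default of 3 corresponds to flat terrain):

```python
from dataclasses import dataclass

@dataclass
class CSFParams:
    """User-defined and fixed CSF parameters."""
    grid_resolution: float = 0.5    # GR: spacing between cloth particles (m)
    time_step: float = 0.65         # dT: gravity displacement per iteration
    rigidness: int = 3              # RI: 1 (steep), 2 (terraced), 3 (flat)
    steep_slope_fit: bool = False   # ST: enable steep-slope post-processing
    h_cc: float = 0.5               # BE/OBJ distance threshold (m), fixed
    h_cp: float = 0.3               # post-processing height threshold (m), fixed
```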

Validation of the Filtering
This method was first tested on datasets provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) Working Group III/3 to quantitatively test the performance of different filters and identify directions for future research [30]. From these datasets, fifteen samples with different characteristics were selected to test the performance of the proposed CSF algorithm, as shown in Table 1. The reference datasets were generated by manually filtering the LiDAR datasets, with each point classified as BE or OBJ. In the CSF implementation, when the original LiDAR point cloud is turned upside down, the objects above ground appear below the ground measurements, so the complexity of object measurements (such as rooftops) seldom influences the simulation process. Based on this feature, we visually classified the samples into groups according to the properties of the topography, namely the presence of steep or terraced slopes. If the terrain is very flat, with no steep or terraced slopes, RI is set to a relatively large value (RI = 3) and no post-processing is needed (ST = false). If steep slopes exist (e.g., river banks, ditches, and terraces), a medium-soft cloth (RI = 2) and post-processing (ST = true) are needed. For very steep slopes, we need post-processing (ST = true) and a very soft cloth (RI = 1). Thus, the fifteen samples are classified into three groups, each sharing the same set of parameters, as listed in Table 2. The main parameters that control the results and vary with the scene type are RI and ST, which reduces the complexity and improves the usability of this algorithm. We set dT and GR to fixed values of 0.65 and 0.5, respectively; according to our tests, these two values are universally applicable to all of the reference datasets. The influence of these two parameters on the results is discussed in Section 4.2.
To evaluate the performance of this algorithm, the type I (T.I), type II (T.II) and total errors (T.E.) for all fifteen samples were calculated. The type I error is the number of BE points incorrectly classified as OBJ divided by the true number of BE points; the type II error is the number of OBJ points incorrectly classified as BE divided by the true number of OBJ points; and the total error is the number of misclassified points divided by the total number of points. In addition, Cohen's Kappa coefficient [31], which measures the overall agreement between two judges [15,21], was calculated in this study. The calculations of T.I, T.II, T.E. and the Kappa coefficient follow Hu et al. [32].
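These definitions translate directly into code (a sketch; the two-class Kappa computation uses the standard confusion-matrix form, cf. [31,32]):

```python
def filtering_errors(true_labels, pred_labels):
    """Type I, Type II, total error and Cohen's Kappa for one sample.
    Labels are "BE" or "OBJ"; definitions follow the text above."""
    n = len(true_labels)
    a = sum(t == "BE" and p == "BE" for t, p in zip(true_labels, pred_labels))
    b = sum(t == "BE" and p == "OBJ" for t, p in zip(true_labels, pred_labels))
    c = sum(t == "OBJ" and p == "BE" for t, p in zip(true_labels, pred_labels))
    d = n - a - b - c                # OBJ correctly labelled OBJ
    t1 = b / (a + b)                 # BE wrongly labelled OBJ / true BE
    t2 = c / (c + d)                 # OBJ wrongly labelled BE / true OBJ
    te = (b + c) / n                 # misclassified / total
    po = (a + d) / n                 # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    return t1, t2, te, kappa
```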
The errors and Kappa coefficients are shown in Table 3. The results show that CSF achieves relatively good results for samp21, samp31, samp42, samp51 and samp54 in terms of total error and Kappa coefficient. All of these samples belong to group I, which indicates that the most suitable scene type for our method is urban areas. For groups II and III, the total errors are relatively large (especially for samp11) compared to group I, which shows that CSF, like other filtering algorithms, performs relatively poorly in complex regions [21]. Overall, our method is not sensitive to the type and distribution of objects above ground, because the LiDAR point cloud is inverted and the shape of the terrain mostly determines the filtering accuracy. However, in high-relief areas with very steep slopes (e.g., pits, cliffs) and low-rise buildings, our method performs worst, because a cloth soft enough to fit the terrain may also reach the rooftops of low-rise buildings.
Table 3. Errors and Kappa coefficients for all samples.
Figure 8 shows the original dataset, reference DTM, produced DTM and the spatial distribution of type I and type II errors for some representative samples (samp31, samp11 and samp51). Compared with the reference DTM, the produced DTM successfully preserves the main terrain shape as well as the microtopography, especially in the mine field (Figure 8k). It can also be seen that the error points mainly lie on the edges of objects for samp11 and samp31, where some object measurements with low height values may be classified as BE and some ground measurements with relatively high height values may be classified as OBJ. samp11 has a large group of type II error points in which almost a whole building is classified as ground measurements; this occurs because the building is located on a slope and its roof is nearly connected to the ground, causing the building to be treated as ground in the post-processing step.
For samp53 (as for samp52 and samp61), the very soft cloth used for group III causes some small vegetation points to be mistakenly classified as BE. In addition, the total number of OBJ points is much smaller than the number of BE points. Together, these cause a large type II error. However, if a harder cloth were used, it might yield the opposite error (BE points around the steep slopes would be identified as OBJ). Thus, adjusting the parameters is necessary to balance the type I and type II errors. If a large number of low objects exists above the ground, the cloth should be harder (RI should be set larger), which guarantees that fewer object measurements are mistakenly classified as ground. Among the samples with very poor type II errors, those of samp22 and samp71 are caused by the incorrect identification of bridges, which are classified as BE by CSF; the details are discussed in Section 4.4. To quantitatively analyze the accuracy of the CSF algorithm, we compared the total error and Kappa coefficient with those of some existing top algorithms, shown in Tables 4 and 5. On the whole, the accuracy of our method is close to that of the top filtering algorithms, except for samples 22 and 71, whose relatively high total errors are also mainly caused by the bridges.

Testing with Dense Point Cloud
The ISPRS datasets were acquired many years ago; as LiDAR technology has developed, the density of collected point clouds has continuously increased. Thus, we tested the performance of CSF on denser datasets with an average point spacing of 0.6-0.8 m; information on these datasets is shown in Table 6. The accuracy of CSF was evaluated by comparison with the true ground obtained using standard industrial semi-automatic software. The results are shown in Table 7: CSF achieves relatively high accuracy for all datasets, from urban to rural areas. However, a large T.I error was noted for dataset 3, because ground measurements are very sparse in this area. For dataset 4, the errors mainly occur around steep slopes, since this area contains a large number of them.
Table 7. Accuracy evaluation with true ground measurements.
To analyze the details of CSF, we present the generated DTM and a cross section of each dataset. Since CSF first inverts the original point cloud, large buildings produce large holes; when a relatively hard cloth is applied, these holes can be crossed and the large buildings removed (see Figure 9). If post-processing is enabled (ST = true), the cloth fits the ground very well, so microtopography (e.g., low-lying areas) in urban areas can be preserved (Figure 10). In mountainous areas, CSF performs relatively poorly, especially in densely vegetated areas where ground measurements are usually sparse: if the cloth is too soft, many object measurements may be mistakenly classified as BE; otherwise, ground measurements may be classified as OBJ because of the hilly topography (see Figure 11). For areas with a large number of steep slopes, the cloth should be softer and post-processing is also needed. Figure 12 shows a typical area with many steep slopes; the main skeleton of the terrain has been preserved well.
However, some small houses may be missed (red circle in Figure 12c). The dataset in Figure 9 contains a number of connected large low buildings (see the cross section); when the cloth is relatively hard, it does not drop into the large hole these buildings leave after inversion, so they can be removed. In some hilly areas (Figure 11), parts of the cloth may not stick to the ground well, which causes classification errors (BE may be treated as OBJ).

Accuracy
Among the reported algorithms, Axelsson's algorithm has been implemented in the commercial software package Terrasolid [33], and Pfeifer's algorithm in the commercially available package SCOP++ from the German company Inpho GmbH [34]. The overall performance of our algorithm also shows high accuracy and stability, as both the mean total error (4.39) and the standard deviation (2.76) over all samples are relatively low compared with the other algorithms. This result is encouraging and demonstrates that our algorithm adapts to various environments while achieving relatively high accuracy.

Parameter Setting
Usually, we only modify RI and ST for the different groups of samples and keep dT and GR fixed. These universally applicable values of dT and GR were determined through a number of tests. Theoretically, a smaller dT makes the simulated cloth behave more like a true cloth, but it dramatically increases the computing time. To quantitatively evaluate the influence of dT, we tested all of the samples with different time steps (from 0.4 to 1.5 in steps of 0.05; 0.4 was chosen as the lower bound because smaller values take too much time to compute). The total errors of each group and the mean total error of all samples as functions of the time step are illustrated in Figure 13. The figure indicates that the total error first declines and then increases for all groups, and all of them achieve the lowest total error around a time step of 0.65. When dT is small, the displacement of a particle in each step is also small; since we set a maximum iteration number of 500, if dT is too small the cloth may not reach the LiDAR measurements or fit the terrain well by the end of the simulation, so a very small dT may produce large errors. On the other hand, when dT is too large, the simulated cloth may stick to rooftops, which also increases the total error. Thus, 0.65 was chosen as the value of dT because it produces relatively good results for all of the samples and can be applied to many situations without adjustment.
The grid resolution (GR) strongly affects the simulation time because it determines how many cloth particles are created for a specific dataset. Figure 14 shows the total errors at different GR values. The accuracies of groups I and III are more stable than that of group II, because group II scenes usually have complicated terrain shapes and buildings (e.g., areas with terraced slopes and low-rise buildings). However, almost all samples achieve their highest accuracy around a GR of 0.5, which was therefore used as a fixed value.
Besides the parameters above, there are two threshold parameters: h_cc and h_cp. h_cc governs the final classification, separating LiDAR measurements into BE and OBJ. Most particles stick to the ground after simulation, and OBJ measurements (e.g., buildings and trees) are usually taller than 0.5 m; thus, we set h_cc to a fixed value of 0.5. The influence of h_cc on the total errors is illustrated in Figure 15, which shows that this value has limited impact. The parameter h_cp is used in the post-processing to decide whether a movable particle should be moved to the ground according to its neighbors. We simply set this parameter to 0.3 m, reflecting that the height difference between two adjacent ground measurements on flat terrain is usually less than 0.3 m. Since this parameter is used only when post-processing is enabled, and it only influences the movable particles over steep slopes, its influence is also limited.
During the simulation, CSF terminates when M_HV is less than a threshold or when the number of iterations reaches a user-specified value, usually set to 500. In most cases, however, CSF ends according to the former criterion. Figure 16 shows the trends of M_HV and A_HV (average height variation) for a typical scene: both decrease to very low values around 100 iterations. In fact, M_HV is 0.0033 at iteration 150, which means that CSF gives a satisfactory result with relatively few iterations.
From the discussion above, the parameter settings of the CSF algorithm are relatively simple and intuitive. Only two parameters (RI and ST) must be determined through visual judgment by the user, and a rough estimate is enough. Usually, RI is set to 1, 2 or 3 according to the features of the terrain, corresponding to areas with very steep slopes, terraced slopes and gentle slopes, respectively.
ST is set to "true" or "false": "true" means that steep slopes exist and post-processing is needed, while "false" means that post-processing is not needed (see details in Section 4.3).

Steep Slopes
For groups II and III, which contain many steep or terraced slopes, the slope is an important factor that influences the accuracy of the CSF algorithm. The simulated cloth lies over the slope but usually cannot stick to the ground perfectly; at the edge of the slope, gaps appear between the cloth and the ground. If this distance exceeds h_cc, the ground measurements are mistakenly classified as OBJ. A direct way to mitigate this problem is to set the rigidness to a lower value, but some low objects may then be classified as BE. To balance these two types of errors, a post-processing method for the margin areas is proposed in this study, as described in Section 2.3. Thus, we can use a relatively hard cloth together with post-processing to remove low objects and correctly handle steep slope areas. A typical result is shown in Figure 17, in which the cloth properly sticks to the terrain near the steep margin after the post-processing procedure.

Bridge
When handling steep slopes, an exception arises for bridges, which are defined as OBJ in the ISPRS reference datasets while the adjacent road is treated as BE. Such a scene usually shows a gentle slope along the direction of the road but an abrupt elevation change in the direction of the river (Figure 18). During the post-processing procedure, movable particles over the bridge may be set as unmovable and stick to the bridge, because the height difference between two CPs is very small along the direction of the road. Thus, a bridge will usually be classified as BE by the post-processing in our algorithm.
Figure 18. Illustration of a bridge: the height variation along the road is much smaller than that in the direction of the river.

Outlier Processing
As described in Section 2.2, the cloth particles stop moving as soon as they reach the IHV; if outliers exist below the ground measurements, some particles will be stopped prematurely, and the cloth will be propped up by these outliers. This phenomenon increases the errors around the outliers because of the rigidness of the cloth. Thus, outliers should be removed by statistical filters before applying the CSF algorithm to the point cloud. Among the fifteen samples used in this study, samp41 is an exception: it has a large area of outliers [11] below the ground, which cannot be easily removed by a statistical approach. If the outliers are removed, either automatically or manually, before applying the CSF algorithm, the results are satisfactory: the total error was 5.14 before removing the outliers and 1.63 afterwards.

Conclusions
This research proposes a novel filtering method, named CSF, based on a physical process. It exploits the nature of cloth and modifies the physical process of cloth simulation to adapt it to point cloud filtering. Compared to conventional filtering algorithms, the parameters are fewer and easier to set. Regardless of the complexity of the ground objects, the samples were divided into three categories according to the shape of the terrain. Few parameters are needed, and they hardly change among the three sample categories; only the integer parameter rigidness (RI) and the Boolean parameter ST need to be set by the user. These three groups of parameters exhibit relatively high accuracy for all fifteen samples of the ISPRS benchmark datasets. Another benefit of the CSF algorithm is that, in some circumstances, the simulated cloth can be directly treated as the final DTM, which avoids interpolating the ground points and can also recover areas of missing data. Moreover, we have released our software to the public [35], and we will also release our source code to the research community. We hope that the proposed physical-process-based ground filtering algorithm can help promote the use of LiDAR data and technology by scientists, governments, and the public in applications such as flood simulation, landslide monitoring, road design, land-cover classification, and forest management.
However, the CSF algorithm also has limitations. Because we modified the physical process of particle movement into two discrete steps, particles may stick to roofs when dealing with very large low buildings, and some OBJ points may be mistakenly classified as BE. This usually produces some isolated points at the centers of roofs; a noise filter can help mitigate this problem. Additionally, the CSF algorithm cannot distinguish objects that are connected to the ground (e.g., bridges). In the future, we will try to use the geometric information of the LiDAR points or combine optical images (such as multispectral images) to distinguish bridges from roads.