A classification method based on the number of returns per pulse for post-earthquake airborne LiDAR data

Compared with remote sensing imagery, post-earthquake airborne Light Detection And Ranging (LiDAR) point cloud data contain high-precision three-dimensional information about earthquake damage, which can improve the accuracy of damaged-building identification. After an earthquake, however, damaged buildings exhibit so many different characteristics that the most commonly used pre-processing methods cannot reliably separate tree points from damaged-building points. In this study, we analyse the number of returns per pulse for tree and damaged-building point clouds and explore ways to distinguish the two classes. We propose a new method that searches a fixed number of neighbouring points around each point, calculates the ratio (R) of neighbours whose number of returns per pulse is greater than 1, and uses this ratio to separate trees from buildings. We selected point clouds of typical undamaged buildings, collapsed buildings and trees as samples, by human-computer interaction, from airborne LiDAR data acquired after the 2010 Mw 7.0 Haiti earthquake. We then determined the R-value that distinguishes trees from buildings and applied it to test areas. The experimental results show that the proposed method can effectively separate building points (both undamaged and damaged) from tree points, but its performance is limited in areas with varied buildings, complex damage and dense trees, so further improvement is needed.


Introduction
The earthquake in Haiti caused houses to collapse and sustain damage and interrupted roads and communication lines, which made rescue difficult. Recognition of damaged buildings is an important step in disaster response and an index for judging earthquake intensity [1]. Obtaining information on damaged buildings quickly and accurately can support emergency rescue after an earthquake [2]. At present, surface information for post-earthquake areas is mostly obtained from optical imagery, which contains only planar information and is useful only in clear weather.
LiDAR (Light Detection And Ranging) is a measuring technique developed in recent years. The ranging accuracy of airborne LiDAR can reach 11 cm at a flying height of 500 m [3], and it is now widely used in the detection of surface information and in the restoration and reconstruction of the surface [4]. Discriminating and extracting damaged buildings from airborne LiDAR point clouds is an important method for disaster assessment [5]. To distinguish among undamaged buildings, collapsed buildings and trees, experts mostly use the elevation, slope and normal vector of the point cloud. However, it is difficult to distinguish damaged buildings from trees, because the elevation distribution of points on damaged buildings is as uneven and coarse as that of trees. This study therefore analyses the number of returns of different surface features and proposes a classification method based on the ratio of point cloud echo times.

Method
2.1. The ratio of point cloud echo times (R)
R is defined as the proportion of nearby points whose number of returns per pulse is greater than 1. Point cloud data generally contain three-dimensional coordinates, return intensity values and multiple-return information. The multiple-return information includes two aspects: the total number of returns of the pulse a point belongs to, and the order of the point within that total. Different targets produce different numbers of returns: when a laser pulse hits a flat building roof or bare ground, a single return is generated, whereas when it hits vegetation, multiple returns are formed because the pulse can penetrate the canopy. Multiple returns are therefore an important signature of vegetation in LiDAR data [7].
Based on the return-count information of buildings and trees, this paper analyses the proportion of points with more than one return in the point clouds of trees, undamaged buildings and collapsed buildings, and presents a point cloud classification method that calculates the ratio (R) of the points (M) whose number of returns is greater than 1 among a certain number (N) of nearby points (Formula 1).
R = M / N    (1)

Figure 1. The echo times of undamaged building, damaged building and tree
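As a minimal sketch of Formula 1, the ratio R can be computed for every point with a k-nearest-neighbour search. The function and the toy data below are our own illustration, assuming the point cloud is given as an array of coordinates plus a per-point number-of-returns attribute:

```python
import numpy as np
from scipy.spatial import cKDTree

def echo_ratio(xyz, num_returns, n_neighbors=20):
    """For each point, compute R = M / N: the fraction of its N nearest
    neighbours whose pulse produced more than one return."""
    tree = cKDTree(xyz)
    # query returns the point itself as its own nearest neighbour
    _, idx = tree.query(xyz, k=n_neighbors)
    multi = (num_returns > 1).astype(float)
    return multi[idx].mean(axis=1)

# toy example: 100 "roof" points (single return) and 100 "canopy"
# points (multiple returns), placed in two well-separated blocks
rng = np.random.default_rng(0)
roof = rng.uniform(0, 10, (100, 3))
canopy = rng.uniform(100, 110, (100, 3))
xyz = np.vstack([roof, canopy])
returns = np.concatenate([np.ones(100), rng.integers(2, 4, 100)])
R = echo_ratio(xyz, returns, n_neighbors=10)
```

Because the two blocks are far apart, every roof point's neighbourhood contains only single-return points (R = 0) and every canopy point's neighbourhood contains only multi-return points (R = 1).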

Verification and analysis of the threshold of R
Combining the hyper-spectral images of the study areas (a) and (b) (Fig 3), we manually selected 50 undamaged-building, 50 collapsed-building and 50 tree sample areas. For each sample we counted the proportion of points with more than one return; the results are shown in Figure 2a. The proportion for undamaged- and damaged-building samples mostly lies in the range 0-0.1, while for the vast majority of tree samples it is greater than 0.1. Normal distribution curves of R (Figure 2b) for the three terrain types were fitted from the 150 samples to define the thresholds; the curves of undamaged and damaged buildings intersect at point A (0.0038, 12.001), and the curves of damaged buildings and trees intersect at point B (0.083, 0.986). The probability of misclassification is therefore smallest at Rd = 0.0038 and Rt = 0.083, and this paper takes Rd = 0.0038 and Rt = 0.083 as the segmentation thresholds between undamaged and damaged buildings, and between buildings and trees, respectively.

Figure 2. Sample distribution by R
Table 1. Number of samples whose accuracy >85%

To avoid the influence of differing point counts among samples on the ratio factor R, to let the return-count information improve the accuracy of earthquake-damage classification of LiDAR point clouds, and to realise automatic point classification based on the ratio of point cloud echo times, the method used in this paper searches a certain number of nearby points around each point and calculates the proportion (R) of points with more than one return within that neighbourhood. Comparison tests were carried out with different numbers (N) of nearby points under the segmentation thresholds Rd = 0.0038 and Rt = 0.1. Considering the point cloud density and the planar area of buildings and trees in the study area, we selected 5-50 valid points in the neighbourhood. For each neighbourhood size, every sample point cloud was classified and the classification accuracy (A) was calculated.
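The threshold derivation from the intersection of two fitted normal curves can be sketched as follows. The means and standard deviations here are illustrative assumptions, not the paper's fitted values; the root of the density difference plays the role of the intersection point:

```python
from scipy.stats import norm
from scipy.optimize import brentq

# Assumed (illustrative) sample statistics for the R distributions
# of damaged buildings and trees.
mu_d, sd_d = 0.03, 0.02    # damaged buildings
mu_t, sd_t = 0.25, 0.10    # trees

def density_diff(x):
    """Difference of the two fitted normal densities at x."""
    return norm.pdf(x, mu_d, sd_d) - norm.pdf(x, mu_t, sd_t)

# The least-misclassification cut lies where the two curves cross;
# solve density_diff(x) = 0 between the two means.
Rt = brentq(density_diff, mu_d, mu_t)
```

The same procedure applied to the undamaged- and damaged-building curves would yield Rd.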
Finally, we counted the number of samples among the 150 whose classification accuracy exceeds 85%; the result is shown in Table 1. When the number of neighbourhood points is n = 20, the number of samples with accuracy above 85% reaches 133, the highest proportion. Therefore this paper selects n0 = 20, Rd = 0.0038 and Rt = 0.083 as the point cloud classification thresholds.
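The resulting decision rule might be sketched as below. The three-way rule (undamaged below Rd, damaged between Rd and Rt, tree above Rt) is our reading of how the two thresholds combine; the function name is ours:

```python
import numpy as np

RD, RT = 0.0038, 0.083   # thresholds from the sample analysis

def classify(R):
    """Label each point from its echo ratio R: tree above RT,
    damaged building between RD and RT, undamaged building below RD."""
    R = np.asarray(R, dtype=float)
    return np.where(R > RT, "tree",
           np.where(R > RD, "damaged", "undamaged"))

labels = classify([0.0, 0.002, 0.05, 0.2])
```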

Data and study area
After the 2010 Mw 7.0 Haiti earthquake, DigitalGlobe, GeoEye, NOAA, USGS and the World Bank rapidly obtained post-earthquake satellite remote sensing images, and also collected pre-earthquake imagery, DEMs and basic geographic information data to provide the Government of Haiti with supplementary information for disaster-response decisions. This study covers a region of about 0.8 km² near the Haitian capital. The airborne LiDAR point cloud data, funded by the Global Facility for Disaster Reduction and Recovery (GFDRR), were acquired on January 21, 2010, with a point density of 2-5 points/m². The auxiliary optical remote sensing image is GeoEye-1 satellite imagery provided by DigitalGlobe, obtained on January 13, 2010, with a resolution of 0.5 m. The buildings in this area are of diverse types, such as large multi-storey buildings, apartments, hotels and churches, which have large footprints and are sparsely distributed, interspersed with scattered trees and small woods; there are also simple, low, small single houses, densely distributed on the 0.5 m imagery, with fewer trees between them. The study area covers almost all urban building types and is therefore representative.

Pre-processing of point cloud
During point cloud acquisition, the data contain noise points caused by scanner defects and environmental disturbance; removing them is called point cloud de-noising. In this study we used a de-noising algorithm based on local spatial distribution statistics, implemented in the Point Cloud Magic software [8], to mark and eliminate points whose local point density differs greatly from the overall density. The points must then be filtered to separate ground points from non-ground points. We tested three filtering algorithms: mathematical morphology filtering, slope filtering and CSF (Cloth Simulation Filter) filtering. Mathematical morphology filtering uses the erosion and dilation operations of mathematical morphology to remove high points and keep low points, thereby extracting the ground [9]. Slope filtering extracts ground points from the abrupt elevation change between adjacent ground and feature points [10]. The CSF algorithm first inverts the original point cloud, then drapes a cloth of a certain rigidity over it to obtain an approximate estimate of the terrain, and finally uses this estimate to separate ground points from non-ground points [11]. We filtered the point cloud of test area 1 (Fig. 2a) with the three methods and compared the resulting ground points: CSF filtering separates ground points most completely (Fig. 2d); morphological filtering recognises the edges of different features well (Fig. 2b) but recognises whole features worse than CSF filtering; slope filtering extracts only a small fraction of ground points (Fig. 2c). This paper therefore used the CSF algorithm to obtain ground points, and extracted non-ground points by removing the ground points from the point cloud.
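A de-noising step in the spirit of the local spatial distribution statistics described above can be sketched as a statistical outlier removal: points whose mean distance to their nearest neighbours is far above the global average are flagged as noise. This is our own minimal illustration, not the Point Cloud Magic implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def denoise(xyz, k=8, std_ratio=2.0):
    """Flag points whose mean distance to their k nearest neighbours
    is far above the global average (sparse, isolated points)."""
    tree = cKDTree(xyz)
    d, _ = tree.query(xyz, k=k + 1)       # column 0 is the point itself
    mean_d = d[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return xyz[keep], keep

rng = np.random.default_rng(1)
cloud = rng.uniform(0, 10, (500, 3))        # dense block of points
noise = np.array([[100.0, 100.0, 100.0]])   # one far-away outlier
clean, keep = denoise(np.vstack([cloud, noise]))
```

The isolated point's mean neighbour distance is two orders of magnitude above the cloud's, so it is removed while all dense points survive.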
Figure 5. Optical image (a) and LiDAR point cloud (b) of test area
Table 2. Accuracy distribution of test samples

To verify the efficacy of the ratio of point cloud echo times and its thresholds, this paper selects an area of approximately 6200 m² northwest of the presidential palace as a test area (Figure 5a). This area is strongly representative, containing undamaged buildings, collapsed buildings and trees. Non-ground points were extracted from the test area by filtering, sixty sample outlines containing buildings and trees were delineated from high-resolution pre- and post-earthquake remote sensing images, and the point clouds of the test samples (Figure 5b) were then clipped by these outlines in Point Cloud Magic. The test point clouds were classified by the method described above, calculating the echo-times ratio R and the classification accuracy A: if R > 0.083, a point is judged to be tree, otherwise building. The accuracy distribution of all samples is shown in Table 2a: 3 samples have A < 0.8 and 53 have A >= 0.9. Because the laser beam forms a spot of finite size on the surface, at building edges part of the pulse hits the roof and part hits the ground or nearby features, generating multiple returns; samples with accuracy above 0.9 are therefore regarded as correctly classified. The Kappa coefficient, calculated from the classification results and the actual sample attributes interpreted from the high-resolution images, is 0.83, so the classification result agrees closely with reality and the accuracy of the method is high. To distinguish undamaged from damaged buildings, this paper takes Rd = 0.0038 as the threshold. The accuracy distribution of the 40 building samples is shown in Table 2b: 20 samples have accuracy below 0.2. The method therefore cannot distinguish between undamaged and damaged buildings efficiently.
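The Kappa coefficient used above measures agreement between classification and reference beyond chance, and is straightforward to compute from a confusion matrix. The 2x2 counts below are made-up illustrative numbers, not the paper's actual confusion matrix:

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows: predicted class, columns: reference class)."""
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    po = np.trace(c) / n                                  # observed agreement
    pe = (c.sum(axis=0) * c.sum(axis=1)).sum() / n**2     # chance agreement
    return (po - pe) / (1 - pe)

# illustrative building-vs-tree matrix (counts are assumptions)
kappa = cohens_kappa([[45, 3], [2, 10]])
```

A kappa of 1 indicates perfect agreement, 0 indicates agreement no better than chance.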
In conclusion, the classification method based on the ratio of point cloud echo times is weak at extracting damaged buildings, but it is efficient at distinguishing buildings, especially damaged buildings, from trees.

Discussion
The above results show that, within a certain neighbourhood, the ratio R of points with more than one return can reflect land-cover type and effectively distinguish buildings from trees, with particularly good separation between damaged buildings and trees, and can realise a preliminary division of individual buildings. Compared with methods that rely on auxiliary remote sensing data or pre-/post-earthquake change detection, this method uses only post-earthquake airborne LiDAR point cloud data for damage extraction, so it avoids the problems of missing pre-earthquake data and of multi-source data matching, and is simpler and more convenient. Using the echo-times ratio R also improves the separation of trees from collapsed buildings and thus the accuracy of earthquake-damage extraction. However, we found that because collapsed buildings show different degrees and various forms of damage, the factor has difficulty determining the boundaries of collapsed buildings accurately, which causes classification errors. In addition, in areas of dense leafy trees, multiple returns are less likely to be produced, so some misclassification occurs there. Combining R with normal vectors and intensity could therefore improve classification accuracy. Moreover, the method takes individual samples obtained by manual clipping as its study objects, so how to obtain individual features automatically still needs to be explored.

Conclusion
Based on airborne LiDAR point cloud data from the 2010 Mw 7.0 Haiti earthquake, this paper counted the proportion of points with more than one return in the point clouds of buildings, collapsed buildings and trees, introduced the echo-times ratio factor R, determined classification thresholds, and identified surface features by searching a certain number of nearby points around each point and calculating the proportion of points with more than one return. The results show that the ratio factor R reflects the characteristics of buildings and trees and, compared with other earthquake-damage classification factors, improves the distinction between buildings, especially collapsed buildings, and trees. It can also realise an initial division of individual buildings, laying a foundation for earthquake-damage classification and severity judgment based on post-earthquake airborne LiDAR data.